
Senior Data Engineer - Full Stack
Codvo Private Limited, Santa Clara, CA, United States
Role Summary
We are seeking a highly skilled Senior Data Engineer – Full Stack to build and maintain internal tools, automation frameworks, and workflows that enhance the efficiency, reliability, and scalability of our data and machine learning platforms. This role will work closely with Data Engineers, Data Scientists, and ML Engineers to streamline operations across the data lifecycle.
Total Experience: 8+ years • Location: Santa Clara, CA (Hybrid)
Key Responsibilities
Design and develop CLI tools, scripts, and internal utilities to automate repetitive tasks across the data platform, including:
Pipeline execution and orchestration
Data governance workflows
Metadata synchronization
Environment setup and configuration
Test harness development
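Internal utilities of this kind are often built on Python's standard argparse; the sketch below shows a minimal subcommand skeleton. The tool name, subcommands, and flags are illustrative assumptions, not an actual Codvo utility.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """CLI skeleton for a hypothetical data-platform utility (illustrative)."""
    parser = argparse.ArgumentParser(
        prog="dataplat",
        description="Automate repetitive data-platform tasks (sketch).",
    )
    sub = parser.add_subparsers(dest="command", required=True)

    # Subcommand: trigger a pipeline run in a chosen environment.
    run = sub.add_parser("run-pipeline", help="Trigger a pipeline run")
    run.add_argument("name", help="Pipeline name")
    run.add_argument("--env", default="dev", choices=["dev", "stage", "prod"])

    # Subcommand: synchronize catalog metadata, optionally as a dry run.
    sync = sub.add_parser("sync-metadata", help="Synchronize catalog metadata")
    sync.add_argument("--dry-run", action="store_true")
    return parser


# Example invocation (parsed from an explicit argv list):
args = build_parser().parse_args(["run-pipeline", "nightly-etl", "--env", "stage"])
# args.command == "run-pipeline", args.env == "stage"
```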
Automate workflows on Databricks, including:
Job deployment and scheduling
Environment provisioning
MLOps processes using APIs, Terraform, or Databricks SDK
Build robust testing frameworks, including:
Integration testing for pipelines
End-to-end validation of ETL/ELT workflows
Testing and validation for ML inference workflows
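End-to-end validation of this kind usually boils down to assertions over pipeline output. The helper below is a minimal sketch of such a check (non-empty result, no missing required fields); a real suite would run checks like this under pytest against staging tables, and the field names here are illustrative.

```python
def validate_output(rows: list[dict], required_fields: list[str]) -> list[str]:
    """Return a list of validation errors for pipeline output (sketch).

    Empty list means the output passed; each error string identifies
    the failing row and field so CI logs stay actionable.
    """
    errors = []
    if not rows:
        errors.append("empty output")
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) is None:
                errors.append(f"row {i}: missing {field}")
    return errors


# A clean run produces no errors; a row with a null key is flagged.
assert validate_output([{"id": 1, "ts": "2024-01-01"}], ["id", "ts"]) == []
assert validate_output([{"id": None}], ["id"]) == ["row 0: missing id"]
```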
Improve overall productivity, scalability, and reliability of the data and ML engineering ecosystem
Develop lightweight internal tools and dashboards using frameworks such as React or Streamlit to:
Visualize data pipelines and workflows
Demonstrate model inference capabilities
Provide configuration and operational controls
Enable internal productivity monitoring and dashboards
Collaborate with cross-functional teams to identify automation opportunities and implement best practices
Required Skills & Qualifications
Strong experience in Python and scripting for automation and backend development
Hands‑on experience with the Databricks platform and ecosystem
Experience with APIs, Terraform, and/or Databricks SDK for automation
Solid understanding of ETL/ELT pipelines and data platform architecture
Experience building testing frameworks for data pipelines and ML workflows
Familiarity with CLI tool development and system automation
Knowledge of MLOps principles and practices
Experience with modern development practices, including:
Spec‑driven development
Use of coding agents or automation‑assisted development tools
Version control and CI/CD pipelines
Nice to Have
Experience building dashboards or internal tools using React, Streamlit, or similar frameworks
Familiarity with Databricks AI/BI or other data visualization tools
Exposure to data governance and metadata management frameworks
Experience working with cloud platforms (AWS preferred)
Preferred Experience
8+ years of experience in Data Engineering, Platform Engineering, or related roles
Experience working in data‑driven or ML‑focused environments
What You’ll Bring
Strong problem‑solving mindset with a focus on automation and efficiency
Ability to work in a fast‑paced, collaborative environment
Passion for building scalable internal tools and improving developer productivity