
Data Engineer, Senior Staff
Qualcomm, San Diego, CA, United States
General Summary
We are seeking a Senior Staff Data Engineer to design, build, and operate a modern, scalable data platform with Databricks Lakehouse as a core foundation. In this role you will focus on building reusable data frameworks, shared platform components, and standardized pipelines that enable teams to deliver data products efficiently and consistently. Your work will support analytics, reporting, and downstream advanced use cases (including AI and machine learning) with a strong emphasis on reliability, governance, developer productivity, and intelligent automation.
This hands‑on role offers meaningful ownership across data engineering, framework development, AI‑driven automation, platform reliability, security, and cost management while contributing to architectural decisions and data standards. The position is full‑time on‑site (5 days per week) and may be based in San Diego, CA or Boulder, CO.
Note: This position is not eligible for Qualcomm immigration sponsorship.
Minimum Qualifications
7+ years of IT‑related work experience with a Bachelor’s degree in Computer Engineering, Computer Science, Information Systems, or a related field.
OR 9+ years of IT‑related work experience without a Bachelor’s degree.
5+ years of work experience with programming (e.g., Java, Python).
3+ years of work experience with SQL or NoSQL databases.
3+ years of work experience with data structures and algorithms.
What You’ll Do
Data Engineering, Frameworks & AI‑Driven Automation
Design, build, and maintain scalable batch and streaming data pipelines.
Develop reusable data engineering frameworks, libraries, and templates for ingestion, transformation, validation, and publishing.
Establish standardized patterns for data modeling, transformations, and pipeline orchestration.
Implement end‑to‑end data workflows from raw ingestion to curated analytical datasets.
Leverage AI‑based techniques to automate and optimize data engineering workflows, such as:
Intelligent schema inference and evolution.
Automated data quality checks and anomaly detection.
Pipeline failure detection and self‑healing mechanisms.
Build AI‑assisted or intelligent automation for:
Data quality monitoring.
Pipeline observability.
Cost or performance optimization.
Ensure data quality, reliability, and performance across pipelines and shared frameworks.
Support downstream consumers such as analytics, reporting, and AI/ML teams.
Reliability, Operations & Intelligent Automation
Define and monitor SLIs/SLOs for data pipelines, frameworks, and platform availability.
Participate in incident response, on‑call rotations, and post‑incident reviews.
Apply AI‑assisted monitoring and alerting to proactively detect performance issues, data drift, and operational anomalies.
Implement security, compliance, and data governance controls across shared data assets.
Drive performance tuning and cost optimization, including automated recommendations for resource utilization and workload optimization.
Collaboration & Technical Leadership
Partner with analytics, application, and platform teams to understand common data needs and platform gaps.
Drive adoption of standardized data frameworks, automation patterns, and best practices across teams.
Contribute to data architecture decisions, platform standards, and design guidelines.
Mentor junior engineers and provide technical guidance, including best practices for automating data workflows.
Required Qualifications
Data Engineering, Frameworks & System Design
8+ years of experience building and operating data platforms or distributed data systems.
Proven experience designing and building reusable data engineering frameworks, libraries, or platform components.
Strong experience designing scalable, reliable data pipelines using standardized patterns.
Solid understanding of data modeling, storage formats, schema evolution, and query performance.
Experience implementing automation in data pipelines, including rule‑based or AI‑assisted approaches.
Ability to reason about architectural trade‑offs across scalability, cost, reliability, and security.
Cloud & Data Platform Experience
Strong hands‑on experience with AWS, including IAM, networking, and multi‑account setups.
Proven experience with Databricks Lakehouse, including:
Delta Lake.
Unity Catalog.
Strong proficiency in Python for framework development, data processing, and automation.
Experience building data platforms that support multiple consumers and automated workflows.
Security & Communication
Understanding of cloud security best practices and data governance.
Experience working in regulated or compliance‑driven environments.
Strong communication skills and the ability to drive adoption of shared frameworks and automation patterns across teams.
Nice‑to‑Have
Experience building internal data platforms or enablement frameworks.
Experience supporting AI/ML teams as platform consumers (without owning models).
Experience with data observability and monitoring tools.
Experience with enterprise ingestion tools (e.g., Fivetran, HVR).
Experience with data lineage or metadata management.
Familiarity with secret management tools (Vault or similar).
Experience optimizing Databricks performance and cost.
Experience working with globally distributed teams.
Pay Range and Other Compensation & Benefits
$158,400.00 - $237,600.00
The above range reflects the broad, minimum-to-maximum pay scale for this job code. Salary is only one component of total compensation at Qualcomm. Additional components include a competitive annual discretionary bonus program and opportunities for annual RSU grants. Qualcomm also offers a highly competitive benefits package to support employees at work, at home, and at play.
EEO Statement
Qualcomm is an equal opportunity employer; all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or any other protected classification.