
Data Engineer

Compunnel, Durham, NC, United States


Job Summary

We are seeking a highly motivated Data Engineer to design, build, and maintain operational and analytical data platforms. This role involves working on modern data infrastructure, including data lakes, and contributing to solution design, data analysis, production deployment, and ongoing support. The position offers an opportunity to play a key role in enhancing data capabilities using technologies such as Snowflake, AWS, and Python.

Key Responsibilities

Design, develop, and maintain scalable data pipelines and data platform solutions
Build and optimize ELT/ETL processes for data movement across systems, including Snowflake
Support data lake architecture and ensure efficient data ingestion, processing, and storage
Perform data analysis to support business and technical requirements
Participate in production deployment, monitoring, and support activities
Optimize SQL queries and improve performance of data processing workflows
Implement and maintain CI/CD pipelines for data engineering workflows
Collaborate with cross-functional teams to improve system efficiency and data accessibility
Work with data modeling techniques including dimensional and Data Vault models
Ensure adherence to best practices in data engineering, DevOps, and Agile methodologies
Explore and evaluate new tools and technologies to enhance data capabilities

Required Qualifications

Bachelor's or Master's degree in Computer Science, Engineering, or a related field
10+ years of overall experience in technology roles
6+ years of experience with data warehouse and data mart concepts and implementations
4+ years of experience developing ELT/ETL pipelines, particularly with Snowflake
4+ years of experience working with AWS services such as EC2, IAM, S3, EKS, KMS, SNS/SQS, CloudWatch, and CloudFormation
At least 1 year of experience in object-oriented programming languages such as Python or Java
Strong proficiency in SQL and SnowSQL
Experience in SQL query optimization and performance tuning
Experience with job scheduling tools such as Control-M or similar
Strong data analysis and data modeling skills
Experience with container technologies such as Docker and Kubernetes
Experience with DevOps practices and CI/CD tools such as Maven, Jenkins, Git-based tools, and Ansible
Experience working in Agile environments such as Scrum or Kanban
Ability to work in a fast-paced environment and manage ambiguity
Strong collaboration and interpersonal skills

Preferred Qualifications

Advanced expertise in SQL and performance optimization techniques
Experience working with modern data lake architectures
Exposure to automation and system optimization practices
Strong interest in learning new technologies and applying them to business use cases
Proven ability to contribute to process improvements and team efficiency