
Data Engineer - GCP

Euclid Innovations, Charlotte, NC, United States


Responsibilities

Design and build scalable ETL/data pipelines using Spark and Python

Develop data workflows to ingest, transform, and move large datasets

Implement data routing logic to direct data to the appropriate destinations

Ensure data quality, validation, and reconciliation across systems

Collaborate with data science and platform teams to support predictive model pipelines

Optimize performance and scalability for high-volume data processing

Required Skills

Strong hands-on experience with Apache Spark / PySpark for large-scale data processing

Proficiency in Python for data engineering (ETL pipelines)

Experience designing and developing data pipelines / data engineering workflows

Solid background in ETL, data ingestion, transformation, and data movement

Experience working with big data technologies and handling large datasets (batch/streaming)

Experience with Google Cloud Platform (GCP) services: BigQuery, Dataflow, Dataproc, and GCS (Google Cloud Storage)

Experience with data migration / data integration projects

Understanding of data pipeline architecture and distributed systems
