
Data Engineer (Wilmington)
Hallmark Global Solutions Ltd, Wilmington, DE, United States
Hiring: Databricks Engineer (PySpark & Data Lake)
Wilmington, DE (In-Person Interview) |
5 Days Onsite | ⏳ Long-Term Project
We’re looking for a strong Data Engineer to drive the modernization of legacy ETL systems by migrating Ab Initio workflows into scalable PySpark pipelines on Databricks.
This is a great opportunity to work on large-scale data transformation, building cloud-native architectures while ensuring high performance, data quality, and reliability.
Key Responsibilities
Migrate legacy ETL workflows (Ab Initio → PySpark on Databricks)
Design & develop scalable, modular data pipelines
Build ETL/ELT pipelines integrating Snowflake & other data sources
Support batch & near real-time data processing
Create data lineage & optimize workflows
Perform testing, validation & UAT support
Drive deployment, migration & system cutover strategies
What We’re Looking For
✔ Strong experience with Databricks & PySpark
✔ Hands-on in ETL/ELT pipeline development
✔ Experience with Snowflake / Data Lake architectures
✔ Background in legacy ETL migration (Ab Initio preferred)
✔ Strong understanding of data governance, optimization & performance
If you or someone in your network is a fit, feel free to reach out or share your resume at salomon@hgtechinc.net.