
Data Engineer
Fast Switch, Windsor, CT, United States
Location: Windsor, Connecticut
Remote: Remote
Type: Contract
Job #61677
Salary: $50.00 - $63.00 Per Hour
Data Engineer
Target rate: $60/hr W2
Contract Length: 6 months
Location: Remote (East Coast/EST hours); however, local candidates are strongly encouraged to apply.
***Candidates must be able to work on our W2 without requiring sponsorship, now or at any time in the future.
***We do not work Corp-to-Corp in any manner, including any form of referral bonus.
If, after reading the description, you would like to be submitted, please answer the questions below. Answer each in the implied first person, as a stand-alone sentence; your answers will accompany your resume to the client.
How many years of hands-on data engineering experience do you have building and supporting ETL or ELT pipelines, and in what environments?
How much experience do you have using SQL and Python to create, enhance, and troubleshoot data pipelines, and what types of data volumes or workloads have you supported?
What hands-on experience do you have with Azure and Databricks, including Delta Live Tables and Unity Catalog, and where have you used them in production?
Which orchestration and integration tools have you used, such as SnapLogic, Azure Data Factory, and Jenkins, and what kinds of pipeline scheduling or automation work did you own?
What experience do you have with Terraform for infrastructure as code and deployment pipeline management, and what did you specifically provision or maintain?
What data quality and monitoring tools have you used, such as Soda or similar platforms, and how did you apply them to validate pipeline reliability and data accuracy?
JOB DESCRIPTION
Profile Summary:
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines and systems that deliver trusted data for analytics and product use cases. This role partners with cross-functional teams to understand data needs and implement solutions that support both near-term and long-term objectives. The position requires the ability to contribute to technical design, ensure data quality, and operate with increasing independence and accountability.
Profile Description:
Develop and maintain batch and streaming data pipelines using modern tools and frameworks.
Design transformations, optimize performance, and ensure reliable data delivery.
Design and implement scalable, maintainable data models and storage solutions aligned with business needs and supporting efficient querying, analytics, and integration efforts.
Participate in agile best practices: help refine stories, identify dependencies, and proactively raise risks or concerns to keep work on track, escalating when needed.
Implement and enforce data quality controls, validation, and compliance standards across pipelines.
Support deployment, scheduling, and monitoring of data pipelines and workflows to ensure consistent, reliable execution.
Maintain clear documentation and promote coding standards, best practices, and reusable components.
Collaborate regularly with cross-functional teams to clarify data requirements, document assumptions, and deliver high-quality solutions.
Communicate clearly during stand-ups, design discussions, and retrospectives.
Contribute to team code reviews and share learnings with peers.
Knowledge & Experience:
2-5 years of experience in data engineering, data modeling, and ETL pipelines.
Strong SQL and Python skills for building, improving, and troubleshooting data pipelines.
Experience with cloud and data platforms, especially Azure and Databricks, including Delta Live Tables and Unity Catalog.
Strong understanding of tools such as SnapLogic, Azure Data Factory, and Jenkins for data integration and orchestration.
Practical experience with Terraform for infrastructure as code and deployment pipeline management.
Experience integrating with APIs.
Knowledge of data quality and monitoring tools, particularly Soda or similar platforms.
Proficiency with version control and CI/CD workflows using tools such as GitHub.
Solid understanding of data modeling principles, including dimensional modeling and normalization.
Comfortable working in agile teams, with a proactive approach to planning, organizing tasks, and collaborating.
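
For illustration only: the sketch below shows, in Python, the kind of pipeline and data quality work described above. It assumes a Databricks Delta Live Tables environment (the dlt module is available only inside a DLT pipeline), and the table and column names (raw_orders, clean_orders, order_id, amount) are hypothetical.

    import dlt
    from pyspark.sql.functions import col

    # Read a raw ingestion table, apply basic data quality expectations,
    # and publish a cleaned table for downstream analytics.
    @dlt.table(comment="Orders with basic quality rules applied (illustrative).")
    @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop violating rows
    @dlt.expect("positive_amount", "amount > 0")  # record violations as metrics
    def clean_orders():
        # dlt.read references another table defined in the same pipeline.
        return dlt.read("raw_orders").select(
            col("order_id"),
            col("customer_id"),
            col("amount"),
            col("order_ts"),
        )

In this sketch, the expectations play roughly the role that a standalone tool such as Soda fills when validating pipelines outside Databricks.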