
Data Engineer
Diverse Lynx, Dallas, TX, United States
Role Name - Data Engineer
ROLE_DESCRIPTION -
The Engineer will be part of the datastore-migration Factory team, which will be responsible for performing the end-to-end datastore migration from the on-prem Data Lake to an AWS-hosted Lakehouse. This is a high-visibility project that is crucial for the Client.
Responsibilities of the Engineer include:
1. Pipeline Migration
a. Logic & Scheduling: Refactoring and migrating extraction logic and job scheduling from legacy frameworks to the new Lakehouse environment.
b. Data Transfer: Executing the physical migration of underlying datasets while ensuring data integrity.
c. Stakeholder Engagement: Acting as a technical liaison to internal clients, facilitating "hand-off and sign-off" conversations with data owners to ensure migrated assets meet business requirements.
2. Consumption Pattern Migration
a. Code Conversion: Translating and optimizing legacy SQL and Spark-based consumption patterns (raw and modeled) for compatibility with Snowflake and Iceberg.
b. Usage analysis: Understand usage patterns to deliver the required data products.
c. Stakeholder Engagement: Acting as a technical liaison to internal clients, facilitating "hand-off and sign-off" conversations with data owners to ensure migrated assets meet business requirements.
3. Data Reconciliation & Quality: A rigorous approach to data validation is required. Candidates must work with reconciliation frameworks to build confidence that migrated data is functionally equivalent to the data already used within production flows.
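As an illustration of what a reconciliation check might involve (the posting does not specify the Client's framework, so the function and column names below are hypothetical), a minimal sketch could compare row counts plus an order-insensitive content fingerprint between the source and migrated copies of a dataset:

```python
import hashlib

# Hypothetical reconciliation sketch: build confidence that a migrated
# dataset is functionally equivalent to its on-prem source by comparing
# row counts and an order-insensitive content fingerprint.

def fingerprint(rows):
    """Hash each row's sorted key=value pairs, then XOR the digests so
    the result does not depend on row order."""
    acc = 0
    for row in rows:
        canonical = "|".join(f"{col}={row[col]}" for col in sorted(row))
        digest = hashlib.sha256(canonical.encode()).hexdigest()
        acc ^= int(digest, 16)
    return acc

def reconcile(source_rows, target_rows):
    """Return simple equivalence checks between source and target."""
    return {
        "row_count_match": len(source_rows) == len(target_rows),
        "content_match": fingerprint(source_rows) == fingerprint(target_rows),
    }

# Example: on-prem extract vs. Lakehouse copy, same rows in a different order.
source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
target = [{"id": 2, "amt": 20}, {"id": 1, "amt": 10}]
print(reconcile(source, target))  # both checks pass: rows match regardless of order
```

In practice such checks would run over Spark or Snowflake query results rather than in-memory lists, but the principle of count-plus-checksum comparison is the same.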
The Engineer will also need to work with the internal data management platforms team and must have an aptitude for learning new workflows and language constructs as necessary.
Technical Skills:
1. Basic Qualifications
a. Education: Bachelor's or Master's in Computer Science, Applied Mathematics, Engineering, or a related quantitative field.
b. Experience: Minimum of 3-5 years of professional "hands-on-keyboard" coding experience in a collaborative, team-based environment, including the ability to troubleshoot SQL and basic scripting experience.
c. Languages: Professional proficiency in Python or Java.
d. Methodology: Deep familiarity with the full Software Development Life Cycle (SDLC), CI/CD best practices, and Kubernetes (K8s) deployment experience.
2. Core Data Engineering Competencies: Candidates must demonstrate a sophisticated understanding of the following modeling concepts to ensure data correctness during reconciliation:
a. Temporal Data Modeling: Managing state changes over time (e.g., SCD Type 2).
b. Schema Management: Expertise in Schema Evolution (e.g., Apache Iceberg) and enforcement strategies.
c. Performance Optimization: Advanced knowledge of data partitioning and clustering.
d. Architectural Theory: Balancing Normalization vs. Denormalization and the strategic use of Natural vs. Surrogate Keys.
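To make the Temporal Data Modeling item concrete, here is a minimal sketch of SCD Type 2 handling in plain Python (table and column names are illustrative, not from the posting): when a tracked attribute changes, the current dimension row is end-dated and a new current row is appended, preserving full history.

```python
from datetime import date

# Hypothetical SCD Type 2 sketch. Each dimension row is a dict with a
# business key, tracked attributes, 'valid_from', and 'valid_to'
# (valid_to=None marks the current version of the record).

def apply_scd2(dimension, key, new_attrs, as_of):
    """Apply a change to the dimension with Type 2 semantics:
    end-date the current row for `key` (if its attributes changed)
    and append a new current row effective from `as_of`."""
    for row in dimension:
        if row["key"] == key and row["valid_to"] is None:
            if all(row.get(k) == v for k, v in new_attrs.items()):
                return dimension  # no attribute change; keep history as-is
            row["valid_to"] = as_of  # close out the current version
            break
    dimension.append({"key": key, **new_attrs,
                      "valid_from": as_of, "valid_to": None})
    return dimension

# Example: a customer moves from Dallas to Austin.
dim = [{"key": "C1", "city": "Dallas",
        "valid_from": date(2023, 1, 1), "valid_to": None}]
apply_scd2(dim, "C1", {"city": "Austin"}, date(2024, 6, 1))
# dim now holds two versions: the Dallas row end-dated on 2024-06-01,
# and an Austin row that is current (valid_to=None).
```

In a warehouse this logic would typically be expressed as a MERGE statement over the dimension table, but the versioning mechanics are the same.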
3. Technical Stack Requirements: While candidates are not expected to be experts in every tool, the collective team must cover the following technologies:
Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company.