
Data Engineer II

3B Staffing LLC, St. Louis, MO, United States



Hybrid

4 days onsite confirmed, but with tons of flexibility.

Ideally a US citizen (USC) or green card (GC) holder.

Tech stack: SQL/T-SQL, Azure Data Factory, Snowflake, Oracle, Python, Palantir Foundry, star schema/data warehouse design, ETL/ELT pipelines, Azure DevOps, Power BI/Power Query.
Key must-haves: SQL, T-SQL, Azure (ADF), Snowflake, Oracle.
They are adopting Palantir as their primary data platform for AI use cases.
Azure SQL Server will be their secondary platform.
The growth path for this role is Sr. Data Engineer, then Architect.
Position Summary
We are seeking a full-time Data Engineer II located in St. Louis, MO who, as part of the Enterprise Solutions and Analytics team, will play a crucial role in shaping our data infrastructure, supporting our existing systems, and driving innovation through ML and AI solutions. As we expand our capabilities, your expertise will be pivotal in delivering end-to-end data pipelines and repositories.
Key Responsibilities

Profile source system data and design data warehouse and lake schemas
Transform raw data into usable formats based on analytics, reporting, and integration requirements
Design, implement, and support ETL, ELT, and integration pipelines
Optimize data pipelines for performance, scalability, and efficiency
Implement data quality checks and validations within data pipelines to ensure accuracy, consistency, and completeness of data
Collaborate with cross-functional teams to promote AI solutions across the enterprise
Partner with teams implementing enterprise analytics platforms (e.g., Palantir) to design and execute data integration pipelines, manage ingestion from source systems, and ensure interoperability across platforms
Apply best practices in solution development, quality control, and security when implementing pipelines
Participate in code and design reviews to ensure alignment with standards and best practices
Research, analyze, recommend, and select technical approaches for solving challenging and complex development and integration problems
Qualifications

5+ years of work experience in data management disciplines, including data modeling, integration, and development of ETL/ELT pipelines
5+ years of data solution delivery experience using modern data platforms (e.g., Azure, Fabric, Palantir) and tools (e.g., Azure Data Factory, Azure DevOps, Palantir Foundry)
Advanced SQL development for relational database systems
Experience developing complex SQL queries, stored procedures, functions, and database objects
Experience with Python for building, testing, and maintaining data pipelines and analytics workflows
Python proficiency for data management, ML, and AI development
Experience querying API endpoints
Experience with Power Query
Solid understanding of data warehouse and data lake concepts and design (including star schemas)
Experience with Agile/Scrum methodology preferred
Excellent analytical, conceptual, and problem-solving abilities with an aptitude for acquiring new technical skills
Entrepreneurial attitude with a passion for delivering value to the organization
Highly motivated team player with strong interpersonal skills
Constant learner with a passion for continuous growth and improvement
Nice to Haves

Experience developing integration solutions using a low-code platform such as Boomi, SnapLogic, or MuleSoft
Experience developing solutions using a lakehouse architecture
Experience working with enterprise analytics platforms such as Palantir Foundry
Experience developing and promoting work through DevOps/DataOps pipelines and using source control
Experience with NoSQL database systems and building data pipelines to ingest unstructured and streaming data
Experience preparing data for data science and machine learning use cases
Experience with master data management
Experience designing reports using Power BI, Power Query, and DAX