
Data Engineer
High Trail, King of Prussia, PA, United States
The Data Engineer plays a key role in designing, building, and maintaining scalable data pipelines and architectures using Microsoft Fabric. This position supports the organization’s data strategy by ensuring reliable data availability, optimizing workflows for performance, and enabling effective data visualization through tools like Power BI.
This role involves developing ETL processes, monitoring and troubleshooting data pipelines, and maintaining clear documentation of data systems. The Data Engineer will collaborate cross-functionally to deliver efficient data solutions that support strategic decision-making and operational performance.
The ideal candidate has strong programming experience (Python, Java, or Scala), a solid understanding of data governance and security, and hands-on experience working within modern cloud-based data environments.
Key Responsibilities
Design, develop, and maintain scalable data pipelines and architectures using Microsoft Fabric
Build and manage ETL processes to ensure data accuracy and accessibility
Optimize data workflows for performance and scalability
Provide guidance on data visualization tools, particularly Power BI
Support development of standards and best practices for dashboards and reporting
Monitoring & Troubleshooting
Monitor data pipelines and resolve issues promptly to ensure reliability
Documentation & Best Practices
Create and maintain documentation for data processes, systems, and workflows
Stay current with industry trends and advancements in data engineering and Microsoft Fabric
Qualifications
Education & Experience
Proven experience working with Microsoft Fabric
Experience with Azure DevOps Git for version control
Experience building data pipelines within Fabric (Lakehouse → Warehouse workflows)
Understanding of data governance and security best practices
Technical Skills
Proficiency in Python, Java, or Scala
Experience with Azure or similar cloud platforms
Familiarity with CI/CD pipelines and automation tools