
Senior IT Data Engineer
Bryant Technologies, Washington, District of Columbia, United States
Contract Performance Period:
April 30, 2026 - December 31, 2026
Job Location:
Washington, DC (Metro accessible); on-site 5 days/week.
Direct Hire:
Term: through December 31, 2026, with a strong possibility of extension
Pay Range:
Depending on Experience
Travel Requirements:
Washington, DC (Metro accessible); on-site 5 days/week.
Working Remotely: No
Project Description
This role involves participating in the planning and execution of policies, practices, and projects designed to acquire, control, protect, and enhance the value of organizational data assets.
Qualification Requirements
General Experience: 5+ years of experience in application/data development, specifically with Python.
Specialized Experience: 5+ years of experience with data integration and ingestion tools, such as Apache NiFi.
Methodology: Experience working with Scrum and Kanban methodologies.
Platform Experience: Proficiency in the long-term operations of data pipelines or processing systems running in the Cloudera Data Platform.
US Citizenship is a requirement for this position.
Skills Requirements
Data Processing & Engineering: Proficiency in PySpark, pandas, or dbt.
Data Ingestion: Expertise in Apache NiFi.
Languages & Databases: Advanced knowledge of SQL, Java, and Microsoft SQL Server.
Distributed Computing: Experience with platforms including Hadoop, MapReduce, Hive, HBase, Kafka, and Spark.
DevOps & Tools: Understanding of git and DevOps-enabled technologies.
Systems: Proficiency in UNIX/Linux, including basic commands and shell scripting.
Technical Operations: Knowledge of data extraction, transformation, loading (ETL), and performance tuning.
Responsibilities
Data Acquisition: Facilitate obtaining data from a variety of sources in correct formats while adhering to quality standards.
Pipeline Development: Build robust data pipelines that clean, transform, and aggregate raw, unstructured data into databases.
Platform Operations: Develop, maintain, monitor, and manage the long-term operations of data pipelines or processing systems within the Cloudera Data Platform.
Issue Resolution: Resolve information flow and content issues as they arise.
CI/CD Implementation: Implement and maintain continuous integration and continuous delivery (CI/CD) pipelines and manage data platforms.
Strategic Planning: Participate in the planning of practices and projects to enhance data asset value.
Job ID:
1532