
Hadoop Developer
Centraprise, Jersey City, NJ, United States
Job Title: Hadoop Developer
Job Location: Charlotte, NC / New York, NY / Dallas, TX / Jersey City, NJ (ONSITE)
Job Type: Full-Time
Job Description:
Must Have Technical/Functional Skills
Primary Skills: Hadoop ecosystem (HDFS, Hive, Spark), PySpark, Python, Apache Kafka
Secondary Skill: Angular (UI)
Experience: Minimum 9 years
Roles & Responsibilities
Architectural Leadership:
• Define end-to-end architecture for data platforms, streaming systems, and web applications.
• Ensure alignment with enterprise standards, security, and compliance requirements.
• Evaluate emerging technologies and recommend adoption strategies.
Data Engineering:
• Design and implement data ingestion, transformation, and processing pipelines using Hadoop, PySpark, and related tools.
• Optimize ETL workflows for large-scale datasets and real-time streaming.
• Integrate Apache Kafka for event-driven architectures and messaging.
Application Development:
• Build and maintain backend services using Python and microservices architecture.
• Develop responsive, dynamic front-end applications using Angular.
• Implement RESTful APIs and ensure seamless integration between components.
Collaboration & Leadership:
• Work closely with product owners, business analysts, and DevOps teams.
• Mentor junior developers and data engineers.
• Participate in agile ceremonies, code reviews, and design discussions.
Required Skills & Qualifications:
Technical Expertise:
• Strong experience with the Hadoop ecosystem (HDFS, Hive, Spark).
• Proficiency in PySpark for distributed data processing.
• Advanced programming skills in Python.
• Hands-on experience with Apache Kafka for real-time streaming.
• Frontend development using Angular (TypeScript, HTML, CSS).
Architectural Skills:
• Expertise in designing scalable, secure, and high-performance systems.
• Familiarity with microservices, API design, and cloud-native architectures.
Additional Skills:
• Knowledge of CI/CD pipelines, containerization (Docker/Kubernetes).
• Exposure to cloud platforms (AWS, Azure, GCP).
Education:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience:
• 9+ years in software development, with at least 4+ years in architecture and Big Data technologies.
Preferred Qualifications:
• BFSI domain experience or large-scale enterprise systems.
• Understanding of data governance, security, and compliance standards.
Soft Skills:
• Strong analytical and problem-solving abilities.
• Excellent communication and leadership skills.
• Ability to thrive in a fast-paced, agile environment.