
Scala Engineers

Appex Innovation, Chicago, Illinois, United States, 60290


Job Description

Staff Scala Engineer (Onsite)

Location:

Chicago, IL (Onsite)

Also open to Virginia locations.

Overview:

We are a technology-driven company seeking a highly experienced, deeply technical Staff Scala Engineer to anchor our Data Platform team. This role demands leadership in designing, developing, and optimizing high-throughput, fault-tolerant data solutions using a modern stack centered on Scala, Apache Spark, and AWS cloud services.

Core Responsibilities

System Architecture:

Lead the architectural definition and implementation of robust, scalable, and efficient data processing systems utilizing Scala and Apache Spark. Weigh the trade-offs between batch and stream processing architectures (e.g., Spark Streaming or Flink).
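A minimal sketch of that trade-off, expressing the same event count either as a bounded batch job or as a Structured Streaming job reading from Kafka; the bucket, topic, and field names are hypothetical placeholders, not part of this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object BatchVsStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("batch-vs-stream-sketch")
      .getOrCreate()
    import spark.implicits._

    // Batch: bounded input, simple reprocessing, higher end-to-end latency.
    // (Paths are placeholders.)
    val batchCounts = spark.read
      .json("s3://example-bucket/events/2024-01-01/")
      .groupBy($"eventType")
      .count()
    batchCounts.write.mode("overwrite").parquet("s3://example-bucket/reports/daily/")

    // Streaming: unbounded input and lower latency, but requires checkpointing
    // and more careful operational support.
    val streamCounts = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .select(get_json_object($"value".cast("string"), "$.eventType").as("eventType"))
      .groupBy($"eventType")
      .count()

    streamCounts.writeStream
      .outputMode("complete")
      .format("console")
      .option("checkpointLocation", "s3://example-bucket/checkpoints/event-counts/")
      .start()
      .awaitTermination()
  }
}
```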

Engineering Excellence:

Develop high-quality, maintainable, and performant functional code in Scala. Drive performance tuning and optimization of large-scale Spark jobs.
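A minimal sketch of the kind of tuning involved, assuming a hypothetical fact/dimension join: broadcasting the small table avoids shuffling the large one, and spark.sql.shuffle.partitions is sized explicitly rather than left at its default.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.broadcast

object SparkTuningSketch {
  // Broadcasting the small dimension table avoids a full shuffle of the
  // large events table during the join.
  def enrich(events: DataFrame, users: DataFrame): DataFrame =
    events.join(broadcast(users), Seq("userId"))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-tuning-sketch")
      // Shuffle parallelism sized for the data volume and cluster at hand;
      // the default of 200 partitions is rarely right at scale.
      .config("spark.sql.shuffle.partitions", "400")
      .getOrCreate()

    // Placeholder input and output paths.
    val events = spark.read.parquet("s3://example-bucket/events/")
    val users  = spark.read.parquet("s3://example-bucket/users/")

    enrich(events, users)
      .write
      .mode("overwrite")
      .parquet("s3://example-bucket/enriched-events/")
  }
}
```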

Cloud Infrastructure:

Architect and manage the deployment of data pipelines using core AWS services (e.g., S3, EMR, Glue, ECS). Ensure optimal usage of cloud resources for cost and efficiency.

Technical Leadership:

Serve as a subject matter expert for the team. Mentor peers, define coding standards, and lead complex technical design reviews.

Collaboration:

Partner with cross-functional teams (Product, DevOps, Analytics) to ensure technical solutions meet business requirements and are seamlessly integrated into the ecosystem.

Operational Support:

Implement and manage monitoring, logging, and alerting strategies to maintain the health and reliability of production data services.

Required Technical Qualifications

7+ years of professional experience in software engineering, with significant time spent building distributed data applications.

Expertise in Scala:

Deep, demonstrable experience with production-level Scala development, emphasizing functional programming paradigms.
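A small sketch of the functional style this implies (assuming Scala 2.13+ and an illustrative event type): immutable case classes, pure functions, and explicit error handling with Either rather than exceptions.

```scala
// Illustrative record types; the fields and rules are placeholders.
final case class RawEvent(id: String, amountCents: String)
final case class Event(id: String, amountCents: Long)

object EventValidation {
  // Pure parsing: all failure modes are values, not thrown exceptions.
  def parse(raw: RawEvent): Either[String, Event] =
    raw.amountCents.toLongOption match {
      case Some(cents) if cents >= 0 => Right(Event(raw.id, cents))
      case Some(_)                   => Left(s"negative amount in event ${raw.id}")
      case None                      => Left(s"non-numeric amount in event ${raw.id}")
    }

  // Pure aggregation over an immutable collection: no mutation, no side effects.
  def totalValid(raws: List[RawEvent]): (List[String], Long) = {
    val (errors, events) = raws.map(parse).partitionMap(identity)
    (errors, events.map(_.amountCents).sum)
  }
}
```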

Expertise in Apache Spark:

Mastery of Apache Spark (Scala API) for complex, large-scale data transformation, ETL/ELT, and performance optimization techniques (shuffling, partitioning).
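One illustrative partitioning technique, with hypothetical paths and column names: clustering rows by the partition column before a partitionBy write keeps file counts manageable and lets downstream readers prune partitions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object PartitionedWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partitioned-write-sketch")
      .getOrCreate()

    // Placeholder input path and column name.
    val events = spark.read.parquet("s3://example-bucket/events-raw/")

    events
      // Cluster rows by the partition column first so each output directory
      // is written by few tasks, avoiding many tiny files.
      .repartition(col("eventDate"))
      .write
      .mode("overwrite")
      // Layout .../eventDate=2024-01-01/ enables partition pruning for
      // readers that filter on eventDate.
      .partitionBy("eventDate")
      .parquet("s3://example-bucket/events-by-date/")
  }
}
```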

AWS Cloud Proficiency:

Strong, practical experience with primary AWS data and compute services (e.g., S3, EMR, Glue, Step Functions, IAM, CloudFormation/Terraform).

Foundational Knowledge:

Solid grasp of distributed systems design, data structures, and algorithms.

Database Experience:

Proficiency with various data storage technologies (relational, NoSQL).

DevOps Practices:

Working knowledge of CI/CD pipelines and infrastructure as code tools (Terraform, CloudFormation).

Preferred Qualifications

Experience with stream processing (e.g., Kafka, Kinesis, Flink).

Familiarity with container orchestration (Docker and Kubernetes/EKS).

Prior experience in a Staff, Principal, or Lead Engineer role.