
Job Description
Job Summary
The Chief Machine Learning Compiler Architect role, within the NPU Hardware & Software organization, is intended for an individual with a broad background in compiler development and architecture and significant experience with AI/ML hardware accelerators and advanced compilation technologies. The Chief Machine Learning Compiler Architect will design and develop the compiler architecture for our state-of-the-art Neural Processing Unit (NPU), optimizing and transforming machine learning models into efficient executable formats tailored to our specialized hardware. Additionally, you will lead research initiatives in advanced compilation techniques and drive adoption of cutting-edge optimization strategies and compilation methodologies.
Years of experience needed
12+ years of experience in compiler development or architecture, particularly targeting AI or ML hardware accelerators
Location: Mountain View, California, US
Hybrid on-site: up to 2 days a week work from home, but the candidate must otherwise be physically present in Mountain View.
Technical Skills:
Compiler Architecture & Design
Design and develop a robust compiler architecture that effectively interacts with our NPU
Implement advanced graph optimizations that incorporate both hardware-agnostic and hardware-specific enhancements
Develop and optimize algorithms for tiling and memory management to efficiently utilize the NPU's resources
Create sophisticated optimization passes for neural network inference and training workloads
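For illustration only (not part of the role description): the tiling responsibility above refers to restructuring a traversal into fixed-size blocks so each block fits in fast accelerator-local memory before moving on. A minimal, hypothetical sketch of the idea:

```python
def tiled_order(n, tile):
    """Yield (row, col) pairs of an n x n grid in tile x tile blocks,
    so each block can stay resident in fast (e.g. NPU-local) memory."""
    order = []
    for bi in range(0, n, tile):           # block row
        for bj in range(0, n, tile):       # block column
            for i in range(bi, min(bi + tile, n)):
                for j in range(bj, min(bj + tile, n)):
                    order.append((i, j))
    return order

order = tiled_order(8, 4)
# Every cell is visited exactly once, just in a block-friendly order.
assert sorted(order) == [(i, j) for i in range(8) for j in range(8)]
# The first tile*tile visits all fall inside the top-left block.
assert all(i < 4 and j < 4 for i, j in order[:16])
```

In a real NPU compiler the tile sizes would be derived from the accelerator's scratchpad capacity and the operator's data layout; the constants here are arbitrary.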
Code Generation & Hardware Integration
Map high-level operations to optimized library macros and convert them into hardware-level instructions
Generate and manage DMA commands to facilitate data movement and operation within the hardware ecosystem
Collaborate with hardware engineers and system architects to ensure seamless integration and maximal performance of the NPU
Implement efficient scheduling and resource allocation algorithms for concurrent AI workload execution
Innovation & Technology Leadership
Stay updated with the latest trends and advancements in compiler technology and machine learning to continuously improve the compiler design
Lead research initiatives in advanced compilation techniques for AI accelerators
Drive adoption of cutting-edge optimization strategies and compilation methodologies
Mentor engineering teams on compiler design principles and best practices
Certifications Needed:
Any certification relevant to the above skill requirements is desirable.
Required Specialized Skills:
12+ years of experience in compiler development or architecture, particularly targeting AI or ML hardware accelerators
Strong understanding of machine learning algorithms and their computational implications
Hands-on experience with TVM, IREE, XLA, MLIR, or LLVM
Proficiency in programming languages such as C++ and Python
Experience with graph optimization techniques and memory management strategies in compilers
Demonstrated ability to translate high-level functional requirements into detailed technical designs
Deep knowledge of hardware architecture principles and AI accelerator design concepts
Proven track record of leading compiler architecture projects from concept to production deployment
Desired Skills (nice to have):
Prior experience with NPU hardware
Knowledge of automotive industry standards and functional safety requirements
Experience with neural network quantization and optimization techniques
Background in high-performance computing and parallel processing architectures
Publications or contributions to open-source compiler projects
Experience with GenAI tools for accelerated engineering workflows and AI-assisted development practices
Enthusiasm for adopting innovative AI-augmented development practices and continuous learning in rapidly evolving GenAI technologies
Required Qualification:
B.Tech or MCA
About Mphasis
Mphasis applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in the Mphasis Front2Back™ Transformation approach. Front2Back™ uses the exponential power of cloud and cognitive to provide a hyper-personalized (C=X2C2™=1) digital experience to clients and their end customers. Mphasis' Service Transformation approach helps 'shrink the core' through the application of digital technologies across legacy environments within an enterprise, enabling businesses to stay ahead in a changing world. Mphasis' core reference architectures and tools, and its speed and innovation with domain expertise and specialization, are key to building strong relationships with marquee clients.