
Principal AI Compiler Engineer
Renesas Electronics Corporation, Dallas, TX, United States
AI Compiler Engineer
Renesas Electronics is searching for a hands‑on AI Compiler Engineer who thrives at the convergence of cutting‑edge AI, compiler tech, and hardware design. Here, you’ll not only architect and scale a production‑class AI compiler toolchain, but also rethink how AI automates, optimizes, and accelerates every step of building and deploying neural networks on Renesas SoCs. You’ll work shoulder‑to‑shoulder with visionary engineers—both human and AI—enabling adaptive compilers that learn, evolve, and redefine what’s possible for embedded intelligence. With a relentless focus on hardware‑software co‑design, you’ll collaborate across teams to translate high‑level AI models into blazing‑fast, energy‑efficient executables, unlocking the full potential of our silicon for real‑world impact. Innovation here isn’t a catchphrase—it’s your everyday.
What You’ll Do
Own the design, implementation, and evolution of an AI compiler toolchain that leverages AI agents to seamlessly map neural networks onto Renesas SoC platforms.
Pioneer new graph transformations, lowering, scheduling, and codegen strategies for CPUs and custom accelerators, driven by insights from AI‑powered analytics.
Build deep integrations with leading AI frameworks (PyTorch, TensorFlow, ONNX, and more), using AI agents to rapidly onboard new model architectures and ops.
Push the envelope on quantization, operator fusion, memory planning, and layout transformations—combining human expertise and AI‑guided design for state‑of‑the‑art results.
Partner with hardware and software architects, kernel hackers, and AI agents to co‑design next‑gen compiler and accelerator features, aligning silicon and code for maximum impact.
Diagnose and crush performance bottlenecks with AI‑enabled profiling and diagnostics, relentlessly tuning for latency, throughput, and power efficiency.
Level up validation, benchmarking, and regression pipelines by harnessing AI agents—ensuring compiler correctness and world‑class performance, release after release.
Uplevel the developer experience by streamlining usability, diagnostics, and documentation—AI agents are your copilots for user support, troubleshooting, and rapid iteration.
Qualifications
MS/PhD (or equivalent experience) in Computer Science, EE, or a related field.
Deep experience building AI compilers, accelerator backends, or graph optimization frameworks.
Strong expertise in graph optimization and performance optimization for NPUs or custom accelerators.
Experience with MLIR, LLVM, TVM‑like systems, or proprietary compiler IRs.
Excellent C/C++ and Python skills.
Solid understanding of AI inference workloads (CNNs, transformers, perception or generative models).
Strong communication skills, demonstrated for example through agile development experience in a Scrum team (as Product Owner or Scrum Master).
Nice to Have
Experience with automotive or safety‑critical systems.
Background in heterogeneous SoCs (CPU/GPU/DSP/NPU).
Performance modeling or hardware‑software co‑design experience.
Equal Opportunity Statement
Renesas Electronics is an equal opportunity and affirmative action employer, committed to celebrating diversity and fostering a work environment free of discrimination on the basis of sex, race, religion, national origin, gender, gender identity, gender expression, age, sexual orientation, military status, veteran status, or any other basis protected by federal, state or local law.