
Director, Technical Program Management - AI Inference
AMD, San Jose, CA, United States
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next‑generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges, striving for execution excellence while being direct, humble, collaborative and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.
The Role
We are seeking a TPM Director to lead inference programs for the AI Group's BRAIN organization. You will shape the vision for inference platform impact, engage internal and external stakeholders, and guide programs from inception to delivery. You will drive end‑to‑end execution of complex, cross‑functional inference initiatives while owning multi‑quarter planning, roadmap alignment and an operating cadence that turns strategy into predictable delivery across the inference engineering workstreams.
The Person
The ideal candidate is a highly effective program leader with strong technical depth in AI/ML systems and large‑scale inference. Comfortable operating in ambiguity, you translate strategy into executable roadmaps across a broad set of teams and priorities, applying a clear AI vision to drive business results. You communicate crisply at all levels, influence without direct authority, and build trust with senior engineering leaders. You proactively surface risks, trade‑offs and decision points before they become blockers, and create mechanisms that improve organizational visibility and delivery predictability. You thrive in a fast‑moving environment, bring strong operational discipline, and establish durable processes for portfolio planning, executive reviews, milestone tracking and accountability without creating unnecessary overhead for engineering teams.
Key Responsibilities
- Own the inference portfolio planning process by translating strategy into a multi‑quarter roadmap, quarterly execution plans and measurable business and engineering outcomes.
- Establish and run an execution operating model across the inference organization, including planning reviews, OKRs, dashboards, decision logs, milestone tracking and risk management mechanisms that drive rigor, transparency and predictable delivery.
- Drive end‑to‑end delivery of large‑scale inference capabilities across cross‑functional engineering teams, managing scope, milestones, dependencies, critical path and release readiness.
- Partner with engineering and product leadership to align priorities, sequencing and resource planning across a complex portfolio of inference initiatives spanning platform readiness, model support, serving performance, benchmark readiness and ecosystem integration.
- Apply technical judgment to identify and manage architecture‑level trade‑offs, technical dependencies and execution risks across inference workloads, runtimes, software stacks and deployment environments.
- Analyze and quantify project risks; develop and maintain risk management plans; and proactively mitigate issues by driving clear owners, timelines and path‑to‑green actions.
- Develop, maintain and manage program requirements, execution plans, timelines, issues, risks and challenges; ensure milestones, dependencies and resources are tracked and escalated appropriately.
- Lead executive‑level program reviews by clearly communicating status, key decisions, risks, dependencies and resource needs; ensure leadership has accurate visibility into progress, gaps and path‑to‑green plans.
- Drive cross‑organizational alignment with internal stakeholders and external ecosystem partners where needed, helping remove blockers and accelerate delivery across upstream and downstream dependencies.
- Improve operational maturity across the organization by standardizing TPM best practices, governance frameworks and planning mechanisms that increase accountability, reduce execution friction and strengthen delivery consistency.
Preferred Experience
- Strong familiarity with modern AI inference ecosystems, including model serving, runtime software, compiler/toolchain dependencies, optimization techniques and deployment workflows for production inference.
- Experience leading large, cross‑functional programs across software, systems, architecture, hardware and product teams in highly technical environments.
- Track record of building multi‑quarter roadmaps, execution cadences and governance mechanisms that improve predictability across fast‑moving engineering organizations.
- Experience working across open‑source and ecosystem‑driven environments, including upstream dependencies and release planning.
- Strong executive presence with demonstrated success communicating program health, risks, trade‑offs and decisions to senior leadership.
- Proven ability to influence across matrixed organizations, resolve ambiguity and drive alignment among teams with competing priorities.
- Experience managing, mentoring or scaling TPM teams is preferred.
Academic Credentials
Master’s or Bachelor’s degree in Computer Engineering, Computer Science, Electrical Engineering or a related technical field is desired.
Benefits
The benefits offered are described in AMD benefits at a glance.
Equal Opportunity Employer
AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third‑party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
Artificial Intelligence Screening
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here.
Posting Status
This posting is for an existing vacancy.