Director, Technical Program Management - AI Inference

Advanced Micro Devices, Inc., San Jose, CA

About The Position

AMD is seeking a TPM Director to lead Inference programs for the AI Group BRAIN organization. You will be at the forefront of innovation and experimentation, shaping a vision for inference platform impact and ecosystem adoption, and engaging with internal and external stakeholders to navigate programs from inception to delivery.

You will drive end-to-end execution of complex, cross-functional inference initiatives while owning multi-quarter planning, roadmap alignment, and the operating cadence that turns strategy into predictable delivery across the Inference engineering workstreams. In this role, you will be a key partner to engineering, product, and business leadership, ensuring that near-term execution strength is matched by clear long-term planning, rigorous prioritization, and proactive management of risks, dependencies, and decision points across a rapidly evolving AI ecosystem.

You will help scale execution across initiatives spanning inference software, runtime enablement, model optimization, systems integration, performance, benchmark readiness, deployment workflows, ecosystem readiness, and product enablement deliverables. This includes engagement with public inference projects and ecosystems (e.g., SGLang, vLLM) where relevant, as well as benchmark platforms (e.g., MLPerf and InferenceX) where we drive submissions and readiness. This role requires strong technical judgment, executive communication, and the ability to align multiple organizations around shared goals and measurable outcomes.

Requirements

  • Bring strong technical depth in AI/ML systems and large-scale inference; operate comfortably in ambiguity and translate strategy into executable roadmaps across a broad set of teams and priorities.
  • Leverage an AI vision to drive business results, drawing on broad knowledge of AI technology, algorithms, and tools.
  • Communicate crisply at all levels, influence without direct authority, and build trust with senior engineering leaders by bringing structure, clarity, and rigor to complex technical programs.
  • Proactively surface risks, tradeoffs, and decision points before they become blockers, and create mechanisms that improve organizational visibility and delivery predictability.
  • Thrive in a fast-moving environment, bring strong operational discipline, and establish durable processes for portfolio planning, executive reviews, milestone tracking, and accountability without creating unnecessary overhead for engineering teams.

Nice To Haves

  • Strong familiarity with modern AI inference ecosystems, including model serving, runtime software, compiler/toolchain dependencies, optimization techniques, and deployment workflows for production inference.
  • Experience leading large, cross-functional programs across software, systems, architecture, hardware, and product teams in highly technical environments.
  • Track record of building multi-quarter roadmaps, execution cadences, and governance mechanisms that improve predictability across fast-moving engineering organizations.
  • Experience working across open-source and ecosystem-driven environments, including upstream dependencies and release planning.
  • Strong executive presence with demonstrated success communicating program health, risks, tradeoffs, and decisions to senior leadership.
  • Proven ability to influence across matrixed organizations, resolve ambiguity, and drive alignment among teams with competing priorities.
  • Experience managing, mentoring, or scaling TPM teams.

Responsibilities

  • Own the Inference portfolio planning process by translating strategy into a multi-quarter roadmap, quarterly execution plans, and measurable business and engineering outcomes.
  • Establish and run an execution operating model across the Inference organization, including planning reviews, OKRs, dashboards, decision logs, milestone tracking, and risk management mechanisms that drive rigor, transparency, and predictable delivery.
  • Drive end-to-end delivery of large-scale inference capabilities across cross-functional engineering teams, including software, systems, architecture, performance, model enablement, runtime, and platform integration; manage scope, milestones, dependencies, critical path, and release readiness.
  • Partner with engineering and product leadership to align priorities, sequencing, and resource planning across a complex portfolio of inference initiatives spanning platform readiness, model support, serving performance, benchmark readiness, and ecosystem integration.
  • Apply technical judgment to identify and manage architecture-level tradeoffs, technical dependencies, and execution risks across inference workloads, runtimes, software stacks, and deployment environments.
  • Analyze and quantify project risks; develop and maintain risk management plans; and proactively mitigate issues by driving clear owners, timelines, and path-to-green actions.
  • Develop, maintain, and manage program requirements, execution plans, timelines, issues, risks, and challenges; ensure milestones, dependencies, and resources are tracked and escalated appropriately.
  • Lead executive-level program reviews by clearly communicating status, key decisions, risks, dependencies, and resource needs; ensure leadership has accurate visibility into progress, gaps, and path-to-green plans.
  • Drive cross-organizational alignment with internal stakeholders and external ecosystem partners where needed, helping remove blockers and accelerate delivery across upstream and downstream dependencies.
  • Improve operational maturity across the organization by standardizing TPM best practices, governance frameworks, and planning mechanisms that increase accountability, reduce execution friction, and strengthen delivery consistency.

Benefits

  • AMD benefits at a glance.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Director
  • Number of Employees: 5,001-10,000 employees
