Senior AI Infrastructure Engineer

Analog Devices, Wilmington, MA

About The Position

About Analog Devices

Analog Devices, Inc. (NASDAQ: ADI) is a global semiconductor leader that bridges the physical and digital worlds to enable breakthroughs at the Intelligent Edge. ADI combines analog, digital, and software technologies into solutions that help drive advancements in digitized factories, mobility, and digital healthcare, combat climate change, and reliably connect humans and the world. With revenue of more than $9 billion in FY24 and approximately 24,000 people globally, ADI ensures today's innovators stay Ahead of What's Possible™. Learn more at www.analog.com and on LinkedIn and Twitter (X).

This role is within the global Developer Experience (DevEx) team, specifically the Model Developer Experience (MDX) sub-team, whose mission is to deliver world-class infrastructure and platforms that enable AI builders across ADI to move faster, with confidence, and at scale. You will join a high-performance, mission-driven interdisciplinary team that spans data science, software engineering, platform architecture, cloud infrastructure, and security expertise. We believe in a culture founded on trust, mutual respect, growth mindsets, and an obsession for building extraordinary products with extraordinary people.
Role Summary

As a Senior AI Infrastructure Engineer (individual contributor), you will bring deep technical expertise in building scalable AI infrastructure across diverse deployment environments, and the ability to design elegant solutions that absorb operational complexity on behalf of AI builders across the organization. You will work embedded with data science and ML engineering teams to understand their infrastructure needs at the cutting edge, then translate those learnings into reusable, org-wide architectural patterns and platforms. You'll design and implement critical infrastructure capabilities that span on-prem, hybrid, and cloud environments, from model serving and orchestration to compute optimization, cost efficiency, and governance. You will work at the intersection of research, technical architecture, and developer enablement, translating research and product goals into infrastructure systems that reduce complexity and accelerate innovation.

Requirements

  • Deep expertise in designing and operating AI infrastructure across multiple deployment paradigms (on-premises, hybrid, cloud-native).
  • Proven ability to work embedded with technical teams, understand complex requirements, and translate them into scalable architectural solutions.
  • Strong experience with Kubernetes, container orchestration, and distributed compute frameworks (Ray, Spark, or equivalent) at production scale.
  • Expert-level Infrastructure-as-Code proficiency (Terraform, CDK, or equivalent) with demonstrated ability to build reusable, multi-team infrastructure templates.
  • Deep knowledge of GPU/accelerator resource management, including scheduling, optimization, and cost tracking across heterogeneous hardware.
  • Experience designing model serving infrastructure and inference optimization pipelines for production AI workloads.
  • Strong understanding of modern cloud platforms (AWS, Azure, or equivalent) with hands-on experience building multi-cloud or hybrid strategies.
  • Demonstrated ability to solve complex infrastructure problems through systematic analysis and creative engineering.
  • Strong communication skills and ability to translate technical concepts for both engineering and non-technical audiences.
  • Mentoring orientation with demonstrated success upskilling engineers on infrastructure and platform topics.

Nice To Haves

  • Physical intelligence and industrial AI: experience building or scaling AI infrastructure for robotics, autonomous systems, or industrial perception applications.
  • Familiarity with ROS/ROS2 ecosystems and the infrastructure challenges of deploying ML models in robotic systems.
  • Background in edge AI deployment, including optimization for low-latency inference on resource-constrained devices.
  • Experience designing ML infrastructure that supports rapid iteration and few-shot adaptation in physical systems.
  • Knowledge of heterogeneous compute architectures combining CPUs, GPUs, and specialized processors (NPUs, FPGAs, etc.).
  • Experience with real-time operating systems or hard real-time constraints in distributed systems.
  • Understanding of manufacturing, autonomous vehicle, or healthcare domains and their infrastructure requirements for AI applications.

Responsibilities

AI Developer Experience

  • Partner directly with AI builder teams (data scientists, ML engineers, researchers) as an embedded technical advisor, understanding their infrastructure bottlenecks and platform needs at the point of creation.
  • Translate specific team engagements into generalizable patterns, standards, and architectural guidance that can be adopted across the organization.
  • Drive initiatives that reduce friction in the AI development lifecycle, from experimentation through production, by removing operational and technical barriers for builders.
  • Design and advocate for developer-friendly abstractions and APIs that hide infrastructure complexity while maintaining flexibility for advanced use cases.
  • Collaborate with cross-functional stakeholders to define what "excellent developer experience" means for AI infrastructure, then measure and iterate.
  • Contribute to org-wide standards for AI governance, model versioning, experiment tracking, and deployment workflows that balance flexibility with reliability.

On-Prem, Hybrid, and Cloud AI Infrastructure Engineering

  • Design and optimize AI infrastructure strategies that span heterogeneous environments, including on-premises GPU clusters, hybrid cloud-edge deployments, and cloud-native architectures, ensuring a seamless developer experience across all.
  • Architect compute orchestration and scheduling solutions (Kubernetes, Ray, or equivalent) that efficiently allocate resources across multiple environments and workload types.
  • Own infrastructure for model serving, inference optimization, and real-time inference pipelines supporting low-latency, edge-deployed AI models.
  • Define and implement cost optimization strategies across cloud and on-prem resources, including resource allocation, auto-scaling policies, and workload consolidation.
  • Build reusable Infrastructure-as-Code frameworks and tooling that other teams can adopt to provision and manage AI workloads consistently across environments.
  • Establish observability and monitoring strategies for AI infrastructure, including resource utilization, cost tracking, and performance metrics that enable proactive problem-solving.
  • Drive security and compliance standards for AI infrastructure, ensuring data residency, access control, and auditability across deployment environments.
  • Mentor engineers on infrastructure best practices, distributed systems concepts, and optimization techniques that improve platform reliability and developer productivity.

Benefits

  • Medical, vision, and dental coverage
  • 401(k)
  • Paid vacation, holidays, and sick time


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000 employees
