Siri, Eval Architect Engineer

Apple, Cupertino, CA

About The Position

Do you want to define the architecture of the systems that measure Siri's quality across every platform, every locale, and every model update? Apple's Agentic Eval Engineering organization is building the evaluation infrastructure that determines how Siri's quality is measured, trusted, and improved, spanning large-scale automation on real devices, model-in-the-loop simulation, AI-powered auto-evaluators, and closed-loop agentic fix pipelines.

We are seeking a senior Eval Systems Architect to own the end-to-end technical vision and system architecture across our entire evaluation stack, ensuring that we build toward a coherent, scalable, and trustworthy system. As the Eval Systems Architect, you will own the technical architecture of Siri's evaluation infrastructure: a system spanning real-device automation, simulated product evaluation, AI-powered auto-evaluators, developer workflows, and observability tooling. You will work across the Agentic Eval Engineering organization and the broader Siri organization to ensure architectural coherence, define interfaces and contracts between systems, and drive the technical roadmap for the evaluation platform as a whole.

This is not a role where you design in isolation. You will embed with teams, understand their systems deeply, and make architectural decisions that balance local team autonomy with system-wide consistency. You will lead a first-principles review of existing evaluation tooling and infrastructure, identifying gaps, redundancies, and opportunities to simplify or unify. You will represent the technical perspective in leadership discussions, influence build-vs-integrate decisions, and set the standards that enable teams to move fast without creating fragmentation.

Your work will directly influence how Apple evaluates its most important AI products. Your architectural decisions will impact the speed, confidence, and quality with which Siri ships to billions of users.

Requirements

  • BS/MS/PhD in Computer Science, Software Engineering, or a related field.
  • 10+ years of software engineering experience, with at least 5 years in a systems architecture, staff/principal engineer, or technical leadership role.
  • Proven track record of designing and shipping large-scale distributed systems serving multiple teams or organizations.
  • Deep expertise in system design: API design, service architecture, data flow modeling, interface contracts, and schema evolution.
  • Solid software engineering fundamentals with production experience, including CI/CD, testing strategies, system monitoring, debugging complex multi-service systems, and code maintainability.
  • Demonstrated expertise in using AI-assisted software development workflows to accelerate engineering while maintaining code quality.

Nice To Haves

  • Experience architecting evaluation, testing, or quality infrastructure at scale — particularly for AI/ML products where quality is non-binary and continuous.
  • Experience with building LLM applications, LLM-as-judge evaluation frameworks, and offline evaluation pipelines.
  • Familiarity with MLOps principles for model lifecycle management and training data pipelines.
  • Experience with VM orchestration, fleet management, or large-scale job scheduling systems.
  • Knowledge of simulation and service virtualization techniques for complex software stacks.
  • Experience with observability platforms (metrics, logging, tracing, dashboarding) and defining SLOs for platform reliability.
  • Experience with agentic AI systems, including tool-use, multi-step reasoning, and human-in-the-loop workflows.
  • Track record of leading cross-team architectural initiatives (e.g., platform migrations, API unification, system consolidation) in organizations with 50+ engineers.

Responsibilities

  • Own the technical architecture of Siri's evaluation infrastructure
  • Work across the Agentic Eval Engineering organization and the broader Siri organization to ensure architectural coherence, define interfaces and contracts between systems, and drive the technical roadmap for the evaluation platform as a whole
  • Embed with teams, understand their systems deeply, and make architectural decisions that balance local team autonomy with system-wide consistency
  • Lead a first-principles review of existing evaluation tooling and infrastructure, identifying gaps, redundancies, and opportunities to simplify or unify
  • Represent the technical perspective in leadership discussions, influence build-vs-integrate decisions, and set the standards that enable teams to move fast without creating fragmentation


What This Job Offers

Job Type: Full-time
Career Level: Senior
Number of Employees: 5,001-10,000
