About The Position

The Apple Services Engineering team is looking for a Software Developer in Test for our Tools & Automation team, with expertise in building developer tools, test frameworks, and libraries, and strong experience leading cross-functional projects on a fast-moving team. This role requires a passion for quality engineering as well as solid software engineering skills to deliver high-quality services to Apple's customers. You will own the testing strategy for end-to-end ML pipelines, data flows, and AI platform services; guide the selection and integration of tools that support scalable test automation, data validation, and CI/CD in ML workflows; and partner with AI/ML engineers, MLOps, and data science teams to foster a culture of built-in quality and continuous testing throughout the SDLC.

Requirements

  • Bachelor’s degree with a minimum of 5 years of experience, or Master’s degree with a minimum of 3 years, in software development and/or test automation, including at least 3 years leading work on complex distributed systems.
  • BS or MS in Computer Science or a related field, or equivalent industry experience.
  • Proficiency in Java, Python, or similar programming languages.
  • Experience with test frameworks and tools such as PyTest, JUnit, or equivalent.
  • Experience designing and implementing testing frameworks for distributed systems, machine learning pipelines, and service-layer testing, including backend APIs, data processing, and infrastructure components.
  • Experience planning and executing validation of REST and gRPC APIs.
  • Passion for quality engineering and for delivering creative approaches to testing machine learning algorithms and large-scale distributed data systems.
  • Creative problem solving with attention to detail.
  • Impeccable communication skills and the ability to collaborate effectively with multiple stakeholders across organizations and project timelines.
  • Highly organized, creative, self-motivated, and passionate about achieving results.
  • Excited about the possibilities unlocked by AI and ML technologies.
  • Advocacy for a positive customer experience.

Nice To Haves

  • Knowledge of big data systems such as Apache Spark is a plus.
  • Experience testing or working with AI/ML systems or platforms, including ML model training, data pipelines, and algorithms, is a plus.
  • Adept at leveraging technology to solve problems, including building tooling, automating tasks, and developing supporting systems to streamline development workflows.

Responsibilities

  • Guide tool analysis, create proof-of-concept models, and make recommendations to support the tool selection process.
  • Analyze, recommend, and implement best practices for coding guidelines, design principles, process workflow, quality gates, and the CI/CD process.
  • Design, implement, and maintain automation tests, testing and development frameworks, and tooling to support backend services as well as machine learning models and pipelines.
  • Partner with cross-functional teams to define and drive quality assurance best practices, techniques, and methodologies that enhance productivity and quality.
  • Identify process and architecture inefficiencies and help drive improvements while reducing risk, fostering a culture of built-in quality and continuous testing throughout the SDLC.
  • Mentor junior team members.
  • Own and define the testing strategy for end-to-end ML pipelines, data flows, and AI platform services.
  • Guide the selection and integration of tools and platforms that support scalable test automation, data validation, continuous training (CT), and continuous integration/continuous delivery (CI/CD) in ML workflows.
  • Partner closely with AI/ML engineers, MLOps, and data science teams to ensure testability, model governance, and validation of ML outputs.
  • Define and enforce standards for quality in ML systems, including unit, integration, regression, and fairness testing.
  • Define and track quality metrics such as test coverage for ML pipelines, test flakiness, and pipeline reliability.
  • Influence ML Engineering and Platform teams to adopt a quality-driven approach in their design and implementation.
  • Explore new tools and research in AI quality assurance and ML testing frameworks, and integrate them where beneficial.