About The Position

Marcom is the creatively led global team that oversees Apple's consumer-facing marketing. We ensure the flawless development and execution of world-class communications across all media and platforms.

The Marcom Quality Engineering team is seeking a Software Development Engineer in Test to lead test automation and quality initiatives for web applications and APIs. In this role, you will design scalable, intelligent automation frameworks, influence architectural decisions, and leverage AI/LLM-powered tools for smarter test generation, faster failure analysis, and actionable quality insights, delivering reliable, high-impact releases to millions of users worldwide.

This role sits at the intersection of test engineering, AI-assisted quality tooling, and operational systems design. You will architect automation frameworks that scale across large, content-rich applications, collaborate on LLM-powered quality tools, and build the knowledge and accountability systems that keep Apple's engineering teams in control of quality, even across a distributed, agency-driven delivery model. From hands-on test automation to vendor audits and data-driven operational oversight, you will be the connective layer between technical quality and scalable program execution.

Requirements

  • Bachelor’s degree in Computer Science, a related technical field, or 5 years of relevant industry experience.
  • Proficiency in Node.js/TypeScript, with hands-on experience building or maintaining web test automation and related tooling.
  • Experience testing web applications using modern automation frameworks such as Playwright, WebdriverIO, or XCUITest, including practices for scalable, reliable, and maintainable test automation.
  • Experience testing APIs, including RESTful and/or GraphQL services, with automated frameworks and an understanding of API design principles.
  • Experience working with CI/test infrastructure, including improving reliability and feedback speed, or operating CI runners and executors with tools such as GitHub Actions, Jenkins, or Harness.
  • Experience working with vendor teams, including contributing to shared processes and onboarding standards.

Nice To Haves

  • Deep experience with Playwright or WebdriverIO, including best practices for browser automation, fixtures, parallelization, and network interception.
  • Familiarity with AI-assisted quality techniques, such as using LLM-enabled tools for test generation, failure analysis, triage, or supporting CI/CD quality gates.
  • Experience improving testability of features by collaborating with software engineers and making deliberate choices around mocking, dependency management, and validating component and service interfaces.
  • Experience with cross-platform automation (web, native, APIs) and techniques to reduce test flakiness, improve time-to-signal, and increase result reliability.
  • Familiarity with deterministic test data strategies, including seeding known records, masked production subsets, synthetic or golden datasets, and versioning.
  • Experience establishing documentation or knowledge management standards across distributed or multi-vendor teams — including testing strategies, onboarding materials, architectural decisions, and known issues — in shared, accessible systems.
  • Experience contributing to or operating within a staff augmentation or vendor rotation model, including designing handoff processes, defining interoperability standards, or building tooling that reduces dependency on any single team or individual.
  • Familiarity with agentic or LLM-powered workflows applied to operational use cases — such as surfacing program health, tracking delivery status, or querying structured knowledge systems — beyond test generation alone.
  • Strong communication and influence skills, with the ability to define standards, align external teams around shared processes, and explain technical systems and tradeoffs to both technical and non-technical audiences.

Responsibilities

  • Lead test automation and quality initiatives for web applications and APIs.
  • Design scalable, intelligent automation frameworks.
  • Influence architectural decisions.
  • Leverage AI/LLM-powered tools for smarter test generation, faster failure analysis, and actionable quality insights.
  • Deliver reliable, high-impact releases to millions of users worldwide.
  • Architect automation frameworks that scale across large, content-rich applications.
  • Collaborate on LLM-powered quality tools.
  • Build knowledge and accountability systems that keep Apple's engineering teams in control of quality.
  • Conduct hands-on test automation.
  • Perform vendor audits.
  • Provide data-driven operational oversight.
  • Act as the connective layer between technical quality and scalable program execution.