About The Position

Join the Siri Runtime Platform team and help shape the future of Apple Intelligence. We are engineers, scientists, and problem solvers working to bring smarter, faster, and more natural interactions to millions of users worldwide. As a Software Engineer on our team, you will build the on-device runtime foundation that powers Siri, enabling seamless, intelligent, and secure user experiences. This is an opportunity to collaborate with a passionate, diverse group of cross-functional partners to deliver the next generation of innovative features across the Apple ecosystem.

Description

In this role, you will be at the forefront of integrating Apple Intelligence into Siri. You will drive the design and implementation of new features, optimize system performance for low-latency interaction, and help integrate intelligent experiences into people's daily lives. You will work closely with engineers across Siri and the wider Apple organization to create scalable, efficient, and user-focused solutions that leverage the latest advancements in Large Language Models (LLMs) and on-device intelligence.

Requirements

  • BS or MS degree in Computer Science, Electrical Engineering, or a related technical field, or equivalent practical experience.
  • Strong programming proficiency in languages such as Swift, C++, or Objective-C, and comfort using them for technical interviews.
  • Foundational knowledge in computer science, including data structures, algorithms, system design, concurrency, and object-oriented programming.
  • Familiarity with modern development tools and practices (e.g., Git, CI/CD, code review, automated testing).
  • Experience utilizing GenAI and LLM-based tools to accelerate engineering tasks (e.g., code generation, testing, or system analysis).
  • Demonstrated ability to learn new technologies and development environments quickly.
  • Strong verbal and written communication skills, with a collaborative approach to problem-solving.

Nice To Haves

  • Proficiency with Swift and Xcode, with experience building applications for Apple platforms (personal, academic, or professional).
  • Exposure to on-device development workflows (e.g., debugging tools, memory profiling, performance optimization).
  • Experience working on voice assistants or conversational AI systems leveraging Large Language Models (LLMs).
  • Experience working on large-scale or multi-team software projects (including open-source or research projects).

Responsibilities

  • Drive the design and implementation of new features.
  • Optimize system performance for low-latency interaction.
  • Integrate intelligent experiences into people's daily lives.
  • Create scalable, efficient, and user-focused solutions that leverage the latest advancements in Large Language Models (LLMs) and on-device intelligence.