About The Position

As a Software Engineer on the Apple Maps team, you will lead the design and implementation of large-scale, high-performance inference services that support a wide range of models used across Maps, including deep learning models and large language models. You will collaborate closely with research and product partners to bring models into production, with a strong focus on efficiency, reliability, and scalability. Your responsibilities span the full server stack: onboarding new use cases, optimizing inference across heterogeneous accelerated compute hardware, deploying services on Kubernetes, building and integrating inference engines and control-plane components, and ensuring seamless integration with Maps infrastructure.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience).
  • 5+ years in software engineering focused on ML inference, GPU acceleration, and large-scale systems.
  • Expertise in deploying and optimizing LLMs for high-performance, production-scale inference.
  • Proficiency in Python, Java, or C++.
  • Experience with deep learning frameworks such as PyTorch, TensorFlow, and Hugging Face Transformers.
  • Experience with model serving tools (e.g., NVIDIA Triton, TensorFlow Serving, vLLM).
  • Experience with optimization techniques such as attention fusion, quantization, and speculative decoding.
  • Skilled in GPU optimization (e.g., CUDA, TensorRT-LLM, cuDNN) to accelerate inference tasks.
  • Skilled in cloud technologies such as Kubernetes, Ingress, and HAProxy for scalable deployments.

Nice To Haves

  • Master’s or PhD in Computer Science, Machine Learning, or a related field.
  • Understanding of ML Ops practices, continuous integration, and deployment pipelines for machine learning models.
  • Familiarity with model distillation, low-rank approximations, and other model compression techniques for reducing memory footprint and improving inference speed.
  • Strong understanding of distributed systems, multi-GPU/multi-node parallelism, and system-level optimization for large-scale inference.