Principal Engineer, Inference Service

DigitalOcean, Seattle, WA
$206,000 - $250,000

About The Position

Dive in and do the best work of your career at DigitalOcean. Journey alongside a strong community of top talent who are relentless in their drive to build the simplest scalable cloud. If you have a growth mindset, naturally like to think big and bold, and are energized by the fast-paced environment of a true industry disruptor, you'll find your place here. We value winning together, while learning, having fun, and making a profound difference for the dreamers and builders in the world.

We're seeking an experienced Principal Software Engineer to drive the design, development, and scaling of our Large Language Model (LLM) inference services. As a Principal Software Engineer at DigitalOcean, you will join a dynamic team dedicated to revolutionizing cloud computing and AI. This team will be building a new product that brings our famed DigitalOcean Simplicity to the world of LLM hosting, serving, and optimization. In this role, you will build systems for inference serving of popular open-source / open-weights LLMs as well as custom models, develop novel techniques for optimizing these models, and scale the platform to handle millions of users across the globe.

Requirements

  • 10+ years of experience in software engineering, which should include 2+ years building AI/ML technologies (ideally related to LLM hosting and inference).
  • Enduring interest in distributed systems design, AI/ML, and implementation at scale in the cloud.
  • Deep expertise in cloud computing platforms and modern AI/ML technologies.
  • Experience with modern LLMs, ideally related to hosting, serving, and optimizing such models.
  • Experience with one or more inference engines, such as vLLM, SGLang, or Modular Max, would be a bonus (see the sketch after this list).
  • Experience researching, evaluating, and building with open source technologies.
  • Proficiency in programming languages commonly used in cloud development, such as Python and Go.
  • Experience with NVIDIA and AMD GPU platforms and their associated toolsets (e.g., CUDA and ROCm) for tuning, configuring, and accelerating workloads would be ideal, but is not required.
  • A strong sense of ownership and a drive to figure out and resolve any issues preventing you and your team from delivering value to your customers.
  • An appreciation for process and for developing cross-disciplinary collaboration between engineering, operations, support, and product groups.
  • Familiarity with end-to-end quality best practices and their implementation.
  • Experience coordinating with partner teams across time zones and geographies.
  • Experience with infrastructure as code (IaC) tools like Terraform or Ansible.
  • A passion for coaching and mentoring junior software engineers.
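
For candidates less familiar with the inference engines named above, here is a minimal sketch of offline generation with vLLM's Python API. The model name is an illustrative placeholder for any open-weights checkpoint, and running it assumes vLLM is installed on a machine with a supported GPU; this is an example of the category, not a description of DigitalOcean's stack.

    # Minimal vLLM sketch: load an open-weights model and generate offline.
    # Assumes `pip install vllm` and a CUDA- or ROCm-capable GPU; the model
    # name is an illustrative placeholder.
    from vllm import LLM, SamplingParams

    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
    params = SamplingParams(temperature=0.7, max_tokens=128)

    outputs = llm.generate(["Explain paged attention in one sentence."], params)
    print(outputs[0].outputs[0].text)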

Responsibilities

  • Design and implement an inference platform for serving large language models, optimized for the various GPU platforms it will run on.
  • Develop and shepherd complex AI and cloud engineering projects through the entire product development lifecycle (PDLC): ideation, product definition, experimentation, prototyping, development, testing, release, and operations.
  • Optimize runtime and infrastructure layers of the inference stack for best model performance.
  • Build native cross-platform inference support across NVIDIA and AMD GPUs for a variety of model architectures.
  • Contribute to open source inference engines to make them perform better on DigitalOcean cloud.
  • Build tooling and observability to monitor system health, and develop auto-tuning capabilities.
  • Build benchmarking frameworks to test model-serving performance and guide system and infrastructure tuning efforts (see the sketch after this list).
  • Mentor engineers on inference systems, GPU infrastructure, and distributed systems best practices.
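
To make the benchmarking responsibility concrete, a hedged micro-benchmark sketch follows: it times a single completion against an OpenAI-compatible endpoint (such as the one vLLM's `vllm serve` command exposes) and derives decode throughput from the response's usage field. The URL and model name are assumptions for illustration; a real benchmarking framework would also sweep concurrency, prompt lengths, and sampling settings.

    # Hypothetical micro-benchmark: time one completion request and compute
    # tokens/sec. Endpoint URL and model name are assumed example values.
    import time
    import requests

    URL = "http://localhost:8000/v1/completions"
    payload = {
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "prompt": "Summarize the benefits of continuous batching.",
        "max_tokens": 256,
    }

    start = time.perf_counter()
    resp = requests.post(URL, json=payload, timeout=120)
    resp.raise_for_status()
    elapsed = time.perf_counter() - start

    tokens = resp.json()["usage"]["completion_tokens"]
    print(f"{tokens} tokens in {elapsed:.2f}s ({tokens / elapsed:.1f} tok/s)")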

Benefits

  • A competitive array of benefits to support you, from our Employee Assistance Program to Local Employee Meetups to a flexible time-off policy.
  • Reimbursement for relevant conferences, training, and education.
  • Access to LinkedIn Learning's 10,000+ courses to support continued growth and development.
  • The salary range for this position is $206,000 - $250,000, based on market data, relevant years of experience, and skills.
  • Bonus in addition to base salary; bonus amounts are determined based on company and individual performance.
  • Equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program.