About The Position

The Red Hat Performance and Scale Engineering team is seeking a Senior Performance Engineer to join the Performance and Scale for AI Platforms (PSAP) team. In this role, you will help drive the performance and scalability of distributed inference for Large Language Models (LLMs) as part of the llm-d open source project.

Serving modern LLMs in production requires distributing models, computation, and requests across specialized hardware accelerators and multi-node environments. You will characterize, model, and optimize these systems to deliver industry-leading throughput, latency, and cost efficiency across Red Hat's AI platforms.

We are looking for an engineer who is curious, adaptable, and excited to work at the intersection of distributed systems, performance engineering, and AI. You will join a highly collaborative, open source driven team focused on advancing performance across Red Hat's product and cloud services portfolio. At Red Hat, open source principles guide how we build and innovate. We encourage teams to thoughtfully leverage AI to improve workflows, reduce complexity, and unlock higher-impact work.

Requirements

  • 5+ years of overall software engineering experience, including at least 3 years focused on performance engineering or systems-level development
  • Strong understanding of operating systems and distributed systems
  • Foundational knowledge of AI and LLM inference workflows
  • Proficiency in Python for data and machine learning workflows, along with strong Linux and Bash skills
  • Excellent communication skills, with the ability to translate performance data into clear business and customer value
  • Passion for and commitment to open source principles

Nice To Haves

  • Master’s or PhD in Computer Science, AI, or a related field
  • Experience contributing to open source projects or leading community initiatives
  • Hands-on experience with Kubernetes or OpenShift
  • Familiarity with performance and observability tools such as perf, eBPF tools, Nsight Systems, and PyTorch Profiler
  • Experience with modern LLM inference stacks such as vLLM, TensorRT LLM, Hugging Face TGI, and Triton Inference Server

Responsibilities

  • Define and track key performance indicators (KPIs) and service level objectives (SLOs) for large-scale, distributed LLM inference services in Kubernetes/OpenShift
  • Participate in the performance roadmap for distributed inference, including multi-node and multi-GPU scaling studies, interconnect performance analysis, and competitive benchmarking
  • Formulate test plans and execute performance benchmarks to characterize systems, drive improvements, and detect regressions through data analysis and visualization
  • Develop and maintain tools, scripts, and automated solutions that streamline performance benchmarking tasks
  • Collaborate with cross-functional engineering teams to identify and address performance issues
  • Partner with DevOps to integrate performance gates into GitHub Actions/OpenShift Pipelines
  • Explore and experiment with emerging AI technologies relevant to software development, proactively identifying opportunities to incorporate new AI capabilities into existing workflows and tooling
  • Triage field and customer escalations related to performance, distilling findings into upstream issues and product backlog items
  • Publish results, recommendations, and best practices through internal reports, presentations, external blogs, and official documentation
  • Represent the team at internal and external conferences, presenting key findings and strategies

Benefits

  • Comprehensive medical, dental, and vision coverage
  • Flexible Spending Account - healthcare and dependent care
  • Health Savings Account - high deductible medical plan
  • Retirement 401(k) with employer match
  • Paid time off and holidays
  • Paid parental leave plans for all new parents
  • Leave benefits including disability, paid family medical leave, and paid military leave
  • Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!