Sr. Staff Software Engineer, Systems Infrastructure

LinkedIn | Mountain View, CA
$198,000 - $326,000 | Hybrid

About The Position

This role will be based in Sunnyvale or Mountain View, CA. At LinkedIn, our approach to flexible work is centered on trust and optimized for culture, connection, clarity, and the evolving needs of our business. The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team.

Join LinkedIn's Serving Foundations team and work on one of the most critical layers of our AI platform: the systems that power large-scale model inference across all major AI use cases. This team sits at the center of LinkedIn's AI stack and is responsible for making our models faster, more efficient, and more scalable at production scale.

This is a deeply technical, systems-focused position at the intersection of machine learning, compilers, and hardware. You will work across the full stack, from model graphs and optimization techniques to runtime systems, kernels, and GPU execution, to push the limits of performance and efficiency. You will lead efforts to optimize large-scale inference systems serving billions of requests, driving improvements in latency, throughput, and cost. This includes advancing GPU utilization, designing custom kernel and operator optimizations, improving model efficiency through quantization and compression, and shaping how models are compiled and executed in production environments.

As a Sr. Staff Engineer, you will operate with a high degree of autonomy and influence, identifying bottlenecks across the system and driving end-to-end solutions across model, runtime, and infrastructure layers. Your work will directly impact how AI systems perform at LinkedIn scale and will help define the future of our AI serving platform.

Requirements

  • BS/BA in Computer Science or related technical field or equivalent experience
  • 8+ years of experience in software engineering with a focus on systems and performance
  • Experience building or optimizing large-scale production ML systems
  • Experience programming in Python, C++, or similar languages
  • Experience working with distributed systems and large-scale infrastructure

Nice To Haves

  • Deep expertise in GPU programming and optimization (CUDA, Triton, or similar)
  • Experience with model optimization techniques such as quantization, pruning, and compression
  • Experience with ML compilers or runtimes such as TensorRT, XLA, TVM, TorchInductor, or similar
  • Hands-on experience with kernel-level or operator-level optimization
  • Experience building or scaling high-performance inference systems, including LLM serving
  • Understanding of latency, throughput, and cost tradeoffs in production ML systems
  • Background in high-performance computing or hardware-aware optimization
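The last bullet mentions latency, throughput, and cost tradeoffs in production ML systems. As a hedged illustration only (a toy queueing model with made-up numbers, not a description of LinkedIn's serving stack), static batching is a classic example of that tradeoff: larger batches amortize fixed per-batch overhead and raise throughput, but requests wait longer for a batch to fill:

```python
# Illustrative only: a toy model of the static-batching tradeoff in an
# inference server. All names and numbers here are hypothetical.

def batched_serving_stats(batch_size, per_request_ms=0.5,
                          fixed_overhead_ms=8.0, arrival_rate_qps=100.0):
    """Estimate throughput and worst-case latency for static batching.

    Assumes each batch pays a fixed overhead (kernel launch, scheduling)
    plus a per-request compute cost, and that execution starts only once
    the batch is full at the given arrival rate.
    """
    # Accelerator time to execute one batch.
    batch_exec_ms = fixed_overhead_ms + per_request_ms * batch_size
    # Worst-case wait for the batch to fill (first request waits longest).
    fill_wait_ms = (batch_size - 1) / arrival_rate_qps * 1000.0
    # Requests completed per second of accelerator time.
    throughput_qps = batch_size / (batch_exec_ms / 1000.0)
    # Worst-case end-to-end latency for the first request in the batch.
    worst_latency_ms = fill_wait_ms + batch_exec_ms
    return throughput_qps, worst_latency_ms

for bs in (1, 8, 32):
    qps, lat = batched_serving_stats(bs)
    print(f"batch={bs:3d}  throughput={qps:7.1f} qps  worst latency={lat:6.1f} ms")
```

With these hypothetical constants, batch size 32 delivers roughly 11x the throughput of unbatched serving but with a worst-case latency near 40x higher, which is why production systems tune batch size (or use continuous batching) per workload.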

Responsibilities

  • Lead end-to-end optimization of large-scale AI inference systems across model, runtime, and hardware layers
  • Design and implement GPU-efficient solutions, including kernel optimization, operator fusion, and memory optimization
  • Apply model optimization techniques such as quantization, pruning, and mixed precision to improve performance and efficiency
  • Optimize model execution using ML compilers and runtimes (e.g., TensorRT, XLA, TVM, Triton)
  • Build and scale low-latency, high-throughput inference systems for both online and offline workloads
  • Identify and resolve bottlenecks across distributed systems and model serving pipelines
  • Set technical direction and influence best practices for AI performance and efficiency across teams
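The responsibilities above mention quantization and mixed precision. Purely as an illustration of the core idea (a minimal symmetric int8 round-trip in plain Python, not LinkedIn's production approach, which would typically use per-channel scales and calibrated activation ranges), quantization maps float weights to a small integer range at the cost of a bounded reconstruction error:

```python
# Illustrative only: minimal post-training symmetric int8 quantization.
# Real inference stacks use per-channel scales, calibration data, and
# hardware int8 kernels; this sketch shows just the numeric round-trip.

def quantize_int8(weights):
    """Map float weights to int8 values with one symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.02, -1.5, 0.73, 1.27, -0.004]
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
# Round-to-nearest bounds the error by half the quantization step.
assert max_err <= scale / 2 + 1e-12
```

The engineering work the posting describes lives in the gap this sketch ignores: choosing scales per channel or per group, calibrating ranges on real traffic, and mapping the quantized graph onto fast int8 GPU kernels without regressing model quality.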