Anthropic PBC · Posted 5 months ago
$320,000 - $485,000 per year
Full-time • Senior
San Francisco, CA

Anthropic is seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. In this role, you'll lead efforts to accelerate and optimize the virtualization stack and VM workloads that power our AI infrastructure. Your deep expertise in low-level systems programming, kernel optimization, and virtualization technologies will be crucial to scaling our compute infrastructure efficiently and reliably for training and serving frontier AI models.

Responsibilities:
  • Lead optimization initiatives for our virtualization stack, improving performance, reliability, and efficiency of our VM environments
  • Design and implement custom kernel modules, drivers, and system-level components to enhance our compute infrastructure
  • Troubleshoot complex performance bottlenecks in virtualized environments and develop solutions
  • Collaborate with cloud engineering teams to optimize interactions between our workloads and underlying hardware
  • Develop tooling for monitoring and improving virtualization performance
  • Work with our ML engineers to understand their computational needs and optimize our systems accordingly
  • Contribute to the design and implementation of our next-generation compute infrastructure
  • Mentor other engineers on low-level systems programming and Linux kernel internals
  • Partner closely with cloud providers to influence hardware and platform features for AI workloads

You may be a good fit if you:
  • Have 5+ years of experience with Linux kernel development, system programming, or related low-level software engineering
  • Possess deep understanding of virtualization technologies (KVM, Xen, QEMU, etc.) and their performance characteristics
  • Have experience optimizing system performance for compute-intensive workloads
  • Are familiar with modern CPU architectures and memory systems
  • Have strong C/C++ programming skills and experience with systems languages like Rust
  • Understand the intricacies of Linux resource management, scheduling, and memory management
  • Have experience profiling and debugging complex system-level performance issues
  • Are comfortable diving into unfamiliar codebases and technical domains
  • Are results-oriented, with a bias towards practical solutions and measurable impact
  • Care about the societal impacts of AI and are passionate about building safe, reliable systems

Strong candidates may also have experience with:
  • GPU virtualization and acceleration technologies
  • Cloud infrastructure at scale (AWS, GCP)
  • Container technologies and their underlying implementation (Docker, containerd, runc, OCI)
  • eBPF programming and kernel tracing tools
  • OS-level security hardening and isolation techniques
  • Developing custom scheduling algorithms for specialized workloads
  • Performance optimization for ML/AI specific workloads
  • Network stack optimization and high-performance networking
  • TPUs, custom ASICs, or other ML accelerators

Logistics:
  • Visa sponsorship available
  • Hybrid work policy requiring at least 25% in-office presence