Inference Software Engineer - Collectives

Etched · San Jose, CA · Onsite

About The Position

Etched is building the world’s first AI inference system purpose-built for transformers, delivering over 10x higher performance and dramatically lower cost and latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep, parallel chain-of-thought reasoning agents. Backed by hundreds of millions from top-tier investors and staffed by leading engineers, Etched is redefining the infrastructure layer for the fastest-growing industry in history.

Etched’s Inference SW team maps models optimally onto Sohu’s dataflow architecture and serves requests across multiple chips, hosts, and racks. We are seeking a highly skilled and motivated engineer to formalize and optimize our collectives (e.g., Send/Receive, AllReduce, Broadcast). You’ll build software enabling frontier inference performance to satisfy exponentially growing serving demand. Your core focus will be working across systems and research to realize Mixture-of-Experts (MoE) architectures on Sohu, and you will play a key role in scaling out Sohu’s nascent runtime, with an emphasis on collectives.
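
For readers less familiar with the collectives named above, the sketch below shows the classic ring AllReduce schedule (a reduce-scatter phase followed by an all-gather phase) in Rust, with ranks simulated as threads and links as unbounded channels. The rank count, buffer shape, and every name here are illustrative assumptions, not Sohu’s runtime or API.

    // Minimal single-process sketch of ring AllReduce. Ranks are threads;
    // each rank sends to (rank + 1) % N and receives from its predecessor.
    use std::sync::mpsc::{channel, Receiver, Sender};
    use std::thread;

    const RANKS: usize = 4; // hypothetical world size

    fn ring_allreduce(
        rank: usize,
        mut buf: Vec<f32>,
        tx: Sender<Vec<f32>>,
        rx: Receiver<Vec<f32>>,
    ) -> Vec<f32> {
        let n = RANKS;
        let chunk = buf.len() / n; // assume divisibility for brevity
        // Phase 1: reduce-scatter. At step s, rank r sends chunk (r - s) mod n
        // and sums the incoming chunk (r - s - 1) mod n into its buffer.
        // After n-1 steps, rank r owns the fully reduced chunk (r + 1) mod n.
        for step in 0..n - 1 {
            let send_idx = (rank + n - step) % n;
            let recv_idx = (rank + n - step - 1) % n;
            tx.send(buf[send_idx * chunk..(send_idx + 1) * chunk].to_vec()).unwrap();
            let incoming = rx.recv().unwrap();
            for (dst, src) in buf[recv_idx * chunk..(recv_idx + 1) * chunk]
                .iter_mut()
                .zip(incoming)
            {
                *dst += src; // the reduction op (sum)
            }
        }
        // Phase 2: all-gather. Circulate the reduced chunks so every rank
        // ends up holding the full reduced buffer.
        for step in 0..n - 1 {
            let send_idx = (rank + 1 + n - step) % n;
            let recv_idx = (rank + n - step) % n;
            tx.send(buf[send_idx * chunk..(send_idx + 1) * chunk].to_vec()).unwrap();
            let incoming = rx.recv().unwrap();
            buf[recv_idx * chunk..(recv_idx + 1) * chunk].copy_from_slice(&incoming);
        }
        buf
    }

    fn main() {
        // One channel per rank; rank r receives on its own channel and
        // sends into its successor's channel, forming a ring.
        let (mut senders, mut receivers) = (Vec::new(), Vec::new());
        for _ in 0..RANKS {
            let (tx, rx) = channel::<Vec<f32>>();
            senders.push(tx);
            receivers.push(rx);
        }
        let mut handles = Vec::new();
        for rank in 0..RANKS {
            let tx = senders[(rank + 1) % RANKS].clone();
            let rx = receivers.remove(0); // receivers stay in rank order
            handles.push(thread::spawn(move || {
                let buf = vec![rank as f32; RANKS * 2]; // each rank contributes its id
                ring_allreduce(rank, buf, tx, rx)
            }));
        }
        for h in handles {
            let out = h.join().unwrap();
            // Every rank should hold the sum 0 + 1 + 2 + 3 = 6.0 in every slot.
            assert!(out.iter().all(|&x| (x - 6.0).abs() < 1e-6));
        }
        println!("ring all-reduce OK");
    }

The same two-phase schedule underlies bandwidth-optimal implementations such as NCCL’s ring algorithm; on real hardware the channels become links like NVLink or InfiniBand, and chunks are sized to pipeline across them.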

Requirements

  • Strong proficiency in Rust and/or C++; familiarity with PyTorch and/or JAX
  • Experience designing or optimizing collectives (e.g., NCCL, MPI, or XLA collectives)
  • Strong systems knowledge, including Linux internals, accelerator architectures (e.g., GPUs, TPUs), high-speed interconnects (e.g., NVLink, InfiniBand), and RDMA
  • Solid understanding of distributed systems concepts, algorithms, and challenges, including consensus protocols, consistency models, and communication patterns
  • Experience analyzing performance traces and logs from distributed systems and ML workloads
  • A knack for designing user-facing interfaces and libraries, and an eye for the elusive optimum between performance and usability

Nice To Haves

  • Experience with large language model architectures, particularly Mixture-of-Experts (MoE)
  • Familiarity with network simulation techniques
  • Experience developing low-latency, high-performance applications using both kernel-level and user-space networking stacks
  • Experience porting applications to non-standard or accelerator hardware platforms
  • Contributions to runtime systems with complex, well-documented interfaces, such as distributed storage systems or machine learning runtimes
  • Experience building applications with extensive SIMD (Single Instruction, Multiple Data) optimizations for performance-critical paths

Responsibilities

  • Formalize and optimize our collectives (e.g., Send/Receive, AllReduce, Broadcast)
  • Collaborate across systems and research teams to bring MoE architectures to Sohu’s runtime
  • Optimize expert routing and communication layers using Sohu’s collectives (see the dispatch sketch after this list)
  • Contribute to scaling and enhancing Sohu’s runtime, including multi-node inference, intra-node execution, state management, and robust error handling
  • Develop tools for performance profiling and debugging, identifying bottlenecks and correctness issues
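
The expert-routing responsibility above typically reduces to an all-to-all-style exchange: each rank buckets its tokens by destination expert, exchanges per-destination counts, then exchanges the token payloads. The sketch below covers only the local dispatch-plan step with top-1 routing; all names and shapes are illustrative assumptions, not Sohu’s interfaces.

    // Hypothetical top-1 MoE dispatch plan: bucket tokens by the expert
    // with the highest router score, producing the send layout an
    // all-to-all collective would consume.

    /// For each token, pick the argmax expert from its (non-empty) router scores.
    fn top1_route(scores: &[Vec<f32>]) -> Vec<usize> {
        scores
            .iter()
            .map(|s| {
                s.iter()
                    .enumerate()
                    .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
                    .map(|(idx, _)| idx)
                    .unwrap()
            })
            .collect()
    }

    /// Group token indices by destination expert: the per-expert send lists
    /// whose lengths become the send counts of an all-to-all.
    fn dispatch_plan(assignments: &[usize], num_experts: usize) -> Vec<Vec<usize>> {
        let mut buckets = vec![Vec::new(); num_experts];
        for (token, &expert) in assignments.iter().enumerate() {
            buckets[expert].push(token);
        }
        buckets
    }

    fn main() {
        // Four tokens, two experts; scores are made up for illustration.
        let scores = vec![
            vec![0.9, 0.1],
            vec![0.2, 0.8],
            vec![0.6, 0.4],
            vec![0.3, 0.7],
        ];
        let assignments = top1_route(&scores);
        let plan = dispatch_plan(&assignments, 2);
        // Expert 0 gets tokens [0, 2]; expert 1 gets tokens [1, 3].
        println!("assignments = {assignments:?}, plan = {plan:?}");
    }

In a multi-rank setting, the bucket lengths would become the send counts fed to an all-to-all collective, and a mirrored plan on the receive side would scatter expert outputs back into token order.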

Benefits

  • Full medical, dental, and vision packages, with generous premium coverage
  • Housing subsidy of $2,000/month for those living within walking distance of the office
  • Daily lunch and dinner in our office
  • Relocation support for those moving to West San Jose

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 51-100
