The Inference Infrastructure team is the creator and open-source maintainer of AIBrix, a Kubernetes-native control plane for large-scale LLM inference. We are part of ByteDance's Core Compute Infrastructure organization, which designs and operates the platforms powering microservices, big data, distributed storage, machine learning training and inference, and edge computing across multi-cloud and global datacenters.

With ByteDance's rapidly growing businesses and a global fleet of machines running hundreds of millions of containers daily, we are building the next generation of cloud-native, GPU-optimized orchestration systems. Our mission is to deliver infrastructure that is highly performant, massively scalable, cost-efficient, and easy to use, enabling both internal and external developers to bring AI workloads from research to production at scale.

We are expanding our focus on LLM inference infrastructure to support new AI workloads, and we are looking for engineers passionate about cloud-native systems, scheduling, and GPU acceleration. You'll work in a hyper-scale environment, collaborate with world-class engineers, contribute to the open-source community, and help shape the future of AI inference infrastructure globally.

We are looking for talented individuals to join our team in 2026. As a graduate, you will have opportunities to pursue bold ideas, tackle complex challenges, and unlock limitless growth. Launch your career where inspiration is infinite at ByteDance.

Successful candidates must be able to commit to an onboarding date by the end of 2026. Please state your availability and graduation date clearly in your resume.
Career Level: Entry Level
Industry: Publishing Industries
Education Level: Bachelor's degree
Number of Employees: 5,001-10,000 employees