The Infrastructure System Lab is a hybrid research and engineering group focused on building next-generation AI-native data infrastructure. Positioned at the intersection of databases, large-scale systems, and AI, the team leads innovation in areas such as vector and multi-modal databases, infrastructure optimization through machine learning, and LLM-based tooling like NL2SQL and NL2Chart. It also develops high-performance cache systems, including multi-engine key-value stores and KV caches for LLM inference. The team thrives on collaboration, with researchers and engineers working closely to take ideas from paper to prototype to production. Its work supports key products used by millions and is regularly published and deployed at scale.

We are seeking a systems researcher or engineer with deep expertise in large-scale distributed storage and caching infrastructure to design and maintain a high-performance KV cache layer for large language model (LLM) inference. This role focuses on improving latency, throughput, and cost-efficiency in transformer-based model serving by optimizing the reuse of attention key-value states and prompt embeddings.

You'll work on cutting-edge AI systems problems with real-world impact, alongside a world-class team. The role offers opportunities to publish, contribute to open source, attend top conferences, and enjoy competitive compensation, generous research resources, and an innovation-driven culture.
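To make the KV-cache-reuse idea above concrete, here is a minimal, purely illustrative Python sketch of a prefix-keyed cache: requests that share a prompt prefix can reuse the attention key/value states already computed for that prefix, so only the remaining tokens need a fresh prefill pass. All names (PrefixKVCache, KVEntry, lookup, insert) are hypothetical; a production system would add paging, eviction policy, multi-tenancy, and GPU memory management.

```python
# Illustrative sketch only; not the team's actual implementation.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class KVEntry:
    """Cached attention key/value states for one token prefix."""
    keys: List[List[float]]    # per-layer key states (placeholder for real tensors)
    values: List[List[float]]  # per-layer value states
    hits: int = 0


class PrefixKVCache:
    """Reuse KV states computed for a shared prompt prefix across requests."""

    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self._store: Dict[Tuple[int, ...], KVEntry] = {}

    def lookup(self, token_ids: List[int]) -> Tuple[int, Optional[KVEntry]]:
        """Return the longest cached prefix length and its KV entry (0, None on miss)."""
        for plen in range(len(token_ids), 0, -1):
            entry = self._store.get(tuple(token_ids[:plen]))
            if entry is not None:
                entry.hits += 1
                return plen, entry
        return 0, None

    def insert(self, token_ids: List[int], keys, values) -> None:
        """Cache KV states for a prefix, evicting the least-hit entry when full."""
        if len(self._store) >= self.capacity and self._store:
            coldest = min(self._store, key=lambda k: self._store[k].hits)
            del self._store[coldest]
        self._store[tuple(token_ids)] = KVEntry(keys=keys, values=values)


# Usage idea: a serving layer looks up the prompt, and only tokens beyond
# the cached prefix length go through prefill.
cache = PrefixKVCache()
cached_len, entry = cache.lookup([101, 2054, 2003])
tokens_needing_prefill_start = cached_len
```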
Industry: Publishing Industries
Number of Employees: 5,001-10,000