We’re building the next-generation AI-native knowledge platform to help organizations easily access and retrieve internal knowledge using the power of LLMs. You’ll join a fast-moving engineering team to build scalable, secure, and intelligent Retrieval-Augmented Generation (RAG) infrastructure that powers enterprise search, AI assistants, and knowledge discovery experiences.

About the Team

You’ll collaborate with world-class engineers, designers, and product thinkers to define what "AI-powered search" really means in the enterprise. As a core engineer on this team, you'll work across real-time document pipelines, vector databases, and permission-aware retrieval to push the boundaries of applied LLM systems at scale.