Ai2 is seeking a talented and motivated Research Engineer to work on the next generation of large language model architectures, with a focus on Mixture-of-Experts (MoE), long-context language models (LCLMs), and flexible data use. You are a hands-on engineer who thrives in a fast-paced environment, is self-directed, is a team player, and knows how to get things done. You have a strong understanding of modern deep learning, natural language processing, language models, and the inner workings of the transformer architecture, especially MoEs. You can translate high-level goals into concrete research and implementation steps, set an approach, follow through, and present results. When it's time to explain your ideas, you bring clarity to complex technical issues. You use these skills to create real-world benefits for researchers and other practitioners, and you are excited to help advance our effort to create the best-performing open AI model.

We are a non-profit AI institute focused on foundational AI research and innovation that delivers real-world impact through large-scale open models, data, and artifacts (e.g., OLMo, Tulu, Asta, OlmoEarth). We unite the best and brightest scientific and engineering minds to explore the potential of truly open AI. Through our efforts, we endeavor to empower academics, researchers, and AI developers broadly to advance language models and generative AI. Through close collaboration, we rapidly identify, define, and act on the most exciting and promising new ideas in AI.

The FlexOlmo team designs new architectures and training methods that help models use data more effectively, through improved training, inference-time conditioning, and retrieval, broadening the types of data they can leverage and ultimately enhancing performance. We also develop scientific methodologies for evaluating and understanding these systems.
Our team produces high-impact research and expertly engineered open-source tools that accelerate NLP research worldwide. Our first release, in July 2025, introduced a new Mixture-of-Experts architecture. Looking ahead, we plan to pursue creative, groundbreaking research that delivers scientific insights and practical solutions for building architectures and training methods that unlock the use of large and diverse data sources.

You will join the core team of research engineers working on the infrastructure, architecture, modeling, and training of the next generation of foundation models, with a focus on continual learning. In this role, you will own the design and implementation of the systems that train these models. You will be responsible for building scalable machine learning pipelines as we push the boundaries of large language modeling research. You will collaborate with colleagues inside and outside your own team, but you will be responsible for each feature or experiment from start to finish, from conception to implementation.
Job Type
Full-time
Career Level
Mid Level
Number of Employees
101-250 employees