Lead Software Engineer, Runtime

Mistral AI, Paris (Hybrid)

About The Position

At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source, cutting-edge models, products, and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work. We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, the USA, the UK, Germany, and Singapore. We are creative, low-ego, and team-spirited. Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture at https://mistral.ai/careers.

Role Summary

As the Technical Lead for the Inference team, you will drive the architecture and optimization of our inference backbone, ensuring high performance, scalability, and efficiency in a dynamic environment. You will lead the acquisition and automation of benchmarks, collaborate with cross-functional teams, and develop innovative solutions that enhance our AI-powered applications.

Requirements

  • Extensive experience in C++ and Python, with a strong focus on backend development and performance optimization.
  • Deep understanding of modern ML architectures and experience with performance optimization for inference.
  • Proven track record with large-scale distributed systems, particularly performance-critical ones.
  • Familiarity with PyTorch, TensorRT, CUDA, NCCL.
  • Strong grasp of infrastructure and continuous integration/continuous delivery (CI/CD) principles.
  • Ability to lead and mentor team members, driving projects from concept to implementation.
  • Results-oriented mindset with a bias towards flexibility and impact.
  • Passion for staying ahead of emerging technologies and applying them to AI-driven solutions.
  • Humble attitude, eagerness to help colleagues, and a desire to see the team succeed.

Responsibilities

  • Architect and optimize the inference stack for high-volume, low-latency, high-availability environments.
  • Lead the acquisition and automation of benchmarks at both micro and macro scales.
  • Introduce new techniques and tools to improve performance, latency, throughput, and efficiency in our model inference stack.
  • Build tools to identify bottlenecks and sources of instability, and design solutions to address them.
  • Collaborate with machine learning researchers, engineers, and product managers to bring cutting-edge technologies into production.
  • Optimize code and infrastructure to maximize hardware utilization and efficiency.
  • Mentor and guide team members, fostering a culture of collaboration, innovation, and continuous learning.

Benefits

  • Competitive salary and equity (stock-options)
  • Health insurance
  • Transportation allowance
  • Sport allowance
  • Meal vouchers
  • Private pension plan
  • Generous parental leave policy
  • Visa sponsorship

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Publishing Industries
  • Education Level: Not specified
  • Number of Employees: 251-500
