About The Position

Mistral AI is seeking Applied Scientists and Research Engineers focused on multimodal learning (text, image, audio, video) to drive innovative research and collaborate with clients on complex projects. You will design, train, and deploy state-of-the-art multimodal models (e.g., omni-models, VLMs, audio, image generation, robotics, and more) and apply them to diverse use cases: enterprise search, agents grounded in images and documents, video understanding, and speech interfaces. You'll work cross-functionally with internal and external science, engineering, and product teams to deliver high-impact AI solutions.

Requirements

  • You are fluent in English and have excellent communication skills. You are at ease explaining complex technical concepts to both technical and non-technical audiences.
  • You're an expert with PyTorch or JAX.
  • You're not afraid of contributing to a big codebase and can find your way around it independently with little guidance.
  • You have experience in one of the following: VLMs, diffusion for image/video, audio processing (ASR/TTS), image processing, robotics.
  • You write clean, readable, high-performance, fault-tolerant Python code.
  • You don't need roadmaps: you just do. You don't need a manager: you just ship.
  • You are low-ego, collaborative, and eager to learn.
  • You have a track record of success through personal projects, professional projects or in academia.

Nice To Haves

  • Hold a PhD or master's degree in a relevant field (e.g., Mathematics, Physics, Machine Learning); if you're an exceptional candidate from a different background, you should still apply.
  • Can bring a variety of research experience (agents, multi-modality, robotics, diffusion, time-series).
  • Have contributed to a large codebase used by many (open source or in the industry).
  • Have a track record of publications in top academic journals or conferences.
  • Love improving existing code by fixing typing issues, adding tests and improving CI pipelines.

Responsibilities

  • Run pre-training and post-training, and deploy state-of-the-art models on clusters with thousands of GPUs. You don't panic at OOM errors or when NCCL stops talking.
  • Generate and curate multimodal datasets (web‑scale image‑text, document‑image, audio‑text, video‑text), and build robust evaluators/benchmarks for perception, grounding, OCR, and captioning.
  • Develop the necessary tools and frameworks to facilitate data generation, model training, evaluation and deployment.
  • Collaborate with cross-functional teams to tackle complex use cases using agents and RAG pipelines.
  • Manage research projects and communications with client research teams.

Benefits

  • Competitive cash salary and equity
  • Food: Daily lunch vouchers
  • Sport: Monthly contribution to a Gympass subscription
  • Transportation: Monthly contribution to a mobility pass
  • Health: Full health insurance for you and your family
  • Parental: Generous parental leave policy
  • Visa sponsorship
  • Insurance
  • Transportation: Reimbursement of office parking charges, or £90/month for public transport
  • Sport: £90/month reimbursement for gym membership
  • Meal voucher: £200 monthly allowance for meals
  • Pension plan: SmartPension (5% employee and 3% employer contributions)


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Publishing Industries
  • Number of Employees: 251-500 employees
