Generative AI Inference Engineer

Stability AI, Austin, TX

About The Position

We are seeking passionate Machine Learning Engineers to join our Inference team, focusing on the creative applications of generative AI models. The ideal candidate will have substantial experience developing and running inference for multi-modal models. A deep understanding of diffusion model architectures and familiarity with workflow tools like ComfyUI are a big plus. You will be expected to leverage and push the boundaries of state-of-the-art inference optimization techniques for multi-modal generative models. This role offers the opportunity to work alongside top researchers and engineers, utilizing cutting-edge high-performance computing resources to make a significant impact in the rapidly evolving field of generative AI.

Requirements

  • 7+ years working on productionizing machine learning systems, including inference pipeline development
  • Expert-level knowledge of writing and running Python services at scale
  • 5+ years working with the Python scientific stack and PyTorch, plus at least one high-performance inference framework (e.g., Triton or TensorRT)
  • Deep understanding of diffusion model architectures
  • Experience profiling and optimizing deep neural networks on NVIDIA GPUs, using profiling tools such as NVIDIA Nsight
  • Experience with Python-based image manipulation/encoding/decoding frameworks, such as OpenCV
  • Experience deploying to cloud orchestration systems such as Kubernetes and cloud providers such as AWS, GCP, and Azure
  • Experience with Docker
  • Ability to rapidly prototype solutions and iterate on them with tight product deadlines
  • Strong communication, collaboration, and documentation skills
  • Experience with the open-source ML ecosystem (HuggingFace, W&B, etc.)

Responsibilities

  • Lead the design and development of customer-facing multi-modal ML inference systems.
  • Work with the Platform and Inference teams to build inference systems for the next generation of models, covering areas such as optimization, model tuning, and deployment.
  • Partner with leading cloud providers to deliver hosted Stability AI inference solutions.
  • Be a strategic thought partner for leaders across the organization on driving business impact through machine learning.
  • Help bring new Stability models and pipelines into existence.
  • Prototype and productionize inference platform improvements and new features.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Publishing Industries
  • Education Level: No Education Listed
  • Number of Employees: 101-250 employees
