AI/ML Engineer

Versant
Orlando, FL (Remote)

About The Position

We are seeking an AI/ML Engineer with deep expertise in large language models (LLMs) and generative AI systems. In this role, you'll design, build, and deploy intelligent systems that leverage generative models to create new digital capabilities. You will work end-to-end, from prototyping through deployment, collaborating with backend, frontend, and product peers to deliver production-ready GenAI systems.

Requirements

  • 5+ years of professional software engineering experience, with a strong focus on applied AI.
  • Hands-on expertise with LLMs, embeddings, vector databases, and prompt engineering (work samples requested).
  • Experience building RAG pipelines and integrating models into real-world systems.
  • Proficiency with Python and ML frameworks (e.g., PyTorch, Hugging Face).
  • Experience deploying AI systems into production with modern cloud platforms (AWS, Azure, GCP).
  • Familiarity with MLOps practices and LLM tooling (e.g., LangChain, LlamaIndex, MLflow, Weights & Biases).
  • Strong understanding of observability and evaluation for generative systems (bias, drift, hallucinations).

Nice To Haves

  • Experience with fine-tuning, instruction tuning, or RLHF.
  • Experience with large, consumer-facing entertainment platforms operating at scale
  • Hands-on exposure to MCP servers and highly available, accessible API platforms
  • Knowledge of microservices and event-driven architectures at enterprise scale
  • Cloud-native, global-scale experience (AWS and its ecosystem of services)
  • A “fail fast, learn fast” mindset and a commitment to continuous improvement

Responsibilities

  • Design and implement LLM-powered applications, including chatbots, agents, and workflow automation tools.
  • Develop retrieval-augmented generation (RAG) systems that combine proprietary data with foundation models.
  • Fine-tune, adapt, or prompt-engineer LLMs for domain-specific use cases.
  • Build production-grade inference services and optimize them for performance and cost.
  • Integrate GenAI systems into larger platform architectures alongside backend and frontend engineers.
  • Implement guardrails, evaluation metrics, and monitoring for LLM behavior, safety, and quality.
  • Stay ahead of advances in generative AI, assessing opportunities to apply emerging techniques pragmatically.
  • Document methods, share learnings, and mentor engineers on applied LLM/GenAI practices.