About The Position

What You’ll Do

  • Design and implement end-to-end ML workflows supporting semantic search, classification, entity resolution, and retrieval.
  • Build production-ready AI services with strong attention to reliability, testability, and maintainability.
  • Develop and tune embedding pipelines, retrieval systems, and retrieval-augmented generation (RAG) components.
  • Collaborate with data engineering and backend teams to integrate ML capabilities into scalable systems.
  • Implement evaluation workflows tied to measurable mission performance (accuracy, latency, robustness).
  • Support deployment, monitoring, and versioning of ML models as part of a disciplined MLOps lifecycle.
  • Participate in architecture discussions and propose solutions aligned to platform constraints and mission needs.

Requirements

  • B.S. or M.S. in Computer Science, Engineering, or related technical field.
  • 3–6 years of experience building applied ML systems or NLP workflows.
  • Strong Python development skills and the ability to write production-quality services.
  • Experience training, tuning, evaluating, and deploying ML models.
  • Familiarity with modern ML/NLP libraries (Transformers, spaCy, scikit-learn, PyTorch).
  • Exposure to cloud environments and containerized deployment patterns.
  • Strong communication skills and ability to collaborate across teams.

Nice To Haves

  • Experience with embeddings, vector search, RAG, and semantic retrieval systems.
  • Familiarity with MLflow, DVC, Kubeflow, SageMaker, or similar tooling.
  • Experience with graph-based retrieval, agentic systems, or tool-use architectures.
  • Experience supporting defense, intelligence, ISR, or mission environments.
