Software Engineer, AI

Lattice
Remote

About The Position

Lattice's Engineering team is continuously improving both our product and our craft. We build maintainable, performant systems using modern technologies, and we collaborate closely with product and design to deliver agentic, high-quality user experiences.

Our AI Engineering team is building the systems that power how AI works across Lattice. Within the Quality sub-team, we focus on how AI systems are evaluated, measured, and improved over time. You'll contribute to the infrastructure and tooling that help us understand how our AI performs in production and ensure we're building reliable, high-quality experiences for our customers.

Requirements

  • 2–5 years of professional software engineering experience.
  • Experience contributing to production systems as part of a team.
  • Exposure to AI/ML systems and a strong interest in LLM-powered products.
  • Experience debugging systems, working with data, and iterating on performance.
  • Proficiency in Python or a similar language.
  • Strong understanding of LLM concepts (prompting, RAG, evaluation).
  • Familiarity with backend systems, APIs, and cloud environments (e.g., AWS, GCP).
  • Exposure to logging, monitoring, or debugging tools.
  • Interest in learning tools like LangGraph, vector databases, and evaluation platforms.
  • Strong ownership: you reliably deliver high-quality work on well-defined tasks, on time, and communicate progress clearly.
  • Learning mindset: you actively seek feedback and improve quickly.
  • Pragmatic and product-minded: you focus on solving problems effectively rather than perfectly.
  • Collaborative: you contribute to team discussions and uphold engineering best practices.
  • Growth-oriented: you invest in expanding your skills in AI engineering.

Nice To Haves

  • Hands-on experience with LLMs, prompt iteration, or MLOps.
  • Familiarity with vector databases or retrieval systems.
  • Exposure to experimentation, metrics, or basic statistical analysis.
  • Familiarity with TypeScript — our full-stack engineers use it, and cross-pollination is valuable.

Responsibilities

  • Contribute to AI evaluation pipelines, including offline evals, production tracing, and feedback systems.
  • Implement and maintain performance metrics (e.g., response quality, task success, reliability) using established frameworks.
  • Help create and maintain evaluation datasets and test cases to identify regressions.
  • Analyze results and propose incremental improvements to model and agent quality.
  • Contribute to AI system components such as RAG pipelines, retrieval systems, and multi-step workflows within existing architectures.
  • Write clean, maintainable Python code that integrates with LLM providers and internal services.
  • Support improvements to system reliability, observability, and performance in production.
  • Deliver well-scoped projects with guidance from more senior engineers.
  • Break down tasks, make steady progress, and be proactive in unblocking yourself by asking for help when needed.
  • Contribute to team excellence through code reviews, documentation, and knowledge sharing.
  • Collaborate with cross-functional partners to ship user-facing features.

Benefits

  • Medical insurance
  • Dental insurance
  • Life, AD&D, and Disability Insurance
  • Natural Disaster Support Program
  • Wellness Apps
  • Paid Parental Leave
  • Paid Time Off, inclusive of holidays and sick time
  • Remote Work Stipend
  • One-time WFH Office Set-Up Stipend
  • Retirement Plan
  • Financial Planning
  • Learning & Development Budget