Lead AI & LLM Engineer

Signature AviationOrlando, FL
1d

About The Position

The Lead AI / LLM Engineer is responsible for designing, building, deploying, and operating generative AI and large language model (LLM) solutions that deliver measurable business value. Reporting to the Director of Data Science & AI, this role serves as a senior technical leader responsible for translating enterprise AI use cases into secure, scalable, and production-grade applications. The Lead AI / LLM Engineer partners closely with Data Scientists, ML Engineers, Platform Engineers, and business stakeholders to drive the architecture and implementation of applied AI solutions. This role focuses on applied AI engineering, including LLM orchestration, prompt engineering, retrieval-augmented generation (RAG), system integration, and operational reliability while providing technical leadership and guidance to ensure solutions align with enterprise standards and long-term platform strategy.

Requirements

  • Typically requires a minimum of 8 years of related experience with a bachelor’s degree in Computer Science, Engineering, Statistics, or a related quantitative field (or equivalent experience).
  • 6–10+ years of experience in software engineering, ML engineering, or AI engineering.
  • Demonstrated experience building and deploying LLM-based or generative AI applications in production environments.
  • Experience designing scalable AI systems or platforms supporting enterprise use cases.
  • Strong proficiency in Python and/or JavaScript
  • Experience with LLM platforms and APIs
  • Experience implementing RAG architectures and vector databases
  • Familiarity with modern data platforms (e.g. Databricks, Snowflake)
  • Experience with cloud platforms (AWS, Azure, or GCP)
  • Understanding of MLOps and CI/CD concepts
  • Familiarity with AI safety, prompt injection risks, and mitigation techniques
  • Strong problem-solving and systems thinking skills
  • Ability to explain AI behavior and limitations to non-technical stakeholders
  • Clear written and verbal communication skills
  • Pragmatic, impact-oriented mindset

Nice To Haves

  • Experience fine-tuning or training LLMs
  • Experience building internal AI platforms or copilots

Responsibilities

  • Design, build, and deploy AI applications leveraging hosted or proprietary large language models.
  • Lead technical design decisions for enterprise generative AI implementations, ensuring solutions are scalable, secure, and production-ready.
  • Develop use cases such as: Natural language querying of enterprise data Document summarization and information extraction Conversational AI, copilots, and knowledge assistants Decision-support and workflow automation tools
  • Develop APIs, microservices, and integrations to expose AI capabilities to internal platforms and applications.
  • Ensure solutions are aligned with enterprise architecture, security, and engineering standards.
  • Provide technical guidance and design reviews for other engineers contributing to AI-enabled applications.
  • Design, test, and optimize prompts for accuracy, consistency, safety, and business alignment.
  • Architect orchestration workflows that combine LLMs with external tools, APIs, structured data, and business logic.
  • Implement guardrails to mitigate hallucinations, enforce policy constraints, and reduce prompt injection risks.
  • Establish repeatable testing frameworks and evaluation methodologies for prompt and workflow performance.
  • Drive best practices for prompt engineering, orchestration design, and model interaction patterns across engineering teams.
  • Design and implement RAG architectures leveraging enterprise data sources.
  • Build and maintain embedding pipelines, vector databases, and retrieval services.
  • Ensure relevance, freshness, access control, and compliance for retrieved content.
  • Optimize retrieval strategies and grounding techniques to improve response quality and accuracy.
  • Provide architectural guidance on enterprise knowledge systems that support AI-driven applications.
  • Deploy AI applications into production environments with robust CI/CD pipelines and infrastructure practices.
  • Establish monitoring frameworks for: Model output quality and drift Latency, availability, and reliability Cost, usage, and scaling patterns
  • Diagnose and resolve production incidents related to AI systems.
  • Improve system robustness through testing, logging, and observability best practices.
  • Guide engineering teams on operational standards for running reliable AI services in production environments.
  • Implement safeguards for data privacy, security, and access control.
  • Ensure AI systems comply with enterprise governance and regulatory standards.
  • Support explainability, auditability, and responsible AI practices.
  • Prevent leakage of sensitive or proprietary information in AI interactions.
  • Partner with security, legal, and governance teams to establish guardrails for enterprise AI adoption.
  • Partner closely with: Director of Data Science & AI Data Scientists and ML Engineers Data Engineers and Platform Engineers Legal, Security, and Governance teams
  • Lead technical design discussions and architecture reviews for AI solutions.
  • Participate in Agile delivery processes and guide implementation planning for complex AI initiatives.
  • Communicate risks, trade-offs, assumptions, and limitations of AI solutions to both technical and non-technical stakeholders.
  • Mentor engineers and contribute to the growth of AI engineering capabilities within the organization.
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service