Python - AI Engineer Contractor

Omm IT Solutions
Dallas, TX (Onsite)

About The Position

This is a 100% onsite contractor position in Dallas, TX, with a preference for local candidates. The role focuses on building AI-driven solutions in Python, including LLMs, embeddings, knowledge graphs, and RAG architectures. The engineer will build GenAI applications on foundation models and advanced architectures such as GraphRAG, develop autonomous AI agents with modern agentic frameworks, and design and deploy RAG and GenAI services to production on major cloud platforms, while collaborating across engineering teams and mentoring developers. Detailed responsibilities are listed below.

Requirements

  • Degree in Computer Science, AI/ML, or related field.
  • 5+ years of AI/ML-focused software engineering experience.
  • Production experience building LLM-based or agentic AI systems.
  • Strong expertise in Python and modern AI frameworks.
  • Experience with embeddings, knowledge graphs, ontology extraction, and advanced RAG/GraphRAG implementations.
  • Full-stack development experience (Python back end + modern front end).
  • Experience deploying AI workloads to AWS, Azure, or GCP.
  • Familiarity with LLMOps/MLOps tooling and model evaluation frameworks.
  • Strong problem-solving, communication, and collaboration skills.
  • Willingness to complete a HackerRank assessment.

Responsibilities

  • Build GenAI applications leveraging foundation models and advanced architectures such as GraphRAG.
  • Develop autonomous AI agents using modern agentic frameworks.
  • Design and deploy RAG and GenAI services using Python (FastAPI), Docker, and cloud platforms (AWS, Azure, or GCP).
  • Build scalable REST APIs that power LLM-driven applications integrated with enterprise data sources.
  • Implement LLM evaluation frameworks using tools such as Ragas, LangSmith, or custom benchmarks to measure answer relevance, groundedness, and hallucination rates.
  • Apply LLMOps/MLOps practices, including CI/CD pipelines, prompt/version management, automated testing, and monitoring of latency, cost, and response quality.
  • Develop systems leveraging embeddings at scale, knowledge graphs, and ontology extraction.
  • Collaborate across engineering teams, mentor developers, and help drive innovation in GraphRAG and agentic AI architectures.
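To illustrate the kind of work described above, the retrieval step at the core of a RAG service can be sketched as follows. This is a minimal, self-contained sketch: the three-dimensional embeddings and document corpus are invented for illustration, and a production system would instead use model-generated embeddings stored in a vector database, with the retrieved passages injected into the LLM prompt as grounding context.

```python
import math

# Toy corpus with made-up 3-d embeddings; a real system would store
# model-generated vectors (often hundreds of dimensions) in a vector store.
CORPUS = [
    {"id": "doc1", "text": "Refund policy overview", "embedding": [0.9, 0.1, 0.0]},
    {"id": "doc2", "text": "Shipping timelines", "embedding": [0.1, 0.9, 0.0]},
    {"id": "doc3", "text": "Return shipping labels", "embedding": [0.6, 0.4, 0.1]},
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_embedding, corpus, k=2):
    """Rank documents by similarity to the query embedding and keep the top k."""
    ranked = sorted(
        corpus,
        key=lambda doc: cosine(query_embedding, doc["embedding"]),
        reverse=True,
    )
    return ranked[:k]

# A query embedding pointing toward the "refund" region of the toy space
# retrieves the two most relevant documents for prompt grounding.
top = retrieve([1.0, 0.0, 0.0], CORPUS, k=2)
```

In a deployed service, this retrieval function would sit behind a FastAPI endpoint and feed its results into the generation step, which is where evaluation tools such as Ragas measure groundedness and answer relevance.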