About The Position

Designs and maintains robust AI agents and data pipelines. Performs data orchestration and supports enterprise AI and data efforts. Works across departments to build scalable AI solutions that ensure reliable, secure, and high-quality data is available to business users, analysts, and upstream and downstream applications. Responsible for the full lifecycle of AI development, from selecting foundation models (FMs) to deploying scalable orchestration layers in hybrid cloud environments. Contributes to data labeling, MLOps integration, AI observability, language model testing, and documentation in collaboration with AI analysts, AI architects, data scientists, developers, and system owners.

Requirements

  • Education and experience equivalent to a Bachelor’s degree from an accredited college or university in Computer Science, Information Systems, Data Science, AI and Analytics, or a job-related field of study.
  • Five (5) years of work-related experience in data engineering, data analytics, or AI/ML data processing.
  • Must have a valid Texas Driver's License and good driving record.
  • Will be required to provide a copy of 10-year driving history.
  • Must maintain a good driving record and remain in compliance with Article II, Subdivision II of Chapter 90 of the Dallas County Code.
  • Excellent analytical and problem-solving abilities.
  • Strong communication, collaboration, and documentation skills.
  • Ability to work independently and collaboratively on technical projects.
  • Ability to mentor junior team members.
  • Knowledge of DevOps, CI/CD, and containerized applications (Docker, Kubernetes).
  • Ability to design and optimize scalable data workflows.
  • Knowledge of Sovereign Cloud requirements or GovCloud environments.
  • Knowledge of big data frameworks (Snowflake, Spark, Databricks, vector databases, graph databases).
  • Knowledge of data warehousing, data lakes, and data modeling best practices.
  • Skill in SQL, Rust, Go, Python, and/or Scala for data transformation.
  • Knowledge of data privacy and compliance regulations (HIPAA, GDPR, CJIS).
  • Skill in implementing AI within county/government policy frameworks.
  • Knowledge of Git, data catalogs, and business intelligence tools.
  • Knowledge of cloud platforms (Azure, AWS, or GCP) and UI/UX frameworks (React/Next.js), including data storage technologies (e.g., SQL Server, Snowflake, Parquet).
  • Skill in Python, AWS SageMaker, LangChain, Pydantic, Model Context Protocol (MCP), Amazon Bedrock, vector databases, retrieval-augmented generation (RAG), and data integration tools (e.g., Jupyter Notebooks, API gateways).
  • Knowledge of streaming data technologies (Kafka, Kinesis, Pub/Sub).

Nice To Haves

  • Master’s degree preferred.
  • Certifications in cloud architecture (Azure, AWS, GCP), data modeling, and governance tools.
  • AWS Certified Data Engineer – Associate
  • AWS Certified Data Analytics – Specialty
  • Snowflake or Databricks certification.

Responsibilities

  • Designs, develops, and maintains scalable AI agents and orchestration workflows across structured and semi-structured data sources.
  • Ensures consistent design and delivery of data and AI platforms supporting Data Engineering, Cloud, and AI centers of excellence.
  • Integrates internal and external data sources with enterprise data platforms, lakes, or warehouses.
  • Designs and develops multi-agent systems using frameworks like LangGraph, CrewAI, or Amazon Bedrock to automate complex enterprise reviews and workflows.
  • Performs data profiling, cleansing, and standardization to improve data quality.
  • Monitors data pipeline health and troubleshoots failures or anomalies.
  • Documents AI architecture, APIs, AI business rules, and data logic for internal users.
  • Collaborates with DevOps or infrastructure teams to implement automated AI processing workflows.
  • Collaborates with Enterprise Architecture teams to ensure AI solutions align with internal policies, vendor questionnaires, and ethical AI guidelines.
  • Maintains data access controls, validation rules, and retention policies.
  • Translates business and AI requirements into technical specifications and AI pipeline designs.
  • Participates in Agile planning, backlog grooming, and technical design sessions.
  • Develops data and AI flow diagrams, machine learning models, and transformation logic.
  • Supports dataset design and delivery for dashboards, reports, or self-service analytics.
  • Collaborates with application owners to understand source system structures and data changes.
  • Contributes to solution architecture decisions related to language model performance, security, storage, and data delivery.
  • Assists in scoping and estimating new data initiatives and enhancement requests.
  • Identifies reuse opportunities for data components, tools, or models.
  • Builds validation and error-handling logic into data and AI pipelines to support reliability.
  • Performs root cause analysis for data inconsistencies and recommends preventive actions.
  • Contributes to and follows testing procedures for data validation, performance, and integrity.
  • Implements version control, data lineage, and reproducibility practices.
  • Identifies performance bottlenecks and refactors inefficient data processes.
  • Recommends improvements to schema design, data granularity, and source-system integration.
  • Maintains awareness of industry standards for data governance, security, and accessibility.
  • Supports automation of routine data workflows and manual reporting processes.
  • Works closely with analysts, data scientists, application developers, and stakeholders to deliver high-quality datasets.
  • Coordinates with system owners and system administrators to manage source data access and schema changes.
  • Supports QA and testing teams by validating expected output and data quality criteria.
  • Participates in data and AI design reviews, standups, retrospectives, and sprint demos.
  • Communicates technical limitations or trade-offs to business stakeholders in an understandable way.
  • Partners with cybersecurity teams to ensure sensitive data is handled securely and in compliance with County policy.
  • Continues building technical proficiency in cloud platforms, big data tools, and AI frameworks.
  • Stays current with trends in data engineering, streaming pipelines, and MLOps practices.
  • Contributes to internal wikis, playbooks, and best practices documentation.
  • Mentors junior data engineers or interns on development and testing practices.
  • Participates in knowledge-sharing sessions, communities of practice, or hackathons.
  • Proactively seeks opportunities for cross-training with related disciplines (e.g., AI, Big Data, DevOps and MLOps).
  • Implements robust LLM engineering practices using tools like Langfuse or Weights & Biases for tracing, debugging, and evaluating model outputs.
  • Tracks personal learning goals and reflects on performance improvement opportunities.
  • Communicates progress, risks, and needs to project leads or data managers.
  • Documents data sources, logic, and transformations in data dictionaries or metadata repositories.
  • Supports stakeholder training or onboarding on new datasets and data services.
  • Assists in writing user guides, technical diagrams, and documentation for AI Orchestrations and data pipelines.
  • Participates in requirement gathering and feedback sessions with business users.
  • Supports audit and compliance documentation as needed.
  • Provides timely responses to questions or data requests from supported teams.
  • Coordinates deployment of data updates with impacted teams or systems.
  • Performs other duties as assigned.