Senior IT Data Engineer (Onsite)

Tyson Foods, Inc., Springdale, AR
Onsite

About The Position

The Senior IT Data Engineer is an expert in building and optimizing modern data platforms, including real-time streaming pipelines that are cost-optimized for cloud resources. This role leads complex projects across data engineering, enterprise data modeling, and agentic AI: architecting scalable solutions, enforcing governance and security, and deploying AI-powered autonomous workflows that transform data engineering practices. The position requires working side by side with users of the solution, understanding their opportunities and problems, and rapidly iterating; you will architect and build solutions that apply business-critical data and the latest advancements in AI to those problems. You'll work in small, agile teams and own the end-to-end execution and implementation of high-stakes projects for Tyson's extensive manufacturing footprint. Very few companies offer end-to-end projects and initiatives at such massive scale, with significant cloud and data infrastructure already in place. With more than 100 manufacturing facilities worldwide, this position will be front and center, influencing change and deploying technology to more than 100,000 team members.

Requirements

  • Bachelor's Degree or relevant experience.
  • 3+ years of relevant and practical experience.
  • Proficiency in Python and SQL for data engineering at scale.
  • Expertise in modern data platforms (Databricks, Snowflake, BigQuery), lakehouse architectures (Delta Lake, Iceberg), and streaming (Kafka, Flink, Pub/Sub); a brief streaming sketch follows this list.
  • Deep knowledge of GCP.
  • Hands-on experience with orchestration (Airflow, Dagster), transformation (dbt), containerization (Docker, K8s), and IaC (Terraform).
  • Advanced data modeling, warehousing, dimensional modeling, and data contracts.
  • Expertise in CI/CD, data observability, governance, and cataloging.
  • Advanced expertise in agentic AI architectures, multi-agent systems, LLMOps, RAG pipelines, and AI safety/guardrails.
  • A highly analytical approach and eagerness to solve technical problems with data structures, storage systems, cloud infrastructure, front-end frameworks, and other technical tools.
  • Experience with, or curiosity about, using large-scale data to take on valuable business problems.
  • Ability to collaborate effectively in teams of technical and non-technical individuals, and comfort working in a dynamic environment with evolving objectives and frequent iteration with users.
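To make the streaming expertise above concrete, the sketch below shows a minimal Kafka consumer in Python. It assumes the kafka-python package and a reachable broker; the topic name, broker address, and event fields are illustrative placeholders, not details from this posting.

    import json
    from kafka import KafkaConsumer

    # Consume JSON events from a hypothetical plant-telemetry topic.
    consumer = KafkaConsumer(
        "plant.sensor.events",                 # placeholder topic name
        bootstrap_servers=["localhost:9092"],  # assumed broker address
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
        enable_auto_commit=False,              # commit only after a successful write
    )

    for message in consumer:
        event = message.value
        # In a real pipeline, events like this would land in a lakehouse table
        # (e.g., Delta Lake or Iceberg) for downstream modeling with dbt.
        print(event.get("sensor_id"), event.get("reading"))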

Nice To Haves

  • AWS Solutions Architect Professional, Google Professional Data Engineer, Databricks Certified Data Engineer Professional, or equivalent.
  • Project Management: Leading complex data and AI initiatives end-to-end.
  • Mentorship: Guiding team members in technical and professional growth.
  • Strategic Thinking: Aligning data and AI solutions with organizational goals.
  • Communication: Articulating strategies to technical and non-technical stakeholders.
  • Problem-Solving: Resolving complex data and AI system challenges.
  • Adaptability: Staying current with rapidly evolving technologies and practices.
  • Creativity: Innovating approaches to pipeline design, modeling, and AI automation.
  • Customer obsession.
  • Teamwork and collaboration.

Responsibilities

  • Lead the design and orchestration of complex data pipelines and ETL/ELT processes using Python, SQL, and modern frameworks (e.g., dbt, Airflow, Dagster) for a $50B company.
  • Architect scalable data solutions using modern platforms (BigQuery), lakehouse patterns (Delta Lake, Iceberg), and event-driven streaming architectures (Kafka, Flink, Pub/Sub).
  • Design enterprise-wide data models using advanced techniques (dimensional modeling, multi-dimensional modeling, ERDs), ensuring consistency and alignment with business processes.
  • Define and implement data contracts and APIs to ensure reliable interfaces between data producers and consumers; a minimal contract sketch follows this list.
  • Establish and enforce data governance, security, cataloging, and stewardship standards across all data and AI systems.
  • Optimize cloud costs (AWS, GCP, or Azure) through efficient architecture and resource management.
  • Implement CI/CD pipelines for data workflows and manage containerized workloads (Docker, Kubernetes) with infrastructure as code (Terraform).
  • Use dbt to model relevant data sources and ensure the quality and availability of that data.
  • Drive data observability, including proactive monitoring, alerting, and automated detection of freshness, volume, and schema drift issues.
  • Design and deploy agentic AI architectures and multi-agent systems that automate data engineering workflows, including RAG systems, vector databases, and LLM-integrated platforms.
  • Implement AI guardrails, observability, and evaluation frameworks, including LLMOps practices (prompt versioning, A/B testing, drift monitoring), cost optimization (token strategies, model selection), and security measures (prompt injection prevention, PII handling).
  • Lead code reviews, establish coding standards, and perform other assigned job-related duties that align with our organization's vision, mission, and values and fall within your scope of practice.
  • Collaborate with fellow engineers on architecture and design decisions.
  • Work with other developers on the team, particularly the data scientists and AI engineers, to support their needs.
  • Wrangle massive-scale data and use AI to accelerate and enhance critical operations.
  • Develop custom applications tailored to customer needs.
  • Engage directly with customer stakeholders, from consumers to technical teams and executives.
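As a pointer for the data-contract responsibility above, the sketch below shows one common way to express a contract in code, using Pydantic to validate records at the producer/consumer boundary. The schema and field names are hypothetical examples, not Tyson systems.

    from datetime import datetime
    from typing import Optional

    from pydantic import BaseModel, ValidationError

    class ShipmentEvent(BaseModel):
        """Version 1 of a hypothetical contract shared by producer and consumer."""
        shipment_id: str
        facility_code: str
        weight_kg: float
        shipped_at: datetime

    def validate_record(raw: dict) -> Optional[ShipmentEvent]:
        # Reject records that break the contract instead of letting bad data
        # propagate into downstream models.
        try:
            return ShipmentEvent.model_validate(raw)  # Pydantic v2 API
        except ValidationError:
            return None  # in practice: route to a dead-letter queue and alert

In practice, a contract like this would usually also be published as a versioned schema (for example, in a schema registry) so that producers and consumers can evolve it deliberately rather than by accident.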

Benefits

  • Paid time off
  • 401(k) plans
  • Affordable health, life, dental, vision, and prescription drug benefits