Development Engineer II - SQL/Python (Onsite)

Tyson Foods, Inc. - Springdale, AR
Onsite | Posted 10 days ago

About The Position

Continue growing with our family. Our team members make it happen. If you want to keep growing in a new role internally and see a position that looks right for you, we encourage you to apply! Thanks for your commitment to Tyson Foods.

Management Level: P3

The IT Data Engineer II applies data architecture principles to design and propose data solutions, some of which may require architecture review. They have a solid understanding of data pipeline orchestration, including building solutions that manage data flow sequences with appropriate monitoring. They develop on modern data platforms and are deepening their knowledge of data warehousing in an analytics environment. The role also requires working knowledge of agentic AI frameworks and the ability to integrate AI-driven automation into data engineering workflows, with an awareness of AI cost management and of data privacy considerations in AI contexts.

Requirements

  • Bachelor's Degree or relevant experience.
  • 1+ years of relevant or practical experience.
  • Proficiency in SQL and Python for data pipeline development.
  • Understanding of data warehousing concepts and basic data architecture.
  • Hands-on experience with at least one orchestration tool (Airflow, Dagster, or Prefect) and modern transformation frameworks (dbt); a minimal orchestration sketch appears after this list.
  • Working knowledge of modern data platforms (e.g., Spark, Databricks, Snowflake, BigQuery).
  • Familiarity with containerization (Docker) and version control (Git).
  • Hands-on experience with at least one cloud platform (AWS, GCP, or Azure).
  • Exposure to event-driven and streaming architectures (Kafka, Pub/Sub).
  • Proficiency in using AI-assisted development tools and IDEs (e.g., Cursor, GitHub Copilot) to accelerate coding, debugging, and testing.
  • Strong ability to leverage prompt engineering and advanced LLMs to enhance data workflows, generate boilerplate code, and automate routine tasks.
  • Initiative: Proactively identifying and proposing data solutions, including AI-enhanced approaches.
  • Teamwork: Collaborating effectively with stakeholders across departments.
  • Communication: Ability to explain complex data and AI concepts to non-technical audiences.
  • Critical Thinking: Analyzing data architecture needs and evaluating AI tools for process improvement.
  • Adaptability: Quickly learning and applying new data platform and AI technologies.
  • Time Management: Balancing multiple projects and priorities effectively.
  • Attention to Detail: Ensuring data accuracy and quality in all processes, including AI-generated outputs.
  • Not eligible for visa sponsorship now or in the future.
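
For illustration only: the following is a minimal, hypothetical sketch of the kind of pipeline orchestration this role calls for, written against Airflow's TaskFlow API (one of the tools named above, assuming Airflow 2.4+). The DAG name, task bodies, schedule, and alerting settings are all assumptions, not part of the posting; retries and email-on-failure stand in for the error handling and monitoring the role describes.

    # Hypothetical Airflow DAG: scheduling, dependencies, retries, alerting.
    from datetime import datetime, timedelta

    from airflow.decorators import dag, task

    default_args = {
        "retries": 2,                         # basic error handling
        "retry_delay": timedelta(minutes=5),
        "email_on_failure": True,             # simple monitoring hook
    }

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1),
         catchup=False, default_args=default_args)
    def orders_pipeline():
        @task
        def extract() -> list[dict]:
            # Placeholder: pull raw order rows from a source system.
            return [{"order_id": 1, "amount": 42.0}]

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Placeholder: drop obviously bad rows before loading.
            return [r for r in rows if r["amount"] >= 0]

        @task
        def load(rows: list[dict]) -> None:
            # Placeholder: write rows to the warehouse.
            print(f"loaded {len(rows)} rows")

        load(transform(extract()))            # dependency chain

    orders_pipeline()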

Nice To Haves

  • Any relevant IT Certification (e.g., AWS Solutions Architect Associate, Google Professional Data Engineer, Azure Data Engineer Associate, Databricks Certified Data Engineer Associate).

Responsibilities

  • Design, implement, and optimize data pipelines using SQL and Python, with a strong focus on code quality and reusability.
  • Develop and maintain ETL/ELT processes using modern transformation frameworks (e.g., dbt) and contribute to data warehousing solutions, including star and snowflake schema design and Kimball dimensional modeling.
  • Implement data pipeline orchestration using tools such as Airflow, Dagster, or Prefect, managing scheduling, dependencies, and error handling.
  • Write and optimize SQL, and build calculations within data visualization tools, to improve data models and query performance.
  • Contribute to the enforcement of data governance policies and support compliance with data security standards.
  • Build, deploy, and support data visualization solutions that effectively communicate data insights.
  • Participate in cloud cost optimization efforts within AWS, GCP, or Azure, ensuring efficient use of cloud resources.
  • Contribute to the development of data architecture and support data streaming initiatives using technologies such as Kafka or Pub/Sub.
  • Work with containerized environments (Docker) for pipeline development, testing, and deployment.
  • Implement data quality tests and validation checks using frameworks like dbt tests, Great Expectations, or similar tools to ensure pipeline reliability; a minimal validation sketch appears after this list.
  • Use Git for version control, including branching strategies, code reviews, and CI/CD pipeline integration.
  • Communicate and collaborate with key internal stakeholders to align data solutions with business needs.
  • Contribute to AI-assisted data pipelines and automated workflows using agentic AI frameworks (e.g., LangChain, CrewAI, AutoGen).
  • Leverage AI-powered tools for intelligent data quality monitoring, anomaly detection, and automated issue resolution.
  • Gain exposure to vector databases and embeddings to support retrieval-augmented generation (RAG) and semantic search use cases.
  • Apply prompt engineering techniques to optimize AI-driven data processing and reporting tasks.
  • Develop awareness of AI cost management (token economics, model selection) and data privacy risks in AI contexts (PII handling, prompt injection, data leakage through LLMs).
  • Drive a high level of development productivity through the strategic use of AI tools, paired with critical assessment and validation of generated outputs to ensure quality in production workflows.
  • Perform other assigned job-related duties that align with our organization's vision, mission, and values and fall within your scope of practice.
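
For illustration only: a minimal, hypothetical sketch of the kind of data quality checks described above, written in plain pandas. In practice these assertions would typically live in dbt tests or a framework such as Great Expectations; the table, column names, and rules here are assumptions, not part of the posting.

    # Hypothetical pandas-based data quality checks for an orders table.
    import pandas as pd

    def validate_orders(df: pd.DataFrame) -> list[str]:
        """Return human-readable data quality failures (empty list = clean)."""
        failures = []
        if df["order_id"].isna().any():
            failures.append("order_id contains nulls")
        if df["order_id"].duplicated().any():
            failures.append("order_id is not unique")
        if (df["amount"] < 0).any():
            failures.append("amount contains negative values")
        return failures

    df = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, -5.0, 3.5]})
    problems = validate_orders(df)
    if problems:
        # In a real pipeline this would fail the task and trigger alerting.
        raise ValueError("; ".join(problems))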

Benefits

  • Paid time off
  • 401(k) plans
  • Affordable health, life, dental, vision, and prescription drug benefits