Data Engineer

Equifax
St. Louis, MO (Hybrid)

About The Position

As a Data Engineer, you will play a crucial role in the evolution of our data platform, support the activation of AI capabilities across the team, and build data products and self-service solutions for the users of our data platform. This role requires being in the office three days per week, Tuesday through Thursday. This position does not offer immigration sponsorship (current or future), including F-1 STEM OPT extension support.

Requirements

  • At least 5 years of experience in data engineering, data architecture, or a related field.
  • A strong understanding of data engineering principles and best practices, including data modeling, data warehousing, and data integration.
  • At least 1 year of experience working in a GCP big data environment.
  • Experience building complex data pipelines and solutions using two or more of the following: BigQuery, DataFlow, DataProc, Pub/Sub, Cloud Functions.
  • Experience with Airflow or Cloud Composer.
  • Experience with Vertex AI.
  • Proficiency in Python development.
  • Professional experience with SQL.
  • Proven ability to effectively communicate complex technical concepts to both technical and non-technical stakeholders.
  • A Bachelor's degree or higher in Computer Science, Information Systems, or a related field.

Nice To Haves

  • Demonstrated experience with prompt engineering to interact with and guide AI models for optimal outputs.
  • Hands-on experience with AI-assisted coding tools.
  • Familiarity with frameworks for building AI applications or agents (e.g., LangChain, LlamaIndex).
  • Experience integrating AI services and models into data pipelines via APIs.
  • DevOps/CI/CD experience (Jenkins and GitHub) and IaC experience using Terraform to manage GCP infrastructure.
  • GCP Associate Cloud Engineer or Professional Data Engineer certification.

Responsibilities

  • Build complex batch and streaming pipelines to ingest data from upstream Equifax cloud systems.
  • Design and implement data engineering frameworks to scale the development and deployment of data pipelines across the D&A organization.
  • Leverage AI-powered coding assistants to accelerate development, optimize code, and generate documentation for data pipelines and infrastructure.
  • Develop and refine prompts for Large Language Models (LLMs) to assist in data-related tasks such as data cleansing, transformation logic generation, and automated data documentation.
  • Design, build, and maintain scalable data pipelines that support AI/ML applications.
  • Explore and implement AI agents to automate repetitive data management tasks, monitor data quality, and orchestrate complex data workflows.
  • Play an active role in setting engineering standards and best practices in EWS D&A.

Benefits

  • Comprehensive compensation and healthcare packages
  • 401(k) matching
  • Paid time off
  • Organizational growth potential through our online learning platform with guided career tracks