Engineer II, Data

CarMax, Richmond, TX
Hybrid

About The Position

CarMax, the way your career should be!

About this job

Data Engineer II – Enterprise Data Services

The bulk of the data engineer's work is building, managing, and optimizing data pipelines, then moving those pipelines into production for key data and analytics consumers (business/data analysts, data scientists, or any persona that needs curated data for operational and analytical use cases). Data engineers must also guarantee compliance with data governance and data security requirements while creating, improving, and operationalizing these integrated and reusable pipelines. This enables faster data access, integrated data reuse, and vastly improved time-to-solution for CarMax's operational and analytical initiatives.

We are seeking a Data Engineer with hands-on, production experience building and operating data pipelines in a cloud environment. This role sits on the Enterprise Data Services team, which owns streaming data ingestion, curation, and delivery for Analytics and Data Science teams.

Requirements

  • 2+ years of experience as a Data Engineer or Software Engineer working with data
  • Strong experience writing production Python code
  • Experience building and supporting pipelines on Azure or an equivalent cloud platform (AWS/GCP)
  • Hands‑on experience with distributed data processing frameworks such as Apache Spark or Databricks (or Spark runners such as EMR, Dataproc)
  • Experience working with event streaming or messaging platforms such as Azure Event Hubs, Apache Kafka / Confluent Kafka, or Amazon Kinesis
  • Practical experience with CI/CD pipelines, version control, and automated deployments
  • Familiarity with data modeling, schema management, and data reliability concepts
  • Experience operating production systems in an agile, collaborative engineering team
  • Applicants must be currently authorized to work in the United States on a full-time basis. Sponsorship will not be considered for this specific role.

Nice To Haves

  • Experience supporting production streaming systems with uptime and latency expectations
  • Experience with Azure-native services such as Azure Databricks, Azure Functions (or equivalent serverless frameworks: AWS Lambda, Cloud Functions), and Cosmos DB (or comparable NoSQL stores such as DynamoDB, Cassandra)
  • Experience working with Analytics or Data Science platform teams
  • Experience debugging live pipelines and handling operational ownership/on‑call responsibilities

Responsibilities

  • Design, build, and maintain production-grade data pipelines (streaming and batch)
  • Implement event-driven ingestion and near real‑time processing
  • Build transformations using distributed data processing frameworks
  • Write and maintain clean, testable Python code following software engineering best practices
  • Support CI/CD pipelines, automated deployments, and environment promotions
  • Monitor and troubleshoot pipeline failures, latency issues, and data quality problems
  • Collaborate closely with Data Scientists, Analysts, and platform engineers

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000 employees