Data Engineer

DriveTime
Tempe, AZ
Hybrid

About The Position

The DriveTime Family of Brands is seeking a Data Engineer to support the development, modeling, optimization, and governance of its data processes. This role is crucial for enabling business teams, analytics, and vendor partners with high-quality, cutting-edge solutions.

The Data Engineer will contribute to the design and delivery of scalable processes and data models using Snowflake, SQL, Python, and Kafka, while applying best practices in performance optimization, data quality, and CI/CD workflows. They will collaborate with engineers, product leadership, and analysts to translate complex data requirements into stable data structures within an AI-first, Agile framework. Key contributions include transactional and application support, building and operating data pipelines for internal systems and external integrations, developing consumer-ready datasets in Snowflake using ELT best practices, and optimizing platform performance, reliability, and cost efficiency.

Requirements

  • 4+ years of experience in data engineering, analytics engineering, or a related data discipline.
  • Bachelor’s degree in Information Technology, Computer Science, or related field — or equivalent work experience.
  • Advanced SQL skills spanning transactional query patterns and analytical/reporting workloads across both relational and cloud data platforms.
  • Experience with relational database systems (SQL Server) and modern cloud data platforms (Snowflake).
  • Familiarity with streaming and event-driven data pipelines (Kafka, CDC, or similar).
  • Strong collaboration and communication skills, with the ability to work across technical and non-technical teams.
  • Solid understanding of data modeling across paradigms — dimensional/star schema for analytics and normalized/3NF design for transactional systems.
  • Proficiency in Python for scripting, pipeline automation, data validation, and integration work.
  • Experience with Git-based version control and CI/CD workflows (GitHub or Azure DevOps).
  • Openness to and an appetite for proactively identifying opportunities to apply AI/ML for automation, anomaly detection, predictive insights, and intelligent decision-making across data pipelines and reporting solutions.

Nice To Haves

  • Openness to an AI-first execution mindset: using the latest models and tooling, and working to deliver value through agentic programming.

Responsibilities

  • Design and maintain data structures that power application and operational workflows reliably and at scale.
  • Build and operate pipelines that move data between internal systems and key vendor/partner integrations using Snowflake, SQL Server, Argo (workflow orchestrator), and Python.
  • Develop consumer-ready datasets in Snowflake using ELT best practices, dimensional modeling, and well-documented transformation logic.
  • Monitor and tune Snowflake performance, pipeline reliability, and cost efficiency across the full data stack.

Benefits

  • Medical, dental, and vision insurance
  • 401(k)
  • Company paid life insurance policy
  • Short and long-term disability coverage
  • Tuition reimbursement
  • Wellness program
  • Competitive pay
  • In-house gym
  • Paid time off (wellness days, holidays, personal time)
  • Vacation time for part-time employees