What’s Under the Hood
DriveTime Family of Brands is the largest privately owned used car sales, finance, and servicing company in the nation. Headquartered in Tempe, Arizona and Dallas, Texas, we create opportunities and improve the lives of our customers and our employees by focusing on putting the right customer in the right vehicle, on the right terms, and on their path to ownership. The DriveTime Family of Brands spans DriveTime, Bridgecrest, and SilverRock. You can find us at the intersection of technology and innovation as we use our proprietary tools and over two decades of industry knowledge to redefine the process of purchasing, financing, and protecting your vehicle.

That’s Nice, But What’s the Job?
This is not a position for which sponsorship will be provided. Individuals with temporary visas or who need sponsorship now or in the future are not eligible for hire at this time.

In short, we are seeking a Data Engineer to support the development, modeling, optimization, and governance of our data processes: transactional support for applications, consumption and generation of external data to and from our key partners, and growth of our agentic programming layer. This role is pivotal in enabling business teams, analytics, and our vendor partners with performant, high-quality, bleeding-edge solutions to the challenges that a rapidly growing company faces.

In long, as a Data Engineer on the SilverRock Data team, you will contribute to the design and delivery of scalable processes and data models using Snowflake, SQL, Python, and Kafka, while integrating best practices in performance optimization, data quality, and CI/CD workflows via GitHub or Azure DevOps. You will collaborate closely with fellow engineers, product leadership, and analysts to translate complex data requirements into stable, trusted processes and data structures, all in an AI-first, Agile framework.
In addition, you will contribute to:
- Transactional & application support: design and maintain data structures that power application and operational workflows reliably and at scale.
- Data movement: build and operate pipelines that move data between internal systems and key vendor/partner integrations using Snowflake, SQL Server, Argo (workflow orchestrator), and Python.
- Transformation & modeling: develop consumer-ready datasets in Snowflake using ELT best practices, dimensional modeling, and well-documented transformation logic.
- Platform optimization: monitor and tune Snowflake performance, pipeline reliability, and cost efficiency across the full data stack.

So, What Kind of Folks Are We Looking for?
- A technically strong and collaborative data engineer who thrives at the intersection of engineering and analytics.
- Open to an AI-first execution mindset: using the latest models and tooling, and working to deliver value through agentic programming.
- Brings hands-on experience with modern data tooling and a passion for building trusted, scalable, and business-aligned data products.
- Eager to grow alongside a high-performing team and make a real impact on the data that drives our business.
Job Type
Full-time
Career Level
Mid Level
Number of Employees
501-1,000 employees