Staff Software Engineer, Data Ingestion

RevolutionParts
Tempe, AZ (Hybrid)

About The Position

RevolutionParts is a mission-driven organization and pioneer in industry cloud, disrupting and innovating in the automotive space by creating the most active parts network in North America. Since our inception, dealerships have sold more than $1 BILLION in parts and accessories online using the RevolutionParts platform. We are here to transform the way parts buyers and sellers connect. At the heart of RevolutionParts are our core values: Think Big, Own It, Wow Them, Leave Your Ego at the Door, Work Together Win Together, and Move Fast, Fail Fast. Join us in transforming the automotive industry, committed to making a positive impact on our customers and their digital transformation journey.

The Role

RevolutionParts is growing rapidly, and the reliability and quality of our core data (catalog, pricing, and inventory) is paramount to our success. This data flows through our established, high-volume ETL pipeline, which is the heart of our platform. As a Staff Software Engineer, Data Ingestion, you will serve as the technical authority for our data ingestion and persistence ecosystem. You are a "force multiplier," balancing deep hands-on execution with high-level architectural strategy. You will own the reliability of our mission-critical, high-volume ETL pipelines (PHP, MySQL, PostgreSQL) while simultaneously defining the 2–3 year roadmap for our next-generation data architecture.

Please note: Applicants must be authorized to work for any employer in the U.S. We are unable to sponsor employment-based visas at this time.

Location & Hybrid Work Requirement

Preference will be given to candidates who live in the greater Phoenix, Arizona, area. This role requires working from our Tempe, AZ headquarters twice a week.

AI Fluency & Modern Tooling

At RevolutionParts, we expect team members to actively use modern tools, including AI-powered systems, to improve decision-making, productivity, and quality of work. This includes:

  • Using AI tools responsibly to accelerate research, analysis, documentation, and problem-solving
  • Exercising strong judgment around data privacy, accuracy, and ethical use
  • Continuously learning and adapting as AI capabilities evolve

Proven examples of using AI to improve outcomes in prior roles are expected.

Requirements

  • Experience: 10+ years in software or data engineering, with at least 3 years in a leadership or staff-level capacity managing complex, high-volume systems.
  • Pipeline Orchestration: Expertise in orchestrating ELT/ETL pipelines using dbt, Airflow, or Dagster, with a focus on data modeling and warehouse optimization (Snowflake, BigQuery, or Databricks).
  • Streaming: Proficiency in real-time streaming architectures using Kafka, Redpanda, or Flink to bridge the gap between application backends and analytical layers.
  • Technical Depth: Deep, hands-on mastery of PHP and backend application development (or comparable imperative languages) in mission-critical environments.
  • Database Expert: Exceptional ability to design, tune, and optimize complex SQL and relational schemas (MySQL/PostgreSQL).
  • System Design: Proven track record of architecting distributed systems and migrating legacy pipelines to modern architectures (e.g., microservices, event-driven systems).
  • Education: Bachelor’s and/or Master’s degree in Computer Science, Engineering, or equivalent professional experience.
  • Leadership: Experience leading through influence across multiple teams using Agile, Scrum, or Kanban methodologies.
  • Communication: Elite ability to communicate technical complexity to non-technical stakeholders and executives.

Nice To Haves

  • Experience in the domains of payments, eCommerce, Marketplaces, and/or complex Product Information Management modeling.

Responsibilities

  • Cross-Functional Liaison and Project Management: Act as the primary technical partner for Product, BI, Platform Engineering, and Executive Leadership, translating complex data trade-offs into business strategy. Own the end-to-end lifecycle of complex, multi-quarter initiatives—from technical discovery to execution. You will utilize Agile, Scrum, or Kanban to manage dependencies across teams, ensuring high-velocity delivery without compromising architectural integrity.
  • Technical Roadmap: Define, govern, and drive the long-term evolution of the data ingestion stack. You will lead the strategic transition toward a V2 architecture (e.g., event-based processing, modern languages/services) while ensuring the current PHP-based pipeline remains performant and stable.
  • Architectural Governance: Establish and enforce engineering standards and data hygiene practices (e.g., schema design, query optimization, observability) across all teams interacting with core persistence tiers.
  • Complex Problem Resolution: Lead the investigation of the most ambiguous, high-severity issues regarding data quality, latency, or performance that span multiple microservices and databases.
  • Expert System Ownership: Become the ultimate authority on our custom ETL workflows for catalog, pricing, and inventory data, taking accountability for their architecture and day-to-day performance.
  • Data Reliability Champion: Design and implement sophisticated monitoring, alerting, and validation frameworks that ensure data accuracy and timely delivery across the organization.
  • Pragmatic Debt Management: Own the final technical decisions for the ingestion domain, striking the critical balance between immediate stability, feature delivery, and long-term technical debt.
  • Mentorship: Act as a formal coach for Senior Engineers, leveling up the organization’s skills in high-volume database performance, distributed systems, and advanced data modeling.

Benefits

  • Competitive compensation
  • Career development
  • Benefits
  • 401(k) match
  • Parental leave
  • Many more valuable perks