VP, Data Engineering Technical Lead

Axos Bank
San Diego, CA
$125,000 - $150,000

About The Position

We are seeking a hands-on VP, Data Engineering Technical Lead to help shape, modernize, and scale our enterprise data platform. This role is central to our mission of transforming legacy data systems into a modern, cloud-native Lakehouse environment that powers analytics, AI, and business intelligence across the organization. As a technical lead, you will design and deliver scalable data pipelines, define data design patterns, enforce engineering standards, and leverage AI-assisted tools to accelerate modernization, improve productivity, and reduce technical debt. You will drive proofs of concept (POCs) and points of view (POVs) to evaluate emerging technologies and frameworks, ensuring that the platform remains innovative, cost-efficient, and future-ready.

Requirements

  • Bachelor's degree
  • 8+ years of experience in data engineering or related technical fields, including at least 3 years in a lead or senior role.
  • Proven experience designing and implementing data design patterns (e.g., change data capture (CDC), slowly changing dimensions (SCD), Medallion, Data Vault, and streaming and batch patterns).
  • Deep expertise with Databricks, Apache Spark, dbt, Fivetran, Census, Airflow, and Kafka.
  • Solid experience across Azure and/or GCP (e.g., Synapse, Data Factory, BigQuery, Pub/Sub).
  • Hands-on experience modernizing legacy ETL (SSIS/SSRS) workloads into cloud-native pipelines.
  • Demonstrated ability to build POCs and POVs that validate new tools, frameworks, or architectures.
  • Working knowledge of AI-assisted engineering tools for development, observability, or optimization.
  • Proficiency in SQL and one programming language (Python, Scala, or Java).
  • Strong problem-solving, architectural thinking, and collaboration skills.
  • Excellent communicator with the ability to translate technical topics to business stakeholders.

Responsibilities

  • Modernize legacy ETL pipelines: Lead the transformation of SSIS/SSRS workloads into modular, high-performance pipelines using Databricks, dbt, Fivetran, and Airflow.
  • Architect reusable data design patterns: Define and implement standardized frameworks for ingestion, transformation, curation, and consumption layers across the Lakehouse.
  • Develop and lead POCs/POVs: Experiment with new technologies (e.g., Delta Live Tables, Iceberg, streaming ingestion, AI-driven observability) to validate architecture choices and influence the enterprise roadmap.
  • Leverage AI to accelerate engineering: Use AI-enabled tools such as Databricks Assistant, Cursor AI, GitHub Copilot, and dbt Mesh AI tests for code generation, automated testing, documentation, and pipeline optimization.
  • Apply ML for operational intelligence: Integrate predictive models to detect pipeline anomalies and data drift and to optimize compute and scheduling.
  • Enforce engineering excellence: Drive CI/CD, version control, peer reviews, and observability practices across the data platform.
  • Collaborate cross-functionally: Partner with data architects, platform engineers, analysts, and business product owners to translate business needs into technical solutions.
  • Mentor data engineers: Provide technical guidance, foster continuous learning, and help the team adopt modern data engineering best practices.
  • Optimize performance and cost: Continuously tune Spark workloads, storage tiers, and orchestration logic across Azure and GCP environments.

Benefits

  • Medical, Dental, Vision, and Life Insurance
  • Paid Sick Leave, 3 weeks' Vacation, and Holidays (about 11 per year)
  • HSA or FSA account and other voluntary benefits
  • 401(k) Retirement Savings Plan with Employer Match Program and 529 Savings Plan
  • Employee Mortgage Loan Program and free access to an Axos Bank Account with Self-Directed Trading