Senior Databricks Engineer

HIKE2
Remote, PA

About The Position

HIKE2 is seeking a Senior Data Engineer with extensive experience in Databricks and modern data platforms, particularly within large, complex enterprise environments. This is a hands-on role for an individual who has successfully built and delivered data solutions at scale, preferably in greenfield settings, and is comfortable engaging with Fortune 500 clients. The role involves leading technical direction, making critical architectural decisions, and contributing to the development of other engineers on the team. You will collaborate closely with clients and internal teams to design and implement enterprise-grade data platforms and pipelines from inception to completion. This position requires a candidate capable of operating across the entire lifecycle, from initial architecture and design through delivery and optimization, ensuring solutions are practical, reliable, and aligned with business objectives.

Requirements

  • Deep Databricks-native expertise, including experience architecting and implementing end-to-end lakehouse solutions that run primarily or entirely on Databricks.
  • Advanced experience with modern Databricks architecture patterns, including declarative pipelines / Delta Live Tables, Unity Catalog, Delta Lake, workflow orchestration, governance, performance tuning, and operational monitoring.
  • Familiarity with infrastructure-as-code (Terraform, Bicep), environment provisioning, and CI/CD automation (GitHub, Azure DevOps) for Databricks-based platforms.
  • Strong learning agility, technical curiosity, and comfort using AI-enabled development workflows or automation tools to accelerate delivery and improve quality.
  • Familiarity with other modern cloud data architectures and tools, including cloud-native data warehouses (Snowflake, BigQuery, Redshift), data lakes, orchestration frameworks (Airflow/Astronomer), transformation tools (dbt), catalog/governance platforms, and scalable batch or streaming data processing services (Kafka, Kinesis).
  • Demonstrated ability to mentor and guide data engineers and analysts.
  • U.S. citizenship required
  • Must reside in the U.S.

Responsibilities

  • Design and build large-scale data platforms on Databricks (Delta Lake, Spark, Unity Catalog) in Azure
  • Develop and maintain batch and streaming data pipelines for high-volume, complex data sources
  • Implement medallion/lakehouse architectures from the ground up in greenfield environments
  • Build and optimize data models to support analytics, reporting, and downstream applications
  • Integrate Databricks with enterprise systems (APIs, event streams, warehouses, ML workflows)
  • Tune Spark jobs and pipelines for performance, reliability, and cost at scale
  • Support production deployments, including CI/CD pipelines, testing, and release management

Benefits

  • Medical
  • Dental
  • Vision
  • 401(k)
  • Holiday pay
  • Vacation
  • Personal and family sick leave