Databricks Engineer - Remote

Cystems Logic Inc - Houston, TX (Remote)

About The Position

We are seeking a highly skilled Sr. Databricks Engineer to design, develop, and optimize scalable big data and analytics solutions. The ideal candidate will have extensive experience with Databricks, Spark, cloud-based data platforms, and modern ETL/ELT frameworks. This role requires strong expertise in building high-performing data pipelines, supporting enterprise analytics, and collaborating with cross-functional teams in a remote environment.

Requirements

  • 10+ years of overall experience in data engineering or related fields
  • 4+ years of hands-on experience with Databricks and Apache Spark
  • Strong proficiency in PySpark and SQL, including query optimization and performance tuning for large-scale data processing
  • Experience with ETL/ELT pipeline development and orchestration
  • Expertise in Delta Lake data modeling and optimization
  • Strong experience with cloud platforms such as Azure, AWS, or GCP
  • Familiarity with Python or Scala for data engineering tasks
  • Experience with CI/CD pipelines and DevOps practices
  • Strong analytical, problem-solving, and communication skills
  • Experience with Azure Data Factory, AWS Glue, or similar orchestration tools
  • Experience with data lakes, data warehouses, and modern analytics platforms
  • Knowledge of Git, Jenkins, Terraform, or similar DevOps tools
  • Familiarity with Agile and Scrum methodologies
  • Ability to work independently in a remote setup

Nice To Haves

  • Databricks Certification
  • Experience with Power BI, Tableau, or other visualization tools
  • Knowledge of streaming technologies such as Kafka or Spark Streaming
  • Exposure to machine learning workflows and MLOps
  • Experience in insurance, banking, healthcare, or retail domains

Responsibilities

  • Design, develop, and maintain scalable ETL/ELT pipelines using Databricks, PySpark, and Spark SQL (an illustrative sketch follows this list)
  • Build and optimize Delta Lake architectures and data workflows
  • Develop reusable frameworks for ingestion, transformation, and validation
  • Collaborate with data architects, analysts, and business stakeholders to deliver data solutions
  • Optimize Spark jobs and SQL queries for performance and scalability
  • Implement data quality monitoring, logging, and alerting mechanisms
  • Develop and maintain CI/CD pipelines for Databricks notebooks, jobs, and workflows
  • Work with cloud-based storage and compute services
  • Support production deployments, troubleshooting, and incident resolution
  • Ensure security, governance, and compliance standards are followed
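
The sketch below is purely illustrative of the kind of pipeline this role covers: a minimal PySpark job that reads raw files, applies basic cleansing and validation, and writes a partitioned Delta table. The paths, table name, and columns (order_id, order_ts, amount) are hypothetical, and the Delta write assumes a Databricks or Delta-Lake-enabled Spark environment.

```python
# Minimal ETL sketch (hypothetical names); on Databricks a `spark` session
# already exists and Delta Lake support is built in.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files landed in cloud storage (hypothetical path).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/landing/orders/"))

# Transform: deduplicate, parse timestamps, derive a partition column,
# and apply a simple validation rule.
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("order_date", F.to_date("order_ts"))
         .filter(F.col("amount") > 0))

# Load: write a Delta table partitioned by date for downstream analytics.
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_clean"))
```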