Senior Data Engineer

Guidehouse
San Antonio, TX
Onsite

About The Position

Guidehouse is seeking a Senior Data Engineer to join its Data Engineering & Architecture Consulting team. This role involves designing, building, and operating Azure Lakehouse architectures to support analytical and operational workloads. The engineer will develop and optimize scalable ETL/ELT pipelines, process large-scale datasets using Apache Spark and Delta Lake, and collaborate with cross-functional teams to deliver enterprise solutions on Azure. A key aspect of the role is ensuring data quality, performance tuning, and cost optimization across data platforms and pipelines.

Requirements

  • Must be able to OBTAIN and MAINTAIN a Federal or DoD "Public Trust" security clearance; candidates must obtain approved adjudication of clearance prior to onboarding with Guidehouse.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or equivalent experience.
  • FIVE (5) or more years of experience as a Data Engineer, including THREE (3) years delivering enterprise solutions on Azure, specifically using Azure Databricks and Azure-native data services.
  • Hands-on experience designing, building, and operating Azure Lakehouse architectures using Azure Databricks, ADLS Gen2, Azure Synapse Analytics, and Azure Data Factory.
  • Experience in Apache Spark, Delta Lake, Python, SQL, and Scala, with demonstrated ability to process large-scale structured and unstructured datasets using optimized batch and streaming pipelines.
  • Experience designing, developing, and maintaining scalable ETL/ELT pipelines using Databricks Workflows, Spark jobs, and Delta Lake, ensuring reliability, performance, and data quality at enterprise scale.
  • Experience with real-time and batch data processing.

Nice To Haves

  • Candidates with an ACTIVE "Public Trust" or higher-level clearance are preferred.
  • Azure certifications (for example, Azure Data Engineer) or equivalent cloud/data platform certifications.
  • Experience implementing CI/CD for data engineering, MLOps pipelines, or data platform automation.
  • Familiarity with data governance, observability, and cost-optimization practices at scale.

Responsibilities

  • Design, build, and operate Azure Lakehouse architectures using Azure Databricks, Azure Data Lake Storage (ADLS Gen2), Azure Synapse Analytics, and Azure Data Factory to support analytical and operational workloads.
  • Develop, maintain, and optimize scalable ETL/ELT pipelines using Databricks Workflows, Spark jobs, and Delta Lake to ensure reliability, performance, and enterprise-grade data quality.
  • Process large-scale structured and unstructured datasets using optimized batch and streaming pipelines leveraging Apache Spark, Delta Lake, Python, SQL, and Scala.
  • Collaborate with cross-functional teams to deliver enterprise solutions on Azure and support production deployments, monitoring, and operational excellence.
  • Implement data quality, performance tuning, and cost-optimization practices for data platforms and pipelines.

Benefits

  • Medical, Rx, Dental & Vision Insurance
  • Personal and Family Sick Time & Company Paid Holidays
  • Parental Leave and Adoption Assistance
  • 401(k) Retirement Plan
  • Basic Life & Supplemental Life
  • Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
  • Short-Term & Long-Term Disability
  • Student Loan PayDown
  • Tuition Reimbursement, Personal Development & Learning Opportunities
  • Skills Development & Certifications
  • Employee Referral Program
  • Corporate Sponsored Events & Community Outreach
  • Emergency Back-Up Childcare Program
  • Mobility Stipend

What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

Associate degree

Number of Employees

5,001-10,000 employees
