Data Engineer, Staff

Qualcomm
San Diego, CA
Onsite

About The Position

We are seeking a Staff Data Engineer to design, build, and operate a modern, scalable data platform with Databricks Lakehouse as a core foundation. In this role, you will focus on building reusable data frameworks, shared platform components, and standardized pipelines that enable teams to deliver data products efficiently and consistently. Your work will support analytics, reporting, and downstream advanced use cases (including AI and machine learning), with a strong emphasis on reliability, governance, developer productivity, and intelligent automation. This is a hands-on role with meaningful ownership across data engineering, framework development, AI-driven automation, platform reliability, security, and cost management, and you will contribute to architectural decisions and data standards. This role requires full-time onsite work in San Diego, CA (5 days per week).

Requirements

  • 5+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems, or a related field
  • OR 7+ years of IT-related work experience without a Bachelor's degree
  • 3+ years of work experience with programming (e.g., Java, Python)
  • 3+ years of work experience with SQL or NoSQL databases
  • 3+ years of work experience with data structures and algorithms
  • 8+ years of experience building and operating data platforms or distributed data systems
  • Proven experience designing and building reusable data engineering frameworks, libraries, or platform components
  • Strong experience designing scalable, reliable data pipelines using standardized patterns
  • Solid understanding of data modeling, storage formats, schema evolution, and query performance
  • Experience implementing automation in data pipelines, including rule‑based or AI‑assisted approaches
  • Ability to reason about architectural trade-offs across scalability, cost, reliability, and security
  • Strong hands-on experience with AWS, including IAM, networking, and multi-account setups
  • Proven experience with Databricks Lakehouse, including Delta Lake and Unity Catalog
  • Strong proficiency in Python for framework development, data processing, and automation
  • Experience building data platforms that support multiple consumers and automated workflows
  • Understanding of cloud security best practices and data governance
  • Experience working in regulated or compliance-driven environments
  • Strong communication skills and the ability to drive adoption of shared frameworks and automation patterns across teams

Nice To Haves

  • Experience building AI-assisted or intelligent automation for data quality monitoring, pipeline observability, and cost or performance optimization
  • Experience building internal data platforms or enablement frameworks
  • Experience supporting AI/ML teams as platform consumers (without owning models)
  • Experience with data observability and monitoring tools
  • Experience with enterprise ingestion tools (e.g., Fivetran, HVR)
  • Experience with data lineage or metadata management
  • Familiarity with secret management tools (Vault or similar)
  • Experience optimizing Databricks performance and cost
  • Experience working with globally distributed teams

Responsibilities

  • Design, build, and maintain scalable batch and streaming data pipelines
  • Develop reusable data engineering frameworks, libraries, and templates for ingestion, transformation, validation, and publishing
  • Establish standardized patterns for data modeling, transformations, and pipeline orchestration
  • Implement end-to-end data workflows from raw ingestion to curated analytical datasets
  • Leverage AI-based techniques to automate and optimize data engineering workflows, such as intelligent schema inference and evolution, automated data quality checks and anomaly detection, and pipeline failure detection and self-healing mechanisms
  • Ensure data quality, reliability, and performance across pipelines and shared frameworks
  • Support downstream consumers such as analytics, reporting, and AI/ML teams
  • Define and monitor SLIs/SLOs for data pipelines, frameworks, and platform availability
  • Participate in incident response, on-call rotations, and post-incident reviews
  • Apply AI‑assisted monitoring and alerting to proactively detect performance issues, data drift, and operational anomalies
  • Implement security, compliance, and data governance controls across shared data assets
  • Drive performance tuning and cost optimization, including automated recommendations for resource utilization and workload optimization
  • Partner with analytics, application, and platform teams to understand common data needs and platform gaps
  • Drive adoption of standardized data frameworks, automation patterns, and best practices across teams
  • Contribute to data architecture decisions, platform standards, and design guidelines
  • Mentor junior engineers and provide technical guidance, including best practices for automating data workflows

Benefits

  • We offer a competitive annual discretionary bonus program and the opportunity for annual RSU grants (employees on sales-incentive plans are not eligible for our annual bonus). In addition, our highly competitive benefits package is designed to support your success at work, at home, and at play.