Database Engineer Lead - Mountlake Terrace, WA

Mindful Support Services
Mountlake Terrace, WA
Hybrid

About The Position

The Database Engineer Team Lead is responsible for the design, development, and operational ownership of Mindful Support Services’ data platform, including databases, data warehouses, and Azure-based data pipelines. This role operates with a high degree of autonomy while managing the Database Administrator and the Power BI Developer, leading the architecture, implementation, and ongoing management of data systems that support both operational workflows and analytics.

In this role, you will own end-to-end data movement and transformation across Microsoft Fabric and Azure environments, leveraging PySpark, T-SQL, and modern data engineering practices, and you will manage the two team members supporting this work. You will be responsible for building resilient data pipelines, maintaining CI/CD processes in Azure DevOps, and ensuring that high-impact data workflows continue to operate reliably with minimal direct support.

This is a hands-on engineering and leadership role suited to someone comfortable operating independently in a production-critical environment, where pipelines and databases are core to daily business operations. The position is full-time and based out of our Mountlake Terrace headquarters, with occasional travel to other MSS locations as needed; a hybrid work schedule becomes available with tenure.

Requirements

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field, or equivalent practical experience.
  • 5+ years of experience in a Database Engineer, Data Engineer, or similar role owning database and pipeline development in production environments.
  • 3+ years of hands-on experience working with PySpark in distributed data processing environments.
  • 3+ years of experience building and maintaining data platforms in Microsoft Fabric or comparable modern data platforms (e.g., Synapse, Databricks).
  • 5+ years of advanced T-SQL experience, including performance tuning, indexing, and complex query development.
  • 3+ years of experience managing CI/CD pipelines in Azure DevOps (ADO) for database and data pipeline deployments.
  • 3+ years of experience managing or developing enterprise data warehouses, including dimensional modeling and large-scale data transformations.
  • 2+ years of experience operating and supporting high-impact production data pipelines with limited oversight or support.
  • Strong proficiency in PySpark and Python, T-SQL, Microsoft Fabric, and Azure DevOps (CI/CD pipelines).
  • Ability to independently manage and prioritize work in a high-impact, low-support environment.
  • Ability to lead a small team in following database management best practices.
  • Strong troubleshooting and root-cause analysis skills across complex data systems.
  • High attention to detail in data integrity, performance, and system reliability.
  • Clear communication skills for both technical and non-technical stakeholders.

Nice To Haves

  • Experience with Azure data services (Azure SQL, Fabric, Data Factory, Logic Apps, Functions)
  • Experience with ETL/ELT pipeline design patterns
  • Experience with data modeling methodologies (Kimball, Medallion)
  • Experience building Microsoft Power BI dashboards
  • Familiarity with DAX for analytical modeling
  • Familiarity with scripting languages (Python, PowerShell)

Responsibilities

  • Design, build, and maintain enterprise-grade data warehouses within Microsoft Fabric (Warehouse, Lakehouse) using PySpark.
  • Implement and manage data models including fact and dimension tables supporting operational and analytical workloads.
  • Apply Medallion architecture (Bronze, Silver, Gold) to structure scalable, maintainable data pipelines.
  • Optimize storage, query performance, and cost efficiency across Fabric and Azure environments.
  • Develop, deploy, and maintain data pipelines using PySpark, SQL, and Azure-native services to facilitate development across the DevOps team.
  • Engineer scalable ETL/ELT processes that ingest, transform, and load data across multiple systems.
  • Own pipeline reliability, monitoring, alerting, and failure recovery in production environments.
  • Troubleshoot and resolve issues across ingestion, transformation, and storage layers with minimal escalation support.
  • Design and manage CI/CD pipelines in Azure DevOps for database changes, data pipelines, and data models.
  • Implement version control, automated deployments, and testing strategies for all data assets.
  • Maintain release processes for schema changes, stored procedures, and pipeline updates across environments.
  • Write, optimize, and maintain advanced T-SQL code (stored procedures, views, functions, indexing strategies).
  • Perform query tuning and performance optimization across high-volume transactional and analytical workloads.
  • Design and enforce database standards, naming conventions, and documentation practices.
  • Lead the Database Administrator in best practices around data governance, reliability, and security.
  • Implement role-based access controls and data security aligned with HIPAA and internal policies.
  • Own backup, restore, and disaster recovery strategies for databases and pipelines.
  • Conduct regular validation of data integrity and pipeline outputs.
  • Partner with analysts, operations, and engineering teams to translate business requirements into scalable data solutions.
  • Provide technical guidance on data modeling, pipeline design, and platform best practices.
  • Document systems, pipelines, and processes to ensure maintainability despite lean operational support.

Benefits

  • 75% employer-covered Health, Dental & Vision benefits plan
  • 401(k) savings plan with employer matching upon eligibility
  • 8 paid holidays
  • 15 PTO days accrued annually
  • Professional and career development opportunities
  • Compensation evaluated with opportunities for advancement