Senior Platform Product and Technical Lead

Boeing
Seattle, WA
Onsite

About The Position

Boeing has a current need for a Senior Platform Product and Technical Lead to drive the modernization of legacy ETL pipelines into a scalable, configuration-driven ETL framework running on AWS. This role will lead end-to-end migration efforts, define reusable data pipeline patterns, and ensure high-quality, governed, and performant data solutions aligned with enterprise standards, both for migrating existing legacy pipelines to the new config-driven framework and for supporting the enterprise's future pipeline needs.
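
For illustration only, here is a minimal sketch of what a single step of such a configuration-driven framework could look like in PySpark. The config schema, S3 paths, column names, and transform operations below are hypothetical assumptions, not Boeing's actual framework:

```python
# Minimal sketch of a configuration-driven ETL step; the JSON config schema,
# bucket paths, and column names are illustrative assumptions only.
import json
from pyspark.sql import SparkSession

# Example pipeline definition; in practice this would live in a config store.
PIPELINE_CONFIG = json.loads("""
{
  "source": {"format": "csv", "path": "s3://example-bucket/raw/orders/",
             "options": {"header": "true"}},
  "transforms": [
    {"op": "filter", "expr": "order_status = 'SHIPPED'"},
    {"op": "select", "columns": ["order_id", "customer_id", "order_total"]}
  ],
  "target": {"format": "delta", "path": "s3://example-bucket/curated/orders/",
             "mode": "overwrite"}
}
""")

def run_pipeline(spark, cfg):
    """Build and run one pipeline purely from configuration."""
    src = cfg["source"]
    df = spark.read.format(src["format"]).options(**src.get("options", {})).load(src["path"])

    # Apply each declared transform in order; new operations are supported by
    # extending this dispatch logic rather than writing new pipeline code.
    for t in cfg.get("transforms", []):
        if t["op"] == "filter":
            df = df.filter(t["expr"])
        elif t["op"] == "select":
            df = df.select(*t["columns"])
        else:
            raise ValueError(f"Unsupported transform: {t['op']}")

    tgt = cfg["target"]
    df.write.format(tgt["format"]).mode(tgt.get("mode", "append")).save(tgt["path"])

if __name__ == "__main__":
    spark = SparkSession.builder.appName("config-driven-etl").getOrCreate()
    run_pipeline(spark, PIPELINE_CONFIG)
```

The point of the pattern is that migrating another legacy pipeline becomes a matter of authoring a new config, not new code, which is what makes the framework repeatable across the enterprise.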

Requirements

  • Must be a U.S. Person (Green Card holder or U.S. citizen)
  • Must be located in the Seattle area or willing to relocate at your own expense. Candidates in Southern California or Dallas will also be considered.
  • Bachelor’s Degree or higher in Computer Science, Engineering, Information Systems, or equivalent practical experience
  • 7+ years of strong hands-on experience with ETL tools (e.g., DataStage, Informatica)
  • 5+ years of proven experience with Databricks and Apache Spark (PySpark/Spark SQL)
  • 3+ years of experience in designing and implementing metadata-driven, pattern-based ETL/ELT frameworks.
  • 3+ years of experience with AWS cloud platform managed services (VPC, EC2, S3, IAM, KMS, Secrets Manager) and with cloud data lakes and data warehouses (a minimal Secrets Manager sketch follows this list)
  • 5+ years of experience with CI/CD pipeline implementation and DevOps/DevSecOps practices (e.g., GitHub/GitLab, Terraform, Ansible, Jenkins)
  • Demonstrated experience in large-scale data pipeline migration to cloud data platform
  • Strong skills in performance tuning and optimization of new and migrated data pipelines
  • 3+ years of experience with workflow orchestration tools (e.g., AutoSys, Airflow, Databricks Workflows)
  • 5+ years of deep hands-on experience in data security, governance, and compliance, including encryption, role-based access control (RBAC), metadata management, and regulatory alignment (FedRAMP, NIST, GDPR)
  • Proven experience in designing and building enterprise data platforms, data modeling, schema design, data warehouses, data lakes, data architecture patterns, repeatable ETL/ELT pipelines
  • Experience with programming languages used for data engineering (e.g., Python, SparkSQL, PySpark, Shell, Java) and experience with distributed processing frameworks
  • Hands-on experience with data ingestion and data integration technologies, including batch, streaming, and change data capture (CDC)
  • Proven ability to lead cross-functional teams and manage complex delivery programs.
  • Leadership, mentoring, strong written/oral communication, and ability to work across distributed teams and vendors
  • Experience with monitoring and observability for data systems (logging, metrics, alerting, lineage tools)
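
As an illustration of the Secrets Manager pattern referenced above, here is a minimal boto3 sketch; the secret name, region, and key layout are hypothetical assumptions:

```python
# Minimal sketch of fetching pipeline credentials from AWS Secrets Manager,
# assuming a hypothetical secret named "etl/warehouse-credentials".
import json
import boto3

def get_warehouse_credentials(region: str = "us-west-2") -> dict:
    """Retrieve a JSON secret so credentials never live in pipeline configs."""
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId="etl/warehouse-credentials")
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_warehouse_credentials()
    # Inspect the available keys without printing the secret values themselves.
    print(sorted(creds.keys()))
```

Pulling credentials at runtime like this, rather than embedding them in pipeline configuration, is one way the governance and security requirements above surface in day-to-day pipeline code.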

Nice To Haves

  • 10+ years of strong hands-on experience with ETL tools (e.g., DataStage, Informatica)
  • 7+ years of proven experience with Databricks and Apache Spark (PySpark/Spark SQL)
  • Experience with cloud platforms (AWS, Azure, GCP) and cloud data services (e.g., BigQuery, Cloud Storage, DataProc, Snowflake)
  • Experience with database design and management across SQL, NoSQL, and columnar stores; migrating/integrating complex on-premises sources (e.g., SAP) to the cloud
  • Experience with data replication tools such as GoldenGate and HVR
  • Experience with streaming and event-driven systems and real-time ingestion patterns (see the sketch after this list)
  • Experience with FinOps-driven cloud cost optimization and capacity planning to balance cost, performance, and scalability in cloud data environments
  • Experience with Agile software development lifecycle and tooling (ADO, JIRA)
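
As an illustration of a real-time ingestion pattern, here is a minimal Spark Structured Streaming sketch; the Kafka broker, topic, and S3 paths are assumptions, and the Spark Kafka connector must be available on the cluster:

```python
# Minimal sketch of real-time ingestion with Spark Structured Streaming,
# assuming hypothetical Kafka and S3 locations.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("streaming-ingest-sketch").getOrCreate()

# Read change events from Kafka; broker address and topic are illustrative.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")
    .option("subscribe", "orders-cdc")
    .load()
)

# Kafka delivers key/value as binary; cast the payload for downstream parsing.
payload = events.select(col("value").cast("string").alias("raw_event"))

# Land raw events in the lake; the checkpoint location gives restartable,
# exactly-once progress tracking for the stream.
query = (
    payload.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders-cdc/")
    .outputMode("append")
    .start("s3://example-bucket/raw/orders_cdc/")
)

query.awaitTermination()
```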

Responsibilities

  • Lead a large-scale ETL modernization initiative migrating legacy pipelines (e.g., DataStage, GoldenGate, HVR) to a scalable, configuration-driven, metadata-based ETL framework, and ensure adherence to data governance, security, and compliance standards.
  • Lead the implementation of a metadata-driven, reusable ETL framework on the AWS cloud data platform and champion repeatable, self-service cloud and data architecture patterns that enable teams to deploy scalable, performant, maintainable, and compliant data pipelines autonomously across the enterprise.
  • Lead end-to-end data integration and ETL/ELT processes to ingest, transform, and deliver complex structured and unstructured data into a governed Data Lakehouse, enabling seamless access for analytics, reporting, and data science workloads.
  • Design and architect cloud-native and cloud-agnostic data platforms and data engineering solutions on AWS, leveraging SaaS products such as Databricks to ensure portability, resilience, and consistent governance across environments.
  • Drive automation, DevOps/DevSecOps, and Infrastructure as Code (IaC) initiatives to deliver repeatable, testable, and deployable artifacts and accelerate migrations.
  • Troubleshoot and resolve implementation issues throughout the SDLC; monitor architecture compliance and operational health.
  • Design and configure data pipelines with enterprise orchestration and scheduling tools, and establish monitoring, alerting, and operational runbooks for production support (a minimal orchestration sketch follows this list).
  • Provide technical leadership, mentorship, and guidance to ETL engineering teams; promote coding best practices, enable the team with automation strategies and tools, conduct peer reviews, and foster knowledge sharing across distributed teams.
  • Build and maintain strong relationships with vendors, partners, and cross‑functional teams, own stakeholder communications and collaboration channels, and drive accountability and organizational change through regular updates to product managers, DBAs, architects, and senior leadership.
  • Operationalize and standardize cloud platforms (AWS/Azure), applying architecture patterns, guardrails, and enterprise standards for scalability, reliability, security, compliance, and cost control.
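
As an illustration of the orchestration, scheduling, and alerting responsibilities above, here is a minimal Apache Airflow DAG sketch (assuming Airflow 2.4+); the DAG id, schedule, and task bodies are illustrative assumptions:

```python
# Minimal sketch of an orchestrated pipeline with retry and alerting hooks in
# Apache Airflow; the DAG id, schedule, and callables are illustrative.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extract step: pull from legacy source")

def load():
    print("load step: write to lakehouse")

default_args = {
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,  # wire to on-call alerting per the runbook
}

with DAG(
    dag_id="legacy_migration_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # "schedule" requires Airflow 2.4+
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```

Retries, failure notifications, and explicit task dependencies like these are the building blocks of the monitoring, alerting, and runbook practices the role is expected to establish.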

Benefits

  • Relocation assistance is offered for eligible candidates.