About The Position

At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. We are intentional about fostering an inclusive workplace where every teammate has the opportunity to succeed, build a career and contribute to our shared success. This includes attracting and developing exceptional talent, recognizing and rewarding performance, and supporting our teammates’ physical, emotional, and financial wellness through affordable, competitive and flexible benefits. We value the unique perspectives individuals bring from all backgrounds and career paths - whether shaped by military service, community college education, or a wide range of work and life experiences. These journeys foster resilience, leadership and innovation, strengthening our workforce and positively impacting the communities we serve.

Bank of America is committed to an in-office culture that supports collaboration, engagement, and career development. Our approach includes clear in-office expectations, while providing an appropriate level of flexibility based on role-specific responsibilities and business needs. At Bank of America, you can build a successful career with opportunities to learn, grow, and make an impact. Join us!

This job is responsible for providing leadership, technical direction and oversight to a team delivering technology solutions. Key responsibilities include overseeing the design, implementation, and maintenance of complex computer programs, aligning technical solutions to business objectives, and ensuring that coding practices and quality comply with software development standards. Job expectations include conducting multiple software implementations and applying both depth and breadth of knowledge across several technical competencies.
We're seeking a seasoned engineer to design, operate, and scale our workflow orchestration platform with a primary focus on Apache Airflow. You'll own the Airflow control plane and developer experience end-to-end—architecture, automation, security, observability, and reliability—while also evaluating and operating complementary schedulers where appropriate. You'll build automation infrastructure and partner across data, trading, and engineering teams to deliver mission-critical pipelines at scale.

Requirements

  • 5–8+ years building/operating data or platform systems; 3+ years running Airflow in production at scale (hundreds to thousands of DAGs and high task throughput).
  • Deep Airflow expertise: DAG design and testing, idempotency, deferrable operators/sensors, dynamic task mapping, task groups, datasets, pools/queues, SLAs, retries/backfills, cross-DAG dependencies.
  • Strong Kubernetes experience running Airflow and supporting services: Helm, autoscaling, node/pod tuning, topology spread, network policies, PDBs, and blue/green or canary strategies.
  • Automation-first mindset: Terraform, Helm, GitOps (Argo CD/Flux), and CI/CD for platform lifecycle; policy-as-code (OPA/Gatekeeper/Conftest) for DAG, connection, and secrets changes.
  • Proficiency in Python for authoring operators/hooks/utilities; solid Bash; familiarity with Go or Java is a plus.
  • Observability and SRE practices: Prometheus/Grafana/StatsD, centralized logging, alert design, capacity/throughput modeling, performance tuning.
  • Data platform experience with at least one major cloud (AWS/Azure/GCP) and systems like Snowflake/BigQuery/Redshift, Databricks/Spark, EMR/Dataproc; strong grasp of IAM, VPC networking, and storage (S3/GCS/ADLS).
  • Security/compliance: SSO/OIDC, RBAC, secrets management (Vault/Secrets Manager), auditing, least-privilege connection management, and change control.
  • Proven incident leadership, runbook creation, and platform roadmap execution; excellent cross-functional communication.

Nice To Haves

  • Experience leading migrations to/from Airflow.
  • OpenLineage/Marquez adoption; Great Expectations or other data quality frameworks; data contracts.
  • dbt Core/Cloud orchestration patterns (state management, artifacts, slim CI).
  • Cost optimization and capacity planning for schedulers and workers; spot instance strategies.
  • Multi-region HA/DR for Airflow metadata DB; backup/restore and disaster drills.
  • Building internal developer platforms/portals (e.g., Backstage) for self-service pipelines.
  • Contributions to Apache Airflow or provider packages; familiarity with recent AIPs/Airflow 2.7+ features.

Responsibilities

  • Designs, develops and is accountable for feature delivery
  • Applies enterprise standards for solution design, coding and quality
  • Ensures solution meets product acceptance criteria with minimal technical debt
  • Guides the team on work breakdown and execution
  • Works with the Product Owner to ensure that product backlog/requirements are healthy, with clear acceptance criteria
  • Plays a team lead role (as an individual contributor) and mentors the team
  • Guides team members with skills and practices (planning and estimation, peer reviews, and other engineering practices)

Benefits

  • Access to paid time off
  • Resources and support for our employees