Lead Data Engineer

Lakeview Loan Servicing
New York, NY (Remote)

About The Position

The Lead Data Engineer on the Nebula team plays a significant technical leadership role in shaping and scaling the data foundation that powers analytics, reporting, AI development, and operational decision-making across the organization. This role combines hands-on data engineering execution with practical team leadership, helping the organization build reliable, flexible, and production-ready data systems.

The Lead Data Engineer heads a lean, high-caliber squad of data engineers, while remaining deeply hands-on in the design, development, and operation of core data systems. The role balances direct technical contribution with mentoring, coaching, coordination, and day-to-day support for the engineers on the squad.

Working across ingestion, transformation, storage, modeling, orchestration, and delivery, this role partners closely with Product, Engineering, AI, Analytics, and domain Subject Matter Experts (SMEs) to translate complex business processes into scalable data platforms, pipelines, and trusted datasets. This role owns the technical direction for core data capabilities, including ETL/ELT, batch and real-time processing, OLTP and OLAP systems, BI-ready data models, and cloud-based data infrastructure in a regulated, high-stakes environment. Success requires strong architectural judgment, operational discipline, and the ability to raise the technical bar for both systems and people.

Requirements

  • 5-8+ years of experience building and operating production-grade data pipelines, platforms, and distributed data systems
  • 2+ years of experience leading, mentoring, or managing data engineers in a tech lead, staff-level project lead, engineering manager, or TLM capacity
  • Strong hands-on experience with industry-standard tools and platforms for ETL/ELT, orchestration, data warehousing, streaming, and BI
  • Deep understanding of OLTP and OLAP systems, including the ability to design architectures that support transactional, analytical, and operational workloads
  • Experience building flexible data pipelines across many source and destination types, including databases, APIs, files, queues, event streams, SaaS platforms, and internal systems
  • Strong experience with both batch and real-time processing patterns, including tradeoffs in latency, reliability, cost, and operational complexity
  • Experience deploying and operating cloud-based data infrastructure on AWS, GCP, or Azure
  • Advanced SQL and data modeling expertise, including schema design, warehouse optimization, semantic modeling, and performance tuning
  • Strong programming ability in languages commonly used in data engineering, such as Python, Java, Scala, Go, or similar
  • Comfort with CI/CD, infrastructure-as-code, automated testing, observability, incident response, and production operations for data systems
  • Strong architectural judgment in ambiguous environments where systems must balance speed, reliability, compliance, maintainability, and long-term leverage
  • Clear communication skills with both technical and non-technical teammates, including the ability to explain tradeoffs and influence direction

Nice To Haves

  • Experience operating as a Technical Lead or Tech Lead Manager responsible for technical implementation, technical direction, and people development
  • Experience with modern orchestration and transformation tools such as Airflow, Dagster, dbt, or similar platforms
  • Experience with cloud-native warehouses or lakehouse platforms such as Snowflake, BigQuery, Redshift, Databricks, or equivalent technologies
  • Experience with streaming systems such as Kafka, Kinesis, Pub/Sub, Flink, Spark Streaming, or similar technologies
  • Experience enabling BI and self-service analytics through curated datasets, semantic layers, and reporting platforms such as Looker, Tableau, Power BI, or similar tools
  • Experience building data platforms that support AI, machine learning, decisioning, or LLM-powered workflows
  • Experience scaling a data engineering function, including technical standards, operating rhythms, hiring, onboarding, and team development
  • Experience in fintech, mortgage, lending, payments, insurance, or other regulated domains

Responsibilities

  • Own the architecture and evolution of core data systems, including ingestion, transformation, orchestration, storage, modeling, and delivery layers
  • Set technical direction for ETL/ELT, batch processing, real-time pipelines, OLTP and OLAP systems, and BI-ready data assets
  • Make pragmatic architecture decisions that balance scalability, reliability, security, performance, cost, and delivery speed
  • Establish engineering standards, reusable patterns, and design principles that improve quality and leverage across the data platform
  • Lead the design, build, rollout, and operations of greenfield data infrastructure
  • Build and maintain complex data pipelines across diverse source and destination systems, including databases, APIs, files, SaaS platforms, event streams, and internal applications
  • Design and optimize data models, warehouse schemas, semantic layers, and curated datasets for analytics, reporting, AI, and product use cases
  • Contribute directly to critical implementation work, including writing code, code and design reviews, migrations, reliability improvements, and production issue resolution
  • Lead a lean, high-caliber squad of data engineers, spending focused time mentoring, coaching, managing, and coordinating the team
  • Develop engineers through regular feedback, technical guidance, code reviews, career support, and clear expectations around quality and ownership
  • Help prioritize team work, clarify scope, remove blockers, and ensure the squad delivers reliably against business and technical goals
  • Contribute to hiring, onboarding, performance development, and team operating rhythms as the data engineering function grows
  • Deploy, operate, and improve data pipelines, data stores, and supporting infrastructure on major cloud platforms such as AWS, GCP, or Azure
  • Drive strong practices for CI/CD, infrastructure-as-code, automated testing, monitoring, alerting, and incident response
  • Ensure data systems are observable, fault-tolerant, recoverable, and maintainable in production
  • Identify opportunities to reduce operational toil, improve platform reliability, and manage cloud infrastructure costs effectively
  • Define and enforce standards for data quality, validation, reconciliation, lineage, schema evolution, metadata, and documentation
  • Establish patterns for data contracts, ownership, SLAs, and runbooks that help downstream teams trust and use data confidently
  • Partner with security, compliance, and business stakeholders to support privacy, auditability, access controls, and regulated data handling
  • Raise the maturity of data governance and reliability practices without slowing down pragmatic delivery
  • Partner closely with Product, Engineering, AI, Analytics, and business stakeholders to align data architecture with organizational priorities
  • Translate ambiguous business needs and operational workflows into clear technical plans, milestones, and production-ready solutions
  • Serve as a senior technical point of contact for data-heavy initiatives, communicating tradeoffs, risks, sequencing, and timelines clearly
  • Enable downstream consumers, including analysts, product teams, data scientists, and operational users, through reliable and well-modeled data assets
  • Contribute to a culture of ownership, curiosity, operational rigor, pragmatism, and engineering excellence
  • Raise the bar for the team through thoughtful design, clear abstractions, strong reviews, and sound technical judgment
  • Balance staff-level technical depth with practical people leadership, helping the team grow while continuing to ship high-quality systems

Benefits

  • Medical coverage starting on day one
  • Company-matched 401(k)