About The Position

Senior Manager, Software Engineering – Data Platform

At Lytx, we make roadways safer by transforming real-time video and telematics into actionable safety intelligence. The Data Platform is the backbone of that mission – ingesting and processing massive event streams, shaping trusted datasets, and serving low-latency insights that power products and analytics across the company. As Senior Manager, you’ll grow exceptional engineering talent and deliver at scale – owning our data lake/warehouse, streaming and batch pipelines, data transformations, data quality, and the self-service platform that enables application teams to rapidly onboard new datasets.
What Success Looks Like

  • Senior engineers grow into trusted tech leads under your coaching.
  • Programs land on time and on target, with improved reliability, latency, and cost efficiency across data pipelines and datasets.
  • The platform matures with stronger observability, data quality, and adoption of modern technologies like Iceberg.
  • Application teams can self-serve dataset onboarding with minimal friction, accelerating time-to-insight.
  • AI tools are actively used across the team, with clear strategies and procedures in place, resulting in measurable gains in development velocity and quality.
  • Cross-functional partners view your teams as accountable, predictable, and innovative.
  • Reliable, observable pipelines with clear SLAs; lower latency, fewer incidents, and higher trust in data across the organization.

Requirements

  • Bachelor’s degree in CS/Engineering or equivalent experience.
  • 10+ years building scalable data systems and backend/distributed systems; 5+ years managing teams and growing senior-level talent.
  • Strong technical depth in data engineering: streaming pipelines (Kafka, Flink), batch ETL (Airflow), data warehousing (Redshift), and query/serving patterns (Athena, APIs/GraphQL). Comfortable reviewing pipeline code, SQL, data models, and architecture decisions.
  • Proven success with cloud-native data architectures at scale, including data lake/warehouse design, schema evolution, data contracts, and CDC patterns.
  • Experience with open table formats such as Apache Iceberg, including partition evolution and schema evolution.
  • Strong program execution: roadmap ownership, delivery predictability, cross-functional coordination, measurable results.
  • Solid data modeling skills for telemetry/events and analytics use cases; understanding of data quality frameworks, observability, and lineage.
  • Experience leading geographically distributed teams with strong cross-cultural collaboration skills.
  • Proficiency in object-oriented programming languages such as C# / Java / Python; strong software engineering fundamentals (testing, CI/CD, code review).
  • Early adopter mindset for AI and emerging technologies; demonstrated experience using AI tools to enhance engineering productivity and a track record of ramping up AI adoption within teams.
  • Continuous learner who evaluates new tech pragmatically and uplifts team capabilities; excellent communicator and collaborator.

Nice To Haves

  • AWS expertise (S3, Redshift, Athena, IAM, EKS) – preferred cloud vendor at Lytx.
  • Experience with dbt, Great Expectations, or other data testing/quality frameworks.
  • IoT/telematics data at scale.
  • Terraform/IaC, Kubernetes, and cost-aware architecture.
  • Experience building self-service data platforms or internal developer/data tooling.
  • Hands-on experience with AI coding assistants (e.g., Copilot, Claude Code) and AI-powered development workflows; experience defining team-level guidelines and procedures for effective AI usage.

Responsibilities

  • Develop leaders & teams: mentor senior engineers and tech leads across US and India-based teams, build leadership depth, and foster a culture of ownership, learning, and craftsmanship. Drive close collaboration across geographies to ensure alignment and high velocity.
  • Drive strong program execution: convert strategy into clear roadmaps and predictable delivery; manage cross-team dependencies and outcomes across multiple concurrent initiatives.
  • Lead at scale: guide architecture for high-throughput streaming pipelines (Kafka, Flink), robust batch ETL (Airflow, Redshift, S3), and modern open table formats (Apache Iceberg, Athena) – ensuring reliability, performance, and cost efficiency.
  • Advance the self-service platform: drive the vision and delivery of tooling and workflows that enable application teams to quickly onboard new datasets to the platform with minimal friction.
  • Champion AI-assisted engineering: be an early adopter of AI tools (coding assistants, AI-driven testing, automated code review, etc.) and drive adoption across the team. Develop strategies, procedures, and best practices for using AI effectively to accelerate development velocity, improve code quality, and reduce toil.
  • Elevate engineering excellence: advance data quality, observability, lineage, testing, validation, incident response, and cost/performance optimization across all pipelines and datasets.
  • Own outcomes: ensure SLAs/SLOs, data freshness, query performance, and data quality meet business goals; deliver measurable improvements in reliability and cost efficiency.
  • Partner broadly: collaborate with product, application, analytics, and domain teams to translate business needs into durable, scalable data assets and well-documented data contracts.

Benefits

  • Medical, dental and vision insurance
  • Health Savings Account
  • Flexible Spending Accounts
  • Telehealth
  • 401(k) and 401(k) match
  • Life and AD&D insurance
  • Short-Term and Long-Term Disability
  • FTO or PTO
  • Employee Well-Being program
  • 11 paid holidays plus 1 inclusive holiday per year
  • Volunteer Time Off
  • Employee Referral program
  • Education Reimbursement Program
  • Employee Recognition and Appreciation program
  • Additional perk and voluntary benefit programs