About The Position

The Finance Data Team sits at the intersection of the Finance & Accounting teams and Life360’s data. We provide the data ingestion, processing, reporting, and egress that our partner teams in Finance & Accounting need to do their work and to maintain SOX compliance with rigor. We push the envelope on how work is done, adopting AI tools and capabilities to accelerate our own pace of development and expand what we deliver to our stakeholders.

We are hiring a bar-raising Staff Data Engineer to drive our ingress/egress capabilities and build cross-cutting capabilities for developer experience and security. This role requires someone who can step into ambiguity, make sound architectural decisions, eliminate operational fragility, and establish an engineering discipline that others adopt. You will serve as a technical reference point - shaping standards, influencing cross-team architecture, and driving initiatives to clear, production-ready outcomes. We value engineers who are direct, collaborative, and proactive in surfacing risks early, while helping build a team culture where high standards and psychological safety coexist.

The Life360 Finance Data Team acts as the integrator for numerous systems - bringing data into the Finance Data Warehouse, transforming it, and pushing it to its relevant destinations (reporting, data asset deliverables, tools, etc.). To support this role we are continuously building and enhancing our system - adding new data, new transformations, and new tooling to improve developer velocity and ‘buy down’ the overhead costs of maintaining it.

As a Staff Data Engineer, you will drive forward:

  • The data ingestion suite.
  • The data transformation suite.
  • The data egress suite.
  • CI/CD pipelines and other tools and capabilities that enhance developer experience and velocity.
  • The infrastructure and networking behind our warehouse and related connectors.
  • Databricks configuration and capabilities.
  • Our security posture and access controls.
The ideal candidate has spent years building data platforms and infrastructure, as well as creating ingress/egress data frameworks used in pipelines. They have tackled numerous challenges and found novel solutions to problems in data ingestion, processing, and egress. They have learned to leverage LLMs for development velocity and analytics - not just asking a model to write code, but directing the tools to support their development under clear guidance, and accepting ownership of the work produced as their own. They have learned to think about the scalability, velocity, and experience of future development, not just about shipping the current project. They are part software engineer, part data integration engineer, and part data platform engineer. They adhere to the controls, procedures, and separation of duties necessary to maintain our SOX compliance. We are looking for someone with strong engineering depth who demonstrates ownership, decisiveness, and the ability to elevate both the system and the team around them.

Requirements

  • 8+ years designing and operating high-volume distributed data systems in production.
  • Deep expertise with a cloud data platform (Databricks strongly preferred) and AWS from an infrastructure / services architecture, deployment, and ownership perspective.
  • Strong proficiency in Python, SQL, and Spark for large-scale processing.
  • Strong proficiency with modern CI/CD practices (creating GitHub Actions, writing Terraform code to manage infrastructure in Databricks / Airflow / AWS / and others).
  • Hands-on experience with dbt from an infrastructure / deployment perspective and understanding of how platform decisions impact downstream modeling.
  • Strong grasp of data modeling, partitioning strategies, storage formats, and analytical workload optimization.
  • Experience with Airflow and data flow orchestration.
  • Experience with networking challenges in data ingestion (e.g., VPC peering, firewall traversal, API rate limiting, cross-AWS-account access, etc.).
  • Ability to effectively leverage and oversee LLM-supported code development while maintaining a high quality bar.
  • Demonstrated experience with AI tools to support / enhance development - Claude Code, Cursor, etc.
  • Demonstrated ability to independently scope ambiguous problems and drive them to decisive outcomes.
  • Track record of proactively escalating risks and closing long-running efforts with clear recommendations.
  • Experience defining ingestion validation standards and implementing data quality controls.
  • Proven ability to reduce operational fragility and eliminate single points of failure.
  • Strong systems design skills across distributed and event-based architectures.
  • Demonstrated technical leadership influencing cross-team architectural decisions.
  • Excellent communication skills across engineering, analytics, product, and executive stakeholders.
  • BS in Computer Science, Engineering, Mathematics, or equivalent experience.

Responsibilities

  • Architect and evolve scalable data ingestion and egress frameworks and pipelines that are well tested and offer strong data quality monitoring.
  • Architect and evolve our CI/CD processes - enhancing the testing environment and observability (such as building LLM-driven reviews with context awareness through data diffing, lineage analysis / downstream impact analysis, and general context).
  • Design the delivery architecture for data assets provided to external partner teams, reducing the manual operational overhead associated with month-end close.
  • Enhance our Claude Code / LLM development support capabilities - creating tools / skills / agents that give our LLMs more context and help us continually improve their abilities to debug, create code, and maintain systems.
  • Enhance our security posture in our AWS / Databricks environment.
  • Design and implement distributed data processing systems using Spark and Databricks on AWS.
  • Establish clear ingestion and integration boundaries that eliminate single points of failure.
  • Proactively surface risks, dependencies, and tradeoffs before they impact delivery.
  • Produce clear technical artifacts and recommendations for stakeholders and leadership.
  • Design logical and physical data models balancing flexibility, performance, governance, and scalability.
  • Partner closely with the Analytics Engineers on the Finance Data Team to support high-quality downstream data modeling & reporting.
  • Harden pipelines with monitoring, alerting, SLAs, and recovery mechanisms.
  • Mentor engineers and elevate distributed systems rigor across the team.

Benefits

  • Competitive pay and benefits.
  • Medical, dental, vision, life and disability insurance plans (100% paid for US employees). We offer supplemental plans for medical and dental for Canadian employees.
  • 401(k) plan with company matching program in the US and RRSP with DPSP plan for Canadian employees.
  • Employee Assistance Program (EAP) for mental wellness.
  • Flexible PTO and 12 company-wide days off throughout the year.
  • Learning & Development programs.
  • Equipment, tools, and reimbursement support for a productive remote environment.
  • Free Life360 Platinum Membership for your preferred circle.