Engineer II – Data Engineer

GEICO
Palo Alto, CA (Remote)

About The Position

GEICO is looking for an experienced engineer who enjoys building fast, reliable platforms and applications that are easy to operate and designed for continuous availability. In this role, you will help advance our insurance business as we evolve into a technology organization grounded in engineering excellence. The role supports our Finance Data Warehouse: as an Engineer II in our FinTech organization, you will uphold strong standards for data protection, reliability, and availability. Our team succeeds by shipping high-quality technology products and services in a high-growth setting where priorities can change quickly. We are looking for someone with broad technical range, comfortable across the stack, from user-facing experiences through backend and data systems to the integrations that connect them.

Requirements

  • Strong hands-on experience with SQL, dbt, and Python for data transformation and pipeline automation.
  • Proven understanding of data pipeline architecture (batch workflows, idempotency, data quality, error handling, backfills) and how pipelines interface with a warehouse-centric analytics stack; see the DAG sketch after this list.
  • Experience contributing to the architecture and design of data systems (layering, modeling patterns, reliability, scaling, cost awareness).
  • Working knowledge of structured data interchange formats (e.g., JSON, XML, and CSV as sources), APIs, and file-based ingestion patterns as used in analytics pipelines.
  • Solid grounding in computer science fundamentals (e.g., complexity, joins, partitioning concepts) applied to data processing.
  • Experience with Git tools and standard branching/review workflows.
  • Familiarity with cloud data and orchestration services (e.g., Snowflake and managed Airflow or equivalent).
  • Experience with continuous delivery and Infrastructure as Code for pipeline repos or supporting infrastructure.
  • Strong oral and written communication skills.
  • Strong problem-solving and debugging skills across SQL, logs, and orchestration failures.
  • Practical experience working in an Agile environment.
  • Ability to deliver in a fast-paced, priority-driven setting.
  • Knowledge of developer tooling across the SDLC (task management, source control, build/deploy, operations, collaboration tools), including AI-assisted IDEs used responsibly alongside dbt and Airflow workflows.
  • 2+ years of non-internship professional experience in data engineering, software engineering with a data focus, or equivalent.
  • 2+ years contributing to design and architecture of data pipelines or analytics data products (models, DAGs, warehouse objects).
  • 2+ years building and operating ETL/ELT or transformation-heavy systems using SQL-centric tooling (dbt or an equivalent transform discipline is required, with Airflow or comparable orchestration).
  • 2+ years with AWS, GCP, Azure, or comparable cloud platforms in a data or backend context.
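
For illustration, the kind of idempotent, backfill-friendly batch workflow described in the pipeline-architecture bullet above can be sketched as a small Apache Airflow DAG in Python. This is a minimal sketch, assuming Airflow 2.x; the DAG id, task id, and partition handling are hypothetical, and a real task would replace the print with a warehouse write.

    # Minimal sketch of an idempotent daily batch DAG (Airflow 2.x assumed).
    # DAG id, task id, and partition handling are hypothetical examples.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def load_partition(ds: str, **_) -> None:
        # Idempotent by design: each run (re)writes only the partition for
        # its logical date `ds`, so retries and backfills overwrite cleanly
        # instead of appending duplicate rows.
        print(f"replacing warehouse partition for {ds}")

    with DAG(
        dag_id="finance_daily_load",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=True,  # permits historical backfills over past dates
        default_args={
            "retries": 2,  # transient failures are retried
            "retry_delay": timedelta(minutes=5),
        },
    ):
        PythonOperator(
            task_id="load_partition",
            python_callable=load_partition,
        )

Because each run is keyed to its logical date, rerunning a failed day or backfilling a range means clearing or triggering those runs; no manual deduplication is needed.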

Responsibilities

  • Scope, design, and build scalable, resilient data pipelines (orchestration, transformation, delivery) that support analytics and downstream products.
  • Use modern developer tooling effectively, including AI-assisted coding (e.g., Cursor, GitHub Copilot) to accelerate delivery while maintaining code review, testing, and governance (no secrets in prompts or code, repo-aligned patterns).
  • Engage in cross-functional collaboration across the full data lifecycle with analysts, platform engineers, and product partners from requirements through production support.
  • Participate in design sessions and code reviews with peers to improve correctness, performance, security, and operability of data systems.
  • Define, create, and support reusable pipeline patterns and standards (e.g., layering, testing, incremental design, naming, documentation) from both business and technology perspectives; see the data-quality sketch after this list.
  • Leverage AI models to generate SQL and Python across dbt (models, tests, macros, incremental strategies), Apache Airflow (DAGs, dependencies, backfill/retry patterns), cloud data warehouse platforms (e.g., Snowflake), and related integration patterns; then apply your own expertise to review and improve the generated code.
  • Execute delivery using an Agile methodology, continuous integration/continuous delivery, Infrastructure as Code where applicable, scripting for automation, platform consoles for warehouse and orchestration, and observability tooling (logging, metrics, and alerting; for example, dashboards and APM where used).
  • Build pipeline definitions and apply strong technical judgment to choose and implement solutions that balance latency, cost, freshness, and reliability.
  • Share best practices and improve processes within and across teams.
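
As one example of the reusable testing patterns mentioned above, a small data-quality gate can fail a pipeline run before bad rows reach downstream consumers. Below is a minimal sketch in Python, assuming any DB-API 2.0 connection (for example, one returned by snowflake-connector-python's connect()); the schema, table, and column names are hypothetical.

    # Minimal sketch of a reusable data-quality gate (DB-API 2.0 assumed).
    # Schema, table, and column names are hypothetical examples.
    FAILING_ROWS_SQL = """
        SELECT COUNT(*)
        FROM finance.payments
        WHERE amount IS NULL OR amount < 0
    """

    def assert_no_failing_rows(conn) -> None:
        # Raise instead of logging so the orchestrator marks the run as
        # failed and its retry/alerting machinery takes over.
        cur = conn.cursor()
        try:
            cur.execute(FAILING_ROWS_SQL)
            failing = cur.fetchone()[0]
        finally:
            cur.close()
        if failing:
            raise ValueError(f"data quality gate failed: {failing} bad rows")

Wiring a check like this into a DAG as its own task keeps the gate reusable across pipelines and keeps failures visible in orchestration dashboards rather than buried in transform logs.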

Benefits

  • 401(k) savings plan, vested from day one, with a 6% match
  • Performance- and recognition-based incentives
  • Tuition assistance
  • Mental healthcare
  • Fertility and adoption assistance
  • Workplace flexibility
  • GEICO Flex program, which offers the ability to work from anywhere in the US for up to four weeks per year