About The Position

At Snowflake, we are powering the era of the agentic enterprise. To usher in this new era, we seek AI-native thinkers across every function who are energized by the opportunity to reinvent how they work. You don't just use tools; you bring innate curiosity, treating AI as a high-trust collaborator that is core to how you solve problems and accelerate your impact. We look for low-ego individuals who thrive in dynamic, fast-moving environments and move with an experimental mindset, rapidly testing emerging capabilities to discover simpler, more powerful ways to deliver results. At Snowflake, your role isn't just to execute a function, but to help redefine how work gets done.

Observe by Snowflake is an AI-powered observability platform built on the Snowflake AI Data Cloud and engineered for scale. We ingest and store logs, metrics, traces, and events on an open, scalable data lakehouse using open formats like Apache Iceberg, at dramatically lower cost. A dynamic Context Graph and a chat-based AI SRE provide rich context and automated workflows, so teams can move from detection to root cause and resolution 10x faster. Leading engineering teams at companies like Capital One, Topgolf, and Dialpad rely on Observe to troubleshoot hundreds of terabytes of telemetry daily while maintaining reliability at enterprise scale. As part of Snowflake, Observe combines startup-style ownership and velocity with the global reach, operational excellence, and ecosystem of one of the world's leading data platforms.

We are hiring a Senior Software Engineer on the Observe team to own the streaming data product surface: the tables, views, and materialized views at the core of Observe's architecture. Observe's data lake approach lets customers correlate heterogeneous telemetry (logs, metrics, traces, events) across a unified data model. This role owns that data model: how customers define, shape, and query the semi-structured data that makes cross-signal correlation low-latency and cost-efficient at petabyte scale, over continuous streaming telemetry.

Requirements

  • 7+ years of software engineering experience with deep expertise in databases, SQL, stream processing, or data pipeline systems
  • Deep knowledge of data processing or streaming internals — late-arriving data, backfill and reprocessing on schema changes, event-time vs. processing-time semantics — with experience building products and applications on top of them
  • Demonstrated experience designing and shipping APIs, with strong taste in database schema design, versioning, and developer ergonomics
  • An architect's mental model — you think in systems, interfaces, contracts, and long-term evolution rather than short-term hacks
  • A strong sense of user empathy and product intuition — you think beyond APIs and care about how customers define and query their data
  • Proficiency in Go or another systems language, with ability to write production-grade distributed systems code

Nice To Haves

  • Experience building customer-facing data modeling or pipeline authoring products
  • Hands-on experience with streaming semantics in production: watermarks, windowing, ordering, delivery guarantees, late-arriving data
  • Background in designing or extending query languages, schema DSLs, or transformation DAG semantics
  • Prior work building internal data platforms that turned raw event streams into curated, queryable tables for internal teams
  • Familiarity with Apache Iceberg, open table formats, or data lakehouse architectures

Responsibilities

  • Own the data modeling product surface — the APIs, schemas, and abstractions through which customers create tables, views, and materialized views that unify their telemetry for correlation and querying, designed for high-performance execution at scale
  • Design the right abstractions for how customers create and manage queryable data — from streaming materialized views to reference tables to log-derived metrics — each serving different needs but composing under one coherent, evolvable model
  • Define freshness and staleness semantics that let customers trust their materialized views are current, and design the controls to tune the trade-off between query latency and compute cost
  • Design APIs with strong schema taste: versioning, backwards compatibility, polymorphic data models, and clean contracts between systems
  • Drive requirements and shape the execution engine based on what the product surface needs
  • Layer complexity so an SRE gets a useful table from opinionated defaults in minutes, while a data engineer can express multi-stage pipelines with custom joins, windowing, and time-based aggregations
  • Lead a team technically — setting architectural direction, writing production code, and mentoring engineers

Benefits

  • For jobs located in the United States, salary and benefits information can be found on the Snowflake Careers Site: careers.snowflake.com