Senior Software Engineer - Data Lake & BI

CoreWeave · Sunnyvale, CA
15h · $162,000 - $242,000 · Hybrid

About The Position

CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.

What You'll Do

CoreWeave is the top-rated AI cloud for high-performance GPU infrastructure across AI/ML, visual effects, rendering, and real-time inference. Our stack is engineered for speed, scale, and cost-efficiency, an unmatched alternative to traditional hyperscalers. At CoreWeave, infrastructure is the product.

About This Role

We're looking for a Senior Engineer to be a driving force on CoreWeave's Benchmarking & Performance team, with a singular focus on our planet-scale performance data warehouse. You will own the architecture and evolution of how we ingest, store, transform, and surface performance data across every data center in our global infrastructure, turning billions of raw events into the trusted, queryable insights that power our engineering and business decisions. If you believe that the right storage format, the right schema, and the right query engine can turn a mountain of telemetry into a competitive advantage, this role was built for you. You will shape the data foundations that underpin industry-leading benchmark publications, internal performance SLAs, and executive-level reporting, working hand-in-hand with world-class partners and communities to ensure every number we publish is authoritative, reproducible, and actionable.

Requirements

  • 5+ years of experience building distributed systems, data platforms, or cloud services.
  • Strong coding skills in Python or Go (C++ is a plus) and deep familiarity with networked systems and performance.
  • Hands-on experience with Kubernetes at production scale, CI/CD, and observability stacks (Prometheus, Grafana, OpenTelemetry).
  • Demonstrated expertise with data lake architectures, columnar databases, and modern table formats (Iceberg, Parquet, Avro); you understand the trade-offs between them and know when to reach for each.
  • Practical experience designing and managing hot/cold storage tiers for large-scale analytical workloads.
  • Strong schema design instincts—you think in partitions, sort keys, and evolution strategies, not just tables and columns.
  • Working knowledge of time-series databases and fluency in PromQL or MetricsQL for building dashboards, alerts, and ad-hoc analysis.
  • Experience building BI views, visualizations, and data-driven playbooks that turn raw data into organizational decision-making tools.
  • Strong communicator comfortable collaborating with cross-functional teams and external partners.
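To make the "think in partitions, sort keys, and evolution strategies" requirement concrete, here is a minimal sketch of deriving partition values for a telemetry event, the kind of mapping a table format like Apache Iceberg uses to prune file scans. The field names (`ts`, `datacenter`, `gpu_model`) and the daily/datacenter partitioning scheme are illustrative assumptions, not CoreWeave's actual schema.

```python
from datetime import datetime, timezone

def partition_values(event: dict) -> dict:
    """Map a raw telemetry event to partition columns: a daily time bucket
    plus a coarse, low-cardinality location key, so time-range queries
    touch only the files they need. (Hypothetical field names.)"""
    ts = datetime.fromtimestamp(event["ts"], tz=timezone.utc)
    return {
        "event_date": ts.strftime("%Y-%m-%d"),  # daily partitions for time pruning
        "datacenter": event["datacenter"],       # coarse key, bounded cardinality
    }

# 1735689600 is 2025-01-01 00:00:00 UTC
event = {"ts": 1735689600, "datacenter": "us-east-04", "gpu_model": "H100"}
print(partition_values(event))  # {'event_date': '2025-01-01', 'datacenter': 'us-east-04'}
```

The design point is cardinality: partitioning on a raw timestamp or a per-GPU ID would explode the number of partitions, while day-plus-datacenter keeps partitions few and large enough to prune effectively.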

Nice To Haves

  • Experience with time-series databases, LSM-based storage engines, or custom data pipelines.
  • Experience running MLPerf submissions or similar large-scale audited benchmarks.
  • Contributions to OSS projects such as Apache Iceberg, Apache Spark, Trino, llm-d, vLLM, or PyTorch.
  • Exposure to benchmarking large GPU fleets or multi-region clusters.
  • Experience with CUDA kernels, NCCL/SHARP, RDMA/NUMA, or GPU interconnect topologies.
  • Familiarity with data cataloging, lineage tools, or data governance frameworks.

Responsibilities

  • Data Lake Architecture - Design and build our core performance data lake on columnar storage foundations. Select, integrate, and optimize table formats (Apache Iceberg, Parquet, Avro) to balance query performance, storage cost, and schema evolution. Implement hot and cold storage tiering strategies that keep recent data instantly queryable while efficiently archiving historical benchmarks at petabyte scale.
  • Schema Design & Data Modeling - Define and govern schemas for performance telemetry: latency distributions, throughput metrics, GPU utilization, cost-per-token, and hardware health signals. Establish naming conventions, partitioning strategies, and lifecycle policies that keep the warehouse fast, consistent, and self-documenting as new workloads and hardware generations come online.
  • Time-Series & Metrics Infrastructure - Own and extend our time-series database (TSDB) layer. Write and optimize PromQL/MetricsQL queries that power real-time dashboards, alerting, and trend analysis across thousands of GPUs and hundreds of benchmark runs. Bridge the gap between streaming metrics and batch-analytical workloads so engineers get sub-second answers and analysts get complete historical context from the same data.
  • BI, Visualization & Data-Driven Processes - Build compelling, self-service BI views and dashboards (Grafana, Looker, or similar) that translate raw performance data into clear stories for engineers, product managers, and executives. Design playbooks and data-driven runbooks that tie benchmark regressions, capacity decisions, and competitive analyses directly to live data. Champion a culture where every performance claim is backed by a reproducible query and a versioned dataset.
  • Query Optimization & Performance - Profile and tune query engines against columnar and time-series stores; reduce scan times, optimize join strategies, and introduce materialized views or pre-aggregations where they matter most. Benchmark the benchmarking infrastructure itself—ensuring our data platform meets its own strict P99 latency and freshness SLAs.
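The hot/cold tiering strategy described above can be sketched as a simple age-based routing rule: recent data stays on fast, instantly queryable storage while older data moves to cheaper archive tiers. The 30-day cutoff and the tier names here are assumptions for illustration only.

```python
from datetime import datetime, timedelta, timezone

# Assumed retention window; a real policy would likely be per-dataset.
HOT_RETENTION = timedelta(days=30)

def storage_tier(event_time: datetime, now: datetime) -> str:
    """Route data by age: within the hot window it stays instantly
    queryable; beyond it, it is archived to cold storage."""
    return "hot" if now - event_time <= HOT_RETENTION else "cold"

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(storage_tier(datetime(2025, 5, 20, tzinfo=timezone.utc), now))  # hot
print(storage_tier(datetime(2024, 1, 1, tzinfo=timezone.utc), now))   # cold
```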
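The pre-aggregation idea in the "Query Optimization & Performance" bullet can be illustrated with a toy rollup: raw latency events are grouped into hourly buckets once, so dashboard queries read the small rollup instead of rescanning raw data. The event shape `(hour, latency_ms)` and the nearest-rank percentile are assumptions chosen to keep the sketch self-contained.

```python
from bisect import insort
from collections import defaultdict

def build_hourly_rollup(events):
    """Group raw (hour, latency_ms) events into sorted per-hour lists,
    the 'materialized view' that later queries read instead of raw data."""
    rollup = defaultdict(list)
    for hour, latency_ms in events:
        insort(rollup[hour], latency_ms)  # keep each bucket sorted for quantiles
    return rollup

def p99(sorted_latencies):
    """Nearest-rank 99th percentile over a pre-sorted bucket."""
    idx = max(0, int(len(sorted_latencies) * 0.99) - 1)
    return sorted_latencies[idx]

events = [(0, ms) for ms in range(1, 101)]  # 100 samples in hour 0: 1..100 ms
rollup = build_hourly_rollup(events)
print(p99(rollup[0]))  # nearest-rank p99 of 1..100 -> 99
```

A production system would of course use the query engine's own materialized views or a TSDB's recording rules; the point of the sketch is only the shape of the trade: pay aggregation cost once at ingest, then answer repeated percentile queries from a far smaller dataset.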

Benefits

  • Medical, dental, and vision insurance - 100% paid for by CoreWeave
  • Company-paid Life Insurance
  • Voluntary supplemental life insurance
  • Short and long-term disability insurance
  • Flexible Spending Account
  • Health Savings Account
  • Tuition Reimbursement
  • Ability to Participate in Employee Stock Purchase Program (ESPP)
  • Mental Wellness Benefits through Spring Health
  • Family-Forming support provided by Carrot
  • Paid Parental Leave
  • Flexible, full-service childcare support with Kinside
  • 401(k) with a generous employer match
  • Flexible PTO
  • Catered lunch each day in our office and data center locations
  • A casual work environment
  • A work culture focused on innovative disruption


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 251-500 employees
