About The Position

NVIDIA’s Hardware Infrastructure organization is looking for a Senior Data Engineer to join the Data & Observability Platform team. We collaborate directly with NVIDIA’s rapidly growing AI, hardware, and software engineering and research teams, providing the data backbone that powers our massive-scale operations. As an infrastructure-focused Data Engineer, you will build the foundational layers of our data platform: high-throughput pipelines that move petabytes of telemetry data, and the central Data Lakehouse where it lands. You will also work in an embedded capacity with engineering teams, optimizing their data schemas and storage efficiency to solve real-world scale challenges.

Requirements

  • BS or MS in Computer Science, Electrical Engineering, or related field (or equivalent experience).
  • 8+ years of experience in data engineering with a strong focus on infrastructure, streaming, or platform engineering.
  • Strong Coding Fluency: Expert proficiency in Python for automation, tooling, and orchestration, plus proficiency in Java or Scala for high-performance data processing (Spark/Flink).
  • Deep Streaming Expertise: Extensive experience with Kafka. You have a deep understanding of consumer groups, partition strategies, offset management, and handling backpressure in high-volume environments.
  • Data Lake Experience: Hands-on experience with modern table formats (Apache Iceberg, Delta Lake, or Hudi) and distributed query engines (Trino/Presto/Spark).
  • Containerization & Ops: Experience deploying, configuring, and debugging applications on Kubernetes using Helm.

Nice To Haves

  • Familiarity with EDA workflows, semiconductor design lifecycles, or experience managing simulation/emulation logs for hardware engineering teams.
  • Ability to navigate complex organizational structures, partnering with hardware architects and engineering leads to translate broad requirements into concrete data infrastructure solutions.
  • Experience migrating from legacy search stores (Elasticsearch/OpenSearch) to cold storage (S3/Iceberg).
  • Experience with high-performance log routing frameworks like Vector.
  • Background in identifying cost drivers in petabyte-scale environments and implementing storage cost optimization initiatives.

Responsibilities

  • Build Scalable Data Pipelines: Develop and deploy high-throughput, reliable pipelines to move substantial volumes of telemetry information from global edge locations to our central Data Lakehouse.
  • Architect the Data Lakehouse: Lead the implementation of our tiered storage strategy. You will design efficient schemas that optimize for both write-heavy real-time ingestion and fast, cost-effective interactive queries.
  • Orchestration & Automation: Modernize workflow scheduling by implementing robust, code-based data pipelines. You will build workflows that handle complex dependencies, automated backfills, and intelligent retries.
  • Drive Embedded Data Optimization: Partner directly with internal engineering teams to audit their data usage. You will identify heavy-hitter datasets and primary storage consumers, refactor inefficient schemas, and enforce lifecycle policies to significantly reduce storage costs.
  • Manage Data Infrastructure: Own the operation of the underlying platform. You will manage stateful deployments on Kubernetes, optimize Spark performance, and ensure the reliability of our streaming architecture.
  • Enforce Quality & Governance: Implement automated schema validation and data quality checks to prevent bad data from entering the lake. You will collaborate with security teams to apply automated masking and access controls.

Benefits

  • You will be eligible for equity and benefits.