Rippling People Center · Posted 3 months ago
Full-time
San Francisco, CA
1,001-5,000 employees
Publishing Industries

At Rippling's core is a sophisticated data platform that connects thousands of data sources through the employee record, creating a comprehensive graph that powers our entire ecosystem. This employee-centric graph enables complex workflows, approval processes, permission systems, and analytical capabilities. We're building a high-performance, distributed data platform that must support:

  • Real-time event processing across thousands of integrations
  • Complex data transformation and enrichment workflows
  • Low-latency queries for interactive user experiences
  • Massive-scale processing for analytics and reporting
  • Cross-system data consistency with strong reliability guarantees

Our technology stack includes Kafka, Flink, MongoDB, PostgreSQL, Apache Pinot, Apache Presto, and Amazon S3, each carefully selected to address specific data processing patterns and access requirements.
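For a concrete flavor of the work, here is a minimal sketch, in Flink's Java DataStream API, of the kind of job this stack implies: consume employee-update events from Kafka, apply an enrichment step, and emit downstream with exactly-once checkpointing enabled. This is an illustration, not Rippling's code; the topic name, broker address, and consumer group are hypothetical.

```java
// Minimal sketch (not Rippling's actual code) of a Kafka -> Flink pipeline:
// consume employee-update events, enrich them, and emit them downstream.
// Topic, broker, and group names are hypothetical.
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EmployeeEventPipeline {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpoints let Kafka offsets and operator state commit atomically,
        // which is the basis of Flink's exactly-once guarantee.
        env.enableCheckpointing(30_000, CheckpointingMode.EXACTLY_ONCE);

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")                  // hypothetical broker
                .setTopics("employee-updates")                      // hypothetical topic
                .setGroupId("employee-graph-enricher")              // hypothetical group
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "employee-updates")
           .map(raw -> raw.trim())   // placeholder for parse/enrich against the employee graph
           .returns(Types.STRING)
           .print();                 // stand-in for a real sink (MongoDB, Pinot, S3, ...)

        env.execute("employee-event-pipeline");
    }
}
```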

What you'll do:
  • Architect for Scale: Design and implement next-generation streaming data infrastructure to handle 100x growth in data volume and velocity while maintaining performance and reliability SLAs.
  • Build Unified Data Pipelines: Create robust, fault-tolerant streaming pipelines that seamlessly connect disparate systems, ensuring data consistency across our distributed architecture.
  • Solve Complex Distributed Systems Challenges: Tackle problems like exactly-once processing, event ordering, schema evolution, cross-datacenter replication, and graceful failure recovery (a producer-side sketch of exactly-once semantics follows this list).
  • Drive Technical Strategy: Collaborate with product and engineering leadership to define the technical roadmap for Rippling's streaming infrastructure, making critical architecture decisions that will shape our platform for years to come.
  • Mentor and Lead: Guide junior engineers through complex technical challenges, establish best practices, and elevate the entire team's capabilities through knowledge sharing and code reviews.
  • Operational Excellence: Implement sophisticated observability solutions, establish SLOs, create runbooks, and participate in on-call rotations to ensure the reliability of mission-critical systems.
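One of the challenges named above, exactly-once processing, is easiest to see on the producer side. The sketch below is again illustrative, with hypothetical topic names: it uses Kafka transactions so that writes to multiple topics commit or abort atomically, meaning consumers reading with isolation.level=read_committed never observe a partial update.

```java
// Illustrative sketch: Kafka transactions for atomic multi-topic writes.
// Broker address, topics, and transactional.id are hypothetical.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ExactlyOnceWriter {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // hypothetical broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("enable.idempotence", "true");        // dedupe broker-side retries
        props.put("transactional.id", "employee-sync-1"); // stable id enables transactions

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            try {
                // Both sends commit or abort together, so downstream consumers
                // never see an employee update without its audit entry.
                producer.send(new ProducerRecord<>("employee-updates", "emp-42", "{...}"));
                producer.send(new ProducerRecord<>("audit-log", "emp-42", "{...}"));
                producer.commitTransaction();
            } catch (Exception e) {
                producer.abortTransaction();
                throw e;
            }
        }
    }
}
```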
What you'll need:
  • 8+ years of experience building distributed systems, with a focus on high-throughput data processing
  • Deep expertise with stream processing technologies (Kafka, Flink, Spark Streaming, etc.)
  • Experience working in a fast-paced, dynamic environment
  • Experience building projects with the right user abstractions and architecture
  • Comfort developing scalable, extensible core services used across many products
What we offer:
  • Competitive salary
  • Benefits
  • Equity