About The Position

The proliferation of machine log data has the potential to give organizations unprecedented real-time visibility into their infrastructure and operations. With this opportunity come tremendous technical challenges around ingesting, managing, and understanding high-volume streams of heterogeneous data.

As a Staff Software Engineer - Core Ingest, you will actively contribute to, and lead engineers in, the design and development of new distributed data processing capabilities. You will be instrumental in helping us solve complex, low-latency, distributed systems challenges as our scale continues to grow. Our system is a highly distributed, fault-tolerant, multi-tenant platform that includes cutting-edge components for storage, messaging, search, and analytics. It ingests and analyzes petabytes of data a day while making exabytes of data available for search and forensic analysis.

You are a strong software engineer who is passionate about large-scale systems. You care about producing clean, elegant, maintainable, robust, well-tested code, and you do this as a member of a team, helping the group arrive at a better solution than any of you would individually. Ideally, you have experience with the performance, scalability, and reliability demands of 24x7 commercial services.

Requirements

  • B.S. or higher in Computer Science or a related discipline (M.S. a plus)
  • 6-8+ years of industry experience with a proven track record of ownership and delivery
  • Experience developing scalable distributed data processing solutions
  • Experience in multi-threaded programming
  • Experience in running large scalable distributed services following a microservice architecture
  • Experience with Apache Kafka
  • Hands-on object-oriented programming experience (e.g., Java, Scala)
  • Excellent verbal and written communication skills
  • Understanding of the performance characteristics of commonly used data structures (maps, lists, trees, etc.)

Nice To Haves

  • Experience with big data and/or 24x7 commercial services
  • Comfort working with Unix (Linux, OS X)
  • Agile software development experience (test-driven development, iterative and incremental development)

Responsibilities

  • Design and implement extremely high-volume, fault-tolerant, scalable backend systems that process and manage petabytes of customer data.
  • Improve algorithms that elastically schedule load at runtime across clusters of thousands of machines.
  • Apply multi-threaded programming and distributed systems expertise across the platform.
  • Improve systems to provide performance guarantees to customers in a shared-everything multi-tenant architecture.
  • Lead and contribute to the re-architecting of our internal message processing technology to petabyte-per-day scale.
  • Help manage exabytes of data using technologies such as Kafka, Kubernetes, and Docker.
  • Work across Sumo, interfacing with multiple teams, including Search, Security, and Metrics & Tracing, to identify requirements and architect solutions that meet their core data ingest needs.

Benefits

  • Bonus or commission plans
  • Benefits offerings
  • Equity awards