About The Position

Sumo Logic is hiring a Software Engineer II for our Core Ingest team. You will actively contribute to the design and development of new distributed data processing capabilities across our large-scale data platform, and you will be instrumental in helping us solve complex, low-latency distributed-systems challenges as we handle ever-increasing scale. Our system is a highly distributed, fault-tolerant, multi-tenant platform that includes state-of-the-art components for storage, messaging, search, and analytics. It ingests and analyzes petabytes of data a day while making exabytes of data available for search and forensic analysis. Sumo Logic supports both remote and hybrid work for this role.

Requirements

  • B.S. or higher in Computer Science or a related discipline (M.S. a plus)
  • 2+ years of industry experience with a proven track record of ownership and delivery
  • Experience developing scalable distributed data processing solutions
  • Experience in multi-threaded programming
  • Experience in running large, scalable distributed services following a microservice architecture
  • Hands-on object-oriented programming experience (e.g., Java, Scala)
  • Excellent verbal and written communication skills
  • Willingness to participate in an occasional on-call rotation. Rotations come up approximately every 6-8 weeks and cover a 12-hour shift starting between 9 and 11 am PDT/MDT/CDT, with one week serving as primary followed by one week assisting the primary only if needed.
  • Must be authorized to work in the United States at the time of hire and for the duration of employment. At this time, we are not able to offer non-immigrant visa sponsorship for this position.

Nice To Haves

  • Experience with big data and/or 24x7 commercial services is highly desirable.
  • Experience with Apache Kafka is a plus.
  • Comfort working with Unix (Linux, OS X).
  • Agile software development experience (test-driven development, iterative and incremental development) is a plus.

Responsibilities

  • Designing and implementing extremely high-volume, fault-tolerant, scalable backend systems that process and manage petabytes of customer data.
  • Working to improve algorithms built to schedule load on clusters of thousands of machines elastically at runtime.
  • Improving systems to provide performance guarantees to customers in a shared-everything multi-tenant architecture.
  • Contributing to the re-architecting of our internal message processing technology to petabyte per day scale.
  • Helping manage exabytes of data using the latest and greatest technologies, such as Kafka, Kubernetes, and Docker.
  • Partnering across Sumo Logic and interfacing with multiple teams, including Search, Security, and Metrics & Tracing, to identify requirements and architect solutions that meet their core data-ingest needs.

Benefits

  • Bonus or commission plans
  • Equity awards


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Associate degree
  • Number of Employees: 251-500 employees
