Senior Distributed Systems Engineer / Architect

RapidFort, Inc.
$170,000 - $200,000 | Hybrid

About The Position

RapidFort is a Series A cybersecurity company backed by $42M from leading investors, building the next generation of container and software supply-chain security. Our platform helps enterprises and U.S. government agencies eliminate vulnerabilities in container images, secure Kubernetes environments, and protect cloud-native infrastructure at runtime. Because of our work with the DoD and U.S. federal customers, U.S. citizenship is required for this role.

We are looking for a Distributed Systems Engineer / Architect to design and build highly scalable custom systems that process large volumes of data across CPU-, disk-, and network-intensive workloads. This role is deeply hands-on and requires strong systems thinking, algorithm design, and performance optimization skills. You will work on core infrastructure and algorithms, building systems that maximize resource utilization across distributed environments.

The ideal candidate enjoys working close to the metal, writing efficient code and tooling (primarily in Python and Bash) while building the instrumentation needed to continuously measure, analyze, and improve system performance. This role calls for a data-driven mindset and a passion for building reliable, scalable systems from first principles.

Requirements

  • Strong experience building distributed systems or large-scale backend infrastructure
  • Deep understanding of systems performance (CPU, memory, disk I/O, networking)
  • Experience optimizing workloads for throughput and efficiency
  • Strong Python development skills
  • Strong Bash/shell scripting skills
  • Ability to implement and reason about algorithms and system-level logic
  • Experience with parallel processing, distributed job execution, or large data pipelines
  • Familiarity with Linux systems, resource scheduling, and performance tuning
  • Understanding of networked systems and distributed coordination
  • Strong data-driven mindset with focus on measurement and experimentation
  • Experience building observability, metrics, and instrumentation
  • Ability to debug complex systems in production environments

Nice To Haves

  • Experience with high-performance computing (HPC) workloads
  • Experience with containerized environments (Docker/Kubernetes)
  • Background in large-scale data processing or distributed compute frameworks
  • Familiarity with performance profiling tools and system tracing

Responsibilities

  • Design and implement scalable distributed systems that handle heavy CPU, disk, and network workloads.
  • Architect systems for high throughput, reliability, and efficient resource utilization.
  • Develop distributed algorithms and data processing pipelines.
  • Analyze system behavior to identify bottlenecks across compute, storage, and network layers.
  • Optimize workloads for maximum efficiency and minimal resource waste.
  • Develop strategies for parallelization, batching, and workload scheduling.
  • Implement system components and tooling primarily in Python and Bash.
  • Build custom orchestration, automation, and distributed job execution mechanisms.
  • Write efficient algorithms and low-level logic to manage large-scale workloads.
  • Build instrumentation, metrics, and telemetry to measure system performance.
  • Develop dashboards and analysis workflows to guide optimization decisions.
  • Use empirical data and experimentation to improve system behavior.
  • Design systems that operate reliably across distributed environments.
  • Implement monitoring, debugging, and recovery mechanisms for large-scale systems.
  • Collaborate with infrastructure and platform teams to ensure smooth deployment and operation.
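To give a flavor of the work described above, here is a minimal, hypothetical Python sketch (not from RapidFort's codebase; all names are illustrative) of two recurring themes in the responsibilities: fanning a workload out in batches across workers, and instrumenting it with simple timing metrics to guide optimization.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_batch(batch):
    """CPU-bound stand-in for a real unit of work (e.g. analyzing one image layer)."""
    return sum(x * x for x in batch)

def run_batches(items, batch_size=4, workers=2):
    """Split items into batches, fan them out to a worker pool,
    and record per-batch wall time as lightweight instrumentation."""
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    results, timings = [], []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(process_batch, b) for b in batches]
        for fut in futures:
            start = time.perf_counter()
            results.append(fut.result())
            timings.append(time.perf_counter() - start)
    return results, timings

if __name__ == "__main__":
    results, timings = run_batches(list(range(10)))
    print(results)          # one result per batch
    print(len(timings))     # one timing sample per batch
```

In a real system, batch sizing, worker counts, and scheduling would be tuned empirically from metrics like these rather than fixed up front; that measurement loop is the "data-driven mindset" the posting emphasizes.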