Staff Software Engineer - Supernal

New York, NY
Remote

About The Position

Supernal helps small-to-medium businesses hire their first AI employee. Our AI teammates are built using intelligent, agentic workflows deployed on a proprietary platform. We deliver working, value-generating AI Employees, not tools, that handle real business processes alongside human teams.

The Role

We're looking for a Staff/Principal Software Engineer to own and evolve the core platform that powers our AI employees. This is a technical leadership position responsible for the systems that enable our agents to scale reliably: the Django backend, distributed task infrastructure, event-driven architecture, Kubernetes deployments, and observability stack.

You'll work across the full system, from database query optimization to Helm chart tuning to designing new platform abstractions. You'll be a force multiplier for the engineering team, driving architectural decisions, eliminating scaling bottlenecks, and establishing patterns that make the platform more robust and developer-friendly. This role reports to the Director of Engineering and involves significant autonomy in shaping technical direction.

Requirements

  • 10+ years building and operating production backend systems at scale
  • Deep expertise in Python (Django preferred) and relational databases (PostgreSQL)
  • Hands-on experience with Kubernetes, Helm, and cloud infrastructure (GCP preferred)
  • Strong background in distributed systems: message queues, event sourcing, workflow orchestration
  • Production experience with async task systems (Celery, Dramatiq, or similar)
  • Track record of debugging complex production issues across multiple services
  • Ability to work autonomously and drive technical initiatives without close supervision
  • Clear technical communication—able to explain tradeoffs and build consensus
  • Overlap with Americas timezones for collaboration
  • Reliable high-speed internet

Nice To Haves

  • Experience with Temporal or similar workflow engines
  • Background in LLM infrastructure, RAG systems, or AI/ML platforms
  • Familiarity with OpenTelemetry, Datadog, or similar observability stacks
  • Experience with KEDA or other Kubernetes autoscaling solutions
  • Contributions to multi-tenant SaaS platform architecture
  • History of improving developer experience and platform abstractions

Responsibilities

  • Drive platform architecture decisions and align the team on scalable patterns and long-term maintainability
  • Review a high volume of code, design docs, and architectural proposals for scalability, reliability, security, and operability
  • Be a technical mentor and force multiplier: unblock engineers, raise the bar on production readiness, and establish platform best practices
  • Own and evolve the core backend platform (Django/DRF/ASGI) performance and correctness
  • Scale async execution across Celery + Dramatiq + Temporal/Cortex; implement resilient workflow patterns (retries, circuit breakers, graceful degradation)
  • Optimize PostgreSQL/pgvector (query tuning, connection pooling) and caching strategies
  • Maintain and improve Kubernetes deployment infrastructure (GKE, Helm, Terraform/OpenTofu), CI/CD pipelines, and rollout strategies
  • Own KEDA autoscaling policies and resource allocation across worker pools
  • Own reliability of RabbitMQ, Redis, and PostgreSQL infrastructure; lead incident response and post-mortems
  • Extend OpenTelemetry + Datadog instrumentation, dashboards, alerts, and SLOs; profile and reduce latency/memory bottlenecks