Backend Engineer

Mem0 · San Francisco Bay Area, CA
Onsite

About The Position

Role Summary: Own the backend that powers Mem0’s memory platform. You’ll design clean REST APIs, model data across relational and graph stores, and operate services in production. When customers hit issues, you’ll chase them down to root cause, ship fixes, and harden the system, collaborating closely with frontend and research to deliver fast, reliable features. What you’ll do day to day is listed under Responsibilities below.

Requirements

  • 3+ years building backend systems and shipping REST APIs to production.
  • Strong Python fundamentals; experience with async programming and a major web framework (FastAPI/Django/Flask).
  • Solid data modeling and SQL skills; hands-on with query tuning and performance debugging in Postgres/MySQL.
  • Experience with graph databases (e.g., Neo4j or Amazon Neptune) and appropriate data modeling trade-offs.
  • Comfortable running services on AWS with Docker and Kubernetes.
  • Demonstrated root-cause analysis and ownership from incident to prevention.
  • Clear communicator and effective collaborator with frontend, research, and customers.

Nice To Haves

  • GraphQL/gRPC; event-driven systems (SNS/SQS/Kafka) and background workers (Celery/RQ).
  • Caching, rate limiting, multi-tenancy, and feature-flag strategies.
  • Security & privacy best practices (PII handling, secrets management).
  • Deep observability experience (OpenTelemetry, SLO-based alerting).
  • Prior work with search/retrieval or memory systems.
  • On-call experience and running blameless postmortems.

Responsibilities

  • Design & ship REST APIs: Define contracts, versioning, auth, rate limits; write migrations and docs.
  • Model data & schemas: Relational (Postgres) and graph (e.g., Neo4j); enforce integrity and performance.
  • Debug customer issues end-to-end: Trace with logs/metrics/traces, reproduce, fix, and write preventative guardrails.
  • Optimize performance: Tune slow SQL with EXPLAIN/ANALYZE, indexes, partitioning, pagination, and caching (e.g., Redis).
  • Build services in Python: Async where it helps (FastAPI/Starlette, Django/DRF, Flask), background jobs, queues, schedulers.
  • Operate in the cloud: Containerize with Docker, deploy on Kubernetes (EKS), and use AWS primitives (EC2, RDS/Aurora, S3, IAM).
  • Instrument everything: Custom metrics, structured logging, tracing; set SLOs and alerts (CloudWatch/Prometheus/OpenTelemetry).
  • Collaborate & ship: Work with frontend and research to scope APIs and deliver features to production.