About The Position

Planet’s mission is to image the entire world every day, making global change visible, accessible, and actionable. We are at a critical inflection point: moving from broad AI research to a delivery-focused "productization" model. To drive this shift, we are building a new product group focused on launching an AI Geospatial Assistant that transforms how our customers interact with global imagery to solve high-stakes problems in forensics and daily change detection. Our goal is to make these complex insights accessible through an intuitive interface that requires zero user training. Operating with a zero-to-one startup mindset, this team prioritizes weekly learning velocity and customer-driven graduation criteria to move rapidly from private alpha to general availability.

As a Software Engineer, you will help build the backend systems that bring our AI Geospatial Assistant to life. While our research teams develop the core models, you will own the "last mile" of delivery: architecting high-throughput backend services, scaling our systems, and ensuring our agentic workflows are fast, reliable, and cost-effective at global scale.

This is a full-time, hybrid role that requires working from our San Francisco office three days per week.

Requirements

  • Bachelor’s degree in Computer Science or an equivalent field
  • 4+ years of experience building and scaling high-performance backend systems in Python or Go
  • Proficiency in AWS or GCP, including experience with Infrastructure as Code (Terraform) and CI/CD pipelines for high-availability services
  • Solid familiarity with LLM orchestration frameworks (LangChain, LlamaIndex) from an implementation perspective—knowing how to build reliable agents, handle retries, and manage state
  • Proficiency with vector search pipelines and high-performance distributed computing

Nice To Haves

  • Experience with containerization and distributed serving tools such as Docker, Kubernetes, or Ray Serve
  • A track record of applying scientific rigor to "ground truth" verification in AI models to maintain user trust and credibility

Responsibilities

  • Develop and optimize multimodal LLM applications
  • Work with and support the infrastructure needed for scaling and delivering embeddings
  • Architect AI Orchestration: Build and maintain the high-scale systems required for LLM orchestration and agentic workflows, ensuring low-latency responses across petabytes of imagery
  • Operationalize Research: Collaborate with backend engineers to transition experimental AI models into stable, low-latency production inference endpoints
  • Build AI Observability: Implement production-grade monitoring, logging, and tracing for our AI services to ensure reliability and facilitate rapid debugging of our systems
  • Benchmark Performance: Define model success criteria and instrumentation to ensure the assistant consistently outperforms vanilla LLM alternatives

Benefits

  • Comprehensive Medical, Dental, and Vision plans
  • Health Savings Account (HSA) with a company contribution
  • Generous Paid Time Off in addition to holidays and company-wide days off
  • 16 Weeks of Paid Parental Leave
  • Wellness Program and Employee Assistance Program (EAP)
  • Home Office Reimbursement
  • Monthly Phone and Internet Reimbursement
  • Tuition Reimbursement and access to LinkedIn Learning
  • Equity
  • Commuter Benefits (if local to an office)