Data Engineer I

ClimateAi | San Francisco, CA
$115,000 - $145,000 | Hybrid

About The Position

At ClimateAi, we choose to act. We believe resilience is just as urgent as mitigation. We are building technology to empower people and industries to make smarter, faster decisions in the face of weather volatility. Our mission is to climate-proof the global economy, with the goal of achieving zero loss of lives, livelihoods, and nature. From farmers and supply chain managers to risk analysts and policymakers, our users depend on ClimateAi's forecasts and insights to prepare for what's coming and take action in time. In 2022, ClimateAi was recognized in TIME Magazine's Best Inventions, alongside innovators like OpenAI, for our breakthrough work in climate resilience technology.

What if your next position helped protect entire communities and safeguard the future of food, water, and livelihoods?

As a Data Engineer I at ClimateAi, you will be a hands-on contributor on the team that powers our climate models, forecasts, and customer-facing products. You'll build and maintain the pipelines that turn raw climate, weather, and agronomic data into trusted, reliable datasets used every day by Data Science, ML Engineering, and Product. This is an early-career role designed for a strong CS or Computer Engineering graduate who is excited to grow into a great data engineer. You'll work closely with senior engineers and a manager who is invested in your development, with regular code reviews, design feedback, and pairing on real production systems from day one.

Requirements

  • B.S. in Computer Science, Computer Engineering, or a closely related field (recent graduates encouraged to apply)
  • Solid programming fundamentals in Python and working knowledge of SQL
  • Exposure to cloud services (AWS, GCP, or Azure) as demonstrated through coursework, internships, or personal projects
  • Understanding of core data types, formats (CSV, Parquet, JSON), and ingestion patterns
  • Strong problem-solving instincts: you debug methodically, read other people’s code carefully, and ask for help when blocked
  • A growth mindset and openness to feedback: you treat code reviews and design feedback as opportunities to level up

Nice To Haves

  • Internship, co-op, or open-source experience writing data pipelines in production-like environments
  • Coursework or projects in distributed systems, machine learning, or climate/environmental science
  • Exposure to data pipeline scheduling and orchestration tools, such as Apache Airflow, Dagster, Argo Workflows, or similar

Responsibilities

  • Write production-grade Python and SQL to ingest, clean, and transform climate, weather, and agronomic datasets from a wide range of third-party sources
  • Build and maintain scalable pipelines that perform scheduled and event-driven data workflows, following existing patterns and conventions
  • Contribute to data integrations across AWS services and connect new data services end-to-end with guidance from senior engineers
  • Participate in design discussions, code reviews, and team planning
  • Write clear documentation for the pipelines, data sources, and dependencies you own
  • Ask great questions and share what you’re learning!

Benefits

  • Competitive salary and equity
  • Medical, dental, vision benefits
  • Annual learning budget
  • Unlimited PTO policy with minimum time off requirements
  • Flexible working hours on many teams
  • Culture of diversity and inclusion including employee resource groups
  • Work with smart, curious, passionate people and be part of the mission to help the world