zaimler · Posted 4 months ago
San Mateo, CA

zaimler is building the semantic platform that links fragmented enterprise data and extracts meaning with knowledge-distilled models. We’re creating the foundation for AI systems that don’t just generate, but retrieve, link, and reason over enterprise knowledge. In just over a year, we’ve begun partnering with Fortune 500 design partners in insurance, travel, and technology, deploying semantic AI infrastructure into some of the world’s most complex data ecosystems. Our platform enables enterprises to make data AI-ready from the start: automating ontology creation, data mapping, and retrieval-augmented reasoning at scale. Our team comes from LinkedIn, Visa, Meta, and Branch, and has spent decades solving data and infrastructure challenges at scale. Backed by top VCs, we’re building the next foundational layer for enterprise AI.

Responsibilities:

  • Build and operate large-scale data pipelines on Spark, Kafka, and Ray.
  • Design fault-tolerant streaming and batch systems that move terabytes reliably.
  • Optimize data workflows for performance, cost, and latency.
  • Collaborate with ML and product engineers to ensure data is discoverable, structured, and queryable.
  • Automate deployments with Kubernetes, Terraform, and CI/CD pipelines.
  • Monitor, debug, and improve distributed jobs in production.
Requirements:

  • Deep experience with distributed data systems (Spark, Kafka, Flink, Ray).
  • Strong programming skills (Python, Scala, or Java).
  • Comfort with Kubernetes and cloud environments (AWS/GCP/Azure).
  • Solid understanding of streaming vs. batch tradeoffs, state management, and scaling patterns.
  • Ability to collaborate across data, infra, and ML teams.
Benefits:

  • Competitive salary, benefits, and meaningful equity.
  • Full benefits package (Medical, Dental, Vision, 401k).
  • Onsite culture in San Mateo, CA, built for deep collaboration and high-velocity building.
  • We sponsor H-1B visas and assist with immigration processes.