Founding Machine Learning Engineer

Composite
San Francisco, CA

About The Position

We're looking for founding Machine Learning Engineers (MLEs) to own and improve our core action models end-to-end: the intelligence that powers Composite's proactive automation platform. You'll work at the intersection of LLM inference, browser understanding, and low-latency systems, shipping models that need to feel instant (sub-250ms) while reasoning over complex page state and user context. Unlike hosted browser solutions that introduce latency and auth barriers, or consumer-focused "AI browsers," we run AI directly through professionals' existing browsers via a Chrome extension, creating instant response times with zero migration or IT friction. This architecture creates unique ML challenges.

This is a high-ownership role on our small, exceptional team where your work ships directly to users and has the potential to tangibly improve the work lives of hundreds of millions of people.

About Composite

College-educated professionals spend 85% of their day as digital factory workers in Chrome, clicking through repetitive browser tasks. Composite is building the proactive layer for productivity so professionals around the world can focus on meaningful, high-leverage work. We're training action prediction models that run in real time, anticipating what you'll do next based on page context and prior interactions.

We've raised $5.6M in seed funding led by Nat Friedman and Daniel Gross, with participation from Menlo Ventures, Anthropic's Anthology Fund, SVAngel, and other incredible investors.

Requirements

  • ML & Systems: Strong ML fundamentals with hands-on experience training and deploying models in production
  • Obsessive about latency — experience optimizing inference pipelines to feel instant to end users
  • Deep care for data quality, with the instinct to build tooling that ensures it
  • Experience with LLMs, transformer architectures, or sequence prediction problems
  • Comfortable working across the stack — our system spans a Chrome extension, Electron app, Cloudflare Workers edge proxy, and inference providers
  • Character: You're someone we'd want to work closely with for the next ten years. You approach challenges with curiosity rather than ego. You're a team player, a great communicator, and aren't afraid to be wrong.
  • Work Ethic: You're energized by hard problems and comfortable working intensely toward ambitious goals.
  • Raw Intelligence: You can quickly understand complex systems and solve novel, ambiguous problems with self-guidance.

Nice To Haves

  • Experience with browser automation, Chrome extensions, or web scraping at scale
  • Familiarity with accessibility tree / DOM parsing for page understanding
  • Background in RL or online learning from user interaction data
  • Experience with vector databases (e.g., Turbopuffer, Pinecone) and hybrid search
  • Full-stack development experience (TypeScript, Node.js, React)

Responsibilities

  • Improve the accuracy and latency of our core models across diverse web applications, predicting users' intended next actions and executing them faster than manual input
  • Design and optimize LLM inference pipelines, including token caching strategies, streaming architectures, and network-level optimizations between client and server
  • Build evaluation frameworks and data pipelines to measure and improve model quality at scale
  • Experiment with retrieval-augmented approaches using vector databases for contextual memory
  • Develop synthetic data generation pipelines for browser interaction training data
  • Work with DOM states, accessibility trees, and user interaction data to improve browser understanding
  • Ship features end-to-end that go directly to users — this is not a research-only role