Machine Learning Engineer

Inflection AI, Palo Alto, CA
$172,000 - $250,000

About The Position

At Inflection AI, our public benefit mission is to harness the power of AI to improve human well-being and productivity. The next era of AI will be defined by agents we trust to act on our behalf. We're pioneering this future with human-centered AI models that unite emotional intelligence (EQ) and raw intelligence (IQ), transforming interactions from transactional to relational to create enduring value for individuals and enterprises alike.

Our work comes to life in two ways today:

  • Pi, your personal AI, designed to be a kind and supportive companion that elevates everyday life with practical assistance and perspectives.
  • Platform: large language models (LLMs) and APIs that enable builders, agents, and enterprises to bring Pi-class emotional intelligence into experiences where empathy and human understanding matter most.

We are building toward a future of AI agents that earn trust, deepen understanding, and create aligned, long-term value for all.

About The Role

As a Senior Machine Learning Engineer on the AI Engineering team, you will be a key technical leader responsible for designing and scaling the systems that bring our models from research into reliable, production-grade deployments. You will work at the intersection of large-scale ML systems, low-latency inference, distributed infrastructure, and product integration. Your work will directly impact how intelligence is delivered to millions of users, ensuring performance, reliability, safety, and continuous improvement of our AI systems.

Requirements

  • 1-4 years of experience in machine learning engineering, backend systems, or distributed infrastructure.
  • Proven experience deploying and operating ML models in production environments.
  • Strong programming skills in Python and/or C++ (or equivalent systems language).
  • Experience with large-scale model serving (LLMs, transformers, or similar architectures).
  • Deep understanding of distributed systems, API design, and cloud infrastructure.
  • Experience with MLOps tools and workflows (CI/CD, model monitoring, experiment tracking).

Nice To Haves

  • Experience scaling high-throughput, low-latency inference systems.
  • Familiarity with GPU acceleration, model optimization (quantization, batching, caching), and performance tuning.
  • Experience working with conversational AI systems or real-time user-facing AI products.
  • Knowledge of ML evaluation methodologies, safety systems, and guardrail design.
  • Background collaborating closely with research teams in fast-paced AI environments.

Responsibilities

  • Design and implement scalable, low-latency model-serving infrastructure for large language models and multimodal systems.
  • Build and maintain robust APIs and services to support real-time conversational workloads.
  • Optimize inference systems for throughput, latency, cost-efficiency, and reliability.
  • Architect and improve end-to-end ML pipelines spanning training, evaluation, deployment, monitoring, and rollback.
  • Develop model lifecycle management systems with strong observability and performance tracking.
  • Partner with infrastructure teams to scale compute resources efficiently across distributed environments.
  • Improve CI/CD workflows and automation for model releases and infrastructure updates.
  • Collaborate with ML researchers to productionize new model architectures and capabilities.
  • Design abstractions that enable rapid experimentation while preserving safety, quality, and reliability.
  • Implement evaluation frameworks and guardrails to ensure models meet performance and safety standards before deployment.
  • Define data requirements and feedback loops to enable continuous model improvement.
  • Partner with product and safety teams to integrate telemetry, evaluation signals, and user feedback into training pipelines.
  • Ensure high-quality data ingestion and metadata tracking for ML readiness.
  • Lead architectural decisions that balance performance, scalability, safety, and maintainability.
  • Contribute to code reviews and engineering best practices across the team.
  • Mentor engineers and raise the bar for production ML excellence.
  • Help shape long-term technical strategy for deploying AI systems at global scale.

Benefits

  • Diverse medical, dental, and vision options
  • 401k matching program
  • Unlimited paid time off
  • Parental leave and flexibility for all parents and caregivers
  • Support of country-specific visa needs for international employees living in the Bay Area