AI or ML Engineer - Remote

UnitedHealth Group, Eden Prairie, MN
Remote

About The Position

Optum Tech is a global leader in health care innovation. Our teams develop cutting-edge solutions that help people live healthier lives and help make the health system work better for everyone. From advanced data analytics and AI to cybersecurity, we use innovative approaches to solve some of health care’s most complex challenges. Your contributions here have the potential to change lives. Ready to build the next breakthrough? Join us to start Caring. Connecting. Growing together.

Optum AI is UnitedHealth Group’s enterprise AI team. We are AI/ML scientists and engineers with deep expertise in AI/ML engineering for healthcare. We develop AI/ML solutions for the highest-impact opportunities across UnitedHealth Group businesses, including UnitedHealthcare, Optum Financial, Optum Health, Optum Insight, and Optum Rx. In addition to transforming the healthcare journey through responsible AI/ML innovation, our charter also includes developing and supporting an enterprise AI/ML development platform.

As an AI/ML Engineer, you will be an important part of teams responsible for building and operating scalable machine learning platforms and production ML systems across the enterprise. You will drive the design and implementation of ML infrastructure, model lifecycle management systems, and MLOps platforms that enable reliable experimentation, deployment, monitoring, and governance of machine learning and generative AI models. This role requires deep experience in MLOps and cloud-based ML platforms, as well as the ability to collaborate closely with data science, engineering, and platform teams.

You’ll enjoy the flexibility to work remotely from anywhere within the U.S. as you take on some tough challenges. All hires in the Minneapolis or Washington, D.C. areas will be required to work in the office a minimum of four days per week.

Requirements

  • 5+ years of experience in machine learning engineering, MLOps, or AI platform engineering building production ML systems and scalable model pipelines
  • 4+ years of experience programming in Python for ML systems, including familiarity with frameworks such as PyTorch, TensorFlow, or scikit-learn
  • 3+ years of experience working with ML lifecycle platforms such as MLflow, Kubeflow, SageMaker, Azure ML, or GCP Vertex AI
  • 3+ years of experience building cloud-native ML platforms using Docker, Kubernetes, and distributed systems
  • 3+ years of experience working with distributed data processing and orchestration tools such as Spark, Ray, Airflow, Dagster, or Prefect
  • 3+ years of experience with Generative AI and LLMs, including prompt engineering, prompt chaining, and fine-tuning (instruction tuning and LoRA/QLoRA)

Nice To Haves

  • Master’s degree in Computer Science, Engineering, Data Science, or related discipline
  • Experience with LLM orchestration frameworks (e.g., LangChain, LlamaIndex, Semantic Kernel)
  • Experience working in regulated or enterprise environments, with emphasis on security, compliance, and responsible AI
  • Experience with vibe coding tools, such as Cursor, Claude Code, and Replit
  • Experience operating multi-cloud or hybrid ML platforms
  • Experience in Healthcare or Life Sciences
  • Knowledge of LLM cost optimization and performance tuning techniques
  • Exposure to knowledge graphs or hybrid search
  • Contributions to open-source ML or MLOps tooling

Responsibilities

  • Build enterprise ML and GenAI platforms supporting experimentation, model training, evaluation, deployment, monitoring, and lifecycle management
  • Productionize machine learning and generative AI models using batch and real-time inference architectures
  • Build and operate MLOps and LLMOps pipelines including CI/CT/CD workflows for model testing, validation, versioning, and promotion across environments
  • Develop scalable, cloud-native ML infrastructure using Docker, Kubernetes, and cloud ML platforms such as AWS SageMaker, Azure ML, or GCP Vertex AI
  • Build and manage LLM application stacks, including LLM gateways, orchestration layers, model routing, caching, and cost/performance optimization
  • Implement model monitoring and lifecycle management systems to track drift, latency, bias, and data quality while enabling automated retraining
  • Ensure governance, security, and compliance of ML systems including lineage, auditability, reproducibility, and observability
  • Partner with data scientists, data engineers, and software engineers to define production ML standards and scalable AI solutions

Benefits

  • Comprehensive benefits package
  • Incentive and recognition programs
  • Equity stock purchase
  • 401(k) contribution