GenAI Machine Learning Ops Engineer

Enact Mortgage Insurance – Raleigh, NC (Hybrid)

About The Position

At Enact, we understand that there’s no place like home. That’s why we bring our deep expertise, insightful offerings, and extra mile service to work every day to help lenders put more people in homes and keep them there. We’re looking for a GenAI Machine Learning Ops Engineer in Raleigh, NC to join us in fulfilling our mission, while utilizing our values of excellence, improvement, and connection.

In this role, you’ll design, build, and operate the end-to-end platform and pipelines that move models from notebooks to resilient, observable, cost-efficient production services on AWS. You’ll partner with Data Science, Cloud Engineering, and Enterprise Architecture to standardize patterns, automate delivery, and ensure models are versioned, governed, and monitored across their lifecycle—batch and real time.

The ideal candidate has a proven track record of building and operating cloud-native ML platforms and pipelines—taking models from experimentation to reliable, observable, and cost-efficient production. They lead end-to-end delivery across training, packaging, CI/CD, deployment (batch and real-time), and monitoring, using modern tooling and Infrastructure as Code. They’re comfortable mentoring data scientists and engineers on MLOps best practices—containerization, testing, model/version governance, and cloud-first design—while setting standards, templates, and golden paths that level up the team’s ability to ship ML services at scale.

Location: Enact Headquarters, Raleigh, NC – Hybrid Schedule

Requirements

  • Bachelor’s degree in computer science, information technology, cloud engineering, or a similar field
  • 5+ years of experience with Python (pandas, PySpark, scikit-learn; familiarity with PyTorch/TensorFlow helpful), Bash, and Make; strong containerization skills with Docker.
  • 5+ years of experience with ML Tooling: SageMaker (training, processing, pipelines, model registry, endpoints) or equivalents (Kubeflow, MLflow/Feast, Vertex AI, Databricks ML).
  • 5+ years of experience with Pipelines & Orchestration: Step Functions, SageMaker Pipelines, and event-driven designs with EventBridge/SQS/Kinesis.
  • 3+ years of experience with AWS Foundations: ECR/ECS, Lambda, API Gateway, S3, Glue/Athena/EMR, RDS/Aurora (PostgreSQL/MySQL), DynamoDB, CloudWatch, IAM, VPC, and WAF.
  • Snowflake Foundations: warehouses, databases, schemas, stages, Snowflake SQL, RBAC, UDFs, and Snowpark.
  • 3+ years of hands-on experience with CI/CD: CodeBuild/CodePipeline or GitHub Actions/GitLab; blue/green, canary, and shadow deployments for models and services.
  • Proven experience with feature pipelines for batch/stream processing, schema management, partitioning, and performance tuning; Parquet/Iceberg best practices.
  • Demonstrated ability to write unit/integration tests for data and models, contract tests for features, and reproducible training; data drift/performance monitoring.
  • Experience with IaC & Platforms: Terraform (preferred), parameterized modules, environment promotion, tagging, and cost governance.
  • Demonstrated operational mindset with experience in incident response for model services, SLOs, dashboards, and runbooks; strong debugging across data, model, and infrastructure layers.
  • Clear communication, a collaborative mindset, and a bias toward automation and documentation.

Nice To Haves

  • Experience adhering to and implementing data security and governance best practices in a highly regulated environment (encryption, RBAC, auditing, and regulatory standards such as HIPAA and SOC 2).

Responsibilities

  • Productionize ML: Build repeatable paths from experimentation to deployment (batch, streaming, and low-latency endpoints), including feature engineering, training, evaluation, packaging, and release.
  • Own ML Platform: Stand up and operate core platform components—model registry, feature store, experiment tracking, artifact stores, and standardized CI/CD for ML.
  • Pipeline Engineering: Author robust data/ML pipelines (orchestrated with Step Functions, Airflow, or Argo) that train, validate, and release models on schedules or events.
  • Observability & Quality: Implement end-to-end monitoring, data validation, model/drift checks, and alerting tied to SLAs/SLOs.
  • Governance & Risk: Enforce model/version lineage, reproducibility, approvals, rollback plans, auditability, and cost controls aligned to enterprise policies.
  • Partner & Mentor: Collaborate with on-shore/off-shore teams; coach data scientists on packaging, testing, and performance; contribute to standards and reviews.
  • DevEx & Templates: Provide golden paths, IaC modules, and templates that help DS/DE teams ship safely and quickly (containers, build specs, Git workflows).
  • Hands-on Delivery: Prototype new patterns; troubleshoot production issues across data, model, and infrastructure layers.

Benefits

  • Hybrid work schedule (in-office days Tues/Wed/Thurs)
  • Generous Time Off
  • 40 Hours of Volunteer Time Off
  • Tuition Reimbursement and Student Loan Repayment
  • Paid Family Leave and Flexible Spending Accounts
  • 401k with up to 5% employer match
  • Fitness and Emotional Wellness Reimbursements
  • Onsite Gym