Vice President, Engineering — Data, Platforms & Digital Products

Revolution Medicines, Redwood City, CA
Hybrid

About The Position

Revolution Medicines is a clinical-stage precision oncology company focused on developing novel targeted therapies to inhibit frontier targets in RAS-addicted cancers. The company's R&D pipeline comprises RAS(ON) Inhibitors designed to suppress diverse oncogenic variants of RAS proteins, and RAS Companion Inhibitors for use in combination treatment strategies. As a new member of the Revolution Medicines team, you will join other outstanding Revolutionaries in a tireless commitment to patients with cancers harboring mutations in the RAS signaling pathway.

The Opportunity

We are pioneering a data-driven discovery and development ecosystem that integrates chemistry, biology, and digital innovation to accelerate insight generation across the R&D continuum, from discovery through clinical development and commercialization. The VP, Engineering — Data, Platforms & Digital Products reports to the Chief Digital Officer and serves as the senior engineering counterpart to the VP, Head of Data Product Management, jointly owning end-to-end execution of RevMed's data and AI platform strategy. The role is responsible for building and operating the engineering "engine" that turns strategy into shipped, reliable capabilities: platforms, data products, integrations, and user-facing digital experiences.

Requirements

  • 20+ years in software engineering with substantial leadership experience across platform engineering and product delivery.
  • Proven experience building and operating enterprise data platforms and data engineering capabilities at scale.
  • Strong architecture background across cloud, distributed systems, integration patterns, and modern data stacks.
  • Demonstrated ability to ship user-facing digital products that integrate multiple systems and support real workflows.
  • Hands-on operational excellence: reliability, observability, incident response, security-by-design, and cost-aware engineering.
  • Track record of building high-performing teams, setting standards, and delivering through ambiguity.
  • Bachelor’s degree in Computer Science/Engineering or equivalent experience.

Nice To Haves

  • Advanced degree.

Responsibilities

  • Ship production-grade data products through the Data Engineering function—curated, governed, and dependable datasets/services with automated pipelines, quality controls, lineage, and clear operational ownership.
  • Build the integration layer that connects existing transactional systems to each other and to data products—APIs, data contracts, connectors, event/stream patterns, and workflow services.
  • Deliver new digital products and workflow applications that sit on top of systems + data products—purpose-built experiences that enable new end-to-end workflows.
  • Stand up and run DataOps / MLOps / LLMOps so analytics, ML, and GenAI move from prototypes to production—CI/CD, environments, monitoring, evaluation, governance, reliability, and cost controls.
  • Enable self-serve insight experiences such as analytical copilots / “ask me anything” applications that expose trusted data safely, with appropriate guardrails, observability, and feedback loops.
  • Provide the secure cloud and engineering foundation (cloud infrastructure engineering, CI/CD, IaC, identity/access patterns, observability) that makes delivery fast, consistent, and scalable across domains.
  • Partner with the Information Sciences organization (owners of enterprise business applications and transactional systems) to ensure platform and product engineering efforts integrate cleanly with system roadmaps, data stewardship, and operational ownership.
  • Build, lead, and scale teams across data engineering, platform engineering, cloud infrastructure engineering, architecture, and software engineering.
  • Establish the operating model for execution across central platforms and domain delivery teams, enabling speed while maintaining standards and reliability.
  • Partner closely with Security, Privacy, QA/Validation (as applicable), and business stakeholders to ensure delivery is secure, compliant, and adopted.
  • Drive an outcomes-oriented culture: product-minded engineering, measurable impact, and disciplined execution.
  • Implement repeatable patterns for data pipelines, quality gates, testing, backfills, versioning, and remediation.
  • Ensure high trust through consistent metadata, lineage, stewardship, and access controls embedded in engineering workflows.
  • Define and enforce platform standards through templates, reference implementations, and automated guardrails (not “docs-only” governance).
  • Lead architecture across cloud, data, and application layers to ensure scalability, interoperability, and long-term maintainability.
  • Build the developer experience: self-service environments, golden paths, reusable libraries, and observability baked into every workload.
  • Establish integration patterns that connect transactional systems to each other and to the data platform.
  • Build shared services for workflow orchestration, eventing, APIs, and data contracts to reduce fragmentation and vendor lock-in.
  • Improve time-to-integrate for new systems and partners by standardizing connectors and exchange patterns.
  • Deliver new user-facing applications that solve workflow gaps across R&D, G&A, Commercial, and Operations—integrating systems and data products into cohesive experiences.
  • Co-design digital experiences with Information Sciences to align with enterprise application architecture, identity/access patterns, and operational support responsibilities.
  • Ensure applications meet enterprise expectations: performance, reliability, security, and maintainability.
  • Build and operate the tooling and practices to productionize analytics, ML, and GenAI.
  • Own CI/CD for data and models, environment strategy, monitoring, evaluation, governance controls, and cost management.
  • Collaborate with Information Sciences and Security to ensure AI experiences respect authorization boundaries, data access policies, and auditability requirements.
  • Establish safe GenAI patterns (e.g., RAG/agent architectures, evaluation harnesses, usage telemetry, guardrails) suitable for enterprise decision-making.
  • Own cloud foundations, environment provisioning, logging/monitoring, and incident response.
  • Establish reliability practices: SLOs, on-call readiness (as appropriate), runbooks, operational dashboards, and post-incident learning.
  • Drive cost visibility and controls (FinOps) across platform and product workloads.