About The Position

As the Product Manager for Data Center Optimization, you will lead the operationalization of high-performance Power and Cooling models under the Next Optimization framework. Your mission is to bridge the gap between facility infrastructure and workload intelligence by managing the lifecycle of optimization software across Cloud and On-Premises environments. This role requires a strong mathematical background to validate complex predictive models and ensure that automated logic accurately translates into physical infrastructure performance. You will own the service-led product lifecycle, turning deep analytical insights into global operational standards that maximize infrastructure availability and hardware throughput.

Requirements

  • Experience: 5+ years in Product Management or Technical Operations within Global Service organizations, specifically in DCIM (Data Center Infrastructure Management), BMS (Building Management Systems), or Critical Infrastructure.
  • Mathematical Proficiency: Strong background in mathematics (Linear Algebra, Calculus, Statistics, or Numerical Analysis) to interpret AI model outputs and perform complex ROI and performance calculations.
  • AI & Software Expertise: Deep understanding of AI Optimization software stacks, including the integration of Cloud-based training/inference engines with On-Premises control logic for low-latency mechanical response.
  • Technical Knowledge: Strong understanding of liquid cooling (CDUs, Secondary Fluid Networks) and integrated electrical power distribution models.
  • Systems Thinking: Ability to map complex telemetry across power and cooling domains to physical service workflows and hardware safety guardrails.
  • Commercial Acumen: Experience in Performance-Based Contracting and ROI modeling, with a track record of demonstrating value through service-led optimization.
  • Global Mindset: Proven ability to scale software-enabled services across multiple regions with varying technical maturity levels.

Responsibilities

  • Hybrid Software Strategy: Oversee the deployment and lifecycle of AI Optimization software, managing the architectural nuances between Cloud-based predictive modeling and On-Premises real-time execution layers.
  • Mathematical Modeling & Validation: Apply advanced statistical and mathematical methods to validate the efficacy of predictive cooling algorithms, ensuring that the software’s "Workload-Aware" logic aligns with actual thermodynamic and electrical outcomes.
  • Power & Cooling Model Strategy: Develop and maintain global service standards for integrated Next Optimization power and cooling models, ensuring that electrical and thermal management systems operate as a single, workload-aware entity.
  • Service-Led Roadmap: Define requirements for the global rollout of Next AI Optimization, ensuring service technicians and remote engineers have the tools to maintain peak hardware performance.
  • Operationalizing AI: Work with the Global Service Organization (GSO) to embed Next AI Optimization logic into standard operating procedures (SOPs), allowing global teams to preemptively address thermal challenges through workload-aware cooling strategies.
  • Performance Auditing: Establish the global framework for auditing Recovered Capacity and verifying performance gains through rigorous data analysis to trigger performance-based contract milestones.
  • SLA Innovation: Shift Global Service SLAs from "Response Time" to "Performance Guarantees," leveraging real-time telemetry to benchmark facility health against optimal conditions.