At F5, we strive to bring a better digital world to life. Our teams empower organizations across the globe to create, secure, and run applications that enhance how we experience our evolving digital world. We are passionate about cybersecurity, from protecting consumers from fraud to enabling companies to focus on innovation. Everything we do centers around people. That means we obsess over how to make the lives of our customers, and their customers, better. And it means we prioritize a diverse F5 community where each individual can thrive.

The Mission

We are building an AI-native enterprise, and high-fidelity data is the substrate. We are looking for a technically fluent Product Manager to architect and scale an AI-Ready Data Quality Platform built on Databricks and Unity Catalog. This is not a traditional MDM or stewardship role. You will define and ship the platform capabilities that make our AI Data Fabric trustworthy, observable, and production-grade: real-time anomaly detection, CI/CD-native schema enforcement, and automated data contract validation. If you think of data quality as code, treat governance as infrastructure, and believe AI systems are only as good as the data feeding them, this role is for you.

What You’ll Own

Build the AI-Ready Data Quality Platform
- Define and ship native data quality capabilities inside the Databricks Lakehouse
- Productize policies and controls within Unity Catalog (lineage, access, schema enforcement)
- Embed data contracts and validation logic directly into pipelines
- Partner with data engineering to integrate dbt-based transformation layers into quality frameworks
- Drive metadata, lineage, and semantic standardization as first-class platform features

Operationalize Data Quality in the AI Data Fabric
- Design real-time anomaly detection systems (statistical and ML-driven)
- Build upstream schema validation into CI/CD workflows (shift-left quality)
- Define SLOs and SLAs for data products
- Enable automated drift detection for training and inference datasets
- Implement observability across streaming and batch architectures
You will treat data quality the way SRE treats uptime.

Drive Data Ownership as a Product Discipline
- Establish a data product ownership model across service teams
- Define what “production-grade data” means for AI use cases
- Build self-service tooling for teams to monitor and certify their data
- Incentivize measurable quality accountability at the domain level
This role transforms culture by building the platform that enforces it.

AI + Governance Convergence
- Define how governed datasets become AI-ready assets
- Enable traceability from raw source → curated feature sets → model inputs
- Align catalog metadata with AI feature stores and inference pipelines
- Partner with ML teams to support model reproducibility and dataset versioning
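To make the data contract and shift-left validation work described above concrete, the sketch below shows one minimal way a contract check could run as a CI step before a pipeline ships. It is an illustration only, not F5's implementation: the contract format, dataset name, and the validate_batch helper are hypothetical, and in practice a check like this would typically run against Delta tables in Databricks rather than in-memory records.

```python
"""Illustrative sketch of a data contract check that could run as a CI step.

Hypothetical example only: the contract fields, dataset name, and thresholds
are invented for illustration, not taken from any real F5 pipeline.
"""
from dataclasses import dataclass, field


@dataclass
class FieldSpec:
    name: str
    dtype: type
    nullable: bool = False


@dataclass
class DataContract:
    dataset: str
    fields: list[FieldSpec] = field(default_factory=list)
    max_null_fraction: float = 0.01  # tolerated fraction of nulls per non-nullable field


def validate_batch(contract: DataContract, records: list[dict]) -> list[str]:
    """Return human-readable contract violations; an empty list means the batch passes."""
    violations: list[str] = []
    total = len(records)
    for spec in contract.fields:
        values = [r.get(spec.name) for r in records]
        nulls = sum(v is None for v in values)
        # Nulls beyond the tolerated fraction break the contract for required fields.
        if not spec.nullable and total and nulls / total > contract.max_null_fraction:
            violations.append(f"{spec.name}: {nulls}/{total} null values exceed threshold")
        # Type drift: any non-null value whose type disagrees with the declared dtype.
        bad_types = [v for v in values if v is not None and not isinstance(v, spec.dtype)]
        if bad_types:
            violations.append(f"{spec.name}: {len(bad_types)} values are not {spec.dtype.__name__}")
    return violations


if __name__ == "__main__":
    contract = DataContract(
        dataset="telemetry.events",  # hypothetical dataset name
        fields=[FieldSpec("event_id", str), FieldSpec("latency_ms", float, nullable=True)],
    )
    sample = [{"event_id": "a1", "latency_ms": 12.5}, {"event_id": None, "latency_ms": "slow"}]
    for problem in validate_batch(contract, sample):
        print("CONTRACT VIOLATION:", problem)
```

Gating merges on a check like this, in the same CI workflow that deploys the pipeline code, is what "shift-left quality" refers to in the list above.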
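The drift detection responsibility lends itself to a similar illustration. The sketch below computes a Population Stability Index (PSI) between a training-time baseline and a batch of inference-time values. It is a simplified, assumed example: the feature, data, and 0.2 alert threshold are placeholders (0.2 is a common rule of thumb, not an F5 standard), and a production system would compute this per feature on a schedule and route results to alerting and SLO dashboards.

```python
"""Illustrative PSI-based drift check between a baseline and a live sample."""
import numpy as np


def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((p_current - p_baseline) * ln(p_current / p_baseline)) over shared bins."""
    # Bin edges come from the baseline so both distributions are bucketed identically.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; a small epsilon avoids division by zero in empty bins.
    eps = 1e-6
    p_base = base_counts / max(base_counts.sum(), 1) + eps
    p_curr = curr_counts / max(curr_counts.sum(), 1) + eps
    return float(np.sum((p_curr - p_base) * np.log(p_curr / p_base)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_latency = rng.normal(loc=100.0, scale=10.0, size=5_000)  # training-time baseline
    serving_latency = rng.normal(loc=115.0, scale=12.0, size=1_000)   # shifted at inference time
    psi = population_stability_index(training_latency, serving_latency)
    print(f"PSI={psi:.3f}", "drift alert" if psi > 0.2 else "stable")
```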
Job Type: Full-time
Career Level: Mid Level
Education Level: None listed