As the Senior Architect of AI Governance & Risk, you will lead the design and operationalization of Babel Street's AI trust framework across safety, privacy, security, bias/fairness, and transparency. You will ensure our AI-enabled products—spanning LLM-powered workflows, agentic systems, and multimodal capabilities—are built and deployed with measurable controls, defensible documentation, and audit-ready evidence.

A core part of this role is to create, institutionalize, and operationalize the Babel Street AI Principles and translate them into the policies, engineering standards, delivery gates, customer assurances, and reporting artifacts that guide how we build and deploy AI across the company.

This role requires extensive partnership with Product, Engineering, Security (CISO), Legal/Privacy, and Customer Success teams. You will serve as the connective tissue between these functions, ensuring governance requirements are understood, adopted, and embedded into the AI lifecycle, from design through production monitoring and incident response.

The ideal candidate is execution-oriented with a focus on customer-facing outcomes. You will translate emerging AI policy and customer requirements into concrete engineering controls and reusable collateral that accelerates RFI/RFP responses, supports due diligence, and reduces risk without slowing product velocity.

This role spans three practical execution areas:

AI Principles & Governance Architecture
You will define Babel Street's AI Principles and build the governance operating system that turns principles into action: standards, controls, documentation, and release gates that are implementable by engineering teams and measurable in production.

AI Policy Intelligence, RFI/RFP Enablement & Reporting Collateral
You will track and interpret emerging AI policy, regulations, and standards and assess their impact on Babel Street's products and business. You will translate these requirements into roadmap implications, compliance strategies, and customer-ready collateral, enabling fast, consistent responses to RFIs/RFPs, security questionnaires, audits, and due diligence requests.

Responsible AI Assurance Across Safety, Privacy, Security, Bias/Fairness & Transparency
You will own the assurance posture for Babel Street AI, partnering with Engineering, Security, Legal/Privacy, and Product teams to ensure safety metrics, privacy controls, AI security testing, bias/fairness evaluation practices, and transparency artifacts (model/system cards) are defined, implemented, measured, and maintained over time.
Job Type: Full-time
Career Level: Mid Level
Number of Employees: 251-500