Responsible AI Data Scientist

Accordion
Chicago, IL
Hybrid

About The Position

We are seeking a Vice President, Responsible AI Data Scientist who combines technical expertise in Data Science and Machine Learning with a strong advisory lens to drive trustworthy AI solution development across Accordion's practices. In this role, you will serve as an internal subject matter expert, embedding responsible AI principles and controls across the AI development lifecycle to deliver validated, explainable, compliant, and auditable AI solutions. Drawing on your experience with AI policy, regulatory standards, and testing methodologies, you will translate governance requirements into actionable, testable playbooks that teams across the business can apply consistently. This position is central to co-designing AI solutions with embedded Responsible AI practices aligned to Accordion's AI Governance principles. As Accordion scales AI capabilities across its practice areas, this role will provide the validation rigor and oversight needed to ensure we develop AI solutions our clients and firm can trust.

This position must be based in our New York City or Chicago office and is a hybrid role with the flexibility to work remotely two days a week. This position is not eligible for immigration sponsorship.

Requirements

  • 5+ years of hands-on experience in data and ML, including developing, deploying, and evaluating solutions using a range of skill sets (e.g., statistics, machine learning, NLP, data visualization)
  • Bachelor's degree in Computer Science, Data Science, Information Systems, Mathematics, Engineering, Statistics, or a related quantitative field
  • Proven technical experience in regulated or complex industries with demonstrated collaboration across Legal/Risk/Compliance, Security, and AI/ML engineering teams
  • Track record of operationalizing responsible AI through technical development frameworks, testing protocols, and quantitative evaluation that embed fairness, transparency, and accountability into production AI systems
  • Strong programming proficiency in Python, R, or similar languages, with experience in ML frameworks and responsible AI tooling
  • Understanding of AI and privacy regulations and standards — including EU AI Act, NIST AI RMF, GDPR, CCPA, and ISO 42001 — and the ability to translate regulatory mandates into technical controls
  • Exceptional communication skills with the ability to convey complex technical and regulatory considerations to non-technical stakeholders, including leadership and cross-functional teams

Nice To Haves

  • Advanced degree (Master's) in Computer Science, Data Science, Statistics, Information Systems, Engineering, or a related quantitative discipline
  • Consulting experience at a leading management or professional services firm
  • Direct experience working with Legal/Privacy/Compliance teams on AI, data governance, or emerging technology matters
  • Familiarity with responsible AI tooling, red teaming platforms, or model evaluation frameworks

Responsibilities

  • Translate AI governance, regulatory, and compliance requirements into testable operational Responsible AI playbooks with quantifiable standards for transparency, explainability, accuracy, fairness, privacy, and accountability — tailored to Accordion's AI solutions and business model
  • Develop frameworks and guidance supporting systematic evaluation, testing, and risk mitigation that enable auditability, lineage, and transparent decision records
  • Build evaluation frameworks that guide data scientists and engineering teams in developing, testing, and monitoring AI systems; define mitigation strategies, detection methodologies, and acceptability thresholds
  • Develop and implement testing principles for bias detection, model robustness evaluation, privacy preservation, and AI/privacy regulation alignment across different AI architectures and data types
  • Establish benchmarks and governance processes aligned with industry standards (e.g., NIST AI RMF, EU AI Act, ISO 42001) that enable auditable AI transparency and explainability against applicable regulations and Accordion standards
  • Collaborate with data science and engineering teams across our AI solution practices to embed responsible AI controls throughout the development lifecycle
  • Develop repeatable testing and validation playbooks and evaluation frameworks for use across practice areas
  • Pressure test AI solutions for accuracy, reliability, and trustworthiness, including output anomaly detection, logging, and observability mechanisms
  • Develop methods to produce model cards and support audit trails for outputs across Accordion's business practice pillars and client solutions
  • Act as the firm's internal subject matter expert on Responsible AI topics including algorithmic fairness, transparency, privacy preservation, safety protocols, and risk mitigation strategies across the AI lifecycle
  • Partner with Legal/Privacy/Risk, Technology, and D&A teams to translate regulatory requirements into actionable and measurable controls and governance structures
  • Collaborate with teams developing AI solutions on risk-informed design decisions — advising on model selection, testing approaches, and appropriate guardrails
  • Establish Responsible AI frameworks and operational playbooks that demonstrate alignment with evolving AI regulations (e.g., EU AI Act, GDPR, CCPA) and sector-specific requirements
  • Monitor emerging AI regulations, industry standards, and responsible AI methodologies — translating insights into actionable internal design guidance
  • Where applicable, support higher-risk or higher-profile engagements and AI product development where rigorous testing and evaluation of AI solutions is required