AI Evaluation Data Scientist

Apple Inc., Cupertino, CA
About The Position

The Health Sensing team builds outstanding technologies to support our users in living their healthiest, happiest lives by providing them with objective, accurate, and timely information about their health and well-being. As part of the larger Sensor SW & Prototyping team, we take a multimodal approach, combining a variety of data types across HW platforms, such as camera, PPG, and natural language, to build our products. In this role, you will be at the forefront of developing and validating evaluation methodologies for Generative AI systems in health and well-being applications. You will design comprehensive human annotation frameworks, build automated evaluation tools, and conduct rigorous statistical analyses to ensure the reliability of both human and AI-based assessment systems. Your work will directly impact the quality and trustworthiness of customer-facing health products.

Requirements

  • MS or PhD, or equivalent experience, in a relevant field
  • Real-world experience with LLM-based evaluation systems and with human annotation and evaluation methodologies
  • Experience with rigorous, evidence-based approaches to test development, e.g., quantitative and qualitative test design, and reliability and validity analysis
  • Customer-focused mindset with experience or strong interest in building consumer digital health and wellness products
  • Strong communication skills and ability to work cross-functionally with technical and non-technical stakeholders

Responsibilities

  • Design and analyze human evaluations of AI systems to create reliable annotation frameworks, and ensure validity and reliability of measurements of latent constructs
  • Develop and refine benchmarks and evaluation protocols, using statistical modeling, test theory, and task design to capture model performance across diverse contexts and user needs
  • Conduct statistical analysis of evaluation data to extract meaningful insights, identify systematic issues, and inform improvements to both models and evaluation processes
  • Analyze model behavior, identify weaknesses, and drive design decisions through failure analysis. Examples include, but are not limited to: model experimentation, adversarial testing, counterfactual analysis, and creating tools to assess model behavior and user impact
  • Collaborate with engineers to translate evaluation methods and analysis techniques into scalable, adaptable, and reliable solutions that can be reused across different features, use cases, and evaluation workflows
  • Work cross-functionally to apply methods to real-world applications with designers, clinical experts, and engineering teams across Hardware and Software
  • Independently run and analyze experiments that drive measurable improvements


What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Industry

Computer and Electronic Product Manufacturing

Education Level

Ph.D. or professional degree

Number of Employees

5,001-10,000 employees
