At Zoom, we're redefining how AI bridges Communications to Completions. We are the Responsible AI Team, dedicated to ensuring that advanced AI systems are safe, reliable, and aligned with human values. Our mission is to pioneer research and develop practical solutions that make AI systems trustworthy, controllable, and beneficial for society.

What you can expect

We are seeking exceptional PhD students to join our team as Research Interns. You will work on cutting-edge research to ensure the safety, reliability, and alignment of the world's most advanced AI systems. This internship offers the opportunity to actively contribute to both foundational research and real-world product impact.

Research Focus Areas

As an intern, you will contribute to one or more of the following interconnected research areas:

1. AI Agent System Quality Evaluation & Benchmarking
- Design and develop comprehensive evaluation frameworks and benchmarks for state-of-the-art AI systems, including agentic AI and foundation models
- Create novel metrics and methodologies to assess AI system capabilities, safety, and reliability
- Build datasets and benchmarks for quality evaluation across diverse scenarios and modalities

2. Controllable & Aligned AI
- Research methods to ensure AI systems remain controllable and aligned with human intentions and values
- Develop techniques for robustness, interpretability, controllability, and ethicality (the RICE principles)
- Explore alignment training approaches, including reinforcement learning from human feedback (RLHF) and related methodologies
- Investigate methods to prevent agentic misalignment in autonomous AI systems

3. Red Teaming & Adversarial Robustness
- Design and execute adversarial testing strategies to identify vulnerabilities in AI systems
- Develop automated red-teaming tools and frameworks to stress-test model safety controls
- Research novel attack vectors, including prompt injection, jailbreaking, and multimodal adversarial inputs
- Contribute to identifying failure modes in both adversarial and benign user scenarios

4. AI Guardrail Solutions & Defense Systems
- Build robust guardrail systems to defend against adversarial attacks and harmful outputs
- Develop runtime monitoring and content moderation solutions
- Create self-improving security systems that anticipate evolving threats
- Research defense-in-depth strategies for comprehensive AI safety
Career Level: Intern
Education Level: Ph.D. or professional degree