Abuse Investigator (AI Self-Improvement Risk)

OpenAI — San Francisco, CA
Onsite

About The Position

As an Abuse Investigator focused on AI self-autonomy and agentic risk on the Intelligence and Investigations team, you will identify and investigate cases where models exhibit autonomous or agentic behavior, including chaining capabilities, acting with increasing independence, or demonstrating patterns that may introduce safety risk. This includes detecting behaviors that are not explicitly intended, understood, or covered by existing safeguards.

This role requires deep domain-specific expertise in identifying, understanding, and mitigating risk from agentic systems, model autonomy, and AI self-improvement signals. You’ll need experience investigating complex systems where behavior emerges across multiple steps, tools, or interactions; the ability to distinguish between normal task execution and concerning patterns such as persistence, workaround behavior, or capability expansion; and a proven ability to navigate ambiguous signals in a rapidly evolving, highly technical environment.

This role is based in our San Francisco office. Investigations may involve reviewing complex or sensitive model behaviors and edge-case outputs, requiring strong judgment and resilience in high-pressure environments.

Requirements

  • Have deep expertise in investigating complex, adversarial, or emergent system behavior, ideally in AI safety, security, cyber, or trust & safety environments
  • Have strong familiarity with technical investigations, especially using SQL, Python, or similar tools, in a government, research, or technology setting
  • Have experience analyzing multi-step systems, automation, or agentic workflows, and understanding how behaviors emerge across interactions
  • Have at least 6 years of experience conducting investigations, threat analysis, or research in complex and ambiguous domains
  • Have experience identifying failure modes, unintended behaviors, or system-level risks, particularly in AI or software systems
  • Have at least two years of experience helping to develop automated or scalable approaches to detection or investigation
  • Have experience presenting analytic work in technical, research, or policy settings

Responsibilities

  • Review leads, investigate model behavior, and identify cases where systems demonstrate agentic or autonomous patterns that introduce safety risks
  • Detect and analyze behaviors such as multi-step planning, capability chaining, tool use, persistence, and workaround behavior
  • Develop signals and tracking strategies to help proactively identify emerging agentic risk patterns across our platform
  • Identify gaps in existing safeguards, evaluations, or monitoring systems and propose improvements
  • Communicate investigation findings clearly to technical, policy, and leadership stakeholders
  • Be someone people enjoy working with, and who appreciates the opportunity to help others


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 1-10 employees
