At Innodata, we’re working with the world’s largest technology companies on the next generation of generative AI and large language models (LLMs). We’re looking for smart, savvy, and curious Red Teaming Specialists to join our team.

This is the role that writers and hackers dream about: you’ll be challenging the next generation of LLMs to ensure their robustness and reliability. We’re testing generative AI for its ability to think critically and act safely, not just to generate content. This isn’t just a job: it’s a once-in-a-lifetime opportunity to work on the front lines of AI safety and security. There’s nothing more cutting-edge than this.

Joining us means becoming an integral member of a global team dedicated to identifying vulnerabilities and improving the resilience of AI systems. You’ll creatively craft scenarios and prompts to probe the limits of AI behavior, uncover potential weaknesses, and help ensure robust safeguards. You’ll be shaping the future of secure AI-powered platforms and pushing the boundaries of what’s possible. Keen to learn more?
Job Type
Full-time
Career Level
Entry Level
Number of Employees
5,001-10,000 employees