Centific-posted 7 days ago
Full-time • Intern
Remote • Redmond, WA
1,001-5,000 employees

Centific is a frontier AI data foundry that curates diverse, high-quality data, using our purpose-built technology platforms to empower the Magnificent Seven and our enterprise clients with safe, scalable AI deployment. Our team includes more than 150 PhDs and data scientists, along with more than 4,000 AI practitioners and engineers. We harness the power of an integrated solution ecosystem—comprising industry-leading partnerships and 1.8 million vertical domain experts in more than 230 markets—to create contextual, multilingual, pre-trained datasets; fine-tuned, industry-specific LLMs; and RAG pipelines supported by vector databases. Our zero-distance innovation™ solutions for GenAI can reduce GenAI costs by up to 80% and bring solutions to market 50% faster. Our mission is to bridge the gap between AI creators and industry leaders by bringing best practices in GenAI to unicorn innovators and enterprise customers. We aim to help these organizations unlock significant business value by deploying GenAI at scale, helping to ensure they stay at the forefront of technological advancement and maintain a competitive edge in their respective markets. Are you advancing the frontiers of AI safety, LLM jailbreak detection and defense, and agentic AI—with publications to show for it? Join us to translate pioneering research into robust security and trustworthy LLM systems that resist adversarial and behavioral exploits. We’re tackling cutting-edge AI safety across adversarial robustness, jailbreak defense, agentic workflows, and human-in-the-loop risk modeling. As a Ph.D. Research Intern, you’ll own high-impact experiments from concept to prototype to deployable modules, directly contributing to our platform’s security guarantees.

  • Advance AI Safety: Design, implement, and evaluate attack and defense strategies for LLM jailbreaks (prompt injection, obfuscation, narrative red teaming).
  • Evaluate AI Behavior: Analyze and simulate human-AI interaction patterns to uncover behavioral vulnerabilities, social engineering risks, and over-defensive vs. permissive response tradeoffs.
  • Agentic AI Security: Prototype workflows for multi-agent safety (e.g., agent self-checks, regulatory compliance, defense chains) that span perception, reasoning, and action.
  • Benchmark & Harden LLMs: Create reproducible evaluation protocols/KPIs for safety, over-defensiveness, adversarial resilience, and defense effectiveness across diverse models (including latest benchmarks and real-world exploit scenarios).
  • Deploy and Monitor: Package research into robust, monitorable AI services using modern stacks (Kubernetes, Docker, Ray, FastAPI); integrate safety telemetry, anomaly detection, and continuous red-teaming.
  • Ph.D. student in CS/EE/ML/Security (or related); actively publishing in AI Safety, NLP robustness, or adversarial ML (ACL, NeurIPS, BlackHat, IEEE S&P, etc.).
  • Strong Python and PyTorch/JAX skills; comfort with toolkits for language models, benchmarking, and simulation.
  • Demonstrated research in at least one of: LLM jailbreak attacks/defense, agentic AI safety, human-AI interaction vulnerabilities.
  • Proven ability to go from concept → code → experiment → result, with rigorous tracking and ablation studies.
  • Experience in adversarial prompt engineering, jailbreak detection (narrative, obfuscated, sequential attacks).
  • Prior work on multi-agent architectures or robust defense strategies for LLMs.
  • Familiarity with red-teaming, synthetic behavioral data, and regulatory safety standards.
  • Scalable training and deployment: Ray, distributed evaluation, CI/telemetry for defense protocols.
  • Public code artifacts (GitHub) and first-author publications or strong open-source impact.
  • Real Impact: Your research ships—directly securing our core features and AI infrastructure.
  • Mentorship: Collaborate with Principal Architects and senior researchers in AI safety and adversarial ML.
  • Velocity + Rigor: Balance high-quality research with mission-critical product focus.
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service