AI Engineer - Responsible AI

Centific | Redmond, WA
$150 - $160 | Remote

About The Position

About Centific

Centific is a frontier AI data foundry that curates diverse, high-quality data, using our purpose-built technology platforms to empower the Magnificent Seven and our enterprise clients with safe, scalable AI deployment. Our team includes more than 150 PhDs and data scientists, along with more than 4,000 AI practitioners and engineers. We harness the power of an integrated solution ecosystem, comprising industry-leading partnerships and 1.8 million vertical domain experts in more than 230 markets, to create contextual, multilingual, pre-trained datasets; fine-tuned, industry-specific LLMs; and RAG pipelines supported by vector databases. Our zero-distance innovation™ solutions for GenAI can reduce GenAI costs by up to 80% and bring solutions to market 50% faster.

Our mission is to bridge the gap between AI creators and industry leaders by bringing best practices in GenAI to unicorn innovators and enterprise customers. We aim to help these organizations unlock significant business value by deploying GenAI at scale, helping to ensure they stay at the forefront of technological advancement and maintain a competitive edge in their respective markets.

About The Role

Job Role: AI Engineer - Responsible AI
Location: Seattle, WA | Palo Alto, CA | Remote
Type: Full-time

Build the Future of Safe and Responsible AI

Are you an experienced AI engineer advancing the frontiers of AI safety, LLM jailbreak detection and defense, and agentic AI, with publications and production deployments to show for it? Join us to translate pioneering research into robust, scalable security systems and trustworthy LLM platforms that resist adversarial and behavioral exploits at enterprise scale.

The Mission

We're tackling cutting-edge AI safety across adversarial robustness, jailbreak defense, agentic workflows, and human-in-the-loop risk modeling. As an AI Engineer, you'll own high-impact projects from research conception through production deployment, directly contributing to our platform's security guarantees while building scalable, maintainable infrastructure.

Requirements

  • Master's degree in CS/EE/ML/Security or related field (Ph.D. preferred)
  • 2+ years of industry experience in applied ML/AI research or ML engineering
  • Track record of publications in AI Safety, NLP robustness, or adversarial ML (ACL, NeurIPS, ICML, EMNLP, IEEE S&P, etc.) or equivalent applied research impact
  • Strong Python and PyTorch/JAX skills with experience deploying ML models to production
  • Demonstrated experience in at least one of: LLM jailbreak attacks/defense, agentic AI safety, adversarial ML, or human-AI interaction vulnerabilities
  • Experience with containerization (Docker, Kubernetes) and cloud platforms (AWS, GCP, or Azure)
  • Proven ability to take research from concept to code to production deployment with rigorous testing and monitoring

Nice To Haves

  • Experience in adversarial prompt engineering and jailbreak detection (narrative, obfuscated, and sequential attacks)
  • Prior work on multi-agent architectures or robust defense strategies for LLMs in production environments
  • Experience with large-scale data processing frameworks (Spark, Flink, Kafka) and data warehousing
  • MLOps expertise: model serving (Triton, TensorRT, vLLM), experiment tracking (W&B, MLflow), and CI/CD for ML
  • Infrastructure as Code experience (Terraform, Pulumi) and DevOps best practices
  • Experience with distributed computing frameworks (Ray, Dask) for scalable training and evaluation
  • Familiarity with observability stacks (Prometheus, Grafana, DataDog) and incident management
  • First-author publications, strong GitHub profile, or significant open-source contributions

Responsibilities

  • Advance AI Safety: Design, implement, and evaluate attack and defense strategies for LLM jailbreaks (prompt injection, obfuscation, narrative red teaming) and deploy them as production-grade services.
  • Build Scalable Safety Infrastructure: Architect and deploy distributed safety evaluation pipelines handling millions of requests, with real-time monitoring, alerting, and incident response capabilities.
  • Large-Scale Data Engineering: Design ETL pipelines for processing terabytes of safety-related data (attack patterns, behavioral logs, model outputs); build data lakes and feature stores for safety ML systems.
  • Evaluate AI Behavior: Analyze and simulate human-AI interaction patterns at scale to uncover behavioral vulnerabilities, social engineering risks, and over-defensive vs. permissive response tradeoffs.
  • Agentic AI Security: Build production workflows for multi-agent safety (agent self-checks, regulatory compliance, defense chains) spanning perception, reasoning, and action.
  • MLOps & Model Deployment: Deploy safety models to production using containerized microservices, implement CI/CD pipelines for model updates, and manage model versioning and A/B testing infrastructure.
  • Benchmark & Harden LLMs: Create reproducible, automated evaluation protocols for safety, over-defensiveness, and adversarial resilience across diverse models with continuous integration.