About The Position

We are seeking a Principal ML Architect to lead the design and development of next-generation AI systems for cybersecurity, leveraging state-of-the-art LLMs/SLMs and advanced machine learning techniques. This role requires deep expertise in model architecture, training, fine-tuning, and distillation, combined with a strong understanding of security domains such as threat detection, anomaly detection, data protection, and AI safety. You will drive the development of intelligent, security-focused AI systems and agents capable of operating at scale across high-volume, adversarial, and sensitive environments, while ensuring robustness, explainability, and compliance.

Requirements

  • 10+ years in ML/AI systems, with significant focus on deep learning and production ML
  • Proven experience in:
    • Building or scaling LLMs/SLMs or advanced ML systems
    • Applying ML/AI in security, fraud, risk, or adversarial domains
  • Track record of delivering production-grade AI systems at scale
  • Deep understanding of:
    • Transformer architectures and modern LLM techniques
    • Retrieval-augmented generation (RAG) and hybrid AI systems
    • Model training dynamics, scaling laws, and optimization
  • Hands-on experience with:
    • Training, fine-tuning, and distilling models
    • Efficient inference (quantization, pruning, batching)
    • Distributed training frameworks (PyTorch, DeepSpeed, FSDP, etc.)
  • Strong understanding of one or more of the following:
    • Security telemetry (logs, network traffic, endpoint data)
    • Threat detection and anomaly detection systems
    • Identity, access, and data protection systems
  • Familiarity with security tooling ecosystems (SIEM, EDR, CASB, etc.)
  • Experience designing high-throughput, low-latency ML systems
  • Strong programming skills in Python, with production experience
  • Understanding of data pipelines, feature engineering, and MLOps practices

Nice To Haves

  • Experience building AI systems for SaaS security or GenAI security platforms
  • Familiarity with multi-agent systems for security automation
  • Experience with synthetic data generation for security use cases
  • Contributions to AI/ML research, open-source, or security tooling
  • Background in AI safety, adversarial ML, or model interpretability

Responsibilities

  • Design and lead architecture for AI-driven security platforms, including:
    • Threat detection and behavioral analytics
    • Data loss prevention (DLP) and insider risk detection
    • AI usage monitoring and policy enforcement (GenAI security)
  • Build systems that process high-volume, high-velocity security telemetry in real time
  • Lead development of state-of-the-art SLMs/LLMs tailored for security use cases: log analysis, alert triage, threat intelligence, and policy reasoning
  • Drive experimentation with modern architectures (Transformers, MoE, retrieval-augmented systems, hybrid models)
  • Balance trade-offs between model accuracy, latency, interpretability, and cost
  • Architect pipelines for:
    • Domain adaptation and instruction tuning on security-specific datasets
    • Model distillation and compression for efficient deployment in enterprise environments
  • Design and execute experiments for:
    • Alignment (RLHF/RLAIF) in security-sensitive contexts
    • Red-teaming and adversarial robustness of models
  • Design and oversee AI agents that:
    • Automate security operations (SOC workflows, triage, investigation)
    • Integrate with enterprise tools (SIEM, EDR, SaaS platforms)
  • Define architectures for tool use, reasoning, memory, and policy-aware decision making
  • Establish rigorous evaluation frameworks for:
    • Detection accuracy and false-positive/false-negative rates
    • Model robustness under adversarial conditions
    • Safety, hallucination, and misuse risks
  • Lead deep experimentation cycles to continuously improve model performance and reliability
  • Guide deployment of models into enterprise-scale, real-time environments
  • Optimize inference systems for low latency, high throughput, and cost efficiency
  • Collaborate with platform teams on ML infrastructure, data pipelines, and observability
  • Ensure models and systems meet enterprise security standards (SOC2, ISO, GDPR, etc.)
  • Establish best practices for:
    • Secure model development and deployment
    • Data privacy and protection in training pipelines
    • Responsible AI and model safety in adversarial environments

Benefits

  • Competitive compensation
  • Comprehensive benefits
  • Career success on your terms
  • Flexible work environment
  • Annual wellness and community outreach days
  • Always-on recognition for your contributions
  • Global collaboration and networking opportunities
  • Flexible time off
  • Comprehensive well-being program with two paid Wellbeing Days and two paid Volunteer Days per year
  • Three-week Work from Anywhere option
© 2026 Teal Labs, Inc