Sr. Staff Software Engineer, Machine Learning

LinkedIn
Mountain View, CA
Hybrid

About The Position

The Generative AI (GenAI) Safety team sits at the heart of LinkedIn’s Responsible AI & Governance (RAI‑G) organization, with a mission to set the gold standard for AI safety across all AI applications company‑wide. We ensure that every generative AI product is developed and deployed responsibly, ethically, and securely. By combining rigorous governance with cutting‑edge ML research, we identify and mitigate risks such as bias, hallucination, misuse, and privacy leakage. As both the AI Safety Research team and the central AI safety engineering function, we build safety guardrails, evaluation pipelines, and alignment techniques that enable safe innovation at scale. Our work is foundational to the company’s AI strategy and influences standards across the industry. We partner closely with Legal, Compliance, AI Infrastructure, and Product teams to embed safety into every stage of the AI lifecycle.

The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team. This role will be based in Sunnyvale, CA.

Requirements

  • 2+ years as a Technical Lead, Staff Engineer, Principal Engineer, or equivalent.
  • 5+ years of industry experience in AI or Machine Learning Engineering.
  • BA/BS degree in Computer Science or a related technical discipline, or equivalent practical experience.

Nice To Haves

  • 10+ years of industry and/or research experience in AI/ML delivering impact at scale.
  • PhD in CS/AI/ML or related field (or equivalent research/industry achievements).
  • Expert understanding of Transformers; hands-on experience training, fine‑tuning, distilling/compressing, and deploying LLMs in production.
  • Track record applying LLMs to recommender systems and language agents.
  • Demonstrated leadership in red‑teaming (manual + automated), safety benchmarking/evaluations, content safety/guardrails, prompt‑injection/jailbreak detection, and abuse/misuse prevention.
  • Experience translating Legal/Compliance requirements (e.g., EU AI Act) into technical controls, including harm taxonomies, model cards, and risk assessments.
  • Proven ability to design safety‑first architectures (evaluation pipelines, moderation services, policy engines, incident response & telemetry) for distributed, real‑time ML systems.
  • Strong understanding of RL (e.g., RLHF/RLAIF, offline/online RL) for language‑based agents, including safety‑aware reward design and feedback loops.
  • Advanced Python and PyTorch; familiarity with TensorFlow.
  • Experience with safety evaluation tooling (e.g., platforms akin to LLUME) and safety datasets/benchmarks.
  • Significant contributions via top‑tier publications (NeurIPS, ICLR, ICML, ACL) and/or impactful open‑source or widely used safety tooling.
  • Proven technical leadership mentoring ~15 engineers, setting direction, and elevating execution quality.
  • Effective liaison with Product Engineering (tracking experiments and venture bets, and aligning safety research with upcoming product directions) and strong collaboration with Legal, Compliance, AI Infra, and Policy.
  • Experience with advanced reasoning/planning (e.g., CoT/ToT, self‑reflection, program synthesis, symbolic/neuro‑symbolic methods, search‑augmented reasoning, verification‑aware decoding).

Responsibilities

  • Serve as the senior technical leader shaping the company’s generative AI safety direction. Define the roadmap for safety alignment research, model evaluation, and system‑level protections.
  • Guide LinkedIn’s research agenda in alignment, robustness, and responsible model behaviors. Stay ahead of academic and industry advances, rapidly translating insights into practical, production‑ready solutions.
  • Provide architectural leadership for scalable safety systems (benchmarking, red‑teaming, content safety, privacy‑preserving training, and real‑time guardrails), ensuring they are reliable, performant, and deeply integrated into AI infrastructure.
  • Tackle LinkedIn’s toughest ethical, regulatory, and risk‑driven problems. Bring clarity and direction in areas with evolving standards, ensuring the company ships safe GenAI experiences at speed.
  • Partner closely with product engineering teams to stay current on emerging experiments, venture bets, and product innovations, ensuring safety research and tooling anticipate and support the next wave of product development.
  • Collaborate with Legal, Compliance, Privacy, Infra, and Policy teams to operationalize safety requirements, translate regulatory guidance into technical specifications, and ensure end‑to‑end alignment across disciplines.
  • Mentor and grow a team of ~15 engineers across research, ML, and systems. Elevate engineering rigor, drive a high bar for execution, and nurture future technical leaders in AI safety.
  • Ensure safety techniques, tools, and evaluations are deployed across all GenAI products, safeguarding member trust while enabling safe, scalable innovation.

Benefits

  • Generous health and wellness programs
  • Time away for employees of all levels
  • Annual performance bonus
  • Stock
  • Other applicable incentive compensation plans