Scientific Fellow, AI Safety, R&D Data Science and Digital Health

Johnson & Johnson Innovative Medicine, San Diego, CA

About The Position

Johnson & Johnson Innovative Medicine (IM) is recruiting for a Scientific Fellow, AI Safety, R&D Data Science and Digital Health. This position can be located in New Brunswick, NJ; Titusville, NJ; Spring House, PA; La Jolla, CA; Cambridge, MA; Beerse, Belgium; or Zug, Switzerland. The position requires up to 25% travel. Candidates interested in European locations, please apply to requisition R-070274.

Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Visionaries like you work on teams that save lives by developing the medicines of tomorrow. Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way. Learn more at https://www.jnj.com/innovative-medicine

About the Role

We are seeking a highly technical leader in AI safety for our Research & Development Data Science & Digital Health (DSDH) organization. Reporting directly to the Vice President of AI/ML & Digital Health, this role is responsible for embedding AI safety, robustness, and observability into the design, evaluation, and deployment of advanced AI systems across the DSDH portfolio and R&D use cases. These systems span foundation and predictive AI models, generative AI, and autonomous agentic systems supporting discovery, development, clinical, and regulatory workflows.

This is a hands-on, technical, and deeply scientific fellow role, focused on shaping model and AI system design and evaluation while contributing to policy, compliance, and enterprise governance. The Scientific Fellow will work closely with AI scientists, engineers, AI Quality & Optimization, Global Regulatory Affairs, Quantitative Scientists, and Johnson & Johnson Technology (JJT) to ensure AI systems deployed in R&D workflows are safe, trustworthy, and fit-for-purpose as AI capability and autonomy scale.

Requirements

  • PhD or equivalent advanced degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a related field.
  • Minimum of 10 years of post-academic industry experience.
  • Proven track record and strong hands-on experience with modern AI systems, including foundation models, multimodal generative AI, large reasoning models, or agentic systems.
  • Extensive experience with AI safety, robustness, reliability, or evaluation in high‑impact or high‑stakes domains.
  • Demonstrated ability to reason about system‑level behavior, failure modes, and risk, beyond model accuracy and robustness alone.
  • Excellent coding and software development capabilities.
  • Experience working in highly interdisciplinary and matrixed environments spanning AI, data science, engineering, and life science.
  • Strong communication skills and ability to influence AI model and systems design without formal authority.

Nice To Haves

  • Experience in the Life Sciences, Healthcare, Pharmaceutical, or Medical Tech sector is preferred.

Responsibilities

  • Strategic direction and research priorities: Shape DSDH and IM R&D strategy for safe and trustworthy AI by defining multi-year research priorities, capability roadmaps, and investment recommendations for AI safety across discovery, development, clinical, and regulatory workflows. Represent AI safety as a senior scientific voice in function- and enterprise-level councils and working groups; set standards and priorities for safe scaling of GenAI and agentic systems; and provide technical leadership on safety principles and implementation for agentic and autonomous systems.
  • AI safety research and development: Research, embed, and implement AI safety-by-design principles into the development of foundation models, AI and generative AI applications, and agentic systems across R&D use cases. During all design phases, partner directly with AI and quantitative scientists across IM R&D, as well as with technical leads in JJT, to: identify potential failure modes, risks, and appropriate levels of autonomy and human oversight; define safety-relevant observability signals, acceptable failure envelopes, and mitigation strategies tailored to different R&D contexts (research, clinical, regulatory); and ensure monitoring captures unsafe behaviors, not only performance drift. Design and execute safety-focused models and evaluations, including but not limited to stress testing for hallucinations, edge cases, and failure propagation in multi-step reasoning and agent workflows.
  • Technical guidance and policy influence: Provide technical leadership for AI safety in regulated environments, covering use cases such as regulatory documentation for AI-enabled R&D processes and submissions, and autonomous agents in GxP environments. Influence internal policy and external best practices by contributing to guidance documents and points-to-consider for safe GenAI and agentic systems in pharma R&D, including participation in expert working groups and advisory panels. Track emerging risks, research, and best practices in AI safety and translate them into practical guidance for internal teams.
  • Securing funding and resources: Develop business cases to secure investment for AI safety capabilities and lead execution of funded initiatives.
  • External leadership: Drive J&J innovation in the field, leading to high-visibility publications in top-tier AI conferences and journals and patents around AI safety in generative AI, reasoning, and multi-agent systems. Serve as an external ambassador for J&J IM R&D AI safety through invited talks and keynotes, conference leadership roles (area chair, workshop organizer), and participation in cross-industry consortia and standards bodies. Establish and lead strategic external collaborations with academic, industry, and governmental partners focused on AI safety in high-stakes biomedical and regulatory contexts.
  • Mentorship: Establish a sustained mentorship program for AI safety across IM R&D, including coaching on technical approaches and trends, publication strategy, and cross-functional influence. Actively contribute to the IM R&D Fellow Community by showcasing technical excellence in internal and external events, serving as the AI safety voice in the community, and contributing to activities sponsored by the IM R&D Science & Technology Council.

Benefits

  • Vacation – 120 hours per calendar year
  • Sick time – 40 hours per calendar year; for employees who reside in the State of Colorado – 48 hours per calendar year; for employees who reside in the State of Washington – 56 hours per calendar year
  • Holiday pay, including Floating Holidays – 13 days per calendar year
  • Work, Personal and Family Time – up to 40 hours per calendar year
  • Parental Leave – 480 hours within one year of the birth/adoption/foster care of a child
  • Bereavement Leave – 240 hours for an immediate family member; 40 hours for an extended family member per calendar year
  • Caregiver Leave – 80 hours in a 52-week rolling period
  • Volunteer Leave – 32 hours per calendar year
  • Military Spouse Time-Off – 80 hours per calendar year

What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Education Level

Ph.D. or professional degree

Number of Employees

5,001-10,000 employees
