Carnegie Mellon University, in collaboration with FAR.AI and Cornell University, is seeking a postdoctoral researcher for a one-year appointment (with the possibility of extension) in Dietrich College. The position has a flexible start date, though early summer 2026 is preferred. The postdoctoral fellow will work closely with Thomas Costello (CMU), Kellin Pelrine (FAR.AI), Gordon Pennycook (Cornell), and David Rand (Cornell). In addition to being a core member of the research team, the fellow will be embedded within the FAR.AI research community, providing direct access to technical AI safety expertise and a broader interdisciplinary network.

This position builds on our recent work demonstrating that large language models (LLMs) can meaningfully influence human beliefs, including shifting political attitudes, reducing belief in conspiracy theories, and, more recently, increasing belief in them. The research sits at the intersection of AI safety and behavioral science, examining both the persuasive capabilities of AI systems and the risks they pose to the information ecosystem.

Adaptability, excellence, and passion are vital qualities at Carnegie Mellon University. We are seeking a team member who can interact effectively with a diverse population of internal and external partners, with a high level of integrity, who shares our values, and who will support the mission of the university through their work.

You should demonstrate:

We're open to candidates from technical/computational backgrounds (ML, NLP, AI safety, working with LLMs) as well as computational social scientists. The ideal candidate bridges these worlds or is eager to learn across them. A combination of education and relevant experience from which comparable knowledge is demonstrated may be considered.
Job Type: Full-time
Career Level: Entry Level
Education Level: Ph.D. or professional degree