Information for Paul Christiano

Basic information

Item | Value
Country | United States
GitHub username | paulfchristiano
LessWrong username | paulfchristiano
Intelligent Agent Foundations Forum username | Paul_Christiano
Website | https://paulfchristiano.com
Source | [1]
Donations List Website (data still preliminary) | donor
Agendas | Iterated amplification, Debate

List of positions (11 positions)

Organization | Title | Start date | End date | Employment type | Source | Notes
Theiss Research | Contractor | | | contractor | [2] |
Future of Humanity Institute | Research Associate | | | | [3], [4] |
University of California, Berkeley | | | | | [2], [5], [6], [7] | One of 37 AGI Safety Researchers of 2015 funded by donations from Elon Musk and the Open Philanthropy Project
80,000 Hours | Advisor | 2013-09-18 | 2015-11-26 | advisor | [8], [9], [10] |
AI Impacts | | | | | [11] |
Machine Intelligence Research Institute | Research Associate | 2013-05-01 | 2015-03-01 | | [12], [13] |
OpenAI | | 2017-01-01 | | full-time | [1], [14], [15] | The description given is "working on alignment"
OpenAI | Intern | 2016-05-25 | | | [16], [17] |
Open Philanthropy | Technical advisor | | | advisor | [18] |
Ought | Collaborator | | | | [19] |
Ought | Board member | | | board member | [19] |

Products (3 products)

Name | Creation date | Description
Ordinary Ideas | 2011-12-21 | Paul Christiano’s blog about “weird AI stuff” [20].
AI Alignment | 2016-05-28 | Paul Christiano’s blog about AI alignment.
AI Alignment Prize | 2017-11-03 | With Zvi Mowshowitz and Vladimir Slepnev. A prize for work that advances understanding in the alignment of smarter-than-human artificial intelligence. Winners for the first round, as well as the announcement of the second round, can be found at [21]. Winners for the second round, as well as the announcement of the third round, can be found at [22].

Organization documents (2 documents)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes
Hiring engineers and researchers to help align GPT-3 | 2020-10-01 | Paul Christiano | LessWrong | OpenAI | | Hiring-related notice | AI safety | Paul Christiano posts on LessWrong a hiring note asking for engineers and researchers to work on GPT-3 alignment problems, as the language model is already being deployed in the OpenAI API.
What I’ll be doing at MIRI | 2019-11-12 | Evan Hubinger | LessWrong | Machine Intelligence Research Institute, OpenAI | Evan Hubinger, Paul Christiano, Nate Soares | Successful hire | AI safety | Evan Hubinger, who has just finished an internship at OpenAI with Paul Christiano and others, is going to start work at MIRI. His research will focus on solving inner alignment for amplification. Although MIRI’s research policy is one of nondisclosure-by-default [23], Hubinger expects that his own research will be published openly, and that he will continue collaborating with researchers at institutions like OpenAI, Ought, CHAI, DeepMind, FHI, etc. In a comment, MIRI Executive Director Nate Soares clarifies that “my view of MIRI’s nondisclosed-by-default policy is that if all researchers involved with a research program think it should obviously be public then it should obviously be public, and that doesn’t require a bunch of bureaucracy. [...] the policy is there to enable researchers, not to annoy them and make them jump through hoops.” Cross-posted from the AI Alignment Forum; the original is at [24].

Documents (1 document)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes
Challenges to Christiano’s capability amplification proposal | 2018-05-19 | Eliezer Yudkowsky | Machine Intelligence Research Institute | | Paul Christiano | Iterated amplification | This post was summarized in Alignment Newsletter #7 [25].

Similar people

Showing at most 20 people who are most similar in terms of the organizations they have worked at.

Person | Number of organizations in common | List of organizations in common
Ryan Carey | 4 | Future of Humanity Institute; Machine Intelligence Research Institute; OpenAI; Ought
Katja Grace | 3 | Future of Humanity Institute; AI Impacts; Machine Intelligence Research Institute
Daniel Dewey | 3 | Future of Humanity Institute; Machine Intelligence Research Institute; Open Philanthropy
Carl Shulman | 3 | Future of Humanity Institute; 80,000 Hours; Machine Intelligence Research Institute
Long Ouyang | 2 | Theiss Research; OpenAI
Sebastian Farquhar | 2 | Future of Humanity Institute; 80,000 Hours
David Manheim | 2 | Future of Humanity Institute; Open Philanthropy
Nick Beckstead | 2 | Future of Humanity Institute; Open Philanthropy
Stuart Russell | 2 | University of California, Berkeley; Machine Intelligence Research Institute
Qiaochu Yuan | 2 | University of California, Berkeley; Machine Intelligence Research Institute
Smitha Milli | 2 | University of California, Berkeley; OpenAI
Pieter Abbeel | 2 | University of California, Berkeley; OpenAI
Andrew Critch | 2 | University of California, Berkeley; Machine Intelligence Research Institute
Jacob Trefethen | 2 | 80,000 Hours; Open Philanthropy
Howie Lempel | 2 | 80,000 Hours; Open Philanthropy
Claire Zabel | 2 | 80,000 Hours; Open Philanthropy
Catherine Olsson | 2 | OpenAI; Open Philanthropy
John Salvatier | 2 | Future of Humanity Institute; AI Impacts
Brian Tse | 2 | 80,000 Hours; Open Philanthropy
Connor Flexman | 2 | AI Impacts; Machine Intelligence Research Institute