Information for Paul Christiano

Basic information

| Item | Value |
|---|---|
| Country | United States |
| GitHub username | paulfchristiano |
| LessWrong username | paulfchristiano |
| Intelligent Agent Foundations Forum username | Paul_Christiano |
| Website | https://paulfchristiano.com |
| Source | [1] |
| Donations List Website (data still preliminary) | donor |
| Agendas | Iterated amplification, Debate |

List of positions (13 positions)

| Organization | Title | Start date | End date | Employment type | Source | Notes |
|---|---|---|---|---|---|---|
| Open Philanthropy | Technical advisor | | | | [2] | |
| University of California, Berkeley | | | | | [3], [4], [5], [6] | One of 37 AGI Safety Researchers of 2015 funded by donations from Elon Musk and the Open Philanthropy Project |
| Machine Intelligence Research Institute | Research Associate | 2013-05-01 | 2015-03-01 | | [7], [8] | |
| 80,000 Hours | Advisor | 2013-09-18 | 2013-09-18 | advisor | [9] | |
| OpenAI | Intern | 2016-05-25 | 2017-01-01 | | [10], [11] | |
| OpenAI | | 2017-01-01 | 2021-01-01 | full-time | [1], [12], [13], [14] | The description given is "working on alignment" |
| AI Impacts | Contributor | 2017-10-26 | 2017-10-26 | | [15], [16] | |
| Future of Humanity Institute | Research Associate | 2017-11-24 | 2024-04-16 | | [17], [18] | |
| Ought | Board member & collaborator | 2018-10-17 | 2019-02-02 | board member | [19], [20] | |
| Redwood Research | Board Member | 2021-01-01 | 2023-01-22 | board member | [21], [22] | |
| Alignment Research Center | Researcher | 2021-04-26 | | | [23], [24], [25] | |
| Ought | Advisor | 2021-05-14 | 2023-09-01 | advisor | [26] | |
| Redwood Research | Director | 2023-03-31 | 2023-08-30 | board member | [27], [28] | |

Products (3 products)

| Name | Creation date | Description |
|---|---|---|
| Ordinary Ideas | 2011-12-21 | Paul Christiano’s blog about “weird AI stuff” [29]. |
| AI Alignment | 2016-05-28 | Paul Christiano’s blog about AI alignment. |
| AI Alignment Prize | 2017-11-03 | With Zvi Mowshowitz and Vladimir Slepnev. A prize for work that advances understanding in the alignment of smarter-than-human artificial intelligence. Winners of the first round, along with the announcement of the second round, are at [30]; winners of the second round, along with the announcement of the third round, are at [31]. |

Organization documents (2 documents)

| Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes |
|---|---|---|---|---|---|---|---|---|
| Hiring engineers and researchers to help align GPT-3 | 2020-10-01 | Paul Christiano | LessWrong | OpenAI | | Hiring-related notice | AI safety | Paul Christiano posts on LessWrong a hiring note asking for engineers and researchers to work on GPT-3 alignment problems, as the language model is already being deployed in the OpenAI API. |
| What I’ll be doing at MIRI | 2019-11-12 | Evan Hubinger | LessWrong | Machine Intelligence Research Institute, OpenAI | Evan Hubinger, Paul Christiano, Nate Soares | Successful hire | AI safety | Evan Hubinger, who has just finished an internship at OpenAI with Paul Christiano and others, is going to start work at MIRI. His research will focus on solving inner alignment for amplification. Although MIRI's research policy is one of nondisclosure-by-default [32], Hubinger expects that his own research will be published openly and that he will continue collaborating with researchers at institutions such as OpenAI, Ought, CHAI, DeepMind, and FHI. In a comment, MIRI Executive Director Nate Soares clarifies that "my view of MIRI's nondisclosed-by-default policy is that if all researchers involved with a research program think it should obviously be public then it should obviously be public, and that doesn't require a bunch of bureaucracy. [...] the policy is there to enable researchers, not to annoy them and make them jump through hoops." Cross-posted from the AI Alignment Forum; the original is at [33]. |

Documents (1 document)

| Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes |
|---|---|---|---|---|---|---|---|
| Challenges to Christiano’s capability amplification proposal | 2018-05-19 | Eliezer Yudkowsky | Machine Intelligence Research Institute | | Paul Christiano | Iterated amplification | This post was summarized in Alignment Newsletter #7 [34]. |

Similar people

Showing at most 20 people who are most similar in terms of the organizations they have worked at.

| Person | Number of organizations in common | List of organizations in common |
|---|---|---|
| Daniel Dewey | 4 | 80,000 Hours; Future of Humanity Institute; Machine Intelligence Research Institute; Open Philanthropy |
| Ryan Carey | 4 | Future of Humanity Institute; Machine Intelligence Research Institute; OpenAI; Ought |
| Carl Shulman | 3 | 80,000 Hours; Future of Humanity Institute; Machine Intelligence Research Institute |
| Claire Zabel | 3 | 80,000 Hours; Open Philanthropy; Redwood Research |
| Nick Beckstead | 3 | 80,000 Hours; Future of Humanity Institute; Open Philanthropy |
| Katja Grace | 3 | AI Impacts; Future of Humanity Institute; Machine Intelligence Research Institute |
| Helen Toner | 3 | Future of Humanity Institute; Open Philanthropy; OpenAI |
| Girish Sastry | 3 | Future of Humanity Institute; OpenAI; Ought |
| Ben Weinstein-Raun | 3 | Machine Intelligence Research Institute; Ought; Redwood Research |
| Holden Karnofsky | 3 | Open Philanthropy; OpenAI; Redwood Research |
| Pieter Abbeel | 2 | University of California, Berkeley; OpenAI |
| Stuart Russell | 2 | University of California, Berkeley; Machine Intelligence Research Institute |
| Andrew Critch | 2 | University of California, Berkeley; Machine Intelligence Research Institute |
| Owen Cotton-Barratt | 2 | 80,000 Hours; Redwood Research |
| Niel Bowerman | 2 | 80,000 Hours; Future of Humanity Institute |
| Lincoln Quirk | 2 | 80,000 Hours; Machine Intelligence Research Institute |
| Eli Rose | 2 | 80,000 Hours; Open Philanthropy |
| Toby Ord | 2 | 80,000 Hours; Future of Humanity Institute |
| Alex Lawsen | 2 | 80,000 Hours; Open Philanthropy |
| Howie Lempel | 2 | 80,000 Hours; Open Philanthropy |