Item | Value |
---|---|
Country | United States |
GitHub username | paulfchristiano |
LessWrong username | paulfchristiano |
Intelligent Agent Foundations Forum username | Paul_Christiano |
Website | https://paulfchristiano.com |
Source | [1] |
Donations List Website (data still preliminary) | donor |
Agendas | Iterated amplification, Debate |
Organization | Title | Start date | End date | Employment type | Source | Notes |
---|---|---|---|---|---|---|
Open Philanthropy | Technical advisor | | | | [2] |
University of California, Berkeley | | | | | [3], [4], [5], [6] | One of 37 AGI Safety Researchers of 2015 funded by donations from Elon Musk and the Open Philanthropy Project |
Machine Intelligence Research Institute | Research Associate | 2013-05-01 | 2015-03-01 | | [7], [8] |
80,000 Hours | Advisor | 2013-09-18 | 2013-09-18 | advisor | [9] |
OpenAI | Intern | 2016-05-25 | 2017-01-01 | | [10], [11] |
OpenAI | | 2017-01-01 | 2021-01-01 | full-time | [1], [12], [13], [14] | The description given is "working on alignment" |
AI Impacts | Contributor | 2017-10-26 | 2017-10-26 | | [15], [16] |
Future of Humanity Institute | Research Associate | 2017-11-24 | 2024-04-16 | | [17], [18] |
Ought | Board member & collaborator | 2018-10-17 | 2019-02-02 | board member | [19], [20] |
Redwood Research | Board member | 2021-01-01 | 2023-01-22 | board member | [21], [22] |
Alignment Research Center | Researcher | 2021-04-26 | | | [23], [24], [25] |
Ought | Advisor | 2021-05-14 | 2023-09-01 | advisor | [26] |
Redwood Research | Director | 2023-03-31 | 2023-08-30 | board member | [27], [28] |
Name | Creation date | Description |
---|---|---|
Ordinary Ideas | 2011-12-21 | Paul Christiano’s blog about “weird AI stuff” [29]. |
AI Alignment | 2016-05-28 | Paul Christiano’s blog about AI alignment. |
AI Alignment Prize | 2017-11-03 | Run with Zvi Mowshowitz and Vladimir Slepnev; a prize for work that advances understanding of the alignment of smarter-than-human artificial intelligence. Winners of the first round, along with the announcement of the second round, are at [30]. Winners of the second round, along with the announcement of the third round, are at [31]. |
Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|
Hiring engineers and researchers to help align GPT-3 | 2020-10-01 | Paul Christiano | LessWrong | OpenAI | | Hiring-related notice | AI safety | Paul Christiano posts a hiring notice on LessWrong seeking engineers and researchers to work on GPT-3 alignment problems, as the language model is already being deployed via the OpenAI API |
What I’ll be doing at MIRI | 2019-11-12 | Evan Hubinger | LessWrong | Machine Intelligence Research Institute, OpenAI | Evan Hubinger, Paul Christiano, Nate Soares | Successful hire | AI safety | Evan Hubinger, who has just finished an internship at OpenAI with Paul Christiano and others, is going to start work at MIRI. His research will focus on solving inner alignment for amplification. Although MIRI's research policy is one of nondisclosure-by-default [32], Hubinger expects that his own research will be published openly and that he will continue collaborating with researchers at institutions such as OpenAI, Ought, CHAI, DeepMind, and FHI. In a comment, MIRI Executive Director Nate Soares clarifies that "my view of MIRI's nondisclosed-by-default policy is that if all researchers involved with a research program think it should obviously be public then it should obviously be public, and that doesn't require a bunch of bureaucracy. [...] the policy is there to enable researchers, not to annoy them and make them jump through hoops." Cross-posted from the AI Alignment Forum; the original is at [33] |
Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes |
---|---|---|---|---|---|---|---|
Challenges to Christiano’s capability amplification proposal | 2018-05-19 | Eliezer Yudkowsky | Machine Intelligence Research Institute | | Paul Christiano | Iterated amplification | This post was summarized in Alignment Newsletter #7 [34]. |