Item | Value |
---|---|
Organization | Title | Start date | End date | Employment type | Source | Notes |
---|---|---|---|---|---|---|
Leverhulme Centre for the Future of Intelligence | Spoke Leader | | | | [1], [2] | |
Centre for the Study of Existential Risk | Advisor | | | | [3], [4] | |
Future of Life Institute | Advisor | | | | [5] | |
University of California, Berkeley | | | | | [6], [7] | |
University of California, Berkeley | Professor of Computer Science | 1986-01-01 | | | [8], [9], [10] | One of 37 AGI Safety Researchers of 2015 funded by donations from Elon Musk and the Open Philanthropy Project |
Machine Intelligence Research Institute | Research Advisor | 2015-07-01 | 2018-05-28 | | [11], [12] | |
Center for Human-Compatible AI | Faculty | 2016-10-01 | | | [13], [14] | |
Machine Intelligence Research Institute | Spotlighted Advisor | 2018-09-10 | | | [15], [16] | |
Berkeley Existential Risk Initiative | Board advisor | 2020-01-01 | | advisor | [17], [18] | |
Name | Creation date | Description |
---|---|---|
Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|
Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes |
---|---|---|---|---|---|---|---|
Showing at most 20 people who are most similar in terms of which organizations they have worked at.