Information for Victoria Krakovna

Basic information

Item | Value
Facebook username | vkrakovna
Intelligent Agent Foundations Forum username | 70
Website | https://vkrakovna.wordpress.com/
Donations List Website (data still preliminary) | donor

List of positions (3 positions)

Organization | Title | Start date | End date | Employment type | Source | Notes
Future of Life Institute | Co-Founder | | | | [1] |
Google DeepMind | Research Scientist | | | | [2], [3], [4] |
Machine Intelligence Research Institute | Research advisor | 2018-09-30 | | advisor | [5] |

Products (3 products)

Name | Creation date | Description
AI Safety Discussion | 2016-02-21 | A Facebook discussion group about AI safety. It is a closed group, so one needs to request access to see posts.
Introductory resources on AI safety research | 2016-02-28 | A list of readings on long-term AI safety. Mirrored at [6]. An updated list is available at [7].
AI safety resources | 2017-10-01 | A list of resources for long-term AI safety. Appears to have been first announced at [8].

Organization documents (0 documents)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Document scope | Cause area | Notes

Documents (1 document)

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes
New safety research agenda: scalable agent alignment via reward modeling | 2018-11-20 | Victoria Krakovna | LessWrong | Google DeepMind | Jan Leike | Recursive reward modeling, iterated amplification | Blog post on LessWrong announcing the recursive reward modeling agenda. Comments in the discussion thread clarify several aspects of the agenda, including its relation to Paul Christiano's iterated amplification agenda, whether the DeepMind safety team is thinking about the problem of whether the human user is a safe agent, and details about alternating quantifiers in the analogy to complexity theory. Jan Leike is listed as an affected person because he is the lead author of the agenda and is mentioned in the blog post, and because he responds to several questions raised in the comments.

Similar people

Showing at most 20 people who are most similar in terms of the organizations they have worked at.

Person | Number of organizations in common | List of organizations in common
Nick Bostrom | 3 | Future of Life Institute, Google DeepMind, Machine Intelligence Research Institute
Jesse Galef | 2 | Future of Life Institute, Machine Intelligence Research Institute
Stuart Russell | 2 | Future of Life Institute, Machine Intelligence Research Institute
Max Tegmark | 2 | Future of Life Institute, Machine Intelligence Research Institute
Jaan Tallinn | 2 | Future of Life Institute, Machine Intelligence Research Institute
Daniel Dewey | 2 | Future of Life Institute, Machine Intelligence Research Institute
Jan Leike | 2 | Google DeepMind, Machine Intelligence Research Institute
Janos Kramar | 2 | Future of Life Institute, Machine Intelligence Research Institute
Martin Rees | 1 | Future of Life Institute
Vladimir Nesov | 1 | Machine Intelligence Research Institute
Edwin Evans | 1 | Machine Intelligence Research Institute
Michael Raimondi | 1 | Machine Intelligence Research Institute
Ioven Fables | 1 | Machine Intelligence Research Institute
Anja Heinisch | 1 | Machine Intelligence Research Institute
Joshua Fox | 1 | Machine Intelligence Research Institute
Robert Zahra | 1 | Machine Intelligence Research Institute
Aruna Vassar | 1 | Machine Intelligence Research Institute
Henrik Jonsson | 1 | Machine Intelligence Research Institute
Aubrey de Grey | 1 | Machine Intelligence Research Institute
Kevin Fischer | 1 | Machine Intelligence Research Institute