Information for Alexey Turchin

Basic information

Country: Russia
Facebook username: turchin.alexei
LessWrong username: turchin
Website: https://avturchin.livejournal.com/
Source: [1], [2], [3], [4]
Donations List Website (data still preliminary)

List of positions (1 position)

Organization: AI Safety Camp
Title: Team member
Start date: 2018-10-04
End date: 2018-10-14
Source: [5]

Products (3 products)

AGI Failure Modes and Levels map (created 2015-01-01). A flowchart about failure modes of artificial general intelligence, grouped by stage of development. There is an accompanying post on LessWrong at [4].

AGI Safety Solutions Map (created 2015-01-01). A flowchart of potential solutions to AI safety. There is an accompanying post on LessWrong at [6].

“Levels of defense” in AI safety (created 2017-12-12). A flowchart applying multilevel defense to AI safety. There is an accompanying post on LessWrong at [7].

Organization documents (0 documents)

Documents (0 documents)

Similar people

Showing at most 20 people who are most similar in terms of which organizations they have worked at. Each person below has one organization in common with Alexey Turchin: AI Safety Camp.

Justin Shovelain
Johannes Heidecke
Remmelt Ellen
Jan Kulveit
Anne Wissemann
Jiří Nádvorník
Kristina Nemcova
Jessica Cooper
Joe Collman
Alexis Carlier
Alex Turner
Joar Skalse
Dmitrii Krasheninnikov
Michael Cohen
Jonas Müller
Nicholas Goldowsky-Dill
Daniel Kokotajlo
Ronja Lutz
Brandon Perry
Lauro Langosco