Information for deep reinforcement learning from human preferences

Basic information

Associated people:

Associated organizations:

Overview
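
The agenda takes its name from the approach in Christiano et al. (2017), "Deep reinforcement learning from human preferences", in which a reward model is trained from human comparisons of pairs of trajectory segments, and a reinforcement learning agent is then trained against the learned reward. Below is a minimal, illustrative sketch of the pairwise preference (Bradley-Terry) loss at the core of that approach; the RewardModel class, its layer sizes, and the preference_loss function are hypothetical names chosen for this sketch, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical reward model: maps an observation-action feature vector to a scalar reward.
class RewardModel(nn.Module):
    def __init__(self, input_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def preference_loss(model: RewardModel,
                    segment_a: torch.Tensor,
                    segment_b: torch.Tensor,
                    preference: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss on a pair of trajectory segments.

    segment_a, segment_b: (batch, timesteps, input_dim) tensors.
    preference: (batch,) tensor, 1.0 if the human preferred segment_a, else 0.0.
    """
    # Sum predicted per-step rewards over each segment.
    sum_a = model(segment_a).sum(dim=1)
    sum_b = model(segment_b).sum(dim=1)
    # Logit of the probability that segment_a is preferred under the Bradley-Terry model.
    logits = sum_a - sum_b
    return nn.functional.binary_cross_entropy_with_logits(logits, preference)
```

In the full method, this loss is one step of a loop that alternates between collecting trajectory segments from the current policy, querying humans for comparisons, fitting the reward model, and optimizing the policy with a standard reinforcement learning algorithm against the learned reward.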

Goals of the agenda

Assumptions the agenda makes

AI timelines

Nature of intelligence

Other

Documents

Title | Publication date | Author | Publisher | Affected organizations | Affected people | Affected agendas | Notes
AI Alignment Podcast: An Overview of Technical AI Alignment with Rohin Shah (Part 2) | 2019-04-25 | Lucas Perry | Future of Life Institute | | Rohin Shah, Dylan Hadfield-Menell, Gillian Hadfield | Embedded agency, Cooperative inverse reinforcement learning, inverse reinforcement learning, deep reinforcement learning from human preferences, recursive reward modeling, iterated amplification | Part two of a podcast episode that goes into detail about some technical approaches to AI alignment.