Towards Safe Artificial General Intelligence
dc.contributor.author | Everitt, Tom | |
dc.date.accessioned | 2019-06-25T23:50:11Z | |
dc.date.available | 2019-06-25T23:50:11Z | |
dc.date.issued | 2018 | |
dc.description.abstract | The field of artificial intelligence has recently experienced a number of breakthroughs thanks to progress in deep learning and reinforcement learning. Computer algorithms now outperform humans at Go, Jeopardy, image classification, and lip reading, and are becoming very competent at driving cars and interpreting natural language. This rapid development has led many to conjecture that artificial intelligence with greater-than-human ability on a wide range of tasks may not be far off. This in turn raises concerns about whether we know how to control such systems, should we succeed in building them. Indeed, if humanity were to find itself in conflict with a system of much greater intelligence than its own, human society would likely lose. One way to avoid such a conflict is to ensure that any future AI system with potentially greater-than-human intelligence has goals that are aligned with the goals of the rest of humanity. For example, it should not wish to kill humans or steal their resources. The main focus of this thesis is therefore goal alignment, i.e. how to design artificially intelligent agents whose goals coincide with the goals of their designers. The focus is mainly on variants of reinforcement learning, as reinforcement learning currently seems to be the most promising path towards powerful artificial intelligence. We identify and categorize goal misalignment problems in reinforcement learning agents as designed today, and give examples of how these agents may cause catastrophes in the future. We also suggest a number of reasonably modest modifications that can be used to avoid or mitigate each identified misalignment problem. Finally, we study various choices of decision algorithm, and conditions under which a powerful reinforcement learning system will permit us to shut it down. The central conclusion is that while reinforcement learning systems as designed today are inherently unsafe to scale to human levels of intelligence, there are ways to potentially address many of these issues without straying too far from the currently successful reinforcement learning paradigm. Much work remains, however, in turning the high-level proposals of this thesis into practical algorithms. | en_AU
dc.identifier.other | b59286921 | |
dc.identifier.uri | http://hdl.handle.net/1885/164227 | |
dc.language.iso | en_AU | en_AU |
dc.subject | Artificial intelligence | en_AU |
dc.subject | AI safety | en_AU |
dc.subject | reinforcement learning | en_AU |
dc.subject | causality | en_AU |
dc.title | Towards Safe Artificial General Intelligence | en_AU |
dc.type | Thesis (PhD) | en_AU |
dcterms.valid | 2018 | en_AU |
local.contributor.affiliation | College of Engineering and Computer Science, The Australian National University | en_AU |
local.contributor.authoremail | tom.everitt@anu.edu.au | en_AU |
local.contributor.supervisor | Hutter, Marcus | |
local.contributor.supervisorcontact | marcus.hutter@anu.edu.au | en_AU |
local.description.notes | The author deposited 26/06/2019 | en_AU
local.identifier.doi | 10.25911/5d134a2f8a7d3 | |
local.identifier.proquest | Yes | |
local.mintdoi | mint | en_AU |
local.type.degree | Doctor of Philosophy (PhD) | en_AU |