"Why did you do that?" Explainable intelligent robots

Bibliographic Details
Main Author: Sheh, Raymond
Format: Conference Paper
Published: 2017
Online Access:http://hdl.handle.net/20.500.11937/66551
Description: © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. As autonomous intelligent systems become more widespread, society is beginning to ask: "What are the machines up to?" Various forms of artificial intelligence control our latest cars, load-balance components of our power grids, dictate much of the movement in our stock markets, and help doctors diagnose and treat our ailments. As these systems become increasingly able to learn and model more complex phenomena, the ability of human users to understand the reasoning behind their decisions often decreases, making it very difficult to ensure that a robot will perform properly and to correct its errors. In this paper, we outline a variety of techniques for generating the underlying knowledge required for explainable artificial intelligence, ranging from early work in expert systems through to systems based on Behavioural Cloning. These techniques may be used to build intelligent robots that explain their decisions and justify their actions. We then illustrate how decision trees are particularly well suited to generating these kinds of explanations, and discuss how additional explanations can be obtained, beyond the structure of the tree itself, from knowledge of how the training data was generated. Finally, we illustrate these capabilities in the context of a robot learning to drive over rough terrain, both in simulation and in reality.
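
The abstract's point about decision trees is that the path from root to leaf is itself a chain of tested conditions, so a prediction can be narrated directly. The sketch below is only an illustration of that general idea, not code or data from the paper: the terrain features, thresholds, actions, and toy training set are all invented for the example.

```python
# Minimal sketch (not from the paper): turning a decision tree's decision path
# into a human-readable explanation. Feature names, actions, and the toy
# training data are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical terrain features observed by the robot.
feature_names = ["roll_deg", "pitch_deg", "step_height_cm"]
actions = ["drive_forward", "slow_down", "back_up"]

# Invented demonstration data: feature vectors and the action taken.
X = np.array([[2, 1, 0], [5, 3, 2], [15, 10, 8], [20, 18, 12], [3, 2, 1], [18, 14, 10]])
y = np.array([0, 0, 1, 2, 0, 2])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain(sample: np.ndarray) -> str:
    """Walk the tree from root to leaf and narrate each test the sample satisfied."""
    tree = clf.tree_
    node = 0
    reasons = []
    while tree.children_left[node] != tree.children_right[node]:  # internal node
        f, t = tree.feature[node], tree.threshold[node]
        if sample[f] <= t:
            reasons.append(f"{feature_names[f]} = {sample[f]:.1f} <= {t:.1f}")
            node = tree.children_left[node]
        else:
            reasons.append(f"{feature_names[f]} = {sample[f]:.1f} > {t:.1f}")
            node = tree.children_right[node]
    action = actions[int(np.argmax(tree.value[node]))]
    return f"Chose '{action}' because " + " and ".join(reasons)

print(explain(np.array([17.0, 12.0, 9.0])))
```

The explanation here comes purely from the tree structure; the further explanations the paper refers to (based on how the training data was generated, e.g. via Behavioural Cloning) would require additional provenance information beyond what this sketch models.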