Abstract

A challenge in building useful artificial intelligence (AI) systems is that people need to understand how they work in order to achieve appropriate trust and reliance. This has become a topic of considerable interest, manifested as a surge of research on Explainable AI (XAI). Much of the research assumes a model in which the AI automatically generates an explanation and presents it to the user, whose understanding of the explanation then leads to better performance. Psychological research on explanatory reasoning shows that this is a limited model. The design of XAI systems must be fully informed by a model of cognition and a model of pedagogy, based on empirical evidence of what happens when people try to explain complex systems to other people and of what happens as people try to reason out how a complex system works. In this article we discuss how and why C. S. Peirce's notion of abduction is the best model for XAI. Peirce's notion of abduction as an exploratory activity is supported by its concordance with models of expert reasoning developed by modern applied cognitive psychologists.