Notes:
This paper was just accepted to the ACM Computing Surveys (CSUR) journal today!! I’m really excited, and grateful to my co-author and adviser, Dr. Nisar Ahmed. We put a lot of work into this document and hope it will be useful to others.
This is an evolution of the (much less refined) document that I wrote for my prelim exam (which, unfortunately, has a very similar name). The document has been greatly improved thanks to feedback from Eric Frew and Mike Mozer (my committee members), as well as comments from the CSUR reviewers. I am very happy with the final product.
Abstract: People who design, use, and are affected by autonomous artificially intelligent agents want to be able to \emph{trust} such agents – that is, to know that these agents will perform correctly, to understand the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc, and have not been formally related to each other or to formal trust models. This paper presents a survey of \emph{algorithmic assurances}, i.e. programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents. Algorithmic assurances are first formally defined and classified from the perspective of formally modeled human-artificially intelligent agent trust relationships. Building on these definitions, a synthesis of research across communities such as machine learning, human-computer interaction, robotics, e-commerce, and others reveals that assurance algorithms naturally fall along a spectrum in terms of their impact on an agent’s core functionality, with seven notable classes ranging from integral assurances (which impact an agent’s core functionality) to supplemental assurances (which have no direct effect on agent performance). Common approaches within each of these classes are identified and discussed; benefits and drawbacks of different approaches are also investigated.
PDF Link(s): pre-print
BibTeX:
@ARTICLE{Israelsen2017-ym,
  title         = "``{Dave...I} can assure you...that it's going to be all right...'' -- A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships",
  author        = "Israelsen, Brett W and Ahmed, Nisar R",
  abstract      = "People who design, use, and are affected by autonomous artificially intelligent agents want to be able to \emph{trust} such agents -- that is, to know that these agents will perform correctly, to understand the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc, and have not been formally related to each other or to formal trust models. This paper presents a survey of \emph{algorithmic assurances}, i.e. programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents. Algorithmic assurances are first formally defined and classified from the perspective of formally modeled human-artificially intelligent agent trust relationships. Building on these definitions, a synthesis of research across communities such as machine learning, human-computer interaction, robotics, e-commerce, and others reveals that assurance algorithms naturally fall along a spectrum in terms of their impact on an agent's core functionality, with seven notable classes ranging from integral assurances (which impact an agent's core functionality) to supplemental assurances (which have no direct effect on agent performance). Common approaches within each of these classes are identified and discussed; benefits and drawbacks of different approaches are also investigated.",
  month         = nov,
  year          = 2017,
  archivePrefix = "arXiv",
  primaryClass  = "cs.CY",
  eprint        = "1711.03846"
}