My research focuses on algorithms that complement human-AI (or human-robot, human-computer, …) interaction. More specifically, I ask: How can AI help humans trust it appropriately (and act accordingly)? I claim that this is one of the fundamental questions being asked by those who investigate interpretable, comprehensible, transparent, and explainable machine learning, as well as human-computer interaction, human-robot interaction, and e-commerce (among many others).
My collaborators and I have been investigating what we call "algorithmic assurances". Algorithmic assurances are any feedback that an AI (or robot, computer, machine, …) can provide to a human user to help them calibrate their trust-related behaviors.
More specifically, I am investigating different algorithmic assurances for an autonomous robot that uses a POMDP for decision making. The idea is that these assurances could be reported to the human user in some way so that they can better understand the capabilities and limitations of the robot. Please see this paper for a more detailed discussion.
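As a rough illustration of the idea (not the actual system from the paper), one simple assurance a POMDP-based robot could report is its own belief uncertainty. The sketch below uses a hypothetical two-state problem with made-up transition and observation probabilities: the robot maintains a belief via a standard Bayes filter and reports normalized belief entropy as a "how sure am I?" signal the user could act on.

```python
import math

# Hypothetical two-state example: the robot is either "on_course" or "lost".
STATES = ["on_course", "lost"]

# Assumed transition model T[action][s][s'] and observation model
# O[action][s'][obs]; these numbers are illustrative, not from a real robot.
T = {"move": {"on_course": {"on_course": 0.9, "lost": 0.1},
              "lost":      {"on_course": 0.2, "lost": 0.8}}}
O = {"move": {"on_course": {"landmark": 0.8, "nothing": 0.2},
              "lost":      {"landmark": 0.3, "nothing": 0.7}}}

def belief_update(belief, action, obs):
    """Standard discrete POMDP belief update (Bayes filter)."""
    new_b = {}
    for s2 in STATES:
        # Predict: propagate the belief through the transition model.
        pred = sum(belief[s] * T[action][s][s2] for s in STATES)
        # Correct: weight by the likelihood of the observation.
        new_b[s2] = O[action][s2][obs] * pred
    norm = sum(new_b.values())
    return {s: p / norm for s, p in new_b.items()}

def assurance(belief):
    """Report normalized belief entropy as a simple uncertainty assurance:
    0.0 = the robot is certain, 1.0 = maximally uncertain."""
    h = -sum(p * math.log2(p) for p in belief.values() if p > 0)
    return h / math.log2(len(belief))

b = {"on_course": 0.5, "lost": 0.5}       # start maximally uncertain
b = belief_update(b, "move", "landmark")  # move, then see a landmark
print(f"belief: {b}, uncertainty: {assurance(b):.2f}")
```

A user-facing assurance would of course need to translate this number into something people can act on (e.g., "I am fairly confident I am on course"), which is exactly the kind of design question the assurance framing raises.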
Coming soon: my CV