Grants: RCTA work gets funded! $50,000
I'm happy to say that my 2017 proposal for a project with the Robotics Collaborative Technology Alliance (RCTA) was selected for funding.
Grant title: Appropriate calibrations of trust for supporting soldier-robot teaming. PI: Elizabeth Phillips, Brown University. Funded by General Dynamics Land Systems and the Army Research Laboratory; $50,000.
In future soldier-robot teams, human trust in the robotic teammate(s) will be needed to support successful peer-to-peer interactions. Complicating this picture, however, is the fact that humans tend to hold unrealistic expectations of robot performance and are sensitive to violations of those expectations. Because robotic teammates cannot yet perfectly replicate the capabilities of their human counterparts, expectations of robots are likely to be violated and, as a result, trust may diminish or be lost entirely. It is therefore important to investigate means by which the robot's actions and capabilities can help mitigate such violations and appropriately calibrate the human's level of trust in the robotic teammate. Appropriately calibrated trust in robotic teammates will be especially important in novel situations, when team performance depends on developing an alternative plan.
This research will investigate signals that foster human trust in the robot, such as the robot's sharing of metacognitive information, including detected cues and detected norms (e.g., implicit, explicit, and/or derived from environmental perception). For instance, a robot that communicates what it perceives to be the relevant norms in a situation may signal that it grasps the critical coordinating mechanisms underlying the team's goals across missions and tasks. Team actions, in particular, rely on norms to coordinate, streamline, and legitimize team behavior, and team members who follow these norms can be trusted and relied on. It therefore stands to reason that humans who act jointly with robots in teams expect their robot partners to be aware of, and to follow, many of the same norms that they themselves follow. If robots in fact do so, human team members are likely to trust them, and justifiably so. The robot's communication of its norm awareness may be one means of helping soldiers maintain appropriate trust in the robot: avoiding underreliance (disuse) when the team encounters novel situations in which the robot could be useful, and overreliance (misuse) when the team assigns the robot a difficult task it is not capable of completing.