Building Mental Models Under Uncertainty in Human-AI Teams
Project Abstract/Statement of Work:
Understanding when collaborators are likely to make mistakes is a key way in which people mitigate the impact of errors in team settings. Between people, this understanding of when and where to trust a collaborator (called a mental model) is formed through repeated interactions and communication about uncertainty. Machine learning (ML) systems have tried to convey similar information about reliability through confidence scores, but prior work suggests that the rules by which people judge possible outcomes are based on heuristics that can fail even in artificial settings, let alone with artificial agents.
We propose to study: i) the impact of agent-stated uncertainty (over a range of uncertainty values) on users' mental model formation in human-AI teams; ii) the importance of reliability in communicating uncertainty (i.e., the negative impact of incorrectly assessed uncertainty); and iii) the impact of team size and complexity on people's ability to form accurate mental models under uncertainty. We will extend our existing platform (which is inspired by our work in ) for controlled measurement of mental model formation in a joint decision-making environment (classification) to include estimates of the AI agent's confidence. We will then vary the agent's ability, the accuracy of its reported uncertainty, the method for conveying uncertainty (e.g., number, visualization, visualization type), and the size of the team. This will provide an interactive setting in which participants can collaboratively complete tasks of differing complexity and length. We will measure collaborative performance as the total time taken to complete, and the accuracy achieved on, the given shared classification task, yielding a uniform measure across all task complexities and configurations. Participants will be recruited primarily from Amazon Mechanical Turk.
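To make the manipulated factors concrete, the study conditions can be sketched as a fully crossed grid. All factor levels below are illustrative placeholders (the proposal does not fix specific values); the actual levels would be chosen during piloting.

```python
from itertools import product

# Hypothetical factor levels for the proposed design; these specific
# values are assumptions for illustration, not the final study design.
agent_accuracy = [0.6, 0.75, 0.9]            # the AI agent's true ability
reported_uncertainty = ["calibrated",        # accuracy of stated confidence
                        "overconfident",
                        "underconfident"]
conveyance = ["number", "bar_chart", "icon_array"]  # how confidence is shown
team_size = [1, 2, 4]                        # number of AI agents in the team

# Fully crossing the factors gives the candidate condition set.
conditions = list(product(agent_accuracy, reported_uncertainty,
                          conveyance, team_size))
print(len(conditions))  # 3 * 3 * 3 * 3 = 81 candidate conditions
```

In practice a full cross of this size would likely be pruned (e.g., a fractional design or dropping uninformative cells) to keep the Mechanical Turk sample per condition adequately powered.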