"Good enough" agents: Investigating reliability imperfections in human-AI interactions across parallel task domains
Rodriguez, Sebastian Samuel
Permalink
https://hdl.handle.net/2142/120268
Description
- Title
- "Good enough" agents: Investigating reliability imperfections in human-AI interactions across parallel task domains
- Author(s)
- Rodriguez, Sebastian Samuel
- Issue Date
- 2023-04-19
- Director of Research (if dissertation) or Advisor (if thesis)
- Kirlik, Alex
- Doctoral Committee Chair(s)
- Kirlik, Alex
- Committee Member(s)
- Karahalios, Karrie
- Lane, H Chad
- Chin, Jessie
- Schaffer, James
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- human-agent teaming
- human-autonomy teaming
- trust
- situation awareness
- individual differences
- group dynamics
- recommender systems
- virtual reality
- Abstract
- Advances in technology have resulted in the development of automation, which facilitates difficult tasks by extending human capabilities. However, the rapid adoption of automated systems presents new issues as we continue interacting with technology. Automation is subject to the implicit social contracts we hold with other entities in our lives (such as other humans, organizations, and groups), one of them being trust. It is important to understand the purpose of automation and its capabilities so that we set proper expectations about what a system can handle and place an appropriate amount of trust in it. Trusting automation excessively or insufficiently can lead to misuse (over-trust) or disuse (under-trust), which in turn can produce sub-optimal, harmful, or, in the worst cases, fatal outcomes. The goal for trust within human-AI interactions is calibrated trust. Prior research has investigated approaches to addressing both under-trust and over-trust, with under-trust receiving most of the attention in order to increase the adoption of new systems. Over-trust has been studied in the contexts of supervisory control, automation interaction, and human-agent teaming, but it remains largely unresolved in traditional human-computer interaction research. The most viable approaches are repeated exposure to, and training on, automated systems to prevent users from being lulled into a state of complacency that hampers performance. Addressing over-trust is challenging because individual, task, situation, automation, and prior factors all affect the cognitive investment the human makes in the situation at hand. This dissertation focuses on designing the reliability of a system to promote calibrated trust, and demonstrates this across varied task domains. We ask whether an agent with less-than-ideal reliability can promote better-calibrated trust by presenting itself as imperfect, much like human-human interactions in which skills and capabilities are assessed and calibrated. Furthermore, we show how this reliability manipulation affects AI systems in multiple domains, demonstrating that the trust dynamics between humans and AI are not restricted to agents that live behind a screen (e.g., automation support, machine learning models, recommender systems) but also extend to physical, tangible systems like those in robotics today (e.g., drone swarms, robotic assembly).
The approach of this dissertation is divided into three studies. Recommender systems are a type of decision support system that provides personalized recommendations to users, and they are often the archetype of human-AI interaction (for instance, the many smartphone applications that recommend content to us). We examine recommender systems and compare how features commonly used in decision support system design (i.e., explanations, control settings, reliability) affect the acquisition of domain knowledge. We discuss two sub-studies (n = 526 and n = 529), each with its own recommender system, in which we vary the presence of explanations, the amount of control over the system, and reliability (i.e., the quality of recommendations). We find that features often used to increase trust (e.g., explanation of outputs, control over the system) can lead to over-trust, which is mitigated by lowered reliability that lets humans exercise their own judgment.
Since recommender systems are not the only type of AI system we can interact with, we next focus on a physical domain where collaboration is tangible (such as between humans and robots). We investigate a simulated physical task with a pursuit-style objective, where the human collaborates with two AI agents to capture a single moving target. In this study (n = 104), we manipulate the reliability of the agent teammates and measure individual differences, perceptions of the agents, and task outcomes. Using mediation modeling (see the illustrative sketch following this record), we demonstrate how the relationship between reliability and performance is mediated by trust, situation awareness, and individual differences. We additionally show how reducing reliability can interact with the domain and the environment, sometimes yielding unintended benefits. Finally, we explore the simulated physical domain of human-robot interaction in a collaborative decision-making task. We control reliability in a signal detection theory-based task with distinct robot representations to explore human perception of reliability thresholds and how robot embodiment affects decision-making. In this study (n = 119), we find that embodied interactions lead to higher perceived workload and self-reported trust, and that lower reliability can facilitate trust calibration by allowing users to recognize multiple erroneous cues. The findings of this dissertation contribute to general knowledge of trust calibration, reliability, and human-AI interaction across virtual and physical domains, helping engineers and designers remain cognizant of these effects and build AI systems that cue their users toward allocating an appropriate amount of trust. This process may then become more akin to how humans calibrate trust with other humans, a small step toward improved human-AI integration.
- Graduation Semester
- 2023-05
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2023 Sebastian Samuel Rodriguez Rodriguez
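To make the mediation-modeling step mentioned in the abstract concrete, here is a minimal, hypothetical product-of-coefficients mediation sketch in Python. This is not the dissertation's actual code, data, or measures; the variable names (reliability, trust, performance), the two reliability levels, and the simulated values are assumptions made purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: 104 participants, a two-level reliability manipulation,
# a trust score influenced by reliability, and performance influenced by both.
rng = np.random.default_rng(0)
n = 104
reliability = rng.choice([0.6, 0.9], size=n)        # manipulated agent reliability (assumed levels)
trust = 2.0 * reliability + rng.normal(0, 1, n)      # hypothetical self-reported trust
performance = 1.5 * trust + 0.5 * reliability + rng.normal(0, 1, n)
df = pd.DataFrame({"reliability": reliability, "trust": trust, "performance": performance})

# Product-of-coefficients mediation:
#   path a: reliability -> trust
#   path b: trust -> performance, controlling for reliability
#   indirect effect = a * b; direct effect = c'
a = smf.ols("trust ~ reliability", data=df).fit().params["reliability"]
model_b = smf.ols("performance ~ trust + reliability", data=df).fit()
b = model_b.params["trust"]
c_prime = model_b.params["reliability"]
total = smf.ols("performance ~ reliability", data=df).fit().params["reliability"]

print(f"indirect (mediated) effect a*b = {a * b:.3f}")
print(f"direct effect c' = {c_prime:.3f}, total effect c = {total:.3f}")
```

A complete analysis would bootstrap a confidence interval for the indirect effect and extend the model with situation awareness and individual-difference measures as additional mediators or moderators, but the product a*b is the core quantity behind the claim that reliability's effect on performance is "mediated by trust."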
Owning Collections
Graduate Dissertations and Theses at Illinois (Primary)