Designing conversational agents to promote self-disclosure and behavior change
Lee, Yi-Chieh
Permalink
https://hdl.handle.net/2142/113171
Description
- Title
- Designing conversational agents to promote self-disclosure and behavior change
- Author(s)
- Lee, Yi-Chieh
- Issue Date
- 2021-07-12
- Director of Research (if dissertation) or Advisor (if thesis)
- Huang, Yun
- Doctoral Committee Chair(s)
- Huang, Yun
- Committee Member(s)
- Karahalios, Karrie
- Sundaram, Hari
- Yamashita, Naomi
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Conversational Agent
- Human-Computer Interaction
- Self-Disclosure
- Chatbot
- Mental Wellbeing
- Trust
- Abstract
- Conversational agents, commonly known as chatbots, are software applications that allow people to access online services and information using natural language. Because chatbots provide a convenient and low-cost communication channel, they are regarded as one of the most promising artificial intelligence (AI) technologies and are increasingly applied in many domains. For example, scholars and practitioners are striving to develop chatbots capable of streamlining healthcare provision and thus improving people's well-being. Some of these efforts have produced chatbots that can elicit people's self-disclosure of personal experiences, thoughts, and feelings. Healthcare professionals can then use such self-disclosures to clarify their understanding of their patients' conditions. Prior work has shown that reciprocity occurs in human-chatbot conversation: when a chatbot engages in self-disclosure about its ostensible life history and emotional states, users are likely to disclose more in return. Nevertheless, promoting in-depth mutual self-disclosure, sustaining communication, and building trust between people and chatbots remains challenging. Additionally, most existing chatbot research focuses on improving human-chatbot interaction through design, with relatively few studies investigating how chatbots can mediate human-human interaction by transferring users' information and trust to real healthcare professionals. To address these challenges, this dissertation draws on human-computer interaction and AI technologies to design chatbots that integrate human support and social learning, and evaluates their impact on users' self-disclosure and behavior change via mixed-methods longitudinal studies. It makes four main contributions. First, I explored chatbot designs that effectively elicit people's deep self-disclosure over time. The results showed that chatbot self-disclosure had a reciprocal effect, promoting deeper participant self-disclosure that lasted throughout the study period, and it also positively affected participants' perceived intimacy and enjoyment. Second, I provided empirical evidence that people's self-disclosure to and trust in chatbots can be sustained across different interaction periods, e.g., with and without a third party's involvement. In this case, the chatbot introduced a mental health professional into the conversations as the third party. The results showed that, within each group, the depth of participants' self-disclosure to the chatbot alone was maintained after their disclosures were shared with the mental health professional. Third, I proposed integrating human support and social learning into human-chatbot interaction to promote behavior change. In this case, I designed a chatbot with human support to guide people in practicing journaling skills and conducted a three-phase study to investigate its impact. The results showed that the human-support chatbot encouraged users to follow the guidance during journaling practice and increased engagement; however, this design decreased some participants' willingness to keep practicing the learned skills. Additionally, I explored the effect of incorporating a social learning component into human-chatbot interaction. The findings showed that a social learning component could elicit users' deeper self-disclosure of thoughts and better self-reflection. However, showing only positive learning outcomes from peers seemed to interfere with some participants' perceived engagement with their chatbot. Overall, these findings provide new insights into designing human-chatbot interaction to promote users' self-disclosure and deliver guidance for behavior change.
- Graduation Semester
- 2021-08
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2021 Yi-Chieh Lee
Owning Collections
- Graduate Dissertations and Theses at Illinois (primary)
- Dissertations and Theses - Computer Science (Dissertations and Theses from the Dept. of Computer Science)