Disclosing with Chatbots: The Future of Discussing Personal Information

By: Nick Rampe

Artificial intelligence is on the verge of turning the world on its head. Whether it’s cheating on a history paper or copying a famous musician’s voice to create a brand-new song, the technology raises serious ethical questions, and it is only becoming more advanced and more accessible. But while this rapid progress can feel overwhelming, it does not have to be entirely negative.

Researchers have found an interesting use for artificial intelligence chatbots: self-disclosure. In 2018, Annabell Ho, Jeff Hancock, and Adam S. Miner, researchers at Stanford University, conducted a study to understand how people would react when disclosing personal information to what they believed was a chatbot. They had three different hypotheses, each grounded in previous research, particularly studies on the benefits of self-disclosure.

First, the Perceived Understanding Hypothesis builds on the idea that people feel more satisfied with a self-disclosure conversation when they feel they have been understood. Applied to this study, it predicts that people would feel less emotionally satisfied if they believed they were talking to a chatbot, since a computer cannot truly understand human emotion.

Second, the Disclosure Processing Hypothesis suggests that people would be willing to disclose more information to a chatbot, because they would not fear being judged by a computer the way they would by another person. That deeper disclosure, in turn, would make talking to a chatbot more beneficial than talking to another person.

Finally, the Computers Are Social Actors (CASA) framework argues that because computers and technology are so intertwined with our daily lives, people treat them as ‘social actors’ in much the same way they treat other people. This was the foundation for the Equivalence Hypothesis, which predicts that disclosing personal information would have an equal effect on participants whether they believed they were talking to a person or a chatbot.

The study was conducted using what the researchers call the “Wizard of Oz method”: every participant was actually talking to another person, but some were told their partner was a chatbot. Chatbots, even just six years ago when this study was conducted, were significantly more limited in what they could do, so using a real chatbot was not feasible. Participants were recruited from university research participation websites, and the original pool of 128 was whittled down to 98 after various disqualifications. On the other end of the conversation were three undergraduate research assistants who were trained to validate the disclosers’ feelings without offering advice, while also encouraging participants to go into further detail.

The study produced a variety of interesting findings. There was a significant difference in emotional benefits, but it was driven by the subject matter of the conversation rather than by who the participants thought they were disclosing to: stating objective facts brought less emotional satisfaction than discussing personal feelings. Likewise, participants reported feeling emotionally closer to their conversation partner after the conversation, regardless of what they thought their partner was. The findings were mostly in line with the Equivalence Hypothesis. Self-disclosure is known to have significant emotional benefits, and participants both felt more understood and disclosed more as the conversation went on, but this was not because of their beliefs about their partner.

The implications of this study could be even greater now than when it was published. Artificial intelligence chatbots are significantly more advanced than they were six years ago, and far more accessible thanks to tools like ChatGPT. If this study were conducted today, researchers could even have a group of participants talk to an actual chatbot. We can take this a few steps further as well: maybe someday the average person will talk to an AI therapist or have AI friends. While those are obviously extreme examples, it is not only possible but more than likely that AI will have a significant impact on our daily lives in the near future. And while many impacts of AI seem to be leading us toward a scary, dystopian future, there are certainly things to be excited about as well.
