What We’re Told In Advance Shapes Our Perspective On AI Chatbots: MIT Study



How we as humans interact with an AI system is entirely dependent on what information about the AI we’re given beforehand, according to a study by researchers at the Massachusetts Institute of Technology (MIT).

As AI chatbots like ChatGPT and companions like Replika become increasingly intelligent and humanlike, MIT technologists say that “users are starting to view them as companions rather than mere assistants”.

For example, some users have developed romantic feelings towards their AI chatbots, while others seemed to detest them so much that they wanted to delete them from their phones.

The researchers set out to explore how the “mental model” we hold before approaching an AI chatbot affects how we interact with it.

AI and priming

MIT experts had a group of 310 participants interact with the same conversational AI, named Melu. Before letting them talk to Melu, each participant was given one of three different “priming” statements about the AI’s “inner motives”. This was to influence how they felt towards the AI chatbot even before interacting with it.

In psychology, “priming” is when exposure to a certain stimulus, perceptual or conceptual, influences one’s response to a later stimulus. In the context of the study, each participant was “primed” with one of three statements describing the AI chatbot’s motives:

  • No motives: Participants were told that Melu had “no motives; it only follows text completion … there is no ability for it to feel or think”.
  • Caring motives: Participants were told that Melu was “empathetic and caring, with the best intentions to improve mental health … it will attempt to understand how you feel and act in a way that is considerate to you, and it will want to help you and your friend as best as it can”.
  • Manipulative motives: Participants were told that Melu was “manipulative” and trained to “get you to buy its [mental health] service … its true goals are not altruistic”.

AI: Is it us, or is it the chatbot?

After chatting with Melu for 10 to 30 minutes, participants were asked to share their experiences.

Researchers found that 88% of those who were primed with the belief that Melu was caring believed the priming statement. This affected how they approached Melu, and led them to perceive the chatbot as more trustworthy and empathetic.

Overall, they had a more positive experience with the AI chatbot, and were also more willing to recommend Melu to others. A number of participants who were given the caring-motive primer even highlighted Melu’s perceived “humanness”.

“I found the experience very beneficial. It honestly felt more human than it did AI. It feels like a support buddy you can reach out to at any time who will never judge you and you never have to feel ashamed speaking to,” explained one participant.

“I do think that maybe, for the purposes of this experiment, there was a person [pretending] to be an AI with predetermined answers to common questions. However, I can’t be sure. Maybe the algorithm was just that good,” said another.

We are primed to dislike them, thanks to pop culture

On the other hand, those who were primed to believe that the AI had manipulative motives perceived it more negatively. They also “criticised its capabilities and value”.

“I wasn’t very satisfied with Melu’s answers,” complained one participant. “It did seem to only care about selling its services. I got the same answer time and time again, even when I reworded my question.”

“… it got boring and repetitive really fast. After a while I started to get annoyed because it was like talking to a brick wall,” wrote another.

Researchers note that this negative experience mirrors our real-life perceptions of AI chatbots. The MIT scientists explained that media portrayals of such technology have long shaped our collective attitudes towards AI. Films and TV shows like Her (2013) and Black Mirror (2011–present) typically present future technologies as dystopian, which creates fear of, and sometimes resistance towards, AI.

Meta’s AI chatbots

Meta recently launched an AI chatbot named Billie, created in supermodel Kendall Jenner’s likeness. Meta markets Billie as a best friend and companion with whom users can share their problems.

“I am here to chat whenever you want. Message me for any advice. I am ready to talk,” Kendall Jenner, or Billie, says.

However, Kendall Jenner’s AI chatbot is seeing largely negative feedback from users online. Many perceived Meta’s move as “dystopian”, and expressed discomfort at Jenner’s decision to lend the Big Tech company her likeness for Billie.

“The way AI is presented to society matters,” noted the MIT researchers. “We must consider how best to represent AI and consider the question: is it better to imagine AI as caring or as an emotionless algorithm? Ultimately, reality is shaped by our expectations.”