Relationships are all about trust, and a new study shows AI-aided conversations could help build rapport between two people—but only as long as no one suspects the other is using AI.
According to a Cornell University research team’s investigation published this week in Scientific Reports, using AI-assisted responses (i.e. “smart replies”) can change conversational tone and social relationships, as well as speed up communication. And although these exchanges often feature more positive emotional language, people who merely suspect their partners’ responses of being AI-influenced tend to trust those partners less, regardless of whether smart replies are actually in use.
[Related: OpenAI’s newest ChatGPT update can still spread conspiracy theories.]
In the team’s study, researchers gathered 219 participant pairs and asked them to work with a program modeled after Google Allo (French for “hello”), the first, now-defunct smart-reply platform. The pairs were asked to discuss policy issues under one of three conditions: both sides could use smart replies, only one side could, or neither could. The team found that smart replies, which accounted for roughly one in seven messages, made conversations more efficient, increased positive language, and led participants to evaluate each other more positively. That said, participants who suspected their partners of using smart replies judged them more negatively.
At the same time, the study indicated you could be sacrificing your own personal touch for the sake of AI-aided speed and convenience. Another experiment randomly paired 299 conversationalists and asked them to speak together under one of four scenarios: default Google smart replies, “positive” smart replies, “negative” smart replies, or no smart replies at all. As might be expected, conversations with positive smart replies took on a more positive overall tone than those with negative smart replies or none at all.
[Related: Microsoft lays off entire AI ethics team while going all out on ChatGPT.]
“While AI might be able to help you write, it’s altering your language in ways you might not expect, especially by making you sound more positive,” Jess Hohenstein, a postdoctoral researcher and lead author, said in a statement. “This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”
Malte Jung, one of the study’s co-authors and an associate professor of information science, added that this implies the companies controlling AI-assist algorithms could easily influence many users’ “interactions, language, and perceptions of each other.”
This could become especially concerning as large language model programs like Microsoft’s ChatGPT-boosted Bing search engine and Google Bard continue their rapid integration into the companies’ respective product suites, much to the concern of critics.
“Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension,” said Jung. “We do not live and work in isolation, and the systems we use impact our interactions with others.”