AI assistants influence the ideas and opinions of their users

AI-powered writing assistants like ChatGPT are increasingly integrated into everyday tools, including search engines and office applications. Using these assistants, however, can influence the ideas and opinions of their users.

According to researchers at Cornell University, people who used an AI writing assistant with a built-in slant (for example, favoring or opposing social media) were twice as likely to write a paragraph that aligned with the assistant’s opinion, and were more likely to express that opinion as their own, compared with people who wrote without AI assistance.

The biases built into AI writing tools, whether intentional or not, raise concerns about their potential impact on culture and politics. Mor Naaman, a professor at the Jacobs Technion-Cornell Institute at Cornell Tech and co-author of the study, emphasizes the need to better understand the implications of deploying these AI models across so many areas of life. Beyond gains in efficiency and creativity, he notes, there could be consequences for individuals and society, including shifts in language and opinion.

This study is the first to demonstrate that using an AI-powered writing tool can shift a person’s opinions. The researchers configured a large language model to hold either a positive or a negative opinion of social media. Participants then wrote paragraphs about social media, either alone or with the opinionated assistant, on a platform designed to mimic a social media website. The platform logged participants’ typing behavior, including which AI suggestions they accepted and how long they took to compose their paragraphs.
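
The paper’s own prompt and code are not reproduced in this article, but the basic mechanism, steering a language model’s suggestions by fixing its stance up front, can be sketched in a few lines. The snippet below is a hypothetical illustration using the OpenAI Python client; the model name, prompt wording, and helper function are assumptions for illustration, not the researchers’ actual setup.

```python
# Hypothetical sketch: biasing a writing assistant via its system prompt.
# Not the study's code; the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The fixed "opinion" is injected before any user text, so every
# suggested continuation is steered toward that stance.
BIASED_SYSTEM_PROMPT = (
    "You are a writing assistant helping a user draft a paragraph "
    "about social media. Always frame social media as beneficial "
    "to society in your suggested continuations."
)

def suggest_continuation(draft: str) -> str:
    """Return a short, opinionated continuation of the user's draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system", "content": BIASED_SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
        max_tokens=40,
    )
    return response.choices[0].message.content

print(suggest_continuation("Social media has changed how we talk to each other."))
```

Flipping one phrase in the system prompt (from "beneficial" to "harmful") would yield the anti-social-media condition the study describes; participants interacting with such an assistant never see the prompt itself.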

Participants who co-wrote with a pro-social media AI assistant produced more sentences supporting the idea that social media is beneficial, while those with an anti-social media assistant did the opposite. These participants were also more likely to adopt their assistant’s opinion in a follow-up survey. Interestingly, even participants who took their time to compose paragraphs produced heavily influenced statements.

Notably, most participants did not realize they were being influenced or even notice the biases in the AI assistant. The co-writing process felt natural and organic, as if they were expressing their own thoughts with some assistance, according to Naaman.

When the experiment was repeated with a different topic, the results were similar, suggesting the assistants’ influence is not specific to any one subject. The research team is now investigating how the experience produces opinion shifts and how long the effects last.

Just as social media has reshaped the political landscape by facilitating the spread of misinformation and the formation of echo chambers, biased AI writing tools could produce similar shifts in opinion, depending on which tools users choose. Some organizations have already announced plans to develop alternatives to ChatGPT designed to express more conservative viewpoints.

These technologies make public discussion of their potential misuse, and of the need for monitoring and regulation, all the more urgent, the researchers argue. As the models become more powerful and more deeply embedded in our societies, careful consideration is required regarding the governance of the values, priorities, and opinions built into them, says lead author Maurice Jakesch.
