Can Humans Distinguish Between Machine-created and Human-written Content?
Artificial intelligence (AI) is rapidly transforming the world as we know it, and this change extends to AI-generated content (AIGC). AIGC encompasses any content produced by AI models, including text, code, images, and music. Its applications are already diverse, spanning from news reports and blog posts to scriptwriting and marketing copy.
Given this landscape, it is now an opportune moment to explore how readers perceive AIGC.
What underlying psychological factors influence human attitudes and responses to content produced by AI?
The Psychology of AI-generated Content
But before all of that, can humans even differentiate between content written by a fellow human and content generated by a machine, without running it through detection software?
Studies in this field have reached broadly similar conclusions, and those conclusions largely revolve around psychology: a number of psychological factors can influence how we humans perceive AI-generated content.
If It’s AI, It Must Lack Quality and Accuracy: To begin with, there are expectations. When readers realize that AI generated a piece of content, they tend to have lower expectations for its quality and accuracy. Why? Because we associate AI with machines, which are often seen as less intelligent and creative than humans.
If It’s AI, It Must Be Without Emotions: Another important factor is our need for social connection. When we read content, we want to feel connected with another human being. We want to understand the author’s thoughts and feelings and feel like the author is speaking directly to us. AI-generated content can often feel impersonal and detached, making it difficult for readers to connect with it.
Can Humans Recognize AI-generated Content?
Yes and no. As AIGC gets more sophisticated, it is becoming increasingly tough for humans to distinguish between AI-generated and human-generated content.
For now, though, there are a few things that humans can look for to identify AI-generated content (a rough code sketch of these checks follows the list).
- One of them is unusual word choices or phrases. AI models are trained on massive datasets of text, but they can sometimes generate text that contains unusual or nonsensical language.
- Another thing to look for is a lack of flow or coherence in the text. AI models can sometimes generate text that is grammatically correct but lacks a clear, coherent message.
- Finally, humans can also look for evidence of bias in AI-generated content. AI models are trained on data that is collected from the real world, and this data can contain biases. As a result, AI-generated content can sometimes reflect these biases.
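None of these signals is conclusive on its own, but the first two lend themselves to rough automation. The sketch below is a minimal illustration of that idea; the stock-phrase list and the cutoff values are invented assumptions, so treat it as a toy heuristic, not a validated detector.

```python
import re
import statistics
from collections import Counter

# Toy heuristic checker for the signals described above.
# The phrase list and thresholds are invented for illustration;
# this is NOT a validated AI-text detector.

STOCK_PHRASES = [
    "it is important to note",
    "in today's fast-paced world",
    "in conclusion",
]

def split_sentences(text: str) -> list:
    """Split text into rough sentences on terminal punctuation."""
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def sentence_length_spread(text: str) -> float:
    """Std. dev. of sentence lengths in words. Human prose tends to
    vary more ("burstiness"); a low spread is a weak AI signal."""
    lengths = [len(s.split()) for s in split_sentences(text)]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

def repeated_trigrams(text: str) -> int:
    """Count distinct word trigrams occurring more than once;
    heavy repetition hints at templated, low-coherence text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(zip(words, words[1:], words[2:]))
    return sum(1 for c in counts.values() if c > 1)

def heuristic_flags(text: str) -> dict:
    """Return which rough indicators fired. Cutoffs are guesses."""
    lowered = text.lower()
    return {
        "stock_phrasing": any(p in lowered for p in STOCK_PHRASES),
        "uniform_sentences": sentence_length_spread(text) < 4.0,  # assumed cutoff
        "repetitive_trigrams": repeated_trigrams(text) > 2,       # assumed cutoff
    }

if __name__ == "__main__":
    sample = (
        "It is important to note that AI writes fluently. "
        "It is important to note that it can repeat itself. "
        "Sentences often share a similar length and rhythm."
    )
    print(heuristic_flags(sample))
```

Real detection tools lean on statistical signals from language models themselves (for example, how predictable each token is), which is far more robust than surface heuristics like these.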
In an essay, Dr. Alex Mercer, a renowned author specializing in AI and academic writing, says that when a reader cannot instinctively differentiate between AI-generated and human-written text, that constitutes “excellent content.”
He argues that the ability of readers to discern between content generated by algorithms and that crafted by humans goes beyond mere philosophical curiosity.
Readers prefer content that is informative, useful, accurate, and presented in a linguistically polished manner. But can artificial intelligence measure up to these expectations, Dr. Mercer wonders in his article.
At times, it certainly can.
For example, when describing a particular device’s details, AI can do just as well as an experienced technical writer.
In such scenarios, AI-created content becomes virtually indistinguishable from content written by a human. AI can deliver all the necessary information to the reader in a highly organized fashion.
Furthermore, it’s important to note that AI doesn’t operate in isolation; its output is intricately linked to the inputs and prompts provided by users. Utilizing well-constructed prompts can yield exceptional content.
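As a loose illustration of that point, here is a minimal sketch contrasting a vague prompt with a structured one. The product name ("Acme X1"), audience, and constraints are invented for this example and are not drawn from any study cited here.

```python
# Illustrative only: the product, audience, and constraints are made up.

vague_prompt = "Write about the new smartphone."

structured_prompt = (
    "You are a technical writer. Draft a 150-word overview of the "
    "Acme X1 smartphone for a consumer-electronics newsletter. "
    "Cover the display size, battery life, and camera specs. "
    "Use plain, factual language and avoid marketing superlatives."
)
```

The second prompt pins down role, audience, length, scope, and tone; in practice, that specificity is often what separates generic output from the organized, technical-writer-grade copy described above.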
Overall, though, most types of content demand attributes that AI cannot replicate; the author claims that certain crucial qualities remain absent from AI-authored content.
Not Many Humans Can Tell the Difference, One Study Shows
Another study reveals a significant challenge that even experts in the field of linguistics face when distinguishing between content generated by AI and by human authors: asked to judge “AI versus human writing,” the experts were mistaken nearly 62% of the time.
This study, jointly conducted by assistant professors from the University of South Florida and the University of Memphis, both specializing in applied linguistics, highlighted that even scholars from prestigious linguistics journals could accurately identify AI-generated material in research abstracts merely 38.9% of the time. The study has brought to the fore concerns about the increasing presence of AI in academic contexts and the pressing need for more effective detection tools.
In their study, the researchers assigned 72 linguistic experts to evaluate various research abstracts to ascertain whether AI or humans authored them. None of the experts managed to correctly identify the origin of all four writing samples provided to them, resulting in an overall identification accuracy rate of a mere 38.9%. These linguistic experts attempted to utilize certain linguistic and stylistic indicators to make their determinations, but their efforts were largely unsuccessful.
The study’s outcomes suggest that AI, exemplified by language models like ChatGPT, is proficient at producing concise written pieces that match or surpass the quality of human-authored content.
However, AI’s limitations become apparent when dealing with longer texts, which can sometimes include inaccuracies or fabricated information, making it easier to detect. The researchers ultimately concluded that, without the assistance of yet-to-be-developed software tools, professors and educators would be incapable of distinguishing between a student’s original work and content generated by an AI language model such as ChatGPT.
So the Question: How Do Readers Perceive AI-generated Content?
A number of studies have looked at how readers perceive AI-generated content. One study published in the journal Computers in Human Behavior found that readers rated AI-generated news articles as less informative and less credible than human-written articles. The study also found that readers were more likely to believe factual errors in AI-generated articles.
Another study published in the journal PLOS One found that readers were more likely to agree with the arguments in AI-generated essays than they were with the arguments in human-written essays. However, the study also found that readers were more likely to detect bias in AI-generated essays.
Overall, these studies suggest that readers hold mixed views of AI-generated content.
MIT Study
One such in-depth study I came across was conducted by Yunhao Zhang and Renée Richardson Gosline of MIT Sloan, who argue that the success of AI-generated work ultimately depends on whether consumers like it. The goal of the study was to better understand the biases people hold regarding different types of collaboration between humans and AI.
The researchers found that people tend to have a positive bias towards content created by humans when they are aware of its source. However, contrary to the traditional belief of “algorithmic aversion,” people do not express any aversion towards AI-generated content when they know how it was created. In fact, when respondents were not informed about the source of the content, they even preferred AI-generated content.
Overall, the study highlights the importance of considering public perception when deploying generative AI in the workplace: people favor human-created content when its source is disclosed, but hold no negative bias against AI-generated content when they know how it was created. This insight can guide companies in leveraging generative AI to boost the productivity of highly skilled workers while ensuring that consumers are satisfied with the final output.
To wrap up, it is becoming more and more difficult for humans to differentiate between content created by AI and content created by humans. For now, people hold mixed views of AI-generated content, and their perceptions can be shaped by several psychological factors, including their expectations, their need for social connection, and their own biases and prejudices. But once AIGC becomes sophisticated enough, humans may be unable to tell the two types of content apart at all.
(A confession: Some help was taken from a machine to write/re-write bits and portions of this newsletter.)