
News created by AI does not have the same creative flair as stories written by journalists, according to an Australian study.
Researchers at Charles Darwin University (CDU) fed 150 news articles, on topics including politics, sport, military affairs and technology, into Google's Gemini chatbot and prompted it to compose articles matching the content of the human-written stories.
The original stories, judged to be high-quality and trustworthy content, were written by five award-winning journalists: Thomas L. Friedman, Dave Philipps and Nicholas Kristof of The New York Times, Konrad Marshall of The Sydney Morning Herald, and David Swan of The Australian (now at The Age and SMH).
Human journalists were found to have greater variety in sentence and paragraph length.
They also used more verbs, suggesting a focus on describing actions to engage readers, while Gemini relied more heavily on nouns.
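Those two signals can be illustrated with a short sketch. The Python below is a minimal, purely illustrative example rather than the researchers' published code: it computes sentence-length variety and a verb-to-noun ratio using the NLTK library, with function and feature names invented for this article.

```python
# Illustrative sketch only: two of the stylistic features the study describes,
# computed with NLTK. Names here are this article's invention, not the
# researchers' actual code.
import statistics

import nltk  # requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')


def stylistic_fingerprint(text: str) -> dict:
    sentences = nltk.sent_tokenize(text)
    lengths = [len(nltk.word_tokenize(s)) for s in sentences]

    # Part-of-speech tags: 'VB*' marks verbs, 'NN*' marks nouns.
    tags = [tag for s in sentences
            for _, tag in nltk.pos_tag(nltk.word_tokenize(s))]
    verbs = sum(1 for t in tags if t.startswith('VB'))
    nouns = sum(1 for t in tags if t.startswith('NN'))

    return {
        # Greater spread in sentence length suggests the varied rhythm
        # the study associated with human journalists.
        'sentence_length_stdev': statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        # Human writers used more verbs; Gemini leaned on nouns.
        'verb_noun_ratio': verbs / nouns if nouns else 0.0,
    }
```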
Based on the machine learning model they developed over features like these, the researchers envisage a web browser plug-in that could flag the news being displayed as AI-generated.
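A detector behind such a plug-in could, in principle, be a simple classifier trained on those stylistic features. The sketch below is again illustrative, using scikit-learn with invented training values; it is not the model the CDU team built.

```python
# Illustrative sketch only: a toy classifier over the stylistic features
# computed above. The training data is invented, and the CDU model itself
# is not reproduced here.
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors [sentence_length_stdev, verb_noun_ratio],
# labelled 1 = human-written, 0 = AI-generated.
X = [[7.8, 1.10], [8.4, 1.05], [2.1, 0.72], [2.9, 0.68]]
y = [1, 1, 0, 0]

model = LogisticRegression().fit(X, y)


def flag_article(features: dict) -> str:
    x = [[features['sentence_length_stdev'], features['verb_noun_ratio']]]
    return 'likely human-written' if model.predict(x)[0] == 1 else 'possibly AI-generated'
```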
The study, Distinguishing Human Journalists from Artificial Storytellers Through Stylistic Fingerprints, was published in a special issue of the journal Computers.
“Our paper suggests that AI and human writers produce equally readable content,” said Van Hieu Tran, the lead investigator of the study and a CDU Master of Information Technology graduate.
“However, the paper also finds that human writers produce more diverse syntactic and paragraph structures in their journalistic pieces than AI does.
“AI produces ‘more boring’ content that lacks stylistic diversity and writers' unique flair.”
Yakub Sebastian, a CDU Lecturer in Information Technology, said the research suggests human ingenuity and distinctive personality could still thrive and remain more appealing to human readers.
“There is also a deeper question as to whether it matters that we can distinguish AI from human writers beyond the issue of attribution and originality, especially if all facts in the news are equally accurate,” Sebastian said.
“We think it matters because news often shapes opinions and narratives, not just delivers facts. AI biases, for instance, are certainly one thing that we need to be concerned about.
“AI models advance at a breakneck speed, and we see them increasingly capable of doing what humans can do.
“As such, we can expect that distinguishing between human-generated and AI-generated text will become increasingly difficult for human readers. This is already happening. Just recently, an Italian newspaper officially published the world's first AI-generated newspaper.”
Audiences and journalists are growing increasingly concerned about generative AI in journalism, according to the RMIT-led Generative AI & Journalism report, released last month and published in the journal Journalism Practice.
The report's lead author, Dr T.J. Thomson of RMIT University, said the potential for AI-generated or AI-edited content to mislead or deceive was the biggest concern.
“The concern of AI being used to spread misleading or deceptive content topped the list of challenges for both journalists and news audiences,” he said.
“We found journalists are poorly equipped to identify AI-generated or edited content, leaving them open to unknowingly propelling this content to their audiences.”
This is partly because few newsrooms have systematic processes in place for vetting user-generated or community-contributed visual material.