Senate AI inquiry worried at threat to Australian democracy 

By Makayla Muscat | 11 October 2024
 

A parliamentary inquiry is worried about the potential threat to Australian democracy from AI’s ability to manipulate public opinion and perception on a massive scale via social media platforms.

The Select Committee on Adopting Artificial Intelligence has delayed its final report so it can study the impact of AI on the presidential election in the US.

An interim report, tabled in the Senate, calls on the federal government to implement, ahead of the next Australian federal election, voluntary codes relating to watermarking and credentialing of AI-generated content. 

The committee also recommended a review of potential responses to deepfake content, laws restricting the production or dissemination of AI-generated political material, and extending the proposed mandatory guardrails to AI systems used in electoral settings.

Inquiry deputy chair David Shoebridge, a Greens senator, said the report demonstrates the "shocking and imminent risks" AI poses to electoral processes.

“This gives us a glimpse into the Federal Election 2025 with the very real threat that deepfake content will be used to steal seats if not the whole election,” he told AdNews.

“The Greens are disappointed that the final recommendations do not support urgent action to deal with the challenges of AI in time for the next federal election.

“While it would be a challenge to get laws in place in such a short timeframe, the risk posed justifies the haste. It seems as always this Government won’t rush towards good laws, only to those that limit rights and harm communities.” 

With half of the world's population voting this year, concerns about fictitious content intended to deceive, create bias or influence the outcomes of elections have become more widespread. 

The committee said the use of AI-generated content in a number of overseas elections suggests that similar efforts to spread disinformation will almost certainly occur in Australia.

As it stands, AI technology allows users to create a fake video of a person saying or doing almost anything, limited only by their creativity and the footage of the subject they can source. 

Many participants submitted that AI facilitates disinformation and misinformation by making it harder to detect and easier to disseminate quickly through multiple channels.

Meanwhile, the current regulatory system is “not fit for purpose to respond to the distinct risks that AI poses,” according to the report. 

Committee chair Labor Senator Tony Sheldon said regulating political deepfakes needs to be carefully considered and widely consulted on.

“The threat that AI-generated deepfake political materials pose to democracies around the world is very real,” he told AdNews.

“This is why the Albanese Government has implemented new legislation to combat online disinformation and misinformation, while also establishing safeguards for the use of AI in critical areas.

“Social media giants must take more responsibility for disinformation circulating on their platforms, and the Government’s reforms are aimed precisely at achieving this.”

The parliamentary inquiry was established in March to report on the opportunities and impacts for Australia arising out of the uptake of AI technologies. 

The committee held six public hearings where individuals and organisations, including Google, Meta and Amazon, faced questions about how the technology can be used and misused.

While AI-powered chatbots pose legitimate risks, the technology also has the potential to improve the efficiency and accuracy of elections by helping voters better understand political debates, legislation and policy proposals. 

However, realising the opportunities while mitigating the risks would involve a balanced approach when establishing legislation, regulation and governance frameworks for AI technologies in Australia. 

Independent senator David Pocock said AI is already disrupting, shaping and changing the way humans live and organise themselves. 

“It has extraordinary potential for uses that could potentially enhance the wellbeing of humanity and contribute to a more creative, caring and ecologically sustainable society,” he said. 

“It also poses significant risks to life as we know it and to human civilisation as we know it… this is the first tool in human history that can create new ideas entirely on its own.

“Fakes are not new… photos alleging to capture the Loch Ness monster were created in 1934… But the ease with which forms of communication can be altered, or just plain manufactured, using generative AI is completely unprecedented.

“As the committee report notes, there have already been significant efforts to use generative AI to impact elections around the world. It is critical that the Federal Parliament take this threat seriously and not delay action to protect our democracy.” 

Evidence to the inquiry from large social media companies indicated that they are implementing watermarking in their products to identify AI-generated content.

The Google submission said the company was committed to ensuring that AI-generated content from its products contained embedded watermarking, and noted that it was developing a tool for watermarking AI-generated images. 

Microsoft advised that it was “actively exploring” watermarking to “help users quickly determine if an image or video is AI generated or manipulated”. 

However, there were concerns that bad actors could find ways around watermarking.
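The watermarking schemes the companies described to the inquiry (and the concern that bad actors could strip them) can be illustrated with a toy sketch. The code below is purely illustrative and assumes a naive least-significant-bit scheme; production systems such as Google's SynthID or C2PA content credentials are far more sophisticated and robust than this.

```python
# Toy illustration of content watermarking: embed a short tag in the
# least-significant bits of "pixel" values, then read it back.
# Assumption: pixels are plain integers standing in for image data.

def embed_tag(pixels, tag):
    """Write each bit of `tag` into the low bit of successive pixels."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    marked = list(pixels)
    for idx, bit in enumerate(bits):
        marked[idx] = (marked[idx] & ~1) | bit
    return marked

def extract_tag(pixels, length):
    """Recover `length` bytes of tag from the low bits of the pixels."""
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

pixels = list(range(64))            # stand-in for image data
marked = embed_tag(pixels, b"AI")   # watermark the "image"
print(extract_tag(marked, 2))       # b'AI'

# A trivial attack: re-quantising the low bits destroys the mark,
# which is the circumvention risk raised in evidence to the inquiry.
stripped = [p & ~1 for p in marked]
print(extract_tag(stripped, 2))     # b'\x00\x00'
```

The last two lines show why such simple marks are easy to remove: any transformation that perturbs the carrier bits, even mild re-compression, erases the tag, which is why robust, cryptographically anchored approaches are being explored instead.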

The inquiry said deepfakes could be used to attack political candidates and by foreign powers to interfere in the political process via widespread misinformation.

“The potential for generative AI to facilitate the creation and dissemination of disinformation in political contexts was a significant issue raised in evidence received by the committee,” said the interim report.

“The current state of AI technology allows it to be easily used to generate realistic though artificial content intended to deceive the public; create bias; influence public opinion or sentiment; and harm individual reputations, creating a threat to the integrity of electoral processes and democracies in Australia and around the world.”

Associate Professor Shumi Akhtar from the University of Sydney described to the inquiry the various ways that AI-generated content could be employed to broadly undermine public trust and destabilise social discourse.

“The proliferation of fake content can erode public trust in media, government, and other critical institutions,” said Akhtar.

“When people struggle to discern real from fabricated information, their foundational trust deteriorates, undermining democratic governance. 

“Additionally, AI-driven content algorithms can intensify societal polarisation by reinforcing echo chambers on social media, further destabilising democratic discourse.”

The Department of Home Affairs told the inquiry the current capabilities and accessibility of generative AI allow for malicious actors to rapidly produce significant volumes of content at low cost.

This could include traditional forms of disinformation, such as fake news articles and misleading posts on social media platforms, as well as deepfakes.

The increasing quality of AI-generated content means that it is less likely to be recognised as fake by humans or be detected by automated systems.

Advancements in the technology also mean that few images are required to generate deepfakes and synthetic content.

“AI’s ability to mass generate fake online accounts…can create an illustration of support for policies or people, which can sway and mislead the political preference of real people,” said The New South Wales Council for Civil Liberties in its submission to the inquiry.

The Department of Home Affairs submitted that the potential for AI to undermine political stability could augment traditional methods of foreign information manipulation and interference (FIMI).

“With the assistance of AI, FIMI can be created and disseminated at unprecedented speed and scale, in multiple languages; and often at a low cost,” the department submitted.  

“Foreign governments could use AI to create coordinated and inauthentic influence campaigns that are designed to foster widespread misinformation, incite protests, exacerbate cultural divides and weaken social cohesion, covertly promote foreign government content, target journalists or dissidents and influence the views of Australians on key issues.”

The parliamentary committee is expected to release its final findings on November 26. 
