The unregulated world of AI, which has spread quickly from online customer chat to high-level content creation for advertising, won't last long.
Ethics and governance are coming, to protect reputations. Trust in technology is the issue, according to industry players.
KPMG and the Australian Information Industry Association have published a study, Navigating AI Report: A Guide to the Use and Adoption of AI and A Checklist for Trustworthy AI. The report says:
“While trust is a defining factor in an organisation’s success or failure, the risk of AI goes beyond reputation and customer satisfaction.
“AI is playing a critical role in shaping the wellbeing and future of individuals and communities around us – even as few fully understand how it works.
“Leaders are starting to consider how effective AI governance can help them protect and gain competitive advantage, realise operational efficiencies, and crucially, foster trust among their key stakeholders, including customers.
“While regulatory frameworks have been developed to tackle issues related to privacy, there has been little progress towards a more holistic framework that incorporates AI."
The study says Australians expect AI to be regulated, with the preferred option being a dedicated independent regulator to monitor AI usage.
"This highlights the importance of strengthening and communicating the regulatory and legal framework governing AI, including data privacy laws," says the report.
Another study, from the University of Queensland and KPMG Australia, found that 40% of Australians trust the use of AI tools such as ChatGPT at work.
The survey also found that only 35% of Australians believe there are enough safeguards, laws, or regulations in place to make AI use safe.
Dean Grandy at KPMG Australia said: “With two-thirds of individuals lacking trust in governments and business to responsibly develop and use AI, organisations need to show they are following an ethical approach to this fast-emerging technology.”
In the US, a group of technology industry figures is worried about AI.
The Future of Life Institute, funded by the (Elon) Musk Foundation, Founders Pledge and Silicon Valley Community Foundation, is calling for a six-month pause in developing systems more powerful than OpenAI's GPT-4.
The institute's open letter: “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.
“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
AdNews asked ChatGPT: "Do you have ethics?"
The reply:
"As an artificial intelligence language model, I do not have personal beliefs or values, and I do not possess consciousness or the ability to make moral judgments.
"However, I have been programmed to follow certain ethical principles and guidelines, such as not engaging in hate speech, promoting equality and respect for human rights, and not sharing misinformation or harmful content.
"Additionally, my responses are influenced by the data that I have been trained on, which has been curated to align with ethical standards and values.
"Ultimately, the ethical responsibility lies with the individuals who create and use artificial intelligence, and it is their responsibility to ensure that it is used ethically and responsibly."
In 2020, KPMG and the University of Queensland created a model for trustworthy AI.