OpenAI working on ways to detect content made by AI

By Makayla Muscat | 6 September 2024
 
Credit: Louis Reed via Unsplash.

OpenAI is working on a way to detect content generated using artificial intelligence.

The tech giant has made a submission to the Select Committee on Adopting Artificial Intelligence (AI) of the Australian Senate. 

The parliamentary inquiry was established in March and is due to report this month on the opportunities and impacts for Australia arising from the uptake of AI technologies. 

Public hearings have heard calls for restrictions on the use of AI tools in fields such as healthcare, media and art.

The firms responsible for AI tools, including Google's Gemini, Meta AI, Amazon Lex and Microsoft's Copilot, have also faced questions about how businesses might implement the technology and how it can be misused.

Australia has proposed regulating "mandatory guardrails" for AI in high-risk settings. 

OpenAI’s vice president of global affairs Anna Makanju told the inquiry in a submission that the company is researching and assessing a range of techniques to ensure AI systems are built, deployed and used safely. 

“We research, develop, and release cutting-edge AI technology as well as tools and best practices for the safety, alignment, and governance of AI,” Makanju said.

“Prior to releasing any new system, we conduct rigorous testing, engage external experts for feedback through a process called red teaming, work to improve the model's behavior with techniques like reinforcement learning with human feedback (RLHF), and build broad safety and monitoring systems.

“Just as we cannot predict all of the beneficial ways people will use our technology, we cannot foretell all the ways people will misuse or abuse it.”

As it becomes more challenging to distinguish AI-generated content from human-created content, OpenAI is adopting watermarking, classifiers, and metadata-based approaches to help differentiate between them.

Earlier this year, OpenAI introduced the C2PA metadata method for images created with its text-to-image model DALL·E 3 in ChatGPT and the API. 

The tech giant is also developing tamper-resistant watermarking that is difficult to remove from digital content, as well as detection classifiers capable of determining whether an image is AI-generated. 
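In practice, C2PA provenance data travels with the image file as embedded metadata. As a rough illustration only, the sketch below (assuming the common JUMBF-box embedding, where the byte stream contains the `jumb` and `c2pa` labels) checks whether a file appears to carry a C2PA manifest; the function name `has_c2pa_marker` and the synthetic sample bytes are hypothetical, and real verification requires full manifest parsing and signature validation with a dedicated C2PA library.

```python
# Rough heuristic: scan raw image bytes for the labels that typically
# accompany an embedded C2PA manifest (a JUMBF box labelled "c2pa").
# This only signals presence of provenance metadata; it does not
# validate the manifest or its cryptographic signature.

def has_c2pa_marker(image_bytes: bytes) -> bool:
    """Return True if the bytes contain both JUMBF and C2PA labels."""
    return b"jumb" in image_bytes and b"c2pa" in image_bytes

# Synthetic example: JPEG start/end markers around a fake manifest box.
sample = b"\xff\xd8\xff\xe2" + b"...jumb...c2pa.manifest..." + b"\xff\xd9"
plain = b"\xff\xd8\xff\xd9"  # a bare JPEG skeleton with no metadata

print(has_c2pa_marker(sample))  # True
print(has_c2pa_marker(plain))   # False
```

A byte scan like this can be stripped trivially (re-encoding the image removes the metadata), which is exactly why OpenAI's submission pairs metadata with watermarking and classifiers rather than relying on any single method.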

“Given that all current approaches have specific advantages and limitations, we believe it is important to avoid premature commitment to one unique provenance method,” Makanju said. 

“More importantly, a method’s overall effectiveness may depend on other factors such as collaboration within the broader AI ecosystem.” 

Makanju said AI tools have also been used to impact people’s lives in a positive way, explaining that OpenAI actively partners with developers, companies, and NGOs. 

“Ambience Healthcare is building an AI-powered medical scribe that takes notes for clinicians during clinician-patient conversations,” Makanju said. 

“Canva for Education, 100% free for K-12 districts, schools, teachers, and students all over the world, integrated OpenAI-powered Magic Write, which serves as an educator’s personal brainstorming partner, quickly generating first draft content so educators can go from idea to editing in seconds and translate when needed.

“Streamline Climate uses OpenAI’s products to unlock billions of dollars of unallocated government funding to address the climate crisis.” 

