Apple's ChatGPT deal a 'security and privacy risk'

By Ashley Regan | 13 June 2024

Apple's launch of its generative AI system, 'Apple Intelligence', has sparked security and privacy concerns from the tech industry.

Among the many generative AI features discussed in the keynote at Apple's Worldwide Developers Conference, Apple announced it will integrate OpenAI's chatbot ChatGPT directly into iPhones.

The chatbot will be able to pull information from a user's communications to write iMessages and answer other generative prompts in the next operating system.

This means the AI engine will collect personal user information and data, putting users at risk, according to Charles Darwin University artificial intelligence associate professor Niusha Shafiabady.

"This risk would not come from the OpenAI deal directly but from collecting data from different sources for each user and potentially entering it to the OpenAI tool," Shafiabady said.

"Of course, entering the data to the generative AI tool like ChatGPT would create a privacy breach. The data used for the engine which could potentially be used as training data for the generative AI will become public domain data."

This means people will lose their privacy, and it could open another avenue for malicious actors to compromise their security, Shafiabady said.

Billionaire Elon Musk has also blasted the announcement in a series of tweets and warned that if Apple “integrates OpenAI at the OS level,” all Apple devices will be banned at his companies.

Apple says privacy protections will be built in for users who access ChatGPT — "their IP addresses are obscured, and OpenAI won’t store requests. ChatGPT’s data-use policies apply for users who choose to connect their account."

But users are right to have concerns about their security, Shafiabady said.

"Using and collecting data from emails and different sources of communication opens another door to security risks for the users," Shafiabady said.

"The users should decide how much they get out of these technologies in trading their privacy and security.

"These updates aren't revolutionary. Using ChatGPT's engine as a content producer and creating text-to-speech on top of that content is very simple. Even we were writing these types of code when we were students 20+ years ago.

"It is nothing eye-catching technology wise, and personalising content is a relatively old concept.”

