The government's incoming legislation to enforce a minimum age for access to social media could spell trouble for platforms that have large numbers of teenagers using their sites.
While the government didn't specify a minimum age, most social media platforms currently require users to be at least 13 years old to sign up.
On a smaller scale, South Australian Premier Peter Malinauskas is examining how the state could implement a social media ban for children under 14, along with parental consent requirements for children aged 14 and 15.
The federal opposition is also supporting a social media ban for children.
AdNews reached out to Meta, Snap and TikTok on the issue. While all either declined to comment in an official capacity or didn't respond, their submissions to the Joint Committee on Social Media and Australian Society provide insight into how these companies are thinking about young users on their platforms.
Meta’s global head of safety Antigone Davis told the committee that the tech giant requires users to provide their date of birth when they register new accounts, a tool called an age screen.
Those who enter their age as under 13 are not allowed to sign up.
“The age screen is age-neutral — in other words the options offered do not assume that someone is old enough to use our service, and we restrict people who repeatedly try to enter different birthdays into the age screen,” she said.
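The mechanism Davis describes — an age-neutral birthday prompt that rejects under-13s and locks out repeated attempts with different birthdays — can be sketched roughly as follows. This is an illustrative sketch only; the minimum age, attempt limit and all function names are assumptions, not Meta's actual implementation.

```python
from datetime import date

# Hypothetical age-screen sketch: the form makes no assumption that the
# user is old enough, under-13s are rejected, and entering several
# different birthdays locks the screen. All names are illustrative.
MIN_AGE = 13
MAX_DISTINCT_BIRTHDAYS = 3  # assumed threshold, not a real Meta value

def age_on(birth: date, today: date) -> int:
    """Whole years of age on a given day."""
    years = today.year - birth.year
    if (today.month, today.day) < (birth.month, birth.day):
        years -= 1
    return years

def age_screen(attempted_birthdays: list[date], today: date) -> str:
    """Decide a signup outcome from the sequence of birthdays entered."""
    if len(set(attempted_birthdays)) > MAX_DISTINCT_BIRTHDAYS:
        return "locked"    # repeated different birthdays -> restricted
    if age_on(attempted_birthdays[-1], today) < MIN_AGE:
        return "rejected"  # under 13 cannot sign up
    return "allowed"
```

The key design point from the quote is that the screen offers no pre-filled "old enough" option and treats birthday-cycling itself as a signal.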
“We’ve been investing in and developing AI age estimation tools. While this technology is evolving and is far from perfect, it plays an important role in the safeguards we provide. We continue to try and improve its efficacy.”
Australia’s eSafety Commissioner last week requested information from Google’s YouTube, Meta platforms Facebook and Instagram, TikTok, Snap, Reddit, Discord and Twitch to find out how many Australian children are on their platforms and what measures they have in place to enforce age limits.
The eight companies will have 30 days to provide responses to the eSafety Commissioner, with the findings to be summarised in a public report.
Meta's Davis also said that the company has teams reviewing reported accounts that appear to be used by people who are underage, with the company also able to place checks on accounts that appear underage in the course of content review.
“If these people are unable to prove they meet our minimum age requirements, we delete their accounts,” she said.
“Where we require age verification - for example, if a teen tries to age up once they are on our apps - we’ve developed an industry first menu of options to verify age.
"It allows users to do so by submitting ID documents, or uploading a video selfie for face-based age prediction through Yoti - a 3rd party vendor based out of the UK which provides privacy preserving, age estimation services.”
Advertising leaders see the proposed age-limit regulation as an opportunity to reduce ad wastage and boost client confidence.
Amplify co-founder and director Alex Reid said while there are concerns about the changing regulatory landscape, increased attention on social media regulation provides greater security and certainty, which ultimately boosts client confidence.
"This increased regulation means brands can operate with more clarity," he said.
Snap claims 80% of 13-24 year-olds in Australia currently use the Snapchat app, with the company last year rolling out a package of safeguards to further protect 13-17 year old users from potential online risks.
Snap's head of public policy for APAC, Henry Turnbull, told the committee that the platform is not intended for people under the age of 13, with the app requiring users to input their age before using Snapchat.
"We provide a differentiated experience for teens on Snapchat than for adults, with more restrictive content and privacy settings. To help prevent teens from circumventing the teen-specific safeguards we have in place, 13-17 year olds with existing Snapchat accounts are not able to change their date of birth to appear 18 or older," he said.
"Our view is that device level age verification is the best available option. Age collection is already part of the device ID process when registering a new device, such as an iPhone or Android phone."
Turnbull said that adding a level of age verification to this step, and then making this verified age available to all services, would simplify the process for users, reduce the risk of repeatedly providing sensitive ID data to a wide range of apps, and avoid consent fatigue.
"Users would only need to confirm their age once, which also increases the odds that the information will be accurate," he said.
"If age is collected and checked at the device level, then that information could be used within the app store to show apps appropriate for the user’s age (meaning that age-inappropriate apps couldn’t be accessed or downloaded; users under 13 would be prevented from viewing or downloading apps that are designated 13+).
"During the app sign-up process, apps could also receive age signals directly from the device. Moreover, apps could also communicate back to the device operators if they have identified any reason to doubt the assured age signals. If an online communication platform became aware that a user was under their assured age, they could notify the device operator so that the account user’s age could be checked again."
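The device-level model Turnbull outlines — one verified age held by the device, used by the app store to filter downloads, passed to apps as a signal at sign-up, with apps able to flag doubts back to the operator — can be sketched as below. Every class and method name here is hypothetical; this illustrates the proposed flow, not any actual Apple, Google or Snap API.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """Hypothetical device holding one age verified at registration."""
    verified_age: int
    recheck_requested: bool = False

    def age_signal(self) -> int:
        # Apps receive this signal directly from the device at sign-up
        return self.verified_age

    def can_download(self, app_min_age: int) -> bool:
        # App store hides age-inappropriate apps from the user
        return self.verified_age >= app_min_age

    def flag_age_doubt(self) -> None:
        # A platform that doubts the assured age notifies the device
        # operator, which can then re-check the user's age
        self.recheck_requested = True

@dataclass
class App:
    """Hypothetical online service with its own minimum age."""
    min_age: int

    def sign_up(self, device: Device) -> bool:
        # No fresh ID upload: the app trusts the device's age signal
        return device.age_signal() >= self.min_age
```

The design choice the quote argues for is verifying age once, at the device, rather than having each app collect sensitive ID data separately.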