Fighting online harm and protecting brand reputation – Meta’s safety tools give users and advertisers more control and confidence 

By Carolyn Bollachi, Agency Director, Meta Australia and New Zealand | Sponsored
 

Social media has given everyone a voice, empowering millions worldwide to connect, share, discover and express what matters to them. But, like all elements of life, it also has a dark side: malicious actors seek to weaponise our tools for misinformation, fraud, hate speech and other activities that threaten the safety of online users. Meanwhile, advertisers risk their reputation and bottom line when their brand is inadvertently associated with harmful content.

At Meta, we make significant investments to create safe and meaningful places for people and brands to connect. 

Robust measures to combat harmful content and misinformation 

Meta works with international academics and subject matter experts to stay ahead of rapidly evolving online threats. We also factor in different views and beliefs, especially from marginalised people and communities, to ensure everyone’s voice is valued. This is how we create our policies defining what is and isn’t allowed on our platforms – we keep users safe with our community standards and hold publishers accountable with our advertising guidelines.

At Meta, we remove harmful content that goes against our policies, reduce the distribution of problematic content that doesn’t violate our policies, and inform people with additional context so they can decide what to interact with. 

Using AI and proactive detection technology, we find prohibited content, which is either removed or flagged for further review. We have about 40,000 people working on safety and security issues, reviewing potentially harmful content 24/7, in more than 80 languages around the world. As a result, in most cases we are able to detect and remove violating content before it is reported. 

Meta also works with more than 90 third-party fact-checking organisations globally (certified by the International Fact-Checking Network) to identify, review and rate viral misinformation. When content is rated false or altered, we downrank it and make it harder to find. Such content is also prominently labelled so people can better decide for themselves what to read, trust and share.

Age-appropriate experiences and safety tools

We also take extra measures to ensure age-appropriate experiences for teens and families. This includes an ever-expanding range of safety tools and features, such as defaulting young people’s accounts to private to limit unwanted interactions, and ensuring their location settings are turned off. 

Some of our latest safety and transparency tools include:

  • The Family Centre: A central place to access parental supervision tools and resources from leading experts. 

  • Why Am I Seeing This: Provides more transparency about how your online activity informs the machine learning we use to shape and deliver ads.

  • Quiet Mode on Instagram: Encourages you to set boundaries with friends and followers.

  • Instagram Age Verification: New ways to verify ages and identify minors.

  • Parental Supervision on Messenger: Understand how your teen uses Messenger.

  • Take It Down: Helps young people prevent the unwanted spread of their intimate images online.

 

Enhancing brand safety and suitability for advertisers

All advertisers face the problem of content adjacency. When your ads appear beside unsafe or undesirable content, it influences how consumers perceive and trust your brand, which in turn affects their purchase decisions and your profitability.

Brand safety refers to the measures that protect the image and reputation of a brand when advertising online. We have enhanced our brand safety and suitability controls to give advertisers more control over where their ads appear. Advertisers can now better prevent their ads from appearing within or alongside content and publishers that do not align with their brand.

Here are some of our latest brand safety and suitability tools:

  • Inventory Filter: Control the type of content that your ad appears within; choose full inventory (all eligible content), standard inventory (excludes sensitive content) or limited inventory (excludes all sensitive and moderate content).

  • Third-party Brand Suitability Verification: See a brand suitability score for your ads on both Facebook and Instagram.

  • Block Lists: Block ads from running on specific publishers by uploading a list of pages, websites and/or apps in Business Manager.

  • Placement Opt-outs: Opt out of Instant Articles, in-stream videos, suggested videos or Audience Network.

  • Publisher Allow Lists: Designate which third-party publisher apps to run ads on.

  • Topic Exclusion: Choose content-level exclusions from four different topics: News, Politics, Gaming, Religion and Spirituality.

 

Giving you control and peace of mind

At Meta, we’re dedicating our best resources and technologies to protect our users and advertisers from online harm. With these powerful safety and transparency features, you gain more control and confidence on our platforms. 

Check out more safety resources here.
