
Ori Gold
Marketers, it’s time to speak up. We know better than most how digital platforms operate, how content is moderated, and where the risks lie. We scrutinise brand safety, demand transparency, and push for accountability in advertising. But when it comes to protecting children online, these same platforms operate with far less oversight. That needs to change.
I still get chills when I think about the moment my daughter, then five years old, was watching Peppa Pig on YouTube. Her expression suddenly turned from joy to horror.
What seemed like an innocent episode had been manipulated, and within seconds Peppa and her friends were replaced with disturbing content: violence, drug use, and explicit imagery. That was five years ago. The problem has only worsened since.
As an experienced digital marketer, I’ve spent years navigating the complexities of online platforms. I know that YouTube, TikTok, and Meta’s platforms are not inherently safe spaces. Brands invest millions in advertising, but they do so with safeguards such as brand safety checks, whitelisting, and independent verification.
When it comes to online safety – especially for our children – there’s no equivalent level of accountability. Policymakers continue to rely on social media’s self-regulation, despite repeated failures.
The latest closed-door consultations over YouTube’s age verification exemption highlight the real battle here. Meta and TikTok are protesting what they see as Google’s unfair advantage, but this isn’t about children’s safety. It’s about corporate competition. The platforms are fighting for market dominance while governments struggle to regulate them effectively. Meanwhile, our kids remain exposed to content that no parent would knowingly allow.
This debate over age restrictions should not be about which tech giant gains the upper hand. It should be about real safety measures that ensure children aren’t accessing harmful content. If advertisers demand transparency, why shouldn’t parents? Governments must do more than react to lobbying pressure. They need to enforce real consequences for platforms that fail to protect children and implement meaningful age verification standards.
The issue of online harm to children isn’t new, yet little has changed. Algorithms designed for engagement continue to push inappropriate content, exposing kids to violence, self-harm, and dangerous trends.
Meanwhile, tech companies lobby against stricter regulation, citing privacy concerns or feasibility issues. But advertisers have proven that digital platforms can be held to higher standards when enough pressure is applied.
Look at how brand safety practices have evolved over the years. Marketers demanded solutions, and platforms adapted. Today, advertisers have more control over where their ads appear. There are third-party verification tools, stricter policies, and clearer opt-outs. If those measures can be implemented to protect brands, they can be implemented to protect children.
Governments need to move beyond reactionary measures and legislate proactively. Relying on platforms to police themselves has failed. Age verification should be robust, enforced, and independently audited. Algorithms should be held to a higher standard, with transparency around how content is recommended to young users. And when violations occur, the consequences should be significant enough to force change.
Marketers have influence, and we should use it. We understand audience targeting, digital ecosystems, and the mechanics of engagement. We know what these platforms are capable of. If we don’t demand better protections for children, who will?
The digital world isn’t going anywhere, and banning social media isn’t a solution. But treating it as an unregulated playground where children are the collateral damage of corporate battles is unacceptable.
The focus needs to shift from industry competition to real accountability. The safety of our children is not negotiable, and it’s time for marketers, brands, and regulators to demand better.
Ori Gold is CEO and co-founder of Bench Media.