Messaging platform Discord announced Monday it will implement enhanced safety features for teenage users globally, including facial recognition, joining a wave of social media companies rolling out age verification systems.

The rollout, beginning in early March, will make teen-appropriate settings the default for all users, with adults needing to verify their age to loosen protections including content filters and bans on direct messaging, the company said.

The San Francisco-based platform, popular among gamers, will use facial age estimation technology and identity verification through vendor partners to determine users’ ages. Tracking software running in the background will also help estimate users’ ages without always requiring direct verification.

“Nowhere is our safety work more important than when it comes to teen users,” said Savannah Badalich, Discord’s head of product policy.

Discord insisted the measures came with privacy protections, saying video selfies used for age estimation never leave users’ devices and that submitted identity documents are deleted promptly. The platform said it successfully tested the measures in Britain and Australia last year before expanding them worldwide.

The move follows similar actions by rivals facing intense scrutiny over child safety, and comes after an Australian ban on under-16s using social media that other countries are moving to replicate. Resorting to facial recognition and related technologies reflects the reality that self-reported age has proven unreliable, with minors routinely lying about their birthdates to circumvent platform safety measures.
Gaming platform Roblox in January began requiring facial age verification globally for all users to access chat features, after facing multiple lawsuits alleging the platform enabled predatory behaviour and child exploitation.

Meta, which owns Instagram and Facebook, has deployed AI-powered methods to determine age and introduced “Teen Accounts” with automatic restrictions for users under 18. Mark Zuckerberg’s company removed over 550,000 underage accounts in Australia alone in December, ahead of that country’s under-16 social media ban. TikTok has implemented 60-minute daily screen-time limits for users under 18 and age-based notification cutoffs.

The industry-wide shift comes as half of US states have enacted or introduced legislation regulating social media access by age, though courts have blocked many of the restrictions on free-speech grounds. The changes come the same day a trial over social media addiction among children begins in Los Angeles, with plaintiffs alleging Meta’s and YouTube’s platforms were designed to be addictive to minors.
Published – February 10, 2026 10:27 am IST