Parents display a banner bearing the names of children lost to social media harm outside a Los Angeles court, after a jury held Meta and YouTube liable in a landmark case over platforms and their impact on children’s mental health on March 25
Photo Credit: REUTERS

The story so far:

A Los Angeles Superior Court jury, on March 25, found Meta Platforms and Alphabet’s YouTube liable for designing platforms in ways that foster addiction and harm users’ mental health. The case centred on a young woman who became addicted to social media platforms. The jury awarded $3 million in compensatory damages, apportioning 70% ($2.1 million) to Meta and 30% ($900,000) to YouTube, and indicated punitive damages of up to $3 million, subject to judicial confirmation.

What did the verdict say?

The presiding judge at the Los Angeles Superior Court has yet to formalise the final judgment. The plaintiff, a 20-year-old woman known as Kaley, testified that she began using YouTube at age 6 and Instagram at age 9, drawn in by their attention-grabbing design.

Kaley’s lawyers argued that features like infinite scroll, autoplay, and algorithm-driven notifications were specifically engineered to “hook” young users. She testified that this addiction exacerbated her depression, anxiety, and body dysmorphia.

The case focused on platform design rather than content, sidestepping Section 230 immunity. By characterising the platforms as “defective products,” the plaintiff’s claims targeted design elements such as algorithmic amplification and addictive features rather than third-party content, making it harder for the companies to avert liability.

Snapchat and TikTok were originally named as defendants, but both settled with the plaintiff before the trial began; the terms of the agreements were not disclosed.

What evidence swayed the jury?

The plaintiff’s case reportedly relied on internal corporate documents, expert testimony, and user-behaviour data. Key evidence included the ‘Facebook Files’, internal research reported by The Wall Street Journal in 2021, which showed that Meta knew Instagram could worsen body image issues for teenage girls; one study noted that “32% of teen girls said Instagram made them feel worse.” Kaley’s lawyers also cited findings referenced in U.S. Senate hearings, where whistleblower Frances Haugen testified that company research linked platform design to anxiety and compulsive use.

In YouTube’s case, the plaintiff pointed to concerns that its recommendation system steers users toward increasingly engaging content to maximise watch time, an issue flagged in academic research and media reports.

Why is this verdict a landmark?

The ruling is significant because it shifts liability from content to platform design. It challenges protections under Section 230 of the U.S. Communications Decency Act, long used to shield firms from responsibility for user-generated content. Courts have typically dismissed such cases under Section 230 protections. For instance, in Gonzalez v. Google (2023), the U.S. Supreme Court declined to hold Google liable for YouTube’s algorithmic recommendations of ISIS-related content. Similarly, in Twitter v. Taamneh (2023), claims against Twitter, Facebook, and Google for aiding terrorism were rejected due to insufficient proof of direct liability. These rulings reinforced that platforms are generally not responsible for third-party content, even when amplified by algorithms.

What changes for social media companies?

The verdict came a day after a jury in New Mexico found Meta liable for the way its platforms endangered children, exposing them to sexually explicit material and contact with sexual predators. If upheld, the verdict could compel platforms to rethink core design features. The ruling heightens demands for algorithmic transparency, as seen in proposals like the U.S. Algorithmic Accountability Act. Crucially, the risk of punitive damages, combined with over 1,600 pending lawsuits, could trigger a wave of costly copycat litigation, making aggressive engagement-driven design legally vulnerable.

What is next for regulation?

At least half of American teens use YouTube or Instagram daily, according to the Pew Research Center. California is considering stricter rules on teen social media use, including potential restrictions on addictive features. At the federal level, lawmakers have proposed bills mandating algorithmic transparency and stronger child-safety protections.

Recently, countries such as Australia have moved to bar or restrict children’s use of social media, and the U.K. is running a pilot programme to study how a ban on social media for those under 16 might work. If the verdict is upheld on appeal, it could mark the beginning of a new era in which algorithmic design is scrutinised not just for efficiency, but for its societal and psychological impact.

(Saee Pande is a freelance writer with a focus on politics, current affairs, international relations, and geopolitics)

