California’s AB 56: New Social Media Warning Law Targets Youth Mental Health Risks
Executive Summary
Effective January 1, 2027, California Assembly Bill 56 (AB 56) will require social media companies to display health warnings to users under 18 years old. Signed into law by Governor Gavin Newsom on October 13, 2025, this measure seeks to alert minors and parents to the mental health risks associated with social media use.
AB 56 is part of California’s broader child safety legislative package and positions the state alongside Minnesota as one of the first to impose such warning label mandates. The law reflects growing bipartisan support across the U.S. for policies designed to mitigate the effects of social media addiction among youth.
Background and Legislative Context
Introduced by Assemblymember Rebecca Bauer-Kahan, AB 56, also known as the Social Media Health Warning Act, was inspired by the 2023 U.S. Surgeon General’s Advisory on Social Media and Youth Mental Health, authored by Dr. Vivek H. Murthy. That advisory linked excessive social media use with anxiety, depression, body image issues, and disordered eating.
Governor Newsom cited “truly horrific and tragic examples of young people harmed by unregulated tech” as a motivating factor for signing the measure. The Governor’s office emphasized that while social media and emerging AI technologies can inspire and connect, they also carry “profound risks” when unmoderated.
AB 56 joins a growing legislative movement across states such as Texas and New York, where lawmakers are introducing similar youth safety regulations aimed at curbing the potential harms of digital addiction.
Defining a “Covered Platform”
AB 56 applies to any “covered platform,” defined as an internet-based service or application that offers users an “addictive feed.”
An addictive feed refers to a digital interface that algorithmically recommends, prioritizes, or curates content to maximize engagement — a hallmark feature of platforms like Instagram, TikTok, Snapchat, and YouTube.
Exemptions
Certain online services are excluded from the definition of a covered platform, including:
Websites whose primary purpose is the sale of goods or services;
Cloud storage services;
Email and private direct messaging tools limited to sender and recipient communication;
Internal corporate collaboration or communication systems not available to the public.
These exclusions ensure that enterprise software and non-consumer tools are not inadvertently subjected to AB 56’s warning label obligations. The sketch below shows one way a compliance team might encode this scoping test.
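For teams triaging a portfolio of services against the statute, the scoping question reduces to a simple predicate: does the service offer an addictive feed, and does no exemption apply? The following TypeScript sketch is a hypothetical illustration only; the ServiceProfile shape and its field names are assumptions made for exposition, not statutory terms, and borderline scoping calls belong with counsel.

// Hypothetical sketch: encoding AB 56's "covered platform" scoping test.
// The ServiceProfile shape and field names are illustrative assumptions,
// not statutory terms of art.

interface ServiceProfile {
  hasAddictiveFeed: boolean;         // algorithmic, engagement-maximizing feed
  primaryPurposeIsCommerce: boolean; // primarily sells goods or services
  isCloudStorage: boolean;
  isDirectMessagingOnly: boolean;    // email or sender-to-recipient messaging only
  isInternalEnterpriseTool: boolean; // internal tool not available to the public
}

// Returns true if the service appears to fall within AB 56's scope.
function isCoveredPlatform(s: ServiceProfile): boolean {
  const exempt =
    s.primaryPurposeIsCommerce ||
    s.isCloudStorage ||
    s.isDirectMessagingOnly ||
    s.isInternalEnterpriseTool;
  return s.hasAddictiveFeed && !exempt;
}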
Key Compliance Obligations
AB 56 introduces specific, time-based health warning requirements designed to interrupt prolonged social media use by minors; a minimal implementation sketch follows the three requirements below.
1. Initial Access Warning
A warning must appear the first time a minor accesses the platform each day.
The warning must occupy at least 25% of the screen for a minimum of 10 seconds.
The message may be dismissed by clicking a clearly visible “X.”
2. Extended Use Warning
After three hours of cumulative daily use, platforms must display a non-dismissible warning that covers at least 75% of the screen for 30 seconds.
This warning must reappear for every additional hour of continued use.
3. Required Warning Language
The law prescribes the exact language platforms must use:
“The Surgeon General has warned that while social media may have benefits for some young users, social media is associated with significant mental health harms and has not been proven safe for young users.”
Platforms are required to display these warnings only to users reasonably determined to be under 18 years old.
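For engineering teams, these three requirements reduce to a time-based trigger rule: one dismissible warning on first daily access, and a recurring non-dismissible warning at the three-hour mark and each hour thereafter. The TypeScript sketch below is illustrative only; the data shapes, the minute-resolution usage counter, and the requiredWarnings helper are assumptions made for exposition, not statutory text or any platform’s actual implementation. Age determination and on-screen rendering are left to the platform.

// Hypothetical sketch of AB 56's time-based warning triggers. The shapes
// below are illustrative assumptions, not statutory text.

const WARNING_TEXT =
  "The Surgeon General has warned that while social media may have benefits " +
  "for some young users, social media is associated with significant mental " +
  "health harms and has not been proven safe for young users.";

interface WarningSpec {
  text: string;
  screenCoveragePct: number; // minimum share of the screen the warning covers
  durationSeconds: number;   // minimum time the warning stays on screen
  dismissible: boolean;      // whether a visible "X" may close it early
}

const INITIAL_WARNING: WarningSpec = {
  text: WARNING_TEXT, screenCoveragePct: 25, durationSeconds: 10, dismissible: true,
};
const EXTENDED_WARNING: WarningSpec = {
  text: WARNING_TEXT, screenCoveragePct: 75, durationSeconds: 30, dismissible: false,
};

const EXTENDED_THRESHOLD_MIN = 180; // three hours of cumulative daily use
const REPEAT_INTERVAL_MIN = 60;     // then once per additional hour

// Returns the warnings due for a minor, given cumulative minutes of use
// today before and after the most recent activity.
function requiredWarnings(
  isFirstOpenToday: boolean,
  minutesBefore: number,
  minutesAfter: number,
): WarningSpec[] {
  const due: WarningSpec[] = [];
  if (isFirstOpenToday) due.push(INITIAL_WARNING);
  // The extended warning fires at 180 cumulative minutes, then at 240, 300, ...
  for (let t = EXTENDED_THRESHOLD_MIN; t <= minutesAfter; t += REPEAT_INTERVAL_MIN) {
    if (t > minutesBefore) due.push(EXTENDED_WARNING);
  }
  return due;
}

Under this sketch, a minor whose cumulative daily use moves from 179 to 181 minutes would receive one non-dismissible, 75-percent-coverage warning for 30 seconds, with the next due at 240 minutes.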
Legislative Purpose and Broader Trends
California’s action follows mounting national concern regarding the psychological effects of social media. The American Bar Association reported in 2024 that 42 state attorneys general had endorsed the adoption of similar warning label requirements.
Lawmakers describe these measures as necessary steps in addressing the national youth mental health crisis, which experts link to social media-driven social comparison, cyberbullying, and sleep disruption.
Industry Reaction and Constitutional Debate
Trade associations representing major technology firms, including TechNet (which counts Meta and Google among its members), opposed AB 56 on First Amendment grounds. They argue that mandatory warning labels compel speech and could infringe on platforms’ constitutional rights to communicate or moderate content.
Nevertheless, California’s Attorney General Rob Bonta defended the law as a public health measure, emphasizing the state’s authority to protect minors from digital products that exhibit addictive or manipulative design features.
Given the constitutional questions raised, industry observers anticipate potential litigation challenging AB 56’s implementation before its effective date.
Related Legislative Activity
Governor Newsom signed multiple digital safety measures alongside AB 56, including:
Regulations on AI chatbots, requiring disclosure and safeguards against misuse;
Enhanced penalties for the creation and distribution of pornographic deepfakes;
Youth privacy protections reinforcing California’s prior Age-Appropriate Design Code Act (AADC).
However, Newsom vetoed Senate Bill 771, which sought to impose liability on social media companies whose algorithms promote content violating civil rights laws, calling it “premature.”
Compliance Considerations and Legal Takeaways
For In-House Counsel and Compliance Teams
Assess Platform Scope: Determine whether your platform qualifies as a “covered platform” under AB 56’s definitions.
Review Interface Design: Evaluate user experience flows so that time-based warnings can be implemented without degrading usability or violating accessibility standards.
Coordinate with Legal and Technical Teams: Ensure consistent application of age-verification and warning display mechanisms.
Monitor Federal and State Developments: Track ongoing litigation and rulemaking efforts that may clarify or expand obligations for digital services targeting minors.
Key Dates
Signed: October 13, 2025
Effective Date: January 1, 2027
Conclusion
California’s Assembly Bill 56 (AB 56) marks a major turning point in the intersection of technology, mental health, and regulatory policy. By requiring social media companies to display clear health warnings for minors, the state has positioned itself as a national leader in digital consumer protection. Much like public health warnings in other industries, this law recognizes that excessive social media use can pose measurable risks to young users’ well-being, and that transparency is a critical component of corporate responsibility in the digital age.
For technology and social media companies, compliance will not be purely technical—it will be strategic. Implementing time-based warning systems, age verification measures, and user engagement tracking tools will demand both operational and legal oversight. Moreover, companies should anticipate litigation testing the constitutionality of compelled warnings under the First Amendment, as trade groups have already signaled opposition on free speech grounds. How courts resolve these challenges will shape the national conversation around the limits of state power to regulate online platforms.
Ultimately, AB 56 represents far more than a labeling requirement—it signifies a broader policy shift toward holding platforms accountable for the social and psychological impacts of their design. As other states explore similar legislation and Congress considers national standards, companies that adopt proactive compliance and youth safety strategies will be best prepared for this new regulatory era. California’s move underscores an emerging truth in technology law: protecting minors online is no longer optional—it’s the next frontier of digital regulation.