
Childhood Can Yet Be Rescued from Social Media

By Christine Rosen

Thursday, March 19, 2026


In the past few years, the United States has witnessed a significant shift in public opinion and policymaking about children’s use of social media platforms. Thanks in large part to the well-deserved popularity of Jonathan Haidt’s 2024 book, The Anxious Generation, which gathered reams of social science data about the harms of a screen-based childhood, the national conversation about children’s use of social media has entered a new and more productive phase.


Uncritical techno-enthusiasm is giving way to a robust skepticism, long overdue, about the technology companies that, through deliberate design choices and a failure to confirm the age of users, encourage ever-younger children to spend hours every day on these platforms. The advent of more-sophisticated artificial intelligence tools, notably chatbots that draw users into intimate conversations about their lives and often dispense questionable mental health advice, has added urgency to the debate over regulating technology use for children. New risks posed by these platforms have been highlighted by recent episodes, such as when users of Grok, the X platform’s AI chatbot, generated millions of nonconsensual sexual deepfake images of real people, including children.


Internationally and in the U.S., this shift has ushered in a flurry of proposed and enacted legislation; new lawsuits alleging that social media companies knowingly hosted young children on platforms harmful to them; and a range of civic efforts by parents, schools, and researchers to limit children’s use of social media and smartphones.


***


The most sweeping piece of legislation took effect in late 2025, not in the U.S. but in Australia; it enacted the first nationwide age limit for social media use by children. Australians must be 16 years old to open a social media account, and the law places the responsibility on platforms to enforce this age limit or face significant penalties. Australia’s eSafety commissioner reported that in the first month after the law took effect, more than 4 million accounts were deactivated for failing to comply with the age-limit requirement.


One of my American Enterprise Institute colleagues, Bronwyn Howell, a thoughtful critic of the Australia legislation, notes that, not surprisingly, many teenagers have found work-arounds by using VPNs (virtual private networks), or they’ve shifted their attention to alternative platforms not covered by the social media ban. She argues against sweeping bans and instead for “a new paradigm for social media safety with shared, not unilateral, responsibility.” Yet the public seems largely enthusiastic about age limits such as the one Australia enacted. Other countries have also passed strict age limits for social media, including France (age 15), Brazil (16), Indonesia (16), Malaysia (16), the United Arab Emirates (13), and Vietnam (16). Many more countries have laws currently under consideration.


In the United States, the only federal law regulating social media use by children stems from the 1998 Children’s Online Privacy Protection Act (COPPA), which prohibits personal data-harvesting from children younger than 13 without a parent’s consent. Passed long before the creation of social media platforms, COPPA does not require age verification, so platforms can rely on users’ self-declaration of age, which is why platforms such as Instagram have, since their inception, knowingly hosted millions of underage users. The Federal Trade Commission recently issued guidance about COPPA, announcing its intention to review the COPPA Rule “to address age verification mechanisms.” There are many new tools available: temporary “token” technology that can confirm age without storing private information or disclosing an individual’s identity, and selfie technology that can accurately assess a user’s age via photo analysis, among many others.


In Congress, Senator Ted Cruz (R., Texas) announced on March 12 that he will advance several key pieces of legislation to protect children online: a COPPA 2.0 bill; the Kids Online Safety Act (KOSA), which includes a “duty of care” that places the burden on platforms to prevent foreseeable harms to children; and the Kids Off Social Media Act (KOSMA), which would ban anyone younger than 13 from social media and limit algorithmic recommendations for those younger than 17, among other features.


Similar efforts are underway nationwide. At the end of 2025, more than 300 bills related to social media and technology use by children were pending in state legislatures. Eight states now have some form of ban for minors on social media or require parental consent; many more states are considering such laws. Florida, pending the outcome of a case in federal appeals court, is moving forward with bipartisan legislation, signed into law by Governor Ron DeSantis in 2024, that would ban anyone younger than 14 from having a social media account. Virginia passed a law limiting social media use to one hour per day for anyone younger than 16, although a federal judge recently blocked its implementation.


Nebraska has one of the strictest laws, which requires parental consent for any minor to open a social media account. And the U.S. Supreme Court upheld a Texas law requiring websites with a significant amount of sexual content to verify the age of users. Other states, including California and New York, have proposed legislation to target addictive design features of social media platforms; several states have also passed laws requiring social media warning labels.


***


The biggest challenges to these laws are on First Amendment grounds. Federal courts have been sympathetic to the argument that it’s a violation of children’s speech and expression rights to restrict access to social media platforms; courts permanently blocked state laws in Ohio and Arkansas on these grounds. In Georgia, Florida, and California, courts have temporarily blocked state laws while litigation continues.


The industry’s failure to take seriously the harms it knows its platforms enable is why popular support for these kinds of legal protections is increasing. It’s notable that during a time of extreme political partisanship, many of these bills are exemplars of bipartisan agreement: A version of KOSA passed in the Senate in 2024 by a vote of 91–3, for example, though it remains stalled in the House. Critics of these laws and well-funded lobbyists for technology companies like to argue that denying children access to social media platforms will drive them toward even more questionable, unregulated places on the internet, so why not just continue to trust the platforms to regulate themselves, since they claim to offer effective “parental controls”? This is like the Wolf, dressed as Grandmother, telling Little Red Riding Hood that it’s much safer to snuggle up with him, within reach of his fangs, than to walk back into the woods.


Such half measures and public relations palaver come only after these companies have faced years of scrutiny by Congress and regulatory agencies and amid a proliferation of lawsuits about the serious damage done to children, undermining their attempts to paint themselves as proactive protectors of children.


New research also challenges technology companies’ claims that the tools they make available do enough to protect children from harm. Consider the design choices of these platforms, which effectively hijack children’s attention. Researchers Daniel Frost, Sarah Coyne, and Jane Shawcroft, writing on Haidt’s After Babel Substack, note how “features like infinite scroll, autoplay, incessant notifications,” and the like are “intentional design choices” that “keep kids online much longer than is healthy.” They continue targeting children because they make a lot of money doing so: A 2023 study from Harvard’s T.H. Chan School of Public Health discovered that social media companies made $11 billion in 2022 just from advertising aimed at children.


As the researchers note, parental controls “place a heavy burden on parents” to master and monitor them, which “would be difficult even if the tech companies wanted wholeheartedly to help parents place reasonable limits on their children’s media consumption, which they do not.” They argue, persuasively, that the burden should be on the companies “to build safety features into the products so the digital environment is not so dangerous and addictive in the first place.”


Shifting the burden from individual parents to the companies that design these platforms is also at the heart of the many lawsuits making their way through the legal system. A multistate lawsuit against Meta in federal court in California alleges that the company, while having access to age-verification tools, “chooses not to use” them, even though it knows that children are on its platforms. As an internal company memo acknowledged, age verification would “impact growth” among under-13 users (who are, by law, barred from being on the platforms at all).


***


The most interesting activity related to social media, smartphone, and technology use and children — and the most appealing if one is a conservative — is at the level of civic engagement. New nonprofit and research organizations are springing up to meet the need for rigorous research about the long-term effects of technology use on children. Public and private schools are enacting “bell-to-bell” smartphone bans during the school day. Parents continue to sign “Wait Until 8th” (meaning eighth grade) and other pledges to delay their children’s use of social media technology, and local community groups are promoting “analog experiences” to lure children and young adults away from screens and out into the real world.


All of this will be crucial for combating harm from the newest technologies, notably AI chatbots used by children. We need basic guardrails such as age limits, risk-based audits, and privacy protections for these tools, since they are already in widespread use by children. As Amina Fazlullah of Common Sense Media told Tech Policy Press recently, “We found 70% of teens are already using what we’ve described as [an] AI companion. . . . About 50% of them were regular users and 30% were already preferring conversations with the chatbot over, or similarly to, other humans.” Last year, Senator Josh Hawley (R., Mo.) introduced the bipartisan Guidelines for User Age-Verification and Responsible Dialogue (GUARD) Act to address concerns about AI chatbots. The bill would ban AI companions for children and require AI chatbots to disclose their nonhuman status to users, among other provisions.


***


What these federal and state laws, lawsuits, and civic efforts all suggest is a pushback against the idea that, when it comes to children, adoption of these technologies is a welcome inevitability and that the responsibility for dealing with their impact on children lies entirely with parents. Many Americans understand that, when it comes to safety, the burden of proof should be on the companies making these platforms and tools and on the school districts uncritically adopting them.


Why is this important? Because children still come to know the world by living in it, in physical bodies. Haidt and other researchers have given us a portrait of a childhood spent staring at screens. Many recent lawsuits have been brought by heartbroken parents whose children died by suicide after being blackmailed by other social media users or emotionally manipulated by an AI “companion.”


But we should also consider the depletion of physical reality for children who spend most of their time in mediated environments.


In a recent post on his Substack, critic Ted Gioia compiled some startling facts about modern childhood. A few highlights: The average child today plays outside only four to seven minutes a day. (As Gioia noted, “Even inmates in top security prisons get more outdoor time than this.”) Over the past decade, the time children spend with their friends has fallen by half. Children are “entering school with autism-like symptoms due to the use of devices,” including lack of emotional expression and speech delays. Their bodies show evidence of excessive screen use and lack of physical activity, with poor fine motor skills, higher rates of obesity, and too little “strength in their fingers to hold a pencil or even a knife or fork.” Many students cannot read or do math at a basic level; they lack core knowledge, such as the name of the state they live in, and cannot read a clock. Test scores are in steep decline, and kids’ social media use has been linked to lower reading and memory scores.


Social media platforms, smartphones, and the internet are not entirely to blame for these transformations, but they’re implicated in nearly all of them. For more than a decade, technology companies insisted that users should trust them as they “move fast and break things.” It’s becoming increasingly evident that they are breaking childhood, and it’s time to slow them down.
