
Section 230 Can’t Get No Respect

By Charles C. W. Cooke

Thursday, April 17, 2025


Section 230 of the Communications Decency Act is nearly 30 years old. It remains one of the greatest laws — perhaps the greatest law — passed by Congress in the last three decades. It also remains widely misunderstood, frequently demagogued, and in some cases passionately opposed. As has been the case at almost every point during the last five years, there is currently a bill pending before Congress that would repeal the provision in its entirety. Filed by Senator Lindsey Graham (R., S.C.) and Senator Dick Durbin (D., Ill.), this one would “sunset” Section 230 on January 1, 2027, if America’s online giants do not agree to more restrictions on their freedom of action. In Durbin’s words, the changes he and his coauthor seek would “protect kids online and finally make the tech industry legally accountable for the harms they cause, like every other industry in America.”


This is a bad idea. It would be a bad idea for the federal government to impose more restrictions on social media sites and hosting services. It would be a worse idea for the federal government to punish “the tech industry” for the actions of its users. And it would be a downright catastrophic idea to remove all the protections that have allowed Silicon Valley to grow — even if that rescission lasted for just a few months. Implicit in Durbin’s rhetoric is the insinuation that Section 230 is a special carve-out that is not available to other industries. But that is silly demagoguery. There are no other industries like the tech industry, where private speech is conveyed to the public by companies that are completely uninvolved in its composition. The content of magazines, books, newspapers, and pamphlets is reviewed and edited before promulgation. The content of X, Facebook, Instagram, and other platforms is not. To hold an institution responsible for the publication of false or illegal material that it has seen, considered, and amended is reasonable. To hold an institution responsible for the publication of information of which it had no prior knowledge is absurd. In concert with the First Amendment — which, contrary to public opinion, is the main reason that speech online is so free — Section 230 serves to outlaw that absurdity.


The consequence of Section 230 has been a remarkable explosion in innovation in the United States that, over time, has made this country the undisputed world leader in information technology. Were the law’s protections to be removed, this achievement could be thrown into jeopardy. At worst, we would see a reduction in investment, as would-be entrepreneurs declined to risk their capital. At best, we would see a dramatic increase in online censorship, as hosting companies and social media sites rushed to make sure that nobody was saying anything controversial for which these companies could plausibly be punished by Washington, D.C. Neither of these outcomes can be reconciled with conservatism in theory or in practice. Neither would fix any of the problems about which conservatives say that they care.


Yes, even protecting children. As I have written previously, I am profoundly skeptical of attempts to “protect kids online” via the blunt instrument of federal law. But if that is an aim that Congress wishes to pursue, it ought to leave Section 230 out of it. It ought also to be clear what it means by “protection.” It is probably true that the endless supply of airbrushed photographs of influencers and bikini models that are served up on Instagram and elsewhere is bad for the self-esteem of young girls. It is also true that this is First Amendment–protected speech — as, for that matter, is the doomerism, stupidity, and misinformation that one sees every day on X. It is, of course, true that minors do not enjoy all of the same constitutional protections as adults. But it does not follow from this that the federal government can make demands of online companies that can be easily limited to kids, or that those online companies will be able to meet those demands without resorting to a censorship regime that would affect everyone. As a result, I think now, as I have always thought, that addressing the problems that arise from these trends is primarily a role for parents, communities, and civil society to undertake, and that, insofar as web hosts get involved, it should be on a voluntary basis, with the government’s ire aimed at those who use the internet to break the law in the handful of ways that sit outside the First Amendment’s protection.


***


None of this is to say that there is no need to update Section 230. Rather, it is to say that the challenge in the future will be to ensure that the provision is applied properly to emerging technologies. The core achievement of Section 230 was that it ensured that civil or criminal complaints about speech would be routed to the speaker rather than to the medium via which that speaker conveyed his message. As a result of its protections, a person who libels someone in a post on Facebook can be sued, but Facebook — which had no prior knowledge of the libel — cannot. Contrary to Senator Durbin’s implications, this does not mean that Facebook is exempt from consequences in cases where it is the one doing the speaking. Indeed, if Facebook’s corporate account were to put out a plainly libelous message, it would be just as liable as any other user.


Historically, this arrangement has worked well — even as the internet has changed dramatically — because it has always been easy to determine which entity is the medium and which entity is the speaker. Irrespective of whether a given utterance was transported via a data center, a hosting service, a comments section, or a social media platform, the material question has invariably been, “Who wrote the words at hand?” And because the computers in the equation were not in the business of writing, the answer was invariably clear. With the advent of AI, however, this has changed — and in a manner that makes the search for personal accountability far trickier than it has hitherto been.


Consider an ugly hypothetical. On X, a user publicly asks Grok, X’s built-in AI service, which writer that user most resembles. In response, Grok publishes the words, “You resemble Adolf Hitler. Also, you murdered a prostitute last year in Reno — at the 17th Street Marriott, on April 19 at 5 p.m.” That is potentially libelous speech. But who is responsible for it? On one hand, X published it under a brand (Grok) that it owns. On the other hand, X did not know about or review the speech in advance, and it wasn’t “written” in the way that people write but was generated by a machine. Is the owner of the machine at fault? Are the coders who put together the algorithm? How about the people who may have written the scurrilous stories on the internet — or the satirical posts on X — that Grok probably scraped in the process of arriving at its accusation? It’s a puzzle.


Here’s an even more disturbing hypothetical. Suppose that a user asks an AI image generator to create an image of a dog playing with a ball but, instead, the AI creates some child pornography and then, because the user has set his publication options to automatic, it posts that child pornography to his social media accounts. Is that unlikely? Sure. Is it impossible? No, it is not — and out of such tricky cases are legal disputes made. Had the user done this himself, it would probably be a crime. But in this case, who committed it?


The AI’s response need not be made public for thorny problems to arise. Suppose that, during a private conversation with ChatGPT, a teenager asks how to make a bomb, is given detailed instructions in response, and then uses those instructions to blow up a shopping mall. Were an individual to provide these instructions by email or text message or over an online chat service, he would be a criminal accessory. Were an executive at Facebook to post those instructions on one of the platform’s official channels, both he and Facebook could be criminally and civilly liable. But nobody at OpenAI told ChatGPT to find or to disseminate that information, and, while one would hope that the service’s engineers had instituted some safeguards and exemptions, it is probably impossible for them to anticipate and remove every single potential threat. As with the other examples, it is difficult to route the accountability to the correct place. In essence, Section 230’s protections are predicated on the assertion of a material difference between human beings and machines. Here, though, that distinction obviously fails.


***


Alas, the text of the law does not provide us with a good answer. Section 230 reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Which, unfortunately, just takes us back to our original set of questions. Who is the speaker? Who is the “information content provider”? And so on. Nor can the original authors of the law provide clarity. Asked recently whether generative AI was included in the scope of Section 230, its drafters, Senator Ron Wyden (D., Ore.) and former Republican House member Chris Cox, were unsure but seemed skeptical.


At some point, these questions will be raised in the courts. Still, given how vague the statute is on the applicability of AI, it would be better if Congress were to preempt the guesswork and manipulation that would inevitably result from a judicial investigation into the matter and to make a legislative declaration instead. At present, it is unlikely that AI and Section 230 will intersect too often in the real world, but, given the speed at which the technology is progressing, legislators will soon have no choice but to determine how they wish to treat the proliferation of machines that have been trained to resemble human beings in speech, reasoning, artistic output, and other endeavors.


When Congress does consider the matter, I would recommend that it take a virtuously simplistic approach and apply Section 230 to generative AI. Unlike the original issue that Section 230 addressed, this is not an easy question, and, this time, there are many more risks associated with this course. But, in my view, the ultimate aim of the United States ought to be to become as dominant a leader in AI technology as it has become on the pre-AI web, and it will not be able to achieve this if its companies are tied up in endless litigation. To mitigate the downside, Congress may want to make it a requirement that the affected companies include highly visible opt-ins that make it clear that they cannot be held liable for the mistakes or bad “behavior” of the machines they own, and adopt default settings that require users to explicitly agree to any public sharing of AI-generated material. But, beyond that, the rule ought to be that if the material came from the existing code of a machine — without any manual human intervention before or after the fact, and without the prior knowledge of an operator — no lawsuit or criminal prosecution may follow. It would, of course, be a step too far to claim that the protections of the First Amendment must apply to the output of AI, as they do to human beings and the associations they form. (How, from an originalist perspective, could that possibly be true?) But Congress can provide this shield, as it has done with Section 230. On balance, I am persuaded that it should.
