Saturday, January 23, 2021

What to Do about Social-Media Double Standards?

By Charles C. W. Cooke

Thursday, January 21, 2021


Whatever one chooses to call it — “censorship,” “moderation,” “responsibility” — the online clampdown that we have witnessed in the days since President Trump encouraged a mob to attack the U.S. Capitol has flown directly against both the spirit and the promise of the Internet. Since it was widely adopted in the mid-1990s, a combination of laws, regulations, contracts, and customs has kept the Web extraordinarily free. In 1993, on the eve of its explosion into our lives, John Gilmore, a libertarian co-founder of the Electronic Frontier Foundation, explained that “the Net interprets censorship as damage and routes around it.” This, he made clear, should be regarded as a feature, not a bug. And so it was.


Recently, however, things have started to change. In architectural terms, the Internet is as decentralized and distributed as it ever was. But, through their own free will, users of that decentralized, distributed Internet have become increasingly reliant upon a handful of massive corporations: Google for search, advertising, and email; Cloudflare for DNS, DDoS protection, and low-latency caching; Amazon Web Services for servers and object storage; Twitter and Facebook for social interaction; and so on. Give or take, all of these companies have acquired their powerful positions fairly. And yet, in the process of doing so, they have concentrated an enormous amount of power in a tiny number of hands, such that, pace Gilmore’s dictum, it is now much more difficult for an average user to “route around” censorship than was once the case.


To make matters worse, the companies that have found themselves so empowered are no longer filled with the garage-startup laissez-faire libertarians who made the early Internet so great, but with replacement-level busybodies who spend their days lobbying their employers to act as arbiters of decency and taste. Where once the Web was the Wild West, now it is full of wannabe Carrie Nations entering formerly free saloons to smash liquor bottles and save souls. In the last month alone, Twitter and Facebook have removed the president of the United States from their platforms, and Amazon Web Services has deleted an entire social-media network from its infrastructure. And, far from prompting raised eyebrows, these moves have been met by many of the most powerful people in the country with applause and demands for an encore. One does not need to like President Trump or to approve of everything that is said online to find this alarming, or to notice that this was not how the Internet was supposed to work.


What can we do about it? Well, it’s complicated — whatever glib U.S. senators and ignorant television hosts might tell you. On the one hand, the Internet as we know it simply cannot operate if its component parts are too heavily regulated, which is why core elements such as DNS and IP allocation are neutral by design; why ISPs, peering networks, and data centers have been historically indifferent as to what information they serve; and why — until now, at least — most websites and platforms have limited their interventions to addressing clear violations of the law. On the other hand, the private companies that provide so much of the Internet’s functionality enjoy First Amendment rights, too. Much of the coverage of this issue frames every question as a matter of censorship vs. liberty. But it is not always that simple. To force a business owner to take a customer he doesn’t want is itself a form of intellectual control.


To understand how complicated and messy these issues are, one need only look at how awkwardly they attach to the two main political parties in the United States. It is a matter of considerable irony, for example, that it is the Democratic Party that wants to use Title II of the Communications Act of 1934 to reclassify broadband providers as “common carriers” and thereby to prevent them from discriminating in any way between different kinds of traffic, while the Republican Party not only opposes the move but, in 2017, chose to reverse the Obama administration’s reforms at the first opportunity it got. Equally ironic is that it is the most populist members of the Republican Party who hope to remove or dilute the Section 230 protections attaching to Internet-based hosts and platforms and, thereby, to give those hosts and platforms a more convenient excuse for kicking off the dissenting or unpopular voices they claim to want to protect. To listen to the Josh Hawleys and Ted Cruzes of the world, one might get the impression that Section 230 accords Twitter and Facebook the power to determine who may use their services and who may not, and that repealing it would immediately strip them of that power. This is almost certainly incorrect: In a world without Section 230, Twitter and Facebook would be more, not less, liable for their users’ speech and, in consequence, would be more, not less, likely to monitor it. Here, as almost everywhere else at the moment, our political debate is an inchoate mess.


Perhaps that is inevitable, given that there are no sympathetic characters in this drama. I continue to believe that Section 230 of the Communications Decency Act is one of the best laws that have been passed in recent memory, and yet I am under no illusions as to why we are witnessing a backlash against it. Implicit in the logic of Section 230 is the assumption that the hosts and platforms that it protects will pass that protection on to their customers. But, as companies such as Amazon Web Services, Google, and Twitter are showing us every day, that assumption is now a shaky one. In an ideal world, Amazon Web Services would not care a great deal about the speech of, say, Parler and its users, because, thanks to Section 230, Amazon Web Services cannot be held liable for their speech.


But we do not live in that world. On the contrary: We increasingly live in a world in which the likes of Amazon Web Services want to have it both ways. It is true, of course, that Amazon Web Services is a private company and that, because it is a private company, it may choose its clients in the same way as may any other. But it does seem a little rich for Amazon Web Services simultaneously to demand congressional protection against the behavior of its customers and to elect to boot those customers from its system if they do any of the things against which Amazon Web Services has been indemnified. Alas, this “for me, but not for thee” approach is becoming ubiquitous. In one breath, a company such as Google insists that it cannot possibly be held liable for the defamatory or illegal speech of, say, YouTube commenters, while in the next it announces that it will be punishing users of its advertising services if their third-party commenters write anything heinous.


Legally, there is no double standard in these cases. Ethically, however, the behavior is ugly, duplicitous, and counterproductive. Insofar as there is a moral case for Section 230, it is that the provision ensures that the speaker, and not the passive platform on which they speak, is punished for libelous or illegal speech, while everyone else can continue about their business unscathed. Thanks to Section 230, if a user on a Parler-style platform publishes a post that incites violence in violation of the standard laid out by the Supreme Court in Brandenburg v. Ohio, that user can be prosecuted without the entire platform — or its host — being taken down, too. The benefits of this arrangement are twofold. It encourages investment in tech infrastructure by removing the possibility that platforms and their investors will be held collectively liable for users’ transgressions, and it ensures that the fringe cases can be dealt with as narrowly and as delicately as possible.


The Parler incident rankled because, even though Section 230 clearly applied to all involved, the response was anything but narrow. Within a few days of the disgrace at the Capitol, Parler had not only been removed from AWS’s EC2 service — leaving all of its users staring at a blank white screen — but its app had been deleted from Apple’s and Google’s app stores. Politically speaking, one has to wonder how long Section 230 will last if its most powerful beneficiaries choose to pocket the protections they have been afforded and enforce their own rules anyway.


Especially when those rules are not in any meaningful way neutral. Amazon’s explanation of why it kicked Parler off its servers was, on its own terms, logically consistent: Unlike most other sites on AWS, Amazon insisted, Parler was so lax in its moderation of egregious content that it could not avoid violating the terms of service under which it was hosted. Which . . . okay, but have you ever been on Twitter? And no, not just the parts of Twitter that move quickly and are difficult to moderate in good faith, but the bits that are curated? This year, Twitter banned the sitting president of the United States for “glorifying violence,” on the grounds that his doing so violated its content policy. It also promoted the hashtag “Burn Louisville” in its curated “Trending” bar and gave $3 million to Colin Kaepernick’s organization Know Your Rights Camp after he tweeted: “When civility leads to death, revolting is the only logical reaction. The cries for peace will rain down, and when they do, they will land on deaf ears, because your violence has brought this resistance. We have the right to fight back!” Should Twitter’s vendors and distribution channels delete it, too?


Bit by bit, the argument here seems to have changed. One of the most frequent instructions given to commentators who complain about Twitter and Facebook is “Look, if you object to the way they work, then go build your own social network and set your own rules.” In a vacuum, this is good advice. How better, after all, to deal with differing moderation standards than to have different moderation standards, on a variety of different platforms? And yet, if Apple and Google and Amazon and Cloudflare are to get into the business of deciding which moderation standards are acceptable and which are not, then “Go build your own” will quickly become a theoretical, rather than practical, prospect.


So what do we do? Nothing easy, certainly. Ultimately, this will not be solved by legislation, because this is a cultural rather than a technological problem. The big firms on the Internet are gravitating toward censorship and admonition because our culture is gravitating toward censorship and admonition. Americans are fortunate to have a First Amendment that stands apart from the passions of the day, but they are foolish if they believe that those passions can be suppressed forever. Slowly but surely, we are giving up on free expression, and, despite its design, the Internet is following suit. And why wouldn’t it, when that’s where we all live?
