
Tech Titans Made Serious Mistakes, and More Censorship Won’t Right the Ship


By David French
Friday, August 24, 2018

It seems like each week brings yet another example of strange and amateurish Facebook censorship. Last Friday morning, the immensely popular PragerU platform tweeted that Facebook had blocked access to its videos. PragerU screen-capped the proof. By the evening, Facebook reported that it had “mistakenly” removed the videos and was restoring access.

Then, yesterday, journalist and bestselling author Salena Zito reported that Facebook seemed to be censoring a story she wrote for the New York Post detailing why many Trump supporters won’t be shaken by the Paul Manafort conviction or the Michael Cohen plea deal. Some of her readers reported that it was being marked as “spam.” Others told her that Facebook was reporting that the article “did not follow” its “Community Standards.”

Then, suddenly, the posts reappeared. In neither instance has Facebook offered a satisfactory explanation for its censorship.

Let’s draw some distinctions. When Facebook wants to censor systematically and intentionally, it can do so with ruthless efficiency. Earlier this week I wrote about how it’s not only blocking access to the website codeisfreespeech.com (which contains links to lawful gun designs), it’s diligently preventing speech about the site. Nor was there any ambiguity when it removed Alex Jones’s content from its platform for “dehumanizing language.”

By contrast, it’s hard to argue that Facebook was trying to get away with stealth censorship in the PragerU and Zito cases. How can you quietly block access to articles and videos posted by people with access to other public platforms? These actions reek of error or incompetence, not systematic silencing. But believing the censorship wasn’t calculated isn’t necessarily consoling. Instead, it exposes the deeper problems of our social-media platforms. Our progressive tech titans built corporations on a number of false premises, and now they have a tiger by the tail. They don’t know what to do, they’re bumbling around, and there is collateral damage.

Let’s back up a moment. Much of the Internet has been built by people who aspired to bring the world together. They wanted to “make the world a better place” (to use the line hilariously and constantly satirized on HBO’s Silicon Valley) by connecting people, facilitating relationships, and putting the sum total of the world’s knowledge at every person’s fingertips. I’m oversimplifying, of course, but there was (and often still is) an infectious optimism — about people, about the possibilities of technology — at the foundation of virtually every one of our modern tech goliaths.

Behind much of this idealism is a certain understanding of human nature — one that at the very least posits that connection will be good, problems can be managed, and virtue will ultimately win.

But what if this understanding is fundamentally flawed? What if the net effect of all this connection is that human flaws are magnified perhaps even more than human virtues, that problems can be neither coded nor managed away, and that whether good or evil ultimately wins is in constant doubt? What if the result is a product people feel they can’t do without (great for the bottom line) but that also magnifies anger and division, leading to a constant outcry from customers distressed by their experience with your product?

And then what if your product serves the whole nation, but your colleagues and peers almost exclusively reflect the ideas and worldview of a small slice of the progressive elite?

Well then, you’ve got a well-nigh unsolvable problem. It’s going to be clear, soon enough, that algorithms and automation won’t solve it. Smart people on all sides can quickly game the system and spot its biases. Complaint-based systems will create large-scale heckler’s-veto problems, incentivizing bad-faith spam reports and bad-faith hate-speech claims. (Remember, your user base isn’t as virtuous as you thought.)

Then, when automation fails and punting your policing to users fails, your top-down, subjective, technocratic solutions — relying on mechanisms such as, say, a “trust and safety council” or “hate speech” policies — will be just as unsatisfactory. If you satisfy the internal constituencies and staff your team with people who largely reflect the company’s core ethos, then you’ll craft “hate speech” or “dehumanizing language” guidelines that target or alienate an immense portion of your customer base (while leaving the hateful or dehumanizing language of ideological allies intact). But make your technocrats more ideologically diverse — more reflective of the nation you’re trying to reach — and you’ll infuriate your workforce. Remember, we now live in an era when, in some quarters, it is an intolerable affront to work alongside a person who doesn’t share your worldview.

The ultimate result of all these flawed premises and all the flawed solutions is exactly the world you see before you today — a world dominated by progressive corporations that engage in a handful of explicit crackdowns and a host of confused, ad hoc, and seemingly arbitrary “mistakes” or unexplained actions that leave no one satisfied and make too many of their users long for market alternatives.

They were wrong about human virtue. They were too confident in their ability to manage the user experience of hundreds of millions of people while keeping the platform open enough to create a version of the marketplace of ideas. In short, they thought they could do better than the First Amendment, and they failed. A choice now looms: a miserable status quo, an alienating authoritarian future, or a more rational but less progressive regime that strikes the same kinds of balances that have benefited American culture for more than two centuries.

The fundamental viewpoint neutrality of classic First Amendment doctrine is the right refuge for the titans of social media. But is this a lesson they will ever choose to learn?
