By Eli Lake
Tuesday, January 24, 2023
A few
weeks after Elon Musk formally acquired Twitter in October 2022, a senior
official at the company who quit in the wake of Musk’s arrival took to
the New York Times to pour cold water on Musk’s vision for the
social-media platform. Yoel Roth, whose title had been Head of Trust and
Safety, sought to assure his fellow progressives. Roth wrote that even if Musk
wanted to remove the web of content-moderation rules and procedures Roth had
helped create and enforce, the tech billionaire would be unable to achieve his
aim. “The moderating influences of advertisers, regulators and, most critically
of all, app stores may be welcome for those of us hoping to avoid an escalation
in the volume of dangerous speech online,” he wrote.
What
Roth meant was this: No Internet platform is an island, and Musk simply didn’t
have the power to do what he wanted despite his outright ownership of the
company. It wasn’t merely that Musk would have to contend with
Twitter’s progressive workforce, which believes that some political speech is
so awful that it should be throttled or banned. He would also come into
conflict with European regulators, the Federal Trade Commission, and Congress,
all of whom seek to limit what can be said online. And what about the Global
Alliance for Responsible Media, a trade organization of some of the world’s
biggest consumer brands that advocates for “online safety”—a euphemism for
protecting social-media users from accounts that may offend, harass, or trigger
them?
He would
also be dogged by advocacy groups such as the Southern Poverty Law Center and
the Anti-Defamation League, which have found a new and lucrative mission
monitoring social-media platforms for hate speech. They work hand in hand with
elite journalists and think tankers, who have taken to tracking the spread of
misinformation and disinformation online. In Washington, the FBI and the
Department of Homeland Security have personnel whose job it is to alert
social-media companies to foreign propaganda and terrorism. In Atlanta, the
Centers for Disease Control seeks to quarantine dangerous information that
might lead Americans to forgo masks or vaccine boosters. And perhaps most
important, there are other Silicon Valley giants—Apple and Google—that provide
the digital storefronts, or app stores, that Twitter needs to update its
software and keep its service running.
Call it
the “content-moderation industrial complex.” In just a few short years, this
nomenklatura has come to constitute an implicit ruling class on the Internet,
one that collectively determines what information and news sources the rest of
us should see on major platforms. Talk about “free speech” and “the First
Amendment” may actually be beside the point here. The Twitter that Musk bought
was part of a larger machine—one that attempts to shape conversations online by
amplifying, muzzling, and occasionally banning participants who run afoul of
its dogma.
The
existence of this nomenklatura has been known for a few years. But thanks to
Musk and his decision to make Twitter’s internal communications and policies
available to journalists Matt Taibbi, Bari Weiss, and others, we now know in
far greater detail why and how this elite endeavors to protect us from all
manner of wrongthink.
***
The
unspooling of Twitter’s secrets—in a series of long Twitter threads—has been
revelatory for many reasons. It turns out that despite armies of human content
moderators, artificial-intelligence tools to weed out everything from health
misinformation to hate speech, and an expanding set of internal rules, a
handful of senior executives still made the most consequential decisions on
what Twitter’s users were allowed to see.
Twitter
has taken great pains to appear as a neutral arbiter of allowable speech on its
platform. It has released semi-annual “transparency reports” for the past
decade, providing data on the total number of accounts it has suspended and how
many government requests it has complied with. Along with Google, Meta, and
others, Twitter signed onto the Santa Clara Principles on Transparency and
Accountability in Content Moderation, a declaration of rights of social-media
users to due process and free expression.
And yet
the process at Twitter for moderating content from the largest and most
controversial accounts was often arbitrary, ill-informed, and biased in favor
of the beliefs of coastal progressives. Take, for example, the ban on sharing
the New York Post story about Hunter Biden’s laptop in the
run-up to the 2020 election. At the time, Twitter’s decision was presented as
enforcement of its policy not to allow hacked materials to be posted on the
platform (the laptop of the then–presidential contender’s son had been
abandoned at a Delaware computer shop the year before). But as the first
installment of the #TwitterFiles showed, no one at the company knew for sure
whether the laptop included actual hacked material. In one memorable exchange,
Twitter deputy counsel James Baker (former general counsel of the FBI)
acknowledged as much but then opted for blocking the story out of caution. The
decision was particularly glaring in light of Twitter’s refusal to block
a New York Times story a few weeks earlier that published some
of Donald Trump’s non-public tax returns—material that had unquestionably been
purloined.
This
kind of inconsistency proved to be a theme. Determinations that rules and
procedures on the platform had been violated were often little more than
judgment calls. Senior executives actually acknowledged that Trump’s last
tweets hadn’t violated Twitter’s policies when his account was permanently banned two
days after the January 6 riot. LibsOfTikTok, an account with more than a million
followers that usually posts excerpts of videos of deranged Millennials
spouting the latest pabulum of gender theory, was suspended six times in 2022
for violating Twitter’s rules against hateful conduct. And yet an assessment
for the senior executive committee found that LibsOfTikTok “has not directly
engaged in behavior violative of the Hateful Conduct policy.” Rather, the
assessment said the account had been harassing hospitals and medical providers
by claiming that aiding gender transitions for adolescents amounted to “child
abuse or grooming.”
The
#TwitterFiles also exposed how the rules of content moderation themselves were
subject to wildly different interpretations. Take this tweet from former
president Donald Trump after he was released from Walter Reed Hospital, where he
had been treated for Covid: “Feeling really good! Don’t be afraid of COVID.
Don’t let it dominate your life.” Baker asked why Trump’s tweet didn’t violate
Twitter’s policy on Covid-19 misinformation, “especially the ‘don’t be afraid
of COVID’ statement.” Roth responded that vague statements of optimism were not
“misinformation.”
One of
the biggest takeaways from the #TwitterFiles is that the FBI and the U.S.
intelligence community were playing a significant role in content moderation.
As Taibbi reported, several U.S. government agencies, along with state and local
law enforcement, used the bureau’s liaison portal with Twitter to flag more tweets
than anyone previously suspected. Some of the tweets were from low-follower
accounts whose proprietors were making dad jokes about when the election was
scheduled. One Twitter content moderator openly expressed his fear that FBI
agents were doing random searches for the specific purpose of finding
violations of Twitter’s rules.
The
American public has been led to believe that the FBI’s relationship with
social-media companies after the 2016 election was designed to help foil and
expose foreign propaganda. But the #TwitterFiles show that the FBI’s San
Francisco field office and its Foreign Influence Task Force became a general
conduit for requests from local law enforcement, the U.S. intelligence community,
and others to take action against accounts with little to no proof of a foreign
connection. What had begun as a response to Russian meddling in the 2016
campaign had become by 2020 an open door for the federal government to act as
Twitter’s partner in content moderation.
One
obvious question raised by the #TwitterFiles is how social-media companies
allowed the new Internet nomenklatura to dictate the terms of content
moderation in the first place. The answer requires some history.
In the
1990s, during the first gold rush in Silicon Valley, the politics surrounding
the concept of free speech were radically different from what they are today.
The right was concerned about the dangers of offensive speech; the left wanted
to leave the marketplace of ideas untouched. Just as liberals and conservatives
tussled over Howard Stern and gangster rap, they also crossed swords over the
Internet.
At
first, social conservatives won the day. In 1996, Congress passed a law known
as the Communications Decency Act, which, among other things, criminalized
“patently offensive” or “indecent” content on the Internet if it was
plausible that adolescents and children could view it. This was the
culmination of a debate that had been raging in America since the 1960s over
whether local communities could prohibit a neighborhood newsstand or bookstore
from selling pornography. The right largely lost these challenges. But at the
height of the Clinton years and the dawn of the Internet, the argument that
the Internet’s unlimited capacity to disseminate information posed
unprecedented dangers briefly triumphed.
Not for
long. In 1997, the Supreme Court struck down most of the Communications Decency
Act on First Amendment grounds. What remained was a compromise crafted by
then-Representatives Chris Cox and Ron Wyden. Internet service providers
would be encouraged to enforce terms of service to limit the most offensive
material from being posted. In turn, according to the language of the law’s
Section 230, “no provider or user of an interactive computer service
shall be treated as the publisher or speaker of any information provided by
another information content provider.” This formulation—which meant that
Internet platforms were to be treated pretty much as though they were the new
era’s telephone wires and exchanges rather than purveyors of the material posted
on them—made the Internet as we know it possible. Everything from Internet
pornography to the comment sections on news sites to Facebook, Twitter, and
Instagram owes a debt to Section 230.
There
was an important exception. Internet service providers still had to cooperate
with law enforcement on criminal cases, such as those against child
pornographers. These providers also had to police their servers for copyright
infringement. This meant that content moderation in the late 1990s and early 2000s was mainly
focused on taking down Napster and other sites that allowed people to acquire a
digital library of music for free. Hate speech, harassment, doxxing,
disinformation, conspiracy theories, and foreign propaganda—these were all fine
as the world began to get online.
The protection
provided by Section 230 was a license for disruption. And the first
social-media companies took full advantage. Just look at some of the mottos
associated with the founders of social media. Facebook in its early years
adhered to the creed, “move fast and break things.” Twitter executives once
described their company as “the free speech wing of the free speech party.”
This
anarcho-libertarian approach to content moderation is captured in a vignette
from Pyra Labs, the company started by Twitter co-founder Ev Williams in 1999.
There he created the software known as Blogger, which, along with Kinja and
LiveJournal, provided the tools that launched the blog revolution in the early
2000s. Blogger’s mantra, “push button publishing for the people,” meant that
anyone could publish what he wanted on the Web.
Nick
Bilton’s 2013 book, Hatching Twitter, features a scene in which a
new Pyra employee is confounded by a customer-service email complaining about a
blog that displayed an animated picture of a group of men having sex on a
trampoline. Asked what he should do about it, Williams replied, “Nothing.”
Bilton writes that Williams soon “realized it would be impossible to police all
of the posts that were shared on the site, so as a rule, he opted for an
anything-goes mentality.”
This
“anything goes” approach to content moderation carried over to the next venture
for Williams, Twitter. He co-founded the platform with Jack Dorsey, Biz Stone,
and Noah Glass in 2006. For its first years, Twitter really was the Wild West
of social media. Trolls created accounts impersonating other users. Users could
be doxxed—meaning their home addresses and other personal information could be
published. Ethnic and racial slurs were allowed. And for a while, none of it
seemed to matter. Twitter was a breakout hit in Silicon Valley. It was making
no money, but by 2009 it was valued at more than $4 billion. Celebrities
flocked to the platform. That year, Ashton Kutcher challenged CNN to a race to
see which account could reach a million followers first (Kutcher won).
In 2011,
Stone published a blog post on behalf of the company with the title: “The
Tweets Must Flow.” He explained that there were times when the company would
have to remove tweets from the platform, such as with spam or tweets that
violated local laws. But, he wrote, “we make efforts to keep these exceptions
narrow so they may serve to prove a broader and more important rule—we strive
not to remove Tweets on the basis of their content.”
In this
period, Twitter, Facebook, and other social-media platforms also strove to be
politically neutral. For example, during the 2009 Iranian uprising over the
stolen presidential election, the State Department persuaded Twitter to delay a
long-scheduled maintenance update that would have disabled the service while
protestors were planning a major demonstration. Twitter agreed, but after the
news leaked that it had done so at the request of the U.S. government, Stone
and Williams became worried. “Now it seemed Twitter had been seen as picking a
side in an international war of words,” Bilton wrote. “It had been seen on one
side of a moral and diplomatic fence—exactly the last place it wanted to be.”
An
anecdote from Twitter’s larger rival, Facebook, also demonstrates the
seriousness of social-media companies’ initial efforts to stay out of domestic
politics. In 2016, on the tech-news site Gizmodo, a former curator of
Facebook’s “trending topics” blew the whistle on the operation. This anonymous
contractor described a kind of Facebook newsroom where Ivy League–educated
contractors chose news items to feature next to Facebook’s News Feed. The
whistleblower claimed that almost all the curators were liberal and that news
favorable to the right was suppressed. Fearing a backlash, Facebook decided to
fire the contractors and leave the work of selecting news stories for “trending
topics” to an algorithm.
This was
a textbook case of creating a new problem to solve an old one. The algorithmic
solution to the perception of bias at Facebook was jet fuel for fake news.
Junky websites that trafficked in outrageous clickbait were favored by
algorithms that sought to increase engagement. In his book Facebook:
The Inside Story, Steven Levy writes that without the human oversight for
“trending topics,” the algorithm “rewarded the types of posts that thrived on
the News Feed—attention-getters, without regard to truth, good intentions, or
newsworthiness.” Almost overnight, “trending topics” spotlighted stories from
fraudulent sites, such as the “Denver Guardian.” Items would claim that Hillary
Clinton had died or that the pope had endorsed Donald Trump. In this moment,
Facebook was still moving fast and breaking things. After Trump was elected,
though, Facebook would slow down.
There is
a common misconception that the heavy-handed content moderation at Twitter,
Facebook, and other social-media sites really began after the 2016
election—when the social-media companies, Congress, and the FBI discovered that
Russia had been using these platforms to spread disinformation and misinformation.
That is
true to an extent, but the ground was prepared for this before Trump. In 2015,
for example, Twitter finally took on the issue of harassment. For the first
nine years of the site’s existence, Twitter handled reports of abuse on an ad
hoc basis. Users were encouraged to report offensive tweets, and moderators
would decide whether they violated the rules. Twitter allowed you to block
abusive accounts, but that was about the extent of its efforts at what is today
known as
“online safety.”
Then This
American Life, the public-radio show, released an episode in early 2015 by
feminist writer Lindy West about an anonymous troll who had created an account
based on the identity of her father. Paul West had passed away only 18 months
earlier from prostate cancer. The account featured a photo of her father and a
bio that read “embarrassed father of an idiot.” The piece was powerful enough that
Twitter employees raised it in the company’s internal message system. In
response, the CEO at the time, Dick Costolo, acknowledged, “We suck at dealing
with abuse and trolls on the platform and we’ve sucked at it for years.”
Two
months later, Twitter’s general counsel at the time, Vijaya Gadde, wrote a mea
culpa in the Washington Post that attempted to straddle the
line between the company’s founding ideal of “letting the tweets flow” and a
new ethos of protecting vulnerable communities from online harassment. “Freedom
of expression means little as our underlying philosophy if we continue to allow
voices to be silenced because they are afraid to speak up,” she wrote. Gadde
noted that Twitter had tripled the size of the team devoted to online safety
and predicted that the company would be able to respond to user complaints in a fraction
of the time it had taken before these changes.
Around
the same time, Facebook was confronting a similar problem on a much
larger scale. In the early 2010s, Facebook expanded its services all over the
world, including countries with no tradition of free speech or digital
literacy. At the time, this was all seen through the prism of the Iranian
protests and the Arab Spring. Social media still had a halo around it. Facebook
was a company that was bringing the world closer together. It was also a
company that was enabling a genocide in Burma.
That was
the conclusion of a blistering 2018 UN Human Rights Council report on the
ethnic cleansing campaign against the Muslim Rohingya minority in Burma.
“Facebook has been a useful instrument for those seeking to spread hate, in a
context where, for most users, Facebook is the Internet,” the report said.
“Although improved in recent months, the response of Facebook has been slow and
ineffective.” The campaign had gone back years. On June 1, 2012, according to
Levy’s book, the Burmese president’s spokesperson took to Facebook to call for
action against the Rohingya. Levy wrote that the post essentially generated
“support for a government massacre that would indeed occur a week later.”
Another
factor in the 2010s that shifted the balance on the Internet from free speech
to online safety was a particularly effective English-speaking, American-born
jihadist named Anwar al-Awlaki. His video sermons were widely distributed
throughout the Internet, and they helped to radicalize Nidal Hasan, the U.S.
Army psychiatrist who murdered 13 people at Fort Hood in 2009. Several other
followers of al-Awlaki engaged in terrorism at his urging, delivered sometimes
in personal correspondence. In 2011, President Barack Obama ordered his killing in
a targeted drone strike in Yemen. But the videos of his sermons survived. For
the next seven years, it was still relatively easy to find them on YouTube and
other places on the Internet. It was not until 2017 that Google, YouTube’s
parent company, removed his videos from the platform. Until then, YouTube’s own
content-moderation rules had allowed al-Awlaki’s videos so long as they did not
directly incite violence.
Part of
the reason YouTube removed those videos was lobbying from the U.S. government,
academics, and pressure groups focused on online radicalization.
Another factor was the rise of ISIS, which used Twitter, Facebook, and other
social-media platforms to recruit and spread its propaganda on a much larger
scale than al-Qaeda did. A Twitter blog post from early 2016, for example, says
the company suspended over 125,000 accounts for threatening or promoting
terrorist acts in the last half of 2015 alone.
***
After
Donald Trump unexpectedly won the 2016 election, the social-media platforms
became a scapegoat for those seeking to explain Hillary Clinton’s gobsmacking
loss. Facebook came in for a beating in Congress from Democrats who were livid
to learn that a Kremlin troll farm known as the Internet Research Agency had
purchased ads and created memes that denigrated Clinton.
There
were two major problems for Facebook during the 2016 election that
prevented it from catching the Russians in the act. To begin with, Facebook was
so focused on increasing its advertising revenue overseas that it didn’t put
the ads themselves under much scrutiny at all. The second problem was that the
Russian memes and posts on its platform were not in any real violation of
Facebook’s terms of service. Because most of the posts were about the 2016
election, the content fell well within acceptable political speech, according
to Facebook’s own rules. When Facebook in 2017 began to suspend Russian
accounts, it did so because they were fakes—the Russians were pretending to be
people they were not—not because their content was unacceptable.
Twitter
was also blamed for the 2016 election. As Taibbi reported on January 3, 2023, the
company’s first audit for Russian activity revealed next to nothing. But as
outside academics, reporters, and Congress began to claim otherwise, Twitter
came to accept the narrative. “Researchers took low-engagement, ‘spammy’
accounts with vague indicia pointing to Russia (for instance, retweet
activity),” Taibbi wrote, “and identified them as not only Russian, but
specifically as creations of the media’s favorite villain, the Internet Research
Agency.”
The
focus on Russian meddling created an environment where it was possible for the
FBI and other government agencies to play a persistent and direct role in
content moderation for Twitter. By the 2020 election, what had begun with
discreet requests from the U.S. government or committees in Congress to suspend
accounts suspected to be Russian fronts transformed into a firehose of diktats
from federal, state, and local authorities to censor content. Twitter was no
longer a disruptive force for free speech; it was a tool for controlling
discourse on the Internet.
This
excessive discourse control often stymied the flow of real information in the
name of filtering out disinformation. In his installment of the #TwitterFiles,
which focused on the platform’s moderation of health misinformation, the gadfly
critic David Zweig asked a devilish question. If Twitter had allowed dissent
from the Covid dogma on its platform, would schools have opened sooner? Would
the lockdowns have ended sooner?
Not
every account that was suspended or throttled during the pandemic was giving
sound medical advice or speaking truth to power. But plenty of them were. Zweig
found that a tweet from Harvard epidemiologist Martin Kulldorff asserting that
children and people with prior infections do not need vaccinations—a position
in line with vaccine policies in other countries—had come into Twitter’s
crosshairs. Kulldorff’s tweet deviated from recommendations of the Centers for
Disease Control on vaccines. So Twitter labeled it as health “misinformation.”
One
tweet alone would not have shortened the lockdowns or opened schools. But Zweig
says that he found numerous examples of tweets taken down or labeled simply
because they challenged whatever the CDC policies were at the time. Add to this
Twitter’s efforts to shadow-ban accounts that countered conventional Covid
wisdom. Weiss disclosed in December that Stanford University Medical School’s
Jay Bhattacharya had been placed on a “trends blacklist,” which de-amplified
his tweets during the pandemic. In other words, Twitter used Web juju to limit
the ability of others on the platform to read what Bhattacharya had written.
It’s
hard to measure precisely the cumulative effect of muting online dissent. But
we know from the experience of dictatorships that governments that lack the
feedback afforded by regular elections and free speech often forge ahead with
reckless and stupid policies long past the point of futility. We also know that
efforts to limit what the public can read, hear, or view are almost always done
with the best intentions. The Soviets justified domestic censorship in
part to protect their citizens from foreign disinformation. The nomenklatura of
the Soviet Union believed it had an obligation to keep the country’s radio, newspapers,
and television free of Western propaganda.
None of
this is to compare America today to the evil empire. Twitter and Facebook are
private companies. And some content moderation is necessary for social-media
platforms. But there is one important similarity between the original
nomenklatura and our own. Like all censors, the content moderators today do not
trust the rest of us to evaluate the information we encounter in the world.
They believe that the minds of the masses are like balls of clay to be molded
and shaped, incapable of critical thought and discernment. As history has
proven time and time again, this assumption makes fools of us all.