No Sentient Human Thinks Biden’s Press Coverage Is Worse Than Trump’s

By Charles C. W. Cooke

Tuesday, December 07, 2021


The Washington Post’s Dana Milbank believes that he has hit upon a journalistic scandal. “After a honeymoon of slightly positive coverage in the first three months of the year,” Milbank wrote last week, President “Biden’s press for the past four months has been as bad as — and for a time worse than — the coverage Trump received for the same four months of 2020.” This is a problem, he added, because it has led to the “news media” giving “equal, if not slightly more favorable, treatment to the authoritarians,” and thereby “serving as accessories to the murder of democracy.” On CNN yesterday morning, Milbank expanded his argument. “We see it as our job to be negative, to be adversarial,” he said. “But there’s a real problem when we are being just as adversarial ‘cuz a guy didn’t pass a bill as we are when a guy is trying to overthrow democracy.”


If, upon reading this, you thought to yourself, “there is no human being alive who could possibly have concluded that this is happening” . . . well, then you were correct, because it turns out that no human being alive did conclude that this is happening. Instead, Milbank’s evidence — which he describes hilariously as “painstakingly assembled” “proof” — came from a bunch of servers. “At my request,” he explained, “Forge.ai, a data analytics unit of the information company FiscalNote, combed through more than 200,000 articles — tens of millions of words — from 65 news websites (newspapers, network and cable news, political publications, news wires and more) to do a ‘sentiment analysis’ of coverage. Using algorithms that give weight to certain adjectives based on their placement in the story, it rated the coverage Biden received in the first 11 months of 2021 and the coverage President Donald Trump got in the first 11 months of 2020.”


Responding to Milbank’s conclusions, the statistician Nate Silver noted drily that “the degree to which the extremely nontransparent ‘AI’ analysis cited by Milbank should shift our priors” on this question “is somewhere between zero and less than zero.” Silver is correct. Milbank’s credulous talk may impress the partisan laymen, but the harsh truth is that what he is selling here is closer to snake oil than to “artificial intelligence” (itself a marketing term). In his piece, Milbank claims that “artificial intelligence can now measure the negativity with precision.” But this isn’t true — it can’t be. Human communication is extraordinarily complex, and it remains the case that even the most sophisticated algorithms struggle to parse it usefully. Absent heavy-handed intervention, “AI” is unable to comprehend commonplace linguistic tools such as irony, sarcasm, cynicism, in-jokes, callback humor, and self-deprecation, and, because these are so heavily contextual, it is of extremely limited use when attempting to judge the tone or scope of quotidian human sentiments. In a categorical sense, “she’s not a good violinist” and “she’s the worst f***ing violin player in the world” are both “negative.” But they do not represent the same critique. You know that. I do, too. But does an undisclosed algorithm, utilizing a set of unrevealed input variables, deployed by a researcher of unknown quality, at the behest of a partisan journalist? Let’s say that the odds aren’t great.
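The violinist example can be made concrete with a toy bag-of-words scorer. This is a deliberate caricature for illustration only; Forge.ai’s actual method, lexicon, and weights are undisclosed, and nothing below reflects them:

```python
# A toy bag-of-words sentiment scorer -- a deliberate caricature, not any
# real vendor's algorithm. It shows how categorical "negativity" erases
# the gap between mild and scathing criticism, and how simple negation
# ("not ... good") defeats word-counting entirely.

NEGATIVE_WORDS = {"not", "worst", "bad", "failure"}
POSITIVE_WORDS = {"good", "great", "best"}

def naive_sentiment(text: str) -> int:
    """Sum +1 for each positive word and -1 for each negative word."""
    score = 0
    for raw in text.lower().split():
        word = raw.strip(".,!?\"'")
        if word in NEGATIVE_WORDS:
            score -= 1
        elif word in POSITIVE_WORDS:
            score += 1
    return score

mild = "she's not a good violinist"
harsh = "she's the worst violin player in the world"

# The mild criticism nets to 0 ("not" and "good" cancel each other out),
# while the far harsher one scores only -1: neither number captures the
# actual severity of the critique.
print(naive_sentiment(mild))   # 0
print(naive_sentiment(harsh))  # -1
```

A scorer of this kind can rank words, but not critiques, which is precisely the distinction at issue.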


Nothing about political language excludes it from these structural deficiencies. “President Biden failed to implement his vaccine mandate,” “President Trump is a failure and has left the country in an unhealthy state,” and “Senator Dole’s health eventually failed him” are superficially similar statements. But they do not amount to the same sentiments when processed by nuanced human ears. Likewise, straight pieces documenting adverse things that have happened to a political figure, recording their poor polling numbers, or chronicling disagreements within their party, are not intrinsically or deliberately “negative” in nature, but are all liable to be interpreted as such by drones.
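A toy keyword flagger (again, a sketch invented for illustration, not any real pipeline) makes the point: all three “fail” sentences trip the same negative flag, even though only one of them is an indictment of its subject:

```python
import re

# Toy keyword flagger: any sentence containing a "fail" stem is marked as
# negative coverage. A caricature for illustration, not a real pipeline.

def flags_negative(sentence: str) -> bool:
    return re.search(r"\bfail(s|ed|ure)?\b", sentence.lower()) is not None

sentences = [
    "President Biden failed to implement his vaccine mandate",
    "President Trump is a failure and has left the country in an unhealthy state",
    "Senator Dole's health eventually failed him",
]

# All three are flagged, yet only the second is a personal indictment,
# and the third is not a criticism of its subject at all.
print([flags_negative(s) for s in sentences])  # [True, True, True]
```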


There is a reason that Milbank’s piece provoked such hysterical laughter upon publication, and that reason is that its readers were people in possession of sophisticated brains, of ears that can process distinctions, and of political memories that go back earlier than January of this year. I am, as my readers know, both an enormous devotee of technology and a self-confessed tinkerer (you can’t tell me to “learn to code,” because I already can), but even I have not been so blinded by the lights of science that I consider a glorified Ctrl + F program that has been told to “give weight to certain adjectives based on their placement in the story” to be more capable of grasping attitude, tone, scope, disposition, proclivity, and sensibility than real-time, real-life human observers.
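For what it is worth, Milbank’s own description of the method (“algorithms that give weight to certain adjectives based on their placement in the story”) amounts to something like the following sketch. The 1/(position + 1) decay and the tiny adjective lexicon are my assumptions; the real weighting scheme is undisclosed:

```python
# Sketch of placement-weighted adjective scoring. The decay function and
# the lexicon here are assumptions for illustration only; the actual
# weighting behind the cited analysis is undisclosed.

ADJECTIVE_SCORES = {"disastrous": -1.0, "chaotic": -1.0,
                    "strong": 1.0, "historic": 1.0}

def placement_weighted_score(words: list[str]) -> float:
    total = 0.0
    for position, word in enumerate(words):
        if word in ADJECTIVE_SCORES:
            # Adjectives nearer the top of the story count for more.
            total += ADJECTIVE_SCORES[word] / (position + 1)
    return total

lede_negative = "disastrous week overshadows a strong jobs report".split()
lede_positive = "strong jobs report despite a disastrous week".split()

# Nearly identical vocabulary, opposite verdicts, purely because of the
# order in which the words happen to appear.
print(placement_weighted_score(lede_negative))  # -0.8  (-1 + 1/5)
print(placement_weighted_score(lede_positive))  # ~0.83 (1 - 1/6)
```

Two ledes built from almost the same words land on opposite sides of zero, which is the kind of brittleness that placement-weighting invites.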


Does Forge.ai have a service that detects obvious gaslighting?
