By Nick Catoggio
Friday, February 26, 2027
Remember The Phantom Menace? (Hopefully not.) Amid
the scintillating lightsaber duels and even more scintillating Jar Jar Binks
set pieces, it was easy not to notice that the plot centered on—yawn—a trade
dispute.
The conflict between the Defense Department and AI firm Anthropic is like that.
Superficially it’s a sci-fi blockbuster about Skynet.
At heart it’s a decidedly less sexy story about contracts and private property.
Anthropic supplies the Pentagon with AI technology for
its classified systems. It’s also built a reputation as the most ethically
conscious of America’s AI titans (although that reputation isn’t as sturdy as it used to be). Earlier this week
Defense Secretary Pete Hegseth told the company’s CEO, Dario Amodei, that the
military wishes to use the technology without any restrictions except those provided by law.
No can do, Amodei told him. Anthropic refuses to let its
AI be used for two purposes: to conduct mass domestic surveillance of Americans
and to operate fully autonomous weapons, i.e., drones lacking human
supervision.
AI is already so good at synthesizing information
quickly, Amodei said recently in an
interview, that using it for domestic surveillance could allow the
government to detect political opponents, compile dossiers on them, and begin
tracking their movements in a matter of seconds. And without human oversight,
AI-powered drones can’t be trusted to disobey unlawful orders—or, perhaps, to
carry out lawful ones without going haywire. “Today, frontier AI systems are
simply not reliable enough to power fully autonomous weapons,” the CEO warned
in a statement released on Thursday.
Late Friday afternoon the president replied with a
similar degree of thoughtfulness. “THE UNITED STATES OF AMERICA WILL NEVER
ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND
WINS WARS!” he declared on Truth Social, announcing that the federal
government will phase out Anthropic from all federal systems over the next six
months.
More revealing was how Hegseth responded to Anthropic
earlier this week: Lift your restrictions, he told Amodei, or we will nuke
you.
Figuratively, of course. The nuclear option in this case
is the Pentagon’s threat to designate Anthropic a “supply chain risk.”
That designation has traditionally been reserved for
foreign companies that the U.S. fears would spy on or sabotage American
national security capabilities if their product ended up in our domestic
defense supply chain. (A Chinese or Russian AI firm would be an obvious
example.) Any American firm that does business with such a company is barred
from doing business with the Pentagon, and business with the Pentagon is
awfully lucrative for our country’s many, many defense contractors.
Being labeled a “supply chain risk” renders a foreign company instantly
commercially radioactive to all of them.
Never before, according to Amodei, has the federal government threatened to designate
an American company a supply chain risk, let alone a company that the
Pentagon trusts enough to have already integrated into its classified defense
systems. If Hegseth were to follow through, it would force every business in
the United States that supplies the military to choose between working with the
Defense Department and working with Anthropic—a potentially “existential”
scenario for Amodei’s firm.
And shortly before this piece was published, that’s just
what the defense secretary did. “Effective immediately, no contractor,
supplier, or partner that does business with the United States military may
conduct any commercial activity with Anthropic,” Hegseth announced
in a tweet.
Dispense with your ethics and comply with our demands or we will destroy you. Once you get past the gee-whiz stuff about killer drones, that plot line is quite familiar, no?
Meaningless assurances.
In principle, the Pentagon’s position is defensible.
Neither Anthropic nor any other private firm should have veto power over how
the military conducts business. So long as Pete Hegseth’s department behaves
lawfully, it’s fulfilling its obligations. If Amodei disagrees, he should
expect to be dropped as the department’s AI vendor of choice.
When I say that this story is at bottom a humdrum
contract dispute, that’s what I mean. Picking a side is hard—or would be in a
world in which Americans hadn’t chosen to put a mafioso and his dissolute Fox
News-host sidekick in charge of the U.S. military.
Because we don’t live in that world, we’re stuck with
this question: Why would anyone take this kakistocracy’s word for it when it
promises to follow the law in using Anthropic’s AI?
“The Department of War has no interest in using AI to
conduct mass surveillance of Americans (which is illegal),” Pentagon spokesman Sean
Parnell said Thursday. That parenthetical is adorable. The Department of
Defense (not War) has already conducted
illegal operations under Trump’s and Hegseth’s leadership and is poised
to do so again in Iran on a grand scale. The defense secretary himself has,
in barely veiled terms, encouraged American troops to commit war crimes and moved to punish Democrats who simply warned service members not to obey unlawful orders.
The Trump-Hegseth Pentagon functions, by design,
in a way that makes it difficult
to know what the law is. Its assurances that Anthropic’s AI will be used
exclusively for legal purposes are worth less than nothing.
“It is unbelievably rare that corporate ethics constrain
government behavior, as opposed to the other way around,” Emma
Isabella Sage marveled Wednesday in her piece for The Dispatch on
this dispute, but it can’t be otherwise under the circumstances. Americans
opted to be led by amoral postliberal cretins, so it fell to Dario Amodei to
supply the restraints on his own technology.
Extortion.
The ethical vacuum inside the federal government isn’t
the only hallmark of Trumpism in this case. Hegseth’s ruthless, possibly
unlawful, and likely counterproductive approach to Anthropic also stinks of it.
Negotiations that involve the president and his cronies
invariably lead to threats, which I’m sure they would rationalize as a matter
of driving a hard bargain. But it’s possible to drive a hard bargain without
vowing to ruin your opponent if he won’t agree to your terms or to seize what
you want from him forcibly if he won’t hand it over.
That’s called extortion. Give us Greenland, or we
might invade. Change your university’s rules, or we’ll end your federal
funding. Settle my lawsuit against you, or your merger won’t be approved. Vote
the way I want on this bill, or I’ll endorse your primary challenger. “Negotiations”
with the current White House are exercises in coercion, with the federal
government’s mind-boggling economic influence typically supplying the needed
leverage.
That’s precisely what Hegseth is doing by abusing the
“supply chain risk” designation with Anthropic: attempting to ruin the company
by making it persona non grata to defense contractors nationwide. If he
gets away with it, he could pull the same extortionate stunt on any other firm
in the United States that does business with the Pentagon to pressure it to do
his and Trump’s bidding. Dean Ball, a former AI adviser to the administration,
didn’t mince words about it in an interview yesterday with The Bulwark. “This would be one of the worst things
for the American business climate I have ever seen the government do,” he said.
But it’s worse than that. Hegseth was and maybe still is
also considering using the 1950 Defense Production Act, a law that lets the
president direct private industry to produce certain “critical and strategic”
goods, to force Anthropic to drop its ethical restrictions on how the
military uses its AI software. That’s literally coercive, beyond even what the
Trump administration is usually willing to stoop to. And in this case, paired
with the “supply chain risk” threat, it’s incoherent: “You’re telling everyone
else who supplies to the DOD you cannot use Anthropic’s models, while also
saying that the DOD must use Anthropic’s models,” Ball told Politico.
Coherent or not, using the DPA to compel Anthropic’s
acquiescence is a logical move for an administration of right-wing
socialists that’s already carved out equity shares for itself from a number of private
companies. Hegseth and the White House aren’t seizing the means of production
by claiming ownership of Amodei’s company, but by presuming to dictate the
contractual terms under which Anthropic’s intellectual property is used, they’re
converting
a private enterprise into a sort of state asset.
To any fascist movement, those outside of it are either
servants or outlaws. The DPA is an attempt to make Anthropic a servant; the
“supply chain risk” nuke is an attempt to make it an outlaw. Go figure that
Amodei might not trust the Pentagon to distinguish Americans from enemies when
deploying killer AI-run drones if Hegseth can’t distinguish Americans from
enemies when deciding who is and isn’t a risk to the supply chain.
Silly season.
I can’t help feeling a little silly getting exercised
about this, though. It’s all so … familiar.
The dumb and nasty demagoguery being used by the White
House and its flunkies to defend their position is familiar. “It’s a shame that
[Dario Amodei] is a liar and has a God-complex,” Defense Undersecretary Emil
Michael complained. “He wants nothing more than to try to personally
control the U.S. military and is ok putting our nation’s safety at risk.”
I do not believe Amodei wants to personally control the
U.S. military. (AI overlords have grander ambitions.) But if he did, how
tremendously stupid would one have to be to put his technology in charge of
classified military systems and to demand fewer restrictions on that
technology?
Michael feels obliged to post dreck like that for the
same reason Hegseth felt obliged to threaten to go nuclear on Anthropic, I
assume. The domineering culture of Trump’s administration requires it. If
you’re not behaving with gratuitous, off-putting, and probably
counterproductive belligerence toward your opponents, you’re not “fighting”
ruthlessly enough.
The fact that the White House is on the wrong side of
public opinion in this matter is also quite familiar.
It happens a lot nowadays, as
I noted recently, and will happen again if the president pulls the trigger on attacking Iran. It’s happening in the Anthropic dispute,
too. Earlier this month David Shor’s firm polled the issue by asking
respondents which comes closer to their view: Should the government require
unrestricted access to all U.S. AI technology to ensure that we stay ahead of
China or should private companies be allowed to set ethical limits on how the
government uses their technology?
Overall, the public split
21-54 on that choice. Swing voters split 24-51. Even Trump voters split
28-44. “The people unsurprisingly do not want killer robots and do not trust
Trump/Hegseth/the Republican party to do the right thing without limits,” Shor
concluded. I suspect the former is more of a factor than the latter—we’ve all
seen The Terminator—but whatever the explanation, Americans appear to be
on Anthropic’s side. If there’s an unpopular position on any issue, rely on the
Trump administration to find it, claim it, and be really boorish about
it.
The fact that Congress is nowhere to be found in this
fiasco is also familiar, needless to say.
“While it’s nice that Anthropic is digging in their heels
here, it’s insane that such questions as ‘how much killing will we let the
killer robots do on their own’ are being hashed out as back-room handshakes
between the military and its AI contractors in the first place,” Andrew Egger observed at The Bulwark, wondering
where our august legislature is in all this. Former Air Force Secretary Frank Kendall made the same point in an op-ed today for the
New York Times, calling on Congress “to pass, as part of comprehensive
AI regulation, restrictions on the most dangerous uses of these tools despite
the Trump administration’s strong resistance to such limits.”
Seems logical. Seems impossible, too: The president will
not allow congressional Republicans to tie his hands in setting policy for what
will soon be the most lucrative and powerful industry on Earth, assuming it
isn’t already. If you thought he liked tariffs because of the quasi-dictatorial
power that his monopoly over trade granted him, wait until he gets a taste
of playing favorites with AI. Democrats would need to win close to a
supermajority in both houses of Congress this fall to pass AI regulation over
Trump’s veto next year, and that’s not happening.
And even if it did, you know how he feels about laws he doesn’t like.
Democracy and nationalism.
There’s one more thing that’s familiar about the
Anthropic episode. Like so many of the daily political dramas we get spun up
about, it probably won’t matter much.
None of us believes that the government will ultimately
fail to use AI to surveil Americans or build self-guided killer drones, do we?
The latter, at least, is a military necessity: Once China fields fully
autonomous airborne death merchants powered by superintelligence, the United
States will have no choice but to keep pace. At the rate drone warfare is
progressing in Ukraine, we might even see that sort of weapon deployed in
battle if fighting drags on for another year or two.
Amodei acknowledges as much, noting in his statement yesterday that “fully autonomous weapons (those
that take humans out of the loop entirely and automate selecting and engaging
targets) may prove critical for our national defense.” The technology isn’t
ready yet, he stresses, and shouldn’t be deployed without proper “oversight”
and “guardrails,” but he never calls it morally unconscionable or categorically
rules out supplying it.
His position is “not yet,” not “never.” Not now—but soon,
and probably sooner than we think.
The Pentagon doesn’t appear willing to wait, though, and
might not have to. Recently, to put pressure on Anthropic, it signed a deal making xAI the second artificial intelligence
firm authorized for use in its classified systems. If that name rings a bell,
it’s because xAI is Elon Musk’s company; it’s the outfit behind Grok, the
chatbot that serves Musk’s social media platform, Twitter.
The one that once turned Nazi and began calling
itself “MechaHitler.” The one that let Twitter users create nearly naked
sexual images of women—and children. That one. Pete Hegseth’s
Pentagon likes it because, and here I quote the Wall Street Journal, “The looser controls on Grok,
and Musk’s absolutist stance on free speech, have made it a more attractive
choice to the Pentagon.”
That “loose” AI will be the one that replaces Anthropic
in federal systems and will soon be handling mass surveillance and killer
drones, presumably.
See why I say it’s hard to get worked up about this?
Skynet is coming; which corporate logo it bears when it arrives hardly seems important.
Even so, I appreciate Amodei showing some spine. One line
in his Thursday statement stood out: In explaining why his firm’s technology
shouldn’t be used in certain military applications, he wrote, “In a narrow set
of cases, we believe AI can undermine, rather than defend, democratic values.”
With postliberals in charge of the United States and its military, it feels
vaguely scandalous for a figure of influence to declare that liberalism should
take priority over the nationalist imperative to target one’s enemies with
utmost ruthlessness.
How nice to know that not every guy who’s careening
toward the singularity that will destroy the world is a chud.