Friday, April 7, 2023

A Misguided Warning on AI

National Review Online

It was easy to miss amid the firestorm over the Trump indictment, but representatives of the Future of Life Institute released a letter, signed by various figures including Elon Musk, calling for a moratorium of at least six months on “the training of AI systems more powerful than GPT-4.”

The letter is presumptuous, pompous — and deeply wrongheaded. If no pause is agreed to, it intones, “governments should step in and institute a moratorium.” In the institute’s view, AI’s future should be circumscribed by regulation. AI research should be “refocused” to make “today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” happy adjectives that can mean anything that “independent” experts want them to mean. Oh, yes, there’s a request for taxpayer money: “Robust public funding” for this and “well-resourced institutions” for that.

It is possible that AI could, one day, evolve to a point that it “sees” us as surplus to its requirements. But we are still a long way from Skynet’s maw. As Noah Rothman pointed out in a recent article for National Review, critics of the letter “note that it elides the distinctions between modern AI and artificial general intelligence, which would approach a human’s capacity for cognition and is a technology that’s decades away — if it is feasible at all.”

Regulators are what they are. AI may never take on a life of its own, but the regulatory structure sketched out in the letter may well do so. The result would be to hobble AI’s development, in the West anyway, and perhaps for reasons that owe rather too much to ancient fears long predating Victor Frankenstein’s unfortunate tinkering. Some cynics might suspect more-rational, if unattractive, explanations for this proposal, ranging from hyping AI (and the stock-market valuations that would go with those magic initials) to using regulation (or even just a pause) to distort the competitive landscape.

But even if this letter has been prepared with the best of intentions, Beijing and its authoritarian associates elsewhere will not share our scruples. Should AI have the malevolent potential sketched out by the institute, the U.S. must develop the capability to deal with it. To be fair, the institute emphasizes that its proposal “does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.” But if these black boxes are as unpredictable as the institute would suggest, the best way to develop the skills necessary to manage them will be to learn by doing, not pausing. Moreover, the prospect of profiting from such black boxes will mean that they will attract much more brainpower than if they are to be left on the shelf while we ponder their use and let regulators design their future. There will be no such constraints in Beijing.

AI, like most technologies, comes with its dangers. That will justify some regulation as we see how it progresses. But it should not be an excuse for heavy-handed regulation in advance, driven by excessive reliance on the precautionary principle.

To take one example, worries about ever more sophisticated levels of “propaganda and untruth” make their predictable appearance in the letter. While its authors have ideas (some of them worth considering) about how aspects of this might be combated, the old Roman question — Quis custodiet ipsos custodes? (Who will guard the guardians?) — goes, as usual, unanswered. To claim that we can put our trust either in regulators, who are rarely without agendas of their own, or in supposedly “independent” experts is disingenuous. To believe such claims is naïve.

AI is already moving automation up the technological and social ladder. The jobs it threatens will be increasingly upscale, a development that might explain one particularly revealing note of panic that pops up in the letter: “Should we automate away all the jobs, including the fulfilling ones?”

New technologies cannot be wished away, nor can some of the problems they may create. But the institute’s benign-sounding idea that we should develop “well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause” should not be among the remedies. We have enough opportunities for rent-seeking as it is — even before machine learning perhaps puts it into overdrive.
