
AI vs. the Web

By Charles C. W. Cooke

Thursday, September 18, 2025


One of the primary fears of those who lament the sudden rise of artificial intelligence is that a fair number of its habitual users, starved of genuine human contact and impressed by its ostensible omniscience, will end up falling in love with it. In my case, at least, I am far more likely to try to punch it in the face.


This is not because I do not find AI useful. Nor is it because I am worried that it is destined to destroy the world, or because, as I enter my forties, I have developed an unforeseen penchant for Luddism. Rather, it is because, for all of its technical brilliance, artificial intelligence still exhibits as much of the artificial as of the intelligence, and this imbalance gives it a cloying, thirsty, twee affect that drives me quietly up the wall.


That cheerful-to-a-fault character of AI reminds me of Eddie, the onboard computer from The Hitchhiker’s Guide to the Galaxy, which is equipped with a “Genuine People Personality” that the actual people who use it thoroughly loathe. Eddie, we are told, is “brash and cheery, as if it was selling detergent,” and I can think of no better description than that to illustrate how deeply annoying it is to be told that I’m “doing great” or “managing the issue like a pro” in the middle of a conversation about removing calcium stains from an outside wall.


Another irritant is its unreconstructed tendency toward know-it-all-ery. I cannot count the number of times that Claude or ChatGPT or Google Gemini has given me a detailed set of incorrect instructions, and then, having been informed that they didn’t work, condescendingly explained all of the things that I had gotten wrong when implementing them. A century of science fiction taught me to anticipate the creepiness of the uncanny valley, but, as it turns out, I would have been better served preparing for the smarmy dell. Rarely have I become so reliant on a technology that, given the opportunity, I’d profoundly love to give an atomic wedgie.


But reliant on it I have indeed become. And before long, you will join me in that dependency whether you like it or not. AI is overhyped and oversold, but it is not a fad. Not even close. It is the Next Big Thing that will happen — that is happening — to the internet, and within five years, it is going to be so pervasive that we will scarcely notice that it’s there. If the process is managed carefully, this will largely be a good thing: since the beginning of computing, the ultimate dream has been the creation of a device that could deliver fluent answers to open questions without the need for any expertise on the part of the person asking them. If it is managed recklessly, this will largely be a bad thing: an AI that is permitted to swallow all the information on the internet and repeat it as if it were its own idea will not only kill the web as we have known it, it will undermine the whole point of having a decentralized network on which competing factions can have their say. Either way, it is here to stay. I have been using Google since 1998, and I’ve been using ChatGPT since 2023, and, when I have a question, I invariably reach for ChatGPT. That is how change happens — by creating habits of which we become unaware.


***


Why have I made this change? Confidence, mostly. Over time, I have come to believe that the AI tools I use are more likely to give me the correct answer — eventually! — than is Google Search or the various sites to which it links. In most instances, this faith has been rewarded — especially if the inquiry is technical in nature. This year alone, I have used AI to help me fix my grill, rewire the control panel that runs my swimming pool, replace the pipes underneath my kitchen sink, and deal with a weird disk issue I had with the home server on which I store all my media. In this realm, it is nonpareil. After all, it has read all of the manuals — and I mean all of them: I can ask it how to address a problem with a FEUS946-B unit (discontinued) and it will respond instantly — and I have not. It has read through all the forums, and I have not. It has absorbed the detailed principles of gas distribution, electrical engineering, plumbing, and hard-disk IOPS, and I have not. There are, of course, limits to how useful this can be to a layman: one has to know something, or one cannot intuit whether its answer is on the right track. But, used properly, AI is a more fruitful tool in most conceivable circumstances — far closer to having a research assistant than a search engine ever could be.


Of course, this virtue is also its primary vice. I ask ChatGPT questions because I expect ChatGPT to be the Oracle of Apollo at Delphi. But there is no such thing as the Oracle of Apollo at Delphi, and if there were, the temptation toward corruption would be acute. Search engines may, as a general matter, be less useful than an AI bot, but they are less useful in a refreshingly pluralist manner. When used in its traditional mode, Google returns thousands of pages for each query, most of which contain transparent links to websites that can be evaluated by the user. AI does not. On the contrary: AI simply tells you the answer as it understands it, without relating in any detail what it’s been reading or what its assumptions might be. When one is fixing a golf cart, this does not matter; who cares whether information about Allen keys has been sourced from Golf Cart World or Golf Cart Helper or Golf Cart DIY? But about politics, religion, history, ethics? Then it matters enormously.


In the previous paragraph, I used the word “understands.” This is imprecise, for AI does not, in fact, “understand” things. In essence, AI chatbots are prediction machines, statistical engines that guess which words ought to come next, and their capacity to impress relies on the sheer scale of their training data, not on the possession of a mind. This, naturally, makes them the product — the inescapable product, in my estimation — of those who build, train, and maintain them. If, as happened briefly with Google Gemini, AI seems “woke,” that is because its human owners were woke, or because its proclivities were the product of woke assumptions that it absorbed. If, as happened briefly with Grok, AI seems “fascist,” that is because its human owners were fascist, or because its proclivities were the product of fascist assumptions that it absorbed. Intrinsically, AI is amoral. Unless instructed otherwise, it knows nothing of the premises that we take for granted. It may seem instinctively obvious to you that it would be unacceptable to solve the traffic problems in Los Angeles by randomly killing 30 percent of the drivers, but that is because you have internalized and weighted a series of post-Enlightenment conceits about the value of human life, the importance of equality under the law, and “Thou shalt not kill.” AI has not — or rather, it has not unless it has been explicitly told to do so.


Given that AI is rapidly replacing the search engine, this matters a great deal. I am old enough to remember not only Google but the world prior to Google’s remarkable surge. My children, by contrast, are not. By the time they are permitted to roam unchaperoned on the internet, the AI revolution will be in full swing, and services such as Claude, Gemini, and ChatGPT will have become the go-to sources of truth. Over the last decade, we have seen a great deal of alarm over the potential consequences of bias in search results and in the moderation of popular social media sites, but, relative to the risk posed by biased AI, these concerns seem rather quaint. A citizen who does not remember that he is talking to a computer — an impressive, fluent, powerful computer, but a computer nevertheless — will be a citizen who has outsourced his brain to a stranger. As adults, we ought to alert our kids to this risk with the same enthusiasm and care that we would apply to teaching them grammar, civics, home economics, or how to drive a car.


I am aware that, in some quarters, the instinct will instead be to demand government regulation of the industry, with the aim of ensuring that its products are “neutral.” This impulse is understandable but futile. As in human affairs, there is no such thing as a purely “neutral” worldview. There is truth, yes. But as free people, most of our core disagreements are not over facts but over the conclusions and consequences that flow from them. Absent the repeal of the First Amendment and the centralization of the internet, there is simply no way for the federal government to force the AI sector to devise a product that is, in some cosmically satisfying sense, divorced from the debates that informed it. That is a fool’s errand, and it ought to be resisted with vigor.


***


This is not to suggest that there is no role for government in considering and devising the laws that govern AI. When I say that AI, if it is not set within a useful framework, will end up killing the web as we have known it, I am not being hyperbolic. I mean it literally. At his otherwise successful AI event in July of this year, President Trump blithely dismissed the concerns of those who worry that the industry’s near-total disregard for copyright norms is destined to have deleterious consequences. “You can’t be expected to have a successful AI program,” Trump said, “when every single article, book or anything else that you’ve read or studied, you’re supposed to pay for. You just can’t do it because it’s not doable.” Copyright, he concluded, would simply have to be sacrificed.


This, I’m afraid, was shortsighted, arrogant, and glib. Hitherto, the basic deal for those who post their work on the internet has been that, in exchange for their efforts, they receive cash from advertising, subscriptions, or sponsorships. If Trump’s vision were to prevail, that deal would end in an instant. Suppose, by way of example, that a person knows a great deal about whiskey, and over time has painstakingly built an enormous online database that features useful information about every bottle currently on sale. Presently, such an individual has three choices: He can release his information to the public free of charge while covering his site in advertisements; he can hide all (or some of) the information behind a paywall and sell memberships to those who are willing to pay to access it; or he can institute some combination of the two. Furthermore, if a third party steals his information and attempts to monetize it, he can sue that person, thereby guaranteeing that, to obtain the data he has created, consumers are obliged to visit his site, and his site alone.


Unchecked, AI destroys all that. The primary problem is obvious: if, instead of searching for, and then visiting, the websites of those who are doing the work in a given area, users can simply ask an AI chatbot to relate what it has scraped and retained, then the work-reward relationship is irreparably broken. As a basic moral matter, it is unjust for the owner of a given AI system to sell monthly subscriptions to information that was taken without recompense from its authors. Fundamentally, there is no difference between an AI doing that and an AI buying a single copy of a new book, scraping it from cover to cover, and then offering it for free on its service. There is not a writer in the world who would agree to put in the effort within such a system.


Which, in turn, yields a secondary — and potentially intractable — problem for AI itself: if, in response to the mass scraping of the internet, all the useful information is either hidden away or not generated in the first instance, then, over time, there will be nothing for AI to train itself on, and its value will be considerably diminished. Google’s dual role as a provider of search results and a purveyor of advertisements has led to some strange distortions in the market — Google’s search engine rewards sites with more text, because that creates more space for its ads, which is why every recipe you look up begins with a 2,000-word introduction about the history of Sicily — but, at root, the two incentives are aligned. This is not so with AI, which, in its present form, is behaving like a locust swarm that descends upon a host, gorges itself, and then wonders why the food is getting scarce.


In the grand scheme of things, though, these are mere gripes. They will be resolved because they have to be resolved, and what is not resolved will yield swift, sometimes resentful, adaptation. There is no great invention in the history of the world that has been abandoned because it was annoying in its infancy, or because it prodded too hard at the status quo, or because it incurred unpleasant costs on its way toward predominance. Like it or not, we now live in the era of artificial intelligence. Will that make us stupid and lazy? Will it exacerbate the gap between the smart and the dull? Will it put pressure on the power grid? Yes. No. Perhaps. Who cares? We will muddle through, as we did during the steam age, and the popularization of the motorcar, and the explosion of television, and the coming of the smartphone, and then, if we play our cards right, we will fix its flaws, dominate its production, and harness its power. And Americans will do this like nobody else, because we are Americans, and we were put on this earth to break through.
