By Kevin D. Williamson
Thursday, June 30, 2016
Being an astrophysicist, Neil deGrasse Tyson is familiar with event horizons. He needs a refresher on epistemic horizons.
An event horizon (the term is generally associated with black holes) is a boundary in spacetime surrounding a massive object whose gravitational pull is so great that nothing that happens inside it can ever affect anything outside it. Which is to say, the escape velocity equals the speed of light, meaning that you could spend an eternity staring into it and never see what’s happening inside. If you got close enough to take a peek . . . the result would be what British astrophysicist Martin Rees calls “spaghettification,” and nobody wants to suffer that.
An event horizon is something you cannot see into. An epistemic horizon is something you cannot see out of.
If what you know of chaos science is limited to the published works of Dr. Ian Malcolm, you’ll despair to learn that the reality of it is a lot less sexy and a lot more mathy than it is in Jurassic Park.
Drawing from sources as diverse as the works of Henri Poincaré and mathematical biologist Robert May, scholars of complexity have disassembled Isaac Newton’s machine-like, deterministic model of reality that gave scientists the dream of a perfectly predictable world. As Melanie Mitchell put it in her indispensable Complexity: A Guided Tour:
Newtonian mechanics produced a picture of a ‘clockwork universe,’ one that is wound up with the three laws and then runs its mechanical course. The mathematician Pierre Simon Laplace saw the implication of this clockwork view for prediction: in 1814 he asserted that, given Newton’s laws and the current position and velocity of every particle in the universe, it was possible, in principle, to predict everything for all time. With the invention of electronic computers in the 1940s, the ‘in principle’ might have seemed closer to ‘in practice.’
That hope turned out to be a false one. Some complex systems (weather patterns, markets, animal population groups) turn out to be extremely sensitive to tiny variations in initial conditions, which we call, for lack of a better term, chaos. You can have a theoretically perfect model of the behavior of a system, but that behavior remains unpredictable — even in principle — because of variations that are beyond our ability to measure. Mitchell again:
The presence of chaos in a system implies that perfect prediction à la Laplace is impossible not only in practice but also in principle, since we can never know [initial conditions] to infinitely many decimal places. This is a profound negative result that, along with quantum mechanics, helped wipe out the optimistic nineteenth-century view of a clockwork Newtonian universe that ticked along its predictable path.
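The sensitivity described above is easy to demonstrate with the logistic map that mathematical biologist Robert May made famous. Here is a minimal sketch in Python; the starting values, perturbation size, and step count are illustrative assumptions, not drawn from the article:

```python
# Robert May's logistic map: x_{n+1} = r * x_n * (1 - x_n).
# With r = 4 the map is chaotic: two populations that begin a
# ten-billionth apart soon follow completely different trajectories.
# (All specific numbers here are illustrative choices.)

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0, returning every value."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # an "immeasurably" small perturbation

# Gap between the two runs at each step.
divergence = [abs(x - y) for x, y in zip(a, b)]

print(f"step  5 gap: {divergence[5]:.2e}")  # still microscopic
print(f"max gap:     {max(divergence):.2f}")  # order of the whole system
```

Run it and the two printed gaps tell the story: a difference of one part in ten billion at the start grows to the full range of the system within a few dozen iterations. A perfect model of the dynamics, and still no useful long-range prediction, because no measurement of the initial condition is ever exact enough.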
That is pretty heady stuff, but the unpredictability and fundamental unknowability of many aspects of reality are familiar enough, particularly when it comes to human social interactions (meaning, among other things, the whole of politics and economics), human beings being notoriously unpredictable creatures. Soldiers, entrepreneurs, and fashion designers all know that the best planning and research often go up in a flash when actual events start to unfold. Never mind the failed businesses; go back and read the initial plans of some of our most successful firms, and you’ll get a good laugh. Bill Gates denies ever having actually said that 640K of memory ought to be enough for anybody, but he does admit to being surprised at how quickly 640K became too little, at how important the Internet became, and at how quickly that happened. Paul Krugman, the great economist, famously predicted that “by 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”
The point here isn’t that Bill Gates and Paul Krugman are dumb — it’s that they aren’t.
Politicians like to tell simple stories about social problems, preferably stories in which their friends wear white hats and their rivals wear black hats. The 2008–09 financial crisis is an excellent example of that: The Left says that the problem is that we deregulated finance (never mind that we didn’t actually do that) and that “greed” caused bankers to trick tens of millions of Americans into taking out mortgage loans that they couldn’t really afford, with the result that wicked banksters such as Dick Fuld managed to cleverly . . . lose themselves billions of dollars. It’s a dumb story.
Some conservatives tell a pretty dumb story, too: that the bankers and mortgage brokers were in reality good, public-minded, upstanding types, who were viciously strong-armed into making loans to poor people, especially black and brown ones, who schemed to enrich themselves by . . . getting themselves foreclosed on, ruining their credit, losing their investments, and being put out of their homes.
The reality is that regulations, regulatory reforms, and economic incentives interacted in ways that no one foresaw — or could foresee — producing results that no one wanted. Securitization (bundling and chopping up mortgages into financial instruments that could be easily traded among firms) was intended to distribute risk among investors and institutions, but it ended up concentrating that risk. Everything from public-school failures to advanced mathematics contributed to the housing bubble and meltdown.
Many of the policies relevant to the housing bubble go back to the 1930s. No one in the Roosevelt administration could have foreseen what their policies ultimately would contribute to, nor could the deregulation advocates of the Reagan and Clinton years, or the regulators who helped shape housing and financial markets from the Great Depression until the current day. There’s a case to be made (I made it in National Review on December 15, 2008) that the invention of photocopying played a role, with credit-rating agencies switching from an investor-pays business model to an issuer-pays model once the easy replication of their reports made it more difficult to get paid for their work.
These things happen all the time. Protectionist measures taken by the United States against Japanese automakers ended up contributing to those firms’ technological innovation (especially with smaller four-cylinder engines) and allowing domestic automakers to forgo improvements in quality and performance; this ultimately made Japanese cars more attractive rather than less attractive to U.S. buyers. Agriculture policies that kept sugar prices artificially high led to the popularity of high-fructose corn syrup. The Islamic State emerged from our war on al-Qaeda and its supporters. Etc., etc., etc.
Professor Tyson, who may be the dumbest smart person on Twitter, yesterday wrote that what the world really needs is a new kind of virtual state — he wants to call it “Rationalia” — with a one-sentence constitution: “All policy shall be based on the weight of evidence.” This schoolboy nonsense was met with withering and much-deserved derision. Conservatives, who always have the French Revolution in their thoughts, reminded him that this already has been tried, and that the results are known in the history books as “the Terror.” Writing with a great deal of reserve in Popular Science, Kelsey D. Atherton notes:
Rationalia puts a burden on science that it cannot bear: to work, it must be immune to the passions of the day, promising an objective world and objective truth that will triumph over obstacles.
That’s true enough, but it shortchanges the scientific objection to Tyson’s Rationalia pipe dream, which is that it implicitly presupposes quantities and types of knowledge that are not, even in principle, available, even if the scientists in question were the dispassionate truth-seekers of Atherton’s ideal.
The epistemic horizon is not very broad. We do not, in fact, know what the results of various kinds of economic policies or social policies will be, and there isn’t any evidence that can tell us with any degree of certainty. The housing projects that mar our cities weren’t supposed to turn out like that; neither was the federal push to encourage home-ownership or to encourage the substitution of carbohydrates for fats and proteins in our diets. A truly rational policy of the sort that Tyson imagines must take into account not only how little we know about the future but how little we can know about the future, even if we consult the smartest, saintliest, and most disinterested experts among us.
That is part of the case for limited government and free markets. Government can do some things, such as guard borders (though ours chooses not to) and fight off foreign invaders. There are things that it cannot do, even in principle, such as impose a “rational” order on the nation’s energy markets, deciding that x share of our electricity supply shall come from solar, y share from wind, z share from natural gas, all calibrated to economic and environmental ideals. That is simply beyond its ken, even if all the best people — including Tyson, from time to time — pretend that it is otherwise. Free markets go about solving social problems in the opposite way: Dozens, or thousands, or millions, or even billions of people, firms, organizations, investors, and business managers trying dozens or thousands of approaches to solving social problems.
Consider the relatively straightforward question: How do we move people around to the places they need to go? Even the most simple-minded among us would realize that there isn’t a single answer to that question: Some trips are best done in a 747, some in a Honda Civic. What is the ideal mix of walking paths, bicycle routes, rickshaws, Hindustan Ambassadors, airliners, private jets, trains, hyperloops, spacecraft, sailboats, Teslas, hot-air balloons, zip lines, etc., for the world’s 7.125 billion people? And what will it be 20 years from now?
Would you really trust a group of politicians to figure that out?
There isn’t a road to Rationalia. There are billions of them, negotiated by individuals and institutions dozens or hundreds of times a day, every time they make a significant choice. Government programs are, by their nature, centralized, unitary, and static attempts to impose a rational order on complexity beyond the understanding of the people who would claim to manage it. Obamacare is an excellent example of that: No one intended for premium prices to skyrocket and for millions of people to lose their policies or for the majority of the American public to be unhappy with the program and its results, but that is what happened. The architects of Obamacare weren’t stupid, but, being ordinary mortals (albeit reasonably bright ones), their intellectual capacity was insufficient to the problem at hand: Small brains, big problems.
It isn’t ideology that imposes a relatively narrow circle on what government planners can do. And, with all due respect to the genius of F. A. Hayek (“The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design”), it isn’t only economics, either. The limitations on human knowledge are real, and they are consequential. As men like him have done for ages, Tyson dreams of a world of self-evident choices, overseen by men of reason such as himself who occupy a position that we cannot help but notice is godlike. It’s nice to imagine ruling from an Olympus of Reason, with men and nations arrayed before one as on a chessboard.
Down here on Earth, the view is rather different, and the lines of sight inside the epistemic horizon are not nearly so long as our would-be rulers imagine.