Anti-Americanism Is a Disease

By Charles Fain Lehman

Monday, April 20, 2026

 

Think what you will about Donald Trump; no one can deny his flair. Take, for example, a segment of his State of the Union speech earlier this year. “I’m inviting every legislator to join with my administration in reaffirming a fundamental principle,” Trump said. “If you agree with this statement, then stand up and show your support: The first duty of the American government is to protect American citizens, not illegal aliens.”

 

Unsurprisingly, every Republican in the chamber rose, while every Democrat remained sullenly seated. The ensuing applause went on for several minutes. Trump then lambasted Democrats: “You should be ashamed of yourself, not standing up. You should be ashamed of yourself.”

 

What was interesting about the moment was not so much that Trump tried the ploy, but how Democrats responded to it. After all, it’s hard to disagree with the premise: “The first duty of the American government is to protect American citizens, not illegal aliens” is the sort of proposition that gets supermajority support in polls. But standing in response to it would also have meant capitulating to Trump—a sin for Democrats more mortal than disagreeing with the idea that the American government is for Americans.

 

There was a time in American history when the calculus would have been different. If Ronald Reagan or George Bush (at least the elder, maybe the younger) had asked everyone in Congress to recognize that the American government’s duty is to citizens over illegal immigrants, the Democrats present would have feared appearing unpatriotic more than they would have feared supporting a Republican president. Today, it’s the reverse.

 

That is attributable, in part, to Trump’s unique relish in provoking division. But it is also the result of the fervor with which Democrats seem to hate not only Trump but everything he touches. Representative Rashida Tlaib’s decision to wear to the State of the Union not only a keffiyeh but also a button reading “F—K ICE” is typical of the aggressiveness demanded of a party some of whose voters literally want their representatives to “get shot” opposing Trump.

 

This matters not just because it makes political life unpleasant. There’s a deeper and darker truth revealed: It’s axiomatic that if you hate America’s representative government long enough, you will start to feel pretty bad about America itself. Americans regard Trump and the Republican Party as more patriotic than Joe Biden and the Democratic Party, according to YouGov polling last year. If Trump owns the patriotism brand, and everything associated with Trump is bad, the logic that follows is straightforward. You end up refusing to stand not in spite of your belief that America is good, but because you no longer really believe America is good and that Americans deserve defending.

 

That attitude has spread from Democratic politicians to voters (or maybe vice versa). A plurality of Democrats told YouGov that they thought America’s best days were behind us, and that America’s prospects for the future were worse than those of most other countries; a third said they were less patriotic now than they had been in childhood.

 

America has seen moments like this before. Through the 1960s and 1970s, the Democratic Party took on an increasingly anti-American stance. The Democrats became the party of “blame America first,” from our domestic problems to our foreign entanglements. But back then, this posture resulted in epic defeats, as workaday voters repudiated anti-Americanism over and over again at the ballot box.

 

Today, the question is whether such a posture is still the same kind of electoral poison it once was. Democratic voters seem eager to reward their representatives for growing ever more vicious in their criticism of Amerikkka. On the right, too, there are early warning signs of a surging skepticism of America, even as it comes wrapped in the cloak of nostalgic “nationalism.” The revolutionary politics ushered in by Trump and Bernie Sanders is, inevitably, a politics openly hostile to the current order—an order that is, at root, tied up in affection for America. So it’s perhaps not so surprising that affection for America is on the decline.

 

Is a root level of genuine patriotism a precondition for participation in our politics in 2026? The answer is unclear. But the fact that it is even in question is a poor sign for the health of a republic—which depends first and foremost on a pre-rational commitment to love of our shared political project.

 

***

 

Since January 2001, Gallup has intermittently polled its interview subjects on how proud they are to be Americans. In its first survey, 87 percent called themselves “very” or “extremely” proud. That figure peaked at 91 percent in 2004, perhaps thanks to the spike in patriotism following September 11. Rates ticked down slightly through the Bush and Obama years but only really began to fall in 2017, when just 3 in 4 Americans were very or extremely proud. In 2025, that number hit just 58 percent, with only 41 percent saying they were “extremely” proud.

 

Even more remarkable than the decline is the trend by partisan identification. In 2001, Democrats were only three percentage points less likely than Republicans to be proud of their American identity. The gap widened over the next 15 years, but it was not until 2016 that a real difference emerged. That year, Democrats were 21 points less America-loving. In Gallup’s most recent survey, the distance has grown even wider: 92 percent of Republicans call themselves very or extremely proud to be an American, versus just 36 percent of Democrats.

 

At the tail end of the Clinton era, in other words, Democrats and Republicans were all but equal in their self-professed patriotism. But since then, and particularly since Donald Trump’s election in 2016, patriotism has, like everything else, become polarized. The base of one of the two major parties simply no longer sees identification as an American as a source of pride. Indeed, the way in which Democrats’ pride shifted over time—falling with Trump’s election, spiking during Joe Biden’s presidency, then dropping once again upon Trump’s return to office—indicates that many experience affection for their country as contingent rather than essential, conditional on who happens to be in the White House at any given time.

 

This is part of why Democrats could not rise to their feet at the State of the Union. They know that, for their voters, being pro-America is no longer intrinsically good, especially when it can be perceived as a surrender to the Bad Orange Man. Democrats in power have adapted to this reality by adopting one of three positions on America. The first is open hostility. The second is a kind of situational patriotism that equates patriotism with liberalism. The third is a quasi-patriotism composed entirely of symbol rather than substance.

 

The first is by far the easiest to identify, and it most closely resembles the left-wing anti-Americanism of the 1960s and 1970s. It takes the view that the United States, far from being something worth loving, is a depraved, criminal society. Consider, for example, Bernie Sanders’s allegations that our economy is “rigged,” Ilhan Omar’s accusing the United States of “unthinkable atrocities,” or Hakeem Jeffries’s assertion that “systemic racism has been in the soil of America for over 400 years.” In many cases these views are indistinguishable from the propaganda issued by America’s enemies. In spite of this, these politicians are able to command voting majorities in some of the bluest jurisdictions.

 

The second is what we might think of as aspirational or conditional patriotism—a love of America as it will someday be, rather than as it is or was. If you view America as conceived in bondage and dedicated to inequality, but you still want to say something nice about the country, you can point to our overcoming of past injustices as characteristic of why you love America—because it’s not as bad as it used to be! Recall, for a classic example, the Michelle Obama line about her husband’s nomination to the presidency: “For the first time in my adult life, I am really proud of my country.” Pride here is only about the change—I will love you if you become someone else.

 

The third posture was captured perfectly by the 2024 Democratic Convention, which Ross Douthat ably summarized as “abortions for some, little American flags for others.” The thinking here is that Americans do like patriotism, and therefore Democrats should appear patriotic, without necessarily making any substantial changes to their values that might be required by actually being patriotic. As a result, in this posture Democrats look to embrace visible symbols of patriotism—veterans, patriotic songs, and especially the flag itself. (As now–Senator Elissa Slotkin enjoined on the campaign trail, Democrats should “f—ing retake the flag.”)

 

Each of these approaches serves its own function, and politicians can and do move between them depending on their audience or purpose. The third works well if you want to deny charges of disloyalty—look, I have a big flag behind me, I must like America! The second is useful for audiences who feel that patriotism still deserves some lip service but who want something they think of as more highbrow, more reflective of the “harsh realities” of American history. And the first, of course, works for audiences who just hate America.

 

What all three postures share, though, is a lack of patriotism as a presumption that informs thought and deed, rather than a conclusion of one’s worldview. Even when (as in the second and third approaches) patriotism is something the speaker nominally embraces, he or she is defending the idea that America is good insofar as it aligns with the speaker’s values, or that America is good because it’s a useful brand.

 

In other words, there is nowhere to be found the basic premise that America is good simply because it is good—because it is, despite its flaws, a shining city on a hill and, as such, is a place for which we feel a natural affection that grounds our other political commitments. Love for America is something that is functional or transactional; it is never something the speaker feels in the way we feel love for our friends or families.

 

The question, politically speaking, is whether all of this posturing actually falls flat with the voting public. Can Americans see through the false embrace of the flag? Or have they lost the ability to discern real patriotism from America-bashing dressed up as the real item?

 

***

 

There was a time, of course, when they could have told the difference. The years following World War II were something of a second “era of good feelings” in American life. The bipartisan consensus—opposed to Communism but comfortable with some degree of command and control in the economy, socially conservative in some ways and liberal in others, relentlessly patriotic—was best typified by Dwight Eisenhower, the war hero who was elected as a Republican only after he was courted to run as a Democrat. Affection for government—a sort of index of patriotism—was high: When first asked in 1958, roughly 3 in 4 Americans said they trusted Washington to do the right thing always or most of the time.

 

Obviously, this was not to last. Trust in government peaked in 1964, then began a steady 15-year decline. The succession of social conflicts and crises is familiar: the civil rights struggle, nationwide rioting, Vietnam, Watergate. Patriotic consensus was assailed by a new youth movement on the left, backed by sympathizers in the establishment who saw in young radicals the kind of moral fervor they themselves wished they had.

 

The result was a Democratic elite that felt increasingly comfortable criticizing America in increasingly vicious ways. In 1968, a federal commission alleged of the ghetto that “white institutions created it, white institutions maintain it, and white society condones it.” That same year, Bobby Kennedy on the campaign trail offered a bleak picture of a violent, dehumanized America, identifying a “poverty of satisfaction—purpose and dignity—that afflicts us all.” Criticizing the Vietnam War, Senator George McGovern would in 1970 accuse his colleagues of “sending 50,000 young Americans to an early grave,” adding, “This chamber reeks of blood.” Two years later, his party nominated him for the presidency.

 

Democrats sometimes made common cause, too, with an increasingly violent and radical left. New York elites, memorably skewered by Tom Wolfe, eagerly fundraised for the Black Panthers charged with and later convicted of torturing and executing 19-year-old Alex Rackley. The eminently respectable Ford Foundation poured money into the coffers of black extremists. Sometimes-violent student protests—against the war, but also against the Man—received the support of faculty and administrators as often as not.

 

What unified this tendency was a stance toward America that was not merely critical but reflexively antagonistic. The view of many Democrats seemed to be that America was not merely imperfect but defined by its sins—that there was no America beyond violence, racism, and death, and often that America was the actor driving these horrors in the rest of the world. This is perhaps why many on the left made such easy allies with revolutionaries—they basically agreed with the revolutionary worldview and could only sort of quibble when it came to methods. That all made the idea of being patriotic incoherent: If you think America is the great force for evil in the world, how can you possibly claim to love it?

 

This rhetoric does not sound out of place today, does it? What’s so strange about Democrats saying that white institutions created the ghetto, or that Congress is killing young men? What Manhattan cosmopolite, university professor, or foundation head wouldn’t be sympathetic to left-leaning terrorists?

 

But at the time, these utterances and associations were profoundly shocking to the American conscience. The “silent majority” of everyday Americans may not have been happy about the trajectory of American society in the 1960s. But they had no time for America-hating, recognizing as they did the difference between criticizing America’s faults and identifying her with them.

 

And Democrats—the party of this tendency—paid an electoral price. The silent majority gave Richard Nixon a 49-state victory in 1972. In 1980, fresh off Jimmy Carter’s “malaise” speech and the crippling inadequacy of his response to the Iran hostage crisis, the nation favored Ronald Reagan by 10 points before handing him his own 49-state victory in 1984. Over and over again, the public rejected Democratic nominees too closely aligned with the party’s anti-America wing. Conspicuously, only two Democrats took the White House between Lyndon Johnson and Barack Obama. Both were Southern governors, presented as outsiders to their party who were more moderate than their peers (however false that might actually have been).

 

This is not to say that all of Democrats’ electoral woes were attributable to the party’s mid-century anti-Americanism. Indeed, that anti-Americanism was downstream of a more general radicalism that alienated them from the median voter. But the caustic words and callous actions of the party’s leadership were the most visible symbol of that alienation. And voters repeatedly punished leaders who they felt disdained the American political project in which they were participating.

 

What that suggests is that, at least as of 1980, patriotism was still what a patriotic public expected of its leaders. It was a shared assumption: that America is worth loving, independent of its flaws, and that this love is not conditional on how much we like what is happening at any given moment. Indeed, the reason Democrats could have been expected to stand for the sort of line Trump delivered, had it come from one of the Bushes, is that Democrats of the 1990s and 2000s understood that not standing would be politically suicidal.

 

This insight—that Democrats’ return to patriotism was the result of political pressure—helps explain their re-embrace of America-bashing. Democratic voters are upset not only about the particulars of policy but about the state of the republic itself. They see Donald Trump as an unprecedented, historic threat and his election therefore as an indictment of the system that installed him. What’s needed—and on offer—in this view is a revolutionary politics, which promises a total overhaul of the republic. That, too, was what Democrats offered in the 1970s. Americans rejected it then, because they saw revolution as unappetizing. But is that still the case today?

 

***

 

Perhaps not. We see alarming signs not only on the left but on the young right as well.

 

The Gallup polling shows a sharp disjuncture by age. Among older Republicans—those born before 1997—over 90 percent label themselves as extremely or very proud to be an American. But among Gen Z Republicans (born in 1997 or later), the equivalent figure is 65 percent, slightly below that of Baby Boomer Democrats (born between 1946 and 1964) and just ahead of Millennial (born between 1981 and 1996) independents. In polling published by my colleagues at the Manhattan Institute, Republicans under 50 were twice as likely (10 percent versus 5 percent) to say they wanted to “burn down” America’s economic and social system compared with older respondents.

 

Such results, like other anomalous findings about young Republicans, should be taken with a grain of salt. Young people are essentially twice as likely to be Democrats as they are Republicans, according to Pew. “Young Republican” is practically an oxymoron, and anyone who self-selects into that category is likely to be unusual in other ways that will show up in polls. Young people, moreover, are in general more liberal than older ones, even within the party; a lesser degree of patriotism may just be downstream of this difference in political attitude.

 

At the same time, it is hard not to see in this and other trends a real phenomenon: that the alienation from America so common on the left is creeping in on the right. This tendency is reflected in the more extreme manifestations of the new right, eager to cultivate in a minority of Gen Zers what they see as the future. The self-pity and anti-Semitic conspiracy-theorizing that characterize 27-year-old Nick Fuentes’s view of America, for example, are hard to distinguish from the self-pity and anti-Semitic conspiracy-theorizing that characterize much of the American left. Or consider the preference in certain corners for other nations’ mode of government entirely, especially the extremely peculiar affection for the tiny nation of Hungary under the heavy hand of Viktor Orbán, who led his homogeneous land of fewer than 10 million. (We’ll see how much that transferred patriotic fervor for Hungary lasts now that Orbán’s party has been ousted.)

 

Many of the most America-negative new righters characterize themselves as the country’s most ardent nationalists. Yet much as progressives claim to criticize the nation because they love it, some on the right use their claimed affection for America as a shield from behind which they issue nothing but criticisms—about our nation’s environment, or the status of workers, or the decay of our culture—that would not sound out of place in a Democratic politician’s speech in the 1960s. Similarly, much as some progressives seem able to love America only as it could be, some on the darker corners of the online right seem able to love America only as they imagine it once to have been—an idyll once sparsely populated by “heritage Americans,” now despoiled by the hordes of brown men ruled over by their Jewish masters.

 

What joins this right-wing movement with the newly anti-America left is a basically revolutionary tendency. Affection for rulers outside our borders, a fixation on the “Zionist-occupied government,” a belief in the need for some kind of dramatic restructuring of American society: All are examples of the conviction that the governing regime here is fundamentally illegitimate and should be replaced. I choose the word “regime” advisedly. A regime is not just who is in power but the fundamental system or mode of government; in a republic, it is the order under which the people rule themselves.

 

If your goal is to bring about a change to the regime, at some point you have to be so disaffected that you develop a certain instinctual dislike for the society in which you currently live. The revolutionary always dreams about an ideal future (or, in the case of the reactionary, dreams about returning to an ideal past). But that glorious vision inevitably collides with the messy reality in which we actually live and can, in turn, breed resentment and outright hatred. It is not possible to be both a revolutionary and a patriot; you have to choose one or the other.

 

Which is not to say that most young conservatives today are revolutionaries—the large majority, who don’t live their lives on X, mostly are not. But the young right now seems to be less and less instinctually affectionate about America as it is. The sense of pride in their nation is demonstrably declining. They might assert that this is rational—but that only reinforces their alienation from the kind of instinctual affection already out of fashion on the left, and it therefore makes them susceptible to revolutionary politics.

 

***

 

There is something bizarrely consumerist about the anti-American posture: America is good (charitably) insofar as it conforms to my ideas of the kind of society in which I want to live or (less charitably) insofar as I get stuff out of it. To the extent that people use America’s current or historic errors to indict the nation as such, they are indirectly implying that America is only as good as its deliverables and that they identify with it only and insofar as they happen to feel it’s a good brand at the moment.

 

That such a posture would grow more common among those earliest and most comprehensively exposed to our current moment—when relations are fleeting, borders are dematerializing, and every product is sold as an item of identity—is hardly a surprise. But it reveals how the loss of patriotism can be deeply destabilizing for a country, especially one like the United States.

 

In Politics, Aristotle argues that a polity cannot be properly understood as merely an alliance for some material end, like wealth or safety. A group of men who work together on a job site is not a polity; nor are nations that join together for mutual military defense (otherwise, NATO would be a single state). Rather, “any polity that is truly so called and is not a polity merely in name must pay attention to virtue; for otherwise the community becomes merely an alliance, differing only in locality from the other alliances, those of allies that live apart.” That is to say, a polity is not just something that provides us with things; it is a shared project that works towards its citizens living the good life, as made possible by the practice of virtue.

 

If Aristotle’s account is right—and his book is still read with profit nearly 2,400 years on—a functional polity requires more than just citizens who regard themselves as living in an alliance of convenience. Rather, those citizens need to understand themselves as part of a shared project, one whose members are striving to achieve a good life together, because (again to cite Aristotle) men are political animals, and the good life is possible outside of a polity only for wild beasts or gods.

 

Functioning republics do not simply run under their own steam. They require a delicate infrastructure of institutions, norms, and civic virtues—you can have a republic only “if you can keep it,” as Benjamin Franklin famously put it. One of those predicates is a sense of the aforementioned shared project—of America’s errors being our errors, its triumphs being our triumphs, of, yes, my country right or wrong. We have to see ourselves as inextricably within the polity before we start to reason about it in order for the polity to be successful. And we have to, specifically, feel an affection for that polity that is prior to everything else we think about it—we have to feel patriotism.

 

Which is what makes the apparent return of anti-Americanism so alarming. Voters once punished Democrats for adopting the aforementioned stance. Now, it’s de rigueur in the Democratic Party, and beginning to infect the Republicans. It’s hard to discern whether this behavior still incurs the electoral penalty it once undeniably did. The rise of polarization means that neither party is likely to achieve the kind of landslides Nixon and Reagan did. But the mere presence of the tendency, and its acceptability in American life, augurs ill.

 

Indeed, the experience of the 1970s and 1980s implies that—contrary to the views of some on the left and right—it is the common man who disciplines the elite into participation in the patriotic project. When the Democratic Party took its hard-left turn, it was the everyday voter who reacted with horror and dismay. “Elite” became a dirty word, because the elite had opted out of the shared American political project. The common man, by contrast, was the enforcer of patriotism as a necessary virtue.

 

If we cannot still rely on that popular check today, it might be because the elites of the 1960s and 1970s got to set the agenda for the children of the average American of that era. When “blame America first” becomes the curricular and cultural North Star, our institutions and norms no longer work to inculcate patriotism in the public. The remarkable thing is not that the average man has become less patriotic, but that he still remains patriotic in spite of the propaganda with which he is constantly blasted. But we have every reason to worry that the process of steady erosion will continue, until not just the elites but everyone forgets the importance of loving America.

 

It is almost saccharine to insist that patriotism matters, that people should love their country whether or not they agree with it. But we should, because that is what sustains a republic for 250 years—and what, one hopes, will sustain it for 250 more.

The Pope, the President, and the Pacifist Illusion

By Joseph Loconte

Monday, April 20, 2026

 

The pope can raise legitimate questions about whether U.S. action against Iran meets the criteria of just war theory. But that is not at all what he has done.

 

During the height of the Cold War struggle between totalitarianism and democratic freedom, the West was united by a trio of political and religious leaders characterized by intellectual seriousness and moral courage: Margaret Thatcher, Ronald Reagan, and Pope John Paul II.

 

As a native son of Poland, the pontiff played a critical role — with help from Reagan and Thatcher — in confronting the communist apparatchiks in Poland and igniting a democratic revolution that ultimately toppled the Soviet Union.

 

Not so today: Never has the democratic West appeared more fragmented and befuddled in the face of new threats to human freedom.

 

President Trump has belittled the British military, alienated most of America’s NATO allies, and, most recently, lashed out publicly against Pope Leo XIV for his criticism of the U.S.-led war in Iran. The president’s insulting rhetoric aimed at the leader of the world’s largest Christian denomination demeans the American presidency. Let’s also stipulate that Mr. Trump has made a series of false and, at times, grotesque remarks concerning the war.

 

Yet, for all that, the statements by Pope Leo and the Vatican about the Iran war beggar belief: If taken at face value, the Catholic Church is led by a man who appears ready to abandon 1,500 years of Christian moral theology about war, justice, and the problem of radical evil.

 

Let’s begin with the pope’s blanket claim — directed at the U.S. war with Iran — that “military action will not create space for freedom or times of peace, which comes only from the patient promotion of coexistence and dialogue among peoples.”

 

The statement is not only patently false. It is morally repugnant to anyone acquainted with the history of the 20th century. It is impossible to overstate the extent of the horrors committed because facile views like this — ignoring the reality and depth of human malevolence — prevailed for too long in the West.

 

Indeed, the Catholic Church’s “concordat” with Adolf Hitler in 1933 — granting him international respectability in exchange for a measure of civic freedom for the church in Germany — could not tame his hatreds or his lust to dominate. The “patient promotion of coexistence” could not stop the Nazi blitzkrieg, the death camps, the plan to annihilate the Jewish people, and the desire to destroy what was left of Western civilization.

 

The only force that could stop Hitlerism and “make space for freedom” in subjugated Europe was the combined military might of the Allied Forces in the most destructive war in world history — a just cause if ever there was one under heaven.

 

Yet the pope seems indifferent to the Christian just war tradition, articulated by Catholic thinkers such as Thomas Aquinas, which teaches that nation-states not only have the option but also, under the right circumstances, the moral obligation to use lethal force to punish or prevent a great evil.

 

As I have written elsewhere, the just war tradition provided the conceptual basis for the Responsibility to Protect, a doctrine defending the use of force to protect civilian populations from genocide or other atrocities. In 2005, it was approved overwhelmingly by the U.N. General Assembly — and by the Catholic Church, which agreed that, when other means fail, the “international community” has the right and obligation to wage war to “block the hand of an aggressor.”

 

The Iranian regime remains hell-bent on acquiring nuclear weapons to destroy Israel and blackmail the West. It has engaged in acts of barbarism — in the murder of tens of thousands of its own citizens and through its terrorist network in the Middle East — that make it exactly the kind of aggressor state the just war tradition had in mind.

 

The pope can raise legitimate questions about whether U.S. military action against Iran meets the criteria of just war theory. But that is not at all what he has done.

 

Consider his Palm Sunday homily, in which he referred repeatedly to Jesus as the “King of Peace, who rejects war.” Referencing the prophet Isaiah, he claimed that God “does not listen to the prayers of those who wage war,” because “your hands are full of blood.”

 

The problem is not merely that the pope employs shoddy exegesis — twisting the plain meaning of Scripture by ignoring its historical context. The problem is not only that he has retroactively condemned the prayers of the Catholic faithful throughout the centuries — the prayers of priests, soldiers, and their families during wartime. The problem is not primarily that he offers a misleading and one-dimensional view of Jesus — the “Prince of Peace,” after all, is also called “the Lion of the tribe of Judah” who will come again with “a sharp sword with which to strike down the nations.”

 

The deepest problem is the pope’s claim that, in denouncing the U.S. war against Iran, he is merely “preaching the gospel.” This is most certainly what he is not doing.

 

The Gospel, as understood by the historic Christian Church for 2,000 years, cannot be reduced to platitudes about peace and love. “The whole point of the Christian doctrine of atonement,” wrote Protestant theologian Reinhold Niebuhr, “is that God cannot be merciful without fulfilling within himself, and on man’s behalf, the requirements of divine justice.”

 

The Christian Gospel, the message that transformed the ancient world, is simply this: that Jesus, the God-man, bore upon Himself the judgment of God for our sins by His death and resurrection, making possible forgiveness and new life. As the apostle John expressed it: “For God so loved the world that He gave His only begotten Son, that whoever believes in Him shall not perish but have eternal life.”

 

The earliest Christians believed that Jesus literally battled the forces of hell — and prevailed. Indeed, the Greek word for gospel, euangelion, was used in ancient Rome to signify a military victory. The powers of evil were vanquished, but not through “dialogue among peoples.”

 

The enduring modern temptation is to replace the biblical Gospel with a progressive, pacifist illusion about human nature and the problem of radical evil. Writing at the start of the Second World War, Niebuhr summarized the religious mood thus: “Some easy and vapid escape is sought from the terrors and woes of a tragic era.”

 

Such an outlook may bring comfort to the butchers in Tehran, but it will surely not advance the cause of peace.

I Am a Free-Range Parent. I Probably Won’t Be When I Move to America.

By Stephanie H. Murray

Monday, April 20, 2026

 

Recently, I was invited onto a podcast to chat with some American moms about modern parenting. At one point in the conversation, I made a comment about how liberating it is as a parent to give your children freedom. I explained that, from the time my children were pretty young, I have been allowing them to run errands or fetch treats for themselves from the corner shop near my home in the United Kingdom—a practice that is not only confidence-boosting for them, but useful for me.

 

In response, one of the hosts told me that she’s started to allow her 10-year-old to do the same thing. Then she added some details that threw me off: To get to the store in question, her son has to cross an intersection where people not only blow through the stop sign, but frequently do donuts.

 

I am someone who walks the walk when it comes to pushing back on a culture of safetyism. The extent of freedom I allow my 7- and 8-year-olds—to go not only to the corner shop, but the park or really anywhere else in our urban neighborhood—is at the very edge of what’s considered socially acceptable in our area. My husband or I walk with them to school every day, and if we were to stay in the United Kingdom, we’d allow them to make the trip on their own just as soon as the school allows it. But we aren’t staying here. This summer, after nearly seven years abroad, we’ll be heading back to the United States. And the podcast host’s comments bolstered my sneaking suspicion that the free-range parenting style I’ve adopted here likely won’t survive the trip—not because I’ll lose my nerve somewhere over the Atlantic, but because the American environment simply doesn’t allow for it.

 

Much has been written about the highly protective style of parenting that has come to predominate in America. Often, it’s taken for granted that so-called helicopter parenting is irrational, the reflection of a kind of mass paranoia stoked by media-inflated fears about stranger danger and with little basis in real risk. I think there is some evidence supporting this view. Various surveys have found, for example, that a frankly shocking percentage of American parents think a child ought not be left alone at home until they are well into their teens. It is extremely hard for me to imagine a home environment so rife with danger that a 10-year-old couldn’t manage there for a couple of hours.

 

But what about the public environment? Surely we can agree that the amount of freedom parents give their children ought to reflect the terrain they are expected to navigate—it makes more sense, for example, to hold tightly to a 3-year-old’s hand on a busy subway platform than on a beach or in an open field. And the terrain—both social and physical—in large swaths of American society is quite different from that in the United Kingdom, in ways that inevitably push the timeline of children’s independence back.

 

The corner shop question—at what age is it appropriate to send a child there on their own?—offers a concrete way of illustrating what I mean. For my second child, whose itch for independence emerged quite early, the answer was 5 and a half. It wasn’t my idea. On a summer day, having failed to convince her older sister to accompany her, she asked if she could head to the store by herself to purchase some Skittles. I was a little conflicted, but ultimately let her go for it. I gave her some pocket change and waited outside until she returned some 10 minutes later, exuberant at having accomplished her feat.

 

Prior to moving to England, I lived in a suburb of Madison, Wisconsin, a midsized city that is, by American standards, comparatively walkable and safe. Had I still been living there when my 5-year-old daughter asked to run her errand, I would have certainly said no. In fact, I doubt she would have asked.

 

Here, the corner store is not far—a six-minute round-trip walk, according to Google Maps. Getting there doesn’t require my daughter to cross any streets except the one in front of our house. The streets she has to walk along are also narrow, two-way roads where surpassing the 20-mph speed limit is difficult because drivers are constantly having to stop to allow traffic coming from the opposite direction to pass. In Madison, by contrast, the closest “corner store” was a gas station, a 26-minute round-trip walk away across a far wider road where people easily blew past the 30-mph speed limit, a trip that ended in a parking lot bigger than the store itself. These are fundamentally different trips, requiring different levels of maturity.

 

My example might seem cherry-picked and overly specific, but the differing patterns of urban design it contains hold more broadly. It is a fact that, on average, the physical distance between a child’s home and the sort of places he or she might go is greater in America than it is in England, or elsewhere in Europe for that matter. The mixed-use development necessary for a corner shop to exist is illegal in most American neighborhoods. The row homes that allow for density in English cities often are, too. Minimum lot-size and parking requirements have the effect of spreading everything out. This is not to say that English housing development law is ideal. Far from it: That nation, too, is crippled by a housing crisis. But the English status quo seems, on the whole, easier for kids to navigate.

 

This observation comes through clearly when you compare how English and American children get to school. People often point to the utter collapse in the share of American children walking or cycling to school—from over 40 percent in 1969 to under 11 percent today—as evidence of their declining independence. But while the United Kingdom is certainly no free-range paradise, nearly half (46 percent) of children in England get to school on foot or bike, a figure that hasn’t budged much over the past few decades. It is likely no coincidence that, on average, English children live quite a bit closer to school than American kids do (2.5 miles vs. 4.4 miles, respectively). To put a finer point on it, more than 4 in 5 American kids live 3 or more miles from their school—a distance that vanishingly few British kids are walking. We can quibble all day about what age a child ought to be able to traverse such and such a distance, but it would be silly to suggest that parents ought not factor distance into the equation at all. All else equal, the further a shop or school or park is located from a child’s home, the older a child must generally be to make the trip alone.

 

And of course, all is not equal. Consider the horrifying example of Mary Fong Lau, the 78-year-old woman who killed a family of four after crashing into a San Francisco bus shelter at 70 mph. The case rightly ignited outrage over the fact that someone could get off so easily after causing such harm—Lau received no jail time. But from my perspective, the more striking fact was that Lau managed to achieve such a speed in an urban environment in the first place. This is a pervasive problem in America: Streets enable recklessness. Don’t get me wrong: There is plenty of room for improvement when it comes to ensuring that English streets are safe for nonmotorists. But generally speaking, I do not have to worry about elderly drivers reaching highway speeds in residential areas, or teenage boys doing donuts on any of the streets near my current home. Not because such reckless driving is illegal (though that, too), but because it is physically very difficult to accomplish. Regardless of speed limit, the sheer tightness of the roads tends to limit the amount of damage even the least competent and most reckless British drivers can do. As a British mum acquaintance recently put it, “The U.S. makes driving very pleasant and the U.K. makes it highly unpleasant and stressful.” That’s exactly it: The built environment here makes driving a more anxiety-inducing experience—but one that is ultimately safer for everyone.

 

***

 

Reckless driving is just one part of a more pervasive phenomenon that hinders children’s independence: public disorder. This is a broad and fuzzy category of behaviors that can range from the relatively innocuous—littering, or leaving dog poop on the sidewalk—to more severe problems like public urination, open drug use, or brawling. Such societal disarray exists to varying degrees in every country in the world, and the United Kingdom is certainly no exception. But as others have pointed out, America’s public disorder problem is palpably more widespread and severe than it is among our peer countries. Dispatch contributing writer Charles Fain Lehman has argued that public disorder has risen in America since the pandemic, but I would argue that it was elevated even before COVID started to spread. Living in Madison, I grew accustomed, as many Americans are, to sharing the bus with people who were strung out, or simply not quite tethered to reality. The tendency for welcoming public spaces to become hotbeds for vandalism, substance abuse, and violence made such spaces a liability to the city.

 

I’ve now lived in Bristol, a city of half a million people, for nearly seven years. Again, it’s a city with plenty of problems. But the buses here do not, as a general rule, function as roving shelters. There are certainly pockets of the city where you encounter unstable people, but they are rarer and more concentrated. If my intuition—or the United Kingdom’s much lower prevalence of drug overdose deaths—can be trusted, the troubled people you encounter are far less likely to be using the sort of serious drugs that ruin one’s life.

 

It’s hard to pin down a specific reason for this discrepancy. While homelessness is higher in the United Kingdom than in America, street sleeping is lower. Generally speaking, British law also allows authorities to take a more proactive approach to handling public disorder, granting them broader civil powers to, say, ban someone from a particular area. And the threshold for involuntary commitment is somewhat lower in the United Kingdom than in the United States; it is not only the threat of imminent danger to oneself or others, but the health of the individual, that is taken into consideration. But I suspect that some of what separates the U.S. from so many other countries is not just that it is slower to get people prone to disorder off the streets, but that it has more disorderly people in the first place. In the same way that our street design fosters recklessness, it doesn’t seem far-fetched to say that some combination of lax laws, individualistic ideals, and access to drugs or weapons might be cultivating a more disorderly populace.

 

The risk of being hurt by an unstable person is small. Still, safety isn’t all that matters. Being accosted by a disoriented stranger is an unnerving experience. Even in my 20s, I didn’t always know how to handle such circumstances. Do I look away or will that upset them? Should I get off at my usual stop or wait for one where there’s more likely to be a crowd? One reason that it is so much easier to envision allowing my children to use public transportation on their own here is that these are, by and large, not questions they’ll need to consider.

 

The deeper issue, though, is that children’s independent mobility has never really been independent. It has always relied on the willingness of adults in the vicinity to look out for the children in their midst and tailor their behavior accordingly. That might mean watching their step or their tongue, offering some guidance or even a stern word if need be. It is this sort of social infrastructure that grants children in places like Japan the ability to roam. In other words, children can move freely only when adults commit to upholding the social contract. Public disorder is a visible indication of widespread refusal to do so. As Chris Arnade writes in his recent essay on America’s public disorder problem, “there is a fine line between vibrant streets and squalid ones, and that line is public trust.” And, as he adds: “The U.S. is on the wrong side of it.”

 

I can’t fully pin down the ur-cause of America’s public disorder. But any disorder makes the public realm trickier for children to manage. I have no plans to abandon my commitment to giving my children age-appropriate independence when we make our way back to the United States later this year. But I have come to accept that “age-appropriate independence” will look quite a bit different there than here.

Emerging Threats Require Proactive American Innovation

By Pat Fallon

Monday, April 20, 2026

 

In 1942, the United States utilized the greatest minds in particle physics to spearhead the Manhattan Project and achieve an absolute victory in World War II. NASA employed a similar approach for the Apollo missions to beat the Soviets and win the space race, giving life to advanced computing technologies, material sciences, and communication satellites. Under President Ronald Reagan, the Strategic Defense Initiative, which its critics mockingly called “Star Wars,” allowed the U.S. to out-innovate the Soviets in directed energy weapons, space-based sensors, and kinetic interceptors.

 

In each of these examples, the U.S. was able to leverage perhaps its greatest comparative advantage over the opposition: American innovation and the freedom to manufacture the future. But as technological innovation in today’s geopolitical landscape accelerates at an unprecedented pace, the U.S. must abandon its more recent posture of reacting to emerging threats and transition to a proactive posture that detects, deters, and suppresses threats before they even emerge.

 

President Trump fully believes that the U.S. must maintain its position as the world’s dominant superpower, and he has shown a commitment to making that the prime focus of his administration. Just over a year into Trump’s second term, the U.S. is not only seizing the current moment but preparing our nation for the challenges and threats of tomorrow. The Trump administration is creating an industry-friendly environment unlike any we’ve seen before — a key shift if we are to remain the dominant global leader for the foreseeable future.

 

In late March, under the leadership of Secretary Marco Rubio, the State Department formally notified Congress of the creation of the Bureau of Emerging Threats. The office’s mission will differ dramatically from that of established government agencies. Instead of regulating key sectors only after a substantial threat has already developed, the office will be proactive. Its goal is to get ahead of global competitors, establish dominance, and retain the initiative.

 

While the U.S. has traditionally asserted its dominance over state-sponsored terrorists, criminal groups, and insider threats through intelligence and kinetic capabilities, the scale at which our peer adversaries are investing in and developing emerging technologies must be met with the full force of the U.S. government and our private-sector experts.

 

These adversaries are already using emerging technologies as tools of state power in the gray zone, wherein states employ ambiguous or plausibly deniable methods for strategic ends. We have seen this with repeated Russia-linked cyberattacks on U.S. government agencies, as well as with surveillance and data-mining software embedded in Chinese-produced electronics. Perhaps the most recent high-profile example is China’s hack of the AI assistant Claude. It is in the gray zone that states often operate below the threshold of armed conflict: economic disruption, espionage and intelligence collection, and interference with communications and information.

 

If the U.S. wants to continue to deter and combat these threats, it must remain the global superpower in artificial intelligence, quantum computing, space, advanced cybersecurity, biotechnology, and advanced military technology. This is why the Bureau of Emerging Threats’ three overarching pillars focus on cyberattacks targeting critical infrastructure, threats in the space domain, and military applications of artificial intelligence and quantum technologies.

 

Still, many like to focus on the latency or failures of the U.S. government in technological development. Thankfully, the U.S. has an unmatched record of innovation, and organizations like the Defense Advanced Research Projects Agency, commonly known as DARPA, have pioneered countless world-altering technologies, such as the internet, GPS, and next-generation stealth fighter systems.

 

The U.S. government can likewise be credited with advancing the research and development of semiconductors through the SEMATECH consortium, the nuclear submarine via the Naval Reactors program, and other commercial technologies like artificial intelligence and microelectronics.

 

Secretary Rubio is anticipating — and envisioning — an even more highly sophisticated world where U.S. dominance of emerging capabilities will be a key metric in the fight for global superiority. The establishment of the Bureau of Emerging Threats stands as a testament to how seriously this administration is taking that fight.

 

Under the Bureau of Emerging Threats, the U.S. government will be able to effectively and efficiently identify advanced threats, powered by American innovation via the private sector. The battlefield of tomorrow is unknown, but if we take a proactive, pragmatic position, we can mitigate — and ultimately eliminate — those threats before they manifest.

The Illusive Iran Deal

National Review Online

Monday, April 20, 2026

 

Despite the highly variable mood music around the Iran war driven by President Trump’s ever-changing statements, we are in essentially the same place we were two weeks ago — with preparations for negotiations in Islamabad happening against the backdrop of a fragile cease-fire that the Iranians are flouting.

 

The strategic fulcrum of the war has become control of the Strait of Hormuz. When the Iranian foreign minister declared it open to commercial shipping on Friday (with the caveat that ships had to go through the Iranian-approved route) and Trump said that the Iranians had agreed never to close the strait again, markets rallied and it seemed the U.S. had achieved a breakthrough. The IRGC, though, quickly said that the strait was closed and fired on ships to emphasize the point.

 

By keeping the strait effectively closed, the Iranians have failed to deliver on the most tangible benefit to the U.S. of the cease-fire.

 

The IRGC looks to be increasingly in control in Tehran, and it is not an organization likely to produce a Delcy Rodríguez. The guards presumably think that they can use the strait to exact so much economic pain on the United States that Trump stands down, or, failing that, use the strait for leverage to drive a bargain that falls short of Trump’s red lines.

 

President Trump hopes, in turn, that continuing to devastate the Iranian economy — this time via blockade — will further fracture the regime, or make the IRGC blink as it watches the sources of its revenue disappear. The problem is that the IRGC doesn’t operate by Western standards of rationality or humanity. Having just participated in a crackdown believed to have killed tens of thousands of Iranian protesters, it’s not going to be particularly moved if the daily economic existence of ordinary Iranians becomes markedly more difficult.

 

Trump can gain more leverage if he convinces the Iranians that he is perfectly willing to start shooting again and to use the U.S. forces that have continued to flow into the region to conduct freedom-of-navigation operations in the strait. The president’s constant talk of an imminent end to the war, coupled with rosy portrayals of the state of negotiations, may reassure markets but signals to the Iranians a lack of resolve.

 

Perhaps a deal worth having can be cut that truly reopens the strait (although as of this writing, the Iranians are saying they won’t participate in a second round of talks). But the odds are that the Iranians won’t give up control unless they are convinced that we can take the strait back by force or that the price of retaining it will, one way or the other, be too high to pay.

Sunday, April 19, 2026

Unsafe at Any TSA Checkpoint

By Jonah Goldberg

Friday, April 17, 2026

 

Ralph Nader is pissed.

 

I’ll let him tell the story. Last week he posted on social media:

 

Today, the notoriously picky TSA at Bradley Airport in Connecticut confiscated a container of fresh hummus. “Hummus?! Why?” asked the traveler. “Hummus is not a mysterious liquid. It’s a nutritious popular vegetable!”

 

“Doesn’t matter,” was the rejoinder. “Either leave the line with it or it goes into the garbage.”

 

So now add hummus to the list of national security perils. Maybe ground broccoli will be next. Absurdity reigns! -R

 

Now, I should say, I think this was a well-written, quality tweet. It told a story with drama and panache. And, frankly, I think Nader signing off with his first initial was quaintly charming.

 

I’ll also say that I have some sympathy for his complaint. Our airport security system (which has gotten better, in my experience, under Trump, shutdowns notwithstanding) can be infuriating. That’s because we turned it over to a public sector bureaucracy with the instructions that it has to work at scale. That requires clear rules with little room for individual judgment. I am sure the Transportation Security Administration worker didn’t think the 92-year-old Nader was smuggling pasty plastic explosives in that hummus container. But a huge bureaucratic system that processes millions of passengers a week can’t leave such matters up to the discretion of each agent. Moreover, now that everything is videotaped and recorded, if the agent let Nader go through and it turned out that the hummus was an explosive or some kind of poison paste capable of being aerosolized, everyone would know that it was Ted, or Sue, at Bradley International Airport who bent the rules for this unlikely terrorist.

 

But here’s the thing. Ralph Nader, perhaps more than any other single individual in American history, has dedicated his life to empowering government functionaries to slow down government processes, inconvenience consumers, and ruthlessly enforce regulations in the name of public safety.

 

And I think it’s hilarious that he’s angry about such things when they inconvenience him.

 

I’ll skip a long dive into Nader, Naderism, Naderites, and the generations of trial lawyers and other disciples who worship all three. Instead, I want to illustrate the point by talking about John Nestor.

 

Nestoring resentments.

 

Nestor was most famous—or infamous—for his driving practices. Specifically, what he loved to do was get in the passing lane on D.C.’s highways and switch on cruise control at the 55 mph speed limit. He infuriated Beltway drivers. They’d flash their headlights at him, honk their horns, tailgate, and make all the usual gestures. You could be trying to get to the hospital because of a medical emergency or to your kid’s school for a play. He didn’t know, and he didn’t care.

 

Here’s how he put it in a letter to the Washington Post:

 

On divided highways I drive in the left lane with my cruise control set at the speed limit of 55 miles per hour because it is usually the smoothest lane. I avoid slower traffic coming in and out from the right, and I avoid resetting the cruise control with every lane change.

 

Why should I inconvenience myself for someone who wants to speed?

 

What he didn’t say in his letter, but comes across quite clearly in this sympathetic Post profile, is that one of his primary motivations was simply that he enjoyed arousing anger in truckers and others stacked up behind him. Their anger was a feature, not a bug. And he was disappointed when he didn’t piss people off.

 

Nestor’s fans called themselves Nestorians, but everyone else used his name pejoratively, as a verb, “Nestoring.”

 

My wife and I have a term for the people who oppose all economic development or any other kind of loosening of the rules to make life more enjoyable, efficient, or entertaining in the nation’s capital: “The Coalition Against Everything.” A neighborhood restaurant wants outside seating? We can’t have that. Some kids want to sell lemonade on the sidewalk? Without a permit? Are you kidding me? How about live music on Saturday nights at the local bar? What? Without years of hearings and impact statements?

 

Nestor was a hero of the Coalition Against Everything, a paladin of precaution, a knight of “No!” He hassled developers in his neighborhood and harassed public officials into requiring needlessly extensive medical screenings of fast-food employees. Unmarried and childless, he had lots of time to do that in his off hours.

 

But that was also his job. He worked as a regulator at the Food and Drug Administration, in the cardio-renal-pulmonary unit, and on his watch his department approved no new drugs from 1968 to 1972. The FDA transferred him because, while on the road to work he would lock in at 55 mph, once he got to the office he put everything in park.

 

That made him a hero to Ralph Nader and the Naderites. They helped him sue to get his position back. Nader’s Public Citizen Health Research Group wanted him back on the job because he “had an unassailable record of protecting the public from harmful drugs.” One HRG doctor called him “sort of the ideal public servant.” At the time, the Naderites left out the fact that Nestor spent his days leaking FDA reports to them, so they kind of owed him. (The Washington Post profile quoted him bragging about leaking to the Nader people, Congress, and the media; it also had him describing the American public as “sheep.”)

 

Now, I am sure Nestor stopped some bad things from happening. In that sense, you can say his record of “protecting the public from harmful drugs” was, indeed, unassailable. But maybe, just maybe, he also “protected” the public from drugs that might have saved lives. 

 

Given his driving habits, he might also have prevented some car accidents caused by speeders. But he also may have created traffic that delayed an ambulance’s arrival at the hospital or caused someone to speed even more once they got past the Nestoring-induced traffic jams.

 

Nestor and the Naderites undoubtedly did some things that saved lives. Seatbelts, all in all, are a good innovation. It is just as obvious to me that some of the things they did cost lives. The Competitive Enterprise Institute’s Sam Kazman made this point (about drugs and about Nestor) a long time ago. We see the victims of bad drug approvals, but “victims of incorrect FDA delays or denials are practically invisible.”

 

I’ve heard Nader boast about how his groups helped stop the construction of nuclear power plants in the United States for decades. He thought he was stopping more Three Mile Islands—though it’s worth noting that the reactor’s partial meltdown killed no one and scientists are still debating whether there were any notable health effects at all. What that mishap did do is set back nuclear power in this country for a generation, because it gave political ammunition to the anti-nuke branch of the Coalition Against Everything.

 

If you take seriously everything the Naderites, or public health experts generally, say about the dangers of burning coal—air pollution, asthma, lost life expectancy, and, of course, climate change—it seems entirely plausible that the blanket opposition to nuclear power harmed a lot more people, and the planet, than judicious support of nuclear power would have.

 

Until now, I’ve avoided invoking Frédéric Bastiat’s essay “What Is Seen and What Is Not Seen.” He wrote: “There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen.” In his famous parable of the broken window, he tells the story of the onlooker who sees a broken window and says (I’m paraphrasing), that’s too bad, but such things keep the window makers in business. Bastiat notes that that’s true, but the money used to replace the window could have been spent on something more productive. The only place I might quibble with Bastiat is that some effects cannot be foreseen. The invention that is never invented lies outside our imagination.

 

This is a “news”letter about Nestors and Naderites, but it’s at least worth noting that the failure to appreciate the unseen is not solely a failure of the left. As Kevin Williamson and John McCormack have detailed, the victims of Trump’s tariffs aren’t just the people who pay them. They also include the people who don’t get hired because businesses can’t risk expansion. The costs are heaped on the vendors they don’t use because it’s too risky when revenues are shrinking. They fall on the people who don’t get bonuses or raises, or on the parents who have to work extra hours to cover higher prices and their children who see their parents less.

 

Anyway, back to the Nestors. The Trial Lawyer Industrial Project, whether well-intentioned or not, has deterred economic growth by impeding innovation and business formation. Its lawyers make hiring harder because they make firing riskier. They make starting new factories more expensive by larding on rules and regulations that make suing easier and economic productivity harder. My friend David Bahnsen estimates that the legal headwinds against business formation and economic development amount to a full percentage point of economic growth. That’s trillions of dollars over the last two decades—or the next two—that we leave on the table because we focus on the seen rather than bet on the unseen.

 

It’s a shame that Ralph Nader was denied the simple pleasure of eating hummus on a plane by an inflexible bureaucrat’s dedication to rules based on fear about a very real threat to public safety. That was Nestor’s rationale for saying no to everything. You can’t have a bad new drug if you don’t approve any new drugs. But Nader was asked to pay a very small price in service to the common good, at least compared to the prices he’s helped inflict on the entire country. More’s the pita he can’t see it.

The Return of ‘We Missed the Story’

By Becket Adams

Sunday, April 19, 2026

 

You have to give CNN’s Brian Stelter this much: Nobody is as reliably wrong as he is.

 

That counts for something, right?

 

On April 13, the self-appointed public relations agent for the legacy press gave the news business a metaphorical pat on the back, crediting investigative journalists with forcing disgraced Representative Eric Swalwell (D., Calif.) to scuttle his gubernatorial campaign and resign from the House of Representatives after multiple women accused him of sexual harassment and assault.

 

“Eric Swalwell ending his bid for California governor is, among other things, a testament to the power of investigative reporting,” Stelter boasted.

 

It is not. No one comes away from this story looking good, least of all members of the press.

 

The allegations against Swalwell are as serious as they get. The alleged predation, which Swalwell denies, spans more than a decade, stretching back to at least his first term in Congress in 2012. Worst of all, journalists and insiders in Washington, D.C., and California now say they first heard rumors about the congressman’s secret life years earlier.

 

Gee, fellas. A decade-plus is a long time to not do anything with an allegation stretching from the nation’s capital to the Golden State.

 

“Rumors about Eric Swalwell’s sexual misconduct have swirled in D.C. for years,” said former Axios reporter Bethany Allen-Ebrahimian. “I first heard these rumors in 2020, in the course of my other reporting about Swalwell. I was neither a politics reporter nor a women’s issues reporter, so I could not chase them down.”

 

She added, “I very much wanted to report it out myself. But #MeToo stories on the Hill aren’t related to my beat, as much as I personally wish I could report them out. I passed the tip along to colleagues on the Hill beat.”

 

For the record, there is no rule in journalism preventing a reporter from pursuing a tip simply because it drifts slightly outside his or her beat, though some newsrooms are stricter than others in this respect. Allen-Ebrahimian’s explanation is especially puzzling when you remember that, in her role covering China, she was one of the Axios reporters who broke the story of Swalwell’s relationship with a Chinese spy.

 

California politics insider Steven Tavares offered a similar account: “I’ve covered Eric [Swalwell] since he was a member of the Dublin City Council. Shortly after being elected to Congress in 2013, his behavior towards women was known by all levels of our local government and the Alameda County Democratic Party.”

 

Then there’s this curious 2017 tidbit from CNN: During the height of the #MeToo movement, the network reported that “more than half a dozen interviewees independently named one California congressman for pursuing female staffers.” CNN chose not to name the lawmaker or pursue the claims further, citing a lack of verification.

 

Five women have now accused Swalwell of sexual misconduct. One allegation involves strangulation and rape. Three women describe “blackout” experiences.

 

It’s difficult to square the caution that the press exercised for Swalwell with the recklessness with which it pursued the thin and obviously dubious allegations of sexual misconduct leveled against Supreme Court Justice Brett Kavanaugh.

 

We all remember the confirmation hearings, when newsrooms such as CNN eagerly platformed even the most outlandish claims of sexual misconduct.

 

Our media has a two-tiered system of standards. It had no problem legitimizing even anonymous allegations against Kavanaugh but passed on reporting the far more credible allegations against Swalwell, claiming a lack of corroborating evidence (there was no corroborating evidence in Kavanaugh’s case either, but no matter!).

 

It’s really as simple as selective caution, and one can’t help but notice to whom the courtesy is extended.

 

Lastly, as far as crediting investigative journalists with Swalwell’s downfall goes: Be serious.

 

We recognize an opposition-research dump when we see one.

 

The press didn’t suddenly crack the Swalwell story; it languished for years despite being what reporters themselves called an open secret. The information surfaced only when Swalwell complicated the Democratic Party’s odds in the California gubernatorial race.

 

We can venture a pretty good guess as to what happened: Democratic operatives fed reporters the dirt.

 

Party strategist Michael Trujillo claimed as much:

 

One note on the Swalwell stuff — (this isn’t confirmed) but a reporter with Politico was working on verifying the rumors on Swalwell when he was running for President. (He’s no longer with the publication.) Two days before he was scheduled to sit down with this reporter Swalwell dropped out of the race. The energy disappeared to potentially take him out, the victims if they were even willing to go on the record never did. He slithered back to his safe house seat. December 2025 was too early to take down Swalwell we had to wait til his paperwork was ALL IN running for governor March 2026, so the head of the snake could be chopped off and he had no safe house seat to slither back to this time. Hate the strategy fine, but for folks unsure if this would work, we had to make sure he couldn’t get away like he did in 2020.

 

Naturally, take Trujillo’s account with a grain of salt; strategists lie for a living. Yet nothing he said is hard to believe given the pattern.

 

Reporters claim they had heard rumors of Swalwell’s misconduct for years, yet they did nothing. Suddenly, when the party needed Swalwell gone, the story materialized.

 

We’ve seen this sort of thing before, most recently with the White House press corps pretending not to notice that former President Joe Biden’s brains had turned to mush.

 

It’s a stutter! Those are cheap fakes! He’s as sharp as a tack!

 

Then Biden himself removed all doubt as to his mental acuity, or lack thereof, and the same press decided all at once that the truth of Biden’s decline could no longer be ignored. Never mind that the about-face coincided exactly with Democratic power brokers reaching the same conclusion.

 

The press went from dismissing concerning videos as “cheap fakes” to hawking books about how everyone knew Biden was a dotard, and the only thing that changed in the interim was the opinion of Democratic leadership.

 

Similarly, the press simply couldn’t report on this Swalwell business until it suddenly “could.”

 

A high-profile member of Congress allegedly drugged and raped women and somehow got away with it for more than a decade, through the #MeToo era and even a presidential campaign. Until now, reporters just couldn’t report the story, even though they admit they knew Swalwell had a reputation as a creep and a predator. The press couldn’t lock it down, choosing instead to exercise a restraint that was wholly absent for the duration of Kavanaugh’s confirmation.

 

Then Swalwell becomes a problem in the California gubernatorial race, and the next thing you know, those rumors no one could confirm suddenly appear in print.

 

All it took was seven terms in Congress, nearly a half-dozen victims, and possibly one spiked investigation in 2020.

 

Hooray for journalists indeed.