By Tevi Troy
Sunday, March 23, 2025
Watch a movie or a TV show these days and there is a good
chance that you’ll see a scheming pharmaceutical executive among the ranks of
the villains. These white-collar masterminds dominate the category of America’s
pop-culture bad guys, often resorting to the most nefarious of means—hiring
assassins, employing hit squads, destroying evidence, and subverting
justice—all in the pursuit of profit. The biggest TV hit of the year is Matlock,
in which Kathy Bates infiltrates a law firm to get the inside story on a pharma
firm whose product killed her addict daughter.
The anti-pharma plot resonates because it
intersects so visibly with the real world. From the bully pulpit, Big Pharma is
routinely denounced by politicians on both the left and the right. In the
courts, pharma has become a lucrative target for trial lawyers. In the media,
journalists and popular podcasters portray pharmaceutical companies as greedy
capitalists looking to squeeze dollars out of patients desperate for meds that
can make the difference between life and death. In state legislatures and in
the halls of Congress, Big Pharma has become what Big Tobacco was in the 1980s
and 1990s—a rapacious industry that works in its own interest and harms the
general public.
The difference, however, is that tobacco kills people
while pharma transforms lives—and people know it. Much of the advertising on
television is for lifesaving, life-enhancing, and life-extending medications
created by Big Pharma. From antibiotics to antidepressants to antihistamines to
gastric-acid reducers, modern pharmaceutical products have been extending and
improving lives for nearly a century.
In the years before World War II, the pharmaceutical
industry created medications such as antibiotics and insulin to treat
previously life-threatening conditions like infections and diabetes. Since
then, pharmaceutical companies have been inventing new therapeutics with a
consistency that has led us to take them for granted. In the 1950s, the
creation of the polio vaccine ended a disease that terrified families and left
tens of thousands of children paralyzed each year. Beginning in the 1970s, the
creation of a host of new cancer therapies doubled cancer survival rates. In
the 1980s, AZT became one of the first effective treatments for HIV/AIDS, while
the introduction of statins significantly reduced the need for cardiac
procedures. The 1990s saw the first protease inhibitors, which would be helpful
in the treatment of HIV/AIDS, hepatitis C, and later Covid-19. The 2000s
brought about new vaccines for HPV and rotaviruses. In the 2010s,
immunotherapies like ipilimumab revolutionized treatment of many cancers. And
in the 2020s, the rapid creation of the mRNA vaccines allowed us to end the
Covid lockdowns and return to our normal lives. These are just a handful of
highlights from a very long list.
The miracles are continuing to unfold, as there are
approximately 8,000 potential therapies currently at various stages in the
development pipeline. Right now, the country is experiencing a genuine
revolution in the treatment of diabetes and obesity as a result of a class of
medications that mimic a hormone found in the Gila monster’s saliva.
First developed by a small San Diego firm called Amylin Pharmaceuticals, the
GLP-1s (Ozempic and Mounjaro are the best-known) are transforming people’s lives
and bodies in a way no one ever imagined possible, and their development may be
this generation’s lifesaving triumph. Pharmaceutical companies commit to this
work at great risk and cost, investing more than $800 billion over the past
decade on a multitude of promising ideas—many of which will never bear fruit.
They spend an average of about $2.6 billion to bring a drug to market, and only
about 12 percent of drugs that enter clinical trials eventually gain approval
from the Food and Drug Administration.
With this century-long track record of lifting up
mankind, how do we explain the ongoing assaults on pharmaceutical
manufacturers, whose work has been so directly beneficial to humanity? And even
more concerning, where will these assaults lead? Such a long and sustained attack on
biomedical innovation from so many sectors of society could have severe
implications for our economy and our health system alike.
***
It wasn’t always like this. In a 1963 speech, President
John F. Kennedy praised “the miracles of modern science.” He noted that
“medical science has done much to ease the pain and suffering of serious
illness; and it has helped to add more than 20 years to the average length of
life since 1900. The wonders worked in a modern American hospital hold out new
hopes for our senior citizens.”
In issuing this rather banal statement, Kennedy was
actually taking a stand of sorts. Four years earlier, Democratic Senator Estes
Kefauver had begun a series of hearings about drug pricing. One of the
prominent exhibits was a graphic showing an antibiotic pill with the claim that
only 2 percent of the consumer payout came from the actual cost of producing
the pill—which deceptively suggested that the remaining 98 percent was pure
company profit. But the price accusations attracted little attention. The hearings
are largely remembered for the discussion of horrifying birth defects caused by
the anti-nausea drug thalidomide, news of which broke after the hearing had
begun. The hearings led to the 1962 passage of the Kefauver-Harris Amendments
to the Federal Food, Drug, and Cosmetic Act. These amendments gave the Food and
Drug Administration more power over the pharmaceutical approval process,
specifically granting the FDA authority to determine the efficacy of
pharmaceutical products as well as rule on their safety.
The Kefauver hearings were a watershed moment. They
highlighted a terrible story about the unintended dangers of powerful drugs,
which would become a staple of anti-pharma activity in the generations to come.
But the regulatory regime they helped bring about also calmed things down for
several decades. Over the next 20 years, the companies produced important new
therapies for heart disease, high blood pressure, ulcers, inflammation, and
tuberculosis, among other conditions, and the American people only wanted more. The extraordinarily
lengthy FDA approval process may have been calming, but it was also
frustrating, and the companies bore the brunt of the public’s impatience. Until
the 1980s, the most powerful criticism of the companies was that they were
moving too slowly. That came to a head during the AIDS crisis, when the
companies and their regulators at the FDA were accused of failing to treat an
epidemic that ended up killing more than 100,000 Americans from 1981 to 1990 as
an emergency requiring new and radical measures. This
argument had purchase across the ideological fault lines in the United
States—both the radical ACT UP and the conservative Wall Street Journal editorial
page beat the drum for the relaxation of FDA regulations to allow the
administering of AIDS drugs to dying people who needed them desperately and had
nothing to lose.
As this push demonstrated, there was a general belief in
the United States at the time that the pharmaceutical industry was a symbol of
American innovation and scientific progress. The platform adopted at the 1984
Republican Convention included a plank on pharmaceutical innovation that is
worth quoting at length, lest today’s readers fail to
believe it ever happened:
We will maintain our commitment to
health excellence by fostering research into yet-unconquered diseases. There is
no better investment we as a nation can make than in programs which hold the
promise of sounder health and longer life. For every dollar we spend on health
research, we save many more in health care costs…. That research effort holds
great promise for combatting cancer, heart disease, brain disorders, mental
illness, diabetes, Alzheimer’s disease, sickle cell anemia, and numerous other
illnesses which threaten our nation’s welfare. We commit to its continuance.
As president, Reagan was generally cooperative with the
pharmaceutical industry. During the 1982 Chicago-area Tylenol poisonings that
killed seven people, Reagan gave Johnson & Johnson and its CEO James Burke
wide latitude to take the lead in managing the crisis. The company pulled the
product from the shelves and then returned it with the tamper-proof packaging
that is now ubiquitous. Reagan later had Burke come to the White House, where
he said that Burke “lived up to the highest ideals of corporate responsibility
and grace under pressure.” Burke’s deft management is cited in business-school
textbooks as a shining example of how to handle crises, but it’s hard to
imagine the federal government providing such deference—or even a modicum of
praise—to a pharmaceutical company in a similar situation today.
Reagan was not alone. His successor George H. W. Bush
favorably cited pharma’s research efforts in the fight against cancer. Nor was
it Republicans exclusively who expressed their admiration. In 1993, Bill
Clinton called for improving American childhood immunization rates while
noting, with justification, that America had “the finest pharmaceutical
companies in the world.”
Clinton also invited Kurt Landgraf, president and CEO of
DuPont Merck, to speak at the White House about how to combat youth drug use.
As Landgraf noted, “The United States has the most successful pharmaceutical
industry in the world. And it depends, in part, for its success on a decent
partnership with the federal government, especially through the Food and Drug
Administration.” As late as 2000, toward the end of his presidency, Clinton
touted pharmaceutical companies for “using supercomputers to simulate literally
millions and millions of possible candidates for new drugs, cutting down
development time for new anticancer drugs, for example, by several years.”
Yet even while presidents of both parties saw
pharmaceutical companies as loyal partners and productive developers of
important therapeutics, other actors were starting to target them.
***
The first serious enemy was the plaintiffs’ bar.
Throughout the 1980s and 1990s, trial lawyers and their allies were pushing
for—and securing—changes in legislation and in tort law that led to the
emergence of mass tort litigation. There were a number of key milestones on
this path. One was the rise of a legal doctrine called “market share
liability.” It originated in California, and it allowed injured parties to sue
manufacturers even if it was not clear which specific manufacturer made the
product that caused the harm. Any and all manufacturers could be deemed liable
for the harm that a product class caused, whether or not it could be proved
that their specific product bore the responsibility for the injury.
Another key change came with the easing of rules on class
action lawsuits that began to affect pharmaceutical companies in the
1990s—particularly when it came to the so-called Fen-Phen problem. “Fen-Phen”
was the collective name for two weight-loss drugs from Wyeth (then called
American Home Products) given in tandem. In 1996, a Mayo Clinic study showed
that the one-two punch could cause heart-valve damage.
After the study was published in the New England
Journal of Medicine, Wyeth pulled the drugs, Pondimin and Redux, off the
market in 1997. Tort lawyers followed with a massive class action suit
on behalf of hundreds of thousands of patients. As it turned out, while the
danger was real, the problem did not affect anywhere near as many people as
participated in the suit. Many of the claims turned out to be fraudulent, filed
by people hoping to join the gold rush. All told, the
payments cost Wyeth about $22 billion. This was one of the first massive
plaintiffs’-bar judgments against the pharmaceutical industry for “failure to
warn” about health risks. Other suits followed, including one against Merck’s
painkiller Vioxx, which prompted more than 27,000 lawsuits and cost Merck $1
billion in legal fees before the company agreed to an almost $5 billion
settlement.
In addition to the product-liability cases, plaintiffs’
attorneys also partnered with the government in going after drug companies on
pricing issues. They did this through qui tam lawsuits—a term deriving
from a Latin phrase meaning “he who brings action for the king as well as
himself.” Such suits have existed in some form for centuries in both the
British and American systems, and they allow a citizen to raise a complaint and
then pursue action on behalf of the government. Qui tam was supercharged
in 1986. In that year, Congress increased incentives for whistleblowing under
the False Claims Act—a move that exposed pharmaceutical companies to actions
against them by disgruntled or even fired employees who claimed to have
evidence of malfeasance. Once such a suit is filed, a U.S. attorney’s office
can then work with the complainant and outside plaintiffs’ attorneys to pursue
the case against a company.
Qui tam cases are particularly fraught. If a
company contests one and loses, one of the penalties is losing the right to do
business with the federal government. This is a potential death knell for
health-care companies, since almost 50 percent of the overall health-care
dollars spent in the United States come from the government—in particular,
from the Centers for Medicare and Medicaid Services. Because losing the
government as a buyer is an unacceptable outcome, the companies are strongly
incentivized to settle, whether or not the claim is even remotely just. Such
cases also incentivize trial lawyers to file suit, take a proffered settlement,
and then move on to the next claim. Some U.S. attorneys’ offices, like the one in
Boston, are known for actively seeking out qui tam opportunities.
Anti-pharma lawsuits are a gold mine for the tort bar.
According to a 2024 Public Citizen study by Michael Abrams and Sidney Wolfe,
“From 1991–2021, the federal and state governments entered into 482 settlements
with pharmaceutical manufacturers, totaling $62.3 billion in financial
penalties.” Public Citizen is an ally of the trial lawyers, so it is bragging,
not complaining, about the additional $62.3 billion in costs imposed on the
pharmaceutical development process. This figure is only a portion of the
overall amount pharma companies have paid to private actors—some of which is
confidential, so the total is not known—in class action and product-liability
lawsuits against the industry.
Trial lawyers are also allied with the Democratic Party.
According to one study, 99 percent of political contributions from trial
lawyers at top firms go blue. This partisan giving has influenced the party’s
standard-bearers. Up through Clinton, Democratic presidents were neutral to
positive in their comments about pharma. As lawsuits proliferated and giving
increased, that changed. On the campaign trail in 2000, the Democratic nominee
for president, Al Gore, claimed that pharmaceutical profits were “way out of
line.” In calling for a new Medicare benefit to pay for pharmaceuticals, a
benefit not included in the original Medicare program, Gore blamed the
companies for “using the market power to dictate prices that are way above what
competition would set them at.”
Gore lost the race to George W. Bush, but Bush followed
him in supporting a plan to create a prescription-drug benefit under Medicare.
The Medicare Modernization Act (MMA) was passed in 2003, but it ramped up
rather than tamped down anti-drug-company rhetoric. According to Billy Tauzin,
former head of the industry trade association PhRMA and previously a 14-term
member of the House who served both as a Democrat and a Republican, Democratic
antipathy toward the pharmaceutical companies increased with the passage of the
MMA, as Democrats did not want to give Republicans a win on the issue. In
addition to fighting against it, Democrats quickly shifted their rhetoric to
lump the Bush legislation in with the increasingly targeted pharmaceutical
industry.
The evolution in Democratic politics could be seen in the
subsequent party platforms. The 2004 platform mentioned “drug companies” only
in passing, complaining that “in President George Bush’s America, drug company
and HMO profits count for more than family and small business health costs” and
charging that “the current Medicare drug program serves drug companies more
than seniors.” That same year, the
Democratic presidential nominee, John Kerry, chose pharmaceutical-industry
critic Senator John Edwards as his running mate—a man who had made a fortune as
a trial lawyer.
The game was afoot. Attacking pharmaceutical companies
became part of the political playbook for nearly all Democratic politicians.
Two decades on, the 2024 Democratic Party platform talked about standing up to
and cracking down on “Big Pharma” 11 times, accusing “Big Pharma” of charging
“exorbitant prices.”
These days, though, it’s not just Democrats who rail on
pharma. Republicans have become more critical of the industry as well. Some of
this stems from an increasing anti-corporate bent among Republicans such as
Senator Josh Hawley. But there is also some resentment among Republicans who
remember that PhRMA backed the passage of Obamacare in 2010. Republicans
understandably felt that they had long taken fire for defending the industry
and that the industry deserved the back of their hand for betraying them when
it came to their top priority of stopping Obamacare. (When I asked Tauzin, the
man behind the decision, about this, he agreed it was
“accurate.”) According to a recent poll, only 13 percent of Republicans
surveyed support the pharmaceutical industry, down from 45 percent as recently
as 2020. This means that the industry now takes rhetorical hits, and policy
losses, from both Democrats and Republicans.
***
The culture war against pharmaceutical companies was, by
this point, a generation old. Since the 1990s, Hollywood has made pharma
companies the villain in a steady stream of movies, including The Fugitive
(1993), Jurassic Park (1993), The Constant Gardener (2005), Side
Effects (2013), Dallas Buyers Club (2013), and The Report (2019).
Add to this TV docudramas like The Crime of the Century (2021) and Dopesick
(2021), and the aforementioned series Matlock, and it is possible that
the average consumer of popular culture in 21st-century America will see more
negative depictions of pharmaceutical companies than of Islamist terrorists,
the Chinese Communist Party, and tyrannical governments in Cuba or North Korea combined.
There are a variety of sources for all this hostility,
ranging from the organic to the nefarious. One problem for the pharmaceutical
companies is that drugs have indeed gotten more expensive. Having reached the
limits of discovery with small-molecule biochemical products—in other words,
pills—we have entered a new realm of more complicated biologics. These are
sophisticated products manufactured in living systems rather than synthesized
chemically.
Biologics are harder to produce, harder to get approved,
and harder to administer, since they have to be infused rather than
swallowed—but they also have far more potential to create transformative
therapies in the areas of cancer and autoimmune diseases. All these factors
make the new products more costly. In addition, the Biden administration’s
Inflation Reduction Act imposed a new penalty: small-molecule drugs become
eligible for government price-setting after 9 years on the market, versus 13
years for biologics, which incentivizes companies to make more-expensive
biologics rather than pills.
Another source of public dissatisfaction with the
industry comes from something the industry lobbied for and finally won in 1997:
permission to engage in direct-to-consumer advertising. While such advertising
is in line with the First Amendment
and certainly helped pharmaceutical companies promote their products, in
retrospect the proliferation of these ads may have done them more harm than
good.1
For a long time, pharmaceutical companies had
successfully argued that they needed to charge higher prices for their product
in the U.S. than they did abroad to pay for the research and development of new
products. Yet the flood of televised ads led many Americans to feel that the
companies could and should have been using the revenue generated by the high
prices to pay for research rather than for expensive marketing campaigns (and
in the case of Viagra, forcing parents to distract their children so as to avoid
awkward conversations). In addition, the FDA-mandated disclosures of side
effects (those verbal lists mid-commercial that warn of stomach pain, vertigo,
and problems with other parts of the body) made Americans queasy and increased
their wariness. On top of that, the heavy advertising of brand-name products
made it harder for doctors to prescribe cheaper alternatives, which has
contributed to the cost spiral—people want the named drug, not the generic.
While ads were spurring Americans to demand exciting but
costly new meds, regular folk were increasingly worrying about the costs of
health care. The three-legged stool of Medicare for old people, Medicaid for
poor people, and employer-sponsored insurance for the vast majority of employed
people was getting wobbly in a society in which increasing numbers of people
were not working, or working for small companies that could not provide
insurance, or working for themselves. This led to the multi-decade debate that
preceded the passage of Obamacare in 2010.2
For rank-and-file Americans, even those with health-care
coverage, pharmaceutical prices seemed to be a growing problem. Although the
industry argued that its products were only 9 percent of overall health-care
spending, they were a consistently higher percentage of people’s out-of-pocket
expenses, even for the insured. Doctor visits were largely paid for, but the
co-pays on the drugs the doctors prescribed were often eye-opening.
This problem is compounded by the perception that
pharmaceutical products are some kind of a public good—even a human right. The
American people like the fact that American companies regularly produce
innovative, lifesaving, and life-extending products, but they don’t like that
some people can afford those products and some cannot.
There are also misperceptions that the National
Institutes of Health, on which taxpayers spend about $48 billion annually,
develops new drugs. According to Sally Pipes’s new book, The World’s
Medicine Chest, President Lyndon Johnson was himself shocked to learn that
it does not—something he found out in 1968, after he had been president for
five years.3 Pharmaceutical companies are the actors who take basic
research, some but not all of which comes from NIH, and develop it into usable
products. This is why the panic over the proposed reduction of administrative
costs in NIH grants to universities in the early days of the Trump
administration is wildly out of proportion—such cuts are unlikely to bring
cures to a halt, given the predominant role pharma companies play in actually
bringing therapies to market.
People also think that the FDA administers clinical
trials. It doesn’t. These are done by the companies at great cost—costs that
increase when the FDA increases the requirements for said trials.
All these misperceptions feed a belief that taxpayer
dollars create new drugs, a belief that fails to take into account the
vital role the private sector plays: If we want new lifesaving products,
they have to be developed, and pharmaceutical companies are the developers.
High prices, out-of-pocket costs, political and financial
motivations, and misperceptions mixing with unrealistic expectations have
driven the industry into disfavor. Public trust in the industry stands at only
18 percent, barely above that of Congress, at 16 percent. All the efforts
to condition the environment against the industry have had an effect.
At the same time that the industry was being assailed
from the outside, it was also making costly mistakes of its own. One was its
failure to see what was happening around it and respond in an effective way.
The industry’s tone-deaf response to the assault against it—highlighting its
accomplishments without recognizing the depth of citizen anger—meant that it
was alienating Americans when it should have been working to assuage them. The
effort to win over Americans was made harder by issues such as the EpiPen
pricing scandal of the 2010s, in which the price of this emergency allergy
treatment skyrocketed from $60 to over $600 for no apparent reason beyond a
profit opportunity.
In addition to these mistakes, there is also the opioid
crisis, which has plagued our nation for more than two decades and killed more
than 1 million Americans.4 In this instance, the pharmaceutical
company responsible for setting things in motion, Purdue Pharma, has recently
agreed to a $7.4 billion settlement. Purdue was clearly a bad actor, but that
does not mean the entire pharmaceutical industry is similarly disreputable. Yet
the cultural and political opprobrium indiscriminately paints all companies,
most of which are just trying to sell existing therapeutic products and develop
new ones, with the same brush.
***
Amid all this, one thing that the industry’s many critics
fail to consider is the second-order effect of such a multi-decade assault. The
vitriol directed against our pharmaceutical industry has many potential costs,
not all of them visible. These costs manifest in at least four areas worth
considering.
The first is price. As discussed, one of the reasons for
the negativity around the pharmaceutical companies is the cost of their
products. But the assaults, and the costs they impose in settlements and
verdicts, only raise the prices the companies have to charge to recoup their
investments. The billions in legal payments—so high that many pharma companies
must self-insure because the insurance companies consider them uninsurable—have
to come from somewhere. One Colorado study specifically listed liability risk
as a cause of higher drug costs in the U.S. relative to England. The costs add
to the price, which adds to the resentment Americans feel toward the industry.
Yet as Pipes writes, the high price of a pharmaceutical product “is not greed.
It’s math.”
A second type of cost comes from how the government
treats an unpopular industry. A steady stream of attacks can affect how
regulatory bodies view the pharmaceutical companies. When FDA officials get
nervous about the negative tone in which Big Pharma is discussed, they start to
look askance at new approvals, which are typically only in the range of dozens
per year in any case. (An old FDA joke goes, “You never get called up to the
Hill for the drug you didn’t approve.”)
Another government-imposed limitation comes in the form
of the Biden administration’s price controls—which are imposed through mandated
negotiations for certain pharmaceutical products. As former FDA chief counsel
Dan Troy5 described this Inflation Reduction Act provision in the Wall
Street Journal: “Secret negotiations force pharmaceutical companies to
agree to government-determined prices amounting to massive discounts off
market-based prices, under the threat of crippling taxes and penalties.” Add to
this an overzealous antitrust policy hostile to mergers, and pharmaceutical
companies are increasingly constrained in the business decisions they get to
make. As with Hollywood movies, boards of directors tell executives not to go
forward with a new product unless they are sure that it is going to be a
blockbuster. That means that many potential treatments for rare diseases—95
percent of which have no approved therapy—will never get made.
Yet another invisible cost comes in the form of talent.
Students and would-be researchers look at the news, at the size and scope of
the regulatory state, and at the image of Big Pharma in the culture, and wonder
why they would want to work for an industry crippled by government scrutiny and
the tort bar and viewed as evil by their peers. The smart chemist at MIT or
Caltech may choose to work in cosmetics, or food, or the fashion industry
instead. Or they may choose to work on pharmaceutical development, but not in
America. Forty percent of pharma’s scientific workers are immigrants. They have
already moved once; perhaps they would move again to a more welcoming
environment for pharmaceutical development. (Wegovy and Ozempic are produced in
Denmark.) As a result, we can never know what great therapeutics will never be
developed because smart people chose to shun an unpopular industry, or to work
elsewhere.
The industry even has trouble finding defenders. Most
people I spoke to for this article took pains to clarify that they were not on
the record. Web searches on this subject uncover reams of material assaulting
the industry, denunciations that vastly outnumber any defenses that the
companies or their allies can muster. And the attacks against defenders can get
ugly. A friend of mine suffered a host of negative media attacks simply for
asking a pharma executive to look at a draft of an article to check for factual
errors. You had better believe that the atmosphere around the industry limits
the number of industry defenders and makes those brave enough to try it
exceedingly nervous.
***
As for the pharma companies themselves, for all their
flaws, they keep going about their business of trying to create
life-transforming products—from antibiotics to vaccines to anti-ulcer H2 blockers to
therapies against Alzheimer’s. They also do it, for the most part, in
America—for now. Sixty-seven percent of drugs developed in the 2010s came from
the U.S., and 80 percent of new drugs in the research pipeline are from U.S.
companies as well. There’s a reason for the American dominance up to this
point: our capitalist system. According to Pipes, “Among America’s great
advantages in pharmaceutical sciences is our relatively free market for
prescription drugs, as well as our rigorous system for protecting intellectual
property.”
Yet in the face of this constant onslaught, there is no
guarantee that American pharmaceutical companies will keep leading the world.
The example of vaccines is worth looking at. Legal assaults against them had
become so prevalent that companies were coming to grips with the notion that
the costs were just not worth the potential benefits. Congress created the
Vaccine Injury Compensation Program in 1986 and then amended it in 2016 to limit the
liability that pharmaceutical companies face from lawsuits over vaccine-related
side effects. Trial lawyers grumbled, but without the program we would no longer
have domestically produced vaccines. Its creation was a rare moment of sanity
amid decades of increasing vitriol against a vital American industry.
Thanks to new technological advances such as the
unlocking of the human genome, bioinformatics, and artificial intelligence, we
could be on the cusp of a biomedical revolution. This revolution has the
potential to produce amazing new technologies that could save, extend, and
improve countless lives, and help the American economy in the process. Because
of our free-market system, American pharmaceutical companies are best equipped
to take advantage of these promising developments—and maintain America’s economic
and innovation advantage in the process.
Yet the constant assault against the pharmaceutical
companies for trying to create new cures has got to be discouraging. At some
point, we need to wonder whether these businesses are just going to say enough is
enough and either go elsewhere or close down their operations in this country.
Perhaps Americans would rethink things were that to happen, but by then it
would be far too late to make up for the loss. As Tauzin, himself a cancer
survivor because of pharmaceutical intervention, told me, “If we allow the
hatred of the industry to continue, we are going to lose investment and people
are going to die.”
1 See my “The End of Medical Miracles,” Commentary, June 2009.
2 See my “Health Care: A Two-Decade Blunder,” Commentary, April 2010.
3 Encounter, 2025.
4 See Nicholas Eberstadt, “Our Miserable 21st Century,” Commentary, March 2017.
5 Full disclosure: Dan Troy is my brother.