
Against Scientific Gatekeeping

By Jeffrey A. Singer

Science should be a profession, not a priesthood.

In March 2020, the iconoclastic French microbiologist Didier Raoult announced that the antimalarial drug hydroxychloroquine had cured all 36 COVID-19 patients enrolled in his clinical trial. Many of Raoult’s colleagues rejected his conclusions, arguing that the trial was too small and noting that it was neither randomized nor controlled. But as the deadly coronavirus spread rapidly throughout the world and governments responded with draconian lockdowns, public attention was quickly drawn to the chance that a common and inexpensive drug might rid the world of the danger.

President Donald Trump promoted hydroxychloroquine as a “game-changer,” which raised the ire of many medical and public health experts. Without randomized controlled trials, they complained, it was irresponsible to prescribe the drug for infected patients. Under pressure from Trump, other Republican politicians, and conservative pundits, the Food and Drug Administration (FDA) nevertheless issued an emergency use authorization (EUA) for adding hydroxychloroquine to the strategic national stockpile of COVID-19 treatments.

After numerous randomized controlled trials failed to demonstrate the drug’s effectiveness, the FDA revoked the EUA, leaving the national stockpile with 63 million unused doses of hydroxychloroquine. Florida’s Republican governor, Ron DeSantis, had purchased 1 million doses for the state’s stockpile, which likewise remained unused.

There is a difference, however, between the claim that a drug has been proven not helpful and the weaker claim that it has not been proven helpful. Despite the failure to validate Raoult’s claims, many Americans believed that hydroxychloroquine’s potential benefits outweighed its minimal risks. Exercising their right to self-medicate, some people infected by the coronavirus continued to take the drug.

The hydroxychloroquine brouhaha illustrates the roiling conflict between the scientific establishment and its uncredentialed challengers. Because the internet has democratized science, the academy no longer has a monopoly on specialized information. Based on their own assessments of that information, laypeople can chime in and may even end up driving the scientific narrative, for good or ill.

Meanwhile, the internet is developing its own would-be gatekeepers. Those who oversee the major social media platforms can filter the information and discourse that flow through them. Pleasing the priesthood enhances their credibility with elites and might protect them from criticism and calls for regulatory intervention, but they risk being captured in the process.

Challenges to the priesthoods that claim to represent the “scientific consensus” have made them increasingly intolerant of new ideas. But academic scientists must come to terms with the fact that search engines and the digitization of scientific literature have forever eroded their authority as gatekeepers of knowledge, a development that presents opportunities as well as dangers.

Experts, Yes; Priesthoods, No

Most people prefer experts, of course, especially when it comes to health care. As a surgeon myself, I can hardly object to that tendency. But a problem arises when some of those experts exert outsized influence over the opinions of other experts and thereby establish an orthodoxy enforced by a priesthood. If anyone, expert or otherwise, questions the orthodoxy, they commit heresy. The result is groupthink, which undermines the scientific process.

The COVID-19 pandemic provided many examples. Most medical scientists, for instance, uncritically accepted the epidemiological pronouncements of government-affiliated physicians who were not epidemiologists. At the same time, they dismissed epidemiologists as “fringe” when those specialists dared to question the conventional wisdom.

Or consider the criticism that rained down on Emily Oster, a Brown University economist with extensive experience in data analysis and statistics. Many dismissed her findings—that children had a low risk of catching or spreading the virus, an even lower risk of getting seriously ill, and should be allowed to socialize normally during the pandemic—because she wasn’t an epidemiologist. Ironically, one of her most vocal critics was Sarah Bowen, a sociologist, not an epidemiologist.

The deference to government-endorsed positions is probably related to funding. While “the free university” is “historically the fountainhead of free ideas and scientific discovery,” President Dwight Eisenhower observed in his farewell address, “a government contract becomes virtually a substitute for intellectual curiosity.” He also warned that “we should be alert to the…danger that public policy could itself become captive of a scientific-technological elite.” Today we face both problems.

The Orthodoxy in Earlier Times

The medical science priesthood has a long history of treating outside-the-box thinkers harshly. Toward the end of the 18th century, Britain’s Royal Society refused to publish Edward Jenner’s discovery that inoculating people with material from cowpox pustules—a technique he called “vaccination,” from the Latin word for cow, vacca—prevented them from getting the corresponding human disease, smallpox. Jenner’s medical colleagues considered this idea dangerous; one member of the Royal College of Physicians even suggested that the technique could make people resemble cows.

At the time, many physicians were making a good living by performing variolation, which aimed to prevent smallpox by infecting patients with pus from people with mild cases. Some saw vaccination as a threat to their income. Thankfully, members of Parliament liked Jenner’s idea and appropriated money for him to open a vaccination clinic in London. By the early 1800s, American doctors had adopted the technique. In 1805, Napoleon ordered smallpox vaccination for all of his troops.

Half a century later, the prestigious Vienna General Hospital fired Ignaz Semmelweis from its faculty because he required his medical students and junior physicians to wash their hands before examining obstetrical patients. Semmelweis connected puerperal sepsis—a.k.a. “childbed fever,” then a common cause of postnatal death—to unclean hands. Ten years after Semmelweis returned to his native Budapest, he published The Etiology, Concept and Prophylaxis of Childbed Fever. The medical establishment rained so much vitriol on him that it drove him insane. (Or so the story goes: Some think, in retrospect, that Semmelweis suffered from bipolar disorder.) He died in an asylum in 1865 at the age of 47.

The “germ theory” anticipated by Semmelweis did not take hold until the late 1880s. That helps explain why, in 1854, the public health establishment rebuffed the physician John Snow after he traced a London cholera epidemic to a water pump on Broad Street. Snow correctly suspected that water from the pump carried a pathogen that caused cholera.

Public health officials clung instead to the theory that the disease was carried by a miasma, or “bad air.” The British medical journal The Lancet published a brutal critique of Snow’s theory, and the General Board of Health determined that his idea was “scientifically unsound.” But after another outbreak of cholera in 1866, the public health establishment acknowledged the truth of Snow’s explanation. The incident validated the 19th-century classical liberal philosopher Herbert Spencer’s warning that the public health establishment had come to represent entrenched political interests, distorting science and prolonging the cholera problem. “There is an evident inclination on the part of the medical profession to get itself organized after the fashion of the clerisy,” he wrote in 1851’s Social Statics. “Surgeons and physicians are vigorously striving to erect a medical establishment akin to our religious one. Little do the public at large know how actively professional publications are agitating for state-appointed overseers of the public health.”

Heterodoxy Finds a Welcome Environment

Advances like these made the medical establishment more receptive to heterodoxy. As new knowledge overthrew long-held dogmas over the course of the 20th century, scientists grew more open to fresh hypotheses.

As a surgical resident in the 1970s, for example, I was taught to excise melanomas with about a five-centimeter margin of normal skin, the theory being that dangerous skin cancer should be given a wide berth. A skin graft is needed to cover a defect that size. This approach was never evidence-based but had been universally accepted since the early 20th century. In the mid-’70s, several clinical researchers challenged the dogma. Multiple studies revealed that the five-centimeter margin was no better than a two-centimeter margin. Now the five-centimeter rule is a thing of the past.

For decades, physicians thought the main cause of peptic ulcer disease was hyperacidity in the stomach, often stress-related. In the 1980s, a gastroenterology resident, Barry Marshall, noted the consistent appearance of a bacterium, Helicobacter pylori, on the slides of stomach biopsy specimens he sent to the lab. He suspected the bacterium caused the ulcers. To test the hypothesis, he ingested the bacteria himself and promptly developed gastritis, which he then easily cured with antibiotics. By the early 1990s, several studies had confirmed Marshall’s discovery, and today Helicobacter pylori is recognized as the cause of most peptic ulcers.

“Off-label” use of FDA-approved drugs is another path to medical innovation. When the FDA approves a drug, it specifies the condition the drug is meant to treat, but it is perfectly legal to prescribe the drug for other conditions as well. Roughly 20 percent of all prescriptions written in the U.S. are off-label. The practice is often based on clinical hunches and anecdotal reports, which eventually stimulate formal clinical studies.

Sometimes, as with hydroxychloroquine, the studies fail to validate the initial hunches. But sometimes evidence from clinical trials supports off-label uses. We surgeons use the antibiotic erythromycin to treat postoperative stomach sluggishness. Lithium was originally used to treat gout and bladder stones; now it is used to treat bipolar illness. Thalidomide was developed to treat “morning sickness” in pregnant women. Because it caused horrific birth defects, it is no longer used for that purpose. But thalidomide was subsequently found useful in treating leprosy and multiple myeloma. Tamoxifen, developed as an anti-fertility drug, is now used to treat breast cancer.

These are just a few examples of the rapid advances in the understanding and treatment of health conditions during my medical career, made possible by an environment that welcomes heterodoxy. But even health care practitioners who recognize the value of unconventional thinking tend to bridle when they face challenges from nonexperts.

Today the internet gives everyone access to information that previously was shared only among medical professionals. Many laypeople engage in freelance hypothesizing and theorizing, a development turbocharged by the COVID-19 pandemic. Every physician can tell stories about patients who ask questions because of what they’ve read on the internet. Sometimes those questions are misguided, as when they ask if superfoods or special diets can substitute for surgically removing cancers. But sometimes patients’ internet-inspired concerns are valid, as when they ask whether using surgical mesh to repair hernias can cause life-threatening complications.

It may be true that, as the American science fiction and fantasy writer Theodore Sturgeon said, “90 percent of everything is crap.” But the remaining 10 percent can be important. Health care professionals who see only the costs of their patients’ self-guided journeys through the medical literature tend to view this phenomenon as a threat to the scientific order, fueling a backlash. Their reaction risks throwing the baby out with the bathwater.

The Return of Intolerance

It is easy to understand why the scientific priesthood views the democratization of health care opinions as a threat to its authority and influence. In response, medical experts typically wave the flag of credentialism: If you don’t have an M.D. or another relevant advanced degree, they suggest, you should shut up and do as you’re told. But credentials are not always proof of competence, and relying on them can lead to the automatic rejection of valuable insights.

Economists who criticize COVID-19 research, for example, are often dismissed out of hand because they are not epidemiologists. Yet they can provide a useful perspective on the pandemic.

*****

Continue reading this article at Reason.