What If We Tried a Real Quarantine?

Voluntary social distancing has helped, but governments have ignored the most powerful tool they have.

With more and more states coming out of lockdown, the “end game” for COVID seems to be in sight. The worst wave of death is (maybe) over. Surely life can go back to normal?

Unfortunately, this hope is likely to be in vain. To the extent that the threat of COVID is receding around the world, it is because of a massive (and mostly voluntary) campaign of social distancing, in many places helped along by the arrival of warmer weather. When social life returns to normal, or when the weather begins to cool in the fall, it’s quite likely that COVID will come roaring back. Thus far, no highly effective treatment options have been found, and a safe vaccine remains a long way off. It is likely, then, that the fall of 2020 will see another outbreak. Many states and school systems are already planning how to manage frequent, episodic closures, on the assumption that further outbreaks will occur.

Unfortunately, most Western governments have left their most powerful tool on the shelf. Every country that has beaten COVID has done it the same way: with centralized quarantine. If we want to avoid a second round of lockdowns, panic, and disruption to life, there is only one path forward: to adopt a functional system of centralized quarantine, as humans before us have done for millennia.


Before we get into what centralized quarantine might look like today, it’s worth walking through the history of this strategy. In Leviticus 13 and 14, Moses gave the people of Israel instructions on how to deal with outbreaks of infectious disease (in this case, a skin condition widely translated as “leprosy,” though it is very unlikely that the disease described was true leprosy). The instructions are a Bronze Age version of “test and quarantine.” Detailed diagnostic criteria are listed for experts to use in providing a diagnosis. Any case that met a very low standard for a “possible positive” was then isolated from the rest of the community in a secure location until a certain window of time had passed without symptoms.

This is textbook “centralized quarantine.” Infected people didn’t shelter in place, and isolation was not limited to highly infectious people. The instruction of God Almighty through His prophets to the people of God when they confronted a contagious disease was very simple: Isolate large numbers of people who might be infectious at a safe distance from the community.

While whatever disease the Israelites were trying to control has been lost to history, the reason Greek and medieval translators assumed it to be leprosy is easy to understand. 

Leprosy became endemic in Europe sometime in the Roman period, and by the 900s and 1000s was tearing wide swaths through the population. It left people horribly disfigured and, given the lack of treatment options, probably killed 5 percent or more of those infected. While even today scientists do not know everything about how leprosy can be spread, there is good evidence that the primary vehicle is through mucus and respiratory droplets, just like COVID. Seeing this horrible plague around them, ancient translators just assumed Leviticus was talking about the same disease.

But they did more than assume. They established enormous systems of “lazar houses,” or leper colonies. Thousands of these isolation units were established throughout Europe in the years before the bubonic plague. Medieval religious orders stepped up to provide relatively humane living and care conditions for lepers.

It’s important to understand that leprosy itself is as dangerous as it has ever been. The bacterium that causes leprosy today is genetically almost identical to the one that raged through Europe centuries ago, except that it has acquired resistance to treatments developed through the 1960s. But leprosy is not a scourge today because an estimated 95 percent of the European population alive today is naturally resistant.

How did we become resistant to leprosy? This is a hotly debated question. Many people acquired individual immunity by being exposed to the disease, but this kind of immunity isn’t heritable. Leprosy did kill a lot of people, but not enough to create genetic immunity. Even the bubonic plague, which killed far more people than leprosy, has not yielded the same widespread population resistance as exists for leprosy.

Detailed study of medieval cemeteries in Scandinavia has yielded an answer. Leprosy was driven to extinction in many Danish cities before 1400, while it persisted longer in rural areas. The cities where leprosy declined share a common trait: They established lazar houses, or “leprosaria,” where infected people were isolated. This resulted in a dramatic decline of infection in the wider population.

But it’s not just that. Individuals in leprosaria found it difficult to reproduce. They were under constant religious oversight, segregated from wider society, and of course suffering a painful disease. Between all these pressures, it appears that people genetically predisposed to leprosy were weeded out (or weeded themselves out) of the population through non-reproduction. Meanwhile, a related bacterium (possibly one which infects cows, but more on that below) gradually infected many people in rural areas, providing them immunity.

Many forces worked together to eliminate leprosy, but one of the strongest factors was simply a concentrated program of religiously motivated centralized quarantine, in this case lifelong quarantine for infected people.


Leprosy is an odd condition because, while infectious, it has a very long incubation period, and kills very slowly. By the 1600s, it had dramatically declined as a public health menace, and the lazar houses were mostly empty.

But another medieval killer, bubonic plague, was quite different. Throughout history, bubonic plague has swept through societies in waves. It mutates quickly, meaning it is hard for societies to build up long-lasting herd immunity. The disease can spread through fleas as well as person-to-person. It is extremely lethal. In the 1600s, the bubonic plague was still a recurrent and terrifying killer.

It is also, as it turns out, extremely beatable. There are numerous cases of bubonic plague being successfully controlled without modern pharmaceuticals, but the best-documented case is the city of Ferrara from 1629 to 1631. Strikingly, the scientific paper about Ferrara’s successful management of bubonic plague was accepted for publication in December 2019, and just officially published last month.

From 1629 to 1631, a terrible wave of plague killed tens of thousands in every city in northern Italy. Except for Ferrara, where there were fewer than a dozen plague deaths. This was not due to luck: Several towns near Ferrara’s borders were hit hard, suffering thousands of deaths. But Ferrara, alone in the center of northern Italy, stood resilient and relatively plague-free.

Among the city’s strategies: Strict travel restrictions limited who could come to the city; trade goods were repackaged before being sold locally. Movement was limited, and most city gates were sealed. Mostly, these strategies worked. But not entirely. In August 1630, several cases broke out in the city. This should have been the end for Ferrara: Once you’ve got a few bubonic plague cases, it tends to spread. But the city’s officials deployed all the options they had on hand: All those who were sick were put in centralized quarantine in the now-empty leper colonies. So were their close contacts, business partners, families, and others. Their houses were sealed up while they were away, and after a safe period of time scrubbed with a kind of medieval bleach.

There were a few other scares, which demonstrate how extreme the measures were. A postal worker got sick: He and everyone on his route were put into quarantine. A school child showed some symptoms: School was canceled for two months. If this all sounds surprisingly modern for 400 years ago, it shouldn’t: “Plague protocols” were written for many cities around the world. Unfortunately, they were often ignored. Ferrara is unique, not for having written rules, but for actually following those rules. Politicians in Ferrara shelled out a huge amount of money from their budget to pay for food for quarantined people, have doctors check on the poor, and generally keep their city safe. They succeeded.


Leprosy turns out to be important to other chapters of historical epidemic-fighting as well. I noted above that one theory of the decline of leprosy suggests that “cross immunity” may have had a role to play; that is, that exposure to some other disease may yield immunity to leprosy. Some researchers have a theory of what that “other disease” may have been: tuberculosis. Tuberculosis has been with humans since practically the dawn of our species. However, it appears to have become considerably more common in the last few centuries. Whether tuberculosis’ rise is the cause of declining leprosy, or whether the decline in leprosy is the cause of the rise in tuberculosis, may be debated. But what is clear is that by the 1700s and 1800s, the disease was rampant in society, so much so that physicians of the era often simply assumed that the entire population would be infected if they lived long enough. I’ve found in my own research that diagnosed tuberculosis could account for more than a fifth of all deaths in Massachusetts in the 1840s.

And yet, tuberculosis was already in decline in the 1840s. Reports from England suggest tuberculosis made up a third of deaths in the late 1700s in many places. By the 1860s, scientists had conclusively proven that tuberculosis was contagious, something that had not been widely known up to that point. By the 1890s, X-rays could be used to assess the severity of the disease. Once scientists had proven that tuberculosis was contagious, serious efforts began to prevent that spread.

Frustratingly, these efforts came 50 years after they should have. There had already been calls for centralized quarantine of tuberculosis patients as early as the 1840s. Eventually, the “sanatorium” movement began, an effort to establish clean, healthy living environments for tuberculosis patients. 

Sanatoriums have been widely identified as the first effective treatment of tuberculosis. Not because they had some great medical innovation at their root, but because they provided integrated, managed care for patients while also removing infectious patients from circulation. They might have only slightly reduced the odds of a tuberculosis patient dying, but, perhaps even more importantly, they reduced how many healthy people an infected person would go on to infect.

Sanatoriums worked, just like Ferrara’s centralized quarantines worked, just like medieval lazar houses worked, just like the procedures in Leviticus very likely worked. If societies follow some pretty simple rules, but do so with diligence, it is possible to beat even epidemic diseases for which there is no cure and no vaccine. To this day, tuberculosis, leprosy, and bubonic plague are challenging conditions to treat and manage. Only tuberculosis has a vaccine, and it is of somewhat limited effectiveness compared to other vaccines. And yet, very few of us spend our days worrying about tuberculosis, leprosy, or the bubonic plague. 


Centralized quarantine is the normal way to deal with epidemic disease. The CDC even maintains a website about its dedicated “quarantine stations,” which are primarily focused on international borders. The first centralized quarantine site in America was built in 1799 to deal with yellow fever victims (it was not yet understood that yellow fever is transmitted primarily through mosquitoes). We literally have a “National Quarantine Act” that gave the federal government quarantine powers in 1878, reinforced in 1944, then transferred to what would become the CDC in 1967.

And then in the 1970s, the U.S. closed down 47 of its 55 quarantine stations because, and I’m quoting from CDC’s website here, “infectious diseases were thought to be a thing of the past.” In a fit of arrogance and hubris, American leaders simply decided that it could never happen here. President George W. Bush, to his credit, expanded the number back up to 20 in the 2000s, but the stations remain small in scale compared to their former levels. By law, individuals can be quarantined for cholera, diphtheria, tuberculosis, plague, smallpox, yellow fever, hemorrhagic fevers like Ebola, new varieties of influenza, and severe acute respiratory syndromes like SARS, MERS, or COVID. President Trump updated the federal regulation governing these stations (and clarified the legality of their use in domestic cases) as recently as 2017.

There is precisely zero legal debate about whether the government has the authority to order someone who has, or is suspected of having, a contagious, lethal disease into a quarantine site. Certainly, there are limits: A person could not be held indefinitely, and would require medical care, food, and other necessities. The government would need to provide reasonable accommodation for disability or unusual family situations. But at the end of the day, police officers have the authority to shoot someone walking around a Walmart waving a gun, and the CDC has the authority to detain someone walking around a Walmart spewing lethal bullets of viral particles all over the place.

The question is why Americans didn’t leap to this response. Why did we, rather than using existing legal authorities and time-tested procedures, instead choose to lock down all of society, despite no precedent for such a response and little evidence it would work? Why did we, instead of deploying a simple, easy-to-understand, low-tech solution that any local government could have implemented, focus on a futuristic test-and-trace regime that is never going to be fully deployable? The legal authority for quarantine is very clearly codified in law, the historic basis for its effectiveness is obvious, and it was even used as a strategy against the 1918 influenza pandemic, against polio, and against recent novel influenzas.

I have several theories about what went wrong, none of them complimentary to the quality of America’s political and medical leaders.

My first theory is simply that we have become a decadent and degraded society lacking the will to protect ourselves. Put bluntly, had COVID arrived in 1970, before PCR testing existed and when we still had orders of magnitude more quarantine sites and more widespread lived experience of sanatoriums, polio, and other epidemics, there is no question in my mind what would have happened. If the World Health Organization had notified America about COVID in January of 1970, President Nixon would have rolled out a centralized mass quarantine program within weeks. Not because he was clever or wise, but simply because that’s what all the public health manuals of the time made clear you simply had to do. The nation would not have locked down: We would never have shown that kind of weakness to the Soviets. Rather, we would have locked down infected people, their close contacts, and any other potential or suspected cases. We would have tackled the problem with the tools we had, and we would have beaten it.

Furthermore, our leaders would have had the very plain priority of beating COVID. U.S. public health authorities at that time had very recent memories of polio, encephalitis lethargica, and the terrible flu seasons of 1929, 1943, 1957, and 1968. And the nation had an acute sense that without strenuous exertion of political will to achieve victory, numerous disasters could befall the nation, nuclear or biological. Had deaths in New York risen 100 percent above their seasonal average (as is now the case), Americans of even the quite recent past would not be debating getting haircuts: they would be freely supporting the institution of a large-scale program to quarantine several million potentially infected people for several weeks while the nation got a handle on the problem.

But while I think decadence is a possible explanation, a second theory is that it’s not a lack of will at all, but simply the ability of modern technology to redirect political will. The availability of rapid testing changed the political calculus of epidemics. People know it’s possible to know if they are infected, and so resist the idea of “precautionary” measures. The existence of testing creates political demands for testing, and political opposition to dramatic measures aimed at people who have not tested positive. This second theory suggests that American leaders today are acting as aggressively as any ever would have, but that the existence of modern scientific tools, far from providing them more options, actually provides them fewer options. The people demand testing, and everything else must wait (even if waiting leads to thousands of preventable deaths). In other words, people who are worried about COVID have become so fixated on the possibility of a scientific-technological solution centered on testing that they are unwilling to consider the brute-force (but more effective) strategy of centralized quarantine.

This view is especially frustrating since it is completely backward. Centralized quarantine will succeed in reducing caseloads, reducing the need for testing. With lower caseloads, current levels of testing will be sufficient. Quarantine comes before mass testing.

A third theory relates to the scientific and medical establishments themselves. From the earliest days of the COVID crisis, expert epidemiologists promoted the idea that the objective of public health responses was to “flatten the curve.” The idea was that COVID was likely to overwhelm the health care system, which would lead to a huge number of excess deaths, whereas if the health care system was not overwhelmed, deaths could be kept lower, even if the same number of people became infected. Put simply, the “flatten the curve” approach was focused entirely on the medical problem facing patients, doctors, and hospitals: How can we ensure COVID patients don’t die? Simple: Space out when they arrive at the hospital, so we never get overwhelmed. Because health care in Western countries is so advanced, technical, and effective for most conditions, this approach came very naturally to medical leaders. They wanted to do everything they could to ensure patient survival.

But this strategy was perverse from the beginning. It presumed that we couldn’t actually prevent the disease from obtaining widespread transmission. Every single epidemiological model used by developed countries simply took for granted that every community would have effectively unstoppable community-based transmission, and the only options were how to space it out along the curve. Thus, governments didn’t bother with limiting domestic travel (which can prevent new outbreaks), because they simply assumed there would be universal outbreaks. Governments didn’t bother with centralized quarantine, because they simply assumed any outbreak would be too big to quarantine anyway.

But where this advice is particularly galling is on masks. Governments repeatedly claimed that masks didn’t work, citing academic research showing that masks do not prevent infection of the person wearing the mask. The only effect of masks governments could imagine was the individual effect of a person protecting themselves with a mask. The reason they focused on this effect is that this is the medical use of N95 respirators: Medical professionals wear them as “personal protective equipment,” or PPE. Masks were repeatedly portrayed as a measure to protect yourself, one the public didn’t need, because for that purpose they only worked if you wore basically a full HAZMAT suit.

This advice was insane. Masks are not “personal” protective equipment. They are societal protective equipment. The reason masks are important is not that they protect the mask-wearer, but that they protect the rest of society from the mask-wearer. Western governments could only imagine their people protecting themselves: wearing masks to avoid infection, sheltering in place to avoid infection, etc. They simply could not imagine their citizens banding together to protect each other and obliterate the foe, whether through adopting universal mask-wearing or voluntarily accepting centralized quarantine.

Finally, there’s a last theory on why governments eschewed centralized quarantine. China responded to COVID by locking down Wuhan; famously going to the extreme of forcibly sealing apartment complexes. This model was widely praised by the World Health Organization, and widely regarded in many countries as being effective. Formal academic study disagrees, of course: The lockdown in Wuhan did not reduce the spread of the disease below epidemic levels. But the idea that it did was promoted by Chinese elites looking for proof that authoritarianism is good, and WHO officials carrying water for them. Unfortunately, the wider world took the lesson to heart.

Most frustratingly of all, as I’ve shown elsewhere, the case of Wuhan actually shows that centralized quarantine is what matters. Wuhan’s outbreak was resolved through quarantine, not lockdown. What ended the outbreak in Wuhan wasn’t welded-shut apartments, but massive field-quarantine units where tens of thousands of people spent weeks at a time being protectively isolated from society. 


It is not too late for the U.S. to implement centralized quarantine programs. The U.S. has 5 million hotel rooms, mostly vacant right now. We also have thousands of vacant schools that could be converted into temporary residences for low-risk individuals. And if that’s not enough, the U.S. military and FEMA both have enormous experience setting up large-scale temporary habitation areas. They could certainly set up centralized quarantine sites for hundreds of thousands or even millions of Americans.

However, many Americans worry about what centralized quarantine would mean. So let me dispel some myths.

Centralized quarantine will separate parents and children.

It is likely that children could be quarantined with parents in necessary cases. Children are at very low risk of dying of COVID, and if we’re using hotels and schools, there will be no shortage of family-size rooms. It is unnecessary to break up families in a centralized quarantine program (though if families request separate placement, that could be arranged too). Furthermore, in most cases, if one family member tests positive, all others are likely to be close contacts, and thus will be subjected to quarantine anyway.

Isolation will be miserable.

In the case of people who test positive but are asymptomatic or low-risk, it may not be necessary to have individual isolation at all. Group quarantine of positive-but-low-risk individuals should not pose any significant public health risk. For others, isolation will be difficult, but, again, assuming hotels are the dominant quarantine site, individuals should at least have access to the internet to entertain themselves.

Quarantine will result in ruined household budgets.

The government can and should compensate individuals for lost work time. Paying quarantined individuals a reasonably high, fixed per-day rate would be no costlier than the COVID relief programs already in place, and far more useful to society on the whole.

People will be forced into camps at gunpoint.

While centralized quarantine should be mandatory, it probably will not require much or any actual force. The government can achieve its goals in a far simpler way: administrative fines. Individuals resisting a quarantine order could simply have a daily fine levied on them, set by a judge at a level sufficiently large to be likely to create compliance. Some people might pay the fine or accept bankruptcy, but most people would give in pretty quickly after racking up $1,000 daily fines. Failure to pay the fine would simply result in a lien on property or wage garnishment by the IRS, not a SWAT team knocking down your door. Centralized quarantine in America need not involve the police at all. Achieving 90 percent compliance and raking in huge fines from the remainder would still be a huge improvement over the status quo.

Americans won’t accept this.

Just like they didn’t accept lockdowns? The idea that Americans won’t accept a bootheel on their liberty seems pretty clearly disproven at this point: Centralized quarantine is a soft slipper by comparison. Moreover, it’s still early, and the evidence is anecdotal, but businesses in states that have reopened are reporting that customers are not turning out in droves, despite our collective desire to return to normal.


Unfortunately, a quick return to carefree life will lead to a resurgence of COVID, and more deaths.

More to the point, life is not best lived free of care. COVID is the latest major novel epidemic, but it won’t be the last. We need to learn the lesson Americans forgot in the 20th century, and maintain the capacity to respond to future epidemics. Mass quarantine sites need to be reopened and maintained for the future. Centralized quarantine protocols need to be updated for the 21st century. Governments need to run drills on epidemic response that include this kind of strategy as a viable option. Households should always maintain readiness for emergencies like epidemics, and individuals should always be mindful of rational risks to their health and the health of those around them.

As we move toward reopening, policymakers should take steps to prepare for the next wave, especially by testing out centralized quarantine programs, and making a game plan for how to scale them up rapidly during the next outbreak. 

If we fail to do this, the next outbreak will be as bad as the latest one. But if we succeed, if we implement centralized quarantine, then our next outbreak can be as mild as in the countries that did implement centralized quarantine, like Hong Kong, Vietnam, and South Korea.

Lyman Stone is the chief information officer of the consulting firm Demographic Intelligence, a research fellow at the Institute for Family Studies, and an adjunct fellow at the American Enterprise Institute.

Photograph of an Athens hotel housing Greeks who had returned from Spain and Turkey by Dimitris Lampropoulos/NurPhoto/Getty Images.