How the human life span doubled in 100 years


In September 1918, a flu virus began spreading through Camp Devens, an overcrowded military base just outside Boston. By the end of the second week of the outbreak, one in five soldiers at the base had come down with the illness. But the speed with which it spread through the camp was not nearly as shocking as the lethality. “It is only a matter of a few hours then until death comes,” a camp physician wrote. “It is horrible. One can stand it to see one, two or 20 men die, but to see these poor devils dropping like flies sort of gets on your nerves. We have been averaging about 100 deaths per day.”

The devastation at Camp Devens would soon be followed by even more catastrophic outbreaks, as the so-called Spanish flu — a strain of influenza virus that science now identifies as H1N1 — spread around the world. In the United States, it would cause nearly half of all deaths over the next year. In what was already a time of murderous war, the disease killed millions more on the front lines and in military hospitals in Europe; in some populations in India, the mortality rate for those infected approached 20 percent. The best estimates suggest that as many as 100 million people died from the Great Influenza outbreak that eventually circled the globe. To put that in perspective, roughly three million people have died from Covid-19 over the past year, on a planet with four times as many people.

There was another key difference between these two pandemics. The H1N1 outbreak of 1918-19 was unusually lethal among young adults, normally the most resilient cohort during ordinary flu seasons. Younger people experienced a precipitous drop in expected life during the H1N1 outbreak, while the life expectancies of much older people were unaffected. In the United States, practically overnight, average life expectancy plunged to 47 from 54; in England and Wales, it fell more than a decade, from a historic height of 54 to an Elizabethan-era 41. In India, average life expectancy fell below 30 years.

Imagine you were there at Camp Devens in late 1918, surveying the bodies stacked in a makeshift morgue. Or you were roaming the streets of Bombay, where more than 5 percent of the population died of influenza in a matter of months. Imagine touring the military hospitals of Europe, seeing the bodies of so many young men simultaneously mutilated by the new technologies of warfare — machine guns and tanks and aerial bombers — and the respiratory violence of H1N1. Imagine knowing the toll this carnage would take on global life expectancy, with the entire planet lurching backward to numbers more suited to the 17th century, not the 20th. What forecast would you have made for the next hundred years? Was the progress of the past half-century merely a fluke, easily overturned by military violence and the increased risk of pandemics in an age of global connection? Or was the Spanish flu a preview of an even darker future, in which some rogue virus could cause a collapse of civilization itself?

Both grim scenarios seemed within the bounds of possibility. And yet, amazingly, neither came to pass. Instead, what followed was a century of unexpected life.

The period from 1916 to 1920 was the last time a major reversal in global life expectancy was recorded. (During World War II, life expectancy did briefly decline, but with nowhere near the severity of the collapse during the Great Influenza.) The descendants of English and Welsh babies born in 1918, who on average lived just 41 years, today enjoy life expectancies in the 80s. And while Western nations surged far ahead in average life span during the first half of the last century, other nations have caught up in recent decades, with China and India having recorded what almost certainly rank as the fastest gains of any society in history. A hundred years ago, an impoverished resident of Bombay or Delhi would beat the odds simply by surviving into his or her late 20s. Today average life expectancy in India is roughly 70 years.

In effect, during the century since the end of the Great Influenza outbreak, the average human life span has doubled. There are few measures of human progress more astonishing than this. If you were to publish a newspaper that came out just once a century, the banner headline surely would — or should — be the declaration of this incredible feat. But of course, the story of our extra life span almost never appears on the front page of our actual daily newspapers, because the drama and heroism that have given us those additional years are far more evident in hindsight than they are in the moment. That is, the story of our extra life is a story of progress in its usual form: brilliant ideas and collaborations unfolding far from the spotlight of public attention, setting in motion incremental improvements that take decades to display their true magnitude.

Another reason we have a hard time recognizing this kind of progress is that it tends to be measured not in events but in nonevents: the smallpox infection that didn’t kill you at age 2; the accidental scrape that didn’t give you a lethal bacterial infection; the drinking water that didn’t poison you with cholera. In a sense, human beings have been increasingly protected by an invisible shield, one that has been built, piece by piece, over the last few centuries, keeping us ever safer and further from death. It protects us through countless interventions, big and small: the chlorine in our drinking water, the ring vaccinations that rid the world of smallpox, the data centers mapping new outbreaks all around the planet. A crisis like the global pandemic of 2020-21 gives us a new perspective on all that progress. Pandemics have an interesting tendency to make that invisible shield suddenly, briefly visible. For once, we’re reminded of how dependent everyday life is on medical science, hospitals, public-health authorities, drug supply chains and more. And an event like the Covid-19 crisis does something else as well: It helps us perceive the holes in that shield, the vulnerabilities, the places where we need new scientific breakthroughs, new systems, new ways of protecting ourselves from emergent threats.

How did this great doubling of the human life span happen? When the history textbooks do touch on the subject of improving health, they often nod to three critical breakthroughs, all of them presented as triumphs of the scientific method: vaccines, germ theory and antibiotics. But the real story is far more complicated. Those breakthroughs might have been initiated by scientists, but it took the work of activists and public intellectuals and legal reformers to bring their benefits to everyday people. From this perspective, the doubling of human life span is an achievement that is closer to something like universal suffrage or the abolition of slavery: progress that required new social movements, new forms of persuasion and new kinds of public institutions to take root. And it required lifestyle changes that ran throughout all echelons of society: washing hands, quitting smoking, getting vaccinated, wearing masks during a pandemic.

It is not always easy to perceive the cumulative impact of all that work, all that cultural transformation. The end result is not one of those visible icons of modernity: a skyscraper, a moon landing, a fighter jet, a smartphone. Instead, it manifests in countless achievements, often quickly forgotten, sometimes literally invisible: the drinking water that’s free of microorganisms, or the vaccine received in early childhood and never thought about again. The fact that these achievements are so myriad and subtle — and thus underrepresented in the stories we tell ourselves about modern progress — should not be an excuse to keep our focus on the astronauts and fighter pilots. Instead, it should inspire us to correct our vision.

The first life-expectancy tables were calculated in the late 1600s, during the dawn of modern statistics and probability. It turned out to be one of those advances in measurement that transform the thing being measured: By following changes in life expectancy over time, and comparing expected life among different populations, it became easier to detect inequalities in outcomes, perceive long-term threats and track the effects of promising health interventions more accurately. Demographers now distinguish between life expectancies at different ages. In a society with very high infant mortality, life expectancy at birth might be 20, because so many people die in the first days of life, pulling the overall number down, while life expectancy at 20 might easily be in the 60s. The doubling of life expectancy over the past century is a result of progress at both ends of the age spectrum: Children are dying far less frequently, and the elderly are living much longer. Centenarians are projected to be the fastest-growing age group worldwide.
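
The arithmetic behind that distinction is easy to see with a toy example. The Python sketch below uses an invented, deliberately crude mortality schedule (the numbers are illustrative, not historical) to show how heavy childhood mortality drags down life expectancy at birth even when those who survive to adulthood routinely die in their 60s.

```python
# Illustrative only: an invented, simplified mortality schedule, not historical data.
# Each entry is (average age at death, share of the birth cohort dying at that age).
mortality_schedule = [
    (1, 0.50),   # half the cohort dies in infancy or early childhood
    (12, 0.10),  # a tenth dies between childhood and age 20
    (65, 0.40),  # the rest die in old age
]

# Life expectancy at birth: the average age at death across the whole cohort.
e_at_birth = sum(age * share for age, share in mortality_schedule)

# Expected age at death for those who survive to 20: condition on reaching 20.
survivors = [(age, share) for age, share in mortality_schedule if age >= 20]
surviving_share = sum(share for _, share in survivors)
e_at_20 = sum(age * share for age, share in survivors) / surviving_share

print(f"Life expectancy at birth: {e_at_birth:.0f} years")            # about 28
print(f"Expected age at death, given survival to 20: {e_at_20:.0f}")  # 65
```

In a schedule like this, trimming the childhood deaths raises the first number dramatically even if no adult lives a day longer, which is why infant mortality figures so heavily in the story that follows.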

One strange thing about the story of global life expectancy is how steady the number was for almost the entirety of human history. Until the middle of the 18th century, the figure appears to have rarely exceeded a ceiling of about 35 years, rising or falling with a good harvest or a disease outbreak but never showing long-term signs of improvement. A key factor keeping average life expectancy low was the shockingly high rates of infant and childhood mortality: Two in five children perished before reaching adulthood. Human beings had spent 10,000 years inventing agriculture, gunpowder, double-entry accounting, perspective in painting — but these undeniable advances in collective human knowledge failed to move the needle in one critical category: how long the average person could expect to live.

The first hint that this ceiling might be breached appeared in Britain during the middle decades of the 18th century, just as the Enlightenment and industrialization were combining to transform European and North American societies. The change was subtle at first and largely imperceptible to contemporary observers. In fact, it was not properly documented until the 1960s, when a historical demographer named T.H. Hollingsworth analyzed records dating back to 1550 and discovered a startling pattern. Right around 1750, after two centuries of stasis, the average life expectancy of a British aristocrat began to increase at a steady rate, year after year, creating a measurable gap between the elites and the rest of the population. By the 1770s, the British elite were living on average into their mid-40s; by the middle of Queen Victoria’s reign, they were approaching a life expectancy at birth of 60.

Those aristocrats constituted a vanishingly small proportion of humanity. But the demographic transformation they experienced offered a glimpse of the future. The endless bobbing of the previous 10,000 years had not only taken on a new shape — a more or less straight line, steadily slanting upward. It also marked the beginning of a measurable gap in health outcomes. Before 1750, it didn’t matter whether you were a baron or a haberdasher or a hunter-gatherer: Your life expectancy at birth was going to be in the 30s. All their wealth and privilege gave European elites no advantage whatsoever at the elemental task of keeping themselves — and their children most of all — alive.

The best way to appreciate the lack of health inequalities before 1750 is to contemplate the list of European royalty killed by the deadly smallpox virus in the preceding decades. During the outbreak of 1711 alone, smallpox killed the Holy Roman emperor Joseph I; three siblings of the future Holy Roman emperor Francis I; and the heir to the French throne, the grand dauphin Louis. Smallpox would go on to take the lives of King Louis I of Spain; Emperor Peter II of Russia; Louise Hippolyte, sovereign princess of Monaco; King Louis XV of France; and Maximilian III Joseph, elector of Bavaria.

How, then, did the British elite manage that first sustained extension in average life span? The classic story of health progress from the age is Edward Jenner’s invention of the smallpox vaccine, which ranks alongside Newton’s apple and Franklin’s kite among the most familiar narratives in the history of science. After noticing that exposure to a related illness called cowpox — often contracted by dairy workers — seemed to prevent more dangerous smallpox infections, Jenner scraped some pus from the cowpox blisters of a milkmaid and then inserted the material, via incisions made with a lancet, into the arms of an 8-year-old boy. After developing a light fever, the boy soon proved to be immune to variola, the virus that causes smallpox. As the first true vaccination, Jenner’s experiment was indeed a watershed moment in the history of medicine and in the ancient interaction between humans and microorganisms. But Jenner’s triumph did not occur until May 1796, well after the initial takeoff in life expectancy among the British elite. The timing suggests that an earlier innovation was most likely driving much of the initial progress, one that originated far from the centers of Western science and medicine: variolation.

No one knows exactly when and where variolation, a kind of proto-vaccination that involves direct exposure to small amounts of the virus itself, was first practiced. Some accounts suggest it may have originated in the Indian subcontinent thousands of years ago. The historian Joseph Needham described a 10th-century variolater, possibly a Taoist hermit, from Sichuan who brought the technique to the royal court after a Chinese minister’s son died of smallpox. Whatever its origins, the historical record is clear that the practice had spread throughout China, India and Persia by the 1600s. Enslaved Africans brought the technique to the American colonies. Like many great ideas, it may have been independently discovered multiple times in unconnected regions of the world. It is possible, in fact, that the adoption of variolation may have temporarily increased life expectancies in those regions as well, but the lack of health records makes this impossible to determine. All we can say for certain is that whatever increase might have happened had disappeared by the time countries like China or India began keeping accurate data on life span.

Variolation made it to Britain thanks to an unlikely advocate: a well-bred and erudite young woman named Lady Mary Wortley Montagu. A smallpox survivor herself, Montagu was the daughter of the Duke of Kingston-Upon-Hull and wife of the grandson of the first Earl of Sandwich. As a teenager, she wrote poetry and an epistolary novel; in her early 20s, she struck up a correspondence with the poet Alexander Pope. She crossed paths with variolation thanks to an accident of history: Shortly after her successful recovery from smallpox, her husband, Edward Wortley Montagu, was appointed ambassador to the Ottoman Empire. In 1716, after spending her entire life in London and the English countryside, Mary Montagu moved her growing family to Constantinople, living there for two years.

Montagu immersed herself in the culture of the city, visiting the famous baths and studying Turkish. In her explorations, she came across the practice of variolation and described it in enthusiastic letters back to her friends and family in England: “The Small Pox — so fatal and so general amongst us — is here rendered entirely harmless, by the invention of engrafting.” In March 1718, she had her young son engrafted. After a few days of fever and an outbreak of pustules on both arms, Montagu’s son made a full recovery. He would go on to live into his 60s, seemingly immune to smallpox for the rest of his life. He is generally considered the first British citizen to have been inoculated. His sister was successfully inoculated in 1721, after Montagu and her family returned to London. Over the next few years, inspired by Montagu’s success, the Princess of Wales inoculated three of her children, including her son Frederick, the heir to the British throne. Frederick would survive his childhood untouched by smallpox, and while he died before ascending to the throne, he did live long enough to produce an heir: George William Frederick, who would eventually become King George III.

Thanks in large part to Mary Montagu’s advocacy, variolation spread through the upper echelons of British society over the subsequent decades. It remained a controversial procedure throughout the century; many of its practitioners worked outside the official medical establishment of the age. But the adoption of variolation by the British elite left an indelible mark in the history of human life expectancy: that first upward spike that began to appear in the middle of the 1700s, as a whole generation of British aristocrats survived their childhoods thanks at least in part to their increased levels of immunity to variola. Crucially, one Englishman inoculated during that period was Edward Jenner himself, who received the treatment as a young child in 1757; decades later, as a local doctor, he regularly inoculated his own patients. Without a lifelong familiarity with variolation, it is unlikely that Jenner would have hit upon the idea of injecting pus from a less virulent but related disease.

As Jenner would later demonstrate, vaccination was the safer procedure; patients were significantly more likely to die from variolation than from vaccination. But undeniably, a defining element of the intervention lay in the idea of triggering an immune response by exposing a patient to a small quantity of infected material. That idea had emerged elsewhere, not in the fertile mind of the country doctor, musing on the strange immunity of the milkmaids, but rather in the minds of pre-Enlightenment healers in China and India and Africa hundreds of years before. Vaccination was a truly global idea from the beginning.

The positive trends in life expectancy among the British elites in the late 1700s would not become a mass phenomenon for another century. Variolation and vaccination had spread through the rural poor and the industrial working classes during that period, in part thanks to political and legal campaigns that led to mandatory vaccination programs. But the decline of smallpox was overwhelmed by the man-made threats of industrialization. For much of the 19th century, the overall balance sheet of scientific and technological advances was a net negative in terms of human health: The life-span benefits of one technological advance (variolation and vaccines) were quickly wiped out by the costs of another (industrialization).

In 1843, the British statistician William Farr compared life expectancies in three parts of England: rural Surrey, metropolitan London and industrial Liverpool. Farr found that people in Surrey were enjoying life expectancies close to 50, a significant improvement over the long ceiling of the mid-30s. The national average was 41. London, for all its grandeur and wealth, was still stuck at 35. But Liverpool — a city that had undergone staggering explosions in population density, because of industrialization — was the true shocker. The average Liverpudlian died at 25.

The mortality trends in the United States during the first half of the 19th century were equally stark. Despite the widespread adoption of vaccination, overall life expectancy in the United States declined by 13 years between 1800 and 1850. In 1815, about 30 percent of all reported deaths in New York were of children under 5. By the middle of the century, that figure had risen to more than 60 percent.

One culprit was increasingly clear. In May 1858, a progressive journalist in New York named Frank Leslie published a 5,000-word exposé denouncing a brutal killer in the metropolis. Malevolent figures, Leslie wrote, were responsible for what he called “the wholesale slaughter of the innocents.” He went on, “For the midnight assassin, we have the rope and the gallows; for the robber the penitentiary; but for those who murder our children by the thousands we have neither reprobation nor punishment.” Leslie was railing not against mobsters or drug peddlers but rather a more surprising nemesis: milk.

Drinking animal milk — a practice as old as animal domestication itself — has always presented health risks, whether from spoilage or from infections passed on by the animal. But the density of industrial cities like New York had made cow’s milk far deadlier than it was in earlier times. In an age without refrigeration, milk would spoil in summer months if it was brought in from far-flung pastures in New Jersey or upstate New York. Increased participation by women in the industrial labor force meant that more infants and young children were drinking cow’s milk, even though a significant portion of dairy cows suffered from bovine tuberculosis, and unprocessed milk from these cows could transmit the bacterium that causes the disease to human beings. Other potentially fatal illnesses were also linked to milk, including diphtheria, typhoid and scarlet fever.

How did milk go from being a “liquid poison” — as Frank Leslie called it — to the icon of health and vitality that it became in the 20th century? The obvious answer begins in 1854, when a young Louis Pasteur took a job at the University of Lille in the northern corner of France, just west of the French-Belgian border. Sparked by conversations with winemakers and distillery managers in the region, Pasteur became interested in the question of why certain foods and liquids spoiled. Examining samples of a spoiled beetroot alcohol under a microscope, Pasteur was able to detect not only the yeast organisms responsible for fermentation but also a rod-shaped entity — a bacterium now called Acetobacter aceti — that converts ethanol into acetic acid, the ingredient that gives vinegar its sour taste. These initial observations convinced Pasteur that the mysterious changes of both fermentation and spoilage were not a result of spontaneous generation but rather were a byproduct of living microbes. That insight, which would eventually help provide the foundation of the germ theory of disease, led Pasteur to experiment with different techniques for killing those microbes before they could cause any harm. By 1865, Pasteur, now a professor at the École Normale Supérieure in Paris, had hit upon the technique that would ultimately bear his name: By heating wine to around 130 degrees Fahrenheit and then quickly cooling it, he could kill many of the bacteria within, and in doing so prevent the wine from spoiling without substantially affecting its flavor. And it is that technique, applied to milk all around the world, that now saves countless people from dying of disease every single day.

Understanding that last achievement as a triumph of chemistry is not so much wrong as it is incomplete. One simple measure of why it is incomplete is how long it took for pasteurization to actually have a meaningful effect on the safety of milk: In the United States, it would not become standard practice in the milk industry until a half century after Pasteur conceived it. That’s because progress is never a result of scientific discovery alone. It also requires other forces: crusading journalism, activism, politics. Pasteurization as an idea was first developed in the mind of a chemist. But in the United States, it would finally make a difference thanks to a much wider cast of characters, most memorably a department-store impresario named Nathan Straus.

Born in the kingdom of Bavaria in 1848, Straus moved with his family to the American South, where his father had established a profitable general store. By the 1880s, Straus and his brother Isidor had become part owners of Macy’s department store in Manhattan. Straus had long been concerned about the childhood mortality rates in the city — he had lost two children to disease. Conversations with another German immigrant, the political radical and physician Abraham Jacobi, introduced him to the pasteurization technique, which was finally being applied to milk almost a quarter of a century after Pasteur developed it. Straus saw that pasteurization offered a comparatively simple intervention that could make a meaningful difference in keeping children alive.

One major impediment to pasteurization came from milk consumers themselves. Pasteurized milk was widely considered to be less flavorful than regular milk; the process was also believed to remove the nutritious elements of milk — a belief that has re-emerged in the 21st century among “natural milk” adherents. Dairy producers resisted pasteurization not just because it added an additional cost to the production process but also because they were convinced, with good reason, that it would hurt their sales. And so Straus recognized that changing popular attitudes toward pasteurized milk was an essential step. In 1892, he created a milk laboratory where sterilized milk could be produced at scale. The next year, he began opening what he called milk depots in low-income neighborhoods around the city, which sold the milk below cost. Straus also funded a pasteurization plant on Randall’s Island that supplied sterilized milk to an orphanage there where almost half the children had perished in only three years. Nothing else in their diet or living conditions was altered other than drinking pasteurized milk. Almost immediately, the mortality rate dropped by 14 percent.

Emboldened by the results of these early interventions, Straus started an extended campaign to outlaw unpasteurized milk, an effort that was ferociously opposed by the milk industry and its representatives in statehouses around the country. Quoting an English doctor at a rally in 1907, Straus told the assembled crowd, “The reckless use of raw, unpasteurized milk is little short of a national crime.” Straus’s advocacy attracted the attention of President Theodore Roosevelt, who ordered an investigation into the health benefits of pasteurization. Twenty government experts came to the resounding conclusion that pasteurization “prevents much sickness and saves many lives.” New York still wavered, and in 1909, it was instead Chicago that became the first major American city to require pasteurization. The city’s commissioner of health specifically cited the demonstrations of the “philanthropist Nathan Straus” in making the case for sterilized milk. New York finally followed suit in 1912. By the early 1920s, three decades after Straus opened his first milk depot on the Lower East Side — more than half a century after Pasteur made his namesake breakthrough — unpasteurized milk had been outlawed in almost every major American city.

The fight for pasteurized milk was one of a number of mass interventions — originating in 19th-century science but not implemented at scale until the early 20th century — that triggered the first truly egalitarian rise in life expectancy. By the first decade of the 20th century, average life spans in England and the United States had passed 50 years. Millions of people in industrialized nations found themselves in a genuinely new cycle of positive health trends — what the Nobel-laureate economist Angus Deaton has called “the great escape” — finally breaking through the ceiling that had limited Homo sapiens for the life of the species. The upward trend continued after the brief but terrifying firestorm of the Spanish flu, driven by unprecedented declines in infant and childhood mortality, particularly among working-class populations. From 1915 to 1935, infant-mortality rates in the United States were cut in half, one of the most significant declines in the history of that most critical of measures. For most of the 19th century, fewer than 60 of every hundred human beings born in New York City would make it to adulthood. Today 99 of them do.

One reason the great escape was so egalitarian in scope is that it was propelled by infrastructure advances that benefited the entire population, not just the elites. Starting in the first decades of the 20th century, human beings in cities all around the world began consuming microscopic amounts of chlorine in their drinking water. In sufficient doses, chlorine is a poison; in very small doses, it is harmless to humans but lethal to the bacteria that cause diseases like cholera. Thanks to the same advances in microscopy and lens making that allowed Louis Pasteur to see microbes in wine and milk, scientists could now perceive and measure the amount of microbial life in a given supply of drinking water, which made it possible by the end of the 19th century to test the efficacy of different chemicals, chlorine above all else, in killing off those dangerous microbes. After conducting a number of these experiments, a pioneering sanitary adviser named John Leal quietly added chlorine to the public reservoirs in Jersey City — an audacious act that got Leal sued by the city, which said he had failed to supply “pure and wholesome” water as his contract had stipulated.
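
To make “very small doses” concrete, here is a back-of-the-envelope calculation in Python. The reservoir volume and the target concentration are assumed, illustrative figures, not Leal’s actual specification; the point is simply the scale of the intervention, measured in parts per million.

```python
# Back-of-the-envelope illustration; the volume and dose below are assumed figures,
# not the actual Jersey City specification.
reservoir_liters = 100_000_000   # a hypothetical 100-million-liter reservoir
target_ppm = 0.3                 # target chlorine concentration: 0.3 parts per million

# 1 part per million of chlorine in water is 1 milligram per liter.
chlorine_grams = reservoir_liters * target_ppm / 1000
print(f"Chlorine needed: {chlorine_grams / 1000:.0f} kg "
      f"for {reservoir_liters:,} liters at {target_ppm} ppm")
# Roughly 30 kg of chlorine disinfects 100 million liters of drinking water.
```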

After Leal’s successful experiment, city after city began implementing chlorine disinfectant systems in their waterworks: Chicago in 1912, Detroit in 1913, Cincinnati in 1918. By 1914, more than 50 percent of public-water customers were drinking disinfected water. These interventions turned out to be a lifesaver on an astonishing scale. In 1908, when Leal first started experimenting with chlorine delivery in Jersey City, typhoid was responsible for 30 deaths per 100,000 people. Three decades later, the death rate had been reduced by a factor of 10.

The rise of chlorination, like the rise of pasteurization, could be seen solely as another triumph of applied chemistry. But acting on those new ideas from chemistry — the painstaking effort of turning them into lifesaving interventions — was the work of thousands of people in professions far afield of chemistry: sanitation reformers, local health boards, waterworks engineers. Those were the men and women who quietly labored to transform America’s drinking water from one of the great killers of modern life to a safe and reliable form of hydration.

The increase in life expectancy was also enhanced by the explosion of vaccine development during this period — and the public-health reforms that actually got those vaccines in people’s arms. The whooping-cough vaccine was developed in 1914, a tuberculosis vaccine in 1921 and a diphtheria vaccine in 1923 — followed, most famously, by Jonas Salk’s polio vaccine in the early 1950s.

The curious, almost counterintuitive thing about the first stage of the great escape is that it was not meaningfully propelled by medical drugs. Vaccines could protect you from future infections, but if you actually got sick — or developed an infection from a cut or surgical procedure — there was very little that medical science could do for you. There was no shortage of pills and potions to take, of course. It’s just that a vast majority were ineffective at best. The historian John Barry notes that “the 1889 edition of the Merck Manual of Medical Information recommended one hundred treatments for bronchitis, each one with its fervent believers, yet the current editor of the manual recognizes that ‘none of them worked.’” If a pharmacist in 1900 was looking to stock his shelves with medicinal cures for various ailments — gout, perhaps, or indigestion — he would be likely to consult the extensive catalog of Parke, Davis & Company, now Parke-Davis, one of the most successful and well-regarded drug companies in the United States. In the pages of that catalog, he would have seen products like Damiana et Phosphorus cum Nux, which combined a psychedelic shrub and strychnine to create a product designed to “revive sexual existence.” Another elixir by the name of Duffield’s Concentrated Medicinal Fluid Extracts contained belladonna, arsenic and mercury. Cocaine was sold in an injectable form, as well as in powders and cigarettes. The catalog proudly announced that the drug would take “the place of food, make the coward brave, the silent eloquent” and “render the sufferer insensitive to pain.”

Today, of course, we think of medicine as one of the pillars of modern progress, but until quite recently, drug development was a scattershot and largely unscientific endeavor. One critical factor was the lack of any legal prohibition on selling junk medicine. In fact, in the United States, the pharmaceutical industry was almost entirely unregulated for the first decades of the 20th century. Technically speaking, there was an organization known as the Bureau of Chemistry, created in 1901 to oversee the industry. But this initial rendition of what ultimately became the U.S. Food and Drug Administration was toothless in terms of its ability to ensure that customers were receiving effective medical treatments. Its only responsibility was to ensure that the chemical ingredients listed on the bottle were actually present in the medicine itself. If a company wanted to put mercury or cocaine in its miracle drug, the Bureau of Chemistry had no problem with that — so long as it was mentioned on the label.

Medical drugs finally began to have a material impact on life expectancy in the middle of the 20th century, led by the most famous “magic bullet” treatment of all: penicillin. Just as in the case of Jenner and the smallpox vaccine, the story of penicillin traditionally centers on a lone genius and a moment of surprising discovery. On a fateful day in September 1928, the Scottish scientist Alexander Fleming accidentally left a petri dish of Staphylococcus bacteria next to an open window before departing for a two-week vacation. When he returned to find a blue-green mold growing in the petri dish, he was about to throw it away, when he noticed something strange: The mold appeared to have stopped the bacteria’s growth. Looking at the mold under a microscope, Fleming saw that it was literally breaking down the cell walls of the bacteria, effectively destroying them. Seventeen years later, after the true magnitude of his discovery had become apparent, he was awarded the Nobel Prize in Medicine.

Like many stories of scientific breakthroughs, though, the tale of the petri dish and the open window cartoonishly simplifies and compresses the real narrative of how penicillin — and the other antibiotics that quickly followed in its wake — came to transform the world. Far from being the story of a lone genius, the triumph of penicillin is actually one of the great stories of international, multidisciplinary collaboration in the history of science. It also represents perhaps the most undersung triumph of the Allied nations during World War II. Ask most people to name a top-secret military project from that era involving an international team of brilliant scientists, and what most likely would spring to mind is the Manhattan Project. In fact, the race to produce penicillin at scale involved all the same elements — only it was a race to build a genuinely new way to keep people alive, not kill them.

For all Fleming’s perceptiveness in noting the antibacterial properties of the mold, he seemed not to have entirely grasped the true potential of what he had stumbled upon. He failed to set up the most basic of experimental trials to test its efficacy at killing bacteria outside the petri dish. It took two Oxford scientists — Howard Florey and Ernst Boris Chain — to turn penicillin from a curiosity to a lifesaver, and their work didn’t begin for more than a decade after Fleming’s original discovery. By then, global events had turned the mold from a mere medical breakthrough into a key military asset: War had broken out, and it was clear that a miracle drug that could reduce the death rate from infections would be a major boost to the side that was first able to develop it.

With the help of an engineer named Norman Heatley, Florey and Chain had built an elaborate contraption that could convert, in the span of an hour, 12 liters of broth filled with the penicillin mold into two liters of penicillin medication. By early 1941, after experiments on mice, Florey and Chain decided they were ready to try their new treatment on an actual human. In a nearby hospital they found a police constable named Albert Alexander, who had become “desperately and pathetically ill” — as one of the Oxford scientists wrote — from an infection acquired from a rose-thorn scratch. Alexander’s condition reminds us of the kind of grotesque infections that used to originate in the smallest of cuts in the era before antibiotics; already he had lost his left eye to the bacteria, and the other had gone blind. The night after Heatley visited Alexander in the hospital, he wrote in his diary, “He was oozing pus everywhere.”

Within hours of receiving an initial dose of penicillin, Alexander began to heal. It was like watching a reverse horror movie: The man’s body had been visibly disintegrating, but suddenly it switched directions. His temperature settled back to a normal range; for the first time in days, he could see through his remaining eye. The pus that had been dripping from his scalp entirely disappeared.

As they watched Alexander’s condition improve, Florey and his colleagues recognized they were witnessing something genuinely new. “Chain was dancing with excitement,” a colleague would write of the momentous day; Florey was “reserved and quiet but nonetheless intensely thrilled by this remarkable clinical story.” Yet for all their genius, Florey and Chain had not yet solved the problem of scale. In fact, they had such limited supplies of penicillin that they took to recycling the compound that had been excreted in Alexander’s urine. After two weeks of treatment, they ran out of the medicine entirely; Alexander’s condition immediately worsened, and on March 15 the policeman died. His remarkable, if temporary, recovery had made it clear that penicillin could battle bacterial infections. What was less clear was whether anyone could produce enough of it to make a difference.

To solve the scale problem, Florey turned to the Americans. He wrote to Warren Weaver, the visionary head of the Rockefeller Foundation’s natural sciences division, explaining the promising new medicine. Weaver recognized the significance of the finding and arranged to have the penicillin — and the Oxford team — brought over to the United States, far from the German bombs that had begun raining down on Britain. On July 1, 1941, Florey and Heatley took the Pan Am Clipper from Lisbon, carrying a locked briefcase containing a significant portion of the world’s penicillin supply. In America, the team was quickly set up with a lab at the Department of Agriculture’s Northern Regional Research Laboratory in Peoria, Ill. The project quickly gained the support of U.S. military officials, who were eager to find a drug that would protect the troops from deadly infections — and of several American drug companies, including Merck and Pfizer.

It might seem strange that Florey and Heatley were set up in an agricultural lab when they were working on a medical drug. But Peoria turned out to be the perfect spot for them. The agricultural scientists had extensive experience with molds and other soil-based organisms. And the heartland location had one meaningful advantage: its proximity to corn. The mold turned out to thrive in vats of corn steep liquor, which was a waste product created by making cornstarch.

While the scientists experimented with creating larger yields in the corn steep liquor, they also suspected that there might be other strains of the Penicillium mold out in the wild that would be more amenable to rapid growth. At the same time, U.S. soldiers and sailors collected soil samples around the globe — Eastern Europe, North Africa, South America — to be shipped back to the American labs for investigation. An earlier soil search in the United States had brought back an organism that would become the basis for streptomycin, now one of the most widely used antibiotics in the world. In the years immediately after the end of the war, Pfizer and other drug companies would go on to conduct major exploratory missions seeking out soil samples everywhere, from the bottoms of mine shafts to wind-borne samples gathered with the aid of balloons. In the end Pfizer collected a staggering 135,000 distinct samples.

The search for promising molds took place closer to home as well. During the summer months of 1942, shoppers in Peoria grocery stores began to notice a strange presence in the fresh produce aisles, a young woman intently examining the fruit on display, picking out and purchasing the ones with visible rot. Her name was Mary Hunt, and she was a bacteriologist from the Peoria lab, assigned the task of locating promising molds that might replace the existing strains that were being used. (Her unusual shopping habits ultimately gave her the nickname Moldy Mary.) One of Hunt’s molds — growing in a particularly unappetizing cantaloupe — turned out to be far more productive than the original strains that Florey and Chain’s team had tested. Nearly every strain of penicillin in use today descends from the colony Hunt found in that cantaloupe.

Aided by the advanced production techniques of the drug companies, the United States was soon producing a stable penicillin in quantities sufficient to be distributed to military hospitals around the world. When the Allied troops landed on the Normandy beaches on June 6, 1944, they were carrying penicillin along with their weapons.

Penicillin, alongside the other antibiotics developed soon after the war ended, triggered a revolution in human health. Mass killers like tuberculosis were almost entirely eliminated. People stopped getting severe infections from simple cuts and scrapes, like the rose-thorn scratch that killed Albert Alexander. The magical power of antibiotics to ward off infection also opened the door to new treatments. Radical surgical procedures like organ transplants became mainstream.

The antibiotics revolution marked a more general turning point in the history of medicine: Physicians now had genuinely useful drugs to prescribe. Over the subsequent decades, antibiotics were joined by other new forms of treatment: the antiretroviral drugs that have saved so many H.I.V.-positive people from the death sentence of AIDS, the statins and ACE inhibitors used to treat heart disease and now a new regime of immunotherapies that hold the promise of curing certain forms of cancer for good. Hospitals are no longer places we go to die, offering nothing but bandages and cold comfort. Routine surgical procedures rarely result in life-threatening infections.

Those medical breakthroughs were also propelled by the statistical breakthrough of randomized controlled trials (R.C.T.s), first developed in the late 1940s, which finally allowed researchers to test the efficacy of experimental treatments or detect health risks from dangerous pollutants. The methodology of the R.C.T. then allowed private companies and government agencies to determine empirically whether a given drug actually worked. In the early 1960s, Congress passed the landmark Kefauver-Harris Drug Amendments, which radically extended the demands made on new drug applicants. The amendments introduced many changes to the regulatory code, but the most striking one was this: For the first time, drug companies would be required to supply proof of efficacy. It wasn’t enough for Big Pharma to offer evidence that they had listed the right ingredients on the label. They had to show proof — made possible by the invention of the R.C.T. — that their supposed cures actually worked.
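
The core logic of an R.C.T. can be sketched in a few lines. The toy simulation below (in Python; the sample size and recovery rates are invented purely for illustration) randomly assigns subjects to a drug group or a placebo group and then compares recovery rates, the comparison on which a proof-of-efficacy requirement ultimately rests. A real trial would add blinding, pre-registration and a formal significance test.

```python
import random

random.seed(42)

# Toy simulation, not a real trial: the recovery rates below are assumed for illustration.
P_RECOVER_PLACEBO = 0.40   # assumed recovery rate without the drug
P_RECOVER_DRUG = 0.55      # assumed recovery rate with the drug
N_SUBJECTS = 2000

# Randomization: each subject has an equal chance of receiving the drug or a placebo,
# so on average the two groups differ only in the treatment itself.
drug_group, placebo_group = [], []
for _ in range(N_SUBJECTS):
    group = drug_group if random.random() < 0.5 else placebo_group
    p = P_RECOVER_DRUG if group is drug_group else P_RECOVER_PLACEBO
    group.append(1 if random.random() < p else 0)

rate_drug = sum(drug_group) / len(drug_group)
rate_placebo = sum(placebo_group) / len(placebo_group)
print(f"Recovered with drug:    {rate_drug:.1%} (n={len(drug_group)})")
print(f"Recovered with placebo: {rate_placebo:.1%} (n={len(placebo_group)})")
print(f"Estimated treatment effect: {rate_drug - rate_placebo:+.1%}")
```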

The decade following the initial mass production of antibiotics marked the most extreme moment of life-span inequality globally. In 1950, when life expectancy in India and most of Africa had barely budged from the long ceiling of around 35 years, the average American could expect to live 68 years, while Scandinavians had already crossed the 70-year threshold. But the post-colonial era that followed would be characterized by an extraordinary rate of improvement across most of the developing world. The gap between the West and the rest of the world has been narrowing for the past 50 years, at a rate unheard-of in demographic history. It took Sweden roughly 150 years to reduce childhood mortality rates from 30 percent to under 1 percent. Postwar South Korea pulled off the same feat in just 40 years. India nearly doubled life expectancy in just 70 years; many African nations have done the same, despite the ravages of the AIDS epidemic. In 1951, the life-span gap that separated China and the United States was more than 20 years; now it is just two.
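
One way to appreciate how unheard-of that pace is: convert the Sweden and South Korea figures above into implied average annual rates of decline. A quick Python calculation, treating the numbers in the text as round figures:

```python
def annual_decline(start_rate, end_rate, years):
    """Implied average annual rate of decline needed to go from start_rate to end_rate."""
    return 1 - (end_rate / start_rate) ** (1 / years)

# Round figures from the text: childhood mortality falling from 30 percent to under 1 percent.
sweden = annual_decline(0.30, 0.01, 150)       # roughly 150 years
south_korea = annual_decline(0.30, 0.01, 40)   # roughly 40 years

print(f"Sweden:      ~{sweden:.1%} decline per year over 150 years")      # ~2.2% a year
print(f"South Korea: ~{south_korea:.1%} decline per year over 40 years")  # ~8.2% a year
```

By this rough measure, South Korea’s childhood-mortality decline ran more than three times as fast, year over year, as Sweden’s did.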

 
