Beyond Discovery™: The Path from Research to Human Benefit is a series of articles that trace the origins of important recent technological and medical advances. Each story reveals the crucial role played by basic science, the applications of which could not have been anticipated at the time the original research was conducted.
The National Academy of Sciences ran this project from 1996 to 2003. The website went offline in early 2013 and as of June 2014 is still “under construction.” [2015 update: that link now leads to a page with all the articles as PDFs! Except the MRI one, for some reason.] You can [also] find all twenty articles in the Internet Archive, but it’s kind of a pain to navigate. So I’ve gathered them all here.
The articles (each around 8 pages) are roughly popular-magazine-level accounts of variable quality, but I learned quite a bit from all of them, particularly from the biology and medicine articles. They’re very well written, generally with input from the relevant scientists still living (many of them Nobel laureates). In particular I like the broad view of history, the acknowledged scope of the many branches leading to any particular technology, the variety of topics outside the usual suspects, the focus on fairly recent technology, and the emphasis, bordering on the propagandistic, on the importance and unpredictability of basic research. It seems to me that they filled an important gap in popular science writing in this way. Links, quotes, and some brief comments follow.
Curing Childhood Leukemia (PDF)
The fight against cancer has been more of a war of attrition than a series of spectacular, instantaneous victories, and the research into childhood leukemia over the last 40 years is no exception. But most of the children who are victims of this disease can now be cured, and the drugs that made this possible are the antimetabolite drugs that will be described here. The logic behind those drugs came from a wide array of research that defined the chemical workings of the cell—research done by scientists who could not know that their findings would eventually save the lives of up to thirty thousand children in the United States.
Like most scientific innovations that have had significant effects on society, bioengineered seeds did not emerge solely from the efforts of researchers to improve pest or weed control. Rather they were the by-product of earlier researchers’ curiosity about such basic science questions as: How do bacteria cause plant tumors? How do some viruses protect plants from other viruses? What enables some bacteria to kill insects? The following article explores the trail of research that ultimately led scientists to bioengineer the plants that are beginning to transform agriculture. This story provides a dramatic example of how science works and how basic research can lead to practical results that were unimaginable when the research began.
That last sentence becomes something of a refrain.
Disarming a Deadly Virus (PDF)
More solutions by analogy followed, as scientists interested in finding other substances to lower blood pressure looked to solve the structure of renin, a protein that works in concert with ACE to make angiotensin. Renin was difficult to crystallize, but clues to its structure came, albeit in roundabout fashion, through investigators who were looking into the structure of the digestive enzyme pepsin. Both renin and pepsin are members of a family known as aspartic proteases for the amino acid aspartate in their active sites. Although scientists took the first x-ray crystallography images of pepsin in the 1930s, they had a difficult time unraveling the structure of pepsin. It was not until the 1970s that research on the structure of fungal acid proteases served as a guide for solving the structure of pig pepsin.
GPS: The Role of Atomic Clocks (PDF)
In the course of his research, Rabi invented the technique known as magnetic resonance, by which he could measure the natural resonant frequencies of atoms. Rabi won the 1944 Nobel Prize for his work. It was in that year that he first suggested—”tossed off the idea,” as his students put it—that the precision of these resonances is so great that they could be used to make a clock of extraordinary accuracy…
Rabi himself never pursued the development of such a clock, but other researchers went on to improve on the idea and perfect the technology. In 1949, for instance, research by Rabi’s student Norman Ramsey suggested that making the atoms pass twice through the oscillating electromagnetic field could result in a much more accurate clock. In 1989 Ramsey was awarded the Nobel Prize for his work.
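Just to put numbers on why clock quality is the whole game for GPS (my back-of-the-envelope arithmetic, not from the article): a receiver fixes its position by timing signals that travel at the speed of light, so every nanosecond of clock error is roughly 30 cm of ranging error, and only an atomic standard drifts slowly enough to stay inside that budget.

```python
# Back-of-the-envelope, with round numbers of my own (not from the article).
c = 3.0e8                        # speed of light, m/s
print(c * 1e-9)                  # ranging error per nanosecond of timing error: ~0.3 m

# A cesium standard with fractional frequency stability around 1e-13
# accumulates only a few nanoseconds of error over a whole day:
seconds_per_day = 86_400
print(1e-13 * seconds_per_day)   # ~8.6e-9 s of accumulated timing error per day
```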
There was only one catch—the method worked only on single-stranded DNA and the heating that is needed to unzip the two strands of DNA kills the polymerases. Fortunately, researchers several years earlier had isolated bacteria that had the amazing ability to thrive at temperatures near that of boiling water in hot springs. Scientists discovered that the polymerase isolated from these bacteria could survive the high temperatures needed for PCR. By the late 1980s, the PCR technique had spawned a number of practical developments, of which gene testing is only one.
The discoverer of those bacteria tells a longer story here.
But Lintner took Fabre’s musings further. He not only assumed that the female releases a chemical substance to which the male is exquisitely sensitive; he foresaw that people might be able to harness such chemicals as a means to control insect pests. Noting the chemicals’ “irresistible and far-reaching force,” Lintner asked, “Cannot chemistry come to the aid of the economic entomologist in furnishing at moderate cost the odorous substances needed?”
That case of rare foresight happened in the 1870s, but nothing was really harnessed for another century.
Meanwhile, Lauterbur’s results, published in 1973, included an image of his test sample: a pair of small glass tubes immersed in a vial of water. Working with the small NMR scanner he had created (and using a technique called back projection borrowed from CT scanning), he continued to image small objects, including a tiny crab scavenged by his daughter from the Long Island beach near his home. By 1974, using a larger NMR device, he produced an image of the thoracic cavity of a living mouse. Mansfield, for his part, had imaged a number of plant stems and a dead turkey leg by 1975, and by the next year he had captured the first human NMR image–a finger. Damadian also was at work producing images. In 1977, he produced an image of the chest cavity of a live man.
The 2003 Nobel Prize in Physiology or Medicine was shared by Mansfield (whose “assistance” was obtained for the article, first published in 2001) and Lauterbur but controversially not Damadian. (The line in that article about Damadian’s method having “not proved clinically reliable” was removed between the original version and the version above from August 2002; Damadian was also restored to one note in the timeline.) The full story of their contributions (and drama surrounding their recognition) is fascinating, but also in a sense fairly typical of what it looks like to have multiple independent investigators on similar tracks: all of them made important contributions, built on each other’s ideas in indirect and complicated ways, and continued to develop their ideas along divergent paths well beyond any initial convergence of “discovery.”
Optical fibers offered one approach, although in the mid-1960s it was by no means certain that the answer lay in this direction and other possibilities were seriously considered. Light is channeled in glass fibers by a property known as total internal reflection. The equations governing the trapping of light inside a flat glass plate were known to Augustin-Jean Fresnel as early as 1820, and their extension to what were then known as glass wires was achieved by D. Hondros and Peter Debye in 1910. It was not until 1964, however, that Stewart Miller at Bell Laboratories deduced detailed ways to probe the potential of glass as an efficient long-distance transmission medium.
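A quick gloss on the “trapping” (illustrative numbers of my own, not from the article): light stays inside the fiber core as long as it meets the core-cladding boundary beyond the critical angle fixed by the two refractive indices.

```python
import math

# Total internal reflection in a step-index fiber, sketched with typical
# textbook values (my assumption, not figures from the article).
n_core, n_clad = 1.48, 1.46
theta_c = math.degrees(math.asin(n_clad / n_core))  # critical angle, measured from the normal
print(f"critical angle ≈ {theta_c:.1f}°")
# Rays hitting the boundary at more than ~80° from the normal (i.e. nearly
# grazing along the fiber axis) are reflected back into the core with no loss
# at the interface, which is what lets glass guide light over long distances.
```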
From Explosives to the Gas That Heals: Nitric Oxide in Biology and Medicine (PDF)
The discovery of EDRF caused an explosion in research, with many different groups around the world making important contributions to the search for its identity. But the realization that EDRF and NO were one and the same substance took six years of intense research. From 1980 to 1986 reports of similarities between the two gradually mounted. In hindsight this may seem to have been an inevitable accumulation of data, but at the time the picture was quite confusing.
The first polymer inventors made progress by transforming natural materials in hit-or-miss fashion, but their work greatly accelerated after basic researchers clarified the fundamental characteristics, such as the relation between size or molecular weight and physical properties, that govern the behavior of polymers. Similarly, the progress in medicine and biology that gave birth to organ transplantation still relies on basic research into the role of chemical messengers, genetic codes, and cellular function.
Preserving the Miracle of Sight (PDF)
In Ad Vitellionem Paralipomena, published in 1604, Kepler produced new theories of light, of the physiology of vision, and of the mathematics of refraction. Among other things, he introduced the term “focus” and showed that the convergence of light rays in front of the retina is the cause of myopia, a condition in which distant objects are blurred and near objects are clear. People had been using spectacles for nearly 300 years by this time, but until Kepler—who himself was myopic, or nearsighted—no one understood why or how artificial lenses improved vision.
Sound from Silence: Development of Cochlear Implants (PDF)
The auditory nerve contains 30,000 fibers. Would the researchers have to provide 30,000 electrodes to stimulate all the nerve fibers individually in order to simulate intelligible speech sounds? If so, the project would clearly be impractical. But according to Zwicker and his co-workers, 24 channels were sufficient. In addition, Michael Merzenich of the University of California, San Francisco, simplified the system even further after he uncovered research results from an unexpected source.
Bell Laboratories, which was then the research arm of AT&T, was concerned with how much information needed to be sent over telephone lines to re-create intelligible speech sounds. Bell scientist James Flanagan, who is now at Rutgers University, determined that the frequencies of speech could be divided into as few as six or seven channels and still be understood.
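That channel idea is essentially a vocoder, and it is easy to sketch: split the signal into a few frequency bands, keep only each band’s slowly varying envelope, and use the envelopes to drive band-limited carriers. The toy implementation below is mine (not the Bell Labs or Merzenich processing), but with six to eight bands the output of this kind of processing is usually still intelligible.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_bands=7, f_lo=100.0, f_hi=7000.0):
    """Crude noise vocoder: keep only per-band envelopes of the input speech."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)                # log-spaced band edges
    env_sos = butter(2, 30.0, btype="low", fs=fs, output="sos")  # envelope smoother
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)
        envelope = sosfiltfilt(env_sos, np.abs(band))                  # rectify + low-pass
        carrier = sosfiltfilt(band_sos, np.random.randn(len(speech)))  # band-limited noise
        out += envelope * carrier
    return out
```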
Sounding Out the Ocean’s Secrets (PDF)
Scientists wishing to observe blue whales firsthand must simply wait in their ships for the whales to surface. A few whales have been tracked briefly in the wild this way but not for very great distances, and much about them remains unknown. Using the SOSUS stations, scientists can track the whales in real time, positioning them on a map. Moreover, they can track not just one whale at a time, but many creatures simultaneously throughout the North Atlantic and the eastern North Pacific. They also can learn to distinguish whale calls. For example, Fox and colleagues have detected changes in the calls of finback whales during different seasons and have found that blue whales in different regions of the Pacific Ocean have different calls.
That’s not really illustrative, but I’m sure you’ll understand.
Sometimes the main contribution of game theory to auction design is not some deep theorem but simply the idea that it is vital for auction designers and bidders to put themselves into the minds of their opponents. In recent years several disastrous auctions have shown that when an auction is poorly designed, bidders will exploit the rules in ways the auction’s creators didn’t anticipate.
For instance, in 2000, Turkey auctioned two telecom licenses one after another, with the stipulation that the selling price of the first license would be the reserve price for the second license—the minimum price they would accept for it. One company bid an enormous price for the first license, figuring that no one would be willing to pay that much for the second license, which did in fact go unsold. The company thus gained a monopoly, making its license very valuable indeed.
Consider how and why that could have happened (assuming that’s an accurate account, which I haven’t vetted). It seems less that something went wrong and more that there’s just so much knowledge, communication, and institutional competence needed in the same place for that sort of thing to go right.
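To make the incentive concrete, here is a toy version with made-up numbers (mine, not the actual Turkish bids): because the first sale price becomes the floor for the second sale, overbidding on the first license can be profitable purely because it guarantees the second one goes unsold.

```python
LICENSE_VALUE = 10    # value of a license if a competitor also gets one
MONOPOLY_VALUE = 30   # value of the only license on the market

for first_bid in (8, 18):
    # Rivals will pay at most LICENSE_VALUE, and the reserve price for the
    # second license is whatever the first one sold for.
    second_sells = first_bid <= LICENSE_VALUE
    payoff = (LICENSE_VALUE if second_sells else MONOPOLY_VALUE) - first_bid
    status = "sold" if second_sells else "unsold"
    print(f"first bid {first_bid}: second license {status}, winner's payoff {payoff}")

# first bid 8:  second license sold,   winner's payoff 2
# first bid 18: second license unsold, winner's payoff 12
# Deliberately overpaying for the first license buys a monopoly on the second.
```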
With the factoring methods and computer technology then available, the three codemakers estimated that it would take someone 40 quadrillion years to break the cipher.
This was like waving a red flag in front of a bull. In the end it took only 17 years, accompanied by tremendous advances in computer technology and factoring algorithms, for persistent mathematicians and computer scientists to decode the message. Led by a group of experts, a team of more than 600 volunteers in two dozen countries, collaborating over the Internet, factored the RSA 129-digit key in 1994. The team used a new factoring algorithm called the “quadratic sieve,” invented in 1981 by Carl Pomerance, now at Bell Labs, which has the ability to distribute the computation among many computers.
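To make the factoring-equals-codebreaking connection concrete, here is a minimal sketch with toy numbers of my own (nothing like 129 digits, and plain trial division rather than the quadratic sieve): whoever can split the public modulus into its two primes can redo the private-key computation and read the message.

```python
def trial_factor(n):
    """Smallest prime factor by trial division (hopeless at real RSA key sizes)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

# Key owner's setup (textbook-sized toy numbers)
p, q = 61, 53
n, e = p * q, 17                        # public key: (n, e) = (3233, 17)
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent (Python 3.8+ modular inverse)
ciphertext = pow(42, e, n)              # encrypt the message 42

# Attacker: factor n, rebuild the private key, decrypt
p2 = trial_factor(n)
q2 = n // p2
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(ciphertext, d2, n))           # prints 42 -- message recovered
```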
During the next decade and a half, researchers at many laboratories tried in vain to isolate the infectious agents that cause the two types of hepatitis. Scientists suspected that the culprit organisms were viruses because they were small enough to pass through some of the smallest-pore filters used in experiments, but the scientists were unable to grow them in order to identify and study them. By the mid-1960s, hepatitis research had reached a discouraging deadlock. Then a remarkable advance in knowledge of the causes of hepatitis was made by someone who was not working on the disease at the time. Baruch Blumberg, a medical researcher specializing in internal medicine and biochemistry, was interested in a more basic question—why were some people prone to particular diseases?
The Ozone Depletion Phenomenon (PDF)
This unexpected discovery prompted Lovelock to do further studies. Accordingly, he asked the British government for a modest sum of money to place his apparatus on board a ship traveling from England to Antarctica. His request was rejected; one reviewer commented that even if such a measurement succeeded, he could not imagine a more useless bit of knowledge than finding the atmospheric concentration of CFC-11.
Unraveling the Enigma of Vitamin D (PDF)
By 1924, the practical side of the battle against rickets had been won. Across the United States, children began consuming irradiated milk and bread and, seemingly overnight, the imminent threat of epidemic disease dwindled to a half-forgotten historical event. But the quest to understand vitamin D was only just beginning, for scientists still knew almost nothing of what it was or how it worked.
Again with the usefulness of research even after the apparent applications have been realized. Also:
Between 1968 and 1971, researchers made great progress in understanding the metabolic processing of vitamin D and its physiological activity. In 1968 a team headed by Hector F. DeLuca at the University of Wisconsin isolated an active substance identified as 25-hydroxyvitamin D3, which the team later proved to be produced in the liver. During the next two years, the Wisconsin team, Anthony W. Norman and colleagues at the University of California-Riverside, and E. Kodicek and coworkers at Cambridge University in England independently reported the existence of a second active metabolite. Kodicek and David R. Fraser showed that this second metabolite is produced in the kidney. Finally, in 1971 all three research groups published papers in which they reported the chemical/molecular structure of this metabolite, which was identified as 1,25-dihydroxyvitamin D3.
This is what multiple discovery often looks like; the “independent” parts are just a few links in a chain of developments approaching a clear target or bottleneck, after which the multiple hubs of expertise remain at least as useful, if not more so—Norman, Fraser, and DeLuca remained prominent Vitamin D researchers, with the latter two still active and as far as I can tell not stepping on one another’s toes nearly as much.
Wavelets: Seeing the Forest and the Trees (PDF)
Even though as an organized research topic wavelets is less than two decades old, it arises from a constellation of related concepts developed over a period of nearly two centuries, repeatedly rediscovered by scientists who wanted to solve technical problems in their various disciplines. Signal processors were seeking a way to transmit clear messages over telephone wires. Oil prospectors wanted a better way to interpret seismic traces. Yet “wavelets” did not become a household word among scientists until the theory was liberated from the diverse applications in which it arose and was synthesized into a purely mathematical theory. This synthesis, in turn, opened scientists’ eyes to new applications.
Perhaps the broader lesson of wavelets is that we should not view basic and applied sciences as separate endeavors: Good science requires us to see both the theoretical forest and the practical trees.
The Vine-Matthews hypothesis, published in the fall of 1963, garnered little support in the geophysical community, partly because the magnetic reversal timescale was not yet complete, so the seafloor anomaly data matched their hypothesis only poorly. But two years later, in 1965, Fred Vine found himself in the company of Harry Hess, who had arrived at Cambridge on sabbatical leave, and J. Tuzo Wilson, there from the University of Toronto, continuing some of his own research on midocean ridges.
Wilson was examining Raff and Mason’s maps of the seafloor area off the coast of Vancouver Island and south to California, and he suggested that the maps showed a seafloor spreading ridge. Vine and Wilson published a paper in October 1965 proposing a model for seafloor spreading in the northeastern Pacific, using as evidence bands of reversed magnetism that marched out from either side of the ridge. Shortly thereafter, a slight discrepancy between the seafloor reversal bands and the timing of known field reversals on land was smoothed out by a new land-based field reversal discovered by Doell and Dalrymple. With this addition, the two sets of data matched astonishingly well.
There’s also a good article on El Niño and La Niña which for whatever reason didn’t get the label of Beyond Discovery. [As I write this (June 2014), there’s a recent (February 2014) paper in PNAS claiming a three-in-four likelihood of a return of El Niño by the end of this year; having predicted it more than 6 months in advance would be very exciting if it does happen. John Baez writes more at Azimuth.]
If you worry that “standard” popular histories of science emphasize moments of discovery, paradigm shifts, lone geniuses, and linear narratives with inevitable, correct, and final conclusions, then you might find Beyond Discovery a useful corrective. It doesn’t always give a great sense of the messiness; it’s often still a rosy retrospective of a march directly into the future. It does observe, at least, that discovery is frequently only present in hindsight, and more of a bottleneck than a destination; and even if it doesn’t talk about all the wrong turns, it certainly highlights obstacles, setbacks, and the diverse sources contributing to their solutions. Moreover, it serves as a qualitative argument for the value of both untargeted basic research and post-discovery “incremental” work, at least in the past. In a later post I’ll address these and related issues in scientific inputs and outputs more directly.
I’ll also take this opportunity to link some mini-histories of science that are nonstandard in similar ways:
Failed theories of superconductivity (2010) fights the good fight against hindsight and survivorship bias with a look at pre-BCS theories. The abstract:
Almost half a century passed between the discovery of superconductivity by Kamerlingh Onnes and the theoretical explanation of the phenomenon by Bardeen, Cooper and Schrieffer. During the intervening years the brightest minds in theoretical physics tried and failed to develop a microscopic understanding of the effect. A summary of some of those unsuccessful attempts to understand superconductivity not only demonstrates the extraordinary achievement made by formulating the BCS theory, but also illustrates that mistakes are a natural and healthy part of the scientific discourse, and that inapplicable, even incorrect theories can turn out to be interesting and inspiring.
It also contains the following amusing snippet:
The second idea proposed in 1932 by Bohr and Kronig was that superconductivity would result from the coherent quantum motion of a lattice of electrons. Given Bloch’s stature in the field, theorists like Niels Bohr were eager to discuss their own ideas with him. In fact Bohr, whose theory for superconductivity was already accepted for publication in the July 1932 issue of the journal “Die Naturwissenschaften”, withdrew his article in the proof stage, because of Bloch’s criticism (see Ref.[20]). Kronig was most likely also aware of Bloch’s opinion when he published his ideas[22]. Only months after the first publication he responded to the criticism made by Bohr and Bloch in a second manuscript[23]. It is tempting to speculate that his decision to publish and later defend his theory was influenced by an earlier experience: in 1925 Kronig proposed that the electron carries spin, i.e. possesses an internal angular momentum. Wolfgang Pauli’s response to this idea was that it was interesting but incorrect, which discouraged Kronig from publishing it. The proposal for the electron spin was made shortly thereafter by Samuel Goudsmit and George Uhlenbeck[29]. Kronig might have concluded that it is not always wise to follow the advice of an established and respected expert.
On the Benefits of Promoting Diversity of Ideas (2014) by Abraham Loeb not only lists a number of incorrect predictions in astronomy, but describes specific failures and delays caused by pessimism and tunnel-vision apparently justified by theoretical priors in a data-starved field.
During her PhD thesis in 1925 (which was the first PhD in Astronomy at Harvard-Radcliffe), Cecilia Payne-Gaposchkin interpreted the solar spectrum based on the Saha equation and concluded that the Sun’s atmosphere is made mostly of hydrogen. While reviewing her dissertation, the distinguished Princeton astronomer Henry Norris Russell convinced her to avoid the conclusion that the composition of the Sun is different from that of the Earth, as it contradicted the conventional wisdom at the time.
Scott and Scurvy is an incredible essay by Maciej Cegłowski on the tortuous history of scurvy and its purported cures:
Now, I had been taught in school that scurvy had been conquered in 1747, when the Scottish physician James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease. From that point on, we were told, the Royal Navy had required a daily dose of lime juice to be mixed in with sailors’ grog, and scurvy ceased to be a problem on long ocean voyages.
But here was a Royal Navy surgeon in 1911 apparently ignorant of what caused the disease, or how to cure it. Somehow a highly-trained group of scientists at the start of the 20th century knew less about scurvy than the average sea captain in Napoleonic times. Scott left a base abundantly stocked with fresh meat, fruits, apples, and lime juice, and headed out on the ice for five months with no protection against scurvy, all the while confident he was not at risk. What happened?
There’s also the story of the missing I-J determinant in immunology, which at least has a happy ending. That and the above account are both anomalously dramatic, but those patterns do repeat themselves on smaller and less formal scales.
Anthropologist Hugh Gusterson wrote “A Pedagogy of Diminishing Returns: Scientific Involution across Three Generations of Nuclear Weapons Science” (2005) about the strange sort of inward-turning and withering of nuclear weapons science in the post-testing era. That field (as well as national labs and megaprojects more generally) often seems to be dramatically idiosyncratic or even dysfunctional — but as with many of those dramatic features, the process Gusterson describes is a magnified version of something that plays out in some form in all sorts of labs as fads and funding wax and wane. (Not to mention that perhaps most actual work in science is done by people in temporary and training positions, who today are very likely to leave science, taking a great deal of tacit knowledge with them.)
The Hanbury Brown and Twiss experiment is an interesting case where classical electromagnetism easily produces a correct answer, while the quantum mechanical explanation involves important subtleties. It caused controversy when it was performed in the 1950s, with critics saying that if the results were correct they would call for a “major revision of some fundamental concepts in quantum mechanics.” This was not at all true, as some people recognized immediately. From a certain perspective the quantum theory necessary for a correct explanation had been developed decades earlier (doubly true for the debate’s reappearance in the 1990s), but certain distinctions, particularly in source and detector physics, had not yet been made relevant by experiment. (Additionally, Dirac had written something that made a certain sense in the context of the 1930s but confused many physicists trying to apply it to understanding HBT: “Interference between different photons never occurs.”) The HBT paper in 1956 was then one of the motivations for developing theory along these lines, laying the foundations for quantum optics. I may write more about it someday, but for now The Twiss-Hanbury Brown Controversy: A 40-Years Perspective is a good overview.
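For what it’s worth, the classical calculation really is short (my gloss, not from the linked overview): for chaotic light the normalized intensity correlation measured by two detectors obeys the Siegert relation,

$$ g^{(2)}(\tau) \;=\; \frac{\langle I(t)\,I(t+\tau)\rangle}{\langle I\rangle^{2}} \;=\; 1 + \left|g^{(1)}(\tau)\right|^{2}, $$

so at zero delay the correlation is twice the uncorrelated value, which is the photon bunching Hanbury Brown and Twiss measured; the quantum subtleties are about why and when a photon picture reproduces this, not about the result itself.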
A Half Century of Density Functional Theory (2015) celebrates a theory exceptional in that it in some sense fits the “discovery” narrative very well — it wasn’t at all “in the air” as these things often are. On the other hand, DFT’s value took some time to be recognized, especially among quantum chemists, for somewhat arbitrary reasons. [Additional links are quotes.]
I expect I’ll continue adding to this list; let me know if you have something you think belongs here.