Thinking on the page

“Thinking on the page” is a handle that I’ve found useful in improving my writing and introspection more generally. When I write, for the most part, I’m trying to put something that I already feel is true into words. But when I think on the page, the words are getting ahead of my internal sense of what’s true. I’m writing something that just sounds good, or even just something that logically flows from what I’ve already written but has come untethered from my internal sense. It’s kind of a generalized verbal overshadowing.

I don’t think this is challenging only to people who think [of themselves as thinking] non-verbally, considering how much more universal the relevant experiences are: “this puts exactly what I believe into words better than I ever could”, or even just the satisfaction of finding a word that was on the tip of the tongue. Some people seem to be better than others not just at describing their internal sense of truth, but at tapping into it at all. But if you think only in internal monologue, you may have a very different perspective on “thinking on the page”—I’d be interested to hear about it.

At best, this is what happens in what Terry Tao calls the “rigorous” stage of mathematics education, writing, “The point of rigour is not to destroy all intuition; instead, it should be used to destroy bad intuition while clarifying and elevating good intuition.” At worst, it’s argument based on wordplay. Thinking on the page can be vital when you’re working beyond your intuition, but it’s equally vital to notice that you’re doing it. If what you’re writing doesn’t correspond to your internal sense of what’s true, is that because you’re using your words wrong, or because you need to use the page as prosthetic working memory to build a picture that can inform your internal sense?

The two places this becomes clearest for me are in academic writing and in art critique. Jargon has the effect of constantly pulling me back towards the page. If it doesn’t describe a native concept, I can either heroically translate my entire sense of things and arguments about them into jargon, or I can translate the bare raw materials and then manipulate them on the page—so much easier. As for art, the raw material of the work is already there in front of me—so tempting to take what’s easy to point to and sketch meaning from it, while ignoring my experience of the work, let alone what the raw material had to do with that experience.

A lack of examples often goes hand in hand with thinking on the page. Just look at that last paragraph: “translate”, “raw materials”, “manipulate”—what am I even talking about? An example of both the jargon and art failure modes might be my essay about Yu-Gi-Oh! Duel Monsters. My analysis isn’t entirely a joke, but it’s not a realistic reading in terms of the show’s experiential or rhetorical effect on the audience, intended or otherwise. The protagonist’s belief in the heart of the cards and his belief in his friends are genuinely thematically linked, but neither one is the kind of “shaping reality by creative utterance” that has anything to do with how the characters talk their way around the procedural weirdness of the in-show card game as game. But when I put all these things in the same language, I can draw those connections just fine. I’m playing a game with symbols like “creative utterance”.

How can one notice when this is happening? Some clues for me:

  • I feel like I’m moving from sentence to sentence rather than back and forth between thought and sentence
  • I feel something like “that’s not quite right”
  • I feel a persistent “tip of the tongue” sensation even after writing
  • I feel clever
  • I haven’t used an example in a while
  • I’m using jargon or metaphor heavily

What can one do after noticing?

  • Try to pull the words back into your internal picture, to check whether they fit or not—they might, and then you’ve learned something
  • Rewrite without self-editing until something feels right, using as many words or circumlocutions as it takes
  • Try to jar the wording you want mentally into place by trying more diverse inputs or contexts (look at distracting media, related essays, a thesaurus)
  • Ask “but is that true?”
  • Connect with specific examples
  • Focus on the nonverbal feeling you want to express; try to ask it questions

What’s a good way to practice?

  • Write reviews of art/media you encounter, then read other people’s reviews. Insofar as “not being led astray by thinking on the page” is more than the skill of writing-as-generically-putting-things-into-words, I think this is a good place to practice what’s particular to it. People seem to have a good enough sense of what they liked and why for good criticism to resonate, but often not enough to articulate that for themselves, at least without practice. So it can be good to pay attention to where the attempted articulations go wrong.
  • Write/read mathematical proofs or textbook physics problems, paying attention to how the steps correspond to your sense of why the thing is true (or using the steps to create a sense of why it’s true)
  • If it seems like the sort of thing that would do something for you, find a meditation practice that involves working with “felt senses” (I don’t have a real recommendation here, but it’s the kind of thing Gendlin’s Focusing aims for)

The goal isn’t to eliminate thinking on the page, but to be more deliberate about doing it or not. It can be useful, even if I haven’t said as much about that.

One thing I don’t recommend is using “you’re thinking on the page” as an argument against someone else. If you find yourself thinking that, it’s probably an opportunity to try harder to get in their head. Like most concepts about ways thinking can go wrong, this one is best wielded introspectively.

(Here’s a puzzle: can you go up another level? If I’m saying something like “felt senses/thoughts want to make words available”, then what things “want to make felt senses available”? Can you do anything with them?)

Link: Trial By Mathematics

Since it’s been a while, I’ll reaffirm my pre-hiatus policy on linkposts:

I’d like to use your attention responsibly. To that end, I want to avoid spraying links to whatever’s recently hijacked my brain. When I share a link, I’ll do my best to make it a classic: a text I’ve had time to consider and appreciate, whose value has withstood the vagaries of the obsession du jour, my own forgetfulness and mental flux; something that changed my mind or (and) continues to inform my outlook; a joy to read and re-read. A piece of my personal Internet canon. Anyway, don’t get your hopes up.

My previous post, on fair tests with unfair outcomes, is a narrow piece in a broad dialogue about quantitative measures in prediction and decision-making. I pointed out one specific statistical fact that shows up in certain situations, and observed that it’s valuable to consider “statistical fairness” not just of a test but of the decisions and consequences that follow from it. Outcomes that are fair in the ways we care about don’t necessarily flow from tests that give unbiased predictions, especially when we care about error rates of decisions more than point estimates from predictions (i.e. more than what some people think of as the part of the whole business that needs to be fair or not).

One much more important piece of this dialogue is Tetlock’s Expert Political Judgment. I strongly recommend it, for its substantive contribution to what we can say on the subject, its practical grounding, and its thread of doing better through intellectual humility. It also serves as a great example of thorough and even-handed analysis, and the patient exhaustion of alternative explanations.

Another such piece is Laurence Tribe’s Trial By Mathematics. Tribe systematically considers mathematical methods in civil and criminal trials. His analysis applies much more broadly. Along the way he makes a number of subtle points about probability, Bayesianism, and utility, as well as about the tradeoffs between trial sensitivity and specificity. In his words:

A perhaps understandable pre-occupation with the novelties and factual nuances of the particular cases has marked the opinions in this field, to the virtual exclusion of any broader analysis of what mathematics can or cannot achieve at trial—and at what price. As the number and variety of cases continue to mount, the difficulty of dealing intelligently with them in the absence of any coherent theory is becoming increasingly apparent. Believing that a more general analysis than the cases provide is therefore called for, I begin by examining—and ultimately rejecting—the several arguments most commonly advanced against mathematical proof. I then undertake an assessment of what I regard as the real costs of such proof, and reach several tentative conclusions about the balance of costs and benefits.

He then stands back and asks about the broader effects of adopting a quantitative rule, not in terms of a utility calculation but as a matter of legitimacy and confidence in the justice system:

As much of the preceding analysis has indicated, rules of trial procedure in particular have importance largely as expressive entities and only in part as means of influencing independently significant conduct and outcomes. Some of those rules, to be sure, reflect only “an arid ritual of meaningless form,” but others express profoundly significant moral relationships and principles—principles too subtle to be translated into anything less complex than the intricate symbolism of the trial process. Far from being either barren or obsolete, much of what goes on in the trial of a lawsuit—particularly in a criminal case—is partly ceremonial or ritualistic in this deeply positive sense, and partly educational as well; procedure can serve a vital role as conventionalized communication among a trial’s participants, and as something like a reminder to the community of the principles it holds important. The presumption of innocence, the rights to counsel and confrontation, the privilege against self-incrimination, and a variety of other trial rights, matter not only as devices for achieving or avoiding certain kinds of trial outcomes, but also as affirmations of respect for the accused as a human being—affirmations that remind him and the public about the sort of society we want to become and, indeed, about the sort of society we are.

I’m always reluctant to excerpt things like this—particularly summaries of arguments like in the second quote—because it feels like I’m implying that the summary is the extent of what you’ll get out of it, and that it’s not worth reading in its entirety. Let there be no confusion: Trial By Mathematics belongs on any reading list about decision-making under uncertainty, and it’s only 65 pages.

 

Unfair outcomes from fair tests

[Status: I’m sure this is well known, so I’d appreciate pointers to explanations by people who are less likely to make statistical or terminological errors. I sometimes worry I do too much background reading before thinking aloud about something, so I’m experimenting with switching it up. A quick search turns up a number of papers rediscovering something like this, like Fair prediction with disparate impact from this year. Summary: Say you use a fair test to predict a quality for which other non-tested factors matter, and then you make a decision based on this test. Then people who do worse on the test measure (but not necessarily the other factors) are subject to different error rates, even if you estimate their qualities just as well. If that’s already obvious, great; I additionally try to present the notion of fairness that lets one stop at “the test is fair; all is as it should be” as a somewhat arbitrary line to draw with respect to a broader class of notions of statistical fairness.]

What’s a fair test?[1] Well, it probably shouldn’t become easier or harder based on who’s taking it. More strongly, we might say it should have the same test validity for the same interpretation of the same test outcome, regardless of the test-taker. For example, using different flavors of validity:

  • Construct validity: To what extent does the test measure what it claims to be measuring? “Construct unfairness” would occur when construct validity varies between test-takers. If you’re measuring “agility” by watching an animal climb a tree, that could be valid for cats, but less so (hence unfair) for dogs.[2]
  • Predictive validity: To what extent is the measure related to the prediction or decision criterion the test is to be used for? Imagine a test that measures what it claims to measure and isn’t biased against anyone, but isn’t predictive for some subset of the population. Filtering everyone through this test could be considered unfair. If we consider the test as administered and not just in the abstract, we also run into predictive unfairness due to differential selection bias for test-takers from different groups.

As an example of predictive unfairness, say I’m hiring college students for a programming internship, and I use in-major GPA for a cutoff.[3] I can say it has construct fairness if I don’t pretend it’s a measure of anything more than performance in their major.[4] But that number is much more predictive of job performance for Computer Science majors than for Physics majors.

This is “unfair” in a couple ways. Many CS students will be more prepared for the job by virtue of experience, but will be outranked by non-coder Physics students with high GPA. At the same time, the best Physics majors for the job can be very good at programming, but that mostly won’t show up in their physics GPA.

Can we make the use of a GPA cutoff “fair”? Well, say the correlation between physics GPA and coding experience is small but nonzero. We can raise the cutoff for non-CS majors until the expected job performance at the two cutoffs is the same. From the employer’s point of view, that’s the smart thing to do, assuming threshold GPA-based hiring.[5] Then we have a new “test” that “measures” [GPA + CS_major_bonus] and has better predictive fairness with respect to job performance.[6] We’re still doing poorly by the secretly-coding physicists, but it’s hard to see how one could scoop up more of them without hiring even more false-positive physicists.[7]
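To make that adjustment concrete, here is a minimal sketch in Python. Every number in it is made up for illustration (the slopes, the noise, the 3.2 cutoff come from nowhere in particular): assuming job performance depends linearly on GPA, with a steep slope for CS majors and a shallow one for Physics majors, we can solve for the Physics cutoff at which expected performance matches the CS cutoff, and check it against a simulation.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    def simulate(slope, noise_sd):
        # GPA roughly between 2.0 and 4.0; performance depends on GPA with a
        # group-specific slope, plus the part GPA doesn't capture.
        gpa = np.clip(rng.normal(3.0, 0.4, n), 2.0, 4.0)
        perf = slope * (gpa - 3.0) + rng.normal(0.0, noise_sd, n)
        return gpa, perf

    gpa_cs, perf_cs = simulate(slope=0.9, noise_sd=0.3)  # GPA very informative
    gpa_ph, perf_ph = simulate(slope=0.3, noise_sd=0.3)  # GPA weakly informative

    cutoff_cs = 3.2
    # Expected performance at the CS cutoff under the (known) linear model,
    # then solve 0.3 * (cutoff_ph - 3.0) = target for the Physics cutoff.
    target = 0.9 * (cutoff_cs - 3.0)
    cutoff_ph = 3.0 + target / 0.3
    print(f"cutoffs: CS {cutoff_cs:.2f}, Physics {cutoff_ph:.2f}")  # 3.20 vs 3.60

    # Sanity check: hires just above each cutoff now perform about equally well.
    near_cs = perf_cs[(gpa_cs >= cutoff_cs) & (gpa_cs < cutoff_cs + 0.05)]
    near_ph = perf_ph[(gpa_ph >= cutoff_ph) & (gpa_ph < cutoff_ph + 0.05)]
    print(f"performance near cutoff: CS {near_cs.mean():.2f}, Physics {near_ph.mean():.2f}")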

Intuitively, “fairness” wants to minimize the probability that you will fail the test despite your actual performance—the thing the test wanted to predict—being good enough, or that I will succeed despite falling short of the target outcome, perhaps weighted by how far you surpass or I fall short of the requirements. In these terms, we also want to minimize the effects of chance by using all the best information available to us. Varying the cutoff by major seems to have done all that.

So is it a problem that the part of the test that’s directly under the students’ control—their GPA (for the sake of argument, their major is fixed)—is now easier or harder depending on who’s taking it? In this case it seems reasonable.

But there’s still at least one thing about fairness we didn’t capture: we may want the error probabilities not to depend on which groups we fall into. Our model of fairness doesn’t say anything about why we might or might not want that. Perhaps there’s still a way to do better in terms of equalizing the two kinds of error rates between the two populations. Hmm…
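Continuing the made-up model from the sketch above (same simulated arrays and cutoffs), one way to see what is left over is to pick a hypothetical bar for “good enough” actual performance and compute the two error rates for each major. Nothing here comes from real data; in this particular parameterization, the group whose GPA carries less information ends up with a much higher false negative rate, because the screen simply knows less about them.

    # Hypothetical bar for "good enough" actual performance, set at the same
    # level the equalized cutoffs were aimed at in the sketch above.
    good_enough = 0.18

    def error_rates(gpa, perf, cutoff):
        passed = gpa >= cutoff
        good = perf >= good_enough
        fnr = np.mean(~passed & good) / np.mean(good)   # good candidates screened out
        fpr = np.mean(passed & ~good) / np.mean(~good)  # weak candidates let through
        return fnr, fpr

    print("CS      FNR, FPR:", error_rates(gpa_cs, perf_cs, cutoff_cs))
    print("Physics FNR, FPR:", error_rates(gpa_ph, perf_ph, cutoff_ph))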


  1. What’s a test? For now, it’s a piece of imperfect information about some real quality that you want to use to make a decision or prediction.
  2. Dogs do have less of things like joint and spine flexibility, which may matter equally well for tree-climbing and general agility, but they also lack cats’ claws, which mostly help with the trees.
  3. That is, the average grade a student has received in classes in her major. But this is meant to be a generic example; these issues are not far-fetched and can show up in different and even qualitative forms in school, work, court, politics, daily life, charity evaluation…
  4. There’s some gerrymandering here: is it a fair measure of academic performance, or an unfair measure of coding ability?
  5. More specifically, the employer cares about expected utility of letting the candidate past the GPA screen. If false negatives are really bad, and qualifications beyond the minimum don’t matter, then they’d equalize the false negative rate, requiring an even higher cutoff for Physics. If there’s a (costly) interview rather than just hiring, then physics majors may need a lower cutoff, if the CS GPA tells you all you need to know but the physics GPA tells you little. Improving your instrument is good, if you can afford it.
  6. Note that this isn’t a correction for CS majors having a higher mean performance, despite looking kind of like it—it’s just an ad-hoc adjustment for the fact that GPA is less correlated with coding skill for Physics majors, who could be just as good at programming, on average.
  7. I keep saying “physicists” instead of Physics majors because it’s shorter, but I mean, come on, they’re undergrads.

The Hedgehog and the Fox, GMU economist edition

Tyler Cowen on Robin Hanson:

Robin is very fond of powerful theories which invoke a very small number of basic elements and give those elements great force.  He likes to focus on one very central mechanism in seeking an explanation or developing policy advice.  Modern physics and Darwin hold too strong a sway in his underlying mental models.  He is also very fond of hypotheses involving the idea of a great transformation sometime in the future, and these transformations are often driven by the mechanism he has in mind.  I tend to see good social science explanations or proposals as intrinsically messy and complex and involving many different perspectives, not all of which can be reduced to a single common framework.  I know that many of my claims sound vague to Robin’s logical atomism, but I believe that, given our current state of knowledge, Robin is seeking a false precision and he is sometimes missing out on an important multiplicity of perspectives.  Many of his views should be more cautious.

Robin Hanson on Bryan Caplan:

We find ourselves managing complex networks of beliefs. Bryan’s picture seems to be of a long metal chain linked at only one end to a solid foundation; chains of reasoning mainly introduce errors, so we do best to find and hold close to our few most confident intuitions.  My picture is more like Quine’s “fabric,” a large hammock made of string tied to hundreds of leaves of invisible trees; we can’t trust each leaf much, but even so we can stay aloft by connecting each piece of string to many others and continually checking for and repairing broken strings.

Bryan Caplan on Tyler Cowen:

one of the most intellectually stubborn people I know, despite (because of?) his often promiscuous open-mindedness

[original post]

Follow-up on molecular electronics

Exercise #5 discussed a 1983 paper by Forrest Carter on proposed fabrication techniques for “molecular electronics”—electronic devices made from molecular building blocks, which held or hold promise for extending Moore’s Law beyond the limits of silicon. One question was “How many of these methods do you think are in use today, almost 35 years later?” I don’t want to imply that there’s necessarily a moral lesson to be drawn from the answers to the exercise; it was open-ended for a reason. With that in mind, to my knowledge, only one—nanopatterning with LB films—has become an active area of study in a form somewhat like the proposal, although other methods for accomplishing similar goals are also now subjects of experimental research.

For context on Carter, I’ll repost a comment I made on tumblr:

Luke Muehlhauser’s descriptions for OPP of cryonics and especially of molecular nanotechnology as case studies in failures for early field-building strike me as a little odd coming from someone who put Instrumental Community in his annotated bibliography. At the very least why not say something about the success of field-building in (not-MNT) nanotechnology?

This reminds me that I wanted to share “The Long History of Molecular Electronics”, an article about a sort of ancestor of MNT, by Hyungsub Choi and Cyrus Mody (IC author). (A lot of this also shows up in Mody’s new book The Long Arm of Moore’s Law: Microelectronics and American Science, which is really excellent and kind of about me personally.)

Before Drexler, there was Forrest Carter. At the beginning of the story, Carter is a chemist at the Naval Research Lab, specializing in x-ray photoelectron spectroscopy, and he gradually shifts to community-building:

Critically, Carter’s interest in molecular computing grew out of an institutional and disciplinary environment similar to [IBM chemist Ari] Aviram’s, as well as a personal curiosity dating to his graduate training at Caltech. There, he had studied organometallic chemistry under Howard Lucas, graduating in 1956 (Carter, 1956). His Caltech mentors also included Linus Pauling and Richard Feynman; indeed, by Carter’s account, he attended parties at Feynman’s house and played bongos and talked science with the older man. It is interesting to note that Carter knew of and was influenced by Feynman’s (1960) famous ‘Room at the Bottom’ speech – much more so than most other early nanotechnologists.

Moreover, Carter incorporated elements of the Feynman persona into his own presentation of self, developing an expansive, charismatic style that helped him promote bold visions and gather protégés, but which also led to institutional conflict. Like Feynman, he had a taste for exotic hobbies (hot rods, motorcycles, fencing, platform diving, salsa dancing); and, like Feynman, he became known for extraordinary parties, risqué banter, and a coterie of young acolytes. Carter’s striking appearance, rumbling voice, and colorful banter (cited to this day by skeptics and believers alike) personalized molecular electronics as neither Aviram nor the Westinghouse engineers had before him.

By 1983, the radical turn in Carter’s vision for molecular computing was visible in his community-building strategies as well. That year he held the second of his Molecular Electronic Devices workshops (Carter, 1987). Where the first conference had been attended mostly by conducting polymer researchers (many of whom were funded by the Navy), by the second workshop those people were replaced by an eclectic mix of synthetic chemists, biochemists, lithography specialists, and provocative speculators such as Eric Drexler. This broadening of topics and personnel is indicative of Carter’s unmooring of molecular electronics from a particular material such as polysulfur nitride or TCNQ, and his construction of a big tent for all things ‘molecular’.

For Carter, it was among the visionaries in that tent – people like Drexler, Stuart Hameroff, and Stephen Wolfram – that he could discuss cellular automata, molecular assemblers, and biological computing and obtain material for the more radical parts of his vision.[47] That vision was quickly attracting widespread attention. Carter was publishing more papers in mainstream physics and surface science journals in the mid 1980s than at any time in his career; but he was also publishing in more off-beat edited volumes that some of his peers and supervisors were beginning to contest.[48]

Thus, Carter fell into that familiar category of scientist for whom his peers’ evaluations of his science was inseparable from their interpretations of his character and mental state.[51] To his critics, Forrest Carter avoided producing experimental results because he was a second-rate chemist; to his supporters, he was a community-builder whose work was more important for national security than the research the Navy demanded. To critics, he was a wild speculator whose misunderstandings of quantum chemistry would bring the Navy into disrepute; to supporters, a charismatic genius who deliberately advanced unfashionable ideas to provoke discussion.

In some ways he gives us an archetype halfway between Drexler and the mainstream. He was from the start a member of the research community—an experimentalist, even, if not when it came to molecular electronics. He brought the “field” (as much as it was one) into some disrepute (more so in the US than elsewhere), and his big tent mostly collapsed after his death. But a kernel of experimentalists—who may not have talked to each other without Carter’s efforts—regrouped and began to produce credible results (in the least speculative part of the ME research program) with the help of new technologies like scanning probe microscopes. That new generation of molecular electronics has grown into a relatively mature interdisciplinary field. And note that the speculation that set all this off is still controversial—plenty of people expect we’ll never be able or want to manufacture a “molecular computer” even in the reduced sense of “integrating single molecules with silicon”, and no one thinks we’ll be there soon—but since the field doesn’t live so much off of hype, researchers have space to say “hey, it’s basic research” and carry on.

As an addendum to the repost, I should also quote footnote [47]:

According to Hank Wohltjen, Carter and Drexler had a sustained correspondence but eventually they ‘parted ways in that Forrest was not seeking limelight for this stuff, he was trying to play it down and focus on more doable things. And Eric was more into sensationalism’.

And note that the paper linked in the exercise is one of his “papers in mainstream physics and surface science journals in the mid 1980s”, not in an “off-beat” volume.

Hiatus on hiatus

Here it is: the post after a years-long hiatus explaining that the author hopes to revive the blog. You’ve seen this before; you know how often it works. For good luck, I made a real post before this one.

I had a good first month the first time around, then slowed down and stopped over the next six months or so. Other projects became more important. I also stopped tweeting.

I did, however, keep lower-effort posting on my tumblr: here’s a guide to my tags. My most-read post there, and as far as I know the only one that got shared outside of tumblr, is this exegesis of Yu-Gi-Oh! Duel Monsters. Elsewhere, I wrote a handful of articles on effective music practice. There are also a couple other pieces from my LessWrong pastiche phase that were pretty well received, and which I never reposted here, on therapy as a reference class for rationality training and in that light a self-experiment in training noticing confusion.

For the most part, I’ll be posting short pieces and links. Maybe some longer things that I wouldn’t say without the protective irony my tumblr lacks. And I’ll fill in the gaps by polishing up material from tumblr that might be of interest to people who don’t want to follow me there, or that ought to be easier to browse and search for.

My main goal in bringing this back is to increase my internet surface area. Who knows what value my writing has, but if more people have a chance to see it, I’ll have more chances to talk to like-minded people beyond those few that have condensed into my corner of tumblr. In case that’s not clear enough:

If you read my stuff and find it interesting, I want to talk to you. Leave a comment! Say hi!

Not only is that genuinely what I want out of this, but it’s a great way for you to motivate me to post more.

Right now, the best way to get in touch with me is still through Tumblr. There’s also an email link above this post, but you might want to ping me elsewhere (e.g. tumblr anon ask) to prompt me to check that inbox, since it hasn’t seen human activity since 2014.

Exercise #5: Molecular electronics proposals

[As always, I’m not promising this is a good use of your time, but you might find it stimulating.]

Here is a paper from 1983:

In anticipation of the continued size reduction of switching elements to the molecular level, new approaches to materials, memory, and switching elements have been developed. Two of the three most promising switching phenomena include electron tunneling in short periodic arrays and soliton switching in conjugated systems. Assuming a three-dimensional architecture, the element density can range from 10¹⁵ to 10¹⁸ per cc. In order to make the fabrication of such a molecular electronic device computer feasible, techniques for accomplishing lithography at the molecular scale must be devised. Three approaches possibly involving biological and Langmuir-Blodgett materials are described.

Depending on how you count, the author describes ten or so proposed methods for molecular-level fabrication of electronic devices:

  • Merrifield synthesis: An existing method of amino-acid-by-amino-acid polypeptide synthesis in solution is adapted to build up a network of molecular wires on a lithographically prepared substrate. Later, “switching and control functions are added, adjacent components are bonded together, and the process continued, ultimately forming a three dimensional solid array of active and passive components.”
  • Molecular epitaxial superstructures: Engineer a large flat molecule with edge-edge interactions such that it forms an epitaxial layer with small holes on a substrate; then deposit your desired material so that when you remove your flat molecule, all that’s left is the desired material where there were holes in the original layer.
  • Modulated structures: Heat an insulating film under certain conditions so that small conducting lines, which connect the top and bottom of the film, develop at potentially very close distances to each other, giving you a number of active sites on one surface that can be addressed from the other side of the film.
  • Electron tunnelling: Use a periodic molecular chain to switch electron tunneling from one end to another on and off by modulating the depth of potential wells in the chain so that the electron energy becomes on/off resonance with the wells, making it easy/hard to tunnel across.
  • Soliton switching: Use a propagating soliton in a system of alternating single and double bonds to turn chromophores on and off.
  • Neuraminidase: Use the regular arrangement (when crystallized in two dimensions) of the 10-nanometer spore heads of an influenza virus (or other molecules that form interesting shapes and patterns) to derive useful structures. Maybe use multiple tessellated tiles, like in an Escher drawing.
  • Fractals: Use chemical means to induce self-similar patterns at different scales to bridge the macro and micro scales, like in another Escher drawing.
  • Fiberelectronics: Produce a bundle of long wires of 10 nm diameter by filling a hollow glass rod with metal, heating and pulling out the rod by a factor of 100, bundling many such rods together, then hot drawing by a factor of 100 again.
  • Langmuir-Blodgett films: Modify a known technique for producing a film with a precise number of molecular monolayers, in order to incorporate a pattern of active elements or holes.
  • Monolayer polymerization: Build a device by stitching together monolayers so that interesting things happen at the interfaces.

You can read the paper as deeply as you feel necessary to answer the questions.

  1. What kind of paper is this? Is the author credible? What is he trying to accomplish, and on what timescale?
  2. More specifically: How would you read this paper as a scientist in a field it touches on? As a program officer? As a citizen who wants to understand and encourage innovation effectively?
  3. Which method seems the most experimentally accessible for investigation, from the 1983 perspective or from today’s? [Extra credit: What sort of experiment would you do?]
  4. Which method seems the most speculative in terms of whether it will be ultimately physically/chemically realistic even given advanced experimental techniques?
  5. Which would be the most valuable?
  6. How many of these methods do you think are in use today, almost 35 years later? At what stage of development would they be now, or at what stage were they abandoned? What capabilities will have been achieved by other means? [Extra: Try to determine the actual answers.]

New “Beside Discovery” additions

I recently added a few items to my “messy micro-histories of science” section here, reproduced below:

Anthropologist Hugh Gusterson wrote “A Pedagogy of Diminishing Returns: Scientific Involution across Three Generations of Nuclear Weapons Science” (2005) about the strange sort of inward-turning and withering of nuclear weapons science in the post-testing era. That field (as well as national labs and megaprojects more generally) often seems to be dramatically idiosyncratic or even dysfunctional — but as with many of those dramatic features, the process Gusterson describes is a magnified version of something that plays out in some form in all sorts of labs as fads and funding wax and wane. (Not to mention that perhaps most actual work in science is done by people in temporary and training positions, who today are very likely to leave science, taking a great deal of tacit knowledge with them.)

The Hanbury Brown and Twiss experiment is an interesting case where classical electromagnetism easily produces a correct answer, while the quantum mechanical explanation involves important subtleties. It caused controversy when it was performed in the 1950s, with critics saying that if the results were correct they would call for a “major revision of some fundamental concepts in quantum mechanics.” This was not at all true, as some people recognized immediately. From a certain perspective the quantum theory necessary for a correct explanation had been developed decades earlier (doubly true for the debate’s reappearance in the 1990s), but certain distinctions, particularly in source and detector physics, had not yet been made relevant by experiment. (Additionally, Dirac had written something that made a certain sense in the context of the 1930s but confused many physicists trying to apply it to understanding HBT: “Interference between different photons never occurs.”) The HBT paper in 1956 was then one of the motivations for developing theory along these lines, laying the foundations for quantum optics. I may write more about it someday, but for now The Twiss-Hanbury Brown Controversy: A 40-Years Perspective is a good overview.

A Half Century of Density Functional Theory (2015) celebrates a theory exceptional in that it in some sense fits the “discovery” narrative very well — it wasn’t at all “in the air” as these things often are. On the other hand, DFT’s value took some time to be recognized, especially among quantum chemists, for somewhat arbitrary reasons. [Additional links are quotes.]

Exercise #4

Consider the following experimental results:

People who became vegetarians for ethical reasons were found to be more committed to their diet choice and remained vegetarians for longer than those who did so for health reasons.

Loyalty to expert advisers (doctors, financial advisors, etc.) leads to higher prices but not necessarily better services.

Smokers who viewed ads featuring messages about “how” to quit smoking were substantially more likely to quit than those who viewed ads with reasons “why” to quit.

In adults, creativity was substantially inhibited during and shortly after walking (either outdoors or on a treadmill) as compared to sitting.

Answer each question before scrolling down and reading the next, because of SPOILERS:

1. How do you explain these effects?

 

 

 

2. How would you have gone about uncovering them?

 

 

 

3. These are all reversed, and the actual findings were the opposite of what I said. How do you explain the opposite, correct effects?

 

 

 

4. Actually, none of these results could be replicated. Why and how were non-null effects detected in the first place? Answers using your designs from (2) are preferable.

 

 

 

Final spoilers below.

 

 

 

For the real findings, see Useful Science (1, 2, 3, 4), which is Useful as a source of further exercises, at least. Some of the four are indeed reversed, but as far as I know I made up the part about replication. Reflect on the quality of your explanations and on any feelings of confusion you noticed or failed to notice. I apologize for lying; it was for your own good.

Extra credit: Follow the links to find the original papers. Compare your proposed test, and determine whether your alternative explanations were ruled out.

Beyond Discovery

Beyond Discovery™: The Path from Research to Human Benefit is a series of articles that trace the origins of important recent technological and medical advances. Each story reveals the crucial role played by basic science, the applications of which could not have been anticipated at the time the original research was conducted.

The National Academy of Sciences ran this project from 1996 to 2003. The website went offline in early 2013 and as of June 2014 is still “under construction.” [2015 update: that link now leads to a page with all the articles as PDFs! Except the MRI one, for some reason.] You can [also] find all twenty articles in the Internet Archive, but it’s kind of a pain to navigate. So I’ve gathered them all here.

The articles (each around 8 pages) are roughly popular-magazine-level accounts of variable quality, but I learned quite a bit from all of them, particularly from the biology and medicine articles. They’re very well written, generally with input from the relevant scientists still living (many of them Nobel laureates). In particular I like the broad view of history, the acknowledged scope of the many branches leading to any particular technology, the variety of topics outside the usual suspects, the focus on fairly recent technology, and the emphasis, bordering on the propagandistic, on the importance and unpredictability of basic research. It seems to me that they filled an important gap in popular science writing in this way. Links, quotes, and some brief comments follow.
