Follow-up on molecular electronics

Exercise #5 discussed a 1983 paper by Forrest Carter on proposed fabrication techniques for “molecular electronics”—electronic devices made from molecular building blocks, which held or hold promise for extending Moore’s Law beyond the limits of silicon. One question was “How many of these methods do you think are in use today, almost 35 years later?” I don’t want to imply that there’s necessarily a moral lesson to be drawn from the answers to the exercise; it was open-ended for a reason. With that in mind, to my knowledge, only one—nanopatterning with LB films—has become an active area of study in a form somewhat like the proposal, although other methods for accomplishing similar goals are also now subjects of experimental research.

For context on Carter, I’ll repost a comment I made on tumblr:

Luke Muehlhauser’s descriptions for OPP of cryonics and especially of molecular nanotechnology as case studies in failures for early field-building strike me as a little odd coming from someone who put Instrumental Community in his annotated bibliography. At the very least why not say something about the success of field-building in (not-MNT) nanotechnology?

This reminds me that I wanted to share “The Long History of Molecular Electronics”, an article about a sort of ancestor of MNT, by Hyungsub Choi and Cyrus Mody (IC author). (A lot of this also shows up in Mody’s new book The Long Arm of Moore’s Law: Microelectronics and American Science, which is really excellent and kind of about me personally.)

Before Drexler, there was Forrest Carter. At the beginning of the story, Carter is a chemist at the Naval Research Lab, specializing in x-ray photoelectron spectroscopy, and he gradually shifts to community-building:

Critically, Carter’s interest in molecular computing grew out of an institutional and disciplinary environment similar to [IBM chemist Ari] Aviram’s, as well as a personal curiosity dating to his graduate training at Caltech. There, he had studied organometallic chemistry under Howard Lucas, graduating in 1956 (Carter, 1956). His Caltech mentors also included Linus Pauling and Richard Feynman; indeed, by Carter’s account, he attended parties at Feynman’s house and played bongos and talked science with the older man. It is interesting to note that Carter knew of and was influenced by Feynman’s (1960) famous ‘Room at the Bottom’ speech – much more so than most other early nanotechnologists.

Moreover, Carter incorporated elements of the Feynman persona into his own presentation of self, developing an expansive, charismatic style that helped him promote bold visions and gather protégés, but which also led to institutional conflict. Like Feynman, he had a taste for exotic hobbies (hot rods, motorcycles, fencing, platform diving, salsa dancing); and, like Feynman, he became known for extraordinary parties, risqué banter, and a coterie of young acolytes. Carter’s striking appearance, rumbling voice, and colorful banter (cited to this day by skeptics and believers alike) personalized molecular electronics as neither Aviram nor the Westinghouse engineers had before him.

By 1983, the radical turn in Carter’s vision for molecular computing was visible in his community-building strategies as well. That year he held the second of his Molecular Electronic Devices workshops (Carter, 1987). Where the first conference had been attended mostly by conducting polymer researchers (many of whom were funded by the Navy), by the second workshop those people were replaced by an eclectic mix of synthetic chemists, biochemists, lithography specialists, and provocative speculators such as Eric Drexler. This broadening of topics and personnel is indicative of Carter’s unmooring of molecular electronics from a particular material such as polysulfur nitride or TCNQ, and his construction of a big tent for all things ‘molecular’.

For Carter, it was among the visionaries in that tent – people like Drexler, Stuart Hameroff, and Stephen Wolfram – that he could discuss cellular automata, molecular assemblers, and biological computing and obtain material for the more radical parts of his vision.[47] That vision was quickly attracting widespread attention. Carter was publishing more papers in mainstream physics and surface science journals in the mid 1980s than at any time in his career; but he was also publishing in more off-beat edited volumes that some of his peers and supervisors were beginning to contest.[48]

Thus, Carter fell into that familiar category of scientist for whom his peers’ evaluations of his science were inseparable from their interpretations of his character and mental state.[51] To his critics, Forrest Carter avoided producing experimental results because he was a second-rate chemist; to his supporters, he was a community-builder whose work was more important for national security than the research the Navy demanded. To critics, he was a wild speculator whose misunderstandings of quantum chemistry would bring the Navy into disrepute; to supporters, a charismatic genius who deliberately advanced unfashionable ideas to provoke discussion.

In some ways he gives us an archetype halfway between Drexler and the mainstream. He was from the start a member of the research community—an experimentalist, even, if not when it came to molecular electronics. He brought the “field” (as much as it was one) into some disrepute (more so in the US than elsewhere), and his big tent mostly collapsed after his death. But a kernel of experimentalists—who may not have talked to each other without Carter’s efforts—regrouped and began to produce credible results (in the least speculative part of the ME research program) with the help of new technologies like scanning probe microscopes. That new generation of molecular electronics has grown into a relatively mature interdisciplinary field. And note that the speculation that set all this off is still controversial—plenty of people expect we’ll never be able or want to manufacture a “molecular computer” even in the reduced sense of “integrating single molecules with silicon”, and no one thinks we’ll be there soon—but since the field doesn’t live so much off of hype, researchers have space to say “hey, it’s basic research” and carry on.

As an addendum to the repost, I should also quote footnote [47]:

According to Hank Wohltjen, Carter and Drexler had a sustained correspondence but eventually they ‘parted ways in that Forrest was not seeking limelight for this stuff, he was trying to play it down and focus on more doable things. And Eric was more into sensationalism’.

And note that the paper linked in the exercise is one of his “papers in mainstream physics and surface science journals in the mid 1980s”, not in an “off-beat” volume.

Hiatus on hiatus

Here it is: the post after a years-long hiatus explaining that the author hopes to revive the blog. You’ve seen this before; you know how often it works. For good luck, I made a real post before this one.

I had a good first month the first time around, then slowed down and stopped over the next six months or so. Other projects became more important. I also stopped tweeting.

I did, however, keep lower-effort posting on my tumblr: here’s a guide to my tags. My most-read post there, and as far as I know the only one that got shared outside of tumblr, is this exegesis of Yu-Gi-Oh! Duel Monsters. Elsewhere, I wrote a handful of articles on effective music practice. There are also a couple other pieces from my LessWrong pastiche phase that were pretty well received, and which I never reposted here, on therapy as a reference class for rationality training and in that light a self-experiment in training noticing confusion.

For the most part, I’ll be posting short pieces and links. Maybe some longer things that I wouldn’t say without the protective irony my tumblr lacks. And I’ll fill in the gaps by polishing up material from tumblr that might be of interest to people who don’t want to follow me there, or that ought to be easier to browse and search for.

My main goal in bringing this back is to increase my internet surface area. Who knows what value my writing has, but if more people have a chance to see it, I’ll have more chances to talk to like-minded people beyond those few that have condensed into my corner of tumblr. In case that’s not clear enough:

If you read my stuff and find it interesting, I want to talk to you. Leave a comment! Say hi!

Not only is that genuinely what I want out of this, but it’s a great way for you to motivate me to post more.

Right now, the best way to get in touch with me is still through Tumblr. There’s also an email link above this post, but you might want to ping me elsewhere (e.g. tumblr anon ask) to prompt me to check that inbox, since it hasn’t seen human activity since 2014.

Exercise #5: Molecular electronics proposals

[As always, I’m not promising this is a good use of your time, but you might find it stimulating.]

Here is a paper from 1983:

In anticipation of the continued size reduction of switching elements to the molecular level, new approaches to materials, memory, and switching elements have been developed. Two of the three most promising switching phenomena include electron tunneling in short periodic arrays and soliton switching in conjugated systems. Assuming a three-dimensional architecture, the element density can range from 10^15 to 10^18 per cc. In order to make the fabrication of such a molecular electronic device computer feasible, techniques for accomplishing lithography at the molecular scale must be devised. Three approaches possibly involving biological and Langmuir-Blodgett materials are described.
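Those densities are easy to sanity-check with a cubic-lattice estimate. Note the element spacings below are my assumption for illustration; the abstract states only the densities.

```python
# Check the abstract's 10^15-10^18 elements per cc against plausible
# element spacings (spacings assumed by me; the paper gives only densities).

def density_per_cc(spacing_nm: float) -> float:
    """Elements per cubic centimeter in a cubic lattice with the given
    center-to-center spacing. 1 cm = 1e7 nm."""
    return (1e7 / spacing_nm) ** 3

print(density_per_cc(100.0))  # ~1e15: one element every 100 nm
print(density_per_cc(10.0))   # ~1e18: one element every 10 nm
```

So the quoted range corresponds to switching elements spaced roughly 10 to 100 nm apart, consistent with the ~10 nm feature sizes that recur throughout the proposals.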

Depending on how you count, the author describes ten or so proposed methods for molecular-level fabrication of electronic devices:

  • Merrifield synthesis: An existing method of amino-acid-by-amino-acid polypeptide synthesis in solution is adapted to build up a network of molecular wires on a lithographically prepared substrate. Later, “switching and control functions are added, adjacent components are bonded together, and the process continued, ultimately forming a three dimensional solid array of active and passive components.”
  • Molecular epitaxial superstructures: Engineer a large flat molecule with edge-edge interactions such that it forms an epitaxial layer with small holes on a substrate; then deposit your desired material so that when you remove your flat molecule, all that’s left is the desired material where there were holes in the original layer.
  • Modulated structures: Heat an insulating film under certain conditions so that small conducting lines, which connect the top and bottom of the film, develop at potentially very close distances to each other, giving you a number of active sites on one surface that can be addressed from the other side of the film.
  • Electron tunnelling: Use a periodic molecular chain to switch electron tunneling from one end to another on and off by modulating the depth of potential wells in the chain so that the electron energy becomes on/off resonance with the wells, making it easy/hard to tunnel across.
  • Soliton switching: Use a propagating soliton in a system of alternating single and double bonds to turn chromophores on and off.
  • Neuraminidase: Use the regular arrangement (when crystallized in two dimensions) of the 10-nanometer spore heads of an influenza virus (or other molecules that form interesting shapes and patterns) to derive useful structures. Maybe use multiple tessellated tiles, like in an Escher drawing.
  • Fractals: Use chemical means to induce self-similar patterns at different scales to bridge the macro and micro scales, like in another Escher drawing.
  • Fiberelectronics: Produce a bundle of long wires of 10 nm diameter by filling a hollow glass rod with metal, heating and pulling out the rod by a factor of 100, bundling many such rods together, then hot drawing by a factor of 100 again.
  • Langmuir-Blodgett films: Modify a known technique for producing a film with a precise number of molecular monolayers, in order to incorporate a pattern of active elements or holes.
  • Monolayer polymerization: Build a device by stitching together monolayers so that interesting things happen at the interfaces.
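
The fiberelectronics recipe invites a quick back-of-the-envelope check. Two assumptions here are mine, not the paper's: that "a factor of 100" refers to the reduction in wire diameter per draw, and that the glass rod's metal core starts at 100 μm.

```python
# Back-of-the-envelope check on the fiberelectronics drawing steps.
# Assumptions (mine, not the paper's): the metal core starts at 100 um,
# and each hot draw reduces the wire *diameter* by the stated factor of 100.

def hot_draw(diameter_nm: float, factor: float = 100.0) -> float:
    """One heat-and-pull step, shrinking the wire diameter by `factor`."""
    return diameter_nm / factor

d = 1e5  # assumed starting core diameter: 100 um, expressed in nm
for _ in range(2):  # the two drawing-and-bundling rounds described
    d = hot_draw(d)

print(d)  # 10.0 -- the 10 nm wire diameter the proposal targets
```

Under those assumptions the numbers close. If "factor of 100" instead meant the lengthwise stretch, volume conservation would shrink the diameter only tenfold per draw, and the starting core would need to be about 1 μm across.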

You can read the paper as deeply as you feel necessary to answer the questions.

  1. What kind of paper is this? Is the author credible? What is he trying to accomplish, and on what timescale?
  2. More specifically: How would you read this paper as a scientist in a field it touches on? As a program officer? As a citizen who wants to understand and encourage innovation effectively?
  3. Which method seems the most experimentally accessible for investigation, from the 1983 perspective or from today’s? [Extra credit: What sort of experiment would you do?]
  4. Which method seems the most speculative in terms of whether it will be ultimately physically/chemically realistic even given advanced experimental techniques?
  5. Which would be the most valuable?
  6. How many of these methods do you think are in use today, almost 35 years later? At what stage of development would they be now, or at what stage were they abandoned? What capabilities will have been achieved by other means? [Extra: Try to determine the actual answers.]

Once you’ve made your effort, check out my follow-up post.

New “Beside Discovery” additions

I recently added a few items to my “messy micro-histories of science” section here, reproduced below:

Anthropologist Hugh Gusterson wrote “A Pedagogy of Diminishing Returns: Scientific Involution across Three Generations of Nuclear Weapons Science” (2005) about the strange sort of inward-turning and withering of nuclear weapons science in the post-testing era. That field (as well as national labs and megaprojects more generally) often seems to be dramatically idiosyncratic or even dysfunctional — but as with many of those dramatic features, the process Gusterson describes is a magnified version of something that plays out in some form in all sorts of labs as fads and funding wax and wane. (Not to mention that perhaps most actual work in science is done by people in temporary and training positions, who today are very likely to leave science, taking a great deal of tacit knowledge with them.)

The Hanbury Brown and Twiss experiment is an interesting case where classical electromagnetism easily produces a correct answer, while the quantum mechanical explanation involves important subtleties. It caused controversy when it was performed in the 1950s, with critics saying that if the results were correct they would call for a “major revision of some fundamental concepts in quantum mechanics.” This was not at all true, as some people recognized immediately. From a certain perspective the quantum theory necessary for a correct explanation had been developed decades earlier (doubly true for the debate’s reappearance in the 1990s), but certain distinctions, particularly in source and detector physics, had not yet been made relevant by experiment. (Additionally, Dirac had written something that made a certain sense in the context of the 1930s but confused many physicists trying to apply it to understanding HBT: “Interference between different photons never occurs.”) The HBT paper in 1956 was then one of the motivations for developing theory along these lines, laying the foundations for quantum optics. I may write more about it someday, but for now The Twiss-Hanbury Brown Controversy: A 40-Years Perspective is a good overview.

A Half Century of Density Functional Theory (2015) celebrates a theory exceptional in that it in some sense fits the “discovery” narrative very well — it wasn’t at all “in the air” as these things often are. On the other hand, DFT’s value took some time to be recognized, especially among quantum chemists, for somewhat arbitrary reasons. [Additional links are quotes.]

Exercise #4

Consider the following experimental results:

People who became vegetarians for ethical reasons were found to be more committed to their diet choice and remained vegetarians for longer than those who did so for health reasons.

Loyalty to expert advisers (doctors, financial advisors, etc.) leads to higher prices but not necessarily better services.

Smokers who viewed ads featuring messages about “how” to quit smoking were substantially more likely to quit than those who viewed ads with reasons “why” to quit.

In adults, creativity was substantially inhibited during and shortly after walking (either outdoors or on a treadmill) as compared to sitting.

Answer each question before scrolling down and reading the next, because of SPOILERS:

1. How do you explain these effects?

 

 

 

2. How would you have gone about uncovering them?

 

 

 

3. These are all reversed, and the actual findings were the opposite of what I said. How do you explain the opposite, correct effects?

 

 

 

4. Actually, none of these results could be replicated. Why and how were non-null effects detected in the first place? Answers using your designs from (2) are preferable.

 

 

 

Final spoilers below.

 

 

 

For the real findings, see Useful Science, which is Useful as a source of further exercises, at least. Some of the four are indeed reversed, but as far as I know I made up the part about replication. Reflect on the quality of your explanations and on any feelings of confusion you noticed or failed to notice. I apologize for lying; it was for your own good.

Extra credit: Follow the links to find the original papers. Compare your proposed test, and determine whether your alternative explanations were ruled out.

Beyond Discovery

Beyond Discovery™: The Path from Research to Human Benefit is a series of articles that trace the origins of important recent technological and medical advances. Each story reveals the crucial role played by basic science, the applications of which could not have been anticipated at the time the original research was conducted.

The National Academy of Sciences ran this project from 1996 to 2003. The website went offline in early 2013 and as of June 2014 is still “under construction.” [2015 update: that link now leads to a page with all the articles as PDFs! Except the MRI one, for some reason.] You can [also] find all twenty articles in the Internet Archive, but it’s kind of a pain to navigate. So I’ve gathered them all here.

The articles (each around 8 pages) are roughly popular-magazine-level accounts of variable quality, but I learned quite a bit from all of them, particularly from the biology and medicine articles. They’re very well written, generally with input from the relevant scientists still living (many of them Nobel laureates). In particular I like the broad view of history, the acknowledged scope of the many branches leading to any particular technology, the variety of topics outside the usual suspects, the focus on fairly recent technology, and the emphasis, bordering on the propagandistic, on the importance and unpredictability of basic research. It seems to me that they filled an important gap in popular science writing in this way. Links, quotes, and some brief comments follow.

Continue reading

Pendulum

When she lost her faith I said that falsehood yes feels true from inside, and in that moment she was born again.


There’s an old joke. Two lovers freed a genie who would grant them one wish. They wished to switch bodies for a day, so that they might better know each other and thereby grow closer. The genie declared with a sweep of his arm that their wish was his command.

Nothing changed, because dualism is false.


She’d had the bad habit of wandering during the round. All the other matches were always so interesting. Even when she was losing, as did sometimes happen, she couldn’t bring herself to think if it wasn’t her move. Strange what the clock does, and maybe doesn’t do, to your brain.

Once, against a particularly slow adversary, she wandered the entire tournament hall, subitizing some boards and puzzling absorbed over others. She stood finally behind one hopeless player, where for two minutes she invented variations that might save him before she realized she was standing behind her own opponent.

Of course, that was nothing like this. After all, she lost that game.


It started with a Glass knockoff that sold in pairs. See through your partner’s eyes. Experience their field of vision projected onto yours. Nobody cared.

(Well, it saw some success in the adult entertainment industry.)


She went to university to study chemistry, planning-without-planning to go to medical school. We met in P-chem, which I was auditing because I wanted to see just how physical it was. I accidentally convinced her to take quantum mechanics from the Physics department instead. She wound up doing a PhD in physics. These days she does research that sounds more like neurobiology.

If there’s an analogy to be found here, it’s facile, to say the least.


That part about neurobiology is relevant, though. She did a postdoc working on directly shared sensory experience in mice. Biocompatible, electromagnetically and chemically sensitive implants, plus some clever algorithms for inter-nervous-system compatibility. She used to tell me it was easy compared to what was coming next.


She told me about a book she read. It said:

“You must push your head through the wall. It is not difficult to penetrate it, for it is made of thin paper. But what is difficult is not to let yourself be deceived by the fact that there is already an extremely deceptive painting on the wall showing you pushing your head through. It tempts you to say: ‘Am I not pushing through it all the time?’”

She liked that so much she framed an extremely deceptive facsimile of the page on which it appeared and hung it by her desk.

But I hope you don’t think that explains anything.


Uploading turned out to be a dead end, to the chagrin of materialists everywhere. You could store all of the relevant static information, but it never became practical to evolve that information in time outside of a brain. So what are you going to do? Take fine enough timeslices and you can relive experiences by playing them back. And not just sensory experiences—even your thoughts at the time could be reproduced. Motor activity would be deliberately excluded, but proprioception could still be overridden. Your present conscious life would keep going, with part of you aware that you were merely observing; but as long as the present you didn’t interfere, you could without hyperbole relive your past.

A couple early consumer products tried it (isolation tank sold separately), once they figured out how to record and play back without surgery. Received breathless media coverage and millions in venture capital. Went out of business a year after launch, under the pressure of some monster class-action suits. Turned out the physical changes in the brain meant that the same signal eventually produced a different experience, one which was often bizarre and traumatic. The backups didn’t get corrupted—but the hardware did.


She kept a grandfather clock in her dining room. For the pendulum, naturally. So well-behaved at small angles, but force it hard enough and it never retraces its path in phase space. You can repeat a position, but only if you arrive with a different velocity. You can relive a velocity, but never quite in the same place you were the first time. Your only regularity is the fractal structure of your history, your strange attractor.

A terrible metaphor, as you can see. Her clock didn’t even work.


Anyway, that was where she came in: how to account for these physical changes? She’d already figured it out for the basic senses; the architecture was similar enough between individuals. With a mountain of data and graduate students she worked out a system of translation between different versions of an evolving brain.

From there it wasn’t long before she could stream all conscious experience and subconscious activity from one individual to another. The genie just hadn’t been creative enough.


We fought, once, about her poetry. She was struggling, and I dared suggest that she was trying to capture something that wasn’t there. Better to take joy in the real. Like Feynman on the beauty of a flower, or Wallace on the truth in a cliché.

She said:

How many people do you think nod smugly along to that Feynman quote, as though it vindicates their own insipid tastes. As though all art is as juvenile as their high school blue period, and there’s no post-Feynman looking down on them from the other side of the dialectic double helix. Like now that they’ve learned what words really mean and what reality really is they can know instinctively that there’s no materialist consequentialist boundary where art versus science is a meaningful worthwhile framing. Wallace at least knows that to be bullshit, that there’s a reason people talk about things that way. He understands what it means to sublate if anyone does. But he gets interpreted in just as regressive a way as Feynman, like our cynical sophistication failed us and we have to retreat back to where all truth is cliché. Like it wasn’t just another superstimulus, like the taste isn’t still adaptive in the right form and when better informed. But I guess I can explain all day that something not being literally true or materially there isn’t a reason not to write about it and you’ll still fold me with the pre-moderns until you’ve lived through the syncretism yourself.

How my heart aches. I said: Oh I thought you thought Hegel was nonsense.


The moment it was declared safe for humans, she wanted to try it. And who would she trust to stream from?

I asked only that it be mutual.

If you really wanted to understand, you’d have to experience it for yourself. I’m not even sure it would work to stream the experience from someone else. At first it’s just noisy. It takes time to adapt, to figure out how to listen. To see through your partner’s eyes. To think their thoughts, or your brain’s versions of their thoughts. To resolve a disagreement semi-automatically just by finally really seeing each other’s perspective. To mentally ask a question, and for your partner to involuntarily recall the answer in response. To notice the same happening to you, and to then realize how terrifying it is. But then also that your partner isn’t terrified, doesn’t see it as coercive, is sad that you don’t trust her that way. To demand that she turn this thing off before that difference and all your precious differences are resolved by this alien process so mutual that neither of you can control it. To never speak of it again.


There wasn’t much more to it. The two of us, I mean. I keep reliving these stories I collected in the too-short time that we were connected, even though I know they don’t explain anything that happened or might happen yet. There’s no such thing as explanation, after all. But perhaps the pendulum can be pushed back.

Link: In Praise of Passivity

Normally the 2012 publication date would make this too recent for benthic canon, but Michael Huemer’s In Praise of Passivity was written in my heart long ago. The abstract:

Political actors, including voters, activists, and leaders, are often ignorant of basic facts relevant to policy choices. Even experts have little understanding of the working of society and little ability to predict future outcomes. Only the most simple and uncontroversial political claims can be counted on. This is partly because political knowledge is very difficult to attain, and partly because individuals are not sufficiently motivated to attain it. As a result, the best advice for political actors is very often to simply stop trying to solve social problems, since interventions not based on precise understanding are likely to do more harm than good.

I hope you’re already familiar with the anxiety of epistemology that observing polarized debates ought to induce, but Huemer gives an unusually concise, thorough, and well-documented survey of the landscape. This is, as always, not to say that I agree with every word of it—but this is largely a matter of degree, in particular of the extent to which the best we can do is to set our hearts on doing nothing and thus leave nothing undone. The author risks handing readers a lofty principle that’s too easily used to argue one position and dismiss others without ever engaging the positions’ particulars. But I take these ideas as an unspoken starting point for discourse. Or I would, were it ever worthwhile for me to talk to someone about politics. Anyway, it probably doesn’t work if I leave it unspoken.

Exercise #3

Apply the Casuist’s Razor to an explanation, judgment, or argument of your choice. Suggestions from answers to “What is your favorite deep, elegant, or beautiful explanation?”: Marti Hearst, Stuart Pimm, Laurence C. Smith, Evgeny Morozov. Bonus points for an evenhanded application to your favorite argument on copyright, piracy, software patents, drug use, abortion, free speech, or another potentially value-laden topic. What phenomena are correctly explained, or what actions are correctly judged? In what cases or in what sense is the opposite explanation or principle correct? What would a more complete account or judgment look like? (How do you reconcile your previous answers? What details do you need to consider? What are the relevant empirical predictions or consequences?)

Extra credit: Apply the Casuist’s Razor to itself. What explanations (etc.) does it correctly identify as good? What is the opposite principle, and what explanations does that correctly identify as good? What details of those explanations are needed to make these accounts compatible?

The Casuist’s Razor

“Casuistry” is today a near synonym for “sophistry”: a certain kind of intricate, deceptive reasoning; highly pejorative. The word originally referred to case-by-case moral analysis (and, as philosophical jargon, still does). But the casuist, evidently, abused the rich particularities offered by reality to justify his prior intuitions. With a torrent of excuses and exceptions he eroded the barrier between right and wrong. This was unacceptable.

If casuistry has fallen out of fashion, then principled reasoning is our new rising star.[1] Our most successful scientific theories—physics and evolution in particular—are seen as having succeeded on the strength of their simplicity, their ability to explain a wide range of phenomena using only deep, universal principles. The direction of causality is unclear, but today’s intellectual discourse is saturated with a similar reductionist impulse, which I contend is as much aesthetic as practical. Consider the 2012 Annual Question from Edge.org: “What is your favorite deep, elegant, or beautiful explanation?” One wonders whether that disjunction was really necessary.

There are epistemological advantages to keeping your theories small. Derive all your judgments from simple premises, and you no longer risk overfitting. An argument with fewer moving parts requires less justification, is less vulnerable. Meta-level considerations can pinpoint common patterns to achieve vast compression. Hail Occam’s Razor.

Continue reading

  [1] In this metaphor, stars take centuries to rise.