# Some ideas on food and entertainment for those attending SQUINT 2014 in Santa Fe

I'm missing SQUINT 2014 (bummer...) to give a talk at a workshop on Quantum Contextuality, Nonlocality, and the Foundations of Quantum Mechanics in Bad Honnef, Germany, followed by collaboration with Markus Mueller at Heidelberg, and a visit to Caslav Brukner's group and the IQOQI at Vienna.  Herewith some ideas for food and entertainment for SQUINTers in Santa Fe.

Cris Moore will of course provide good advice too.  For a high-endish foodie place, I like Ristra.  You can also eat in the bar there, more casual (woodtop tables instead of white tablecloths), a moderate amount of space (but won't fit an enormous group), some smaller plates.  Pretty reasonable prices (for the excellent quality).  Poblano relleno is one of the best vegetarian entrees I've had in a high-end restaurant---I think it is vegan.  Flash-fried calamari were also excellent... I've eaten here a lot with very few misses.  One of the maitres d' sings in a group I'm in, and we're working on tenor-baritone duets, so if Ed is there you can tell him Howard sent you but then you have to behave ;-).  The food should be good regardless.  If Jonathan is tending bar you can ask him for a flaming chartreuse after dinner... fun stuff and tasty too.  (I assume you're not driving.)  Wines by the glass are good, you should get good advice on pairing with food.

Next door to Ristra is Raaga... some of the best Indian food I've had in a restaurant, and reasonably priced for the quality.

I enjoyed a couple of lunches (fish tacos, grilled portobello sandwich, weird dessert creations...) at Restaurant Martin, was less thrilled by my one foray into dinner there.  Expensive for dinner, less so for lunch, a bit of a foodie vibe.

Fish and chips are excellent at Zia Café (best in town I think), so is the green chile pie--massive slice of a deep-dish quiche-like entity, sweet and hot at the same time.

I like the tapas at El Mesón, especially the fried eggplant, any fried seafood like oysters with salmorejo, roasted red peppers with goat cheese (more interesting than it sounds).  I've had better luck with their sherries (especially finos) than with their wines by the glass.  (I'd skip the Manchego with guava or whatever, as it's not that many slices and you can get cheese at a market.)  Tonight they will have a pretty solid jazz rhythm section, the Three Faces of Jazz, and there are often guests on various horns.  Straight-ahead standards and classic jazz, mostly bop to hard bop to cool jazz or whatever you want to call it.  "Funky Caribbean-infused jazz" with Ryan Finn on trombone on Sat. might be worth checking out too... I haven't heard him with this group but I've heard a few pretty solid solos from him with a big band.  Sounds fun.  The jazz is popular so you might want to make reservations (to eat in the bar/music space, there is also a restaurant area I've never eaten in) especially if you're more than a few people.

La Boca and Taverna La Boca are also fun for tapas, maybe less classically Spanish.  La Boca used to have half-price on a limited selection of tapas and \$1 off on sherry from 3-5 PM.  Not sure if they still do.

Il Piatto is relatively inexpensive Italian, pretty hearty, and they usually have some pretty good deals on fixed-price three-course meals where you choose from the menu, or early-bird specials and such.

Despite a kind of pretentious name, Tanti Luci 221, at 221 Shelby, was really excellent the one time I tried it.  There's a bar menu served only in the bar area, where you can also order off the main menu.  They have a happy hour daily, where drinks are half price.  That makes them kinda reasonable.  The Manhattan I had was excellent, though maybe not all that traditional.

If you've got a car and want some down-home Salvadoran food, the Pupuseria y Restaurante Salvadoreño, in front of a motel on Cerrillos, is excellent and cheap.

As far as entertainment, get a copy of the free Reporter (or look up their online calendar).  John Rangel and Chris Ishee are two of the best jazz pianists in town;  if either is playing, go.  Chris is also in Pollo Frito, a New Orleans funk outfit that's a lot of fun.  If they're playing at the original 2nd Street Brewery, it should be a fun time... decent pubby food and brews to enjoy while you listen.  Saxophonist Arlen Asher is one of the deans of the NM jazz scene, trumpeter and flugelhorn player Bobby Shew is also excellent, both quite straight-ahead.  Dave Anderson also recommended.  The one time I heard JQ Whitcomb on trumpet he was solid, but it's only been once.  I especially liked his compositions.  Faith Amour is a nice singer, last time I heard her was at Pranzo where the acoustics were pretty bad.  (Tiny's was better in that respect.)

For trad New Mexican (food that is) I especially like Tia Sophia's on Washington (I think), and The Shed for red chile enchiladas (and margaritas).

Gotta go.  It's Friday night, when all good grad students, faculty, and postdocs anywhere in the world head for the nearest "Irish pub".

I had a look at Jacob Bekenstein's 1973 Physical Review D paper "Black holes and entropy" for the answer to my question about Susskind's presentation of the Bekenstein derivation of the formula stating that black hole entropy is proportional to horizon area.  An argument similar to the one in Susskind's talk appears in Section IV, except that massive particles are considered, rather than photons, and they can be assumed to be scalar so that the issue I raised, of entropy associated with polarization, is moot.  Bekenstein says:

"we can be sure that the absolute minimum of information lost [as a particle falls into a black hole] is that contained in the answer to the question "does the particle exist or not?"  To start with, the answer [to this question] is known to be yes.  But after the particle falls in, one has no information whatever about the answer.  This is because from the point of view of this paper, one knows nothing about the physical conditions inside the black hole, and thus one cannot assess the likelihood of the particle continuing to exist or being destroyed.  One must, therefore, admit to the loss of one bit of information [...] at the very least."

Presumably for the particle to be destroyed, at least in a field-theoretic description, it must annihilate with some stuff that is already inside the black hole (or from the outside point of view, plastered against the horizon). This annihilation could, I guess, create some other particle. In fact it probably must, in order to conserve mass-energy.  My worry in the previous post about the entropy being due to the presence/absence of the particle inside the hole was that this would seem to need to be due to uncertainty about whether the particle fell into the hole in the first place, which did not seem to be part of the story Susskind was telling, and the associated worry that this would make the black hole mass uncertain, which also didn't seem to be a feature of the intended story although I wasn't sure. But the correct story seems to be that the particle definitely goes into the hole, and the uncertainty is about whether it subsequently annihilates with something else inside, in a process obeying all relevant conservation laws, rendering both of my worries inapplicable. I'd still like to see if Bekenstein wrote a version using photons, as Susskind's presentation does. And when I feel quite comfortable, I'll probably post a fairly full description of one (or more) versions of the argument. Prior to the Phys Rev D paper there was a 1972 Letter to Nuovo Cimento, which I plan to have a look at; perhaps it deals with photons. If you want to read Bekenstein's papers too, I suggest you have a look at his webpage.

# Question about Susskind's presentation of Bekenstein's black hole entropy derivation

I'm partway through viewing Leonard Susskind's excellent not-too-technical talk "Inside Black Holes" given at the Kavli Institute for Theoretical Physics at UC Santa Barbara on August 25.  Thanks to John Preskill,  @preskill, for recommending it.

I've decided to try using my blog as a discussion space about this talk, and ultimately perhaps about the "Harlow-Hayden conjecture" about how to avoid accepting the recent claim that black holes must have an information-destroying "firewall" near the horizon.  (I hope I've got that right.)  I'm using  Susskind's paper "Black hole complementarity and the Harlow-Hayden conjecture"  as my first source on the latter question.  It also seems to be a relatively nontechnical presentation (though much more technical than the talk so far)... that should be particularly accessible to quantum information theorists, although it seems to me he also does a good job of explaining the quantum information-theoretic concepts he uses to those not familiar with them.

But first things first.  I'm going to unembarrassedly ask elementary questions about the talk and the paper until I understand.  First off, I've got a question about Susskind's "high-school level" presentation, in minutes 18-28 of the video, of Jacob Bekenstein's 1973 argument that in our quantum-mechanical world the entropy of a black hole is proportional to its area (i.e. the area of the horizon, the closed surface inside which nothing, not even light, can escape).   The formula, as given by Susskind, is

$S = (\frac{c^3}{4 \hbar G}) A$,

where $S$ is the entropy (in bits) of the black hole, and $A$ the area of its horizon.  (The constant here may have been tweaked by a small factor, like $4 \pi$ or its inverse, to reflect considerations that Susskind alluded to but didn't describe, more subtle than those involved in Bekenstein's argument.)
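Plugging numbers into this formula gives a feel for the magnitudes involved. The snippet below is my own sanity check, not anything from the talk: the constants are standard SI values, the solar-mass example is my choice, and the division by $\ln 2$ converts from natural units (nats) to bits.

```python
import math

# Standard physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.998e8          # speed of light, m / s

def bh_entropy_bits(mass_kg):
    """Evaluate S = (c^3 / (4 hbar G)) * A for a Schwarzschild hole,
    converting from nats to bits."""
    r_s = 2 * G * mass_kg / c**2      # Schwarzschild radius, m
    area = 4 * math.pi * r_s**2       # horizon area, m^2
    s_nats = c**3 * area / (4 * hbar * G)
    return s_nats / math.log(2)

M_sun = 1.989e30  # kg
S_sun = bh_entropy_bits(M_sun)
```

For one solar mass this comes out on the order of $10^{77}$ bits, which gives some sense of why black holes are regarded as the most entropic objects of a given size.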

The argument, as presented by Susskind, involves creating the black hole out of photons whose wavelength is roughly the Schwarzschild radius of the black hole.  More precisely, it is built up in steps; each step in creating a black hole of a given mass and radius involves sending in another photon of wavelength roughly the current Schwarzschild radius.  The wavelength needs to be that big so that there is no information going into the hole (equivalently, from the point of view outside the hole, getting "plastered" (Susskind's nice choice of word) against the horizon) about where the photon went in.  Presumably there is some argument about why the wavelength shouldn't be much bigger, either... perhaps so that it is sure to go into the hole, rather than missing.

That raises the question of just what state of the photon field should be impinging on the hole... presumably we want some wavepacket whose spatial width is about the size of the hole, so we'll have a spread of wavelengths centered around some multiple (roughly unity) of the Schwarzschild radius.  Before there is any hole, I guess I also have some issues about momentum conservation... maybe one starts by sending in a spherical shell of radiation impinging on where we want the hole to be, so as to have zero net momentum.

But these aren't my main questions, though of course it could turn out to be necessary to answer them in order to answer my main question.  My main question is:  Susskind says that each such photon carries one bit of information: the information is "whether it's there or not".  This doesn't make sense to me, as if one is uncertain about how many photons went into creating the hole, it seems to me one should have a corresponding uncertainty about its mass, radius, etc...  Moreover, the photons that go in still seem to have a degree of freedom capable of storing a bit of information:  their polarization.  So maybe this is the source of the one bit per photon?  
Of course, this would carry angular momentum into the hole/onto the horizon, so I guess uncertainty about this could generate uncertainty about whether or not we have a Schwarzschild or a Kerr (rotating) black hole, i.e. just what the angular momentum of the hole is.
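For what it's worth, here is a back-of-the-envelope version of the counting I take Susskind to be describing; this reconstruction is mine, not a transcription of the talk, and it ignores all factors of order unity.  At the stage when the hole has mass $M$ and Schwarzschild radius $R_s = 2GM/c^2$, the added photon has wavelength $\sim R_s$ and hence energy

$E \sim \frac{\hbar c}{R_s}$,

so it increments the mass by

$\delta M = \frac{E}{c^2} \sim \frac{\hbar}{c R_s} = \frac{\hbar c}{2 G M}$.

If each step deposits one bit, the total number of bits accumulated in building up the hole is

$N \sim \int_0^M \frac{dM'}{\delta M} = \int_0^M \frac{2 G M'}{\hbar c}\, dM' = \frac{G M^2}{\hbar c} = \frac{c^3}{16 \pi \hbar G}\, A$,

using $A = 4 \pi R_s^2 = 16 \pi G^2 M^2 / c^4$.  So the counting gives entropy proportional to area with a coefficient of the right form, differing from the quoted formula only by the sort of small numerical factors Susskind alluded to.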

Now, maybe the solution is just that given their wavelength of the same order as the size of the hole, there is uncertainty about whether or not the photons actually get into the hole, and so the entropy of the black hole really is due to uncertainty about its total mass, and the mass M in the Bekenstein formula is just the expected value of mass?

I realize I could probably figure all this out by grabbing some papers, e.g. Bekenstein's original, or perhaps even by checking wikipedia, but I think there's some value in thinking out loud, and in having an actual interchange with people to clear up my confusion... one ends up understanding the concepts better, and remembering the solution.  So, if any physicists knowledgeable about black holes (or able and willing to intelligently speculate about them...) are reading this, straighten me out if you can, or at least let's discuss it and figure it out...

# Bohm on measurement in Bohmian quantum theory

Prompted, as described in the previous post, by Craig Callender's post on the uncertainty principle, I've gone back to David Bohm's original series of two papers "A suggested interpretation of the quantum theory in terms of "hidden" variables I" and "...II", published in Physical Review in 1952 (and reprinted in Wheeler and Zurek's classic collection "Quantum Theory and Measurement", Princeton University Press, 1983).  The Bohm papers and others appear to be downloadable here.

Question 1 of my previous post asked whether it is true that

"a "measurement of position" does not measure the pre-existing value of the variable called, in the theory, "position".  That is, if one considers a single trajectory in phase space (position and momentum, over time), entering an apparatus described as a "position measurement apparatus", that apparatus does not necessarily end up pointing to, approximately, the position of the particle when it entered the apparatus."

It is fairly clear from Bohm's papers that the answer is "Yes". In section 5 of the second paper, he writes

"in the measurement of an "observable," Q, we cannot obtain enough information to provide a complete specification of the state of an electron, because we cannot infer the precisely defined values of the particle momentum and position, which are, for example, needed if we wish to make precise predictions about the future behavior of the electron. [...] the measurement of an "observable" is not really a measurement of any physical property belonging to the observed system alone. Instead, the value of an "observable" measures only an incompletely predictable and controllable potentiality belonging just as much to the measuring apparatus as to the observed system itself."

Since the first sentence quoted says we cannot infer precise values of "momentum and position", it is possible to interpret it as referring to an uncertainty-principle-like tradeoff of precision in measurement of one versus the other, rather than a statement that it is not possible to measure either precisely, but I think that would be a misreading, as the rest of the quote, which clearly concerns any single observable, indicates. Later in the section, he unambiguously gives the answer "Yes" to a mutation of my Question 1 which substitutes momentum for position. Indeed, most of the section is concerned with using momentum measurement as an example of the general principle that the measurements described by standard quantum theory, when interpreted in his formalism, do not measure pre-existing properties of the measured system.

Here's a bit of one of two explicit examples he gives of momentum measurement:

"...consider a stationary state of an atom, of zero angular momentum. [...] the $\psi$-field for such a state is real, so that we obtain

$\mathbf{p} = \nabla S = 0.$

Thus, the particle is at rest. Nevertheless, we see from (14) and (15) that if the momentum "observable" is measured, a large value of this "observable" may be obtained if the $\psi$-field happens to have a large fourier coefficient, $a_\mathbf{p}$, for a high value of $\mathbf{p}$. The reason is that in the process of interaction with the measuring apparatus, the $\psi$-field is altered in such a way that it can give the electron particle a correspondingly large momentum, thus transferring some of the potential energy of interaction of the particle with its $\psi$-field into kinetic energy."

Note that the Bohmian theory involves writing the complex-valued wavefunction $\psi(\mathbf{x})$ as $R(\mathbf{x})e^{i S(\mathbf{x})/\hbar}$, i.e. in terms of its (real) modulus $R$ and (real) phase $S$.  Expressing the Schrödinger equation in terms of these variables is in fact probably what suggested the interpretation, since one gets something resembling classical equations of motion, but with a term that looks like a potential, but depends on $\psi$.  Then one takes these classical-like equations of motion seriously, as governing the motions of actual particles that have definite positions and momenta.  In order to stay in agreement with quantum theory concerning observed events such as the outcomes of measurements, one in addition keeps, from quantum theory, the assumption that the wavefunction $\psi$ evolves according to the Schrödinger equation.  And one assumes that we don't know the particles' exact position, but only that it is distributed with probability measure given (as quantum theory would predict for the outcome of a position measurement) by $R^2(\mathbf{x})$, and that the momentum is $\mathbf{p} = \nabla S$.  That's why the real-valuedness of the wavefunction implies that the momentum is zero: because the momentum, in Bohmian theory, is the gradient of the phase of the wavefunction.
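To make that last point concrete, here is a small numerical sketch (my own construction, with $\hbar$ set to 1 and an arbitrarily chosen grid): extracting the Bohmian momentum $\mathbf{p} = \nabla S$ from the phase of a wavefunction on a one-dimensional grid, one finds it vanishes identically for a real Gaussian, while for a plane wave $e^{ikx}$ it recovers $p = \hbar k$.

```python
import numpy as np

hbar = 1.0  # natural units for this illustration

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

def bohm_momentum(psi):
    """Bohmian momentum field p(x) = dS/dx, where psi = R * exp(i S / hbar)."""
    # np.angle gives the phase mod 2*pi; unwrap it so differentiation
    # doesn't produce spurious jumps of 2*pi.
    S = hbar * np.unwrap(np.angle(psi))
    return np.gradient(S, dx)

# Real (positive) Gaussian: the phase is identically zero, so p = 0 everywhere,
# as in Bohm's stationary-state example.
psi_real = np.exp(-x**2)
p_real = bohm_momentum(psi_real)

# Plane wave exp(i k x): the phase gradient recovers p = hbar * k.
k = 2.0
psi_plane = np.exp(1j * k * x)
p_plane = bohm_momentum(psi_plane)
```

The unwrapping step matters in practice: the raw phase from `np.angle` wraps around at $\pm\pi$, and differentiating it directly would produce large spurious spikes rather than the constant momentum field.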

For completeness we should reproduce Bohm's (15).

(15) $\psi = \sum_\mathbf{p} a_{\mathbf{p}} \exp(i \mathbf{p}\cdot \mathbf{x} / \hbar).$

At least in the Wheeler and Zurek book, the equation has $p$ instead of $\mathbf{p}$ as the subscript on $\Sigma$, and $a_1$ instead of $a_\mathbf{p}$; I consider these typos, and have corrected them. (Bohm's reference to (14), which is essentially the same as (15), seems to me to be redundant.)

The upshot is that

"the actual particle momentum existing before the measurement took place is quite different from the numerical value obtained for the momentum "observable,"which, in the usual interpretation, is called the "momentum." "

It would be nice to have this worked out for a position measurement example, as well. The nicest thing, from my point of view, would be an example trajectory, for a definite initial position, under a position-measurement interaction, leading to a final position different from the initial one. I doubt this would be too hard, although it is generally considered to be the case that solving the Bohmian equations of motion is difficult in the technical sense of complexity theory. I don't recall just how difficult, but more difficult than solving the Schrödinger equation, which is sometimes taken as an argument against the Bohmian interpretation: why should nature do all that work, only to reproduce, because of the constraints mentioned above---distribution of $\mathbf{x}$ according to $R^2$, $\mathbf{p} = \nabla S$---observable consequences that can be more easily calculated using the Schrödinger equation?
I think I first heard of this complexity objection (which is of course something of a matter of taste in scientific theories, rather than a knockdown argument) from Daniel Gottesman, in a conversation at one of the Feynman Fests at the University of Maryland, although Antony Valentini (himself a Bohmian) has definitely stressed the ability of Bohmian mechanics to solve problems of high complexity, if one is allowed to violate the constraints that make it observationally indistinguishable from quantum theory. It is clear from rereading Bohm's 1952 papers that Bohm was excited about the physical possibility of going beyond these constraints, and thus beyond the limitations of standard quantum theory, if his theory was correct.

In fairness to Bohmianism, I should mention that in these papers Bohm suggests that the constraints that give standard quantum behavior may be an equilibrium, and in another paper he gives arguments in favor of this claim. Others have since taken up this line of argument and done more with it. I'm not familiar with the details. But the analogy with thermodynamics and statistical mechanics breaks down in at least one respect, that one can observe nonequilibrium phenomena, and processes of equilibration, with respect to standard thermodynamics, but nothing like this has so far been observed with respect to Bohmian quantum theory. (Of course that does not mean we shouldn't think harder, guided by Bohmian theory, about where such violations might be observed... I believe Valentini has suggested some possibilities in early-universe physics.)


# A question about measurement in Bohmian quantum mechanics

I was disturbed by aspects of Craig Callender's post "Nothing to see here," on the uncertainty principle, in the New York Times' online philosophy blog "The Stone," and I'm pondering a response, which I hope to post here soon.  But in the process of pondering, some questions have arisen which I'd like to know the answers to.  Here are a couple:

Callender thinks it is important that quantum theory be formulated in a way that does not posit measurement as fundamental.  In particular he discusses the Bohmian variant of quantum theory (which I might prefer to describe as an alternative theory) as one of several possibilities for doing so.  In this theory, he claims,

Uncertainty still exists. The laws of motion of this theory imply that one can’t know everything, for example, that no perfectly accurate measurement of the particle’s velocity exists. This is still surprising and nonclassical, yes, but the limitation to our knowledge is only temporary. It’s perfectly compatible with the uncertainty principle as it functions in this theory that I measure position exactly and then later calculate the system’s velocity exactly.

While I've read Bohm's and Bell's papers on the subject, and some others, it's been a long time in most cases, and this theory is not something I consider very promising as physics even though it is important as an illustration of what can be done to recover quantum phenomena in a somewhat classical theory (and of the weird properties one can end up with when one tries to do so).  So I don't work with it routinely.  And so I'd like to ask anyone, preferably more expert than I am in technical aspects of the theory, though not necessarily a de Broglie-Bohm adherent, who can help me understand the above claims, in technical or non-technical terms, to chime in in the comments section.

I have a few specific questions.  It's my impression that in this theory, a "measurement of position" does not measure the pre-existing value of the variable called, in the theory, "position".  That is, if one considers a single trajectory in phase space (position and momentum, over time), entering an apparatus described as a "position measurement apparatus", that apparatus does not necessarily end up pointing to, approximately, the position of the particle when it entered the apparatus.

Question 1:  Is that correct?

A little more discussion of Question 1.  On my understanding, what is claimed is, rather, something like: that if one has a probability distribution over particle positions and momenta and a "pilot wave" (quantum wave function) whose squared amplitude agrees with these distributions (is this required in both position and momentum space? I'm guessing so), then the probability (calculated using the distribution over initial positions and momenta, and the deterministic "laws of motion" by which these interact with the "pilot wave" and the apparatus) for the apparatus to end up showing position in a given range, is the same as the integral of the squared modulus of the wavefunction, in the position representation, over that range.  Prima facie, this could be achieved in ways other than having the measurement reading be perfectly correlated with the initial position on a given trajectory, and my guess is that in fact it is not achieved in that way in the theory.  If it were achieved via perfect correlation, it seems the correlation should hold whatever the pilot wave is.  Now, perhaps that's not a problem, but it makes the pilot wave feel a bit superfluous to me, and I know that it's not, in this theory.  My sense is that what happens is more like: whatever the initial position is, the pilot wave guides it to some---definite, of course---different final position, but when the initial distribution is given by the squared modulus of the pilot wave itself, then the distribution of final positions is given by the squared modulus of the (initial, I guess) pilot wave.

But if the answer to question 1 is "Yes", I have trouble understanding what Callender means by "I measure position exactly".  Also, regardless of the answer to Question 1, either there is a subtle distinction being made between measuring "perfectly accurately" and measuring "exactly" (in which case I'd like to know what the distinction is), or these sentences need to be reformulated more carefully.  Not trying to do a gotcha on Callender here, just trying to understand the claim, and de Broglie-Bohm.

My second question relates to Callender's statement that:

It’s perfectly compatible with the uncertainty principle as it functions in this theory that I measure position exactly and then later calculate the system’s velocity exactly.

Question 2: How does this way of ascertaining the system's velocity differ from the sort of "direct measurement" that is, presumably, subject to the uncertainty principle? I'm guessing that by the time one has enough information (possibly about further positions?) to calculate what the velocity was, one can't do with it the sorts of things that one could have done if one had known the position and velocity simultaneously.  But this depends greatly on what it would mean to "have known" the position and/or velocity, which --- especially if the answer to Question 1 was "Yes"--- seems a rather subtle matter.

So, physicists and other readers knowledgeable on these matters (if any such exist), your replies with explanations, or links to explanations, of these points would be greatly appreciated.  And even if you don't know the answers, but know de Broglie-Bohm well on a technical level... let's figure this out!  (My guess is that it's well known, and indeed that the answer to Question 1 in particular is among the most basic things one learns about this interpretation...)

# Nagel and DeLong I: Common Sense

Brad DeLong has been hammering --- perhaps even bashing --- away at Thomas Nagel's new book Mind and Cosmos (Oxford, 2012).  Here's a link to his latest blow. I think Nagel's wrong on several key points in that book, but I think Brad is giving people a misleading picture of Nagel's arguments.  This matters because Nagel has made very important points --- some of which are repeated in this book, though more thoroughly covered in his earlier The Last Word (Oxford, 1997) --- about the nature of reason, defending the possibility of achieving, in part through the use of reason, objectively correct knowledge (if that is the right word) in areas other than science, and giving us some valuable ideas about how this can work in particular cases, for example, in the case of ethics, in The Possibility of Altruism [Princeton, 1979].

In his latest salvo Brad suggests that "If you are going to reject any branch of science on the grounds that it flies in the face of common sense, require[s] us to subordinate the incredulity of common sense, is not based ultimately on common sense, or is a heroic triumph of ideological theory over common sense--quantum mechanics is definitely the place to start…".  This is preceded by some quotes from Nagel:

- But it seems to me that, as it is usually presented, the current orthodoxy about the cosmic order is the product of governing assumptions that are unsupported, and that it flies in the face of common sense…
- My skepticism is… just a belief that the available scientific evidence, in spite of the consensus of scientific opinion, does not… rationally require us to subordinate the incredulity of common sense…
- Everything we believe, even the most far-reaching cosmological theories, has to be based ultimately on common sense, and on what is plainly undeniable…
- I have argued patiently against the prevailing form of naturalism, a reductive materialism that purports to capture life and mind through its neo-Darwinian extension…. I find this view antecedently unbelievable— a heroic triumph of ideological theory over common sense…

Now there are things I disagree with here, but Nagel is clearly not claiming that no theory that is not itself a piece of common sense is acceptable. Indeed, the second bullet point makes it clear that he allows for the possibility that scientific evidence could "rationally require" him to subordinate the incredulity of common sense. It is his judgment that it does not in this case. Now---at least with regard to the possibility of an explanation by evolutionary biology of the emergence of life, consciousness, and reason on our planet and in our species, which is what I think is at issue--- I don't share his incredulity, and I also suspect that I would weigh the scientific evidence much more heavily against such incredulity, if I did share some of it.  But Nagel is not committed to a blanket policy of "reject[ing] scientific theories because they fail to match up to your common sense." Regarding the third bullet point, it's perhaps stated in too-strong terms, but it's far from a claim that every scientific theory can directly be compared to common sense and judged on that basis. The claim that scientific theories are "ultimately based in common sense and on what is plainly undeniable" does not imply that this basis must be plain and direct. Logic and mathematics develop out of common-sense roots, counting and speaking and such... science develops to explain "plainly undeniable" results of experiments, accounts of which are given in terms of macroscopic objects... Some of this smacks a bit too much of notions that may have proved problematic for positivism ("plainly undeniable" observation reports?)... but the point is that common sense carries some weight and indeed is a crucial element of our scientific activities, not that whatever aspect of "common sense" finds quantum theory hard to deal with must outweigh the enormous weight of scientific experience and engineering practice, also rooted "ultimately" according to Nagel in common sense, in favor of that theory.

Just for the record I don't find that the bare instrumentalist version of quantum theory as an account of the probabilities of experimental results "flies in the face of common sense" --- but it does seem that it might create serious difficulty for the conception of physical reality existing independent of our interactions with it. At any rate it does not seem to provide us with a picture of that sort of physical reality (unless you accept the Bohm or Everett interpretations), despite what one might have hoped for from a formalism that is used to describe the behavior of what we tend to think of as the basic constituents of physical reality, the various elementary particles or better, quantum fields.  But if someone, say Nagel, did believe that this all flies in the face of common sense, it would be open to him to say that in this case, we are permitted, encouraged, or perhaps even required to fly in said face by the weight of scientific evidence.

As I've said, I disagree on two counts with Nagel's skepticism about an evolutionary explanation of mind and reason: it doesn't fly in the face of my common sense, and I weigh the evidence as favoring it more strongly than does Nagel. Part of my disagreement may be that what Nagel has in mind is an evolutionary explanation that is committed to a "reductive materialism that purports to capture life and mind through its neo-Darwinian extension." Whereas I have in mind a less reductive approach, in which consciousness and reason are evolutionarily favored because they have survival value, but we do not necessarily reduce these concepts themselves to physical terms. In my view, biology is rife with concepts that are not physical, nor likely to be usefully reduced to physical terms--- like, say, "eye". As with "eye", there may be no useful reduction of "consciousness" or "perception" or "thought" or "word" or "proposition", etc., to physics, but I don't think that implies that the appearance of such things cannot have an evolutionary explanation. (Nor, just to be clear, does it imply that these things are not realized in physical processes.) So I might share Nagel's incredulity that such things could have a "materialist" explanation, if by this he means one in terms of physics, but not his incredulity about evolutionary explanations of the appearance of mind and reason. To me, it seems quite credible that these phenomena form part of the mental aspect of structures made of physical stuff, though we will never have full explanations for all the phenomena of consciousness and the doings of reason, in terms of this physical structure.

(David Deutsch's recent book The Beginning of Infinity is one excellent source for understanding such non-reductionism---see in particular its Chapter 5, "The Reality of Abstractions".)

I'll likely make several more posts on this business, both on other ways in which I think Brad and others have mischaracterized Nagel's arguments or misplaced the emphasis in their criticisms, and on why this matters: some important points that Nagel has made on closely related matters, points I think have value, are in danger of being obscured, caricatured, or dismissed under the influence of the present discussion by Brad and others.

# Thomas Nagel's "Mind and Cosmos"

I've just finished reading Thomas Nagel's newish book, "Mind and Cosmos" (Oxford, 2012).  It's deeply flawed, but in spite of its flaws some of the points it makes deserve more attention, especially in the broader culture, than they're likely to receive in the context of a book that's gotten plenty of people exercised about its flaws.  I'm currently undecided about whether to recommend reading the book for these points, as they are probably made equally well elsewhere---without the distracting context, and possibly better formulated---notably in Nagel's "The Last Word" (Oxford, 1997).  The positive points are the emphasis on the reality of mental phenomena and (more controversially) their irreducibility to physical or even biological terms, the unacceptability of viewing the activities of reason in similarly reductive terms, and a sense that mind and reason are central to the nature of reality.  Its greatest flaws are an excessively reductionist view of the nature of science and, to some degree in consequence of this, an excessive skepticism about the potential for evolutionary explanations of the origins of life, consciousness, and reason.

One of the main flaws of Nagel's book is that he seems --- very surprisingly --- to view explanations in terms of, say, evolutionary biology as "reductively materialist".  He seems not to appreciate the degree to which the "higher" sciences involve "emergent" phenomena, not reducible---or not, in any case, reduced---to the terms of sciences "below" them in the putative reductionist hierarchy.  Of course there is no guarantee that explanations in terms of these disciplines' concepts will not be replaced by explanations in terms of the concepts of physics, but it has not happened, and may well never happen.  The rough picture is that the higher disciplines involve patterns or structures formed, if you like, out of the material of the lower ones, but the concepts in terms of which we deal with these patterns or structures are not those of physics; they are higher-order ones.  And these structures and their properties---described in the language of the higher sciences, not of physics---are just as real as the entities and properties of physics.  My view --- and while it is non-reductionist, I do not think it is hugely at variance with that of many, perhaps most, scientists who have considered the matter carefully --- is that at a certain very high level, some of these patterns have genuine mental aspects.  I don't feel certain that we will explain, in some sense, all mental phenomena in terms of these patterns, but neither does it seem unreasonable that we might.  ("Explanation" in this sense needn't imply the ability to predict perfectly (or even very well), nor, as is well known, need the ability to predict perfectly be viewed as providing us with a full and adequate explanation---simulation, for example, is not necessarily understanding.)
Among scientists and philosophers who, like Nagel, hold a broadly "rationalist" worldview, David Deutsch, in his books The Fabric of Reality and especially The Beginning of Infinity, is much more in touch with the non-reductionist nature of much of science.

Note that none of this means there isn't in some sense a "physical basis" for mind and reason.  It is consistent, for example, with the idea that there can be "no mental difference without a physical difference" (a view that I think even Nagel agrees with).

This excessively reductionist view of modern science can also be found among scientists and popular observers of science, though it is far from universal.   It is probably in part, though only in part, responsible for two other serious flaws in Nagel's book.  The first of these is his skepticism about the likelihood that we will arrive at an explanation of the origin of life in terms of physics, chemistry, and perhaps other sciences that emerge from them---planetary science, geology, or perhaps some area on the borderline between complex chemistry and biology that will require new concepts, but not in a way radically different from the way these disciplines themselves involve new concepts not found in basic physics.  The second is his skepticism that the origins of consciousness and reason can be explained primarily in terms of biological evolution.  I suspect he is wrong about this.  The kind of evolutionary explanation I expect is of course likely to use the terms "consciousness" and "reason" in ways that are not entirely reductive.   I don't think that will prevent us from understanding them as likely to evolve through natural selection.   I expect we will see that to possess the faculty of reason, understood (with Nagel) as having the---fallible, to be sure!---power to help get us in touch with a reality that transcends, while including, our subjective point of view, confers selective advantage.  Nagel is aware of the possibility of this type of explanation but --- surprisingly, in my view --- views it as implausible that it should be adaptive to possess reason in this strong sense, rather than just some locally useful heuristics.

The shortcomings in his views on evolution and the potential for an evolutionary explanation of life, consciousness, and reason deserve more discussion, but I'll leave that for a possible later post.

The part of Nagel's worldview that I like, and that may go underappreciated by those who focus on his shortcomings, is, as I mentioned above, the reality of the mental aspect of things, and the need to take seriously the view that we have the power, fallible as it may be, to make progress toward the truth about how reality is, about what is good, and about what is right and wrong.  I also like his insistence that much is still unclear about how and why this is so.  But to repeat, I think he somewhat underplays the potential involvement of evolution in an eventual understanding of these matters.  He may also be underplaying something he laid more stress on in previous books, notably The View from Nowhere and the collection of papers and essays Mortal Questions: the degree to which there may be an irreconcilable tension between the "inside" and "outside" views of ourselves.  Here, however, his attitude is to try to reconcile them. Indeed, one of the more appealing aspects of his worldview as expressed in both Mind and Cosmos and The Last Word is the observation that my experience "from inside" of what it is to be a reasoning subject involves thinking of myself as part of a larger objective order, and trying to situate my own perspective as one among many perspectives upon it, including those of my fellow humans and any other conscious and reasoning beings that exist.  It is to understand much of my reasoning as attempting, even while operating as it must from my particular perspective, to gain an understanding of this objective reality that transcends that perspective.

So far I haven't said much about the positive possibilities Nagel moots, in place of a purely biological evolutionary account, for explaining the origin of life, consciousness, and reason.  These are roughly teleological, involving a tendency "toward the marvelous".  This is avowedly a very preliminary suggestion.  My own view on the likely role of mind and reason in the nature of reality, even more tentative than Nagel's, is that this role is less likely to arise from a teleological tendency toward the marvelous than from a potential for consciousness, reason, and value deeply entwined with the very possibility of existence itself.  Obviously we are very far from understanding this.  I would like to think this is fairly compatible with a broadly evolutionary account of the origin of life and human consciousness and reasoning on our planet, and with the view that we're made out of physical stuff.

# My short review of David Deutsch's "The Beginning of Infinity" in Physics Today

Here is a link to my short review of David Deutsch's book The Beginning of Infinity, in Physics Today, the monthly magazine for members of the American Physical Society.  I had much more to say about the book, which is particularly ill-suited to a short-format review like those in Physics Today.  (The title is suggestive; a reasonable alternative would have been "Life, the Universe, and Everything.")   It was an interesting exercise to boil it down to this length, which was already longer than their ideal.  I may say some of it in a blog post later.

It was also interesting to have such extensive input from editors.  Mostly this improved things, but in a couple of cases (not helped by my internet failing just as the for-publication version had been produced) the result was not good.  In particular, the beginning of the second-to-last paragraph, which reads "For some of Deutsch’s concerns, prematurity is irrelevant. But fallibilism undermines some of his claims ... " is not as I'd wanted.  I'd had "this" in place of "prematurity" and "it" in place of "fallibilism".  I'd wanted, in both cases, to refer in general to the immediately preceding discussion, more broadly than just to "prematurity" in one case and "fallibilism" in the other.  It seems the editors felt uncomfortable with a pronoun whose antecedent was not extremely specific.  I'd have to go back to notes to see what I ultimately agreed to, but definitely not plain "prematurity".

One other thing I should perhaps point out is that when I wrote:

Deutsch’s view that objective correctness is possible in areas outside science is appealing. And his suggestion that Popperian explanation underwrites that possibility is intriguing, but may overemphasize the importance of explanations as opposed to other exercises of reason. A broader, more balanced perspective may be found in the writings of Roger Scruton, Thomas Nagel, and others.

I was referring to a broader perspective on the role of reason in arriving at objectively correct views in areas outside science. "More balanced" was another editorial addition, in this case one that I acquiesced in, but perhaps I should not have, as some of its possible connotations are more negative than I intended.  "Appealing," though not an editorial addition, is somewhat off from what I intended.  I also wanted to include a suggestion of "probably correct", since something can be appealing but wrong, but couldn't find the right word.  I shortened this discussion for reasons of space, but I had initially cited Scruton specifically for aesthetics, and recommended his "On Beauty", "Art and the Imagination", and "The Aesthetics of Architecture".  I haven't read much of his work on politics (he is a conservative, although from what I have read a relatively sensible one at the philosophical level) nor his "Sexual Desire", so I don't mean to endorse them.  Likewise I had initially recommended specifically Nagel's "The View from Nowhere" and "The Last Word", and was not aware of his recent "Mind and Cosmos"; I emphatically did not mean to endorse his skepticism, in that book, about evolutionary explanations of the origins of life and mind, although I do think there is much of interest in that book, and some (but certainly not all!) of the criticism of it that I've seen on the web is misguided.  I am much more in sympathy with Deutsch's views on reductionism than with Nagel's:  both are skeptical about the prospects for a thoroughgoing reduction of mind, reason, and consciousness to physical terms, but Nagel, bafflingly, seems to think that an evolutionary explanation of such phenomena is tantamount to such physical reductionism.  Deutsch seems to me more sophisticated about the nature of actual science, about how non-reductionist many scientific explanations are, and about how that can nevertheless be compatible with physical law.
I should say, though, that I am less sympathetic than Deutsch is to accounts of mind and consciousness as being essentially a computer running a certain kind of program.  I view embodiment, interaction with a sufficiently rich environment, and probably a difficulty in disentangling "hardware" and "software" (perhaps related to Douglas Hofstadter's notion of "strange loops") as likely to be crucial elements of an understanding of mind and consciousness.  Of course it may be that with a sufficiently loose notion of "kind of computer program" and "kind of input" some of this could be understood in the computational terms Deutsch seeks.


# No new enlightenment: A critique of "quantum reason"

I have a lot of respect for Scientific American contributing physics editor George Musser's willingness to solicit and publish articles on some fairly speculative and, especially, foundational, topics whether in string theory, cosmology, the foundations of quantum theory, quantum gravity, or quantum information.  I've enjoyed and learned from these articles even when I haven't agreed with them.  (OK, I haven't enjoyed all of them of course... a few have gotten under my skin.)  I've met George myself, at the most recent FQXi conference; he's a great guy and was very interested in hearing, both from me and from others, about cutting-edge research.  I also have a lot of respect for his willingness to dive in to a fairly speculative area and write an article himself, as he has done with "A New Enlightenment" in the November 2012 Scientific American (previewed here).  So although I'm about to critique some of the content of that article fairly strongly, I hope it won't be taken as mean-spirited.  The issues raised are very interesting, and I think we can learn a lot by thinking about them; I certainly have.

The article covers a fairly wide range of topics, and for now I'm just going to cover the main points that I, so far, feel compelled to make about the article.  I may address further points later; in any case, I'll probably do some more detailed posts, maybe including formal proofs, on some of these issues.

The basic organizing theme of the article is that quantum processes, or quantum ideas, can be applied to situations that social scientists usually model as involving the interactions of "rational agents"...or perhaps, as they sometimes observe, agents that are somewhat rational and somewhat irrational.  The claim, or hope, seems to be that in some cases we can either get better results by substituting quantum processes (for instance, "quantum games", or "quantum voting rules") for classical ones, or perhaps better explain behavior that seems irrational.  In the latter case, in this article, quantum theory seems to be used more as a metaphor for human behavior than as a model of a physical process underlying it.  It isn't clear to me whether we're supposed to view this as an explanation of irrationality, or in some cases as the introduction of a "better", quantum, notion of rationality.  However, the main point of this post is to address specifics, so here are four main points; the last one is not quantum, just a point of classical political science.

(1) Quantum games.  There are many points to make on this topic.  Probably the most important is this one: quantum theory does not resolve the Prisoner's Dilemma.  Under the definitions I've seen of "quantum version of a classical game", the quantum version is also a classical game, just a different one.  Typically the strategy space is much bigger.  The classical game sits somewhere inside it---typically as a distinguished basis for a complex vector space ("quantum state space") of strategies, or as a commuting ("classical") subset of the possible "quantum actions" (often unitary transformations that the players can apply to physical systems forming part of the game-playing apparatus).  One can then compare the expected payoffs, under various solution concepts such as Nash equilibrium, for the classical game and its "quantum version", and it may turn out that the quantum version has a better result for all players under the same solution concept.  This was so for Eisert, Lewenstein, and Wilkens' (ELW for short) quantum version of the Prisoner's Dilemma.  But this does not mean (nor, in their article, did ELW claim it did) that quantum theory "solves the Prisoner's Dilemma", although I suspect when they set out on their research, they might have hoped that it could.  It doesn't, because the prisoners can't transform their situation into the quantum Prisoner's Dilemma; to play that game, whether by quantum or classical means, would require the jailer to do something differently.  ELW's quantum Prisoner's Dilemma involves starting with an entangled state of two qubits.  The state space consists of the unit Euclidean norm sphere in a 4-dimensional complex vector space (equipped with the Euclidean inner product); it has a distinguished orthonormal basis which is a product of two local "classical bases", each of which is labeled by the two actions available to the relevant player in the classical game.
The quantum game consists of each player choosing a unitary operator to perform on their local state.  Payoff is determined---and here is where the jailer must be complicit---by performing a certain two-qubit unitary---one which does not factor as a product of local unitaries---and then measuring in the "classical product basis", with payoffs given by the classical payoff corresponding to the label of the basis vector obtained as the result.  Now, Musser does say that "Quantum physics does not erase the original paradoxes or provide a practical system for decision making unless public officials are willing to let people carry entangled particles into the voting booth or the police interrogation room."  But the situation is worse than that.  Even if the prisoners could smuggle in the entangled particles (and in some realizations of the Prisoner's Dilemma in settings other than systems of detention, the players will have a fairly easy time supplying themselves with such entangled pairs, if quantum technology is feasible at all), they won't help unless the mechanism producing the payoffs implements the desired game---that is, unless it measures not in a product basis but in an entangled basis.  Even more importantly, in many real-world games the variables being measured are already highly decohered; to ensure that they are quantum coherent the whole situation has to be rejiggered.  So even if you didn't need the jailer to make an entangled measurement---if the measurement was just his independently asking each of you some question---and all you needed was to entangle your answers, you'd have to either entangle your entire selves, or covertly measure your particle and then repeat the answer to the jailer.
But in the latter case, you're not playing the game where the payoff is necessarily based on the measurement result: you could decide to say something different from the measurement result.  And that would have to be included in the strategy set.
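To make the structure concrete, here's a minimal numpy sketch of an EWL-style quantum Prisoner's Dilemma. The gate J, the payoff numbers, and the "quantum move" Q below are standard illustrative choices, not necessarily ELW's exact parameters: the referee (jailer) applies the entangling gate J, the players apply local unitaries, and the referee applies the inverse of J before measuring in the product basis.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Maximally entangling gate applied by the referee before the players' moves
# (and inverted afterward).
J = (np.kron(I2, I2) + 1j * np.kron(X, X)) / np.sqrt(2)

# Classical moves as unitaries: C (stay mum) = identity, D (snitch) = bit flip.
C, D = I2, X
# A distinctively quantum move available in the enlarged strategy space.
Q = np.array([[1j, 0], [0, -1j]])

# Payoffs for outcomes CC, CD, DC, DD (illustrative Prisoner's Dilemma values).
payoff_A = np.array([3.0, 0.0, 5.0, 1.0])
payoff_B = np.array([3.0, 5.0, 0.0, 1.0])

def play(UA, UB):
    """Expected payoffs when player A applies UA and player B applies UB."""
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0                                  # start in |CC>
    psi = J.conj().T @ np.kron(UA, UB) @ J @ psi0  # referee entangles, players act
    p = np.abs(psi) ** 2                           # measure in the product basis
    return float(payoff_A @ p), float(payoff_B @ p)

print(play(D, D))  # mutual defection: approximately (1, 1), as classically
print(play(Q, Q))  # the quantum equilibrium: approximately (3, 3)
```

The point made in the text is visible here: the improvement comes entirely from the referee's gates. Drop J and its inverse, and the players' unitaries just give a randomized version of the classical game.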

There are still potential applications:  if we are explicitly designing games as mechanisms for implementing some social decision procedure, then we could decide to implement a quantum version (according to some particular "quantization scheme") of a classical game.  Of course, as I've pointed out, and as ELW do in their paper, that's just another classical game.  But as ELW note, it is possible---in a setting where quantum operations (quantum computer "flops") aren't too much more expensive than their classical counterparts---that playing the game by quantum means might use fewer resources than playing it by simulating it classically.  In a mechanism design problem that is supposed to scale to a large number of players, it even seems possible that the classical implementation could scale so badly with the number of players as to become infeasible, while the quantum one remains efficient.  For this reason, mechanism design for preference revelation as part of a public goods provision scheme, for instance, might be a good place to look for applications of quantum Prisoner's-Dilemma-like games.  (I would not be surprised if this has been investigated already.)

Another possible place where quantum implementations might have an advantage is in situations where one does not fully trust the referee who is implementing the mechanism.  It is possible that quantum theory might enable the referee to provide better assurances to the players that he/she has actually implemented the stated game.  In the usual formulation of game theory, the players know the game, and this is not an issue.  But it is not necessarily irrelevant in real-world mechanism design, even if it might not fit strictly into some definitions of game theory.  I don't have a strong intuition one way or the other as to whether or not this actually works but I guess it's been looked into.

(2) "Quantum democracy".  The part of the quote, in the previous item, about taking entangled particles into the voting booth alludes to this topic.  Gavriel Segre has a 2008 arxiv preprint entitled "Quantum democracy is possible" in which he seems to be suggesting that quantum theory can help us with the difficulties that Arrow's Theorem supposedly shows exist with democracy.  I will go into this in much more detail in another post.  But briefly, if we consider a finite set A of "alternatives", like candidates to fill a single position, or mutually exclusive policies to be implemented, and a finite set I of "individuals" who will "vote" on them by listing them in the order they prefer them, a "social choice rule" or "voting rule" is a function that, for every "preference profile", i.e. every possible indexed set of preference orderings (indexed by the set of individuals), returns a preference ordering, called the "social preference ordering", over the alternatives.  The idea is that whatever subset of alternatives is feasible, society should choose the one most highly ranked by the social preference ordering from among the feasible alternatives.  Arrow showed that if we impose the seemingly reasonable requirements that if everyone prefers x to y, society should prefer x to y ("unanimity"), and that whether or not society prefers x to y should be affected only by the information of which individuals prefer x to y, and not by other aspects of individuals' preference orderings ("independence of irrelevant alternatives", "IIA"), the only possible voting rules are the ones such that, for some individual i called the "dictator" for the rule, that individual's preferences are the social preferences.  If you define a democracy as a voting rule that satisfies the requirements of unanimity and IIA and that is not dictatorial, then "democracy is impossible".  Of course this is an unacceptably thin concept of the individual, and of democracy.
But anyway, there's the theorem; it definitely tells you something about the limitations of voting schemes or, in a slightly different interpretation, about the impossibility of forming a reasonable idea of what is a good social choice, if all that we can take into account in making the choice is a potentially arbitrary set of individuals' orderings over the possible alternatives.
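To see why conditions like these bite, consider plain majority rule: it satisfies unanimity and IIA pairwise, but can fail to produce any transitive social ordering at all. Here is a minimal sketch of the classic Condorcet cycle (the ballots are illustrative):

```python
# Three voters, three alternatives: the classic Condorcet cycle.
# Each ballot lists alternatives from most preferred to least preferred.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority of ballots ranks x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(x, ">", y, ":", majority_prefers(x, y))   # all three print True
# Pairwise majority yields A > B > C > A: a cycle, so majority rule cannot
# return the transitive social preference ordering Arrow's framework demands.
```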

Arrow's theorem tends to have two closely related interpretations:  as a mechanism for combining actual individual preferences to obtain social preferences that depend in desirable ways on individual ones, or as a mechanism for combining formal preference orderings stated by individuals, into a social preference ordering.  Again this is supposed to have desirable properties, and those properties are usually motivated by the supposition that the stated formal preference orderings are the individuals' actual preferences, although I suppose in a voting situation one might come up with other motivations.  But even if those are the motivations, in the voting interpretation, the stated orderings are somewhat like strategies in a game, and need not coincide with agents' actual preference orderings if there are strategic advantages to be had by letting these two diverge.

What could a quantum mitigation of the issues raised by Arrow's theorem---on either interpretation---mean?  We must be modifying some concept in the theorem... that of an individual's preference ordering, or voting strategy, or that of alternative, or---although this seems less promising---that of individual---and arguing that somehow that gets us around the problems posed by the theorem.  None of this seems very promising, for reasons I'll get around to in my next post.  The main point is that if the idea is similar to the---as we've seen, dubious---idea that superposing strategies can help in quantum games, it doesn't seem to help with interpretations where the individual preference ordering is their actual preference ordering.  How are we to superpose those?  Superposing alternatives seems like it could have applications in a many-worlds type interpretation of quantum theory, where all alternatives are superposed to begin with, but as far as I can see, Segre's formalism is not about that.  It actually seems to be more about superpositions of individuals, but one of the big motivational problems with Segre's paper is that what he "quantizes" is not the desired Arrow properties of unanimity, independence of irrelevant alternatives, and nondictatoriality, but something else that can be used as an interesting intermediate step in proving Arrow's theorem.  However, there are bigger problems than motivation:  Segre's main theorem, his IV.4, is very weak, and actually does not differentiate between quantum and classical situations.  As I discuss in more detail below, it looks like for the quantum logics of most interest for standard quantum theory, namely the projection lattices of von Neumann algebras, the dividing line between ones having what Segre would call a "democracy" (a certain generalization of a voting rule satisfying Arrow's criteria) and ones that don't (i.e. that have an "Arrow-like theorem") is not commutativity versus noncommutativity of the algebra (i.e., classicality versus quantumness), but just infinite-dimensionality versus finite-dimensionality, which was already understood for the classical case.  So quantum adds nothing.  In a later post, I will go through all the formalities (or post a .pdf document), but here are the basics.

Arrow's Theorem can be proved by defining a set S of individuals to be decisive if for every pair x, y of alternatives, whenever everyone in S prefers x to y, and everyone not in S prefers y to x, society prefers x to y.  Then one shows that the set of decisive sets is an ultrafilter on the set of individuals.  What's an ultrafilter?  Well, let's define it for an arbitrary lattice.  The set, often called P(I), of subsets of any set I is a lattice (the relevant ordering is subset inclusion; the meet and join are intersection and union).   A filter---not yet ultra---in a lattice is a subset of the lattice that is upward-closed and meet-closed.  That is, to say that F is a filter is to say that if x is in F, and y is greater than or equal to x, then y is in F, and that if x and y are both in F, so is x meet y.  For P(I), this means that a filter has to include every superset of each set in the filter, and also the intersection of every pair of sets in the filter.  Then we say a filter is proper if it's not the whole lattice, and it's an ultrafilter if it's a maximal proper filter, i.e. it's not properly contained in any other filter (other than the whole lattice).  A filter is called principal if it's generated by a single element of the lattice:  i.e. if it's the smallest filter containing that element.  Equivalently, it's the set consisting of that element and everything above it.  So in the case of P(I), a principal filter consists of a given set, and all sets containing that set.

To prove Arrow's theorem using ultrafilters, one shows that unanimity and IIA imply that the set of decisive sets is an ultrafilter on P(I).  But it was already well known, and is easy to show, that all ultrafilters on the powerset of a finite set are principal, and are generated by singletons of I, that is, sets containing single elements of I.  So a social choice rule satisfying unanimity and IIA has a decisive set containing a single element i, and furthermore, all sets containing i are decisive.  In other words, if i favors x over y, it doesn't matter who else favors x over y and who opposes it: x is socially preferred to y.  In other words, the rule is dictatorial.  QED.
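For a small finite I this can be checked exhaustively. Here's a brute-force sketch (entirely my own illustration) that enumerates every filter on P(I) for a three-element I, picks out the ultrafilters, and confirms each is principal and generated by a singleton:

```python
from itertools import chain, combinations

I = (0, 1, 2)
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(I, r) for r in range(len(I) + 1))]
lattice = frozenset(subsets)

def is_filter(F):
    """Upward-closed and closed under intersection (the meet in P(I))."""
    return (all(T in F for S in F for T in subsets if S <= T)
            and all((S & T) in F for S in F for T in F))

# Enumerate every nonempty family of subsets and keep the proper filters.
families = (frozenset(c) for r in range(1, len(subsets) + 1)
            for c in combinations(subsets, r))
proper_filters = [F for F in families if is_filter(F) and F != lattice]

# Ultrafilters are the maximal proper filters.
ultrafilters = [F for F in proper_filters
                if not any(F < G for G in proper_filters)]

# Each ultrafilter is principal, generated by a singleton {i}: the voting
# rule it encodes says "individual i is the dictator".
for U in ultrafilters:
    g = min(U, key=len)
    assert len(g) == 1 and U == frozenset(S for S in subsets if g <= S)
print(len(ultrafilters))  # one ultrafilter per element of I, so 3
```

With |I| = 3 there are only 255 candidate families, so the search is instant; the point is just that every ultrafilter is generated by a single individual, which is exactly the dictatorship conclusion.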

Note that it is crucial here that the set I is finite.  If you assume the axiom of choice (no pun intended ahead of time), then non-principal ultrafilters do exist in the lattice of subsets of an infinite set, and the more abstract-minded people who have thought about Arrow's theorem and ultrafilters have indeed noticed that if you are willing to generalize Arrow's conditions to an infinite electorate, whatever that means, the theorem doesn't generalize to that situation.  The standard existence proof for a non-principal ultrafilter is to use the axiom of choice in the form of Zorn's lemma to establish that any proper filter is contained in a maximal one (i.e. an ultrafilter), then take the set of subsets whose complement (in I) is finite, show it's a filter, and show its extension to an ultrafilter is not principal.  Just for fun, we'll do this in a later post.  I wouldn't summarize the situation by saying "infinite democracies exist", though.  As a sidelight, some people don't like the fact that the existence proof is nonconstructive.

As I said, I'll give the details in a later post.  Here, we want to examine Segre's proposed generalization.  He defines a quantum democracy to be a nonprincipal ultrafilter on the lattice of projections of an "operator-algebraically finite von Neumann algebra".  In the preprint there's no discussion of motivation, nor are there explicit generalizations of unanimity and IIA to corresponding quantum notions.  To figure out such a correspondence for Segre's setup we'd need to convince ourselves that social choice rules, or ones satisfying one or the other of Arrow's properties, are related one-to-one to their sets of decisive coalitions, and then relate properties of the rule (or the remaining property) to the decisive coalitions' forming an ultrafilter.  Nonprincipality is clearly supposed to correspond to nondictatorship.  But I won't try to tease out, and then critique, a full correspondence right now, if one even exists.

Instead, let's look at Segre's main point.  He defines a quantum logic as a non-Boolean orthomodular lattice.  He defines a quantum democracy as a non-principal ultrafilter in a quantum logic.  His main theorem, IV.4, as stated, is that the set of quantum democracies is non-empty.  Thus stated, of course, it can be proved by showing the existence of even one quantum logic that has a non-principal ultrafilter.  These do exist, so the theorem is true.

However, there is nothing distinctively quantum about this fact.  Here, it's relevant that Segre's Theorem IV.3 as stated is wrong.  He states (I paraphrase, to clarify the scope of some quantifiers) that L is an operator-algebraically finite orthomodular lattice all of whose ultrafilters are principal if, and only if, L is a classical logic (i.e. a Boolean lattice).  But this is false.  It's true that to get his theorem IV.4, he doesn't need this equivalence.  But what is a von Neumann algebra?  It's a *-algebra consisting of bounded operators on a Hilbert space, closed in the weak operator topology.  (Or something isomorphic in the relevant sense to one of these.) There are commutative and noncommutative ones.  And there are finite-dimensional ones and infinite-dimensional ones.  The finite-dimensional ones include:  (1) the algebra of all bounded operators on a finite-dimensional Hilbert space (under operator multiplication and the adjoint operation); these are noncommutative for dimension > 1;  (2) the algebra of complex functions on a finite set I (under pointwise multiplication and complex conjugation); and (3) finite products (or, if you prefer the term, direct sums) of algebras of these types.  (Actually we could get away with just type (1) and finite products, since the type (2) ones are just finite direct sums of one-dimensional instances of type (1).)   The projection lattices of the cases (2) are isomorphic to P(I) for I the finite set.  These are the projection lattices for which Arrow's theorem can be proved using the fact that they have no nonprincipal ultrafilters.  The cases (1) are their obvious quantum analogues.  And it is easy to show that in these cases, too, there are no nonprincipal ultrafilters.  Because the lattice of projections of a von Neumann algebra is complete, one can use essentially the same proof as for the case of P(I) for finite I.
So for the obvious quantum analogues of the setups where Arrow's theorem is proven, the analogue of Arrow's theorem does hold, and Segre's "quantum democracies" do not exist.

Moreover, Alex Wilce pointed out to me in email that essentially the same proof as for P(I) with infinite I gives the existence of a nonprincipal ultrafilter for any infinite-dimensional von Neumann algebra:  one takes the set of projections of cofinite rank (i.e. those whose orthocomplementary projection has finite rank), shows it is a filter, extends it (using Zorn's lemma) to an ultrafilter, and shows that the result is not principal.  So, if finite-dimensional von Neumann algebras are precisely those whose lowest-dimensional faithful representations are on finite-dimensional Hilbert spaces (which seems quite likely), then the dividing line between projection lattices of von Neumann algebras that admit Segre-style "democracies" (nonprincipal ultrafilters) and those that don't is precisely the line between finite and infinite dimension, not the one between commutativity and noncommutativity.  I.e., the existence or not of a generalized decision rule satisfying a generalization of the conjunction of Arrow's conditions has nothing to do with quantumness.  (Not that I think it would mean much for social choice theory or voting if it did.)
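The classical prototype of Wilce's construction, the cofinite filter on P(I) with I the naturals, can be sketched concretely.  Here is a minimal illustration (an encoding of my own choosing, not from Segre or Wilce), representing each cofinite set by its finite complement:

```python
# Encode a cofinite subset of the naturals by its finite complement.
# Intersecting two cofinite sets unions their complements, which
# stays finite: the cofinite sets are closed under finite meets.
def meet(comp_a, comp_b):
    return comp_a | comp_b

A = frozenset({0, 1})        # the naturals minus {0, 1}
B = frozenset({1, 2, 3})     # the naturals minus {1, 2, 3}
AB = meet(A, B)              # complement {0, 1, 2, 3}: still cofinite

# Non-principality: any candidate generator can be strictly shrunk
# while remaining cofinite, so the filter has no least element.
def shrink(comp, fresh_point):
    assert fresh_point not in comp       # drop one more element
    return comp | {fresh_point}

assert shrink(AB, 7) > AB                # complement grew, set shrank
```

Upward closure holds for the same reason (a superset of a cofinite set has a smaller complement), and extending this filter to an ultrafilter is where Zorn's lemma comes in.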

(3) I'll say only a little bit here about "quantum psychology".  Some supposedly paradoxical empirical facts are described at the end of the article.  When subjects playing Prisoner's Dilemma are told that the other player will snitch, they always (nearly always? there must be a few mistakes...) snitch.  When they are told that the other player will stay mum, they usually also fink, but sometimes (around 20% of the time---it is not stated whether this is typical of a single individual in repeated trials, or a percentage of individuals in single trials) stay mum.  However, if they are not told what the other player will do, "about 40% of the time" they stay mum.  Emmanuel Pothos and Jerome Busemeyer devised a "quantum model" that reproduces the result.  As described in Sci Am, Pothos interprets it in terms of destructive interference between (amplitudes associated with, presumably) the 100% probability of snitching when the other snitches and the 80% probability of snitching when the other does not, which reduces the probability to 60% when they are not sure whether the other will snitch.  It is a model; they do not claim that quantum physics of the brain is responsible.  However, I think there is a better explanation, in terms of what Douglas Hofstadter called "superrationality", Nigel Howard called "metarationality", and I like to call a Kantian equilibrium concept, after the version of Kant's categorical imperative that urges you to act according to a maxim that you could will to be a universal law.  Simply put, it's the line of reasoning that says "the other guy is rational like me, so he'll do what I do.  What does one do if one believes that?  Well, if we both snitch, we're sunk.  If we both stay mum, we're in great shape.  So we'll stay mum."  Is that rational?  I dunno.  Kant might have argued it is.  But in any case, people do consider this argument, as well, presumably, as the one for the Nash equilibrium.
But in either of the cases where the person is told what the other will do, there is less role for the categorical imperative; one is being put more in the Nash frame of mind.  Now it is quite interesting that people still cooperate a fair amount of the time when they know the other person is staying mum; I think they are viewing the other person's action as the outcome of the categorical-imperative reasoning, and they feel some moral pressure to stay with that reasoning themselves.  Whereas they are easily swayed to dump it completely when told the other person snitched: the other has already betrayed the categorical imperative.  Still, it is a bit paradoxical that people are more likely to cooperate when they are not sure whether the other person is doing so.  I think the uncertainty makes the story that "he will do what I do" more vivid, and the tempting benefit of snitching when the other stays mum less vivid, because one doesn't know *for sure* that the other has stayed mum.  Whether that all fits into the "quantum metaphor" I don't know, but it seems we can get quite a bit of potential understanding here without invoking it.  Moreover, data probably already exist to help explore some of these ideas, namely on how the same individual behaves under the different certain and uncertain conditions, in anonymous trials guaranteed not to involve repetition with the same opponent.
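For what it's worth, the interference arithmetic Pothos describes is easy to reproduce in a toy two-path amplitude model (a sketch of my own, not Pothos and Busemeyer's actual model; the phase is simply fit to the data).  Classically, with equal weights, the unknown case would have to be the average of 1.0 and 0.8, i.e. 0.9; a relative phase between the two amplitudes contributes a cross term that can pull it down to the observed 0.6:

```python
import cmath
import math

p_snitch_if_snitch = 1.0   # other player snitches
p_snitch_if_mum = 0.8      # other player stays mum
p_snitch_unknown = 0.6     # other player's action unknown

# Classical total probability (equal weights) predicts the average:
classical = 0.5 * (p_snitch_if_snitch + p_snitch_if_mum)   # 0.9

# Toy two-path model: superpose the two conditions with a relative
# phase theta.  |a1 + a2*e^{i*theta}|^2 / 2 equals the classical
# average plus a cross term a1*a2*cos(theta), which can be negative.
a1 = math.sqrt(p_snitch_if_snitch)
a2 = math.sqrt(p_snitch_if_mum)

def p_unknown(theta):
    return 0.5 * abs(a1 + a2 * cmath.exp(1j * theta)) ** 2

# Fit the phase to the observed 0.6 (destructive interference):
theta = math.acos((p_snitch_unknown - classical) / (a1 * a2))
assert abs(p_unknown(theta) - p_snitch_unknown) < 1e-9
```

Of course, a model with a free phase fit to one number explains nothing by itself; the substantive question is whether the same phase parameters predict behavior across conditions better than, say, the Kantian story above.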

Less relevant to quantum theory, but perhaps relevant in assessing how important voting paradoxes are in the real world, is an entirely non-quantum point:

(4)  A claim by Piergiorgio Odifreddi, that the 1976 US election is an example of Condorcet's paradox of cyclic pairwise majority voting, is prima facie highly implausible to anyone who lived through that election in the US.  The claim is that a majority would have favored, in two-candidate elections:

Carter over Ford (as in the actual election)

Ford over Reagan

Reagan over Carter

I strongly doubt that Reagan would have beaten Carter in that election.  There is some question of what this counterfactual means, of course:  using polls conducted near the time of the election does not settle the issue of what would have happened in a full general-election campaign pitting Carter against Reagan.  In "Preference Cycles in American Elections", Electoral Studies 13: 50-57 (1994), as summarized in Democracy Defended by Gerry Mackie, political scientist Benjamin Radcliff analyzed electoral data and previous studies concerning the US Presidential elections from 1972 through 1984, and found no Condorcet cycles.  In 1976, the pairwise orderings he found for (in two of the cases, hypothetical) two-candidate elections were Carter > Ford, Ford > Reagan, and Carter > Reagan.  Transitivity is satisfied; no cycle.  Obviously, as I've already discussed, there are issues of methodology, and of how to analyze a counterfactual concerning a general election.  More on this, perhaps, after I've tracked down Odifreddi's article.  Odifreddi is in the Sci Am article because an article by him inspired Gavriel Segre to try to show that such problems with social choice mechanisms like voting might be absent in a quantum setting.
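The cycle check itself is trivial.  A quick sketch (my own) comparing Radcliff's reported 1976 pairwise majorities with the cyclic pattern Odifreddi claims:

```python
from itertools import permutations

def has_condorcet_cycle(beats, candidates):
    """True if some triple a, b, c has a beats b, b beats c, c beats a."""
    return any((a, b) in beats and (b, c) in beats and (c, a) in beats
               for a, b, c in permutations(candidates, 3))

candidates = ["Carter", "Ford", "Reagan"]

# Radcliff's reported pairwise majorities: transitive, hence no cycle.
radcliff = {("Carter", "Ford"), ("Ford", "Reagan"), ("Carter", "Reagan")}

# The pattern Odifreddi claims: a Condorcet cycle.
odifreddi = {("Carter", "Ford"), ("Ford", "Reagan"), ("Reagan", "Carter")}

assert not has_condorcet_cycle(radcliff, candidates)
assert has_condorcet_cycle(odifreddi, candidates)
```

With three candidates the two patterns differ in only one pairwise matchup (Carter vs. Reagan), which is exactly the counterfactual in dispute.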

Odifreddi is cited by Musser as pointing out that democracies usually avoid Condorcet paradoxes because voters tend to line up on an ideological spectrum---until I see more evidence, I'm sceptical that this was not also the case in 1976 in the US.  I also have some doubt about the claim that Condorcet cycles are the cause of democracy "becoming completely dysfunctional" in "politically unsettled times", or indeed that it does become completely dysfunctional in such times.  But I must remember that Odifreddi is from the land of Berlusconi.  Then again, I doubt cycles are the main issue with him...

|

# Physics and philosophy: a civil and enlightening discussion

So, more on physics and philosophy:  this discussion thread involving Wayne Myrvold, Vishnya Maudlin, and Matthew Leifer is a model of civil discussion in which it looks like mutual understanding is increased, and that should be enlightening, or at least clarifying, to "listeners".  Matthew makes a point I made in my previous post:

Matthew Leifer [...] Wayne, I disagree with you that studying the foundations of quantum theory is philosophy. It is physics, it is just that most physicists do not realize that it is physics yet. Of course, there are some questions of a more philosophical nature, but I would argue that the most fertile areas are those which are not obviously purely philosophy.

Wayne Myrvold (June 12 at 6:42am)

Ah, but Matt, but part of the main point of the post was that we shouldn’t worry too much about where we draw the boundaries between disciplines. It’s natural philosophy in the sense of Newton, not counted as physics by many physicists, and may one day will be regarded as clearly part of physics by the physics community—- does it really matter what we call it? [...]

Matthew's response: "Well, it matters a lot on a personal level if you are trying to get a job doing foundations of quantum theory in a physics department :) More seriously, I think there is a distinction to be made between studying the foundations of a theory in order to better comprehend the theory as it presently exists and studying them in order to arrive at the next theory."

Matthew puts a smiley face on the first sentence, and continues "More seriously..." But I think this is more serious than he is letting on here. In my view, thinking about M-theory and string theory and thinking about the foundations of quantum theory are roughly evenly matched as far as their likelihood (by which I mean probability) of giving rise to genuine progress in our understanding of the world (I'd give quantum foundations the advantage by about a factor of 10.) In fact, thinking about quantum foundations led David Deutsch to come up with what is pretty much our present concept of universal quantum computation. Yet you basically can't do it in a US physics department without spending much of your time on something else in order to get tenure. This is part of why I'm not just annoyed, but more like outraged, when I read pronouncements like Hawking's about philosophy being dead.

As with Wayne's post on which this thread comments, I thank Matthew Leifer for the link to this thread. Do read the whole thing if you find this topic area at all interesting as there are several other excellent and clearly expressed insights in it.

|