Deutsch, Popper, Gelman and Shalizi, with a side of Mayo, on Bayesian ideas, models and fallibilism in the philosophy of science and in statistics (I)

A few years back, when I reviewed David Deutsch's The Beginning of Infinity for Physics Today (see also my short note on the review at this blog), I ended up spending a fair amount of time revisiting an area of perennial interest to me: the philosophy of science, and the status of Popper's falsificationist and anti-inductive view of scientific reasoning. I tend to like the view that one should think of scientific reasoning in terms of coherently updating subjective probabilities, which might be thought of as Bayesian in a broad sense. (Broad because it might be more aligned with Richard Jeffrey's point of view, in which any aspect of one's probabilities might be adjusted in light of experience, rather than a more traditional view on which belief change is always and only via conditioning the probabilities of various hypotheses on newly acquired data, with one's subjective probabilities of data given the hypotheses never adjusting.) I thought Deutsch didn't give an adequate treatment of this broadly Bayesian attitude toward scientific reasoning, and wrote:

Less appealing is Deutsch and Popper’s denial of the validity of inductive reasoning; if this involves a denial that evidence can increase the probability of general statements such as scientific laws, it is deeply problematic. To appreciate the nature and proper role of induction, one should also read such Bayesian accounts as Richard Jeffrey’s (Cambridge University Press, 2004) and John Earman’s (MIT Press, 1992).

Deutsch and Popper also oppose instrumentalism and physical reductionism but strongly embrace fallibilism. An instrumentalist believes that particular statements or entities are not literally true or real, but primarily useful for deriving predictions about other matters. A reductionist believes that they have explanations couched in the terms of some other subject area, often physics. Fallibilism is the view that our best theories and explanations are or may well be false. Indeed many of the best have already proved not to be strictly true. How then does science progress? Our theories approximate truth, and science replaces falsified theories with ones closer to the truth. As Deutsch puts it, we “advance from misconception to ever better misconception.” How that works is far from settled. This seems to make premature Deutsch’s apparent dismissal of any role for instrumentalist ideas, and his neglect of pragmatist ones, according to which meaning and truth have largely to do with how statements are used and whether they are useful.

Thanks to Brad DeLong I have been reading a very interesting paper from a few years back by Andrew Gelman and Cosma Shalizi, "Philosophy and the practice of Bayesian statistics", that critiques the Bayesian perspective on the philosophy of science from a broadly Popperian---they say "hypothetico-deductive"---point of view that embraces (as did Popper in his later years) fallibilism (in the sense of the quote from my review above). They are particularly concerned to point out that the increasing use of Bayesian methods in statistical analysis should not necessarily be interpreted as supporting a Bayesian viewpoint on the acquisition of scientific knowledge more generally. That point is well taken; indeed I take it to be similar to my point in this post that the use of classical methods in statistical analysis need not be interpreted as supporting a non-Bayesian viewpoint on the acquisition of knowledge. From this point of view, statistical analysis, whether formally Bayesian or "classical", is an input to further processes of scientific reasoning; the fact that Bayesian or classical methods may be useful at some stage of statistical analysis of the results of some study or experiment does not imply that all evaluation of the issues being investigated must be done by the same methods. While I was most concerned to point out that use of classical methods in data analysis does not invalidate a Bayesian (in the broad sense) point of view toward how the results of that analysis should be integrated with the rest of our knowledge, Gelman and Shalizi's point is the mirror image of this. Neither of these points, of course, is decisive for the "philosophy of science" question of how that broader integration of new experience with knowledge should proceed.

Although it is primarily concerned to argue against construing the use of  Bayesian methods in data analysis as supporting a Bayesian view of scientific methods more generally, Gelman and Shalizi's paper does also contain some argument against Bayesian, and more broadly "inductive", accounts of scientific method, and in favor of a broadly Popperian, or what they call "hypothetico-deductive" view.  (Note that they distinguish this from the "hypothetico-deductive" account of scientific method which they associate with, for instance, Carl Hempel and others, mostly in the 1950s.)

To some extent, I think this argument may be reaching a point that is often reached when smart people, indeed smart communities of people, discuss, over many years, fundamental issues like this on which they start out with strong differences of opinion: positions become more nuanced on each side, and effectively closer, but each side wants to keep the labels they started with, perhaps in part as a way of pointing to the valid or partially valid insights that have come from "their" side of the argument (even if they have come from the other side as well in somewhat different terms), and perhaps also as a way of avoiding admitting having been wrong in "fundamental" ways. For example, one sees insights similar to those in the work of Richard Jeffrey and others from a "broadly Bayesian" perspective, about how belief change isn't always via conditionalization using fixed likelihoods, also arising in the work of the "hypothetico-deductive" camp, where they are used against the simpler "all-conditionalization-all-the-time" Bayesianism. Similarly, Popperian ideas probably played a role in converting some "relatively crude" inductivists to a more sophisticated Bayesian or Jeffreyan approach. (Nelson Goodman's "Fact, Fiction, and Forecast", with its celebrated "paradox of the grue emeralds", probably played this role a generation or two later.) Roughly speaking, the "corroboration" of hypotheses of which Popper speaks involves not just piling up observations compatible with the hypothesis (a caricature of "inductive support") but rather the passage of stringent tests.
In the straight "falsification" view of Popper, these tests are stringent because there is a possibility they will generate results inconsistent with the hypothesis, thereby "falsifying" it; on the view which takes this as pointing toward a more Bayesian view of things (I believe I once read something by I. J. Good in which he said that this was the main thing to be gotten from Popper), it might be relaxed to the statement that there are outcomes that are very unlikely if the hypothesis is true, so that observing them has the potential, at least, to lead to a drastic lowering of the posterior probability of the hypothesis (perhaps we can think of this as a softer version of falsification). The posterior probability given that such an outcome is observed does not, of course, depend only on the prior probability of the hypothesis and the probability of the data conditional on the hypothesis---it also depends on many other probabilities. So, for instance, one might also want such a test to have the property that "it would be difficult (rather than easy) to get an accordance between data x and H (as strong as the one obtained) if H were false (or specifiably flawed)". The quote is from this post on Popper's "Conjectures and Refutations" by philosopher of science D. G. Mayo, who characterizes it as part of "a modification of Popper". ("The one obtained" refers to an outcome in which the hypothesis is considered to pass the test.) I view the conjunction of these two aspects of a test of a hypothesis or theory as rather Bayesian in spirit. (I do not mean to attribute this view to Mayo.)
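This "softer" falsification can be sketched numerically. The numbers below are purely illustrative (they come from me, not from Mayo or from Gelman and Shalizi): Bayes' rule shows that observing an outcome that is very unlikely under H drastically lowers H's posterior probability only when that outcome would have been easy to get were H false, which is just the stringency condition quoted above.

```python
def posterior(prior_h, p_x_given_h, p_x_given_not_h):
    """Posterior probability of H after observing outcome x, by Bayes' rule."""
    joint_h = prior_h * p_x_given_h
    joint_not_h = (1.0 - prior_h) * p_x_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Outcome x is very unlikely if H is true, but easy to get if H is false:
# observing it drastically lowers the posterior ("soft falsification").
print(posterior(0.5, 0.01, 0.50))   # ~0.02

# The same unlikely outcome, but one just as hard to get if H were false:
# the posterior does not move at all, so the test has no stringency.
print(posterior(0.5, 0.01, 0.01))   # 0.5
```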
I'll focus later---most likely in a follow-up post---on Gelman and Shalizi's direct arguments against inductivism and more broadly Bayesian approaches to scientific methodology and the philosophy of science. First I want to focus on a point that bears on these questions but arises in their discussion of Bayesian data analysis. It is that in actual Bayesian statistical data analysis "the prior distribution is one of the assumptions of the model and does not need to represent the statistician's personal degree of belief in alternative parameter values". They go on to say "the prior is connected to the data, so is potentially testable". It is presumably just this sort of testing that Matt Leifer was referring to when he wrote (commenting on my earlier blog entry on Bayesian methods in statistics):

"What I often hear from statisticians these days is that it is good to use Bayesian methods, but classical methods provide a means to check the veracity of a proposed Bayesian method. I do not quite understand what they mean by this, but I think they are talking at a much more practical level than the abstract subjective vs. frequentist debate in the foundations of probability, which obviously would not countenance such a thing."

The point Gelman and Shalizi are making is that the Bayesian prior being used for data analysis may not capture "the truth", or, more loosely (since they take seriously the possibility that no model under consideration is literally true), may not adequately capture those aspects of the truth one is interested in; for example, it may not be good at predicting the things one cares about. Hence one wants some kind of test of whether the model is acceptable. This can be based on using the Bayesian posterior distribution as a model to be tested further, typically with classical tests such as "pure significance tests".
As Matt's comment above might suggest, those of us of more Bayesian tendencies might agree that the particular family of priors---and potential posteriors---used in data analysis (qua "parameter fitting", where perhaps we think of the prior distribution as the higher-level "parameter" being fit) may well not "contain the truth". Even so, we might be able to take these tests of the model, even if done using some classical statistic, as fodder for further, if perhaps less formal, Bayesian/Jeffreyan reasoning about which hypotheses are likely to do a good job of predicting what is of interest.
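A minimal, purely illustrative sketch of this kind of model check (the data, model, and test statistic here are my own assumptions, not taken from Gelman and Shalizi's paper): fit a deliberately false normal model to heavy-tailed data by formal Bayesian conditioning, then run a pure significance test on the fitted model by comparing a test statistic computed on the observed data against its distribution over replicated datasets drawn from the posterior predictive distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data: actually heavy-tailed, so a normal model is "false".
y = rng.standard_t(df=2, size=100)

# Bayesian fit of a normal model with known unit variance and a flat
# prior on the mean: the posterior for mu is then N(ybar, 1/n).
n, ybar = len(y), y.mean()

# Posterior predictive check: simulate replicated datasets from the
# fitted model and compare a test statistic (here, max |y|) with its
# observed value -- a "pure significance test" on the fitted model.
t_obs = np.abs(y).max()
t_rep = []
for _ in range(1000):
    mu = rng.normal(ybar, np.sqrt(1.0 / n))    # draw mu from the posterior
    y_rep = rng.normal(mu, 1.0, size=n)        # replicate the data
    t_rep.append(np.abs(y_rep).max())

p_value = np.mean(np.array(t_rep) >= t_obs)
print(f"posterior predictive p-value: {p_value:.3f}")
# A p-value near 0 signals that the normal model fails to capture the
# tails of the data, even though the fit itself was formally Bayesian.
```

The point of the sketch is that the Bayesian machinery (prior, conditioning, posterior) and the classical-flavored check (a significance test on a discrepancy statistic) coexist in one analysis, which is just the situation discussed above.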

One of the most interesting things about Gelman and Shalizi's paper is that they are thinking about how to deal with "fallibilism" (Popper's term?), in particular, inference about hypotheses that are literally false but useful. This is very much in line with recent discussion at various blogs of the importance of models in economics, where it is clear that the models are so oversimplified as to be literally false, but nonetheless they may prove predictively useful. (The situation is complicated, however, by the fact that the link to prediction may also be relatively loose in economics; but presumably it is intended to be there somehow.) It is not very clear how Popperian "falsificationism" is supposed to adapt to the fact that most of the hypotheses that are up for falsification are already known to be false. Probably I should go back and see what Popper had to say on that score, later in his career when he had embraced fallibilism. (I do recall that he tried introducing a notion of "verisimilitude", i.e. some kind of closeness to the truth, and that the consensus seems to have been---as Gelman and Shalizi point out in a footnote---that this wasn't very successful.) It seems to me that a Bayesian might want to say one is reasoning about the probability of statements like "the model is a good predictor of X in circumstances Y", "the model does a good job capturing how W relates to Z", and so forth. It is perhaps statements like these that are really being tested when one does the "pure significance tests" advocated by Gelman and Shalizi when they write things like "In designing a good test for model checking, we are interested in finding particular errors which, if present, would mess up particular inferences, and devise a test statistic which is sensitive to this sort of mis-specification."

As I said above, I hope to take up Gelman and Shalizi's more direct arguments (in the cited paper) against "inductivism" (some of which I may agree with) and Bayesianism sensu lato as scientific methodology in a later post. I do think their point that the increasing use of Bayesian analysis in actual statistical practice, such as estimation of models by calculating a posterior distribution over model parameters beginning with some prior, via formal Bayesian conditioning, does not necessarily tell in favor of a Bayesian account of scientific reasoning generally, is important. In fact this point is important for those who do hold such a loosely Bayesian view of scientific reasoning:  most of us do not wish to get stuck with interpreting such priors as the full prior input to the scientific reasoning process.  There is always implicit the possibility that such a definite specification is wrong, or, when it is already known to be wrong but thought to be potentially useful for some purposes nonetheless, "too wrong to be useful for those purposes".

Nagel and DeLong II: Fallibility and Transcendence

In my first post on Brad DeLong's series of criticisms of Thomas Nagel's new book Mind and Cosmos I focused not on Brad's initial criticism but on a later post that seemed to imply Nagel put too much weight on "common sense". In this post I'll focus on Brad's initial criticism, and in particular on what seems to me his misunderstanding of Nagel's arguments concerning reason, as crucially dependent on the notion that reason is infallible, at least in some cases.

Brad's critique began with a reaction to some remarks by Tyler Cowen, in particular Cowen's assertion that "People will dismiss his [Nagel's] arguments to remain in their comfort zone, while temporarily forgetting he is smarter than they are and furthermore that many of their views do not make sense or cohere internally."

Now I think it is unfortunate that Cowen is speculating about who's "smarter than" who, and unfortunate that Brad joins him in doing so.  Everyone involved seems to be quite smart, but unfortunately Brad seems to me to be misunderstanding what the main thrust of Nagel's argument is, and where its main weakness lies.  DeLong reacts to Cowen:

And here Tyler appears to me to have gone off the rails. Thomas Nagel is not smarter than we are--in fact, he seems to me to be distinctly dumber than anybody who is running even an eight-bit virtual David Hume on his wetware.

Brad fixates on a single example taken from Nagel's book and, I think, fails to understand the role Nagel thinks this example plays. He seems to think it is crucial that Nagel view reason as infallible. "And my certainty that I know must be correct!" as he puts it in his gloss on Nagel:

Nagel's argument, to the extent that I understand it and that it is coherent, goes roughly like this:

Suppose we think we are going south-southwest and see the sun rising before us. We don't think: "the heuristics of reasoning that have evolved because they tend to boost reproductive fitness conclude that it is very likely that I am not in fact going south-southwest". We think, instead: "I know that the sun rises in front of me when I am going east! Either I am hallucinating, or I must be going roughly east! I deduce this by my reason, and my reason is a mechanism that can see that the algorithm it follows is truth-preserving! My mind is in immediate contact with the rational order of the universe! I don't just think I am going east! I know I am either hallucinating or going east! And my certainty that I know must be correct! And I know that my certainty must be correct--and that triumph of reason cannot be given a purely physical explanation! Since I believe I am not hallucinating, I abandon the belief that I am going south-southwest because of my reason's transcendent grasp of objective reality! My consciousness is an instrument of transcendence that grasps objective reality! And no blind evolutionary process can produce such a transcendent instrument!"

Aspects of this example of Nagel's bothered me as well, but it plays a much less central part in Nagel's book than you might think from Brad's gloss. Note that even the material in quotes is a gloss, not a quote from Nagel, although it draws fairly heavily on him. (Further down in this post, I quote the passage in Nagel from the first appearance of the example to the last explicit reference to it.  Brad's post also contains further material on the general topic of reason directly quoted from Nagel.)  In the context of Brad's post, the term "Nagel's argument" seems to imply that this is the main argument of the book, on which it stands or falls.

As I said, I don't think the example is central. Rather, it is intended as an example of something central to the book, which is the claim that reason has the power --- imperfect and fallible, to be sure --- to get us in touch with objective reality, in a way that helps us transcend the appearance of things from our own particular viewpoint or perspective. It is probably also intended as part of a discussion of how logic --- the avoidance of contradiction --- is an essential part of our ability to engage in more subtle and substantive forms of reasoning. I will discuss this second point later, concentrating for the moment on the first.

Nagel is extremely clear that he does not believe that reason's power to help get us in touch with objective reality is infallible. (See the next quote I display from Nagel for an utterly explicit statement of this.) It may seem that he is claiming it to be infallible in the driving example, but even if he is, that does not seem crucial to his main line of argument. Most of Brad's ridicule of Nagel's argument is directed against the claim of infallibility, so it just misses its target if by "Nagel's argument" is meant, as is clear from the context, the overall line of argument of Nagel's book.

The overall argument of the book is not a single line of reasoning. But some main strands concern the nature and origin of life, of consciousness, and---what is under discussion here---of reason. Here is how Nagel puts his main argument concerning the nature of reason:

Thought and reasoning are correct or incorrect in virtue of something independent of the thinker's beliefs, and even independent of the community of thinkers to which he belongs. We take ourselves to have the capacity to form true beliefs about the world around us, about the timeless domains of logic and mathematics, and about the right thing to do. We don't take these capacities to be infallible, but we think they are often reliable, in an objective sense, and that they can give us knowledge. [Mind and Cosmos, pp. 80-81]

Perhaps some confusion has arisen because of Nagel's use of the word "reliable" elsewhere in the book (e.g. in the excerpt DeLong quotes); it should not be taken to imply infallibility. In Brad's favor, the "directness" with which, in this particular example, Nagel says reason "puts us in touch with the rational order of things" is something Nagel does take to strengthen his case. I just don't think it's the main point.

Lest anyone misunderstand, I don't agree with Nagel that our understanding of reason as part of a process enabling us to --- fallibly, Nagel admits, and partially, I might add --- get in touch with an objective reality that transcends each of our particular perspectives on it, provides support to the view that reason could not have evolved through natural selection.

Brad goes on to propose a counterexample to the claim that the bit of reasoning in Nagel's example is infallible. It is that "During northern hemisphere winter, if you are near the North Pole, it is perfectly possible to see the sun rise due south if you are due solar north of the center of the earth as you come out of the Earth's shadow. And I was. And I did."

Several things can be said about this. The most important one is that Nagel need not and does not claim infallibility. Less important is that Nagel explicitly described his example as one in which "I am driving...". Brad was flying. So Brad's "it happened to me" is not literally true. Moreover the distinction between flying and driving is not an irrelevant one (like that between its being Nagel or DeLong who is doing the reasoning...) but one that is probably relevant. I don't know whether there are any roads near enough to the North Pole that one could, driving, have the experience Brad did. Perhaps there is land, or at certain times of the year enough sea ice, near enough to the pole that one could do this by driving a long way off-road, or by bringing a vehicle in by air. Or perhaps not. I really don't think it matters much. There are some background assumptions that are not made explicit, though suggested by the framing of the situation, as there are in most pieces of reasoning.

Here's Nagel's introduction of the example, and its sequel:

But suppose I observe a contradiction among my beliefs and "see" that I must give up at least one of them. (I am driving south in the early morning and the sun rises on my right.) In that case, I see that the contradictory beliefs cannot all be true, and I see it simply because it is the case. I grasp it directly. It is not adequate to say that, faced with a contradiction, I feel the urgent need to alter my beliefs to escape it, which is explained by the fact that avoiding contradictions, like avoiding snakes and precipices, was fitness-enhancing for my ancestors. That would be an indirect explanation of how the impossibility of the contradiction explains my belief that it cannot be true. But even if some of our ancestors were prey to mere logical phobias and instincts, we have gone beyond that: We reject a contradiction just because we see that it is impossible, and we accept a logical entailment just because we see that it is necessarily true.

In ordinary perception, we are like mechanisms governed by a (roughly) truth-preserving algorithm. But when we reason, we are like a mechanism that can see that the algorithm it follows is truth-preserving. Something has happened that has gotten our minds into immediate contact with the rational order of the world, or at least with the basic elements of that order, which can in turn be used to reach a great deal more. That enables us to possess concepts that display the compatibility or incompatibility of particular beliefs with general hypotheses. We have to start by regarding our prereflective impressions as a partial and perspectival view of the world, but we are then able to use reason and imagination to construct candidates for a larger conception that can contain and account for that part. This applies in the domain of value as well as of fact. The process is highly fallible, but it could not even be attempted without this hard core of self-evidence, on which all less certain reasoning depends. In the criticism and correction of reasoning, the final court of appeal is always reason itself.

What this means is that if we hope to include the human mind in the natural order, we have to explain not only consciousness as it enters into perception, emotion, desire, and aversion but also the conscious control of belief and conduct in response to the awareness of reasons---the avoidance of inconsistency, the subsumption of particular cases under general principles, the confirmation or disconfirmation of general principles by particular observations, and so forth. This is what it is to allow oneself to be guided by the objective truth rather than just by one's impressions. It is a kind of freedom---the freedom that reflective consciousness gives us from the rule of innate perceptual and motivational dispositions together with conditioning. Rational creatures can step back from these influences and try to make up their own minds. I set aside the question whether this kind of freedom is compatible or incompatible with causal determinism, but it does seem to be something that cannot be given a purely physical analysis and therefore, like the more passive forms of consciousness, cannot be given a purely physical explanation either.

If I decide, when the sun rises on my right, that I must be driving north instead of south, it is because I recognize that my belief that I am driving south is inconsistent with that observation, together with what I know about the direction of rotation of the earth. I abandon the belief because I recognize that it couldn't be true. If I put money into a retirement account because the future income it generates will be more valuable to me than what I could spend it on now, I act because I see that this makes it a good thing to do. If I oppose the abolition of the inheritance tax, it is because I recognize that the design of property rights should be sensitive not only to autonomy but also to fairness. As the saying goes, I operate in the space of reasons. [Mind and Cosmos, pp. 91-92]

Gene Callahan criticizes Brad as follows:

So Nagel gives us two beliefs:
1) The sun rises in the east (where I am); and
2) I am driving south, which means the east will be on my left.
And a fact: But the sun is rising to my right!

So Nagel's point is that we cannot continue to hold 1) and 2) simultaneously: "I must give up at least one of them." How could he have said that more plainly?

Then Nagel goes on to state that "IF" (notice, that "if" is right in the original text, I did not add it!) he decides to give up belief 2), it will be because he sees he cannot logically hold 1) and 2) at the same time. Notice what the "if" implies: Nagel clearly understands that he has the option of giving up belief 1) instead! Otherwise, no point to the "if."

Now, Brad Delong comes along and says, "What an idiot! [And he really does insult Nagel like that.] Once, I was in that situation, and I had to give up belief 1)!"

Ahem. One does not disprove the proposition that one ought to give up at least one of two contradictory beliefs by showing how once, one gave up one of two contradictory beliefs.

Brad's response:

Nagel does not believe: "the sun rises in the east (where I am)." Nagel believes: "the sun rises on my right".

Thus the two beliefs that Nagel's reason tells him are in conflict are (a) his belief that he is going south, and (b) his belief he sees the sun rising on his right. The choice he gives himself is between concluding that he is going north and concluding that he is hallucinating.

Now I understand that Callahan wishes that Nagel were not Nagel but rather some Nagel' who had added a third belief: (c) "I am in a normal place (but there are weird places on earth where the sun rises in a non-standard way)."

But we go to argument with the Nagel we have, and not the Nagel' Callahan wishes we had.

Callahan would presumably say that Nagel was just being sloppy, and that there is actually an unsloppy Nagel' who had made the argument that Callahan wishes he had made, and whose reason does have transcendental access to objective reality, and that we should deal with the argument not of Nagel but of Nagel'.

But Callahan's confusion of the Nagel' he wishes we were talking about with the Nagel who we are talking about demonstrates my big point quite effectively: powerful evidence that Nagel is a jumped-up monkey using wetware evolved to advance his reproductive fitness, rather than a winged angelic reasoning being with transcendental access to objective reality. No?

I think Callahan is roughly right here. Roughly, because it's not obviously correct that "Nagel gives us two beliefs". Callahan's (2) is stated in Nagel's parenthetical introduction of the example (see the quote above). But the parenthetical introduction is probably best read as describing the situation, not explicitly attributing beliefs ("I am driving south", not "I believe I am driving south"). It's clear we're to take as implicit that the subject of the example believes this, though, and when Nagel returns to the example later in the passage I quoted it is made explicit: this is the belief that is given up. That return to the example also makes it clear that there are background beliefs not initially mentioned in Nagel's parenthetical introduction of the example: Nagel mentions "what I know about the direction of rotation of the earth". This is presumably where Callahan gets his (1), namely "The sun rises in the east (where I am)." That seems a correct reading of Nagel, so DeLong's "Nagel does not believe: 'the sun rises in the east (where I am).' Nagel believes: 'the sun rises on my right'" just seems wrong. The Nagel of the example believes both of these things (if we understand "the sun rises on my right" to mean something like "the sun is rising on my right"). Brad's misinterpretation is probably based on taking the parenthetical sentence introducing the example as a statement of the two beliefs that are in contradiction, rather than as a sketch of a situation in which "I observe a contradiction in my beliefs". (Brad also changes Nagel's "driving south" to "going south", which affects, as I discussed above, whether Brad's flying experience is relevant.)

I think Nagel is getting at several things with this example, in light of the surrounding discussion.

(1) One is the idea that deductive reason helps us access truths about the world that go beyond our own particular perspective on it, because the avoidance of inconsistency is integral to the use of language, which in turn makes it possible to describe how the world is or might be from a point of view that is not just the perspective of one being. I am not sure what Brad means by "transcendent access" to objective reality---it may just be a rhetorical flourish, like "winged" and "angelic". The term "transcendent access" does not appear in Mind and Cosmos. When Nagel uses words with the root "transcend", he is referring to transcending a limited point of view to come up with a view of the world "as it is independently of the thinker's beliefs and even independently of the community of thinkers to which he belongs." (He also uses it---probably in the same sense---to refer to "a transcendent being", a notion he finds unappealing.) In his description of "what it is to be guided by the objective truth" toward the end of the long quote above, he is quite clear that this involves observations and (broadly speaking) "inductive" reasoning ("confirmation and disconfirmation"), and earlier he mentions "imagination" in addition to reasoning. (So if Brad's "Humean heuristics" just means inductive reasoning broadly construed, then it looks like Nagel's on board with it.) When reading the discussion of the driving example in Mind and Cosmos and related passages in The Last Word, I have sometimes felt puzzled about why Nagel seems to be laying such emphasis on deductive reasoning. And in general, I'm slightly frustrated by the relative lack of discussion of induction and related non-deductive aspects of scientific reasoning in Nagel's writings. But I think the quoted passages make clear that for Nagel, reason comprises induction too.
I think the reason for his stress on deduction and consistency is the importance---as Nagel sees it---of language, and of language's intimate link with logic, to the very formulation of theories and hypotheses, scientific and otherwise. Nagel's emphasis on "directness" in simple cases may or may not be misplaced, but I don't think it's the linchpin of his broader argument.

(2) Secondly, and perhaps more controversially, Nagel believes that we must conceive of our reasoning as autonomous and free---that we cannot view it as a mere disposition. A mere disposition is how Hume, on one reading, viewed "induction", if not deduction... here I think Nagel would disagree with Hume, and perhaps with DeLong, if "mere disposition" is what DeLong means by "Humean heuristics". The key point, for Nagel, is that "In the criticism and correction of reasoning, the final court of appeal is always reason itself." The theory of evolution itself is part of that objective picture of reality, transcending our individual perspectives on it, that reason enables us to arrive at. For Nagel, it would be absurd to let a belief in evolution by natural selection undermine our view that our reasoning, in conjunction with imagination and observation, can and does get us in touch with objective reality, because our very belief in evolution relies on this view. It is this, and not infallibility or "transcendent access" (a term Nagel never uses), that is most important, and that I think is crucial to his broader argument.

Note that this does not automatically imply that a belief in evolution by natural selection cannot modify our assessment of our reasoning, perhaps leading us to view particular judgments or modes of reasoning as suspect because they arise from heuristics that we can see to be reliable only in situations similar to those in which they evolved. Indeed, Nagel seems overly impressed with this possibility---one of his main grounds for rejecting the notion that there could be an evolutionary-biological explanation of the advent of reason in humans is his view that such an explanation would necessarily undermine our assessment that the reasoning we exercise in conjunction with our other faculties actually is, on balance, tending to get us in touch with objective reality.

Brad does address some of these issues, in response to Callahan's pointing out that they are the main ones; I will take up that part of their discussion in a later post.

Let me here take up an element of the quoted passage from Nagel that is bound to have raised some hackles.
Nagel: "I set aside the question whether this kind of freedom [to decide what to believe and how to act for reasons, i.e. by reasoning -- HB] is compatible or incompatible with causal determinism, but it does seem to be something that cannot be given a purely physical analysis and therefore, like the more passive forms of consciousness, cannot be given a purely physical explanation either."

This really is a key argument for Nagel. However, in my view, it needs to be understood in terms of the subtleties of emergence. As I have written elsewhere, I think there is some crucial unclarity in Nagel (or in my understanding of him) about what "purely physical explanation" might mean. If it means "explanation in terms of the concepts of physics", then I suspect that the hypothesis that "this kind of freedom [...] cannot be given a purely physical explanation [...]" is correct. (However, I think I still have a substantive disagreement with Nagel on the meaning of "explanation".) But if we allow an evolutionary-biological explanation to use concepts like "reason" (which seems rather reasonable if one is going to try to explain the origin of our ability to reason), it seems to me that this is compatible with our eventually having an evolutionary-biological explanation of its historical origin. Here also I think I disagree with Nagel, who sometimes refers to "physics extended to include biology", suggesting that to him an evolutionary-biological explanation is a kind of purely physical explanation. I've discussed this some in my first post on Mind and Cosmos, and will discuss it more in future posts. There are deep and subtle philosophical and scientific questions involved, but in my view it is here if anywhere that Nagel goes importantly astray in dealing with reason, and not primarily in some actual or putative attribution of infallibility to simple judgements of contradiction, nor even in the notion (which Nagel does appear to subscribe to, but which I'm not sure I want to endorse) that the faculty of avoiding contradiction involves our minds being in "immediate contact with the rational order of things" [Mind and Cosmos, p. 91].

 

 

Thomas Nagel's "Mind and Cosmos"

I've just finished reading Thomas Nagel's newish book, "Mind and Cosmos" (Oxford, 2012). It's deeply flawed, but in spite of its flaws some of the points it makes deserve more attention, especially in the broader culture, than they're likely to receive in the context of a book that's gotten plenty of people exercised about its flaws. I'm currently undecided about whether to recommend reading the book for these points, as they are probably made equally well elsewhere---without the distracting context, and possibly better formulated---notably in Nagel's "The Last Word" (Oxford, 1997). The positive points are the emphasis on the reality of mental phenomena and (more controversially) their irreducibility to physical or even biological terms, the unacceptability of viewing the activities of reason in similarly reductive terms, and a sense that mind and reason are central to the nature of reality. Its greatest flaws are an excessively reductionist view of the nature of science and, to some degree in consequence of this, an excessive skepticism about the potential for evolutionary explanations of the origins of life, consciousness, and reason.

One of the main flaws of Nagel's book is that he seems---very surprisingly---to view explanations in terms of, say, evolutionary biology as "reductively materialist". He seems not to appreciate the degree to which the "higher" sciences involve "emergent" phenomena, not reducible---or not, in any case, reduced---to the terms of sciences "below" them in the putative reductionist hierarchy. Of course there is no guarantee that explanations in terms of these disciplines' concepts will not be replaced by explanations in terms of the concepts of physics, but it has not happened, and may well never happen. The rough picture is that the higher disciplines involve patterns or structures formed, if you like, out of the material of the lower ones, but the concepts in terms of which we deal with these patterns or structures are not those of physics; they are higher-order ones. And these structures and their properties---described in the language of the higher sciences, not of physics---are just as real as the entities and properties of physics. My view---and while it is non-reductionist, I do not think it is hugely at variance with that of many, perhaps most, scientists who have considered the matter carefully---is that at a certain very high level, some of these patterns have genuine mental aspects. I don't feel certain that we will explain, in some sense, all mental phenomena in terms of these patterns, but neither does it seem unreasonable that we might. ("Explanation" in this sense needn't imply the ability to predict perfectly (or even very well), nor, as is well known, need the ability to predict perfectly be viewed as providing us with a full and adequate explanation---simulation, for example, is not necessarily understanding.)
Among scientists and philosophers who, like Nagel, hold a broadly "rationalist" worldview, David Deutsch, in his books The Fabric of Reality and especially The Beginning of Infinity, is much more in touch with the non-reductionist nature of much of science.

Note that none of this means there isn't in some sense a "physical basis" for mind and reason.  It is consistent with the idea that there can be "no mental difference without a physical difference", for example (a view that I think even Nagel, however, agrees with).

This excessively reductionist view of modern science can also be found among scientists and popular observers of science, though it is far from universal.   It is probably in part, though only in part, responsible for two other serious flaws in Nagel's book.  The first of these is his skepticism about the likelihood that we will arrive at an explanation of the origin of life in terms of physics, chemistry, and perhaps other sciences that emerge from them---planetary science, geology, or perhaps some area on the borderline between complex chemistry and biology that will require new concepts, but not in a way radically different from the way these disciplines themselves involve new concepts not found in basic physics.  The second is his skepticism that the origins of consciousness and reason can be explained primarily in terms of biological evolution.  I suspect he is wrong about this.  The kind of evolutionary explanation I expect is of course likely to use the terms "consciousness" and "reason" in ways that are not entirely reductive.   I don't think that will prevent us from understanding them as likely to evolve through natural selection.   I expect we will see that to possess the faculty of reason, understood (with Nagel) as having the---fallible, to be sure!---power to help get us in touch with a reality that transcends, while including, our subjective point of view, confers selective advantage.  Nagel is aware of the possibility of this type of explanation but --- surprisingly, in my view --- views it as implausible that it should be adaptive to possess reason in this strong sense, rather than just some locally useful heuristics.

The shortcomings in his views on evolution and the potential for an evolutionary explanation of life, consciousness, and reason deserve more discussion, but I'll leave that for a possible later post.

The part of Nagel's worldview that I like, and that may go underappreciated by those who focus on his shortcomings, is, as I mentioned above, the reality of the mental aspect of things, and the need to take seriously the view that we have the power, fallible as it may be, to make progress toward the truth about how reality is, about what is good, and about what is right and wrong. I also like his insistence that much is still unclear about how and why this is so. But to repeat, I think he's somewhat underplaying the potential involvement of evolution in an eventual understanding of these matters. He may also be underplaying something I think he laid more stress on in previous books, notably The View from Nowhere and the collection of papers and essays Mortal Questions: the degree to which there may be an irreconcilable tension between the "inside" and "outside" views of ourselves. Here, however, his attitude is to try to reconcile them. Indeed, one of the more appealing aspects of his worldview as expressed in both Mind and Cosmos and The Last Word is the observation that my experience "from inside" of what it is to be a reasoning subject involves thinking of myself as part of a larger objective order, and trying to situate my own perspective as one of many perspectives---including those of my fellow humans and any other conscious and reasoning beings that exist---upon it. It is to understand much of my reasoning as attempting, even while operating as it must from my particular perspective, to gain an understanding of this objective reality that transcends that perspective.

So far I haven't said much about the positive possibilities Nagel moots, in place of a purely biological evolutionary account, for explaining the origin of life, consciousness, and reason.  These are roughly teleological, involving a tendency "toward the marvelous".  This is avowedly a very preliminary suggestion.  My own views on the likely role of mind and reason in the nature of reality, even more tentative than Nagel's, are that it is less likely that it arises from a teleological tendency toward the marvelous than that a potential for consciousness, reason, and value is deeply entwined with the very possibility of existence itself.  Obviously we are very far from understanding this.  I would like to think this is fairly compatible with a broadly evolutionary account of the origin of life and human consciousness and reasoning on our planet, and with the view that we're made out of physical stuff.

No new enlightenment: A critique of "quantum reason"

I have a lot of respect for Scientific American contributing physics editor George Musser's willingness to solicit and publish articles on some fairly speculative and, especially, foundational, topics whether in string theory, cosmology, the foundations of quantum theory, quantum gravity, or quantum information.  I've enjoyed and learned from these articles even when I haven't agreed with them.  (OK, I haven't enjoyed all of them of course... a few have gotten under my skin.)  I've met George myself, at the most recent FQXi conference; he's a great guy and was very interested in hearing, both from me and from others, about cutting-edge research.  I also have a lot of respect for his willingness to dive in to a fairly speculative area and write an article himself, as he has done with "A New Enlightenment" in the November 2012 Scientific American (previewed here).  So although I'm about to critique some of the content of that article fairly strongly, I hope it won't be taken as mean-spirited.  The issues raised are very interesting, and I think we can learn a lot by thinking about them; I certainly have.

The article covers a fairly wide range of topics, and for now I'm just going to cover the main points that I, so far, feel compelled to make about the article.  I may address further points later; in any case, I'll probably do some more detailed posts, maybe including formal proofs, on some of these issues.

The basic organizing theme of the article is that quantum processes, or quantum ideas, can be applied to situations which social scientists usually model as involving the interactions of "rational agents"...or perhaps, as they sometimes observe, agents that are somewhat rational and somewhat irrational.  The claim, or hope, seems to be that in some cases we can either get better results by substituting quantum processes (for instance, "quantum games", or "quantum voting rules") for classical ones, or perhaps better explain behavior that seems irrational.  In the latter case, in this article, quantum theory seems to be being used more as a metaphor for human behavior than as a model of a physical process underlying it.  It isn't clear to me whether we're supposed to view this as an explanation of irrationality, or in some cases as the introduction of a "better", quantum, notion of rationality.  However, the main point of this post is to address specifics, so here are four main points; the last one is not quantum, just a point of classical political science.

 

(1) Quantum games.  There are many points to make on this topic.  Probably most important is this one: quantum theory does not resolve the Prisoner's Dilemma.  Under the definitions I've seen of "quantum version of a classical game", the quantum version is also a classical game, just a different one.  Typically the strategy space is much bigger.  The classical strategies sit somewhere inside it---typically as a basis for a complex vector space ("quantum state space") of strategies, or as a commuting ("classical") subset of the possible "quantum actions" (often unitary transformations that the players can apply to physical systems forming part of the game-playing apparatus).  One can then compare the expected payoff, under various solution concepts such as Nash equilibrium, for the classical game and its "quantum version", and it may be that the quantum version has a better result for all players under the same solution concept.  This was so for Eisert, Lewenstein, and Wilkens' (ELW for short) quantum version of Prisoner's Dilemma.  But this does not mean (nor, in their article, did ELW claim it did) that quantum theory "solves the Prisoner's Dilemma", although I suspect when they set out on their research, they might have had hope that it could.  It doesn't because the prisoners can't transform their situation into the quantum prisoner's dilemma; to play that game, whether by quantum or classical means, would require the jailer to do something differently.  ELW's quantum prisoner's dilemma involves starting with an entangled state of two qubits.  The state space consists of the unit-norm sphere in a 4-dimensional complex vector space (equipped with the Euclidean inner product); it has a distinguished orthonormal basis which is a product of two local "classical" bases, each labeled by the two actions available to the relevant player in the classical game.
However, the quantum game consists of each player choosing a unitary operator to perform on their local state.  Payoff is determined---and here is where the jailer must be complicit---by performing a certain two-qubit unitary (one which does not factor as a product of local unitaries) and then measuring in the "classical" product basis, with payoffs given by the classical payoffs corresponding to the label of the basis vector obtained as the result.  Now, Musser does say that "Quantum physics does not erase the original paradoxes or provide a practical system for decision making unless public officials are willing to let people carry entangled particles into the voting booth or the police interrogation room."  But the situation is worse than that.  Even if prisoners could smuggle in the entangled particles (and in some realizations of prisoner's dilemma in settings other than systems of detention, the players will have a fairly easy time supplying themselves with such entangled pairs, if quantum technology is feasible at all), they won't help unless the mechanism producing the payoffs implements the desired game, i.e. measures in an entangled basis rather than just a product basis.  Even more importantly, in many real-world games, the variables being measured are already highly decohered; to ensure that they are quantum coherent the whole situation would have to be rejiggered.  So even if you didn't need the jailer to make an entangled measurement---if the measurement was just his independently asking each of you some question, and all you needed was to entangle your answers---you'd have to either entangle your entire selves, or covertly measure your particle and then repeat the answer to the jailer.
But in the latter case, you're not playing the game where the payoff is necessarily based on the measurement result: you could decide to say something different from the measurement result.  And that would have to be included in the strategy set.
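For concreteness, here is a minimal numerical sketch of the ELW setup---my own illustrative reconstruction (names mine), using the standard payoff table (3,0,5,1), maximal entanglement, and the distinctly quantum strategy Q = i·σ_z from the published scheme:

```python
import numpy as np

# Classical PD payoffs for (A's move, B's move); 0 = stay mum, 1 = snitch
PAYOFF = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}

C = np.eye(2)                              # "cooperate" (stay mum)
D = np.array([[0, 1], [-1, 0]])            # "defect" (snitch); D = i*sigma_y
Q = np.array([[1j, 0], [0, -1j]])          # the distinctly quantum strategy

# Entangling gate J = exp(i*(pi/4)*D(x)D); since (D(x)D)^2 = I this is
# cos(pi/4)*I + i*sin(pi/4)*D(x)D
DD = np.kron(D, D)
J = (np.eye(4) + 1j * DD) / np.sqrt(2)

def payoffs(U_a, U_b):
    """Expected payoffs when A plays unitary U_a and B plays U_b."""
    psi0 = np.zeros(4); psi0[0] = 1.0              # |00>
    psi = J.conj().T @ np.kron(U_a, U_b) @ J @ psi0
    probs = np.abs(psi) ** 2                        # measure in product basis
    pa = pb = 0.0
    for k, p in enumerate(probs):
        a, b = divmod(k, 2)                         # basis index -> (a, b)
        pa += p * PAYOFF[(a, b)][0]
        pb += p * PAYOFF[(a, b)][1]
    return round(pa, 6), round(pb, 6)

print(payoffs(C, C))   # (3.0, 3.0): mutual cooperation
print(payoffs(D, D))   # (1.0, 1.0): the classical equilibrium
print(payoffs(Q, Q))   # (3.0, 3.0): the "quantum" equilibrium
print(payoffs(D, Q))   # (0.0, 5.0): defecting against Q backfires
```

Note that everything here is a concrete 4×4 unitary circuit: the "quantum game" is just another (classical) game with a larger strategy space, exactly as argued above---and it only works if whoever computes the payoffs actually applies the entangling gates J and J†.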

There are still potential applications:  if we are explicitly designing games as mechanisms for implementing some social decision procedure, then we could decide to implement a quantum version (according to some particular "quantization scheme") of a classical game.  Of course, as I've pointed out, and as ELW do in their paper, that's just another classical game.  But as ELW note, it is possible---in a setting where quantum operations (quantum computer "flops") aren't too much more expensive than their classical counterparts---that playing the game by quantum means might use fewer resources than playing it by simulating it classically.  In a mechanism design problem that is supposed to scale to a large number of players, it even seems possible that the classical implementation could scale so badly with the number of players as to become infeasible, while the quantum one remains efficient.  For this reason, mechanism design for preference revelation as part of a public goods provision scheme, for instance, might be a good place to look for applications of quantum prisoner's-dilemma-like games.  (I would not be surprised if this has been investigated already.)

Another possible place where quantum implementations might have an advantage is in situations where one does not fully trust the referee who is implementing the mechanism.  It is possible that quantum theory might enable the referee to provide better assurances to the players that he/she has actually implemented the stated game.  In the usual formulation of game theory, the players know the game, and this is not an issue.  But it is not necessarily irrelevant in real-world mechanism design, even if it might not fit strictly into some definitions of game theory.  I don't have a strong intuition one way or the other as to whether or not this actually works but I guess it's been looked into.

(2) "Quantum democracy".  The part of the quote in the previous item about taking entangled particles into the voting booth alludes to this topic.  Gavriel Segre has a 2008 arxiv preprint entitled "Quantum democracy is possible" in which he seems to be suggesting that quantum theory can help us with the difficulties that Arrow's Theorem supposedly shows exist with democracy.  I will go into this in much more detail in another post.  But briefly: if we consider a finite set A of "alternatives", like candidates to fill a single position, or mutually exclusive policies to be implemented, and a finite set I of "individuals" who will "vote" on them by listing them in the order they prefer them, a "social choice rule" or "voting rule" is a function that, for every "preference profile", i.e. every possible indexed set of preference orderings (indexed by the set of individuals), returns a preference ordering, called the "social preference ordering", over the alternatives.  The idea is that then, whatever subset of alternatives is feasible, society should choose the one most highly ranked by the social preference ordering from among the feasible alternatives.  Arrow showed that if we impose the seemingly reasonable requirements that if everyone prefers x to y, society should prefer x to y ("unanimity"), and that whether or not society prefers x to y should be affected only by the information of which individuals prefer x to y, and not by other aspects of individuals' preference orderings ("independence of irrelevant alternatives", "IIA"), the only possible voting rules are the ones such that, for some individual i called the "dictator" for the rule, that individual's preferences are the social preferences.  If you define a democracy as a voting rule that satisfies the requirements of unanimity and IIA and that is not dictatorial, then "democracy is impossible".  Of course this is an unacceptably thin concept of the individual and of democracy.
But anyway, there's the theorem; it definitely tells you something about the limitations of voting schemes, or, in a slightly different interpretation, about the impossibility of forming a reasonable idea of what is a good social choice, if all that we can take into account in making the choice is a potentially arbitrary set of individuals' orderings over the possible alternatives.
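The easy direction of the theorem---that a dictatorial rule really does satisfy unanimity and IIA---can be checked by brute force over all profiles. Here is a small sketch (function names are mine) for two voters and three alternatives:

```python
from itertools import permutations, product

ALTS = ("x", "y", "z")
ORDERINGS = list(permutations(ALTS))   # an ordering lists alternatives best-first

def prefers(ordering, a, b):
    """True if this ordering ranks a above b."""
    return ordering.index(a) < ordering.index(b)

def dictator_rule(profile):
    """Society's ordering is just individual 0's ordering."""
    return profile[0]

def satisfies_unanimity(rule, n=2):
    # whenever everyone prefers a to b, society must prefer a to b
    return all(prefers(rule(p), a, b)
               for p in product(ORDERINGS, repeat=n)
               for a, b in permutations(ALTS, 2)
               if all(prefers(o, a, b) for o in p))

def satisfies_iia(rule, n=2):
    # if every individual's a-vs-b preference agrees across two profiles,
    # the social a-vs-b preference must agree too
    profiles = list(product(ORDERINGS, repeat=n))
    for p1, p2 in product(profiles, repeat=2):
        for a, b in permutations(ALTS, 2):
            if all(prefers(o1, a, b) == prefers(o2, a, b)
                   for o1, o2 in zip(p1, p2)) and \
               prefers(rule(p1), a, b) != prefers(rule(p2), a, b):
                return False
    return True

print(satisfies_unanimity(dictator_rule), satisfies_iia(dictator_rule))
# True True
```

Arrow's theorem is the converse: for finitely many voters, rules satisfying both conditions are *only* the dictatorial ones.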

Arrow's theorem tends to have two closely related interpretations:  as a mechanism for combining actual individual preferences to obtain social preferences that depend in desirable ways on individual ones, or as a mechanism for combining formal preference orderings stated by individuals, into a social preference ordering.  Again this is supposed to have desirable properties, and those properties are usually motivated by the supposition that the stated formal preference orderings are the individuals' actual preferences, although I suppose in a voting situation one might come up with other motivations.  But even if those are the motivations, in the voting interpretation, the stated orderings are somewhat like strategies in a game, and need not coincide with agents' actual preference orderings if there are strategic advantages to be had by letting these two diverge.

What could a quantum mitigation of the issues raised by Arrow's theorem---on either interpretation---mean?  We must be modifying some concept in the theorem... that of an individual's preference ordering, or voting strategy, or that of alternative, or---although this seems less promising---that of individual---and arguing that somehow that gets us around the problems posed by the theorem.  None of this seems very promising, for reasons I'll get around to in my next post.  The main point is that if the idea is similar to the---as we've seen, dubious---idea that superposing strategies can help in quantum games, it doesn't seem to help with interpretations where the individual preference ordering is their actual preference ordering.  How are we to superpose those?  Superposing alternatives seems like it could have applications in a many-worlds type interpretation of quantum theory, where all alternatives are superpositions to begin with, but as far as I can see, Segre's formalism is not about that.  It actually seems to be more about superpositions of individuals, but one of the big motivational problems with Segre's paper is that what he "quantizes" is not the desired Arrow properties of unanimity, independence of irrelevant alternatives, and nondictatoriality, but something else that can be used as an interesting intermediate step in proving Arrow's theorem.  However, there are bigger problems than motivation:  Segre's main theorem, his IV.4, is very weak, and actually does not differentiate between quantum and classical situations.  As I discuss in more detail below, it looks like for the quantum logics of most interest for standard quantum theory, namely the projection lattices of von Neumann algebras, the dividing line between the ones having what Segre would call a "democracy" (a certain generalization of a voting rule satisfying Arrow's criteria) and the ones that don't (i.e. that have an "Arrow-like theorem") is not commutativity versus noncommutativity of the algebra (i.e., classicality versus quantumness), but just infinite-dimensionality versus finite-dimensionality, which was already understood for the classical case.  So quantum adds nothing.  In a later post, I will go through all the formalities (or post a .pdf document), but here are the basics.

Arrow's Theorem can be proved by defining a set S of individuals to be decisive if for every pair x, y of alternatives, whenever everyone in S prefers x to y, and everyone not in S prefers y to x, society prefers x to y.  Then one shows that the set of decisive sets is an ultrafilter on the set of individuals.  What's an ultrafilter?  Well, let's define it for an arbitrary lattice.  The set of subsets of any set I, often called P(I), is a lattice (the relevant ordering is subset inclusion; the meet and join are intersection and union).   A filter---not yet ultra---in a lattice is a subset of the lattice that is upward-closed and meet-closed.  That is, to say that F is a filter is to say that if x is in F, and y is greater than or equal to x, then y is in F, and that if x and y are both in F, so is x meet y.  For P(I), this means that a filter has to include every superset of each set in the filter, and also the intersection of every pair of sets in the filter.  Then we say a filter is proper if it's not the whole lattice, and it's an ultrafilter if it's a maximal proper filter, i.e. it's not properly contained in any filter other than the whole lattice.  A filter is called principal if it's generated by a single element of the lattice:  i.e. if it's the smallest filter containing that element.  Equivalently, it's the set consisting of that element and everything above it.  So in the case of P(I), a principal filter consists of a given set, and all sets containing that set.

To prove Arrow's theorem using ultrafilters, one shows that unanimity and IIA imply that the set of decisive sets is an ultrafilter on P(I).  But it was already well known, and is easy to show, that all ultrafilters on the powerset of a finite set are principal, and are generated by singletons of I, that is, sets containing single elements of I.  So a social choice rule satisfying unanimity and IIA has a decisive set containing a single element i, and furthermore, all sets containing i are decisive.  In other words, if i favors x over y, it doesn't matter who else favors x over y and who opposes it: x is socially preferred to y.  In other words, the rule is dictatorial.  QED.

Note that it is crucial here that the set I is finite.  If you assume the axiom of choice (no pun intended ahead of time), then non-principal ultrafilters do exist in the lattice of subsets of an infinite set, and the more abstract-minded people who have thought about Arrow's theorem and ultrafilters have indeed noticed that if you are willing to generalize Arrow's conditions to an infinite electorate, whatever that means, the theorem doesn't generalize to that situation.  The standard existence proof for a non-principal ultrafilter is to use the axiom of choice, in the form of Zorn's lemma, to establish that any proper filter is contained in a maximal one (i.e. an ultrafilter), then take the set of subsets whose complement (in I) is finite, show it's a filter, and show its extension to an ultrafilter is not principal.  Just for fun, we'll do this in a later post.  I wouldn't summarize the situation by saying "infinite democracies exist", though.  As a sidelight, some people don't like the fact that the existence proof is nonconstructive.

As I said, I'll give the details in a later post.  Here, we want to examine Segre's proposed generalization.  He defines a quantum democracy to be a nonprincipal ultrafilter on the lattice of projections of an "operator-algebraically finite von Neumann algebra".  In the preprint there's no discussion of motivation, nor are there explicit generalizations of unanimity and IIA to corresponding quantum notions.  To figure out such a correspondence for Segre's setup we'd need to convince ourselves that social choice rules, or ones satisfying one or the other of Arrow's properties, correspond one-to-one to their sets of decisive coalitions, and then relate properties of the rule (or the remaining property) to the decisive coalitions' forming an ultrafilter.  Nonprincipality is clearly supposed to correspond to nondictatorship.  But I won't try to tease out, and then critique, a full correspondence right now, if one even exists.

Instead, let's look at Segre's main point.  He defines a quantum logic as a non-Boolean orthomodular lattice.  He defines a quantum democracy as a non-principal ultrafilter in a quantum logic.  His main theorem, IV.4, as stated, is that the set of quantum democracies is non-empty.  Thus stated, of course, it can be proved by showing the existence of even one quantum logic that has a non-principal ultrafilter.  These do exist, so the theorem is true.

However, there is nothing distinctively quantum about this fact.  Here, it's relevant that Segre's Theorem IV.3 as stated is wrong.  He states (I paraphrase to clarify the scope of some quantifiers) that L is an operator-algebraically finite orthomodular lattice all of whose ultrafilters are principal if, and only if, L is a classical logic (i.e. a Boolean lattice).  But this is false.  It's true that to get his theorem IV.4, he doesn't need this equivalence.  But what is a von Neumann algebra?  It's a *-algebra consisting of bounded operators on a Hilbert space, closed in the weak operator topology.  (Or something isomorphic in the relevant sense to one of these.)  There are commutative and noncommutative ones.  And there are finite-dimensional ones and infinite-dimensional ones.  The finite-dimensional ones include:  (1) the algebra of all bounded operators on a finite-dimensional Hilbert space (under operator multiplication and the adjoint operation); these are noncommutative for dimension > 1;  (2) the algebra of complex functions on a finite set I (under pointwise multiplication and complex conjugation); and (3) finite products (or, if you prefer the term, direct sums) of algebras of these types.  (Actually we could get away with just type (1) and finite products, since the type (2) ones are just finite direct sums of one-dimensional instances of type (1).)   The projection lattices of the cases (2) are isomorphic to P(I) for I the finite set.  These are the projection lattices for which Arrow's theorem can be proved using the fact that they have no nonprincipal ultrafilters.  The cases (1) are their obvious quantum analogues.  And it is easy to show that in these cases, too, there are no nonprincipal ultrafilters.  Because the lattice of projections of a von Neumann algebra is complete, one can use essentially the same proof as for the case of P(I) for finite I.
So for the obvious quantum analogues of the setups where Arrow's theorem is proven, the analogue of Arrow's theorem does hold, and Segre's "quantum democracies" do not exist.

Moreover, Alex Wilce pointed out to me in email that essentially the same proof as for P(I) with I infinite gives the existence of a nonprincipal ultrafilter for any infinite-dimensional von Neumann algebra: one takes the set of projections of cofinite rank (i.e. those whose orthocomplementary projection has finite rank), shows it's a filter, extends it (using Zorn's lemma) to an ultrafilter, and shows that's not principal. So (if the dividing line between finite-dimensional and infinite-dimensional von Neumann algebras is precisely whether their lowest-dimensional faithful representations are on finite-dimensional Hilbert spaces, which seems quite likely) the dividing line between projection lattices of von Neumann algebras on which Segre-style "democracies" (nonprincipal ultrafilters) exist and those on which they don't is precisely the line between infinite and finite dimension, not the line between noncommutativity and commutativity. I.e., the existence or not of a generalized decision rule satisfying a generalization of the conjunction of Arrow's conditions has nothing to do with quantumness. (Not that I think it would mean much for social choice theory or voting if it did.)
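To make the finite classical case concrete, here is a brute-force check (my own illustrative sketch, not anything from Segre's paper) that every ultrafilter on P(I) for a small finite I is principal. It enumerates all candidate subsets of the lattice P({0,1,2}), keeps those that are ultrafilters, and checks that each contains the meet of its elements:

```python
from itertools import combinations

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_filter(F, lattice, top):
    # A proper filter: excludes the bottom element, contains the top,
    # is upward closed, and is closed under meet (intersection).
    if not F or frozenset() in F or top not in F:
        return False
    for a in F:
        for b in lattice:
            if a <= b and b not in F:
                return False        # not upward closed
        for b in F:
            if (a & b) not in F:
                return False        # not closed under meet
    return True

def is_ultrafilter(F, lattice, top):
    # Maximal proper filter: for each a, either a or its complement is in F.
    return is_filter(F, lattice, top) and all(a in F or (top - a) in F for a in lattice)

def is_principal(F):
    # A filter is principal iff it contains the meet of all its elements.
    return frozenset.intersection(*F) in F

I = frozenset({0, 1, 2})
lattice = powerset(I)
ultrafilters = [frozenset(c)
                for r in range(len(lattice) + 1)
                for c in combinations(lattice, r)
                if is_ultrafilter(frozenset(c), lattice, I)]
```

For |I| = 3 this finds exactly three ultrafilters, one generated by each singleton, and all principal, as the general finite argument predicts.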

(3) I'll only say a little bit here about "quantum psychology". Some supposedly paradoxical empirical facts are described at the end of the article. When subjects playing Prisoner's Dilemma are told that the other player will snitch, they always (nearly always? there must be a few mistakes...) snitch. When they are told that the other player will stay mum, they usually also fink, but sometimes (around 20% of the time---it is not stated whether this is typical of a single individual in repeated trials, or a percentage of individuals in single trials) stay mum. However, if they are not told what the other player will do, "about 40% of the time" they stay mum. Emanuel Pothos and Jerome Busemeyer devised a "quantum model" that reproduces the result. As described in Sci Am, Pothos interprets it in terms of destructive interference between (amplitudes associated with, presumably) the 100% probability of snitching when the other snitches and the 80% probability of snitching when the other does not, which reduces the probability to 60% when they are not sure whether the other will snitch. It is a model; they do not claim that quantum physics of the brain is responsible. However, I think there is a better explanation, in terms of what Douglas Hofstadter called "superrationality", Nigel Howard called "metarationality", and I like to call a Kantian equilibrium concept, after the version of Kant's categorical imperative that urges you to act according to a maxim that you could will to be a universal law. Simply put, it's the line of reasoning that says "the other guy is rational like me, so he'll do what I do. What does one do if he believes that? Well, if we both snitch, we're sunk. If we both stay mum, we're in great shape. So we'll stay mum." Is that rational? I dunno. Kant might have argued it is. But in any case, people do consider this argument, as well, presumably, as the one for the Nash equilibrium.
But in either of the cases where the person is told what the other will do, there is less role for the categorical imperative; one is being put more in the Nash frame of mind. Now it is quite interesting that people still cooperate a fair amount of the time when they know the other person is staying mum; I think they are thinking of the other person's action as the outcome of the categorical imperative reasoning, and they feel some moral pressure to stay with the categorical imperative reasoning. Whereas they are easily swayed to completely dump that reasoning in the case when told the other person snitched: the other has already betrayed the categorical imperative. Still, it is a bit paradoxical that people are more likely to cooperate when they are not sure whether the other person is doing so; I think the uncertainty makes the story that "he will do what I do" more vivid, and the tempting benefit of snitching when the other stays mum less vivid, because one doesn't know *for sure* that the other has stayed mum. Whether that all fits into the "quantum metaphor" I don't know, but it seems we can get quite a bit of potential understanding here without invoking it. Moreover there probably already exists data to help explore some of these ideas, namely about how the same individual behaves under the different certain and uncertain conditions, in anonymous trials guaranteed not to involve repetition with the same opponent.
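For concreteness, here is a toy calculation of the interference bookkeeping (my own sketch of the general idea, not the actual Pothos-Busemeyer model, which is more elaborate). Classically, total probability with a 50/50 prior over the other player's move predicts 90% defection; an equal-weight amplitude model adds an interference term, and one can solve for the relative phase that would reproduce the observed 60%:

```python
import math

# Approximate defection rates as reported in the article:
p_d_given_defect = 1.00   # other player known to defect
p_d_given_coop   = 0.80   # other player known to cooperate
p_d_uncertain    = 0.60   # other player's move unknown

# Classical total-probability prediction with a 50/50 prior:
p_classical = 0.5 * p_d_given_defect + 0.5 * p_d_given_coop   # = 0.9

# In an equal-weight amplitude model, the predicted probability is
#   p = p_classical + cos(theta) * sqrt(p_d_given_defect * p_d_given_coop),
# where theta is the relative phase between the two amplitudes.
# Solving for the phase that reproduces the observed 60%:
interference = p_d_uncertain - p_classical                    # = -0.3
cos_theta = interference / math.sqrt(p_d_given_defect * p_d_given_coop)
theta = math.degrees(math.acos(cos_theta))                    # roughly 110 degrees
```

The point of the sketch is just that the "paradox" amounts to a single negative interference term of about -0.3; whether that term is explanatory or merely descriptive is exactly what's at issue above.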

Less relevant to quantum theory, but perhaps relevant in assessing how important voting paradoxes are in the real world, is an entirely non-quantum point:

(4)  A claim by Piergiorgio Odifreddi, that the 1976 US election is an example of Condorcet's paradox of cyclic pairwise majority voting, is prima facie highly implausible to anyone who lived through that election in the US.  The claim is that a majority would have favored, in two-candidate elections:

Carter over Ford (as in the actual election)

Ford over Reagan

Reagan over Carter

I strongly doubt that Reagan would have beaten Carter in that election. There is some question of what this counterfactual means, of course: using polls conducted near the time of the election does not settle the issue of what would have happened in a full general-election campaign pitting Carter against Reagan. In "Preference Cycles in American Elections", Electoral Studies 13: 50-57 (1994), as summarized in Democracy Defended by Gerry Mackie, political scientist Benjamin Radcliff analyzed electoral data and previous studies concerning the US Presidential elections from 1972 through 1984, and found no Condorcet cycles. In 1976, the pairwise orderings he found for (hypothetical, in two of the cases) two-candidate elections were Carter > Ford, Ford > Reagan, and Carter > Reagan. Transitivity is satisfied; no cycle. Obviously, as I've already discussed, there are issues of methodology, and of how to analyze a counterfactual concerning a general election. More on this, perhaps, after I've tracked down Odifreddi's article. Odifreddi is in the Sci Am article because an article by him inspired Gavriel Segre to try to show that such problems with social choice mechanisms like voting might be absent in a quantum setting.
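Checking whether a set of pairwise majority results contains a Condorcet cycle is just checking whether some linear ordering of the candidates is consistent with every pairwise result. A quick sketch (my own illustration; the two data sets encode Radcliff's orderings and Odifreddi's claimed cycle as given above):

```python
from itertools import permutations

def has_condorcet_cycle(beats):
    """beats: a set of (winner, loser) pairwise-majority results.
    A cycle exists iff no linear order of the candidates respects every pair."""
    candidates = {c for pair in beats for c in pair}
    for order in permutations(candidates):
        rank = {c: i for i, c in enumerate(order)}
        if all(rank[w] < rank[l] for w, l in beats):
            return False   # found a consistent linear order: transitive, no cycle
    return True

# Radcliff's 1976 orderings: Carter > Ford, Ford > Reagan, Carter > Reagan
radcliff_1976 = {("Carter", "Ford"), ("Ford", "Reagan"), ("Carter", "Reagan")}

# Odifreddi's claimed cycle: Carter > Ford, Ford > Reagan, Reagan > Carter
odifreddi_1976 = {("Carter", "Ford"), ("Ford", "Reagan"), ("Reagan", "Carter")}
```

Radcliff's data admits the linear order Carter > Ford > Reagan and so has no cycle; Odifreddi's claimed results admit no consistent ordering, which is exactly what makes them a Condorcet cycle.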

Odifreddi is cited by Musser as pointing out that democracies usually avoid Condorcet paradoxes because voters tend to line up on an ideological spectrum---I'm just sceptical, until I see more evidence, that this wasn't also the case in the US in 1976. I have some doubt also about the claim that Condorcet cycles are the cause of democracy "becoming completely dysfunctional" in "politically unsettled times", or indeed that it does become completely dysfunctional in such times. But I must remember that Odifreddi is from the land of Berlusconi. Then again, I doubt cycles are the main issue with him...

Thoughts on the evolution of technology-using intelligence

A few more thoughts inspired by Tim Maudlin's remarks in an interview with the Atlantic magazine. I'll quote Tim again first:

The question remains as to how often, after life evolves, you'll have intelligent life capable of making technology. What people haven't seemed to notice is that on earth, of all the billions of species that have evolved, only one has developed intelligence to the level of producing technology. Which means that kind of intelligence is really not very useful. It's not actually, in the general case, of much evolutionary value. We tend to think, because we love to think of ourselves, human beings, as the top of the evolutionary ladder, that the intelligence we have, that makes us human beings, is the thing that all of evolution is striving toward. But what we know is that that's not true. Obviously it doesn't matter that much if you're a beetle, that you be really smart. If it were, evolution would have produced much more intelligent beetles. We have no empirical data to suggest that there's a high probability that evolution on another planet would lead to technological intelligence. There is just too much we don't know.

Certainly it is remarkable that only one technology-using (if you discount a few cases of the most rudimentary use of natural objects, and perhaps a few cases of very rudimentarily modified objects, as tools by other animals) species has evolved on Earth. An interesting question is whether most planets that do evolve life eventually evolve species that produce and use complex technology, and how many such species. My guess, supported not by long consideration but by a modest amount of offhand thought, is that given enough time, they do. My guess is also that it takes a fair amount of time for evolution to produce such a species, and that this is part of an overall evolution towards increasing complexity, in some sense I'll here leave ill-defined, of the overall web of life on the planet, and in particular, of the most complex species on the planet. (Perhaps the notions of complexity explored by Charles Bennett, Murray Gell-Mann, and Seth Lloyd may be relevant.) Since the environment in which organisms must survive and propagate consists in significant measure of other organisms, evolution itself creates new niches in this environment for organisms to evolve toward filling. As Robinson Jeffers wrote:

What but the wolf’s tooth whittled so fine
The fleet limbs of the antelope?
What but fear winged the birds, and hunger
Jewelled with such eyes the great goshawk’s head?

While I don't see a clear and obvious argument for it offhand, it is plausible to me that this process tends over time to create more and more complexity. In this sense, I suspect there is "progress" in evolution, despite a fair amount of scoffing in some quarters at the notion of evolutionary progress. This looks like a fruitful area for research. It's also plausible that this sort of evolutionary progress may eventually create both a niche for, and an accessible evolutionary path towards, a technology-using intelligent species. Whether the fact that we have only a single such species on this planet is due primarily to there being essentially only one niche for such a species, to the fact that it takes a long time for such a species to evolve due to the many precursor steps necessary (this would need to be a rather tricky anthropic argument, I suspect), or just to happenstance, seems even more speculative, but very interesting.

So it seems to me I find myself supporting, at least speculatively, what may seem the naive, knee-jerk view "that the intelligence we have, that makes us human beings, is the thing that all of evolution is striving toward". With allowance for a metaphorical use of "striving", and modification of the definite article ("the thing"), I'm not too unhappy with that characterization. How far things go beyond "the intelligence we have", though, I'm not prepared to say. And it may be that there are very different paths for a planetary ecosystem to take, other than the production of one (or a few?) technology-using species. As Tim says, in the end we really don't know. But I do think there is interesting knowledge to be sought here.

Tim Maudlin on the training of physicists, the evolution of intelligence, and more

I had to link this interview with philosopher Tim Maudlin, in the Atlantic, when I read his observation that "The asking of fundamental physical questions is just not part of the training of a physicist anymore." But there's a lot more of interest in the interview as well. I found the article via Andrew Sullivan's blog; Sullivan found Tim's thoughts on the evolution of intelligence to be particularly interesting:

What people haven't seemed to notice is that on earth, of all the billions of species that have evolved, only one has developed intelligence to the level of producing technology. Which means that kind of intelligence is really not very useful. It's not actually, in the general case, of much evolutionary value. We tend to think, because we love to think of ourselves, human beings, as the top of the evolutionary ladder, that the intelligence we have, that makes us human beings, is the thing that all of evolution is striving toward. But what we know is that that's not true. Obviously it doesn't matter that much if you're a beetle, that you be really smart. If it were, evolution would have produced much more intelligent beetles. We have no empirical data to suggest that there's a high probability that evolution on another planet would lead to technological intelligence. There is just too much we don't know.

Indeed there is, but it points out some very interesting questions: is there a tendency, given enough time, for a species intelligent enough to produce technology to arise on an earth-like planet? Is there, perhaps, a tendency for it to inhibit the evolution of other such species? My personal guess (and it's just that, a guess, not supported by careful thought) is that there is such a tendency, but that it takes a lot of time and builds on, and is part of, a slow increase in the complexity of the most complex organisms. This is, of course, probably the "knee-jerk" view. Whether it inhibits the evolution of other species is something I'm less willing to speculate on (though if Neanderthals were another such species, we may have some evidence (one case!) for inhibition of the branching of a potentially technologically-capable intelligent species into two such species). Whether vertebrates have characteristics making it more likely for them to evolve technologically-capable intelligence than it is for, say, insects to evolve it is another interesting question.

Bill in Congress would prevent NIH from providing open access to taxpayer-funded research

NIH has long required its grantees to provide open access to all articles produced using its funding.  Now, as described in this New York Times editorial, there's a bill in Congress that would kill this open access policy.  Offhand, I don't agree with the writer's suggestion that the principle should be "if taxpayers paid for it, they own it", in the sense suggested in the next sentence, that all work produced with government funding should be excluded from copyright.  But I do believe there should be open access to government-funded research.

Smash (well, hope the U.K. has the good sense to radically revise) the British libel laws!

I wasn't planning another post in the "Smash" series for awhile, but this just had to be titled so. When I followed this up from Matt Leifer's site, I just had to draw attention to it. British science writer Simon Singh is being sued for libel by the British Chiropractic Association for calling some of their treatments bogus. This is part of a broader problem of British libel laws chilling free speech, including discussion of so-called Islamist extremism in Britain, as discussed in this Daily Mail article. More links at "Sense About Science". Matt's post is from last August, so hopefully something strong has been done about this by now; I'll have to look into it. This could seriously damage Britain if something isn't done about it.

2005 Winner's Tank Shiraz, Langhorne Creek, Australia... and the Future of Science

I've mentioned before how fantastic the 2005 Aussie Shirazes are, especially from the Barossa Valley and McLaren Vale (e.g. The Maverick). Here's a review of the wine that got me started on them, in the form of an email I sent to Michael Nielsen a few years back, when I first tasted this wine. The 2006 was also good, but like many of the '06 Australians, less balanced and suave, and a bit thinner and sharper, than the '05 incarnation. I've added a few links. Maybe soon I'll post more on the '05 and '06 Shirazes from Oz.

Hi Michael---

I opened a wine tonight that in several ways reminded me of you.  So I'm suggesting you try a bottle
or six before you depart your native land for the greener (?), but certainly colder (except in the summer when you'll be sweating buckets) pastures of Ontario.  It's "The Winner's Tank" 2005 Aussie Shiraz, Langhorne Creek.  I was dubious about this puppy because its label is a photo of some big square concrete tank in the middle of pasture, behind a barbed-wire fence, with "Hawks '05" inscribed, along with some shtick about how the local tradition is for the winners of the annual Aussie Rules football tournament to gather in the vineyard and paint their names on the tank.

Label:

Clearly just a bunch of hooey from some canny Aussie businessmen-winemakers to sucker some of us ever-gullible yanks into spending twelve bucks on a bottle---to be consumed, no doubt, with the shrimp we've got going on our barbie. But having allowed myself to be suckered into it by a salesman at the Santa Fe Cost Plus---or else at Kokoman, our local Pojoaque-pueblo based purveyor of cheap beer to the masses and expensive Bordeaux to the Santa Fe/Los Alamos crowd---I opened it tonight. Well, it was excellent. Probably shouldn't talk it up too much for that promotes disappointment (it's just wine, for crissake) but, what the heck. One of the better wines I've ever had---starting out kind of velvety, and also fruity but not with the enjoyable but somewhat tacky blueberries-'n-bubble-gum taste of some of the cheaper-but-still-decent Australian shirazes. Nope, this also had a hint of darkness, maybe even veering towards an off-taste, rubbery or rotty, but opening out with air into a kind of stony complexity you get with the best Rhone Valley syrahs of France (or one I had from the Santa Barbara area). Of course the 15.5 percent alcohol could be influencing my perceptions too. (But more often it's hard for the flavors to stand up to that alcohol level.)

Anyway, recalling the tasty bottle of Jacob's Creek Cabernet you once bought me for my birthday, or my dissertation submission or wedding or something, and the fact that you're probably the first person I ever heard about Aussie Rules football from, I thought you might enjoy this recommendation, that is if you indulge in wine on occasion.

Sorry I cheesed out on QIP this year... I can't recall if it's because of some confusion about abstract submission and the international date line, or just not getting my paperwork in at LANL with the ever-lengthening lead time required. Possibly I was even doing some research at the time I needed to be paying attention to registration or paperwork. I got into the staying-up-late-at-night-trying-to-prove-stuff mode about extending the no-broadcasting theorem to a general ordered-vector-spaces context, with Jon Barrett, Matt Leifer, and Alex Wilce (cf. our quant-ph), and kind of let everything else go to hell. It was great, and I have a few other similarly abstract things in the pipeline as a side benefit. I'll bet QIP07 was great too, though.

Anyway, if you run into this wine, try a bottle.  You might even stop by your local wine store and see if they have it.  If you don't like it, complain to me and I'll reimburse you.

Cheers,

Howard

P.S. By the way, I just saw for the first time your 2004 blog post on effective research and  really enjoyed it.  It encapsulates some things I've been realizing.  (You may see me next writing a book on information-processing in categories of ordered linear spaces, and hoping to reap some dividends in cool theorems along the way.)

I don't think Mike ever tried the wine. There are a lot of Aussie wines, and not all are available everywhere, especially not in Ontario, where I've learned to my chagrin that there is only one source---the Liquor Control Board of Ontario (well, there are some wineries you can buy from, too, but their stuff is well represented at the LCBO). I'm now in Ontario myself---at Perimeter Institute, in part to work on the book referred to in the e-mail I quoted; Mike is still in Waterloo, but now instead of working at Perimeter on quantum information, he's writing his next book, The Future of Science, on how the internet will transform scientific research. We invited him to give the after-banquet talk at QIP 2009 in Santa Fe, and I found it inspiring; one of several things that led me to start this blog. For those of you who don't know, here's Michael Nielsen's first book (coauthored with Ike Chuang).