Ed Dolan on the case for a universal basic income

Ed Dolan suggests that claims that social safety net programs don't, on average, disincentivize work much may depend on the currently rather limited coverage of such programs (especially compared with the situation before, say, the Reagan administration and the (hopefully soon to be qualified with the word "first") Clinton administration).  He thinks advocates should consider replacing many of these programs with a universal basic income.  There is a lot more one could say about this issue, but I think this is an important point to keep in mind.

Nagel's Mind and Cosmos, Objective Value, DeLong and Blackburn

I have trouble understanding why critics of Thomas Nagel's Mind and Cosmos are coming down so hard on his belief that value statements---particularly ethical ones---can (some of them, at any rate) be objectively true or false.  I'll consider two examples here.  Brad DeLong's objection seems to me based primarily on his continued mistaken view that Nagel views his reason as infallible.  It's therefore not specific to the case of moral or other value judgments.  Simon Blackburn's objections are more interesting, because they are more specific to value judgments and better address Nagel's actual position.

Brad DeLong seems to think that Nagel's juxtaposition of reasoning in the form of modifying a belief about the direction one is driving, because of its inconsistency with newly acquired evidence, with reasoning like Nagel's "I oppose the abolition of the inheritance tax... because I recognize that the design of property rights should be sensitive not only to autonomy but also to fairness..." is self-evidently ridiculous.  Says Brad:

"I do wonder: Does Gene Callahan have any idea what he has committed himself to when he endorses Thomas Nagel's claim that Nagel has transcendent direct access to truths of objective reality? I think not:

Thomas Nagel: [...my (HB's) ellipsis here, in place of a typo by Brad that repeated part of his own introduction, quoted above, to this quote...] I decide, when the sun rises on my right, that I must be driving north instead of south... because I recognize that my belief that I am driving south is inconsistent with that observation, together with what I know about the direction of rotation of the earth. I abandon the belief because I recognize that it could not be true.... I oppose the abolition of the inheritance tax... because I recognize that the design of property rights should be sensitive not only to autonomy but also to fairness...

Game, set, match, and tournament!"

That last sentence, which is Brad's, seems revealing of a mindset that sometimes creeps into his blog writing, aimed less at truth than at victory in some argumentative competition. I like a lot of what he does on his blog, but that attitude, and the related one that reads like an attempt to exhibit his hip and with-it-ness by using internet jargon that the unhip like me have to google ("self-pwnage", which Callahan is said to have committed), are not so appealing. The "transcendent direct access" I have already argued is mostly a straw man of Brad's own creation, Nagel's point being primarily that (as he says immediately following what Brad has quoted) "As the saying goes, I operate in the space of reasons." One aspect of operating in the space of reasons is trying to preserve some consistency among one's various beliefs; that seems to be the nub of the driving example (but we should not forget the important point that there is more than just deductive logic going on here... we have to decide which of the contradictory beliefs to give up). And we are also to some extent doing so (preserving consistency) in the case of the inheritance tax, though the full argument in this case is likely to be much more involved and less clear-cut than in the driving example. Nagel is arguing that we try to square our beliefs about the particular case of the inheritance tax with general beliefs that we (may) hold about how social institutions like property rights should be designed. Focusing on this consistency issue, though, can --- in both factual and ethical situations --- obscure the essential role of factors other than mere consistency in the process of reasoning about what beliefs to hold. As I mentioned in earlier posts, Nagel gives this somewhat short shrift, notably by not discussing inductive reasoning much, though he's clear about the fact that it's needed. But it's remarkable that DeLong---who I would guess shares Nagel's views on the inheritance tax, and possibly even his reasons (although he may also find some strength in arguments involving "social welfare functions")---should think that this passage grounds an immediate declaration of victory. I guess it's because he wrongly thinks the issue is about "direct transcendent access".

Even more remarkable is philosopher Simon Blackburn's very similar reaction---if, as I am guessing, his example of "why income distribution in the US is unjust" is prompted in part by Nagel's reference to the inheritance tax. There are points I agree with in Blackburn's article, but then there is this:

According to Nagel, Darwinians can explain, say, why we dislike pain and seek to minimize bringing it about for ourselves and for others we love. But, Nagel thinks, for the Darwinian, its “real badness” can be no part of the explanation of why we are averse to it. So it is another mystery how real badness and other real normative properties enter our minds. Nagel here manifests his founding membership of a peculiar and fortunately local philosophical subculture that thrives by resolutely dismissing the resources of the alternative, Humean picture, which sees our judgement that pain is a bad thing as a useful expression of our natural aversion to it. All he says about this is that it “denies that value judgements can be true in their own right”, which he finds implausible. He is silent about why he thinks this, perhaps wisely, if only because nobody thinks that value judgements are true in their own right. The judgement that income distribution in the US is unjust, for instance, is not true in its own right. It is true in virtue of the fact that after decades of lobbying, chief executives of major companies earn several hundred times the income of their rank-and-file workers. It is true because of natural facts.

Parenthetically, but importantly: I agree with Blackburn's characterization of Nagel as believing that the "real badness" of pain cannot be a main part of a Darwinian explanation of our aversion to pain.  And I disagree with this belief of Nagel's.

However, I don't know what's so peculiar and local about resolutely dismissing (sometimes with plenty of discussion, though one virtue of Nagel's book is that it is short, so a point like this may not get extensive discussion) the Humean view here, that this badness is just "natural aversion".  But in any case, Blackburn's discussion of his example is truly weird.  It seems reasonable to view a statement like "income distribution in the US is unjust" as true both because of the "natural facts" Blackburn cites, which explain how it has come to be what it is, and because of the component where the actual "values" come in, which gives reasons for our belief that this high degree of inequality is in fact unjust.  True, according to some theories of justice, e.g. a libertarian one, the genesis of a pattern of income and wealth distribution may be germane to whether or not it is just.  Blackburn might be adducing such an explanation, since he mentions "lobbying" as a cause (and not, say, "hard work").  But if so, he still hasn't explained: what's wrong with lobbying?  Why does it cast doubt on the justice of the resulting outcome?  What Nagel means by value judgements being true "in their own right" is not likely that every statement with a value component, like Blackburn's about US income and wealth distribution, is true in and of itself and no reasons can be given for it.  What I think he means is that at some point, probably at many different points, there enters into our beliefs about matters of value an element of irreducible judgement that something is right or wrong, good or bad, and that this is objective, not just a matter of personal taste or "natural aversion".  What Blackburn's statement reads most like, due to his emphasis on "natural facts", is an attempt to substitute the causal factors leading to US income distribution being what it is for the moral and political considerations---quite involved, perhaps subtle, and certainly contentious---that have led many to judge that it should not be what it is.  It's quite clear from Nagel's discussion of the inheritance tax what he thinks some of those considerations are: "autonomy and fairness". I just don't understand how someone could think that Blackburn's discussion of why US income distribution is unjust is better than an account in terms of concepts like autonomy and fairness---the sort of account that Nagel would obviously give. I've gotten some value from parts of Blackburn's work, even parts of this article, but this part---if this reading is correct---seems monumentally misguided.  Or does he think that the rest of the explanation is that human beings just have a "natural aversion" to income distribution that is as unequal, or perhaps as influenced by lobbying, as the US's currently is?  You might think that a cursory look at a large part of the Republican party in the US would have disabused him of that notion.

Perhaps I'm being excessively snarky here... advocates, like Blackburn, of the natural aversion view would probably argue that it needs to be supplemented and modified by reasoning... perhaps it is just that the "irreducibly moral" component of this process, as opposed to the deductive/analogical reasoning component, is still just a matter of natural aversion.  I would think more Hobbesian considerations would come into play as well, but that is a matter for (you may be sorry to hear) another post.

No new enlightenment: A critique of "quantum reason"

I have a lot of respect for Scientific American contributing physics editor George Musser's willingness to solicit and publish articles on some fairly speculative and, especially, foundational topics, whether in string theory, cosmology, the foundations of quantum theory, quantum gravity, or quantum information.  I've enjoyed and learned from these articles even when I haven't agreed with them.  (OK, I haven't enjoyed all of them, of course... a few have gotten under my skin.)  I've met George myself, at the most recent FQXi conference; he's a great guy and was very interested in hearing, both from me and from others, about cutting-edge research.  I also have a lot of respect for his willingness to dive into a fairly speculative area and write an article himself, as he has done with "A New Enlightenment" in the November 2012 Scientific American (previewed here).  So although I'm about to critique some of the content of that article fairly strongly, I hope it won't be taken as mean-spirited.  The issues raised are very interesting, and I think we can learn a lot by thinking about them; I certainly have.

The article covers a fairly wide range of topics, and for now I'm just going to cover the main points that I, so far, feel compelled to make about the article.  I may address further points later; in any case, I'll probably do some more detailed posts, maybe including formal proofs, on some of these issues.

The basic organizing theme of the article is that quantum processes, or quantum ideas, can be applied to situations which social scientists usually model as involving the interactions of "rational agents"...or perhaps, as they sometimes observe, agents that are somewhat rational and somewhat irrational.  The claim, or hope, seems to be that in some cases we can either get better results by substituting quantum processes (for instance, "quantum games", or "quantum voting rules") for classical ones, or perhaps better explain behavior that seems irrational.  In the latter case, in this article, quantum theory seems to be being used more as a metaphor for human behavior than as a model of a physical process underlying it.  It isn't clear to me whether we're supposed to view this as an explanation of irrationality, or in some cases as the introduction of a "better", quantum, notion of rationality.  However, the main point of this post is to address specifics, so here are four main points; the last one is not quantum, just a point of classical political science.

 

(1) Quantum games.  There are many points to make on this topic.  Probably most important is this one: quantum theory does not resolve the Prisoner's Dilemma.  Under the definitions I've seen of "quantum version of a classical game", the quantum version is also a classical game, just a different one.  Typically the strategy space is much bigger.  The classical strategies typically sit somewhere inside the quantum strategy space: as a distinguished basis for a complex vector space ("quantum state space") of strategies, or as a commuting ("classical") subset of the possible set of "quantum actions" (often unitary transformations, say, that the players can apply to physical systems that are part of the game-playing apparatus).  One can then compare the expected payoffs of the solutions, under various solution concepts such as Nash equilibrium, for the classical game and its "quantum version", and it may be that the quantum version has a better result for all players, using the same solution concept.  This was so for Eisert, Wilkens, and Lewenstein's (ELW for short) quantum version of Prisoner's Dilemma.  But this does not mean (nor, in their article, did ELW claim it did) that quantum theory "solves the Prisoner's Dilemma", although I suspect when they set out on their research, they might have had hope that it could.  It doesn't, because the prisoners can't transform their situation into quantum prisoner's dilemma; to play that game, whether by quantum or classical means, would require the jailer to do something differently.

ELW's quantum prisoner's dilemma involves starting with an entangled state of two qubits.  The state space consists of the unit Euclidean-norm sphere in a 4-dimensional complex vector space (equipped with the Euclidean inner product); it has a distinguished orthonormal basis which is a product of two local "classical" bases, each of which is labeled by the two actions available to the relevant player in the classical game.  The quantum game consists of each player choosing a unitary operator to perform on their local state.  Payoff is determined---and here is where the jailer must be complicit---by performing a certain two-qubit unitary---one which does not factor as a product of local unitaries---and then measuring in the "classical" product basis, with payoffs given by the classical payoffs corresponding to the label of the basis vector corresponding to the result.

Now, Musser does say that "Quantum physics does not erase the original paradoxes or provide a practical system for decision making unless public officials are willing to let people carry entangled particles into the voting booth or the police interrogation room."  But the situation is worse than that.  Even if the prisoners could smuggle in the entangled particles (and in some realizations of prisoner's dilemma in settings other than systems of detention, the players will have a fairly easy time supplying themselves with such entangled pairs, if quantum technology is feasible at all), they won't help unless the rest of the world implements the desired game, i.e. unless the mechanism producing the payoffs doesn't just measure in a product basis, but measures in an entangled basis.  Even more importantly, in many real-world games, the variables being measured are already highly decohered; to ensure that they are quantum coherent the whole situation has to be rejiggered.
So even if you didn't need the jailer to make an entangled measurement---if the measurement were just his independently asking each of you some question, and all you needed were to entangle your answers---you'd have to either entangle your entire selves, or covertly measure your particle and then repeat the answer to the jailer.  But in the latter case, you're not playing the game where the payoff is necessarily based on the measurement result: you could decide to say something different from the measurement result.  And that would have to be included in the strategy set.
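To make the structure concrete, here is a minimal numerical sketch (my own toy code, not ELW's) of the setup just described, with maximal entanglement (gamma = pi/2): the referee's entangling gate J and the final disentangle-and-measure step are explicit, which is exactly the part the players cannot do for themselves.

```python
# Toy version of the Eisert-Wilkens-Lewenstein quantum Prisoner's Dilemma.
# The referee applies J, the players apply local unitaries, the referee
# applies J-dagger and measures in the classical product basis.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # the classical "defect" move
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Q = 1j * Z                                      # the "quantum" move ELW highlight

gamma = np.pi / 2                               # maximal entanglement
J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(X, X)

# Payoffs indexed by outcomes |CC>, |CD>, |DC>, |DD>  (C = 0, D = 1)
payoff_A = np.array([3, 0, 5, 1])
payoff_B = np.array([3, 5, 0, 1])

def play(U_A, U_B):
    """Expected payoffs when A and B apply local unitaries U_A, U_B."""
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0                               # start in |CC>
    psi = J.conj().T @ np.kron(U_A, U_B) @ J @ psi0
    probs = np.abs(psi) ** 2
    return probs @ payoff_A, probs @ payoff_B

print(play(I2, I2))   # both "cooperate":          (3.0, 3.0)
print(play(X, X))     # both "defect":             (1.0, 1.0)
print(play(X, I2))    # A defects, B cooperates:   (5.0, 0.0)
print(play(Q, Q))     # both play Q:               (3.0, 3.0)
```

The classical game sits inside this one (restrict both players to {I, X} and the payoffs are exactly the classical ones), and (Q, Q) recovers the cooperative payoff; but notice that J and the final J-dagger-then-measure step both belong to the referee.  That's the jailer's complicity, in code form.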

There are still potential applications:  if we are explicitly designing games as mechanisms for implementing some social decision procedure, then we could decide to implement a quantum version (according to some particular "quantization scheme") of a classical game.  Of course, as I've pointed out, and as ELW do in their paper, that's just another classical game.  But as ELW note, it is possible---in a setting where quantum operations (quantum computer "flops") aren't too much more expensive than their classical counterparts---that playing the game by quantum means might use fewer resources than playing it by simulating it classically.  In a mechanism design problem that is supposed to scale to a large number of players, it even seems possible that the classical implementation could scale so badly with the number of players as to become infeasible, while the quantum one could remain efficient.  For this reason, mechanism design for preference revelation as part of a public goods provision scheme, for instance, might be a good place to look for applications of quantum prisoner's-dilemma-like games.  (I would not be surprised if this has been investigated already.)

Another possible place where quantum implementations might have an advantage is in situations where one does not fully trust the referee who is implementing the mechanism.  It is possible that quantum theory might enable the referee to provide better assurances to the players that he/she has actually implemented the stated game.  In the usual formulation of game theory, the players know the game, and this is not an issue.  But it is not necessarily irrelevant in real-world mechanism design, even if it might not fit strictly into some definitions of game theory.  I don't have a strong intuition one way or the other as to whether this actually works, but I guess it's been looked into.

(2) "Quantum democracy".  The part of the quote, in the previous item, about taking entangled particles into the voting booth, alludes to this topic.  Gavriel Segre has a 2008 arxiv preprint entitled "Quantum democracy is possible" in which he seems to be suggesting that quantum theory can help us the difficulties that Arrow's Theorem supposedly shows exist with democracy.  I will go into this in much more detail in another post.  But briefly, if we consider a finite set A of "alternatives", like candidates to fill a single position, or mutually exclusive policies to be implemented, and a finite set I of "individuals" who will "vote" on them by listing them in the order they prefer them, a "social choice rule" or "voting rule" is a function that, for every "preference profile", i.e. every possible indexed set of preference orderings (indexed by the set of individuals), returns a preference ordering, called the "social preference ordering", over the alternatives.  The idea is that then whatever subset of alternatives is feasible, society should choose the one mostly highly ranked by the social preference ordering,  from among those alternatives that are feasible.  Arrow showed that if we impose the seemingly reasonable requirements that if everyone prefers x to y, society should prefer x to y ("unanimity") and that whether or not society prefers x to y should be affected only by the information of which individuals prefer x to y, and not by othe aspects of individuals' preference orderings ("independence of irrelevant alternatives", "IIA"), the only possible voting rules are the ones such that, for some individual i called the "dictator" for the rule, the rule is that that individual's preferences are the social preferences.  If you define a democracy as a voting rule that satisfies the requirements of unanimity and IIA and that is not dictatorial, then "democracy is impossible".  Of course this is an unacceptably thin concept of individual and of democracy.  But anyway, there's the theorem; it definitely tells you something about the limitations of voting schemes, or, in a slighlty different interpretation, of the impossibility of forming a reasonable idea of what is a good social choice, if all that we can take into account in making the choice is a potentially arbitrary set of individuals' orderings over the possible alternatives.

Arrow's theorem tends to have two closely related interpretations:  as a mechanism for combining actual individual preferences to obtain social preferences that depend in desirable ways on individual ones, or as a mechanism for combining formal preference orderings stated by individuals, into a social preference ordering.  Again this is supposed to have desirable properties, and those properties are usually motivated by the supposition that the stated formal preference orderings are the individuals' actual preferences, although I suppose in a voting situation one might come up with other motivations.  But even if those are the motivations, in the voting interpretation, the stated orderings are somewhat like strategies in a game, and need not coincide with agents' actual preference orderings if there are strategic advantages to be had by letting these two diverge.

What could a quantum mitigation of the issues raised by Arrow's theorem---on either interpretation---mean?  We must be modifying some concept in the theorem... that of an individual's preference ordering, or voting strategy, or that of alternative, or---although this seems less promising---that of individual---and arguing that somehow that gets us around the problems posed by the theorem.  None of this seems very promising, for reasons I'll get around to in my next post.  The main point is that if the idea is similar to the --- as we've seen, dubious --- idea that superposing strategies can help in quantum games, it doesn't seem to help with interpretations where the individual preference ordering is their actual preference ordering.  How are we to superpose those?  Superposing alternatives seems like it could have applications in a many-worlds type interpretation of quantum theory, where all alternatives are superpositions to begin with, but as far as I can see, Segre's formalism is not about that.  It actually seems to be more about superpositions of individuals, but one of the big motivational problems with Segre's paper is that what he "quantizes" is not the desired Arrow properties of unanimity, independence of irrelevant alternatives, and nondictatoriality, but something else that can be used as an interesting intermediate step in proving Arrow's theorem.  However, there are bigger problems than motivation:  Segre's main theorem, his IV.4, is very weak, and actually does not differentiate between quantum and classical situations.  As I discuss in more detail below, it looks like for the quantum logics of most interest for standard quantum theory, namely the projection lattices of von Neumann algebras, the dividing line between ones having what Segre would call a "democracy", a certain generalization of a voting rule satisfying Arrow's criteria, and ones that don't (i.e. that have an "Arrow-like theorem") is not commutativity versus noncommutativity of the algebra (i.e., classicality versus quantumness), but just infinite-dimensionality versus finite-dimensionality, which was already understood for the classical case.  So quantum adds nothing.  In a later post, I will go through (or post a .pdf document) all the formalities, but here are the basics.

Arrow's Theorem can be proved by defining a set S of individuals to be decisive if for every pair x, y of alternatives, whenever everyone in S prefers x to y, and everyone not in S prefers y to x, society prefers x to y.  Then one shows that the set of decisive sets is an ultrafilter on the set of individuals.  What's an ultrafilter?  Well, let's define it for an arbitrary lattice.  The set, often called P(I), of subsets of any set I is a lattice (the relevant ordering is subset inclusion; the meet and join are intersection and union).   A filter---not yet ultra---in a lattice is a subset of the lattice that is upward-closed and meet-closed.  That is, to say that F is a filter is to say that if x is in F, and y is greater than or equal to x, then y is in F, and that if x and y are both in F, so is x meet y.  For P(I), this means that a filter has to include every superset of each set in the filter, and also the intersection of every pair of sets in the filter.  Then we say a filter is proper if it's not the whole lattice, and it's an ultrafilter if it's a maximal proper filter, i.e. it's not properly contained in any other filter (other than the whole lattice).  A filter is called principal if it's generated by a single element of the lattice:  i.e. if it's the smallest filter containing that element.  Equivalently, it's the set consisting of that element and everything above it.  So in the case of P(I), a principal filter consists of a given set, and all sets containing that set.
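In concrete terms (a small sketch of my own, representing subsets of a finite I as Python frozensets, ordered by inclusion):

```python
# Filters, ultrafilters, and principal filters on the lattice P(I).
from itertools import combinations

def powerset(I):
    s = list(I)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def is_filter(F, lattice):
    """Nonempty, upward-closed, and closed under pairwise meets (intersections)."""
    F = set(F)
    return (bool(F)
            and all(y in F for x in F for y in lattice if x <= y)  # upward-closed
            and all((x & y) in F for x in F for y in F))           # meet-closed

def is_ultrafilter(F, I, lattice):
    """On the Boolean lattice P(I), a proper filter is maximal exactly when
    it contains, for every subset A, either A or its complement."""
    F = set(F)
    return (is_filter(F, lattice) and len(F) < len(lattice)
            and all(A in F or (frozenset(I) - A) in F for A in lattice))

def principal_filter(generator, lattice):
    """The smallest filter containing the generator: it and everything above it."""
    return {A for A in lattice if generator <= A}
```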

To prove Arrow's theorem using ultrafilters, one shows that unanimity and IIA imply that the set of decisive sets is an ultrafilter on P(I).  But it was already well known, and is easy to show, that all ultrafilters on the powerset of a finite set are principal, and are generated by singletons of I, that is, sets containing single elements of I.  So a social choice rule satisfying unanimity and IIA has a decisive set containing a single element i, and furthermore, all sets containing i are decisive.  In other words, if i favors x over y, it doesn't matter who else favors x over y and who opposes it: x is socially preferred to y.  In other words, the rule is dictatorial.  QED.
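Continuing the sketch above, we can brute-force this for a three-element I: enumerate all 2^8 subsets of P(I), pick out the ultrafilters, and confirm that each is principal and generated by a singleton.

```python
from itertools import chain, combinations

I = frozenset({0, 1, 2})
lattice = powerset(I)

candidates = chain.from_iterable(combinations(lattice, r)
                                 for r in range(len(lattice) + 1))
ultrafilters = [set(F) for F in candidates if is_ultrafilter(F, I, lattice)]

print(len(ultrafilters))                     # 3 -- one per element of I
for F in ultrafilters:
    generator = frozenset.intersection(*F)   # the filter's minimal element
    assert principal_filter(generator, lattice) == F
    print("generated by the singleton", set(generator))
```

Each "dictator" i corresponds exactly to the ultrafilter of all sets containing i.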

Note that it is crucial here that the set I is finite.  If you assume the axiom of choice (no pun intended ahead of time), then non-principal ultrafilters do exist in the lattice of subsets of an infinite set, and the more abstract-minded people who have thought about Arrow's theorem and ultrafilters have indeed noticed that if you are willing to generalize Arrow's conditions to an infinite electorate, whatever that means, the theorem doesn't generalize to that situation.  The standard existence proof for a non-principal ultrafilter is to use the axiom of choice in the form of Zorn's lemma to establish that any proper filter is contained in a maximal one (i.e. an ultrafilter), and then take the set of subsets whose complement (in I) is finite, show it's a filter, and show its extension to an ultrafilter is not principal.  Just for fun, we'll do this in a later post.  I wouldn't summarize the situation by saying "infinite democracies exist", though.  As a sidelight, some people don't like the fact that the existence proof is nonconstructive.
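In the meantime, here is that argument compressed (my sketch; the later post will fill in the details):

```latex
Let $F_0 = \{\, S \subseteq I : I \setminus S \text{ is finite} \,\}$, the
cofinite (Fr\'echet) filter.  It is a proper filter: any superset of a
cofinite set is cofinite, $(S \cap T)^{c} = S^{c} \cup T^{c}$ is finite
whenever $S^{c}$ and $T^{c}$ are, and $\emptyset \notin F_0$ because $I$ is
infinite.  By Zorn's lemma, the poset of proper filters extending $F_0$ has
a maximal element $U$, i.e.\ an ultrafilter.  And $U$ is not principal: if
$U$ were generated by some (necessarily nonempty) $S$, pick $x \in S$; then
$I \setminus \{x\} \in F_0 \subseteq U$ would force
$S \subseteq I \setminus \{x\}$, contradicting $x \in S$.
```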

As I said, I'll give the details in a later post.  Here, we want to examine Segre's proposed generalization.  He defines a quantum democracy  to be a nonprincipal ultrafilter on the lattice of projections of an "operator-algebraically finite von Neumann algebra".  In the preprint there's no discussion of motivation, nor are there explicit generalizations of unanimity and IIA to corresponding quantum notions.  To figure out such a correspondence for Segre's setup we'd need to convince ourselves that social choice rules, or ones satisfying one or the other of Arrow's properties, are related one to one to their sets of decisive coalitions, and then relate properties of the rule (or the remaining property), to the decisive coalitions' forming an ultrafilter.  Nonprincipality is clearly supposed to correspond to nondictatorship.  But I won't try to tease out, and then critique, a full correspondence right now, if one even exists.

Instead, let's look at Segre's main point.  He defines a quantum logic as a non-Boolean orthomodular lattice.  He defines a quantum democracy as a non-principal ultrafilter in a quantum logic.  His main theorem, IV.4, as stated, is that the set of quantum democracies is non-empty.  Thus stated, of course, it can be proved by showing the existence of even one quantum logic that has a non-principal ultrafilter.  These do exist, so the theorem is true.

However, there is nothing distinctively quantum about this fact.  Here, it's relevant that Segre's Theorem IV.3 as stated is wrong.  He states (I paraphrase to clarify the scope of some quantifiers) that L is an operator-algebraically finite orthomodular lattice all of whose ultrafilters are principal if, and only if, L is a classical logic (i.e. a Boolean lattice).  But this is false.  It's true that to get his theorem IV.4, he doesn't need this equivalence.  But what is a von Neumann algebra?  It's a *-algebra consisting of bounded operators on a Hilbert space, closed in the weak operator topology.  (Or something isomorphic in the relevant sense to one of these.) There are commutative and noncommutative ones.  And there are finite-dimensional ones and infinite-dimensional ones.  The finite-dimensional ones include:  (1) the algebra of all bounded operators on a finite-dimensional Hilbert space (under operator multiplication and the adjoint operation); these are noncommutative for dimension > 1;  (2) the algebra of complex functions on a finite set I (under pointwise multiplication and complex conjugation); and (3) finite products (or, if you prefer the term, direct sums) of algebras of these types.  (Actually we could get away with just type (1) and finite products, since the type (2) ones are just finite direct sums of one-dimensional instances of type (1).)   The projection lattices of the cases (2) are isomorphic to P(I) for I the finite set.  These are the projection lattices for which Arrow's theorem can be proved using the fact that they have no nonprincipal ultrafilters.  The cases (1) are their obvious quantum analogues.  And it is easy to show that in these cases, too, there are no nonprincipal ultrafilters.  Because the lattice of projections of a von Neumann algebra is complete, one can use essentially the same proof as for the case of P(I) for finite I.  So for the obvious quantum analogues of the setups where Arrow's theorem is proven, the analogue of Arrow's theorem does hold, and Segre's "quantum democracies" do not exist.
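One way to see this (my sketch, a minimal-rank variant of the completeness argument just mentioned):

```latex
Let $F$ be a filter in the projection lattice of a finite-dimensional von
Neumann algebra, and let $p \in F$ have minimal rank among the elements of
$F$.  For any $q \in F$ we have $p \wedge q \in F$ and $p \wedge q \le p$,
so $\operatorname{rank}(p \wedge q) \le \operatorname{rank}(p)$, and
minimality forces $p \wedge q = p$, i.e.\ $p \le q$.  Hence $F$ is the
principal filter generated by $p$; no nonprincipal ultrafilters exist.
```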

Moreover, Alex Wilce pointed out to me in email that essentially the same proof as for P(I) with I infinite gives the existence of a nonprincipal ultrafilter for any infinite-dimensional von Neumann algebra:  one takes the set of projections of cofinite rank (i.e. whose orthocomplementary projection has finite rank), shows it's a filter, extends it (using Zorn's lemma) to an ultrafilter, and shows that's not principal.  So (if the dividing line between finite-dimensional and infinite-dimensional von Neumann algebras is precisely that their lowest-dimensional faithful representations are on finite-dimensional Hilbert spaces, which seems quite likely) the dividing line between projection lattices of von Neumann algebras on which Segre-style "democracies" (nonprincipal ultrafilters) exist is precisely that between finite and infinite dimension, and not that between commutativity and noncommutativity.  I.e. the existence or not of a generalized decision rule satisfying a generalization of the conjunction of Arrow's conditions has nothing to do with quantumness.  (Not that I think it would mean much for social choice theory or voting if it did.)

(3) I'll only say a little bit here about "quantum psychology".  Some supposedly paradoxical empirical facts are described at the end of the article.  When subjects playing Prisoner's Dilemma are told that the other player will snitch, they always (nearly always? there must be a few mistakes...) snitch.  When they are told that the other player will stay mum, they usually also fink, but sometimes (around 20% of the time---it is not stated whether this is typical of a single individual in repeated trials, or a percentage of individuals in single trials) stay mum.  However, if they are not told what the other player will do, "about 40% of the time" they stay mum.  Emmanuel Pothos and Jerome Busemeyer devised a "quantum model" that reproduced the result.  As described in Sci Am, Pothos interprets it in terms of destructive interference between (amplitudes associated with, presumably) the 100% probability of snitching when the other snitches and the 80% probability of snitching when the other does not, interference that reduces the probability to 60% when they are not sure whether the other will snitch.  It is a model; they do not claim that quantum physics of the brain is responsible.  However, I think there is a better explanation, in terms of what Douglas Hofstadter called "superrationality", Nigel Howard called "metarationality", and I like to call a Kantian equilibrium concept, after the version of Kant's categorical imperative that urges you to act according to a maxim that you could will to be a universal law.  Simply put, it's the line of reasoning that says "the other guy is rational like me, so he'll do what I do.  What does one do if he believes that?  Well, if we both snitch, we're sunk.  If we both stay mum, we're in great shape.  So we'll stay mum."  Is that rational?  I dunno.  Kant might have argued it is.  But in any case, people do consider this argument, as well, presumably, as the one for the Nash equilibrium.  But in either of the cases where the person is told what the other will do, there is less role for the categorical imperative; one is being put more in the Nash frame of mind.  Now it is quite interesting that people still cooperate a fair amount of the time when they know the other person is staying mum; I think they are thinking of the other person's action as the outcome of the categorical imperative reasoning, and they feel some moral pressure to stay with the categorical imperative reasoning.  Whereas they are easily swayed to completely dump that reasoning in the case when told the other person snitched: the other has already betrayed the categorical imperative.  Still, it is a bit paradoxical that people are more likely to cooperate when they are not sure whether the other person is doing so; I think the uncertainty makes the story that "he will do what I do" more vivid, and the tempting benefit of snitching when the other stays mum less vivid, because one doesn't know *for sure* that the other has stayed mum.  Whether that all fits into the "quantum metaphor" I don't know, but it seems we can get quite a bit of potential understanding here without invoking it.  Moreover, there probably already exists data to help explore some of these ideas, namely about how the same individual behaves under the different certain and uncertain conditions, in anonymous trials guaranteed not to involve repetition with the same opponent.
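Here is a toy version of the arithmetic (my own illustration, not the actual Pothos-Busemeyer model): replace the classical law of total probability with a sum of two amplitudes carrying a relative phase, and fit the phase to the data.

```python
# Classical mixing vs. a toy "quantum" interference fit to the PD data.
import numpy as np

p_snitch_given_snitch = 1.0   # other player known to snitch
p_snitch_given_mum    = 0.8   # other player known to stay mum
p_snitch_uncertain    = 0.6   # other player's move unknown (observed)

# Classical law of total probability (other player 50/50):
print(0.5 * p_snitch_given_snitch + 0.5 * p_snitch_given_mum)   # 0.9, not 0.6

# Toy quantum model: add amplitudes with a relative phase phi; the cross
# term 2*a1*a2*cos(phi) is the interference the classical sum lacks.
a1 = np.sqrt(0.5 * p_snitch_given_snitch)
a2 = np.sqrt(0.5 * p_snitch_given_mum)
phi = np.arccos((p_snitch_uncertain - a1**2 - a2**2) / (2 * a1 * a2))
print(np.degrees(phi))                              # ~109.6 degrees
print(np.abs(a1 + a2 * np.exp(1j * phi)) ** 2)      # 0.6, by construction
```

Note that the phase is fitted, not predicted: on its own this is a redescription of the data rather than an explanation, which is part of why the superrationality story above seems to me at least as good.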

Less relevant to quantum theory, but perhaps relevant in assessing how important voting paradoxes are in the real world, is an entirely non-quantum point:

(4)  A claim by Piergiorgio Odifreddi, that the 1976 US election is an example of Condorcet's paradox of cyclic pairwise majority voting, is prima facie highly implausible to anyone who lived through that election in the US.  The claim is that a majority would have favored, in two-candidate elections:

Carter over Ford (as in the actual election)

Ford over Reagan

Reagan over Carter

I strongly doubt that Reagan would have beaten Carter in that election.  There is some question of what this counterfactual means, of course:  using polls conducted near the time of the election does not settle the issue of what would have happened in a full general-election campaign pitting Carter against Reagan.  In "Preference Cycles in American Elections", Electoral Studies 13: 50-57 (1994), as summarized in Democracy Defended by Gerry Mackie, political scientist Benjamin Radcliff analyzed electoral data and previous studies concerning the US Presidential elections from 1972 through 1984, and found no Condorcet cycles.  In 1976, the pairwise orderings he found for (hypothetical, in two of the cases) two-candidate elections were Carter > Ford, Ford > Reagan, and Carter > Reagan.  Transitivity is satisfied; no cycle.  Obviously, as I've already discussed, there are issues of methodology, and of how to analyze a counterfactual concerning a general election.  More on this, perhaps, after I've tracked down Odifreddi's article.  Odifreddi is in the Sci Am article because an article by him inspired Gavriel Segre to try to show that such problems with social choice mechanisms like voting might be absent in a quantum setting.
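Checking for a cycle is mechanical once the pairwise results are in hand; here is a quick sketch (my own) run on Radcliff's 1976 findings and on Odifreddi's claimed cycle:

```python
# Detect a Condorcet cycle in a set of pairwise-majority results.
from itertools import permutations

def has_condorcet_cycle(beats):
    """beats: set of (winner, loser) pairwise-majority outcomes.
    The relation is cycle-free iff some linear order agrees with it."""
    candidates = {c for pair in beats for c in pair}
    return not any(
        all(order.index(w) < order.index(l) for (w, l) in beats)
        for order in permutations(candidates)
    )

radcliff_1976   = {("Carter", "Ford"), ("Ford", "Reagan"), ("Carter", "Reagan")}
odifreddi_claim = {("Carter", "Ford"), ("Ford", "Reagan"), ("Reagan", "Carter")}

print(has_condorcet_cycle(radcliff_1976))    # False: transitive, no paradox
print(has_condorcet_cycle(odifreddi_claim))  # True: the claimed cycle
```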

Odifreddi is cited by Musser as pointing out that democracies usually avoid Condorcet paradoxes because voters tend to line up on an ideological spectrum---I'm just sceptical, until I see more evidence, that that was not also the case in 1976 in the US.  I have some doubt also about the claim that Condorcet cycles are the cause of democracy "becoming completely dysfunctional" in "politically unsettled times", or indeed that it does become completely dysfunctional in such times.  But I must remember that Odifreddi is from the land of Berlusconi.  But then again, I doubt cycles are the main issue with him...

Romney's economists tie him to George W. Bush

I'm not going to weigh in on the merits of the white paper "The Romney Plan for Economic Growth, Jobs, and Recovery," by some of Romney's top economic advisers, until I read it (which may be a while, because I'm not going to create a login at the linked site just to download it). But one point should probably be made: when Barack Obama, or disgruntled Republicans, complain that Romney will just bring us more of the George W. Bush economic policies that helped get us into the present economic difficulties, Romney shouldn't be allowed to get off by dissociating himself from Bush's economic policies. Two of the authors of this study each served as chair of W's Council of Economic Advisers: Hubbard from February 2001 through March 2003, and Mankiw from 2003 to 2005. Hubbard is often mentioned as one of the architects of the 2003 Bush tax cuts. Both are not just authors of this white paper, but advisers to the Romney campaign. Another author, John Taylor, was undersecretary of the Treasury for international affairs from 2001 to 2005. Kevin "Dow 36,000" Hassett was an adviser to the 2004 Bush campaign, but his CV doesn't list positions in the Bush administration.

You could probably do a lot worse than Mankiw or Taylor in particular... my point here is just that with these guys as his main economic advisers, Romney shouldn't be allowed to dodge the legacy of George W. Bush's economic policies. The substantive similarity (or in fact, identity, in the case of extending the Bush tax cuts) of these policies (more tax cuts!) should be noted in this context too.

DeLong vs. Romney's economists: the gloves come off

Brad DeLong takes on the Kevin Hassett, Glenn Hubbard, Greg Mankiw, and John Taylor position paper defending the Romney economic "plan".  Mostly bullseyes, I think, though there'd be a lot of food for thought in delving into things further.  I particularly liked Brad's evisceration of HHMT's appeal to the supposed success, during the Reagan years, of supply-side notions such as those Romney currently leans on:

And the embarrassing reality underlying the Reagan years 1981-1989 is that the rate of growth of America’s productive potential, as estimated by the Congressional Budget Office, was no faster over 1981-1989 than it had been over 1973-1981. If Reagan administration policies were truly aimed at boosting American growth, they failed—in large part because of the drag placed on investment by the high real interest rates that businesses had to pay in the Reagan years, as they competed for scarce pools of capital left over after the U.S. government had financed the Reagan deficits.

The mythology and hagiography surrounding Reagan has done a lot of damage to popular American political and economic thinking over recent decades, in my opinion, and we need a political party, political leaders, and opinion leaders willing to take it on much more aggressively. (I would prefer, in Brad's comparison, to see the period 1971-1980 (inclusive) rather than 1973-1981, though I doubt it would change the comparison much.)

Spanish bond yields back up...

Based on my reading of Draghi's speech that seems to have excited the market so much, and my general view on Euro policy and politics, I'm not surprised that Spanish 10-year bond yields are back up over 7%.  I guess my probabilities are about 40% for the Euro surviving in pretty much its present form (with or without Greece), without very much of the looser monetary policy involved in the third alternative below, and perhaps with a bit more of the political and fiscal integration that Draghi and many other Euro policymakers seem to view as essential to the Euro's survival, but with Europe facing a lost decade à la Japan; 40% that it unravels fast, at some unpredictable point in time but most likely within the next two years; and 20% that in the face of further crisis, the Europeans finally collectively figure out a reasonable macroeconomic response involving additional monetary stimulus and acceptance of moderate inflation and further Euro devaluation, as well as a turn in the real terms of trade to make the Southern Eurozone countries more competitive relative to the Northern ones (especially relative to Germany).

Skim away (Long run return to equities...)

Everybody who makes any investment decisions (like what to do with the money in your IRA or other retirement account) needs to understand something that Bill Gross, managing director at PIMCO and involved in the running of the PIMCO Total Return bond fund (PTTRX), the world's largest mutual fund, with $263 billion in assets as of yesterday's market close, apparently does not.  Namely, that one should not expect the long-run rate of return on ownership of stock to be equal to the growth rate of GDP, or even of the economy's capital stock.  Gross (quoted by CNBC online):

"The 6.6 percent real return belied a commonsensical flaw much like that of a chain letter or yes - a Ponzi scheme," he says. "If wealth or real GDP was only being created at an annual rate of 3.5 percent over the same period of time, then somehow stockholders must be skimming 3 percent off the top each and every year."

"If an economy's GDP could only provide 3.5 percent more goods and services per year, then how could one segment (stockholders) so consistently profit at the expense of the others (lenders, laborers and government)?"

It's remarkable that someone in finance is conceptualizing the returns to capital as "at the expense of the others (lenders, laborers and government)".  (I'm not implying that that's never the case...) I guess it is part of his main misconception: that returns beyond the rate of growth of the capital stock must be somehow at the expense of other sectors, rather than any kind of return to capital based on its productivity, or scarcity, or similar economic factors.  What Gross fails to understand is that the economy does not necessarily invest all of the returns to capital in growing the capital stock, and thus growing potential GDP; some of these returns are, if you like, "skimmed off the top" and consumed.  For publicly traded companies, for example, this can take the form of dividends returned to shareholders.  Doing a more precise accounting of where firms' profits go, in terms of new capital formation, dividend payouts and other ways of getting cash to investors, and so forth, and comparing it to long-run stock market returns, seems like a worthwhile exercise.  It's one I'm not especially well-equipped to do, although I suspect it would begin with a (salutary for any investor) review of financial accounting---in particular, how to interpret corporate income statements and balance sheets.  And I suspect many economists have done versions of it.  [Added Aug. 2nd 2012:  I think that dividends and distributions are not really the key here:  investors might reinvest dividends (supporting the stock price, and tending to increase market returns), or sell stocks and consume some of the proceeds without reinvesting (reducing overall market returns).  The main point is as stated above: economy-wide, not all of the returns to capital should be assumed to be reinvested.] But it's shocking that someone managing money at this scale (or any scale, actually!), even if it's mostly not equities, doesn't have their mind around this basic fact.
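A toy version of the accounting (my own illustration, using Gross's numbers): suppose equity capital earns a 6.6% real return while the capital stock, and with it potential GDP, grows at 3.5%.  Then firms need only reinvest enough of their returns to fund that growth; the rest can be paid out and consumed, with no skimming from anyone required.

```python
# Shareholder returns vs. economy growth: no paradox once payouts are counted.
r_equity = 0.066   # real return on equity capital (Gross's 6.6%)
g_growth = 0.035   # real growth of the capital stock / GDP (Gross's 3.5%)

reinvested_share = g_growth / r_equity     # fraction of returns plowed back
payout_yield     = r_equity - g_growth     # paid out (dividends etc.) and consumed

print(f"share of returns reinvested: {reinvested_share:.1%}")   # ~53.0%
print(f"payout yield:                {payout_yield:.1%}")       # ~3.1%
# Total shareholder return = growth component (~3.5%, tracking the capital
# stock) + payout component (~3.1%) = 6.6%, while GDP grows at only 3.5%.
```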

That doesn't mean I necessarily endorse the 6.6% real returns mentioned by Gross for US equities over the last century as a reasonable expectation of (very) long-run returns to US stocks... I would need to see how the calculation is done, and of course, past performance is no guarantee of future returns...  But when I saw Gross' statements linked on major financial information websites, I felt like I had to say something.  Brad DeLong explains in more detail.  He also makes the point that the earnings yield of the S&P 500 is currently around 7.7%, and in a further post points out that the yield using the past 10 years' earnings, smoothed, is around 4.5%, which together with an (historically reasonable) assumption of 2% real earnings growth suggests to him that around 5% real returns to equity is a reasonable expectation (unless you expect a collapse in earnings, or in the price-to-earnings ratio).

Stiglitz likes South African infrastructure investment plans

Since I'm in South Africa this month as a Fellow of the Stellenbosch Institute for Advanced Study (about which more later), I came across this interesting article from a major South African paper, Business Day, about Joseph Stiglitz' involvement in South African economic issues.  According to the article he "voiced strong support for the government’s R840bn infrastructure programme, which he said could create a "virtuous circle" of investment and growth and set SA on the path to a more productive and equal society."  (Link added.) 840 billion rand is 108 billion US dollars at today's rate.  That is roughly 25% of South African nominal GDP as forecast for 2012, but this article, also in Business Day, refers to it as a 20-year rolling program, in which case it is on the order of 1% of GDP annually, depending on the spending profile, future GDP growth, and how the total nominal value of R850bn in planned spending is calculated.  Also interesting is the plan to finance it with mandatory retirement-plan savings; while there are probably further details, it sounds on the face of it similar to a social-security-type plan.  However, it lacks, one suspects, the (rather inappropriate, in my view) feature of US social security as currently (but rather recently, in historic terms) formulated, of being officially described as a trust fund invested entirely in central government securities.  From the second Business Day article cited above:

In the final session of the conference, business leader Bobby Godsell and Zwelinzima Vavi, general secretary of the Congress of South African Trade Unions, both made guarded commitments to this. Mr Godsell said a society-wide discussion was needed on the concept of "a reasonable return" for investment, while Mr Vavi said he backed plans to introduce mandatory savings for all employees.

This sounds like how it should be done.

I have picked up a widespread sense that there is a lot of corruption and siphoning off of funds in the awarding and performance of government contracts in South Africa.  Obviously a big infrastructure programme provides big opportunities for more of this, which can of course be damaging economically and perhaps even more, politically; I suspect, and certainly hope, Stiglitz has factored in this aspect of the South African scene and still thinks the plan worthwhile, but it would be interesting to see it addressed directly, as it is certainly not a minor issue.