Free will and retrocausality at Cambridge II: Conspiracy vs. Retrocausality; Signaling and Fine-Tuning

Expect (with moderate probability) substantial revisions to this post, hopefully including links to relevant talks from the Cambridge conference on retrocausality and free will in quantum theory, but for now I think it's best just to put this out there.

Conspiracy versus Retrocausality

One of the main things I hoped to straighten out for myself at the conference on retrocausality in Cambridge was whether the correlations between measurement settings and "hidden variables" involved in a retrocausal explanation of Bell-inequality-violating quantum correlations are necessarily "conspiratorial", as Bell himself seems to have thought.  The idea seems to be that correlations between measurement settings and hidden variables must be due to some "common cause" in the intersection of the backward light cones of the two.  That is, a kind of "conspiracy" coordinating the relevant hidden variables that can affect the measurement outcome with all sorts of intricate processes that can affect which measurement is made, such as those affecting your "free" decision as to how to set a polarizer, or, if you set up a mechanism to control the polarizer setting according to some apparatus reasonably viewed as random ("the Swiss national lottery machine" was the one envisioned by Bell), the functioning of this mechanism.  I left the conference convinced once again (after doubts on this score had been raised in my mind by some discussions at New Directions in the Philosophy of Physics 2013) that the retrocausal type of explanation Price has in mind is different from a conspiratorial one.
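
To make the worry concrete, here is a minimal sketch in the usual hidden-variable notation (the notation is mine, not Bell's exact wording): write $a$ and $b$ for the two measurement settings, $\lambda$ for the hidden variables, and $\rho$ for their distribution. The "no conspiracy" (measurement independence) assumption of Bell's derivation is

$$\rho(\lambda \mid a, b) = \rho(\lambda).$$

Both the conspiratorial and the retrocausal accounts give up this assumption; the question is whether the resulting correlation between $\lambda$ and $(a, b)$ must be traced to a common cause in the overlap of their backward light cones, or can instead be a lawlike dependence of $\lambda$ on the settings themselves.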

Deflationary accounts of causality: their impact on retrocausal explanation

Distinguishing "retrocausality" from "conspiratorial causality" is subtle, because it is not clear that causality makes sense as part of a fundamental physical theory.   (This is a point which, in this form, apparently goes back to Bertrand Russell early in this century.  It also reminds me of David Hume, although he was perhaps not limiting his "deflationary" account of causality to causality in physical theories.)  Causality might be a concept that makes sense at the fundamental level for some types of theory, e.g. a version ("interpretation") of quantum theory that takes measurement settings and outcomes as fundamental, taking an "instrumentalist" view of the quantum state as a means of calculating outcome probabilities giving settings, and not as itself real, without giving a further formal theoretical account of what is real.  But in general, a theory may give an account of logical implications between events, or more generally, correlations between them, without specifying which events cause, or exert some (perhaps probabilistic) causal influence on others.  The notion of causality may be something that is emergent, that appears from the perspective of beings like us, that are part of the world, and intervene in it, or model parts of it theoretically.  In our use of a theory to model parts of the world, we end up taking certain events as "exogenous".  Loosely speaking, they might be determined by us agents (using our "free will"), or by factors outside the model.  (And perhaps "determined" is the wrong word.)   If these "exogenous" events are correlated with other things in the model, we may speak of this correlation as causal influence.  This is a useful way of speaking, for example, if we control some of the exogenous variables:  roughly speaking, if we believe a model that describes correlations between these and other variables not taken as exogenous, then we say these variables are causally influenced by the variables we control that are correlated with them.  We find this sort of notion of causality valuable because it helps us decide how to influence those variables we can influence, in order to make it more likely that other variables, that we don't control directly, take values we want them to.  This view of causality, put forward for example in Judea Pearl's book "Causality", has been gaining acceptance over the last 10-15 years, but it has deeper roots.  Phil Dowe's talk at Cambridge was an especially clear exposition of this point of view on causality (emphasizing exogeneity of certain variables over the need for any strong notion of free will), and its relevance to retrocausality.

This deflationary view makes the discussion of retrocausality more subtle because it raises the possibility that a retrocausal and a conspiratorial account of what's going on in a Bell experiment might describe the same correlations---between the Swiss national lottery machine, or whatever controls my whims in setting a polarizer, all the variables these things are influenced by, and the polarizer settings and outcomes in a Bell experiment---differing only in the causal relations they posit between these variables.  That might be true, if a retrocausalist decided to try to model the process by which the polarizer was set.  But the point of the retrocausal account seems to be that it is not necessary to model this process in order to explain the correlations between measurement results.  The retrocausalist posits a lawlike correlation between measurement settings and some of the hidden variables that lie in the past light cone of both measurement outcomes.  As long as this retrocausal influence does not affect observable past events, but only the values of "hidden", although real, variables, there is nothing obviously more paradoxical about imagining it than about imagining---as we do all the time---that macroscopic variables we exert some control over, such as measurement settings, are correlated with things in the future.  Indeed, as Huw Price has long (I have only recently realized for just how long) been pointing out, if we believe that the fundamental laws of physics are symmetric with respect to time-reversal, then it is the absence of retrocausality (if we dismiss its possibility), or its relative scarcity (if we accept it to the limited extent needed to explain Bell correlations), that needs explaining.  Part of the explanation, of course, is likely that causality, as mentioned above, is a notion that is useful for agents situated within the world, rather than one that applies to the "view from nowhere and nowhen" that some (e.g. Price, who I think coined the term "nowhen") think is, or should be, taken by fundamental physical theories.  Therefore whatever asymmetries are associated with our apparently time-asymmetric experience of the world---these could be somewhat local in spacetime even if extremely large-scale, or due to "spontaneous" (i.e. explicit, even if due to a small perturbation) symmetry-breaking---may also explain why we introduce the causal arrows we do into our descriptions, and therefore why we so rarely introduce retrocausal ones.  At the same time, such an explanation might well leave room for the limited retrocausality Price would like to introduce into our description, for the purpose of explaining Bell correlations, especially because such retrocausality does not allow backwards-in-time signaling.
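
One way to formalize the distinction this last point relies on, in the notation of the sketch above (the formalization is mine, and it assumes the outcome at each wing depends on the distant setting only through $\lambda$): a retrocausal model can have $\rho(\lambda \mid a, b) \neq \rho(\lambda)$ while the observable statistics still obey no-signaling, i.e.

$$p(A \mid a, b) = \int d\lambda \, \rho(\lambda \mid a, b)\, p(A \mid a, \lambda)$$

comes out independent of $b$ (and likewise with the roles of the two wings exchanged), so the setting-dependence of the hidden variables never amounts to a controllable influence on observable events, whether distant or past.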

Signaling (spacelike and backwards-timelike) and fine-tuning. Emergent no-signaling?

A theme that came up repeatedly at the conference was "fine-tuning"---that no-spacelike-signaling, and possibly also no-retrocausal-signaling, seem to require a kind of "fine-tuning" from a hidden-variable model that uses such influences to explain quantum correlations.  Why, in Bohmian theory, if we have spacelike influence of variables we control on physically real (but not necessarily observable) variables, should things be arranged just so that we cannot use this influence to remotely control observable variables, i.e. signal?  Similarly, one might ask why, if we have backwards-in-time influence of controllable variables on physically real variables, things are arranged just so that we cannot use this influence to remotely control observable variables at an earlier time.  I think---and I believe this possibility was raised at the conference---that a possible explanation, suggested by the above discussion of causality, is that for macroscopic agents such as us, with usually-reliable memories, some degree of control over our environment, and persistence over time, to arise, it may be necessary that the scope of such macroscopic "observable" influences be limited, in order that there be a coherent macroscopic story at all for us to tell---in order for us even to be around to wonder whether there could be such signaling or not.  (So the term "emergent no-signaling" in the section heading might be slightly misleading: signaling, causality, control, and limitations on signaling might all necessarily emerge together.)  Such a story might end up involving thermodynamic arguments, about the sorts of structures that might emerge in a metastable equilibrium, or that might emerge in a dynamically stable state dependent on a temperature gradient, or something of the sort.  Indeed, the distribution of hidden variables (usually, positions and/or momenta) according to the squared modulus of the wavefunction, which is necessary to get agreement of Bohmian theory with quantum theory and also to prevent signaling (and which does seem like "fine-tuning" inasmuch as it requires a precise choice of probability distribution over initial conditions), has on various occasions been justified by arguments that it represents a kind of equilibrium that would be rapidly approached even if it did not initially obtain.  (I have no informed view at present on how good these arguments are, though I have at various times in the past read some of the relevant papers---Bohm himself, and Sheldon Goldstein, are the authors who come to mind.)
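
To make the Bohmian version of the "fine-tuning" explicit (this is my gloss on the standard argument, not something worked out at the conference): the hidden configuration $q$ is assumed to be distributed according to the quantum equilibrium distribution

$$\rho(q) = |\psi(q)|^2,$$

and it is for exactly this distribution that the marginal statistics on one wing of a Bell experiment come out independent of the distant setting.  For a generic non-equilibrium distribution $\rho \neq |\psi|^2$, the dependence of the trajectories on the distant setting would, as Valentini in particular has emphasized, become usable for signaling.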

I should mention that at the conference the appeal of such statistical/thermodynamic arguments for "emergent" no-signaling was questioned---I think by Matthew Leifer, who with Rob Spekkens has been one of the main proponents of the idea that no-signaling can appear like a kind of fine-tuning, and that it would be desirable to have a model that gave a satisfying explanation of it---on the grounds that one might expect "fluctuations" away from the equilibria, metastable structures, or steady states, whereas we don't observe even small fluctuations away from no-signaling---the law seems to hold with certainty.  This is an important point, and although I suspect there are adequate rejoinders, I don't at the moment see what these might be.

6 thoughts on “Free will and retrocausality at Cambridge II: Conspiracy vs. Retrocausality; Signaling and Fine-Tuning”

  1. Hi Howard,

    I enjoyed your excellent and insightful post! I'll have to think a bit more about your idea in the next-to-last paragraph. (I don't recall it being discussed at the conference, and it sounds interesting.) My gut reaction is that such an anthropic constraint would only limit what *typically* happens in nature, not what might conceivably occur with sophisticated lab equipment. But I don't think I've quite sorted out the perspectival aspects of signaling to my own satisfaction, so I definitely need to think about all this a bit more.

    On your very last point, perhaps one rejoinder would be to consider one of Huw's recent examples. Imagine that you were given a hidden and random series of incoming bits, and also given the option to send each one through a NOT gate (or not, as you chose). Clearly you are able to causally affect the outgoing bits, but the combination of the hiddenness and randomness prevents you from being able to signal. (Causation without signaling.) Now, applying Leifer's fluctuation point to this scenario, he might correctly note that the "randomness" of the bits doesn't always hold for a finite subset of incoming bits. Occasionally you would get twenty "1"s in a row. But this is not a signaling resource, because you never know when such a string is going to arrive (or continue). So fluctuations in and of themselves aren't enough to allow signaling, even if the primary no-signaling argument is based on randomness. (This is a far cry from answering the big-picture question of why nature prevents us from signaling into the past, and there may even be a rejoinder to this small point; I'll point Matt over to this post and see what he has to say.)
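
    A minimal simulation of this example (the function run_trials and its parameters are mine, purely for illustration): with fair incoming bits, the outgoing statistics are the same whether or not the NOT gate is applied, even though every single outgoing bit is causally affected by that choice.

    ```python
    import random

    def run_trials(apply_not, n_bits=100_000, p_one=0.5, seed=0):
        """Send hidden random bits through an optional NOT gate.

        Returns the fraction of 1s in the outgoing stream, which is all
        the receiver gets to look at (the incoming bits stay hidden).
        """
        rng = random.Random(seed)
        ones = 0
        for _ in range(n_bits):
            bit = 1 if rng.random() < p_one else 0  # hidden incoming bit
            out = 1 - bit if apply_not else bit     # sender's causal influence
            ones += out
        return ones / n_bits

    # With fair (50/50) incoming bits the outgoing frequency is ~0.5 either
    # way, so the receiver cannot tell whether the NOT gate was applied:
    # causation without signaling.
    print(run_trials(apply_not=False))  # ~0.5
    print(run_trials(apply_not=True))   # ~0.5
    ```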

    Best,
    Ken

  2. Applied to the NOT gate example, the fine-tuning objection is as follows:

    If you modified the probabilities of the incoming bits even very slightly, then you would suddenly be able to signal. No fundamental reason has been given why you cannot prepare a string of bits with different probabilities.
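
    To spell out the arithmetic behind this objection (the numbers here are mine, for illustration): if the incoming bits are 1 with probability $p$, the outgoing bits are 1 with probability $p$ or $1-p$ depending on whether the NOT gate is applied, so for any $p \neq 1/2$ the receiver can decode that choice from the observed frequency, using on the order of $1/(2p-1)^2$ bits. Only the exactly fair distribution closes the channel, which is what makes it look fine-tuned.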

    There are three responses you could make to this.

    1. The string of bits is 50/50 exactly because this prevents signalling. No reason is needed other than this operational fact.

    2. We are somehow wrong about what the fundamental ontological degrees of freedom are or about what the dynamics of the system are. When we reparameterize things correctly we will see that there was no possibility to modify the probabilities even in principle.

    3. The 50/50 probabilities arise from some sort of equilibration process. It is possible in principle to signal, but there are just no systems commonly available to us that would enable this.

    In my view, answer 1 is suspect, even though it is often exactly what we do with respect to the no superluminal signalling condition when we construct operational generalizations of quantum theory. The reason is that, if we want an ontological account at all, then it seems that we ought to be able to explain the entire physics of the system in terms of the ontological model all by itself, which is after all supposed to be the most fundamental level of explanation, without imposing operational principles by hand from the outside. Otherwise, there is an explanatory gap in the ontological model.

    Answer 2 is the answer I usually prefer, and I somehow failed to realize until the workshop that it perhaps implies that there should be no backwards-in-time influences even at the ontological level. With respect to contextuality and nonlocality, it is the response that I usually give, on the basis that the best way of explaining why operational probabilities are noncontextual and nonsignalling would be if there were actually no contextuality or nonlocality at the ontological level. Similarly, the best way of explaining why there is no operational signalling into the past is perhaps if there are no retrocausal influences at the ontological level either. I am still not 100% sure whether all types of retrocausality are ruled out by this. It is fairly obvious that the "zigzag" causality of stuff literally reversing its arrow of time every now and then is ruled out, especially if the zigzagging can be controlled, but maybe "block-universe" type models can evade this in some way. This is my preferred remaining slim hope for retrocausal explanations.

    Answer 3 is supposed to be modelled on Valentini's approach to Bohmian mechanics. There is signalling at the ontological level and this implies that there is signalling at the operational level as well. It is just that we happen to live in a part of the universe where some sort of equilibrium process has washed out our ability to signal.

    Now, in light of this, one can just state that the entire universe is in this equilibrium state. There would be fluctuations away from this, but I agree that uncontrollable fluctuations would not lead to systematic signalling, so this is not a problem. However, this response is really no better than answer 1. It amounts to the assertion that there exists a physical process that, as a matter of contingent fact, cannot ever be observed. Equilibrium is just put in by hand. This is of course logically possible, but I think we should apply higher standards of evidence and take de Morgan's law:

    "what-ever can happen will happen if we make trials enough."

    as our guiding principle. Since it is in principle possible in the theory to have systems that are very far from equilibrium, i.e. not just mere fluctuations but systems that would allow systematic signalling, we should, by this principle, expect them to appear in nature somewhere or to be engineerable. Then, we need a theory of what sorts of system would admit signalling, where they should appear in nature, and how the equilibration process works. Experimentally verifying such a theory would constitute strong evidence in favour of answer 3, and really you should not expect to convince other physicists that this is the correct answer until you come up with evidence of this sort.

    There is a minor caveat to this in that, if we are dealing with backwards-in-time signalling in a block-universe theory, then we should not expect the equilibration process to be a dynamical process that occurs "in time". We need another way of implementing statistical hypotheses within the 4d block that does not privilege an initial time-slice. It is possible, in doing so, that this type of model might end up looking more natural than I imagine it would, e.g. it might be possible to naturally pin the blame on cosmological considerations. If so, then I might be inclined to switch to 3 as my preferred answer, but as it stands I think 2 is the best bet.

  3. Hi Matt,

    Thanks for the (as always) thoughtful response!

    As you know, I'm in agreement that #2 is the way to go (and of *course* it has to be a block universe... :-). But as you also know, talking about probabilities at the ontological level is tricky as well. Once a set of variables is truly "hidden", it seems to me that getting sufficient randomness can be trivial, so long as the universe is ontologically underdetermined. (Given that one set of actual variables is randomly chosen from an equally weighted set of all possibilities, like microstates in stat mech.)

    This fits right in with your #2, I think. When I make a measurement setting in my favored retrocausal models, I'm *not* adjusting the probabilities of past hidden variables. Instead, I'm constraining the possibility space of what those past hidden variables are in the first place. Whatever the space of possibilities, the actual-variable distribution in that space can then always be sufficiently random to avoid signaling (fluctuations notwithstanding). And since most of us don't think that the fundamental ontological degrees of freedom depend on future measurement settings, we're mistaken about what that space actually is, as per your #2.

    But this just shifts the problem from one of "modified probabilities allow signaling" to "partial prior knowledge of the hidden variables allows signaling." The problem, as I see it, is to explain why those variables are truly and fundamentally hidden from epistemic access, at least until after I make the choice of measurement setting. (I'm not sure Huw agrees with me about this problem, but it's the one that keeps me up at night.)

    Still, this version of the problem doesn't strike me as ontological fine-tuning (since the probabilities are not the issue). It's more of an epistemic fine-tuning of what a given agent is allowed to know, and perhaps that makes it more tractable from the line-of-attack that Howard mentioned above. Or possibly the key is that a partial knowledge of the hidden variables is equivalent to a partial knowledge of my future choice of setting, and one needs a full freedom of setting to be able to signal. (One doesn't have free choice if one knows what that choice will be.) But then you can re-spin the question as why we *don't* have such knowledge, which perhaps amounts to the same thing. Hmmm...

    Ken

  4. Thanks for the insightful comments, Ken and Matthew.

    Ken, re your first comment, I think there's a lot to your rejoinder if "fluctuations" is just taken to mean "you will sometimes draw from the tail of the distribution". And I think that is how it is often used in statistical mechanics. Perhaps a bad word choice on my part (I think the word was used in the discussion of this point in Cambridge)...at least I put it in scare quotes. I think the mention of systems out of equilibrium is more to the point... the idea that an explanation of a probability distribution could be obtained by methods similar to those used in statistical mechanics. Of course one could say that systems sometimes "fluctuate" out of equilibrium by chance, but then one faces the kinds of issues associated with "Boltzmann brains"... namely that one expects no more order than implied (or rendered highly probable) by the already-observed order. I think these issues are closely related to your discussion of why fluctuations in this sense are probably not useful for signaling: to be useful, we need to be able to rely on the order where and when we haven't already observed it. Like you, I think there might be a rejoinder to this, but it's not the sort of thing I had in mind, so I'd prefer to set it aside and think more about why, unlike the case in statistical mechanics where we do see out-of-equilibrium systems relaxing to equilibrium, we don't observe the sort of equilibration or relaxation process that we are hoping to appeal to in order to "explain" why the probability distribution over the "ontic" variables is (indistinguishable from) one that does not permit signaling. That's what Matthew is focused on in his comment, which I'll address next.

  5. Matthew, I think that your 1, 2, and 3 all seem to be live options at this point. Let me address them one by one:

    #1) I agree that from a certain standpoint, an explanation of something like no-signaling, whether in a spacelike direction or backwards in time, that requires a particular probability distribution over some "ontic" ("real") variables (or perhaps requires that the probability distribution be drawn from some particular set that is of measure zero in the space of all possible distributions over the ontic variables), seems like undesirable fine-tuning, or at least not as explanatory as one might wish. And yet, if we achieved this, for quantum theory, with an underlying "ontic" or "hidden variable" theory that is elegant and appealing, and not cumbersome (and perhaps the choice of probability distribution or set of probability distributions might be elegant and natural, too), then we would have achieved something important---indeed, something that has often been claimed to be impossible. (The references to the underlying ontic theory being "elegant and appealing", and especially "not cumbersome", are intended to ward off Bohmianism, although I do think it is kind of elegant, in a sick way.) It would be a very significant advance in our understanding of the nature of quantum theory, whether or not one ultimately decided that this ontic theory should supplant quantum theory.

    Moreover, it seems that if one is going to have a probabilistic theory at all, maybe one should expect that some of the "laws" or "principles" of the theory should specify probabilities. I suspect you might feel less negative about a theory that specifies, say, transition probabilities for the ontic variables than about one that specifies a probability distribution over initial conditions. The latter somehow seems more like "fine-tuning", or more to the point, fixing something as a matter of theory that should rather represent our ignorance. But perhaps if we actually had such an ontic theory, the distinction between "initial conditions" and "transition probabilities" might be less clear-cut than one might anticipate. It does seem that in quantum theory, randomness is continually getting into the system over time, and maybe in an underlying ontic theory consistent with quantum theory, different bits of the "initial conditions" would become relevant in different regions of spacetime, which could appear like a transition. Is it really that much more unreasonable for a fundamentally probabilistic physical theory to specify some probabilities precisely, than for, say, a classical field theory to specify some parameter (like the exponent of -2 in the inverse square law, which could also be considered "fine-tuned" to explain salient higher-level phenomena) precisely?

    That was pretty long, so I'll address #2 and #3 in later posts.

    Cheers, Howard

  6. Thanks for the reply, Howard -- I'll be looking forward to seeing what you have to say about Matt's #2, if you get back to it...

    On the point of the "equilibration or relaxation process" that is guiding how you are framing the issue here, though, please do note that such a concept has a time direction built into it from the start. The sorts of retrocausal models that I'm considering (and Matt too, I believe) get around this by thinking of probability distributions as naturally residing in 4D, not 3D. Then, after one picks a "microhistory" at random, there is no additional dimension in which any "relaxation" away from such randomness can conceivably occur. (God may play dice, but only once. 🙂) The goal, then, is not to explain how the randomness emerges from dynamics (Matt's #3), but rather to come up with rules that ensure the hidden aspects of the microhistory are randomly chosen from the possibility space, even if the known variables are quite ordered by some special boundary constraint.
