Anthony Aguirre is looking for postdocs at Santa Cruz in Physics of the Observer

Anthony Aguirre points out that UC Santa Cruz is advertising for postdocs in the "Physics of the Observer" program; and although review of applications began in December, with a Dec. 15 deadline "for earliest consideration", if you apply quickly you will still be considered.  He explicitly states that they are looking for strong applicants from the quantum foundations community, among other areas.

My take on this: The interaction of quantum and spacetime/gravitational physics is an area of great interest these days, and people doing rigorous work in quantum foundations, quantum information, and general probabilistic theories have much to contribute.  It's natural to think about links with cosmology in this context.  I think this is a great opportunity, foundations postdocs and students, and Anthony and Max are good people to be connected with: very proactive in seeking out sources of funding for cutting-edge research, and very supportive of interdisciplinary interaction.  The California coast around Santa Cruz is beautiful, Santa Cruz itself is a nice funky town on the ocean, and you're within striking distance of the academic and venture-capital powerhouses of the Bay Area.  So do it!

ITFP, Perimeter: selective guide to talks. #1: Brukner on quantum theory with indefinite causal order

Excellent conference the week before last at Perimeter Institute: Information Theoretic Foundations for Physics.  The talks are online; herewith a selection of some of my favorites, heavily biased towards ideas that were new and particularly interesting to me (so some excellent talks that might be of more interest to you may be left off the list!).  Some of what might have been of the most interest, and most novel, to me happened on Wednesday, when the topic was spacetime physics and information, and I had to skip the day to work on a grant proposal.  I'll have to watch those online sometime.  This was going to be one post with thumbnail sketches/reviews of each talk, but as usual I can't help running on, so it may be one post per talk.

All talks available here, so you can pick and choose. Here's #1 (order is roughly temporal, not any kind of ranking...):

Caslav Brukner kicked off with some interesting work on physical theories with indefinite causal structure.  Normally, in formulating theories in an "operational" setting (in which we care primarily about the probabilities of physical processes that occur as part of a complete compatible set of possible processes), we assume a definite causal (partial) ordering, so that one process may happen "before" or "after" another, or "neither before nor after".  The formulation is "operational" in that an experimenter or other agent may decide upon, or at least influence, which set of processes, out of the possible compatible sets, the actual process will be drawn from, and then nature decides (but with certain probabilities for each possible process, which form part of our theory) which one actually happens.  So for instance, the experimenter decides to perform a Stern-Gerlach experiment with a particular orientation X of the magnets; then the possible processes are, roughly, "the atom was deflected in the X direction by an angle theta," for various angles theta.  Choose a different orientation, Y, for your apparatus, and you choose a different set of possible compatible processes ("the atom was deflected in the Y direction by an angle theta").

We then assume that if one set of compatible processes happens after another, an agent's choice of which complete set of processes is realized later can't influence the probabilities of processes occurring in an earlier set.  "No signalling from the future", I like to call this; in formalized operational theories it is sometimes called the "Pavia causality axiom".   Signaling from the past to the future is fine, of course.  If two complete sets of processes are incomparable with respect to causal order ("spacelike-separated"), the no-signalling constraint operates both ways:  neither Alice's choice of which compatible set is realized, nor Bob's, can influence the probabilities of processes occurring at the other agent's site.   (If it could, that would allow nearly-instantaneous signaling between spatially separated sites---a highly implausible phenomenon only possible in preposterous theories such as the Bohmian version of quantum theory with "quantum disequilibrium", and Newtonian gravity.)

Anyway, Brukner looks at theories that are close to quantum, but in which this assumption doesn't necessarily apply: the probabilities exhibit "indefinite causal structure".  Since the theories are close to quantum, they can be interpreted as allowing "superpositions of different causal structures", which is just the sort of thing you might think you'd run into in, say, theories combining features of quantum physics with features of general relativistic spacetime physics.  As Caslav points out, since in general relativity the causal structure is influenced by the distribution of mass and energy, you might hope to realize such indefinite causal structure by creating a quantum superposition of states in which a mass is in one place, versus being in another.  (There are people who think that at some point---at some combination of spatial scales (separation of the regions in which the mass is located) and mass scales (amount of mass to be separated in "coherent" superposition)---the possibility of such superpositions breaks down.  Experimentalists in Vienna (where Caslav---a theorist, but one who likes to work with experimenters to suggest experiments---is on the faculty) have created what are probably the most significant such superpositions.)

Situations with a superposition of causal orders seem to exhibit some computational advantages over standard causally-ordered quantum computation, like being able to tell in fewer queries (one?) whether a pair of unitaries commutes or anticommutes.  I'm not sure whose result that was (Giulio Chiribella and others?), but Caslav presents some more recent results on query complexity in this model, extending the initial ones.  I am generally wary about results on computation in theories with causal anomalies.  The stuff on query complexity with closed timelike curves, e.g. by Dave Bacon and by Scott Aaronson and John Watrous, has seemed uncompelling to me---not the correctness of the mathematical results, but rather the physical relevance of the definition of computation---for reasons similar to those given by Bennett, Leung, Smith and Smolin.  But I tend to suspect that Caslav and the others who have done these query results use a more physically compelling framework, because they are well versed in the convex operational or "general probabilistic theories" framework, which aims to make the probabilistic behavior of processes consistent under convex combination ("mixture", i.e. roughly speaking letting somebody flip coins to decide which input to present your device with).  Inconsistency with respect to such mixing is part of the Bennett/Leung/Smith/Smolin objection to the CTC complexity classes as originally defined.
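To make the commute-vs-anticommute trick a bit more concrete, here is a minimal numerical sketch of the "quantum switch" idea as I understand it (my own toy illustration in Python/numpy, not Brukner's or Chiribella's actual construction): a control qubit in the state |+> coherently controls whether U-then-V or V-then-U is applied to a target state, and a single measurement of the control in the plus/minus basis then reveals whether UV = VU or UV = -VU, with each unitary used only once.

```python
import numpy as np

def switch_test(U, V, psi):
    """Apply U, V to psi in a coherently controlled order (the 'quantum switch')
    and return the probability of finding the control qubit in |+>.
    Returns 1.0 if U and V commute, 0.0 if they anticommute."""
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    # control = |0>: apply V then U ;  control = |1>: apply U then V
    W = np.kron(P0, U @ V) + np.kron(P1, V @ U)
    out = W @ np.kron(plus, psi)
    # project the control onto |+>
    proj = np.kron(np.outer(plus, plus.conj()), np.eye(len(psi)))
    return float(np.vdot(out, proj @ out).real)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 0], dtype=complex)

print(switch_test(X, X, psi))  # 1.0 : X commutes with itself
print(switch_test(X, Z, psi))  # 0.0 : X and Z anticommute
```

The point, as I understand it, is that no fixed ordering of a single use of U and a single use of V lets you make this distinction with certainty; the coherent superposition of the two orders does.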

[Update:  This article at Physics.org quotes an interview with Scott Aaronson responding to the Bennett et al. objections.  Reasonably enough, he doesn't think the question of what a physically relevant definition of CTC computing is has been settled.  When I try to think about this issue, I sometimes wonder whether the thorny philosophical question of whether we court inconsistency by trying to incorporate intervention ("free choice of inputs") into a physical theory is rearing its head.  As often with posts here, I'm reminding myself to revisit the issue at some point... and think harder.]

Thinking about Robert Wald's take on the loss, or not, of information into black holes

A warning to readers: As far as physics goes, I tend to use this blog to muse out loud about things I am trying to understand better, rather than to provide lapidary intuitive summaries, for the enlightenment of a general audience, of matters on which I am already expert. Musing out loud is what's going on in this post, for sure. I will try, I'm sure not always successfully, not to mislead, but I'll be unembarrassed about admitting what I don't know.

I recently did a first reading (so, skipped and skimmed some, and did not follow all calculations/reasoning) of Robert Wald's book "Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics".  I like Wald's style --- not too lengthy, focused on getting the important concepts and points across and not getting bogged down in calculational details, but also aiming for mathematical rigor in the formulation of the important concepts and results.

Wald uses the algebraic approach to quantum field theory (AQFT), and his approach to AQFT involves looking at the space of solutions to the classical equations of motion as a symplectic manifold, and then quantizing from that point of view, in a somewhat Dirac-like manner (the idea is that Poisson brackets, which are natural mathematical objects on a symplectic manifold, should go to commutators between generalized positions and momenta, but what is actually used is the Weyl form of the commutation relations), doing the Minkowski-space (special relativistic, flat-space) version before embarking on the curved-space (semiclassical general relativistic) one.   He argues that this manner of formulating quantum field theory has great advantages in curved space, where the dependence of the notion of "particle" on the reference frame can make quantization in terms of an expansion in Fourier modes of the field ("particles") problematic.

AQFT gets somewhat short shrift among mainstream quantum field theorists, I sense, in part because (at least when I was learning about it---things may have changed slightly, but I think not that much) no one has given a rigorous mathematical example of an algebraic quantum field theory of interacting (as opposed to freely propagating) fields in a spacetime with three space dimensions.  (And perhaps the number of AQFTs that have been constructed even in fewer space dimensions is not very large?)  There is also the matter pointed out by Rafael Sorkin: when AQFTs are formulated, as is often done, in terms of a "net" of local algebras of observables (each algebra associated with an open spacetime region, with compatibility conditions defining what it means to have a "net" of algebras on a spacetime---e.g. the algebra corresponding to a subset of region R is a subalgebra of the algebra for region R, and if two subsets of a region R are spacelike separated then their corresponding subalgebras commute), the implicit assumption that every Hermitian operator in the algebra associated with a region can be measured "locally" in that region actually creates difficulties with causal locality.  Since regions are extended in spacetime, coupling together measurements made in different regions, through perfectly timelike classical feedforward of the result of one measurement to the setting of another, can create spacelike causality (and probably even signaling).  See Rafael's paper "Impossible measurements on quantum fields".   (I wonder if that is related to the difficulties in formulating a consistent interacting theory in higher spacetime dimension.)
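For reference, the two compatibility conditions on a net of local algebras that are in play in that parenthetical are usually written as follows (this is the standard textbook formulation, not a quote from Wald or Sorkin), with $\mathcal{A}(O)$ the algebra of observables associated with the open spacetime region $O$:

$$O_1 \subseteq O_2 \;\Rightarrow\; \mathcal{A}(O_1) \subseteq \mathcal{A}(O_2) \qquad \text{(isotony)},$$

$$O_1,\, O_2 \text{ spacelike separated} \;\Rightarrow\; [A_1, A_2] = 0 \ \text{ for all } A_1 \in \mathcal{A}(O_1),\ A_2 \in \mathcal{A}(O_2) \qquad \text{(microcausality)}.$$

Sorkin's point, as I read it, is that microcausality alone does not guarantee that every Hermitian element of $\mathcal{A}(O)$ can actually be measured within $O$ without opening the door to spacelike signaling.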

That's probably tangential to our concerns here, though, because it appears we can understand the basics of the Hawking effect---radiation by black holes, leading to black-hole evaporation and the consequent worry about "nonunitarity" or "information loss" in black holes---without needing a quantized interacting field theory.  We treat spacetime, and the matter that is collapsing to form the black hole, in classical general relativistic terms, and the Hawking radiation arises in the free field theory of photons on this background.

I liked Wald's discussion of black hole information loss in the book.  His attitude is that he is not bothered by it, because the spacelike hypersurface on which the state is mixed after the black hole evaporates (even when the states on similar spacelike hypersurfaces before black hole formation are pure) is not a Cauchy surface for the spacetime.  There are non-spacelike, inextendible curves that don't intersect that hypersurface.  The pre-black-hole spacelike hypersurfaces on which the state is pure are, by contrast, Cauchy surfaces---but some of the trajectories crossing such an initial surface go into the black hole and hit the singularity, "destroying" information.  So we should not expect purity of the state on the post-evaporation spacelike hypersurfaces any more than we should expect, say, a pure state on a hyperboloid of revolution contained in a forward light-cone in Minkowski space---there are trajectories that never intersect that hyperboloid.

Wald's talk at last year's firewall conference is an excellent presentation of these ideas; most of it makes the same points made in the book, but with a few nice extra observations. There are additional sections, for instance on why he thinks black holes do form (i.e. he rejects the idea that a "frozen star" could be the whole story), and on anti-de Sitter / conformal field theory models of black hole evaporation. In the latter he stresses the idea that early and late times in the boundary CFT do not correspond in any clear way to early and late times in the bulk field theory (at least that is how I recall it).

I am not satisfied with a mere statement that the information "is destroyed at the singularity", however.  The singularity is a feature of the classical general relativistic mathematical description, and near it the curvature becomes so great that we expect quantum aspects of spacetime to become relevant.  We don't know what happens to the degrees of freedom inside the horizon with which variables outside the horizon are entangled (giving rise to a mixed state outside the horizon), once they get into this region.  One thing that a priori seems possible is that the spacetime geometry, or maybe some pre-spacetime quantum (or post-quantum) variables that underlie the emergence of spacetime in our universe (i.e. our portion of the universe, or multiverse if you like), may go into a superposition whose components have different values of these inside-the-horizon degrees of freedom, values that are still correlated (entangled) with the post-evaporation variables. Perhaps this is a superposition including pieces of spacetime disconnected from ours, perhaps of weirder things still involving pre-spacetime degrees of freedom.  It could also be, as speculated by those who also speculate that the state on the post-evaporation hypersurface in our (portion of the) universe is pure, that these quantum fluctuations in spacetime somehow mediate the transfer of the information back out of the black hole in the evaporation process, despite worries that this process violates constraints of spacetime causality.  I'm not that clear on the various mechanisms proposed for this, but would look again at the work of Susskind, and of Susskind and Maldacena ("ER=EPR"), to try to recall some of the proposals. (My rough idea of the "ER=EPR" proposals is that they want to view entangled "EPR" ("Einstein-Podolsky-Rosen") pairs of particles, or at least the Hawking radiation quanta and their entangled partners that went into the black hole, as also associated with miniature "wormholes" ("Einstein-Rosen", or ER, bridges) in spacetime connecting the inside to the outside of the black hole; somehow this is supposed to help out with the issue of nonlocality, in a way that I might understand better if I understood why nonlocality threatens to begin with.)

The main thing I've taken from Wald's talk is a feeling of not being worried by the possible lack of unitarity in the transformation from a spacelike pre-black-hole hypersurface in our (portion of the) universe to a post-black-hole-evaporation one in our (portion of the) universe. Quantum gravity effects at the singularity either transfer the information into inaccessible regions of spacetime ("other universes"), leaving (if things started in a pure state on the pre-black-hole surface) a mixed state on the post-evaporation surface in our portion of the universe, but still one that is pure in some sense overall, or they funnel it back out into our portion of the universe as the black hole evaporates. It is a challenge, and one that should help stimulate the development of quantum gravity theories, to figure out which, and exactly what is going on, but I don't feel any strong a priori compulsion toward either a unitary or a nonunitary evolution from pre-black-hole to post-evaporation spacelike hypersurfaces in our portion of the universe.


Free will and retrocausality in the quantum world, at Cambridge. I: Bell inequalities and retrocausality

I'm in Cambridge, where the conference on Free Will and Retrocausality in the Quantum World, organized (or rather, organised) by Huw Price and Matt Farr will begin in a few hours.  (My room at St. Catherine's is across from the chapel, and I'm being serenaded by a choir singing beautifully at a professional level of perfection and musicality---I saw them leaving the chapel yesterday and they looked, amazingly, to be mostly junior high school age.)  I'm hoping to understand more about how "retrocausality", in which effects occur before their causes, might help resolve some apparent problems with quantum theory, perhaps in ways that point to potentially deeper underlying theories such as a "quantum gravity". So, as much for my own use as anyone else's, I thought perhaps I should post about my current understanding of this possibility.

One of the main problems or puzzles with quantum theory that Huw and others (such as Matthew Leifer, who will be speaking) think retrocausality may be able to help with is the existence of Bell-type inequality violations. At their simplest, these involve two spacelike-separated regions of spacetime, usually referred to as "Alice's laboratory" and "Bob's laboratory", at each of which different possible experiments can be done. The results of these experiments can be correlated, for example if they are done on a pair of particles, one of which has reached Alice's lab and the other Bob's, that have previously interacted, or were perhaps created simultaneously in the same event.

Typically in actual experiments, these are a pair of photons created in a "downconversion" event in a nonlinear crystal.  In a "nonlinear" optical process photon number is not conserved (so one gets a "nonlinearity" at the level of the Maxwell equations, since the intensity of the field is proportional to photon number; "nonlinearity" refers to the fact that the sum of two solutions is not required to be a solution).  In parametric downconversion, a photon is absorbed by the crystal, which emits a pair of photons in its place whose energy-momentum four-vectors add up to that of the absorbed photon (the process conserves energy-momentum).   Conservation of angular momentum imposes correlations between the results of measurements made by "Alice" and "Bob" on the emitted photons. These are correlated even if the measurements are made sometime after the photons have separated far enough that the changes in the measurement apparatus that determine which component of polarization it measures (which we'll henceforth call the "polarization setting"), on one of the photons, are spacelike separated from the measurement process on the other photon, so that effects of the polarization setting in Alice's laboratory, which one typically assumes can propagate only forward in time, i.e. in its forward light-cone, can't affect the setting or results in Bob's laboratory, which is outside this forward light-cone.  (And vice versa, interchanging Alice and Bob.)
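In equations (the standard bookkeeping for parametric downconversion, with the usual "signal" and "idler" labels for the two emitted photons; nothing here is specific to the conference talks), the conservation of energy-momentum mentioned above reads

$$\hbar\omega_p = \hbar\omega_s + \hbar\omega_i, \qquad \hbar\mathbf{k}_p = \hbar\mathbf{k}_s + \hbar\mathbf{k}_i,$$

where the subscripts p, s, i label the pump, signal, and idler photons. It is the analogous constraint from angular momentum that correlates the two photons' polarizations.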

Knowledge of how their pair of photons was prepared (via parametric downconversion and propagation to Alice's and Bob's measurement sites) is encoded in a "quantum state" of the polarizations of the photon pair.  It gives us, for any pair of polarization settings that could be chosen by Alice and Bob, an ordinary classical joint probability distribution over the pair of random variables that are the outcomes of the given measurements.  We have different classical joint distributions, referring to different pairs of random variables, when different pairs of polarization settings are chosen.   The Bell "paradox" is that there is no way of introducing further random variables that are independent of these polarization settings, such that for each pair of polarization settings, and each assignment of values to the further random variables, Alice's and Bob's measurement outcomes are independent of each other, but when the further random variables are averaged over, the experimentally observed correlations, for each pair of settings, are reproduced. In other words, the outcomes of the polarization measurements, and in particular the fact that they are correlated, can't be "explained" by variables uncorrelated with the settings. The nonexistence of such an explanation is implied by the violation of a type of inequality called a "Bell inequality". (It's equivalent to such a violation, if "Bell inequality" is defined generally enough.)
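In symbols (this is just the standard way of writing the assumption, not anything specific to the conference): such a model posits hidden variables $\lambda$, distributed independently of the settings $x, y$, with

$$p(a, b \mid x, y) = \int d\lambda\; p(\lambda)\, p(a \mid x, \lambda)\, p(b \mid y, \lambda),$$

and any such model, for outcomes valued $\pm 1$, obeys the CHSH form of the Bell inequality

$$\left| E(x_1, y_1) + E(x_1, y_2) + E(x_2, y_1) - E(x_2, y_2) \right| \le 2,$$

where $E(x, y)$ is the expectation of the product of the two outcomes. Quantum mechanics, and experiment, give values up to $2\sqrt{2}$.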

How I stopped worrying and learned to love quantum correlations

One might have hoped to explain the correlations by having some physical quantities (sometimes referred to as "hidden variables") in the intersection of Alice's and Bob's backward light-cones, whose effects, propagating forward in their light-cones to Alice's and Bob's laboratories, interact there with the physical quantities describing the polarization settings to produce---whether deterministically or stochastically---the measurement outcomes at each site, with their observed probabilities and correlations. The above "paradox" implies that this kind of "explanation" is not possible.

Some people, such as Tim Maudlin, seem to think that this implies that quantum theory is "nonlocal" in the sense of exhibiting some faster-than-light influence. I think this is wrong. If one wants to "explain" correlations by finding---or hypothesizing, as "hidden variables"---quantities conditional on which the probabilities of outcomes, for all possible measurement settings, factorize, then these cannot be independent of measurement settings. If one further requires that all such quantities must be localized in spacetime, and that their influence propagates (in some sense that I'm not too clear about at the moment, but that can probably be described in terms of differential equations---something like a conserved probability current might be involved) locally and forward in time, perhaps one gets into inconsistencies. But one can also just say that these correlations are a fact. We can have explanations of these sorts of fact---for example, for correlations in photon polarization measurements, the one alluded to above in terms of energy-momentum conservation and previous interaction or simultaneous creation---just not the sort of ultra-classical one some people wish for.
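For concreteness, here is a tiny numerical illustration of the kind of correlation at issue (my own sketch, using the standard spin-singlet conventions rather than photon polarization, so the angles differ from the optics case by a factor of two): it computes the quantum correlations $E(a,b) = -\cos(a-b)$ in the singlet state and the CHSH combination written above, which comes out to $2\sqrt{2} > 2$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# spin singlet state (|01> - |10>)/sqrt(2); Alice is the first factor
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin observable (outcomes +/-1) along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    """Correlation <A(a) x B(b)> in the singlet state; equals -cos(a - b)."""
    return float(np.real(singlet.conj() @ np.kron(spin(a), spin(b)) @ singlet))

a1, a2 = 0.0, np.pi / 2          # Alice's two settings
b1, b2 = np.pi / 4, -np.pi / 4   # Bob's two settings
chsh = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(chsh))   # ~2.828, i.e. 2*sqrt(2), above the local-hidden-variable bound of 2
```

No assignment of setting-independent variables in the common past can reproduce this value, which is the content of the Bell-inequality violation discussed above.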

Retrocausality

It seems to me that what the retrocausality advocates bring to this issue is the possibility of something that is close to this type of classical explanation. It may allow for the removal of these types of correlation by conditioning on physical quantities. [Added July 31: this does not conflict with Bell's theorem, for the physical quantities are not required to be uncorrelated with measurement settings---indeed, being correlated with the measurement settings is to be expected if there is retrocausal influence from a measurement setting to physical quantities in the backwards light-cone of the measurement setting.] And unlike the Bohmian hidden variable theories, it hopes to avoid superluminal propagation of the influence of measurement settings to physical quantities, even unobservable ones.  It does this, however, by having the influence of measurement settings pursue a "zig-zag" path from Alice to Bob: back in Alice's backward light-cone to the region where Alice's and Bob's backward light-cones intersect, then forward to Bob's laboratory. What advantages might this have over superluminal propagation? It probably satisfies some kind of spacetime continuity postulate, and seems more likely to admit a Lorentz-invariant formulation. (However, the relation between formal Lorentz invariance and lack of superluminal propagation is subtle, as Rafael Sorkin reminded me at breakfast today.)

Answer to question about Bekenstein BH entropy derivation

I had a look at Jacob Bekenstein's 1973 Physical Review D paper "Black holes and entropy" for the answer to my question about Susskind's presentation of the Bekenstein derivation of the formula stating that black hole entropy is proportional to horizon area.  An argument similar to the one in Susskind's talk appears in Section IV, except that massive particles are considered, rather than photons, and they can be assumed to be scalar so that the issue I raised, of entropy associated with polarization, is moot.  Bekenstein says:

we can be sure that the absolute minimum of information lost [as a particle falls into a black hole] is that contained in the answer to the question "does the particle exist or not?"  To start with, the answer [to this question] is known to be yes.  But after the particle falls in, one has no information whatever about the answer.  This is because from the point of view of this paper, one knows nothing about the physical conditions inside the black hole, and thus one cannot assess the likelihood of the particle continuing to exist or being destroyed.  One must, therefore, admit to the loss of one bit of information [...] at the very least.

Presumably for the particle to be destroyed, at least in a field-theoretic description, it must annihilate with some stuff that is already inside the black hole (or, from the outside point of view, plastered against the horizon). This annihilation could, I guess, create some other particle; in fact it probably must, in order to conserve mass-energy.  My worry in the previous post, about the entropy being due to the presence or absence of the particle inside the hole, was that this would seem to require uncertainty about whether the particle fell into the hole in the first place, which did not seem to be part of the story Susskind was telling; the associated worry was that this would make the black hole mass uncertain, which also didn't seem to be a feature of the intended story, although I wasn't sure. But the correct story seems to be that the particle definitely goes into the hole, and the uncertainty is about whether it subsequently annihilates with something else inside, in a process obeying all relevant conservation laws, rendering both of my worries inapplicable. I'd still like to see whether Bekenstein wrote a version using photons, as Susskind's presentation does. And when I feel quite comfortable, I'll probably post a fairly full description of one (or more) versions of the argument. Prior to the Phys. Rev. D paper there was a 1972 letter in Lettere al Nuovo Cimento, which I plan to have a look at; perhaps it deals with photons. If you want to read Bekenstein's papers too, I suggest you have a look at his webpage.

Question about Susskind's presentation of Bekenstein's black hole entropy derivation

I'm partway through viewing Leonard Susskind's excellent not-too-technical talk "Inside Black Holes" given at the Kavli Institute for Theoretical Physics at UC Santa Barbara on August 25.  Thanks to John Preskill,  @preskill, for recommending it.

I've decided to try using my blog as a discussion space about this talk, and ultimately perhaps about the "Harlow-Hayden conjecture" about how to avoid accepting the recent claim that black holes must have an information-destroying "firewall" near the horizon.  (I hope I've got that right.)  I'm using Susskind's paper "Black hole complementarity and the Harlow-Hayden conjecture" as my first source on the latter question.  It also seems to be a relatively nontechnical presentation (though much more technical than the talk, at least as far as I've watched) that should be particularly accessible to quantum information theorists, although it seems to me he also does a good job of explaining the quantum information-theoretic concepts he uses to those not familiar with them.

But first things first.  I'm going to unembarrassedly ask elementary questions about the talk and the paper until I understand.  First off, I've got a question about Susskind's "high-school level" presentation, in minutes 18-28 of the video, of Jacob Bekenstein's 1973 argument that in our quantum-mechanical world the entropy of a black hole is proportional to its area (i.e. the area of the horizon, the closed surface inside which nothing, not even light, can escape).   The formula, as given by Susskind, is

$$S \approx \frac{A}{4\,\ell_P^2},$$

where $S$ is the entropy (in bits) of the black hole, $A$ the area of its horizon, and $\ell_P$ the Planck length.  (The constant here may have been tweaked by a small amount, like $\ln 2$ or its inverse, to reflect considerations that Susskind alluded to but didn't describe, more subtle than those involved in Bekenstein's argument.)

The argument, as presented by Susskind, involves creating the black hole out of photons whose wavelength is roughly the Schwarzschild radius of the black hole.  More precisely, it is built up in steps; each step in creating a black hole of a given mass and radius involves sending in another photon of wavelength roughly the current Schwarzschild radius.  The wavelength needs to be that big so that there is no information going into the hole (equivalently, from the point of view outside the hole, getting "plastered" (Susskind's nice choice of word) against the horizon) about where the photon went in.  Presumably there is some argument about why the wavelength shouldn't be much bigger, either... perhaps so that it is sure to go into the hole, rather than missing.  That raises the question of just what state of the photon field should be impinging on the hole... presumably we want some wavepacket whose spatial width is about the size of the hole, so we'll have a spread of wavelengths centered around some multiple (roughly unity) of the Schwarzschild radius.  Before there is any hole, I guess I also have some issues about momentum conservation... maybe one starts by sending in a spherical shell of radiation impinging on where we want the hole to be, so as to have zero net momentum.  But these aren't my main questions, though of course it could turn out to be necessary to answer them in order to answer my main question.

My main question is:  Susskind says that each such photon carries one bit of information: the information is "whether it's there or not".  This doesn't make sense to me, as if one is uncertain about how many photons went into creating the hole, it seems to me one should have a corresponding uncertainty about its mass, radius, etc...  Moreover, the photons that go in still seem to have a degree of freedom capable of storing a bit of information:  their polarization.  So maybe this is the source of the one bit per photon?  Of course, this would carry angular momentum into the hole/onto the horizon, so I guess uncertainty about this could generate uncertainty about whether we have a Schwarzschild or a Kerr (rotating) black hole, i.e. just what the angular momentum of the hole is.
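(As an aside, here is the back-of-the-envelope arithmetic that I take the one-bit-per-photon bookkeeping to lead to, in my own rough reconstruction with all order-one factors dropped, so don't read it as Susskind's or Bekenstein's exact accounting.  Each photon has wavelength of order the current Schwarzschild radius $R_s = 2GM/c^2$, hence energy $E \sim \hbar c / R_s$, and so adds mass

$$\delta M = \frac{E}{c^2} \sim \frac{\hbar}{c R_s} \sim \frac{\hbar c}{G M}, \qquad \text{i.e.} \qquad \delta(M^2) \sim \frac{\hbar c}{G}.$$

Building the hole up from nothing therefore takes $N \sim GM^2/\hbar c$ photons; since the horizon area is $A \sim G^2 M^2 / c^4$, this is

$$N \sim \frac{A c^3}{\hbar G} = \frac{A}{\ell_P^2}.$$

At one bit per photon, that is an entropy of order $A/\ell_P^2$ bits, which is the formula above up to the order-one constant.)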

Now, maybe the solution is just that, given their wavelength of the same order as the size of the hole, there is uncertainty about whether or not the photons actually get into the hole, and so the entropy of the black hole really is due to uncertainty about its total mass, and the mass M in the Bekenstein formula is just the expected value of the mass?

I realize I could probably figure all this out by grabbing some papers, e.g. Bekenstein's original, or perhaps even by checking Wikipedia, but I think there's some value in thinking out loud, and in having an actual interchange with people to clear up my confusion... one ends up understanding the concepts better, and remembering the solution.  So, if any physicists knowledgeable about black holes (or able and willing to intelligently speculate about them...) are reading this, straighten me out if you can, or at least let's discuss it and figure it out...