About howard

Wine, Physics and Song is my blog. Roughly speaking, I'm a quantum physicist, working mostly in the foundations of quantum theory and in quantum computation and quantum information processing. My main focus recently has been on understanding the nature of quantum theory by asking how the possibilities it gives us for processing information compare to what might have been. I do this by studying information processing in abstract mathematical frameworks, using tools like ordered linear spaces and category theory, in which one can formulate not only quantum and classical theories but also all sorts of "foil" theories: theories that don't seem to be realized in our physical world, but are illuminating to contrast with quantum theory. Sometimes I like to call this pursuit "mathematical science fiction".

Viet Cuong, Moth

For at least a few more days you can stream Moth, by classical composer Viet Cuong, performed, at the Midwest Band Clinic, by the Brooklyn Wind Symphony conducted by Jeff Ball, on Performance Today.  It is also available, probably more permanently, at his website and on his Soundcloud page.  I like the piece a lot.  The performance is excellent, really remarkable for an all-volunteer ensemble.  The style is fairly modern for PT, which is to say it is, roughly, in the idiom of tonal Western classical music from the 1920s and 1930s, with perhaps a smidgin of minimalism.  At first listen I thought it made clear use of the language of Stravinsky, especially Petrouchka and Le Sacre du Printemps, as well as of something resembling the post-Stravinsky and neoclassical phase of the 1930s, say, Milhaud, Poulenc, Constant Lambert, but without descending into pastiche.  On my second listen, with better sound, I was a bit taken aback by what I perceive as strong influence from Le Sacre, both in form and in content.  I am less startled by that after further listens.  Form-wise, it intersperses sections with ostinato, theme repetition (certainly key ingredients of Sacre), and other tension-building devices (like modulation, especially stepwise upward modulation, which I don't think is found much in Sacre), with more pensive interludes, often tinged with a minor feel.  Just that kind of alternation is a main structural principle of Sacre.  As my references to neoclassicism and modulation above might suggest, there's somewhat more standard tonal content in Cuong's piece, though it also has very strong Stravinsky-like "modal" or scalar elements, and occasional vaguely Iberian-sounding moments.  (As an aside, just thinking about harmony in Le Sacre makes me wonder if there is any standard dominant-to-tonic resolution at all in the piece---I think not, or not much.)

Cuong knows how to recombine and play with motives, scales, harmonic tropes and other elements to create interest, unify the piece and move things along in a satisfying way.  He shows this from the outset, with a clever motive consisting of a rising and descending scalar figure, played against a similar but inverted figure (or perhaps they are both fragments of the same extended figure that they evolve into, running up and down on flute, changing direction at different pitches), then relaxing into some Iberian-ish sounds.  At 1:30 we get melodic material very reminiscent of Le Sacre, and around 2:10, I think, the first hint of a four-note figure, which one might notate 3 4 2 1 in minor, also very reminiscent of Sacre (indeed it is very close---and would be identical if the last two notes were interchanged---to the initial four notes of a motive found in Sacre, 3 4 1 2 3 1 in minor, with the last two notes twice as long as the preceding four), that will become increasingly important.  Much of this material is developed and cleverly combined through what sound to me like various key changes.  Around 3:30 things get more urgent, with drums, ostinato and repetition (especially of the four-note theme), and rising modulation.  (I wonder if there is some influence of John Adams' Harmonielehre here; I am reminded of it, but haven't listened to the Adams piece recently enough to tell.  Or maybe I should just can the speculation about influence.)  Around 4:10, peak tension quickly gives way to a mellow contrapuntal woodwind interlude, and there follows a long stretch with some alternation of faster and more complex passages, building a bit more each time, with pullbacks to this sort of mellowness.  Around 6:30 things seem to get more organized for a final buildup.  The ending, with an upward brass gliss emerging out of the ensemble to a momentarily held note, and then a sudden drop to a timpani-punctuated chord, reminds me a little bit of Le Sacre too.

The program for this piece seems to be the gyrating flight of a moth before, and eventual immolation in, a flame, which is also in obvious parallel with Le Sacre's program, of a virgin obliged to dance herself to death in a pagan rite.  So I suspect the structural and idiomatic parallels to Le Sacre are no accident, although the overall tone is much lighter, and at 8'38 in this performance, the piece is of course much shorter.  I interpret these parallels, especially as dextrously integrated with harmonic movement at times quite uncharacteristic of Le Sacre, as a bit of a cheeky and light-hearted tour-de-force of compositional virtuosity.  The thematic material does have interest, but might be a little more on the generic side than ideal in places.  That is not really a problem in this piece.  I enjoyed some of the other pieces on his site but did find some of them a bit lacking in gripping melody.  Sound and Smoke I and II sound tailor-made for something like a fantasy movie soundtrack, and are extremely well done.  Part I sounds just as you might think from the subtitle "feudal castle lights", while Part II I found more distinctive.  I have a feeling that with some even stronger melodic material, perhaps some passages with longer, more sustained lines, Cuong could be really dangerous.  Hopefully Cuong will come up with more gripping melodic material in whatever way is necessary, whether from moments of personal inspiration or by ripping it off with exquisite taste à la Stravinsky.  (I exaggerate, Stravinsky fans... peace, I am one of you.)  Some of Cuong's other pieces show ability in more contemporary idioms.  He is only 24, a graduate student in composition at Princeton.  He is clearly getting a lot of recognition, as the list of awards, commissions, and performances on his webpage shows.  So he probably has a good career assured.  I hope he has his sights fixed on greatness; I'll be very interested to see what comes next.

Bonus:  On that December 12 PT stream, available for a few more days, the Brahms serenade (end of the 2nd hour) performed by the Sinfonia da Camera, if played on a good stereo, is magic.  (On first hearing through a cheap radio I was unimpressed.  Maybe it is all about the bass, although I think an undistorted treble helps too.)

Isole e Olena 2005 Chianti Classico

A half-bottle of the 2005 Isole e Olena Chianti Classico, consumed a few days ago, was superb. Medium-bodied, with a fair bit of fine but fairly grippy tannin, this was elegant, and for a somewhat tannic wine, somewhat velvety and a pleasure to drink. It didn't seem at all tired or oxidized. Flavors predominantly dried cherry or other red fruits and a hint of pine, at least to my nose.  Super tasty.  Nice long finish. Easily my favorite of the Chiantis I've tasted. I don't remember how much I paid, but recent vintages seem to go for $13-15 a half bottle, $20-25 a bottle, which although not cheap, is a bargain if they turn out this well. To judge by how youthful and tannic it still was at 9 years old, I'd guess this one needs to be aged to be at its best---at 9 years it was clearly getting there, but could probably go another 5 or more years and possibly get even better.

Isole e Olena don't appear to have a website; there is more information about them at Giuliana Imports, the Boulder-based importer of this bottle.  Since I have been encountering a lot of claims to the effect that much writing about wine is basically just noise and fashion-following, strongly influenced by things other than the pure olfactory sensation of the wine, I'll point out that their description of the 2011 is very close to my description of the 2005, despite my not having read it (as befits a truly serious wine, there are no olfactory notes on the label either), which suggests to me, anyway, that Isole e Olena make this wine in a consistent style that can be identified by taste.  Of course this is just one observation, and there is definitely a lot of noise and influence from non-olfactory things like price and reputation and label appearance that enters into people's writing about wine.  (Mention of "red fruits" or "dark berries" could easily be influenced by the wine's color, for example, although in my opinion there is usually more to it than that.)  My point is that I think there is a genuine olfactory basis for some of this stuff too.

Based on perusing people's notes on the web (after writing mine), it seems that a lot of people liked this wine young, opinions diverged at about 3-7 years after vintage, and the consensus is more clearly positive over the last few years, suggesting it might have gone through a "dumb"  phase as many wines do during aging.  Also, some people seem to object to the relatively lighter-bodied style, which I happen to love when it is combined, as here, with intensity.  This is a serious producer that has been around at least as long as I've been tasting wine, and based on this sample, their Chianti is indeed a classic.  My sense is that if you have the ability to age it to 9-15 years after vintage you can't go wrong buying multiple bottles of this wine in any decent vintage.

What I just wrote is more meaningful than trying to assign some arbitrary number, but I guess on a Parkeresque 100 point scale, I'd give it something like a 92... and not in the inflated sense where anything you like gets 90---to get 90 or above in my book, a wine has to be at least a bit extraordinary.

Iverson/Motian/Grenadier It's Easy To Remember, II: a deeper appreciation

Since first posting on the topic, I've now played (in my halting way) the solo piano ad lib introduction to the live Ethan Iverson / Paul Motian / Larry Grenadier performance of It's Easy to Remember in Guillaume Hazebrouck's transcription, and listened to it several more times.  I'm even more taken by this masterful performance, especially the introduction.  The harmonies in the introduction are often quite dissonant but beautifully limpid, probably due to the very open voicings (wide intervals) and choice of intervals.  The dissonances reminiscent of 20th century classical music, combined with untypical but compelling voice-leading, remind me a bit of Bill Evans, but the choice of intervals and limpid sonority don't so much.  The (incomplete) blow-by-blow that follows is mostly for my own reference, so you might skip down to the next paragraph if harmonic analysis doesn't interest you.  It's far from crucial for appreciating the music, but I really want to know how these sounds are made.  The first part is mostly over an E flat pedal (the piece is in E flat), with a couple of excursions to Ab. The first chord is fabulous, with successive intervals of a minor ninth, minor 7th, minor 6th (Eb, E, D, Bb).  Then the two inner voices move inward by a half step for another open, somewhat dissonant chord.  It's perhaps not so important to analyze these harmonically, but the first comes off pretty clearly as an Eb major voicing, with no 3rd, which no doubt contributes to the spare, clear sound, and with a major 7th, and as for the E natural (b9 you could say), well it just sounds great, and moves up to a natural 9 on the next chord, while the 7th moves down to the minor 7th of Eb, suggesting perhaps a change in quality to dominant or minor, though this is not so clear as there's still no 3rd present.  Later in the introduction, the same voicing will indeed function as a dominant leading to an Ab major triad at the end of the first system of the transcription.  But first we get a repeat of the first two chords at a faster pace, except with A natural in place of Bb in the top voice (which is basically paraphrasing the melody).  The tenor voice is going up chromatically, cadencing toward a G as part of the double-whole-note Eb major 7th, the first time we get a 3rd with an Eb chord.  The repose is disturbed with a little tweak up to a B natural in the treble, just to add a little more pretty dissonance to the picture. (Nothing wrong with a touch of the "girlfriend chord" once in a while.)  Then we again get those first two chords, Bb in the treble again, moving in quarters, initiating the same four-quarter-note chromatic ascension in the tenor to G, but the bass moving up to Ab on the last two quarters, over which the harmony sounds first like Ab7, then Ab m7, while the top Bb leads down into a bluesy figure.  The next system finishes out with more chromatic movement in the bass, more intricate melody in the top voice accompanied by good inner voice action especially in the tenor, and a final cadence on Eb major again, with the 3rd but in the same open voicing that marked the first appearance of the G before, except that now the D forms a minor 2nd cluster with that seemingly outrageous, but beautiful, E natural, kind of fusing the initial two dissonant Eb voicings but with the added 3rd for an earthier, more harmonically grounded sound, perfectly capping off the introductory chorus.

Besides the open voicings and relatively spare use of 3rds (so that they are all the more effective when they are used), movement by half-steps is a major feature of the voice-leading in this introduction, but it doesn't come across with any feeling of slick hepness or angst-ridden compulsion, perhaps because it's not being used heavily as b9 or #11 over dominant chords, or in related diminished or augmented substitutions for dominants.  Maybe there is a relative absence of tritones in the voicings, though I didn't check carefully.  Anyway, the half-step motion is prominent enough to be considered a major musical ingredient, but doesn't really interfere with what sounds to me like a relatively diatonic, if sometimes beautifully dissonant, feel.  I guess the chromatic motion is not, for the most part, setting up dissonances that cry out for an obvious resolution, nor effecting such resolution.  It reminds me a bit of Stravinsky in that the dissonance is often created by the interaction of natural melodic motions in the voices, and (along with the melodic motion) the actual intervals in the chord seem almost more important than any compulsive "functional" movement in the harmony, even though there is some of the latter on occasion.

The other remarkable thing about Iverson's playing on this piece is the strong influence of Monk, assimilated well into Iverson's own style, in the trio portion of the piece.  Monkian upward arpeggios appear as early as measure 16 (the 3rd measure of the first trio chorus), often combined with scalar material that still sounds quite Monkish (as in measure 16), or leading into more original melodic figures (as in measures 25-26).  A classic downward-dropping Monk left-hand figure is used in measure 30, a very bluesy Monkian chorus-ending figure at 44-46, and upward arpeggios in 47-48 lead again to more personal Iversonian material in 49-50; the list could go on.  Often Iverson seems to be extending or filling in Monkish lines with his own material, more reminiscent of standard bop-influenced lines but never quite the standard bop clichés.  There's lots of great action in the inner voices too, sometimes Monkian, sometimes not particularly so.  I think Monk's vocabulary and approach, even while it contributed crucially to the lingua franca of bebop and beyond, has probably been underexploited by pianists who are perhaps rightly afraid that it's hard to make something personal this way, something that doesn't sound like copying Monk's licks, but Iverson makes it work to great effect.  (I guess you could argue that a few other pianists have been strongly influenced by Monk's approach while keeping the harmonic and melodic content of their playing further from Monk than Iverson does here.)

In fact, the display of constructive influence by Monk, and the use of Monkian influences in a clear personal style, makes me wonder if the introduction might be more influenced by Monk than I realized.  I haven't listened to Monk's solo piano for a while, and it is probably time to listen to more.

Speaking of more, here's hoping we get to hear more from this set, or others in the same week at the Vanguard.  All About Jazz's review of what was probably the first set on that same Friday (March 11, 2011) is tantalizing, too.  This is some of the most interesting piano playing I've heard in many years---jazz of the highest order.

Ethan Iverson, Paul Motian, Larry Grenadier: It's Easy To Remember, live at the Vanguard

Excellent piece from 2011 by Ethan Iverson on the late Paul Motian.  Discusses a lot of music I need to check out, and unexpectedly includes a superb live version of Rodgers and Hart's It's Easy to Remember featuring some of the best jazz piano I've heard from Iverson, which means some of the best jazz piano I've heard in recent years. Plus there's a downloadable transcription of his playing, provided by Guillaume Hazebrouck. The harmonies in the piano introduction sound unusual to me, but totally natural.  I really love the intro.  There's a fair bit of Monkishness, especially later in the solo, but well integrated with Iverson's own conception.  Some nice interaction of multiple voices in the piano at times, not in a showy way, adds a lot.  I found this post linked  from Ethan's recent post on Motian's compositions, which Motian's niece and heir Cynthia McGuirl is considering publishing.

Domaine La Millière 2006 Châteauneuf-du-Pape Rouge Vieilles Vignes

Not much on wine recently, so here's a quick one on a wine I had with my parents recently: the 2006 Domaine La Millière Châteauneuf-du-Pape (Old Vines, Red).  Simply put, this is delicious wine with no flaws; perfection, essentially.  Scent, flavor, and finish are all strongly present and are pretty much of a piece, with a pronounced note of chocolate that reminds me of many Vacqueyras I've tasted, but with a more balanced, elegant character, and definitely not the glyceriney mouthfeel that some of these Vacqueyras have had.  Noticeable tannin, but not at all closed or hard, just helping give the wine some backbone and probably helping stick the flavor to the tongue for the strong finish.  Aside from the chocolate, perhaps red fruits, raspberry and maybe cherry, maybe a bit less herbal or spicy than some Châteauneufs I've tasted, but that's not a criticism.  Reminiscent a bit of a great Pauillac in some ways (OK, I've only ever tasted one first-growth Pauillac, a free taste of the 1984 Lafite-Rothschild, but this does remind me of it in terms of elegance, delicious forward flavors of fruit and sweets, though there was maybe a bit more vanilla than chocolate in the Lafite).  Nothing at all funky or off.  Somewhat silky or velvety... really delicious and refined.  This is a smashing success, I'd say pretty much a great wine.  If I had to give it a Parkeresque rating, something in the 91-93 range (as of the time I first paid any attention to his ratings, which is probably around 1985) would probably do.  Various other vintages of this are in the 19 to 23 euro range at La Millière's website---seems like a bargain to me if they are anything like this quality.  Available in the US for sure... I notice that North Berkeley Imports has them, and I have seen them in Santa Fe at the Casa Sena wine store.  I would, though, age them for 7-10 years or so... at 8 years old this seemed definitely ready to drink, but whether it's at its peak or has 5 more years of interesting development I wouldn't pretend to know.  About 60% Grenache and 10% each Syrah, Mourvèdre, Cinsault and Counoise.  The dominant chocolate and red fruits notes likely have a lot to do with the Grenache, with Syrah and Mourvèdre perhaps adding some complexity and depth and maybe, along with the Cinsault, tannin and body.  (I don't know what Counoise is, but perhaps I should find out.)  If this is in your price range, and you're able to keep it till at least 6-7 years from the vintage, I'd say snap up a few bottles or more.  (Might be good younger, for all I know... but I suspect that would be a waste of its potential.)

Carolina Chocolate Drops rock the Rialto, Tucson

On a visit to Tucson I tore myself away from the U of Arizona --- USC game to go hear the Carolina Chocolate Drops at the Rialto downtown.  Incredibly high-energy show---you can get an idea of the band's sound from Youtube, but it doesn't really convey the impact of a live show.  They are still on tour until October 24th, and the main point of this post is just to say that if you have a chance, go.  CCD got their start playing traditional or "old-time" African-American string band music, and that is still a large part of their repertoire.  The lineup has changed over the years, and I'm no expert on the changes since I'm new to the band.  Rhiannon Giddens, the lead singer (who majored in opera as an undergraduate at the Oberlin Conservatory), is the only founding member of the band left in the lineup.  (I was amused that she felt she had to explain how her name is pronounced---anyone who doesn't know obviously missed the 70s, but I guess that applies to a good chunk of the audience.) The band is extremely tight, everybody is topnotch, and the numbers featuring the other members are just as strong as those (perhaps a majority) featuring Giddens as primary vocalist, but Giddens is clearly the powerhouse.  Though her manner when singing is not at all stagey or acted, when she starts making music the star power and charisma are immediately apparent.  CCD are currently doing a very wide range of music, much of which will sound familiar but not exactly like anything you've heard before.  This is African-American music that is part of the roots of bluegrass and country, coming out of folk traditions that are perhaps not so well known nowadays, but in CCD's hands it's not at all an exercise in scholarly dusting off of "hmm, interesting" musical curios---it's alive for the performers and audience, sometimes with an impact and energy that reminds me of a solid punk rock show---indeed some of the audience were definitely pogoing.  Much of the music is full of fiddle and banjo, with Malcolm Parson on cello (and sometimes bones), and Rowan Corbett on a variety of instruments, including bones, guitar, banjo, and I think perhaps fiddle on occasion.  Hubby Jenkins played guitar, mandolin, and banjo.  Parson's cello playing really added a lot to the ensemble sound, and I liked his rare solos a lot too. If I'm not mistaken, Parson, Jenkins, and Corbett all played bones to great effect, with Corbett especially virtuosic. Jenkins did some excellent vocal work, too, and his solo country blues original was superb.

As I said, online video doesn't really capture the impact, but this video of them doing Cousin Emmy's Ruby Are You Mad At Your Man from their current tour does a pretty good job.  (I am not sure if this is band-sanctioned, so will remove the link if they request it.)  Music starts around 1:34.

They also cover more recent material, like Dallas Austin's hit for Blu Cantrell, "Hit 'em up Style".  Here's a video from this tour, though I thought the Tucson performance of this song was harder-hitting:

Not all their songs are on the same topic---it's just coincidence that these are two of the best videos on the toob of the current tour.

They don't play many originals, but the song Giddens wrote reflecting her reading of accounts of life under slavery in the 19th century was powerful.

There's a lot more on youtube, including more old-time music, though not so much with the current lineup. They can sing country with the best---I wouldn't be surprised if they hit the country charts one of these days (or perhaps it's already happened); they do a great job with Hank Williams' Please Don't Let Me Love You:

Indeed, Country Girl sounds to me like a straight shot at the contemporary country charts: solid stuff, though quite reminiscent of a dozen or so other celebrations of down-home-by-the-crick livin' encountered over the last decade on mainstream country radio, with an acoustic backing just as rocking and funky as the typical electrified setting for the genre nowadays, and just as deserving of a place there.

Definitely a band to get to know, and I plan to delve into their recordings now that I've had the live experience.

Thinking about Robert Wald's take on the loss, or not, of information into black holes

A warning to readers: As far as physics goes, I tend to use this blog to muse out loud about things I am trying to understand better, rather than to provide lapidary intuitive summaries for the enlightenment of a general audience on matters I am already expert on. Musing out loud is what's going on in this post, for sure. I will try, I'm sure not always successfully, not to mislead, but I'll be unembarrassed about admitting what I don't know.

I recently did a first reading (so, skipped and skimmed some, and did not follow all calculations/reasoning) of Robert Wald's book "Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics".  I like Wald's style --- not too lengthy, focused on getting the important concepts and points across and not getting bogged down in calculational details, but also aiming for mathematical rigor in the formulation of the important concepts and results.

Wald uses the algebraic approach to quantum field theory (AQFT), and his approach to AQFT involves looking at the space of solutions to the classical equations of motion as a symplectic manifold, and then quantizing from that point of view, in a somewhat Dirac-like manner (the idea is that Poisson brackets, which are natural mathematical objects on a symplectic manifold, should go to commutators [p_j, q_k] = -i \hbar \delta_{jk} between generalized momenta and positions, but what is actually used is the Weyl form e^{i \sigma p_j} e^{i \tau q_k} = e^{i \hbar \delta_{jk} \sigma \tau} e^{i \tau q_k} e^{i \sigma p_j} of the commutation relations), doing the Minkowski-space (special relativistic, flat space) version before embarking on the curved-space (semiclassical general relativistic) one.  He argues that this manner of formulating quantum field theory has great advantages in curved space, where the dependence of the notion of "particle" on the reference frame can make quantization in terms of an expansion in Fourier modes of the field ("particles") problematic.  AQFT gets somewhat short shrift among mainstream quantum field theorists, I sense, in part because (at least when I was learning about it---things may have changed slightly, but I think not that much) no one has given a rigorous mathematical example of an algebraic quantum field theory of interacting (as opposed to freely propagating) fields in a spacetime with three space dimensions.  (And perhaps the number of AQFTs that have been constructed even in fewer space dimensions is not very large?)  There is also the matter pointed out by Rafael Sorkin: when AQFTs are formulated, as is often done, in terms of a "net" of local algebras of observables (each algebra associated with an open spacetime region, with compatibility conditions defining what it means to have a "net" of algebras on a spacetime, e.g. the algebra corresponding to a subset of region R is a subalgebra of the algebra for region R, and if two subsets of a region R are spacelike separated then their corresponding subalgebras commute), the implicit assumption that every Hermitian operator in the algebra associated with a region can be measured "locally" in that region actually creates difficulties with causal locality---since regions are extended in spacetime, coupling together measurements made in different regions, through perfectly timelike classical feedforward of the results of one measurement to the setting of another, can create spacelike causal influence (and probably even signaling).  See Rafael's paper "Impossible measurements on quantum fields".  (I wonder if that is related to the difficulties in formulating a consistent interacting theory in higher spacetime dimension.)
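
For reference, here is a minimal sketch (my own, for a single degree of freedom in a standard sign convention, not necessarily Wald's notation) of where the Weyl form comes from:

[q, p] = i \hbar   implies   e^{i \sigma p} e^{i \tau q} = e^{i \hbar \sigma \tau} e^{i \tau q} e^{i \sigma p},

which is the special case e^A e^B = e^{[A,B]} e^B e^A of the Baker-Campbell-Hausdorff formula, valid when [A,B] commutes with both A and B (here A = i \sigma p, B = i \tau q, so [A,B] = i \hbar \sigma \tau).  The point of passing to the exponentiated (Weyl) form is that the operators involved are unitary, hence bounded, which is what the algebraic formulation wants to work with.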

That's probably tangential to our concerns here, though, because it appears we can understand the basics of the Hawking effect, of radiation by black holes, leading to black-hole evaporation and the consequent worry about "nonunitarity" or "information loss" in black holes, without needing a quantized interacting field theory.  We treat spacetime, and the matter that is collapsing to form the black hole, in classical general relativistic terms, and the Hawking radiation arises in the free field theory of photons in this background.

I liked Wald's discussion of black hole information loss in the book.  His attitude is that he is not bothered by it, because the spacelike hypersurface on which the state is mixed after the black hole evaporates (even when the states on similar spacelike hypersurfaces before black hole formation are pure) is not a Cauchy surface for the spacetime.  There are non-spacelike, inextensible curves that don't intersect that hypersurface.  The pre-black-hole spacelike hypersurfaces on which the state is pure are, by contrast, Cauchy surfaces---but some of the trajectories crossing such an initial surface go into the black hole and hit the singularity, "destroying" information.  So we should not expect purity of the state on the post-evaporation spacelike hypersurfaces any more than we should expect, say, a pure state on a hyperboloid of revolution contained in a forward light-cone in Minkowski space --- there are trajectories that never intersect that hyperboloid.

Wald's talk at last year's firewall conference is an excellent presentation of these ideas; most of it makes the same points made in the book, but with a few nice extra observations. There are additional sections, for instance on why he thinks black holes do form (i.e. he rejects the idea that a "frozen star" could be the whole story), and on anti-de Sitter / conformal field theory models of black hole evaporation. In the latter he stresses the idea that early and late times in the boundary CFT do not correspond in any clear way to early and late times in the bulk field theory (at least that is how I recall it).

I am not satisfied with a mere statement that the information "is destroyed at the singularity", however.  The singularity is a feature of the classical general relativistic mathematical description, and near it the curvature becomes so great that we expect quantum aspects of spacetime to become relevant.  We don't know what happens to the degrees of freedom inside the horizon with which variables outside the horizon are entangled (giving rise to a mixed state outside the horizon), once they get into this region.  One thing that a priori seems possible is that the spacetime geometry, or maybe some pre-spacetime quantum (or post-quantum) variables that underlie the emergence of spacetime in our universe (i.e. our portion of the universe, or multiverse if you like), may go into a superposition whose components have different values of these inside-the-horizon degrees of freedom, still correlated (entangled) with the post-evaporation variables. Perhaps this is a superposition including pieces of spacetime disconnected from ours, perhaps of weirder things still involving pre-spacetime degrees of freedom.  It could also be, as speculated by those who also speculate that the state on the post-evaporation hypersurface in our (portion of the) universe is pure, that these quantum fluctuations in spacetime somehow mediate the transfer of the information back out of the black hole in the evaporation process, despite worries that this process violates constraints of spacetime causality.  I'm not that clear on the various mechanisms proposed for this, but would look again at the work of Susskind, and Susskind and Maldacena ("ER=EPR"), to try to recall some of the proposals. (My rough idea of the "ER=EPR" proposals is that they want to view entangled "EPR" ("Einstein-Podolsky-Rosen") pairs of particles, or at least the Hawking radiation quanta and their entangled partners that went into the black hole, as also associated with miniature "wormholes" ("Einstein-Rosen", or ER, bridges) in spacetime connecting the inside to the outside of the black hole; somehow this is supposed to help out with the issue of nonlocality, in a way that I might understand better if I understood why nonlocality threatens to begin with.)

The main thing I've taken from Wald's talk is a feeling of not being worried by the possible lack of unitarity in the transformation from a spacelike pre-black-hole hypersurface in our (portion of the) universe to a post-black-hole-evaporation one in our (portion of the) universe. Quantum gravity effects at the singularity either transfer the information into inaccessible regions of spacetime ("other universes"), leaving (if things started in a pure state on the pre-black-hole surface) a mixed state on the post-evaporation surface in our portion of the universe, but still one that is pure in some sense overall, or they funnel it back out into our portion of the universe as the black hole evaporates. It is a challenge, and one that should help stimulate the development of quantum gravity theories, to figure out which, and exactly what is going on, but I don't feel any strong a priori compulsion toward either a unitary or a nonunitary evolution from pre-black-hole to post-evaporation spacelike hypersurfaces in our portion of the universe.

Quantum imaging with entanglement and undetected photons, II: short version

Here's a short explanation of the experiment reported in "Quantum imaging with undetected photons" by members of Anton Zeilinger's group in Vienna (Barreto Lemos, Borish, Cole, Ramelow, Lapkiewicz and Zeilinger).  The previous post also explains the experiment, but in a way that is closer to my real-time reading of the article; this post is cleaner and more succinct.

It's most easily understood by comparison to an ordinary Mach-Zehnder interferometry experiment. (The most informative part of the wikipedia article is the section "How it works"; Fig. 3 provides a picture.)  In this sort of experiment, photons from a source such as a laser encounter a beamsplitter and go into a superposition of being transmitted and reflected.  One beam goes through an object to be imaged, and acquires a phase factor---a complex number of modulus 1 that depends on the refractive index of the material out of which the object is made, and the thickness of the object at the point at which the beam goes through.  You can think of this complex number as an arrow of length 1 lying in a two-dimensional plane; the arrow rotates as the photon passes through material, with the rate of rotation depending on the refractive index of the material. (If the thickness and/or refractive index varies on a scale smaller than the beamwidth, then the phase shift may vary over the beam cross-section, allowing the creation of an image of how the thickness of the object---or at least, the total phase imparted by the object, since the refractive index may be varying too---varies in the plane transverse to the beam.  Otherwise, to create an image rather than just measure the total phase it imparts at a point, the beam may need to be scanned across the object.)  The phase shift can be detected by recombining the beams at the second beamsplitter, and observing the intensity of light in each of the two output beams, since the relative probability of a photon coming out one way or the other depends on the relative phase of the two input beams; this dependence is called "interference".
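
To make the phase dependence explicit, here is a minimal textbook-style sketch (my own notation, not taken from the article) for a balanced Mach-Zehnder interferometer with phase \varphi imparted in the arm containing the object:

|in\rangle \to \tfrac{1}{\sqrt{2}} ( |a\rangle + i |b\rangle ) \to \tfrac{1}{\sqrt{2}} ( e^{i \varphi} |a\rangle + i |b\rangle ),

and after the beams are recombined at the second beamsplitter, the detectors at the two outputs fire with probabilities P_1 = (1 - \cos \varphi)/2 and P_2 = (1 + \cos \varphi)/2, so the output intensities trace out the phase, and hence the optical thickness, of the object.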

Now open the homepage of the Nature article and click on Figure 1 to enlarge it.  This is a simplified schematic of the experiment done in Vienna.  Just as in ordinary Mach-Zehnder interferometry, a beam of photons is split on a beamsplitter (labeled BS1 in the figure).  One can think of each photon from the source going into a superposition of being reflected and transmitted at the first beamsplitter.  The transmitted part is downconverted by passing through the nonlinear crystal NL1 into an entangled pair consisting of a yellow and a red photon; the red photon is siphoned off by a dichroic (color-dependent) beamsplitter, D1, and passed through the object O to be imaged, acquiring a phase dependent on the refractive index of the object and its thickness.  The phase, as I understand things, is associated with the photon pair even though it is imparted by passing only the red photon through the object.  In order to observe the phase via interferometry, one needs to involve both the red and yellow photon, coherently.  (If one could observe it, as soon as it was imparted to the pair, by interacting with the yellow photon alone, one could send a signal from the interaction point to the yellow part of the beam instantaneously, violating relativity.)  The red part of the beam is then recombined (at dichroic beamsplitter D2) with the reflected portion of the beam (which is still at the original wavelength), and that portion of the beam is passed through another nonlinear crystal, NL2.  This downconverts the part of the beam that is at the original wavelength into a red-yellow pair, with the resulting red component aligned with---and indistinguishable from---the red component that has gone through the object.  The phase associated with the photon pair created in the transmitted part of the beam, whose red member went through the object, is now associated with the yellow photons in the transmitted beam, since the red photons in that beam have been rendered indistinguishable from the ones created in the reflected beam, and so retain no information about the relative phase.  This means that the phase can be observed by siphoning out the red photons (at dichroic beamsplitter D3), recombining just the yellow photons with a beamsplitter BS2, and observing the intensities at the two outputs of this final beamsplitter, precisely as in the last stage of an ordinary Mach-Zehnder experiment.  The potential advantage over ordinary Mach-Zehnder interferometry is that one can image the total phase imparted by the object at a wavelength different from the wavelength of the photons that are interfered and detected at the final stage, which could be an advantage for instance if good detectors are not available at the wavelength one wants to image the object at.
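
A schematic way to see this in formulas (my own simplified single-photon sketch, not the paper's Eq. (1)): writing |y_1\rangle |r_1\rangle for a yellow-red pair created at NL1 and |y_2\rangle |r_2\rangle for one created at NL2, the state after both crystals is roughly

|\psi\rangle \propto e^{i \varphi} |y_1\rangle |r_1\rangle + |y_2\rangle |r_2\rangle,

with \varphi the phase imparted by the object.  Aligning the red beam that went through the object with the red beam generated at NL2 makes the red modes identical, |r_1\rangle = |r_2\rangle = |r\rangle, so the state factors as ( e^{i \varphi} |y_1\rangle + |y_2\rangle ) |r\rangle, and interfering the two yellow modes at BS2 gives count rates proportional to 1 \pm \cos \varphi, with the red photons never detected.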

Quantum imaging with entanglement and undetected photons in Vienna

[Update 9/1:  I have been planning (before any comments, incidentally) to write a version of this post which just provides a concise verbal explanation of the experiment, supplemented perhaps with a little formal calculation.  However, I think the discussion below comes to a correct understanding of the experiment, and I will leave it up as an example of how a physicist somewhat conversant with but not usually working in quantum optics reads and quickly comes to a correct understanding of a paper.  Yes, the understanding is correct even if some misleading language was used in places, but I thank commenter Andreas for pointing out the latter.]

Thanks to tweeters @AtheistMissionary and @robertwrighter for bringing to my attention this experiment by a University of Vienna group (Gabriela Barreto Lemos, Victoria Borish, Garrett D. Cole, Sven Ramelow, Radek Lapkiewicz and Anton Zeilinger), published in Nature, on imaging using entangled pairs of photons.  It seems vaguely familiar, perhaps from my visit to the Brukner, Aspelmeyer and Zeilinger groups in Vienna earlier this year; it may be that one of the group members showed or described it to me when I was touring their labs.  I'll have to look back at my notes.

This New Scientist summary prompts the Atheist and Robert to ask (perhaps tongue-in-cheek?) if it allows faster-than-light signaling.  The answer is of course no. The New Scientist article fails to point out a crucial aspect of the experiment, which is that there are two entangled pairs created, each one at a different nonlinear crystal, labeled NL1 and NL2 in Fig. 1 of the Nature article.  [Update 9/1: As I suggest parenthetically, but in not sufficiently emphatic terms, four sentences below, and as commenter Andreas points out,  there is (eventually) a superposition of an entangled pair having been created at different points in the setup; "two pairs" here is potentially misleading shorthand for that.] To follow along with my explanation, open the Nature article preview, and click on Figure 1 to enlarge it.  Each pair is coherent with the other pair, because the two pairs are created on different arms of an interferometer, fed by the same pump laser.  The initial beamsplitter labeled "BS1" is where these two arms are created (the nonlinear crystals come later). (It might be a bit misleading to say two pairs are created by the nonlinear crystals, since that suggests that in a "single shot" the mean photon number in the system after both nonlinear crystals  have been passed is 4, whereas I'd guess it's actually 2 --- i.e. the system is in a superposition of "photon pair created at NL1" and "photon pair created at NL2".)  Each pair consists of a red and a yellow photon; on one arm of the interferometer, the red photon created at NL1 is passed through the object "O".  Crucially, the second pair is not created until after this beam containing the red photon that has passed through the object is recombined with the other beam from the initial beamsplitter (at D2).  ("D" stands for "dichroic mirror"---this mirror reflects red photons, but is transparent at the original (undownconverted) wavelength.)  Only then is the resulting combination passed through the nonlinear crystal, NL2.  Then the red mode (which is fed not only by the red mode that passed through the object and has been recombined into the beam, but also by the downconversion process from photons of the original wavelength impinging on NL2) is pulled out of the beam by another dichroic mirror.  The yellow mode is then recombined with the yellow mode from NL1 on the other arm of the interferometer, and the resulting interference observed by the detectors at lower right in the figure.

It is easy to see why this experiment does not allow superluminal signaling by altering the imaged object, and thereby altering the image.  For there is an effectively lightlike or timelike (it will be effectively timelike, given the delays introduced by the beamsplitters and mirrors and such) path from the object to the detectors.  It is crucial that the red light passed through the object be recombined, at least for a while, with the light that has not passed through the object, in some spacetime region in the past light cone of the detectors, for it is the recombination here that enables the interference between light not passed through the object, and light passed through the object, that allows the image to show up in the yellow light that has not (on either arm of the interferometer) passed through the object.  Since the object must be in the past lightcone of the recombination region where the red light interferes, which in turn must be in the past lightcone of the final detectors, the object must be in the past lightcone of the final detectors.  So we can signal by changing the object and thereby changing the image at the final detectors, but the signaling is not faster-than-light.

Perhaps the most interesting thing about the experiment, as the authors point out, is that it enables an object to be imaged at a wavelength that may be difficult to efficiently detect, using detectors at a different wavelength, as long as there is a downconversion process that creates a pair of photons with one member of the pair at each wavelength.  By not pointing out the crucial fact that this is an interference experiment between two entangled pairs [Update 9/1: per my parenthetical remark above, and Andreas' comment, this should be taken as shorthand for "between a component of the wavefunction in which an entangled pair is created in the upper arm of the interferometer, and one in which one is created in the lower arm"], the description in New Scientist does naturally suggest that the image might be created in one member of an entangled pair, by passing the other member through the object,  without any recombination of the photons that have passed through the object with a beam on a path to the final detectors, which would indeed violate no-signaling.

I haven't done a calculation of what should happen in the experiment, but my rough intuition at the moment   is that the red photons that have come through the object interfere with the red component of the beam created in the downconversion process, and since the photons that came through the object have entangled yellow partners in the upper arm of the interferometer that did not pass through the object, and the red photons that did not pass through the object have yellow partners created along with them in the lower part of the interferometer, the interference pattern between the red photons that did and didn't pass through the object corresponds perfectly to an interference pattern between their yellow partners, neither of which passed through the object.  It is the latter that is observed at the detectors. [Update 8/29: now that I've done the simple calculation, I think this intuitive explanation is not so hot.  The phase shift imparted by the object "to the red photons" actually pertains to the entire red-yellow entangled pair that has come from NL1 even though it can be imparted by just "interacting" with the red beam, so it is not that the red photons interfere with the red photons from NL2, and the yellow with the yellow in the same way independently, so that the pattern could be observed on either color, with the statistical details perfectly correlated. Rather, without recombining the red photons with the beam, no interference could be observed between photons of a single color, be it red or yellow, because the "which-beam" information for each color is recorded in different beams of the other color.  The recombination of the red photons that have passed through the object with the undownconverted photons from the other output of the initial beamsplitter ensures that the red photons all end up in the same mode after crystal NL2 whether they came into the beam before the crystal or were produced in the crystal by downconversion, thereby ensuring that the red photons contain no record of which beam the yellow photons are in, and allowing the interference due to the phase shift imparted by the object to be observed on the yellow photons alone.]
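
To put the which-beam point in this update into formulas (again my own sketch, not the paper's): with the two-branch state |\psi\rangle \propto e^{i \varphi} |y_1\rangle |r_1\rangle + |y_2\rangle |r_2\rangle and the red modes left distinguishable (\langle r_1 | r_2 \rangle = 0), tracing out the red light leaves the yellow light in the mixture

\rho_{yellow} \propto |y_1\rangle \langle y_1| + |y_2\rangle \langle y_2|,

so the phase \varphi sits only in the unobserved red-yellow coherences and no single-color interference is possible; only when the red modes are rendered identical does the relative phase move into the yellow-yellow coherence and show up in the yellow counts at the final beamsplitter.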

As I mentioned, not having done the calculation, I don't think I fully understand what is happening.  [Update: Now that I have done a calculation of sorts, the questions raised in this paragraph are answered in a further Update at the end of this post.  I now think that some of the recombinations of beams considered in this paragraph are not physically possible.]  In particular, I suspect that if the red beam that passes through the object were mixed with the downconverted beam on the lower arm of the interferometer after the downconversion, and then peeled off before detection, instead of having been mixed in before the downconversion and peeled off afterward, the interference pattern would not be observed, but I don't have a clear argument why that should be.  [Update 8/29: the process is described ambiguously here.  If we could peel off the red photons that have passed through the object while leaving the ones that came from the downconversion at NL2, we would destroy the interference.  But we obviously can't do that; neither we nor our apparatus can tell these photons apart (and if we could, that would destroy interference anyway).  Peeling off *all* the red photons before detection actually would allow the interference to be seen, if we could have mixed back in the red photons first; the catch is that this mixing-back-in is probably not physically possible.]  Anyone want to help out with an explanation?  I suspect one could show that this would be the same as peeling off the red photons from NL2 after the beamsplitter but before detection, and only then recombining them with the red photons from the object, which would be the same as just throwing away the red photons from the object to begin with.  If one could image in this way, then that would allow signaling, so it must not work.  But I'd still prefer a more direct understanding via a comparison of the downconversion process with the red photons recombined before, versus after.  Similarly, I suspect that mixing in and then peeling off the red photons from the object before NL2 would not do the job, though I don't see a no-signaling argument in this case.  But it seems crucial, in order for the yellow photons to bear an imprint of interference between the red ones, that the red ones from the object be present during the downconversion process.

The news piece summarizing the article in Nature is much better than the one at New Scientist, in that it does explain that there are two pairs, and that one member of one pair is passed through the object and recombined with something from the other pair.  But it does not make it clear that the recombination takes place before the second pair is created---indeed it strongly suggests the opposite:

According to the laws of quantum physics, if no one detects which path a photon took, the particle effectively has taken both routes, and a photon pair is created in each path at once, says Gabriela Barreto Lemos, a physicist at Austrian Academy of Sciences and a co-author on the latest paper.

In the first path, one photon in the pair passes through the object to be imaged, and the other does not. The photon that passed through the object is then recombined with its other ‘possible self’ — which travelled down the second path and not through the object — and is thrown away. The remaining photon from the second path is also reunited with itself from the first path and directed towards a camera, where it is used to build the image, despite having never interacted with the object.

Putting the quote from Barreto Lemos about a pair being created on each path before the description of the recombination suggests that both pair-creation events occur before the recombination, which is wrong. But the description in this article is much better than the New Scientist description---everything else about it seems correct, and it gets the crucial point, that there are two pairs, one member of which passes through the object and is recombined with elements of the other pair at some point before detection, right, even if it is misleading about exactly where the recombination point is.

[Update 8/28: clearly if we peel the red photons off before NL2, and then peel the red photons created by downconversion at NL2 off after NL2 but before the final beamsplitter and detectors, we don't get interference because the red photons peeled off at different times are in orthogonal modes, each associated with one of the two different beams of yellow photons to be combined at the final beamsplitter, so the interference is destroyed by the recording of "which-beam" information about the yellow photons, in the red photons. But does this mean if we recombine the red photons into the same mode, we restore interference? That must not be so, for it would allow signaling based on a decision to recombine or not in a region which could be arranged to be spacelike separated from the final beamsplitter and detectors.  But how do we see this more directly?  Having now done a highly idealized version of the calculation (based on notation like that in and around Eq. (1) of the paper) I see that if we could do this recombination, we would get interference.  But to do that we would need a nonphysical device, namely a one-way mirror, to do this final recombination.  If we wanted to do the other variant I discussed above, recombining the red photons that have passed the object with the red (and yellow) photons created at NL2 and then peeling all red photons off before the final detector, we would even need a dichroic one-way mirror (transparent to yellow, one-way for red), to recombine the red photons from the object with the beam coming from NL2.  So the only physical way to implement the process is to recombine the red photons that have passed through the object with light of the original wavelength in the lower arm of the interferometer before NL2; this just needs an ordinary dichroic mirror, which is a perfectly physical device.]

Free will and retrocausality at Cambridge II: Conspiracy vs. Retrocausality; Signaling and Fine-Tuning

Expect (with moderate probability) substantial revisions to this post, hopefully including links to relevant talks from the Cambridge conference on retrocausality and free will in quantum theory, but for now I think it's best just to put this out there.

Conspiracy versus Retrocausality

One of the main things I hoped to straighten out for myself at the conference on retrocausality in Cambridge was whether the correlations between measurement settings and "hidden variables" involved in a retrocausal explanation of Bell-inequality-violating quantum correlations are necessarily "conspiratorial", as Bell himself seems to have thought.  The idea seems to be that correlations between measurement settings and hidden variables must be due to some "common cause" in the intersection of the backward light cones of the two.  That is, a kind of "conspiracy" coordinating the relevant hidden variables that can affect the measurement outcome with all sorts of intricate processes that can affect which measurement is made, such as those affecting your "free" decision as to how to set a polarizer, or, in case you set up a mechanism to control the polarizer setting according to some apparatus reasonably viewed as random ("the Swiss national lottery machine" was the one envisioned by Bell), the functioning of this mechanism.  I left the conference convinced once again (after doubts on this score had been raised in my mind by some discussions at New Directions in the Philosophy of Physics 2013) that the retrocausal type of explanation Price has in mind is different from a conspiratorial one.

Deflationary accounts of causality: their impact on retrocausal explanation

Distinguishing "retrocausality" from "conspiratorial causality" is subtle, because it is not clear that causality makes sense as part of a fundamental physical theory.  (This is a point which, in this form, apparently goes back to Bertrand Russell early in the last century.  It also reminds me of David Hume, although he was perhaps not limiting his "deflationary" account of causality to causality in physical theories.)  Causality might be a concept that makes sense at the fundamental level for some types of theory, e.g. a version ("interpretation") of quantum theory that takes measurement settings and outcomes as fundamental, taking an "instrumentalist" view of the quantum state as a means of calculating outcome probabilities given settings, and not as itself real, without giving a further formal theoretical account of what is real.  But in general, a theory may give an account of logical implications between events, or more generally, correlations between them, without specifying which events cause, or exert some (perhaps probabilistic) causal influence on, others.  The notion of causality may be something that is emergent, that appears from the perspective of beings like us, who are part of the world, and intervene in it, or model parts of it theoretically.  In our use of a theory to model parts of the world, we end up taking certain events as "exogenous".  Loosely speaking, they might be determined by us agents (using our "free will"), or by factors outside the model.  (And perhaps "determined" is the wrong word.)  If these "exogenous" events are correlated with other things in the model, we may speak of this correlation as causal influence.  This is a useful way of speaking, for example, if we control some of the exogenous variables: roughly speaking, if we believe a model that describes correlations between these and other variables not taken as exogenous, then we say these variables are causally influenced by the variables we control that are correlated with them.  We find this sort of notion of causality valuable because it helps us decide how to influence those variables we can influence, in order to make it more likely that other variables, that we don't control directly, take values we want them to.  This view of causality, put forward for example in Judea Pearl's book "Causality", has been gaining acceptance over the last 10-15 years, but it has deeper roots.  Phil Dowe's talk at Cambridge was an especially clear exposition of this point of view on causality (emphasizing exogeneity of certain variables over the need for any strong notion of free will), and its relevance to retrocausality.

This makes the discussion of retrocausality more subtle, because it raises the possibility that a retrocausal and a conspiratorial account of what's going on in a Bell experiment might describe the same correlations, between the Swiss national lottery machine (or whatever controls my whims in setting a polarizer), all the variables these things are influenced by, and the polarizer settings and outcomes, differing only in the causal relations they posit between these variables.  That might be true, if a retrocausalist decided to try to model the process by which the polarizer was set.  But the point of the retrocausal account seems to be that it is not necessary to model this process in order to explain the correlations between measurement results.  The retrocausalist posits a lawlike correlation between measurement settings and some of the hidden variables in the past light cone of both measurement outcomes.  As long as this retrocausal influence affects only the values of "hidden", though real, variables, and not observable past events, there is nothing obviously more paradoxical about imagining it than about imagining, as we do all the time, that macroscopic variables we exert some control over, such as measurement settings, are correlated with things in the future.  Indeed, as Huw Price has long been pointing out (I have only recently realized for just how long), if we believe that the fundamental laws of physics are symmetric with respect to time-reversal, then what needs explaining is the absence of retrocausality, if we dismiss its possibility, or its relative scarcity, if we accept it only to the limited extent needed to potentially explain Bell correlations.  Part of the explanation, of course, is likely that causality, as mentioned above, is a notion that is useful for agents situated within the world, rather than one that applies to the "view from nowhere and nowhen" that some (e.g. Price, who I think coined the term "nowhen") think is, or should be, taken by fundamental physical theories.  Therefore whatever asymmetries are associated with our apparently asymmetric experience of the directionality of time (these could be somewhat local in spacetime even if extremely large-scale, or due to "spontaneous" symmetry-breaking, which may really be explicit breaking by a small perturbation) may also explain why we introduce the causal arrows we do into our descriptions, and therefore why we so rarely introduce retrocausal ones.  At the same time, such an explanation might well leave room for the limited retrocausality Price would like to introduce, for the purpose of explaining Bell correlations, especially because such retrocausality does not allow backwards-in-time signaling.
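To state the intended distinction between the hidden and the observable a bit more formally (my own shorthand, continuing the notation introduced above, not a formulation from the talks): the retrocausalist allows the settings to be correlated with hidden variables in their common past, but not with anything an observer back there could actually read off, and the usual spacelike no-signaling conditions still hold:

\[
p(\lambda \mid a,b) \neq p(\lambda) \ \text{ is permitted,}
\qquad
p(r \mid a,b) = p(r) \ \text{ for any observable past record } r,
\qquad
\sum_B p(A,B \mid a,b) \ \text{ independent of } b .
\]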

Signaling (spacelike and backwards-timelike) and fine-tuning. Emergent no-signaling?

A theme that came up repeatedly at the conference was "fine-tuning": the worry that no-spacelike-signaling, and possibly also no-retrocausal-signaling, seem to require a kind of "fine-tuning" in a hidden-variable model that uses such influences to explain quantum correlations.  Why, in Bohmian theory, if we have spacelike influence of variables we control on physically real (but not necessarily observable) variables, should things be arranged just so that we cannot use this influence to remotely control observable variables, i.e. signal?  Similarly, one might ask why, if we have backwards-in-time influence of controllable variables on physically real variables, things are arranged just so that we cannot use this influence to remotely control observable variables at an earlier time.  I think (and I believe this possibility was raised at the conference) that a possible explanation, suggested by the above discussion of causality, is that for macroscopic agents such as us, with usually-reliable memories, some degree of control over our environment, and persistence over time, to arise at all, it may be necessary that the scope of such macroscopic "observable" influences be limited, in order that there be a coherent macroscopic story for us to tell, in order for us even to be around to wonder whether there could be such signaling or not.  (So the term "emergent no-signaling" in the section heading might be slightly misleading: signaling, causality, control, and limitations on signaling might all necessarily emerge together.)  Such a story might end up involving thermodynamic arguments, about the sorts of structures that might emerge in a metastable equilibrium, or in a dynamically stable state maintained by a temperature gradient, or something of the sort.  Indeed, the distribution of hidden variables (usually positions and/or momenta) according to the squared modulus of the wavefunction, which is necessary both to get agreement of Bohmian theory with quantum theory and to prevent signaling (and which does look like "fine-tuning" inasmuch as it requires a precise choice of probability distribution over initial conditions), has on various occasions been justified by arguments that it represents a kind of equilibrium that would be rapidly approached even if it did not initially obtain.  (I have no informed view at present on how good these arguments are, though I have at various times read some of the relevant papers; Bohm himself, and Sheldon Goldstein, are the authors who come to mind.)
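For what it's worth, here is the precise statement behind that "equilibrium" talk, as I recall it (a from-memory sketch, not a careful survey of the quantum-equilibrium literature).  In Bohmian mechanics for a single particle, the configuration is guided by the wavefunction, and the Schrödinger equation implies that the density \( |\psi|^2 \) is carried along by exactly that guiding velocity field:

\[
\mathbf{v} \;=\; \frac{\hbar}{m}\,\mathrm{Im}\,\frac{\nabla \psi}{\psi},
\qquad
\frac{\partial\, |\psi|^2}{\partial t} \;+\; \nabla\!\cdot\!\bigl(|\psi|^2\, \mathbf{v}\bigr) \;=\; 0 .
\]

So if the actual distribution of particle positions \( \rho \) equals \( |\psi|^2 \) at one time, it does so at all times (equivariance).  The "fine-tuning" worry is about why it should equal \( |\psi|^2 \) in the first place, and the relaxation arguments alluded to above are attempts to show that fairly generic distributions with \( \rho \neq |\psi|^2 \) get driven toward it.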

I should mention that at the conference the appeal of such statistical/thermodynamic arguments for "emergent" no-signaling was questioned (I think by Matthew Leifer, who with Rob Spekkens has been one of the main proponents of the idea that no-signaling can look like a kind of fine-tuning, and that it would be desirable to have a model that gives a satisfying explanation of it), on the grounds that one might expect "fluctuations" away from the equilibria, metastable structures, or steady states, whereas we don't observe even small fluctuations away from no-signaling: the law seems to hold with certainty.  This is an important point, and although I suspect there are adequate rejoinders, I don't see at the moment what they might be.