What's wrong---and what's right---about quantum parallelism as an explanation for quantum speedup

I couldn't help crying "Nooo!!!!" on Twitter at the following statement by Pierre Pariente, "Strategic Analyst at L’Atelier BNP Paribas," from a 2015 article, "Quantum Computing Set to Revolutionize the Health Sector":

L’Atelier's more mathematically-inclined readers will recognise the general rule that with n qbits, a quantum computer may be in a quantum superposition of 2^n states and will thus possess the capacity to solve that number of problems simultaneously.

In the interest of not just being negative, I should explain what's wrong with this statement---and also why it does capture something relevant to many of the examples where quantum computation theorists believe quantum algorithms have a theoretical speed advantage over classical ones.  I'll give the short version first, and maybe in another post I'll explain a bit (groan) more.

What's wrong with quantum parallelism?

If you want, you can run a quantum version of a classical algorithm for solving some problem---say, computing a function on an n-bit input string---on a superposition of all 2^n states representing the different input strings.  This is often called "quantum parallelism".  If we consider computing the value of some function, F, on each input x to be "solving a different problem", then in some very weak sense we might think of this as "solving 2^n different problems simultaneously".  However, the drug designers, or radiotherapists, or whoever one is going to be selling one's quantum computers and software to, are presumably going to want to know the answers to these problems.  But you cannot read out of a quantum computer that has run such a "quantum parallel" computation the answers to all of these problems (the value of the function on every input).  In fact, you can read out the answer to only one of the problems (the value, F(x), of F on a particular input x).  If you have run in superposition and you measure the register in which the algorithm writes the value F(x), you'll get the answer on a randomly drawn input (the famous randomness of quantum theory), and if you measure the part of the computer where the input string was stored, you'll also get that randomly drawn input itself.  There is absolutely no advantage here over just running the classical algorithm on a randomly chosen input.
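To make this concrete, here is a rough toy sketch in Python (mine, not anything from the article; the example function F and the register sizes are arbitrary choices) of a single "quantum parallel" evaluation followed by a measurement of both registers.  Each run yields F(x) on one uniformly random input x, which is exactly what you would get by evaluating F classically on a randomly chosen input.

```python
# Toy state-vector sketch (illustrative only): one "quantum parallel" evaluation
# of F on a uniform superposition of all inputs, followed by measuring both
# registers.  The result is F(x) for a single uniformly random x.
import numpy as np

n = 3                                   # number of input qubits (toy size)
N = 2 ** n                              # number of input strings
M = 4                                   # size of the output register (a power of 2)
F = lambda x: (3 * x + 1) % M           # an arbitrarily chosen example function

# Amplitudes of the joint state |x>|y>, stored as an N x M array.
state = np.zeros((N, M), dtype=complex)
state[:, 0] = 1 / np.sqrt(N)            # uniform superposition of inputs, output register in |0>

# "Oracle" step |x>|y> -> |x>|y XOR F(x)>; since y starts at 0, it just writes F(x).
after_oracle = np.zeros_like(state)
for x in range(N):
    for y in range(M):
        after_oracle[x, y ^ F(x)] += state[x, y]

# Measure both registers: outcome probabilities are squared amplitudes.
probs = (np.abs(after_oracle) ** 2).ravel()
outcome = np.random.choice(N * M, p=probs)
x_meas, y_meas = divmod(outcome, M)
print(f"measured input x = {x_meas}, output register = {y_meas} (= F({x_meas}) = {F(x_meas)})")
# Each run gives the value of F on one random input -- no more than running the
# classical algorithm on a randomly chosen input would give.
```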

What's right with quantum parallelism

So how can this quantum parallelism nevertheless be useful?  The reason is essentially the existence of many mutually incompatible observables in quantum theory (position and momentum being the most famous example of a pair of incompatible observables, though they are not the ones usually used in proposals for quantum computation).  In particular, quantum theory allows such a superposition state to be effectively (and often efficiently) measured in ways other than reading out the value of F and the input that gave rise to it---ways incompatible with that readout.  Such a measurement can tell us certain things about the function F other than its values on specific inputs, and these things may even depend on its values on exponentially many inputs.  The incompatibility of these measurements with the one that reads out the input and the function value implies that once the global information is obtained, the information about inputs and function values is no longer available---an example of the well-known fact that "measurement disturbs the state" of a quantum system.
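A standard textbook illustration of this point is the Deutsch-Jozsa idea (the sketch below is my own toy state-vector simulation of it, not anything drawn from the article): if a function f is promised to be either constant or balanced (taking each value on exactly half the inputs), then a single quantum-parallel call to f, followed by a measurement in a Hadamard-rotated basis incompatible with reading out individual values, tells you which of the two it is, a property that depends on all 2^n values of f.

```python
# Toy state-vector sketch of the Deutsch-Jozsa idea (illustrative only):
# one quantum-parallel call to f, then a measurement in the Hadamard basis,
# reveals a global property of f (constant vs. balanced).
import numpy as np

def walsh_hadamard(v):
    """Apply a Hadamard gate to every qubit of an amplitude vector of length 2^n."""
    v = v.copy()
    n = int(np.log2(len(v)))
    for q in range(n):
        step = 1 << q
        for i in range(len(v)):
            if i & step == 0:                     # pair up basis states differing in qubit q
                a, b = v[i], v[i | step]
                v[i], v[i | step] = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
    return v

def looks_constant(f, n):
    """One phase-oracle call to f, then a Hadamard-basis measurement of the input register."""
    N = 2 ** n
    amps = np.full(N, 1 / np.sqrt(N), dtype=complex)          # uniform superposition of inputs
    amps = amps * np.array([(-1) ** f(x) for x in range(N)])  # single "parallel" evaluation of f (as a phase)
    amps = walsh_hadamard(amps)                               # rotate to a basis incompatible with reading out x
    return bool(np.isclose(abs(amps[0]) ** 2, 1.0))           # |00...0> has probability 1 iff f is constant

n = 4
print(looks_constant(lambda x: 1, n))                         # constant function  -> True
print(looks_constant(lambda x: bin(x).count("1") % 2, n))     # balanced (parity)  -> False
```

The one bit you learn this way is genuinely global information about f; in exchange, the record of f's individual values is gone.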

Why do we care?

Many problems of practical interest take this form: we have a program (say, a classical circuit---from which we can always construct a quantum circuit) for computing a function F, but we want to know something about the function that we don't know how to just read off from the information we used to construct the circuit. For example, if F takes values in an ordered set (like the real numbers, to some precision) we might want to know its maximum value, and probably also an input on which it takes this maximum. (This application, "optimization", is mentioned in the article.)  Or, we might know, because of the way the circuit for computing the function is constructed, that it is periodic (we will need the set of inputs to have some structure that allows us to define periodicity, of course---maybe they are interpreted as binary notation for integers, for example), but we might not know---and might want to know---the period.  To oversimplify a bit, solving this problem is the key ingredient in Shor's algorithm for factoring large integers using quantum computation---the one that breaks currently used public-key cryptography protocols.  If we could compute and read out the values of the function on all inputs, of course we could figure out just about anything we want about it---but since there are 2^n inputs, doing things this way is going to take exponential (in n) resources---e.g., 2^n calculations of the function.

To repeat, it is the fact that quantum computing allows such a superposition state to be effectively measured in ways other than reading out the value of F that nevertheless gives us potential access, after just one function evaluation, to a certain amount of information about "global" aspects of the function, like its period, that depend on many values.  We are still very limited as to how much information we can get about the function in one run of a "quantum parallel" function evaluation---certainly, it is only polynomially much, not the exponentially much one could get by reading out its value on all inputs.  But in some cases, we can get significant information about some global aspect of the function that we care about, with a number of quantum-parallel function evaluations that, if they were mere classical function evaluations, would leave us able to get far less information, or even none, about that global aspect of the function.

How significant this speedup is may depend on what global information we want to know about the function, and what we know about it ahead of time.  If we know it is periodic, then in good cases we can use polynomially many evaluations to get the period in situations where it's thought we'd need exponentially many classical evaluations.  This is the basis of the "near-exponential" speedup of factoring by Shor's algorithm (it is not quite exponential, presumably because the best classical algorithm is not just to try to brute-force find the period that is found in the quantum algorithm, but is more sophisticated).  For relatively unstructured optimization problems, quantum algorithms usually make use of Grover's algorithm, which, again, does use quantum parallelism, and can find the optimum to high precision with roughly the *square root* of the number of function evaluations needed classically.
If the classical algorithm needs (some constant times) 2^n evaluations, in other words, the quantum one will need roughly (some constant times) 2^{n/2}---still exponential, though with half the exponent; an advantage which could still be useful if the large constant cost factor that comes from doing quantum operations instead of classical ones does not swamp it.
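To put rough numbers on that (my own back-of-the-envelope arithmetic, for the simplest case of searching for a single "marked" item, using the standard constants for Grover search):

```python
# Rough query-count arithmetic for unstructured search over N = 2^n items with
# one "marked" item (back-of-the-envelope numbers, not a simulation of Grover).
import math

for n in (20, 40, 60):
    N = 2 ** n
    classical = N / 2                              # expected number of classical evaluations
    grover = (math.pi / 4) * math.sqrt(N)          # approximate number of Grover iterations
    print(f"n = {n}:  classical ~ {classical:.1e}   Grover ~ {grover:.1e}")
```

Even at n = 60 the quantum count is still on the order of a billion evaluations, which is one reason the per-operation overhead of doing things quantumly matters so much.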

Quantum algorithms beyond quantum parallelism

There are of course other scenarios in which quantum computing may be useful that are not well described as "finding out unknown properties of a function by running a circuit for it"---especially those where the problem itself comes from quantum physics, or happens to have a mathematical structure similar to that of quantum physics.  Typically one is still "finding out unknown properties of a quantum circuit by running it", but the circuit is not necessarily a quantum version of a classical function evaluation; rather, it represents some mathematical structure, or physical evolution, that is well described by a quantum circuit.  The most obvious example is of course the evolution of some quantum physical system!  Some of these kinds of problems---like quantum chemistry applications---are among those potentially relevant to the health applications that are the topic of the BNP article.

Ed Dolan on the case for a universal basic income

Ed Dolan suggests that claims that social safety net programs on average don't disincentivize work much may depend on the current relatively limited coverage of such programs (especially compared to the situation before, say, the Reagan administration and the (hopefully soon to be qualified with the word "first") Clinton administration).  He thinks advocates should consider replacing many of these programs with a universal basic income.  There is a lot more one could say about this issue, but I think this is an important point to keep in mind.

Stiglitz likes South African infrastructure investment plans

Since I'm in South Africa this month as a Fellow of the Stellenbosch Institute for Advanced Study (about which more later), I came across this interesting article from a main South African paper, Business Day, about Joseph Stiglitz' involvement in South African economic issues.  According to the article he "voiced strong support for the government’s R840bn infrastructure programme, which he said could create a "virtuous circle" of investment and growth and set SA on the path to a more productive and equal society."  (Link added.) 840 billion rand is 108 billion US dollars at today's rate.  That is roughly 25% of South African nominal GDP as forecast for 2012, but this article, also in Business Day, refers to it as a 20-year rolling program, in which case it is on the order of 1% of GDP annually, depending on the spending profile, future GDP growth, and how the total nominal value of R850bn in planned spending is calculated.  Also interesting is the plan to finance it with mandatory retirement plan savings; while there are probably further details, it sounds on the face of it similar to a social-security type plan.  However it lacks, one suspects, the (rather inappropriate, in my view) feature of US social security as currently (but rather recently, in historic terms) formulated, of being officially described as a trust fund invested entirely in central government securities.  From the second Business Day article cited above:

In the final session of the conference, business leader Bobby Godsell and Zwelinzima Vavi, general secretary of the Congress of South African Trade Unions, both made guarded commitments to this. Mr Godsell said a society-wide discussion was needed on the concept of "a reasonable return" for investment, while Mr Vavi said he backed plans to introduce mandatory savings for all employees.

This sounds like how it should be done.

I have picked up a widespread sense that there is a lot of corruption and siphoning off of funds in the awarding and performance of government contracts in South Africa.  Obviously a big infrastructure programme provides big opportunities for more of this, which can of course be damaging economically and perhaps even more so politically; I suspect, and certainly hope, that Stiglitz has factored this aspect of the South African scene into his thinking and still thinks the plan worthwhile, but it would be interesting to see it addressed directly, as it is certainly not a minor issue.

American Enterprise "Institute" finishes destroying its scholarly credibility

The American Enterprise Institute has fired David Frum.  Presumably this is retribution for publishing his views on the passage of the health care bill---though Frum is a conservative, he thinks it a disaster for American conservatives that they refused to deal with the Democrats on healthcare.  Bruce Bartlett now reports that Frum privately told him several months ago that AEI scholars---Bartlett puts the word "scholar" in quotes, but I won't go that far on a blanket basis---"had been ordered not to speak to the media because they agreed with too much of what Obama was trying to do."

I have long viewed a writer's AEI affiliation as grounds for deep skepticism about even a sensible-sounding argument, because of corporate and political meddling of this sort with their supposedly scholarly programs, but the details had faded over time.  The latest news seriously damages the position of any genuine scholars who still work there, putting guys like Roger Scruton, who has done superb work (The Aesthetics of Architecture, and Art and Imagination are among the best things I've encountered by way of recent philosophy of art, and I certainly read Arthur Danto and Richard Wollheim back in the day), in a difficult position.

A spear carrier's view of the 1994 health care reform debacle (DeLong)

I get annoyed by blogs that are mostly just links to other people's stuff... but I had to link this great post from Brad DeLong on his inside view of the 1994 debacle in health care...

I"ve been wondering for the last few weeks or months... why can't Obama be more like LBJ?  Still, everyone has to operate from their own strengths... he may yet pull off health care reform in his own style, and for those who care about the horse-race aspect of this (and I certainly do inasmuch as it affects the Democrats' chances in the next two elections): by now, anything decent (or even apparently decent) by way of a health care bill will be viewed as a victory for Obama.

I'm worried, though, that although there are apparently European models for well-functioning healthcare that don't involve a public option, they involve nongovernmental nonprofit entities, or high levels of regulation.  Such models may be very difficult to replicate in this country, where they may be easily co-opted or weakened by the money and influence big insurance can put behind things, and by the continual possibility of politically motivated meddling/sabotage.  Ironically, I fear our politics and culture may make a public option more workable than a semiprivate one.