I couldn't help crying "Nooo!!!!" on Twitter at the following statement by Pierre Pariente, "Strategic Analyst at L'Atelier BNP Paribas," from the 2015 article "Quantum Computing Set to Revolutionize the Health Sector":
L’Atelier's more mathematically-inclined readers will recognise the general rule that with n qbits, a quantum computer may be in a quantum superposition of 2^n states and will thus possess the capacity to solve that number of problems simultaneously.
In the interest of not just being negative, I should explain what's wrong with this statement---and why it does capture something relevant to many examples of what most quantum computation theorists believe to be theoretical speed advantages of quantum algorithms over classical algorithms. I'll give the short version first, and maybe in another post I'll explain a bit (groan) more.
What's wrong with quantum parallelism?
If you want, you can run a quantum version of a classical algorithm for solving some problem (say, computing a function on an n-bit input string) on a superposition of all 2^n states representing the different input strings. This is often called "quantum parallelism". If we consider computing the value of some function, F, on each input x to be "solving a different problem", then in some very weak sense we might think of this as "solving 2^n different problems simultaneously". However, the drug designers, or radiotherapists, or whoever one is going to be selling one's quantum computers and software to, are presumably going to want to know the answers to those problems. And you cannot read out of a quantum computer that has run such a "quantum parallel" computation the answers to all of these problems (the value of the function on every input). In fact, you can read out the answer to only one of the problems (the value, F(x), of F on a particular input x). If you have run in superposition and you measure the register in which the algorithm writes the value F(x), you'll get the answer on a randomly drawn input (the famous randomness of quantum theory); if you also measure the part of the computer where the input string was stored, you'll learn which randomly drawn input that was. There is absolutely no advantage here over just running the classical algorithm on a randomly chosen input.
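To make this concrete, here is a minimal sketch in Python (just numpy) of "quantum parallelism followed by readout". The three-(qu)bit register and the function F are toy choices of mine, not anything from the article; the point is only that the measurement statistics match classical random sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3                       # toy choice: 3 input (qu)bits
N = 2 ** n

# An arbitrary toy function F: {0,...,N-1} -> {0,...,N-1}; in applications
# F would be computed by (a reversible version of) some classical circuit.
F = rng.integers(0, N, size=N)

# "Quantum parallelism": starting from the uniform superposition over all
# inputs and running the circuit once gives the state
#     (1/sqrt(N)) * sum_x |x>|F(x)>,
# i.e. amplitude 1/sqrt(N) on each pair (x, F(x)) and zero elsewhere.
amp = 1 / np.sqrt(N)
pairs = [(x, int(F[x])) for x in range(N)]

# Reading out (measuring both registers in the computational basis) yields
# the pair (x, F(x)) with probability |amp|^2 = 1/N each: a single,
# uniformly random input together with its function value, which is exactly
# what running the classical circuit on a random input would give you.
probs = [abs(amp) ** 2] * N
x, fx = pairs[rng.choice(N, p=probs)]
print(f"measured input x = {x}, output F(x) = {fx}")
```

Run it as many times as you like: each run yields one uniformly random pair (x, F(x)), precisely what the classical algorithm applied to a random input would have produced.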
What's right with quantum parallelism?
So how can this quantum parallelism nevertheless be useful? The reason is essentially the existence of many mutually incompatible observables in quantum theory (position and momentum being the most famous pair of incompatible observables, though not the ones usually used in proposals for quantum computation). In particular, quantum theory allows such a superposition state to be effectively (and often efficiently) measured in ways other than reading out the value of F and the input that gave rise to it, ways incompatible with that readout. Such a measurement can tell us certain things about the function F other than its values on specific inputs, and these things may even depend on its values on exponentially many inputs. The incompatibility of these measurements with the one that reads out the input and the function value implies that once the global information is obtained, the information about inputs and function values is no longer available: an example of the well-known fact that "measurement disturbs the state" of a quantum system.
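The cleanest textbook illustration of this is the Deutsch-Jozsa problem (not mentioned in the article, but standard): one quantum-parallel evaluation of a Boolean function f, followed by a measurement in the Hadamard basis rather than the computational basis, decides with certainty the global property "is f constant, or balanced?". Here is a small numpy simulation; the particular constant and balanced functions are illustrative choices of mine.

```python
import numpy as np

def hadamard_all(psi, n):
    """Apply a Hadamard gate to each of the n qubits of state vector psi."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    psi = psi.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(H, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

n = 3
N = 2 ** n

# Toy Boolean functions on n-bit inputs (illustrative choices): one
# constant, one balanced (outputs 0 and 1 on equally many inputs).
functions = {
    "constant": lambda x: 0,
    "balanced": lambda x: bin(x).count("1") % 2,   # parity of the bits
}

for name, f in functions.items():
    psi = np.full(N, 1 / np.sqrt(N))   # uniform superposition over inputs
    for x in range(N):                 # ONE quantum-parallel evaluation of f,
        psi[x] *= (-1) ** f(x)         # applied as a phase: |x> -> (-1)^f(x) |x>
    psi = hadamard_all(psi, n)         # rotate to the incompatible (Hadamard) basis
    p_all_zeros = abs(psi[0]) ** 2     # probability of the all-zeros outcome
    print(f"{name}: Pr(outcome 00...0) = {p_all_zeros:.3f}")
# prints 1.000 for the constant function, 0.000 for the balanced one
```

The Hadamard transform before readout is exactly the kind of incompatible measurement described above: after it, the individual values f(x) are no longer available, but one global bit about f has been extracted from a single evaluation.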
Why do we care?
Many problems of practical interest take this form: we have a program (say, a classical circuit, from which we can always construct a quantum circuit) for computing a function F, but we want to know something about the function that we don't know how to just read off from the information we used to construct the circuit. For example, if F takes values in an ordered set (like the real numbers, to some precision) we might want to know its maximum value, and probably also an input on which it takes this maximum. (This application, "optimization", is mentioned in the article.) Or, we might know, because of the way the circuit for computing the function is constructed, that it is periodic (we will need the set of inputs to have some structure that allows us to define periodicity, of course; maybe the input strings are interpreted as binary notation for integers, for example), but we might not know, and might want to know, the period. To oversimplify a bit, solving this problem is the key ingredient in Shor's algorithm for factoring large integers using quantum computation, the one that breaks currently used public-key cryptography protocols.

If we could compute and read out the values of the function on all inputs, of course, we could figure out just about anything we wanted about it. But since there are 2^n inputs, doing things this way is going to take exponential (in n) resources, e.g. 2^n calculations of the function. To repeat, it is the fact that quantum computing allows such a superposition state to be effectively measured in ways other than reading out the value of F that gives us potential access, after just one function evaluation, to a certain amount of information about "global" aspects of the function, like its period, that depend on many values. We are still very limited as to how much information we can get about the function in one run of a "quantum parallel" function evaluation: certainly, it is only polynomially much (in n), not the exponentially much one could get by reading out its value on all inputs. But in some cases we can get significant information about some global aspect of the function that we care about, with a number of quantum-parallel function evaluations that, had they been mere classical function evaluations, would have given us far less information, or even none, about that global aspect of the function.

How significant this speedup is may depend on what global information we want to know about the function, and on what we know about it ahead of time. If we know it is periodic, then in good cases we can use polynomially many evaluations to get the period in situations where it's thought we'd need exponentially many classical evaluations. This is the basis of the "near-exponential" speedup of factoring by Shor's algorithm (it is not quite exponential, presumably because the best classical factoring algorithm does not just brute-force search for the period that the quantum algorithm finds, but is more sophisticated). For relatively unstructured optimization problems, quantum algorithms usually make use of Grover's algorithm, which, again, does use quantum parallelism, and can find the optimum to high precision with roughly the *square root* of the number of function evaluations needed classically.
If the classical algorithm needs (some constant times) 2^n evaluations, in other words, the quantum one will need roughly (some constant times) 2^{n/2}: still exponential, though with half the exponent, an advantage that could still be useful if the large constant cost factor that comes from doing quantum operations instead of classical ones does not swamp it.
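To tie the period-finding story together, here is a toy end-to-end simulation in Python. Everything in it is an illustrative choice of mine: the function f(x) = 7^x mod 15 (which has period r = 4), the 8-(qu)bit input register, and numpy's FFT standing in for the quantum Fourier transform. (Measuring the output register first, as done here, isn't actually necessary in Shor's algorithm, but it makes the logic transparent.)

```python
import numpy as np

rng = np.random.default_rng(1)

n = 8
N = 2 ** n                  # size of the input register (toy choice)
a, M = 7, 15                # f(x) = 7^x mod 15, which has period r = 4

f = np.array([pow(a, x, M) for x in range(N)])

# One quantum-parallel evaluation gives (1/sqrt(N)) * sum_x |x>|f(x)>.
# Measuring the OUTPUT register picks a random value v of f and collapses
# the input register onto the inputs with f(x) = v: an evenly spaced comb
# x0, x0 + r, x0 + 2r, ... whose spacing is the unknown period r.
v = f[rng.integers(N)]
comb = np.flatnonzero(f == v)
psi = np.zeros(N, dtype=complex)
psi[comb] = 1 / np.sqrt(len(comb))

# Fourier-basis measurement of the input register (the incompatible
# measurement): np.fft.fft, normalized to be unitary, stands in for the
# quantum Fourier transform.  Since r divides N here, the outcome
# distribution is supported on the multiples of N/r, independent of the
# random offset x0.
probs = np.abs(np.fft.fft(psi) / np.sqrt(N)) ** 2
k = rng.choice(N, p=probs / probs.sum())
print(f"measured k = {k};  k * r / N = {k * 4 / N} (an integer, since r = 4)")
```

Here the measured k always lands on a multiple of N/r = 64, so a few repetitions, plus some classical arithmetic on the results, pin down the period; a classical algorithm that could only evaluate f at chosen inputs would generically need vastly more evaluations before even seeing two equal outputs.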
Quantum algorithms beyond quantum parallelism
There are of course other scenarios in which quantum computing may be useful that are not well described as "finding out unknown properties of a function by running a circuit for it", especially those where the problem itself comes from quantum physics, or happens to have a mathematical structure similar to that of quantum physics. Typically these still have the property that one is "finding out unknown properties of a quantum circuit by running it", but the circuit need not be a quantum version of a classical function evaluation; rather, it represents some mathematical structure, or physical evolution, that is well described by a quantum circuit. The most obvious example is of course the evolution of some quantum physical system! Some of these kinds of problems, like quantum chemistry applications, are among those potentially relevant to the health applications that are the topic of the BNP article.