Problem 1: Posted by Debbie Leung (initially Sept 15, 2009, on the wiki Oct 6, 2009).

Assume unlimited computational power. One of two TCP maps N_0 and N_1 is chosen uniformly at random. The chosen map N_b can be called as a black box twice. Suppose there is an adaptive scheme that learns b with certainty from the two calls. What is then the maximum success probability of guessing b correctly with a non-adaptive (parallel) scheme? (This depends on N_0 and N_1; we want the minimum over all pairs of TCP maps admitting such an adaptive scheme.)

Reference: http://arxiv.org/abs/0909.0256 .

What's known:

Note that this problem is equivalent to discriminating between the two states \rho_0 and \rho_1 obtained by applying the same circuit with the respective black box substituted in.

(1) Allowing only one query, for all N_0 and N_1, the maximum success probability is just 1/2 + 1/4 ||N_0-N_1||_\diamond.
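At the level of output states, the formula in (1) is the Helstrom bound. A minimal numerical sketch, assuming a fixed probe state (the diamond norm additionally optimizes over entangled probe states, which this sketch does not attempt):

```python
import numpy as np

def trace_norm(A):
    # Sum of singular values; for Hermitian A this equals the sum of |eigenvalues|.
    return np.sum(np.linalg.svd(A, compute_uv=False))

def helstrom_success(rho0, rho1):
    # Optimal probability of distinguishing two equiprobable states:
    # 1/2 + 1/4 * ||rho0 - rho1||_1  (Helstrom bound).
    return 0.5 + 0.25 * trace_norm(rho0 - rho1)

# Example: outputs of two channels on some fixed probe state.
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
rho1 = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
p = helstrom_success(rho0, rho1)            # = 1/2 + sqrt(2)/4 ~ 0.854
```

Maximizing this quantity over all (possibly entangled) probe states recovers the diamond-norm expression in (1).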

(2) For N_0, N_1 unitary, adaptive and parallel schemes give the same success probability for any number of queries. Thus, for this special case the answer to the above question is 1.
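The unitary case can be explored numerically using a standard eigenvalue criterion (well known, but not stated in this document): U_0 and U_1 are perfectly distinguishable with k parallel queries iff 0 lies in the convex hull of the eigenvalues of (U_0^\dag U_1)^{\otimes k}, and for points on the unit circle this holds iff the largest angular gap between consecutive eigenphases is at most \pi. A sketch:

```python
import numpy as np

def perfectly_distinguishable(U0, U1, uses=1):
    # Criterion: 0 lies in the convex hull of the eigenvalues of
    # (U0^dag U1)^{tensor uses}.  For unit-circle eigenvalues this holds
    # iff the largest angular gap between consecutive eigenphases is <= pi.
    V = U0.conj().T @ U1
    W = V
    for _ in range(uses - 1):
        W = np.kron(W, V)
    phases = np.sort(np.angle(np.linalg.eigvals(W)))
    gaps = list(np.diff(phases)) + [2 * np.pi - (phases[-1] - phases[0])]
    return max(gaps) <= np.pi + 1e-9

U0 = np.eye(2, dtype=complex)
U1 = np.diag([1.0, 1j])                      # eigenphases of U0^dag U1: {0, pi/2}
perfectly_distinguishable(U0, U1, uses=1)    # False: one query is not enough
perfectly_distinguishable(U0, U1, uses=2)    # True: two parallel queries suffice
```

This illustrates why the unitary case is special: tensoring spreads the eigenphases until the gap condition is met, with no adaptivity needed.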

(3) For 2 queries and for any circuit, || \rho_0 - \rho_1 ||_1 \leq 2 || N_0 - N_1 ||_\diamond. Perfect discrimination implies
|| \rho_0 - \rho_1 ||_1 = 2, hence 1 \leq || N_0 - N_1 ||_\diamond, and so, by (1), the probability of success given one query is at least 75%. However, there are pairs of channels for which 2 uses are no better than 1.

Upshot: Our current best lower bound (minimized over choices of N_0, N_1) on the maximum probability of success is 75%, but we do not know whether it is tight.


Problem 2: Initially posted by Steven Girvin Sept 22, 2009, by Debbie Leung on the wiki Oct 6, 2009.

In process tomography, we assume the perfect ability to prepare states and to measure, and the goal is to characterize an unknown TCP map N that can be called as a black box (as many times as we want).

What happens if the state preparation and measurement also carry unknown errors? Here are the scenarios, in increasing order of difficulty:

(1) Assume that the dimensions of all the systems are known. Each state-preparation and measurement box comes with a fixed error that is independent of how it is used. If those errors are not too severe (a better quantification is needed), it is likely that they can be fully characterized prior to their use in the process tomography. (Sanity check: say we can prepare a faulty EPR pair and can measure in the X, Y, Z bases. We then have the equivalent of 4 single-qubit quantum processes to characterize, but only 15 measurement outcomes (up to a factor of 2 or 3). Fixing the number of measurements, each new initial state brings another 15 unknown parameters along with 15 new measurement outcomes, so that does not help. Fixing a single initial state, if we increase the number of measurement settings to n on each side, we have (n+1) x 15 unknown parameters and (n+1) x (n+1) - 1 = n(n+2) outcomes. That starts to look promising.)
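The arithmetic of the sanity check can be reproduced directly. A sketch (this only counts dimensions; whether the resulting equations are actually independent is exactly the question below, and the function name is illustrative):

```python
def qubit_count(n):
    # Sanity check from the text: one faulty EPR source plus n faulty
    # measurement settings per side, each unknown single-qubit process
    # contributing 15 real parameters.
    unknowns = (n + 1) * 15             # (n+1) x 15 unknown parameters
    outcomes = (n + 1) * (n + 1) - 1    # = n(n+2) outcome statistics
    return unknowns, outcomes

# Smallest n for which the outcome count at least matches the number of
# unknowns (a necessary, not sufficient, condition for calibration).
n = 1
while qubit_count(n)[1] < qubit_count(n)[0]:
    n += 1
# n == 15: 240 unknowns vs 255 outcome statistics
```

So in this toy count, roughly 15 measurement settings per side are needed before the statistics could even in principle pin down all the unknowns.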

In general, if the unknown process maps d dimensions to d dimensions, there are k possible initial states, and there are n measurement settings on each side, then there will be k x n(n+2) x f(d) outcomes and (k+n) x (d^4 - 1) unknown parameters.

It seems that, in principle, calibration of devices whose errors are consistent is possible. Then process tomography is also possible.

Question: how much do we know about these devices? Are these measurements really giving independent information?

(2) Assume again that the dimensions of all the systems are known, but now the state preparation and the measurements are erroneous in a way that depends on how the circuit is hooked up (say, the same laser is used to prepare the state and to assist the measurement, and the laser has correlated errors between the two uses). What kinds of dependencies can be handled in our calibration?


Problem 3: Posted by Debbie Leung (initially Sept 28, 2009, on the wiki Oct 6, 2009).

Consider a bipartite system held by Alice and Bob, each side having local dimension d, i.e., n = log d qubits. The goal is to perform a bipartite, 2-outcome projective measurement given by {P, I-P}, where P has rank r. By this, we mean approximating the corresponding quantum operation (on any input) to accuracy \epsilon in the diamond norm, with the bipartite quantum input and output distributed properly between Alice and Bob. Suppose entanglement is given for free. For the most general P subject to the rank constraint, what is the communication cost (forward and backward)? Does the cost depend on whether ebits or arbitrary entangled states are available? (Local computational power is unlimited.)

Reference: http://arxiv.org/abs/0803.3066 .

What do we know already?

First, there is no distinction between quantum and classical communication when ebits are free (teleportation and superdense coding convert one into the other). Also, 2n bits in each direction are enough to teleport the state back and forth, whatever the measurement is. So that gives a universal upper bound when ebits are available.

From the above reference, when r = 1 and when special entangled states are available, log m bits in each direction are sufficient for an accuracy \epsilon = \sqrt{2/m}. So, for accuracy inverse polynomial in the system size n, the communication cost is logarithmic in n. A lower bound of 1 bit is conjectured by Sandu Popescu. Also, if Alice and Bob share only ebits, the communication cost is of order n. Thus, if we can tweak this quantum-input quantum-output problem into a classical-input classical-output problem, it would give a large separation in communication complexity between special entangled states and standard ebits.
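The claimed scaling follows from simple arithmetic on the quoted accuracy \epsilon = \sqrt{2/m}. A sketch (the function name is illustrative):

```python
import math

def rank_one_cost_bits(eps):
    # From the accuracy eps = sqrt(2/m) quoted above: m = 2 / eps^2,
    # and the protocol sends ceil(log2 m) bits in each direction.
    m = math.ceil(2.0 / eps**2)
    return math.ceil(math.log2(m))

# Inverse-polynomial accuracy in the number of qubits n:
n = 1000
eps = 1.0 / n                      # eps = n^(-1)
bits = rank_one_cost_bits(eps)     # O(log n) bits each way
```

For eps = 1/n this gives log2(2 n^2) = 1 + 2 log2(n) bits, i.e., logarithmic in n as stated.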

When r = d(d-1)/2, P can be chosen to be the projection onto the antisymmetric subspace, and the measurement, together with local processing, can be used to simulate the SWAP operation. Thus, the teleportation bound is tight.
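The connection to SWAP rests on the identity SWAP = P_sym - P_anti = I - 2 P_anti, together with the rank count r = d(d-1)/2 for the antisymmetric projector. A small numerical check (d = 3 chosen arbitrarily):

```python
import numpy as np

d = 3
I = np.eye(d * d)

# SWAP on C^d tensor C^d: SWAP |i>|j> = |j>|i>.
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[j * d + i, i * d + j] = 1.0

P_anti = (I - SWAP) / 2    # projector onto the antisymmetric subspace
assert np.allclose(SWAP, I - 2 * P_anti)              # SWAP = P_sym - P_anti
assert round(np.trace(P_anti)) == d * (d - 1) // 2    # rank r = d(d-1)/2
```

So an exact implementation of the {P, I-P} measurement, applied coherently with the +/-1 phases, reproduces SWAP, which is why this choice of P saturates the teleportation upper bound.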