Robin Blume-Kohout

Mar 30 2017 16:55 UTC

I agree with much of your comment. But the assertion you're disagreeing with isn't really mine. I was trying to summarize the content of the present paper (and 1702.01853, hereafter referred to as [PRYSB]). I'll quote a few passages from the present paper to support my interpretation:

1. "[T]he `weakly' gate-dependent noise condition assumed in Ref. [27] does not hold for realistic noise... For the analysis of Ref. [27] to be valid, theta has to be calibrated to within 1e-5 radians" (page 1).

2. "[T]he original analysis of Ref. [27] only applies in limited regimes" (page 2).

3. "...even when the analysis of Ref. [27] differs from the observed decay rate by orders of magnitude." (page 10).

4. "As observed by Ref. [29], the predicted decay using the value from Ref. [27] is in stark disagreement with the observed value." (Fig. 1)

5. "...even when the analysis of Ref. [27] is inconsistent by orders of magnitude." (page 12).

When you say "the previous theory was sound", I'm guessing you mean that error bounds on the 0th and 1st order models (the latter of which is explicitly intended as a predictive theory for gate-dependent noise) were derived, and the predictions are correct to within those error bounds. I concur (and [PRYSB] stated this).

But while the theorems are correct, the error bounds were ignored so thoroughly in practice (including by some of the cognoscenti) that I think it's difficult to avoid associating "the previous theory" with the explicit prediction of the 1st order model in [27], *sans* error bounds. That's my reading of the quotes above (even though it's 100% clear in other parts of the paper that Joel appreciates the nuance about error bounds).

Two more questions for you (re: your comment):

1. You suggest the key is to "reinterpret what one means by `the true gates' such that the [...] RB fidelity matches the average fidelity of the gates". This wasn't quite how I was reading the paper. I would have said that this paper's key contribution is to redefine **the error in each gate**. In previous theory, this was defined as $C^\dagger \tilde{C}$, but Joel shows that if we define it instead as $\mathcal{R}\mathcal{L}$, with a particular choice of the non-unique $\tilde{C} = \mathcal{L} C \mathcal{R}$ decomposition, then happiness ensues. It seems like the target gates aren't reinterpreted, just what "the noise on the gate" means? (See the toy sketch after question 2 below.)

2. With apologies if this sounds defensive... where in [properly done] GST does one "attempt to ascribe an invariant meaning to a gauge-dependent quantity?" AFAIK, every publication on GST has been aware of gauge, and when gauge-variant quantities are requested, the standard approach is to turn them into gauge-invariant ones by explicitly minimizing them over gauge. While this is certainly an imperfect approach, I'd suggest that the GST community has been in the forefront of *not* ascribing invariant meaning to gauge-dependent quantities. (A bare-bones sketch of what I mean by "minimize over gauge" also follows below.)
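
To make question 1 concrete, here's a toy numpy sketch of the distinction (my own illustration, not code from the paper; unitary noise only, so channels are plain $2 \times 2$ unitaries). With one fixed, gate-independent pair $\mathcal{L}, \mathcal{R}$, the old per-gate error $C^\dagger \tilde{C}$ comes out different for every gate, while the new definition $\mathcal{R}\mathcal{L}$ is a single, shared error:

```python
import numpy as np

def rot(axis, theta):
    """Single-qubit rotation by angle theta about a Pauli axis."""
    paulis = {'x': np.array([[0, 1], [1, 0]], dtype=complex),
              'z': np.array([[1, 0], [0, -1]], dtype=complex)}
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * paulis[axis]

# Two target gates, and one fixed pair (L, R) of small unitary errors.
C1, C2 = rot('x', np.pi / 2), rot('z', np.pi / 2)
L, R = rot('z', 1e-2), rot('x', 1e-2)

# Noisy implementations, all of the form C_tilde = L C R with the *same* L, R.
C1_tilde, C2_tilde = L @ C1 @ R, L @ C2 @ R

# Old definition: error_i = C_i^dagger C_i_tilde.  Conjugating L through C_i
# depends on C_i, so "the error" looks different for every gate.
err1_old = C1.conj().T @ C1_tilde
err2_old = C2.conj().T @ C2_tilde
print(np.allclose(err1_old, err2_old))  # False

# New definition: error = R L.  One error channel, shared by all gates.
err_new = R @ L
print(np.round(err_new, 4))
```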
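And for question 2, a bare-bones sketch of the "minimize over gauge" procedure I'm describing (schematic only, not pyGSTi's actual implementation; the Frobenius figure of merit and the near-identity parametrization of $M$ are just illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize

def gauge_transform(gates, M):
    """Apply the gauge transformation G -> M G M^{-1} to every gate."""
    Minv = np.linalg.inv(M)
    return [M @ G @ Minv for G in gates]

def raw_distance(gates, targets):
    """A gauge-VARIANT figure of merit: summed Frobenius distance to targets."""
    return sum(np.linalg.norm(G - T) for G, T in zip(gates, targets))

def gauge_optimized_distance(est_gates, targets, dim=4):
    """Report the figure of merit only after minimizing it over the gauge.

    dim is the superoperator dimension (4 for one qubit in the Pauli
    transfer matrix representation, where gate matrices are real).
    """
    def objective(x):
        M = np.eye(dim) + x.reshape(dim, dim)  # parametrize M near the identity
        return raw_distance(gauge_transform(est_gates, M), targets)

    result = minimize(objective, np.zeros(dim * dim), method='Nelder-Mead',
                      options={'maxiter': 20000})
    return result.fun
```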

Mar 30 2017 12:07 UTC

That's a hard question to answer. I suspect that on any question that isn't precisely stated (and technical), there's going to be some disagreement between the authors of the two papers. After one read-through, my tentative view is that the two papers address three pretty parallel topics. They come to similar conclusions about the first two: (1) the pre-existing theory of RB didn't give the right answers in some circumstances (sometimes by orders of magnitude); and (2) RB fidelity decays are always really close to exponential decays. [This paper's analysis appears to be stronger, as it should be; I think it gives a different reason for (1) and a stronger proof of (2).] The two papers appear to come to different conclusions about the third shared question: (3) does the RB number correspond to a "fidelity"? The earlier paper concluded "not obviously, and definitely not the usual definition." This paper concludes that it does correspond to a fidelity. I think the argument/derivation looks intriguing and promising, but I haven't wrapped my head around it yet.
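
For concreteness about what "the RB number" in (3) means operationally: fit the average survival probability at sequence length $m$ to $A p^m + B$, and report $r = (d-1)(1-p)/d$. A minimal fitting sketch on hypothetical, noiseless data (this is just the standard convention, not code from either paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Standard RB decay model: average survival probability vs. length m."""
    return A * p**m + B

def rb_number(seq_lengths, survival_probs, d=2):
    """Fit the decay and return r = (d-1)(1-p)/d (d = 2 for one qubit)."""
    (A, p, B), _ = curve_fit(rb_decay, seq_lengths, survival_probs,
                             p0=[0.5, 0.99, 0.5])
    return (d - 1) * (1 - p) / d

# Hypothetical example data, generated from the model with p = 0.995:
m = np.array([1, 2, 4, 8, 16, 32, 64, 128])
probs = 0.5 * 0.995**m + 0.5
print(rb_number(m, probs))  # ~0.0025
```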

Oh, and obviously I'm not an unbiased observer, as I'm a co-author of the earlier paper.

Feb 28 2017 09:55 UTC
Robin Blume-Kohout commented on A loophole in quantum error correction

I totally agree -- that part is confusing. It's not clear whether "arbitrary good precision ... using a limited amount of hardware" is supposed to mean that arbitrarily low error rates can be achieved with codes of fixed size (clearly wrong) or just that the resources required to achieve arbitrarily low error rates scale well (polylog) with the error rate. I assumed the latter, but maybe the former was meant.

Either way, carefully deconstructing the language is probably not a good use of time. I think the paper is well-intentioned, but not correct; as you point out, this *has* been thought of. Another colleague pointed out to me (by email) that there's a much better and more elegant resolution than the one I suggested; the extended rectangle formalism provides a definition of "FTQEC succeeds" that incorporates this issue *and* others.

Feb 27 2017 13:30 UTC
Robin Blume-Kohout commented on A loophole in quantum error correction

@Chris: as Ben says, the model for measurement errors is "You measure in a basis that's off by a small rotation".

@Ben: I don't think either of the techniques you mention will directly resolve the paper's concern/confusion. That concern is with the post-QEC state of the system. That state isn't invariant under re-expressing measurement noise as a small unitary followed by measurement in the right basis. And repeated measurement in the same (wrong) basis won't fix the putative problem either.

@James: You're right that this has been thought about before. However, the paper isn't complaining that asymptotic FT fails; it's complaining that the O(p^2) error scaling that *is* supposed to hold for distance-3 codes fails. It's true that repeated measurement would be necessary to achieve that scaling, but it's not sufficient to fix the putative problem.

The basic problem here is misinterpretation of the "success" condition for QEC. This paper assumes that the metric of success is the overlap between the post-QEC state $|\Psi\rangle$ and the predefined $|0_L\rangle$ logical state. But it's not. It's actually kinda hard to precisely define what "success" means in a completely rigorous way. The simplest way I know of is to consider the sequence: (1) logical prep in one of the 4 BB84 states; (2) $N$ rounds of QEC; (3) logical measurement (in whichever of X or Z commutes with the desired initial prep). Then, look at the probability of correctly deducing the initial state, and fit it to $(1-p)^N$.
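
A minimal sketch of the fit in step (3), on hypothetical data (steps (1)-(2) would come from an experiment or simulation; SPAM offsets are ignored for simplicity):

```python
import numpy as np
from scipy.optimize import curve_fit

def success_model(N, p):
    """Probability of correctly deducing the initial state after N QEC rounds."""
    return (1 - p)**N

# Hypothetical data: success probabilities after N rounds of QEC.
N_rounds = np.array([1, 5, 10, 20, 50, 100])
p_success = np.array([0.9990, 0.9950, 0.9900, 0.9802, 0.9512, 0.9048])

(p_fit,), _ = curve_fit(success_model, N_rounds, p_success, p0=[1e-3])
print(f"per-round logical error rate p = {p_fit:.2e}")  # ~1.0e-03
```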

If your syndrome measurements are slightly rotated as in this paper, you're actually performing good QEC in a slightly deformed code. Which means that $|0_L\rangle$ moves around, depending on the measurement error. FTQEC still works, as measured by the operational criterion I gave above, but the naive criterion "Overlap with the $|0_L\rangle$ that I intended to implement" doesn't reveal it.
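
To see the "$|0_L\rangle$ moves around" point in the smallest possible example (a 3-qubit repetition code; my toy construction, not from the paper): rotating every syndrome measurement by $\theta$ amounts to doing ideal QEC in a code whose codewords are conjugated by the same rotation, so the naive overlap criterion degrades even though nothing has failed.

```python
import numpy as np

theta = 0.05  # small rotation of each syndrome measurement basis

# Single-qubit rotation u, applied to all 3 qubits of the repetition code.
u = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
              [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
U = np.kron(np.kron(u, u), u)

zero_L = np.zeros(8, dtype=complex)
zero_L[0] = 1.0               # intended |0_L> = |000>
zero_L_deformed = U @ zero_L  # where the deformed code's |0_L> actually sits

# Naive criterion: overlap with the *intended* |0_L> is less than 1 ...
print(abs(np.vdot(zero_L, zero_L_deformed))**2)  # cos(theta/2)**6 ~ 0.998
# ... even though measuring the conjugated stabilizers U S U^dagger protects
# U|0_L> exactly as well as the ideal code protects |000>.
```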

Feb 14 2017 19:53 UTC
Robin Blume-Kohout scited Logical Randomized Benchmarking
Feb 09 2017 08:05 UTC
Robin Blume-Kohout scited SICs and Algebraic Number Theory
Oct 13 2016 10:58 UTC
Robin Blume-Kohout scited Noise Threshold of Quantum Supremacy
Oct 12 2016 16:05 UTC
Robin Blume-Kohout scited Quantum Tokens for Digital Signatures
Oct 12 2016 15:59 UTC
Robin Blume-Kohout scited Adaptive quantum tomography