Robin Blume-Kohout


Apr 14 2017 03:03 UTC

Okay, I see the resolution to my confusion now (and admit that I was confused). Thanks to Michel, Marcus, Blake, and Steve!

Since I don't know the first thing about cyclotomic field norms... can anybody explain the utility of this norm, for this problem? I mean, just to be extreme, I could define a trivial norm that is 1 for all vectors except $\vec{0}$, and then all rank-1 POVMs would be equiangular. I'm not by any means saying that this is what's done here! My point is that there exist norms for which equi-angularity is less interesting than others. The Hilbert-Schmidt norm is very relevant for quantum states in Hilbert spaces, because it's what appears in Born's rule. What can I do with this field norm that makes it interesting and relevant?

P.S. @Marcus, if I'm understanding this correctly, then whenever two pairs have equal Hilbert-Schmidt norm, they will have equal field norm (but different H-S norms can correspond to equal field norms). So SICs should still be equiangular in field norm. Unless I'm misunderstanding again!

Apr 13 2017 15:40 UTC

This appears to be an odd and nonstandard definition of "equiangular", unless I'm missing something? Most references I'm aware of, including [Wikipedia][1] and [Renes et al 2004][2], agree that "a set of lines is called equiangular if every pair of lines makes the same angle". For unit vectors (rays), that means they have the same inner product (or absolute value thereof).

In the case you're describing, it sounds like the angles are not all equal, but chosen from a small discrete set, as (e.g.) in [this recent paper][3], or some of the older references therein. MUBs and stabilizer states are examples of k-angle tight frames, albeit not minimal ones.

[1]: https://en.wikipedia.org/wiki/Equiangular_lines
[2]: https://arxiv.org/abs/quant-ph/0310075
[3]: https://arxiv.org/abs/1605.09429
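
(For concreteness, here's a minimal numerical check of the standard definition -- my own sketch, not taken from either reference: build the qubit SIC from tetrahedral Bloch vectors and verify that every pair of projectors has the same Hilbert-Schmidt overlap $\mathrm{Tr}(\rho_i \rho_j) = 1/(d+1)$. For pure states $\mathrm{Tr}(\rho_i\rho_j) = |\langle\psi_i|\psi_j\rangle|^2$, so this is the same "common angle" statement as in the vector picture.)

```python
import itertools
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bloch vectors of a regular tetrahedron give the d = 2 SIC
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

# Rank-1 projectors rho = (I + r.sigma)/2
projectors = [(I2 + r[0] * X + r[1] * Y + r[2] * Z) / 2 for r in bloch]

# "Equiangular": every pair has the same Hilbert-Schmidt overlap Tr(rho_i rho_j)
overlaps = [np.trace(P @ Q).real for P, Q in itertools.combinations(projectors, 2)]
print(np.round(overlaps, 6))         # six pairs, all equal to 1/(d+1) = 1/3
print(np.allclose(overlaps, 1 / 3))  # True
```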

Apr 07 2017 20:30 UTC
Robin Blume-Kohout commented on Quantum advantage with shallow circuits

Zak, David: thanks! So (I think) this is a relation problem, not a decision problem (or even a partial function). Which is fine -- I'm happier with relation problems than with sampling problems, and the quantum part of Shor's algorithm is solving a relation problem, which is a pretty good pedigree. And this is clearly a very very cool result.

I do wish I had a better intuition for the practical implications -- by which I mean "How plausibly could this be leveraged to prove polynomial separations for decision problems?" Solving the relation problem at the heart of Shor immediately gives you an extension to the function/decision problem of factoring by tacking on the GCD algorithm. But situations where the circuit depth is dwarfed by the resources required to perform serial I/O on the input and/or output give me a headache!
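
(For anyone following along, here's a minimal sketch of that classical tack-on step -- standard textbook post-processing, nothing specific to this paper: given the order $r$ of $a$ modulo $N$, which is the output of the quantum relation problem, a couple of GCDs usually yield a factor.)

```python
from math import gcd

def factor_from_order(N, a, r):
    """Classical post-processing in Shor's algorithm: given the order r of
    a modulo N (the output of the quantum relation problem), try to extract
    a nontrivial factor of N. Returns None if r is odd or if a**(r/2) is
    congruent to -1 mod N (then one retries with a different a)."""
    if r % 2 != 0:
        return None
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None
    for candidate in (gcd(x - 1, N), gcd(x + 1, N)):
        if 1 < candidate < N:
            return candidate
    return None

# Toy example: N = 15, a = 7 has order 4 (7^4 = 2401 = 160*15 + 1)
print(factor_from_order(15, 7, 4))   # prints 3
```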

Anyway, thanks very much for the clear explanations.

Apr 06 2017 15:05 UTC
Robin Blume-Kohout commented on Quantum advantage with shallow circuits

Is it okay to be a quantum supremacist? I thought I was, but maybe if it's "tainted" I should reconsider.

On a more serious note... a question for somebody who has read (or written) the paper. If the computation is performed on Poly(n) qubits, and all of them are relevant, and you are only allowed bounded fan-in... how do you read out the answer without using classical postprocessing of depth log(n) to concentrate the relevant information into one place?

(I'm totally happy for this to be a naive question; educate me?)

Mar 31 2017 20:05 UTC

I agree that we pretty much agree on all these points! For the record, though... when you describe the pre-2017 state of play as "...*someone wants to use this theory, but they can't match the sufficient conditions, so they appeal to heuristics to argue that they can use the theory anyway*," this implies that most practitioners were aware of how loose the error bounds on the estimated RB number were, but used them anyway (appealing to heuristics). My impression was that very few scientists realized just how loose the bounds were -- and that the published estimates were being used in the mistaken belief that they were precise.

Fortunately, this is all water under the bridge now! Hopefully, Joel's new theory does indeed provide a precise, computable, and interpretable (as fidelity) estimate of the RB number.

Inasmuch as I've been able to work through it, the paper makes sense to me so far. The one real concern I have is whether the new estimate -- $1-p(\mathcal{R}\mathcal{L})$ -- is actually computable. There are many, many ways to choose $\mathcal{R}$ and $\mathcal{L}$, and not all of them yield useful estimates. If I understand correctly, Theorems 2-4 prove that there exists a "good" choice, such that $r \approx 1-p(\mathcal{R}\mathcal{L})$. Theorem 2 gives a generalized eigenvalue problem from which suitable $\mathcal{R}$ and $\mathcal{L}$ can be extracted -- but these are still non-unique, and you have to find ones consistent with Theorem 3. At this point I get a blinding headache, and I can't figure out whether there's a simple explicit formula for the estimate (as there was in the old theory). Maybe Joel will tell us.

Finally... well trolled. :) In the interest of staying on topic, I'll just say: yes, good point, I agree that it would be a really good idea to think hard about different choices of gauge -- we've done some of that, but more would be good. The Right Thing To Do is to use gauge-invariant quantities (like the RB number!) exclusively, but that seems hard.

Mar 30 2017 16:55 UTC

I agree with much of your comment. But, the assertion you're disagreeing with isn't really mine. I was trying to summarize the content of the present paper (and 1702.01853, hereafter referred to as [PRYSB]). I'll quote a few passages from the present paper to support my interpretation:

1. "[T]he `weakly' gate-dependent noise condition assumed in Ref. [27] does not hold for realistic noise... For the analysis of Ref. [27] to be valid, theta has to be calibrated to within 1e-5 radians" (page 1).

2. "[T]he original analysis of Ref. [27] only applies in limited regimes" (page 2).

3. "...even when the analysis of Ref. [27] differs from the observed decay rate by orders of magnitude." (page 10).

4. "As observed by Ref. [29], the predicted decay using the value from Ref. [27] is in stark disagreement with the observed value." (Fig. 1)

5. "...even when the analysis of Ref. [27] is inconsistent by orders of magnitude." (page 12).

When you say "the previous theory was sound", I'm guessing you mean that error bounds on the 0th and 1st order models (the latter of which is explicitly intended as a predictive theory for gate-dependent noise) were derived, and the predictions are correct to within those error bounds. I concur (and [PRYSB] stated this).

But while the theorems are correct, the error bounds were ignored so thoroughly in practice (including by some of the cognoscenti) that I think it's difficult to avoid associating "the previous theory" with the explicit prediction of the 1st order model in [27], *sans* error bounds. Which is my reading of the quotes above (even though it's 100% clear in other parts of the paper that Joel appreciates the nuance about error bounds).

Two more questions for you (re: your comment):

1. You suggest the key is to "reinterpret what one means by `the true gates' such that the [...] RB fidelity matches the average fidelity of the gates". This wasn't quite how I was reading the paper. I would have said that this paper's key contribution is to redefine **the error in each gate**. In previous theory, this was defined as $C^\dagger \tilde{C}$, but Joel shows that if we define it instead as $\mathcal{R}\mathcal{L}$, with a particular choice of the non-unique $\tilde{C} = \mathcal{L} C \mathcal{R}$ decomposition, then happiness ensues. It seems like the target gates aren't reinterpreted, just what "the noise on the gate" means? (A toy illustration of the non-uniqueness is sketched below, after question 2.)

2. With apologies if this sounds defensive... where in [properly done] GST does one "attempt to ascribe an invariant meaning to a gauge-dependent quantity?" AFAIK, every publication on GST has been aware of gauge, and when gauge-variant quantities are requested, the standard approach is to turn them into gauge-invariant ones by explicitly minimizing them over gauge. While this is certainly an imperfect approach, I'd suggest that the GST community has been in the forefront of *not* ascribing invariant meaning to gauge-dependent quantities.
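
(For what it's worth, here's the promised toy linear-algebra illustration of that non-uniqueness -- random matrices standing in for superoperators, and emphatically *not* the paper's actual construction or its preferred choice of $\mathcal{L}$ and $\mathcal{R}$.)

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # superoperator dimension (e.g., a single-qubit Pauli transfer matrix is 4x4)

C = rng.normal(size=(d, d))                    # stand-in for the ideal gate's superoperator
C_tilde = C + 0.01 * rng.normal(size=(d, d))   # stand-in for the noisy implementation

def decompose(R):
    """Given any invertible R, solve C_tilde = L @ C @ R for L."""
    L = C_tilde @ np.linalg.inv(R) @ np.linalg.inv(C)
    return L, R

# Two different (perfectly valid) choices of R ...
L1, R1 = decompose(np.eye(d))
L2, R2 = decompose(np.eye(d) + 0.05 * rng.normal(size=(d, d)))

# ... both reproduce the same noisy gate,
print(np.allclose(L1 @ C @ R1, C_tilde), np.allclose(L2 @ C @ R2, C_tilde))  # True True

# ... but they assign different "error maps" R @ L to it.
print(np.allclose(R1 @ L1, R2 @ L2))  # False
```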

Mar 30 2017 12:07 UTC

That's a hard question to answer. I suspect that on any questions that aren't precisely stated (and technical), there's going to be some disagreement between the authors of the two papers.

After one read-through, my tentative view is that each of the two papers addresses three topics which are pretty parallel. They come to similar conclusions about the first two: (1) The pre-existing theory of RB didn't give the right answers in some circumstances (sometimes by orders of magnitude); and (2) RB fidelity decays are always really close to exponential decays. [This paper's analysis appears to be stronger, as it should be; I think it gives a different reason for (1) and a stronger proof for (2)].

The two papers appear to come to different conclusions about the third shared question: (3) Does the RB number correspond to a "fidelity"? The earlier paper concluded "Not obviously, and definitely not the usual definition." This paper concludes that it does correspond to a fidelity. I think the argument/derivation looks intriguing and promising, but I haven't wrapped my head around it yet.

Oh, and obviously I'm not an unbiased observer, as I'm a co-author of the earlier paper.
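
(For readers less steeped in RB jargon: the "RB number" both papers argue about is the $r$ extracted from a fit like the minimal sketch below -- made-up data and the standard zeroth-order fit model, nothing specific to either paper.)

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, B, p):
    """Standard zeroth-order RB model: average survival probability vs. sequence length m."""
    return A * p**m + B

# Made-up survival probabilities for a single qubit (d = 2)
lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
true_A, true_B, true_p = 0.48, 0.5, 0.995
rng = np.random.default_rng(1)
data = rb_decay(lengths, true_A, true_B, true_p) + 0.002 * rng.normal(size=lengths.size)

(A, B, p), _ = curve_fit(rb_decay, lengths, data, p0=[0.5, 0.5, 0.99])

d = 2
r = (d - 1) * (1 - p) / d   # the "RB number", conventionally reported as an average gate infidelity
print(f"fitted p = {p:.4f}, RB number r = {r:.2e}")
```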

Feb 28 2017 09:55 UTC
Robin Blume-Kohout commented on A loophole in quantum error correction

I totally agree -- that part is confusing. It's not clear whether "arbitrary good precision ... using a limited amount of hardware" is supposed to mean that arbitrarily low error rates can be achieved with codes of fixed size (clearly wrong) or just that the resources required to achieve arbitrarily low error rates scale well (polylog) with the error rate. I assumed the latter, but maybe the former was meant.
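
(To spell out the second reading with a back-of-envelope model -- my own toy numbers, using a generic threshold-theorem-style scaling $p_L \sim A\,(p/p_{\rm th})^{(d+1)/2}$, not anything from the paper: the code distance you need grows only logarithmically in the target error rate.)

```python
def distance_needed(target, p=1e-3, p_th=1e-2, A=0.1):
    """Smallest odd distance d with A*(p/p_th)**((d+1)/2) at or below the target.
    Toy threshold-theorem-style model; only the logarithmic trend matters."""
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2
    return d

for eps in (1e-6, 1e-9, 1e-12, 1e-15):
    print(f"target logical error {eps:.0e} -> distance {distance_needed(eps)}")
# Each 1000x improvement in the target costs only a constant increment in d, i.e.
# resources (e.g., the ~d^2 qubits of a surface-code patch) scale polylogarithmically.
```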

Either way, carefully deconstructing the language is probably not a good use of time. I think the paper is well-intentioned, but not correct; as you point out, this *has* been thought of. Another colleague pointed out to me (by email) that there's a much better and more elegant resolution than the one I suggested; the extended rectangle formalism provides a definition of "FTQEC succeeds" that incorporates this issue *and* others.

Feb 27 2017 13:30 UTC
Robin Blume-Kohout commented on A loophole in quantum error correction

@Chris: as Ben says, the model for measurement errors is "You measure in a basis that's off by a small rotation".

@Ben: I don't think either of the techniques you mention will directly resolve the paper's concern/confusion. That concern is with the post-QEC state of the system. That state isn't invariant under re-expressing measurement noise as a small unitary followed by measurement in the right basis. And repeated measurement in the same (wrong) basis won't fix the putative problem either.

@James: You're right that this has been thought about before. However, the paper isn't complaining that asymptotic FT fails; it's complaining that the O(p^2) error scaling that *is* supposed to hold for distance-3 codes fails. It's true that repeated measurement would be necessary to achieve that scaling, but it's not sufficient to fix the putative problem.

The basic problem here is misinterpretation of the "success" condition for QEC. This paper assumes that the metric of success is the overlap between the post-QEC state $|\Psi\rangle$ and the predefined $|0_L\rangle$ logical state. But it's not. It's actually kinda hard to precisely define what "success" means in a completely rigorous way. The simplest way I know of is to consider the sequence: (1) logical prep in one of the 4 BB84 states; (2) $N$ rounds of QEC; (3) logical measurement (in whichever of X or Z commutes with the desired initial prep). Then, look at the probability of correctly deducing the initial state, and fit it to $(1-p)^N$.
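
(A minimal sketch of that fitting step, with synthetic data -- I'm assuming an effective per-round logical error rate and a 0.5 guessing floor for the two-outcome logical measurement, neither of which is spelled out above.)

```python
import numpy as np

# Synthetic data for the protocol above: prep, N rounds of QEC, logical readout.
# Assumed model (mine): P(correct) = 0.5 + 0.5*(1 - p)^N for per-round error rate p.
rng = np.random.default_rng(2)
rounds = np.arange(1, 200, 10)
p_true = 0.01
data = 0.5 + 0.5 * (1 - p_true) ** rounds + 0.003 * rng.normal(size=rounds.size)

# Fit: log(2*P - 1) is linear in N with slope log(1 - p).
slope, _ = np.polyfit(rounds, np.log(2 * data - 1), 1)
p_fit = 1 - np.exp(slope)
print(f"fitted per-round logical error rate: {p_fit:.4f}")   # ~0.01
```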

If your syndrome measurements are slightly rotated as in this paper, you're actually performing good QEC in a slightly deformed code. Which means that $|0_L\rangle$ moves around, depending on the measurement error. FTQEC still works, as measured by the operational criterion I gave above, but the naive criterion "Overlap with the $|0_L\rangle$ that I intended to implement" doesn't reveal it.
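
(A one-qubit caricature of that last point -- purely my own illustration, not the paper's codes or noise model: a measurement in a slightly tilted basis is a *perfect* measurement of a tilted observable, so the state it stabilizes is a tilted $|0\rangle$ rather than the intended one, even though nothing has "failed".)

```python
import numpy as np

theta = 0.05  # small over-rotation of the measurement basis

# Measuring Z in a basis tilted by Ry(theta) is the same as perfectly measuring
# the rotated observable U Z U^dagger.
Z = np.diag([1.0, -1.0])
U = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])
Z_tilted = U @ Z @ U.conj().T

# The +1 eigenvector of the tilted observable is U|0>, the "deformed" codeword...
vals, vecs = np.linalg.eigh(Z_tilted)
deformed_zero = vecs[:, np.argmax(vals)]

# ...which the tilted measurement stabilizes perfectly,
print(np.allclose(Z_tilted @ deformed_zero, deformed_zero))      # True

# ...but whose overlap with the intended |0> is only cos^2(theta/2) < 1.
zero = np.array([1.0, 0.0])
print(abs(zero @ deformed_zero) ** 2, np.cos(theta / 2) ** 2)    # equal, ~0.99938
```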

Feb 14 2017 19:53 UTC
Robin Blume-Kohout scited Logical Randomized Benchmarking
Feb 09 2017 08:05 UTC
Robin Blume-Kohout scited SICs and Algebraic Number Theory