...(continued)We also struggled with a name, since the connectivity is, strictly speaking, not planar on a 2D lattice, and they are not really BB codes either (the "bicyclic" referring to a torus as the underlying lattice, or to them being two-block group algebra codes for the product of two cyclic groups). I feel "til
...(continued)Hi!
I'm one of the authors of the paper.
I’d like to share that we plan to remove the third result regarding consistent-QPH in the next revision and replace it with a new insight.
For now, here are the key points raised by Dorian Rudolph:
$\mathsf{CQPH}$ can simulate $\mathsf{QPH}$ by si
Yes, I agree the paper is clear on the meaning of the term.
...(continued)Hi Craig, thank you for the comment.
We understand that in some contexts, particularly in graph theory, "planar" is taken to imply non-crossing edges, i.e., a planar graph. However, this is a specialized technical meaning. More generally, as supported by standard definitions, "planar" simply means
...(continued)(Not a serious issue, just some terminological confusion.)
I felt a bit tricked when I realized the "planar" in the title meant something different than I expected. I'd describe the property this paper is aiming for as "2d-local genus-0" or "planar boundaries". I think, in typical usage, "planar" m
The proof of Lemma 9 got messed up in the editing process for v1 of this paper. A corrected proof is given below; it will appear in the arXiv v2 version. Thanks to Mingyu Sun for pointing out the error!
![Corrected proof][1]
[1]: http://www.cs.cmu.edu/~odonnell/proof.png
Great paper! Also, happy birthday to my amazing colleague Lennart — may your day be as insightful as this paper! 🎉
Is this related to https://en.wikipedia.org/wiki/Thomas_precession?
...(continued)As I wrote to the authors when they put out the initial version of this pre-print, and as explained in detail in [this repository][1], there is little point in analyzing the success probability of the last step of Shor's original classical post-processing as there are better ways to post-process the
...(continued)Thank you very much for your careful reading and constructive feedback.
We agree that the phrase "on planar architectures with only nearest-neighbor interactions" in the abstract may be misleading, as our study does not specify or analyze an explicit implementation using local gates. Our intentio
...(continued)I believe the result of section 7.1, regarding the maximal-magic qutrit states, is incomplete, and thus Conjecture 2 is false. [Fuchs and Cuffaro (2024)](https://scirate.com/arxiv/2412.21083) proved that maximal magic, as measured by the stabilizer Rényi entropy, obtains if and only if the state is
It was pointed out to me that there are similarities to https://arxiv.org/abs/quant-ph/9604028 which also works by trying to stay in the symmetric subspace, and was found to be flawed.
...(continued)This obviously doesn't work.
Essentially the proposed scheme is to run the noisy process N times, and recover results by averaging measurements. As the paper notes, this suppresses errors for a single qubit state undergoing a single round of noise. The problem is that this test completely misses th
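To be concrete about the benign case: here is a minimal numerical sketch (my own, with a hypothetical bit-flip channel standing in for whatever noise model the paper uses) of one qubit undergoing one round of noise, where averaging many repeated measurements does pin down the expectation value, and knowing $p$ even lets you rescale away the bias.

```python
import numpy as np

# My own minimal sketch (not the paper's construction): one qubit, one round
# of bit-flip noise, N repetitions, estimate <X> by averaging the shots.
rng = np.random.default_rng(seed=0)
p = 0.05        # hypothetical bit-flip probability
N = 100_000     # number of repetitions of the noisy process

# Ideal state |+>: <X> = 1. One round of bit-flip noise sends <X> -> 1 - 2p.
noisy_expectation = 1.0 - 2.0 * p

# N single-shot X measurements of the noisy state, with outcomes +1 / -1.
prob_plus = (1.0 + noisy_expectation) / 2.0
shots = rng.choice([+1, -1], size=N, p=[prob_plus, 1.0 - prob_plus])

print("shot average       :", shots.mean())               # ~ 1 - 2p
print("rescaled (p known) :", shots.mean() / (1 - 2 * p))  # ~ 1
```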
It's not clear (to me) how you encode part of an entangled state with this code.
Do you expect anything qualitatively new emerges if one generalizes such a construction to 2D?
One of the very first papers to explore this question, produced when ChatGPT (v3.5) had just made its public debut, is this one: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12113
...(continued)Thanks for your comments, Jacques. We are discussing how to update the article in light of them.
I would prefer not to formulate Completeness in terms of "hidden variables". There were good reasons to rebrand hidden variable theories as "ontological models", and there are a lot of misconceptions
...(continued)Some of us QBists are discussing this article over email. We think that the way "Completeness" is elaborated in the article sits uncomfortably with QBism, possibly also with RQM.
The article frames the issue like this: either QM furnishes us with a "description" of the properties of physical syst
...(continued)Great paper! I've not had a chance to read it over in detail yet, so forgive me if I've missed this, but apart from a brief mention in the conclusion, you don't seem to have a discussion of the effects of resonator leakage on your gate fidelity. You state an advantage of your gate is that you can driv
Congratulations for this nice work!
We wanted to let you know of our earlier work, https://arxiv.org/abs/2503.01738, where we also consider an ensemble BP decoding method for the Bivariate Bicycle code family under circuit-level noise.
We are very sorry for this mistake (and we are confused how that happened). It has been corrected, and the proper reference to your paper will appear in our next arXiv update.
Hi. You left my name off the authors of Ref. 12, [arXiv:1612.07308](https://arxiv.org/abs/1612.07308).
...(continued)This is an interesting paper, and worth reading, but I view the conclusion that there is no exponential speedup for existing quantum algorithms as deeply misleading.
Here's the issue: The runtime of the algorithm in ref ZFF19 (of which I am an author) scales polynomially in the condition number (kap
...(continued)If you are only investigating the error suppression capabilities of the code, then why does your abstract specify "on planar architectures with only nearest-neighbor interactions"?
The code you describe is manifestly *not* planar. It doesn't have stabilizers spanning only nearest neighbors. So the
...(continued)Dear Changhao,
thanks for pointing out this relevant reference. I think your iterative integration-by-parts method is related to the one we developed in the context of Trotter error bounds (https://arxiv.org/abs/2312.08044), which might be of interest to you. We also use the first-order variant of o
...(continued)Thank you very much for your valuable comment.
We fully agree that the code capacity noise model is not sufficient for a complete assessment of fault-tolerant quantum computation (FTQC), particularly when considering the impact of non-local stabilizer measurements and realistic circuit-level nois
...(continued)What? You can't use a code capacity model to analyze a concatenated structure involving a non-local code while claiming it runs under planar connectivity! Code capacity models don't understand how operations are performed, but because you're embedding a non-planar code into a planar system those ope
Thank you for the answer!
...(continued)Thank you very much for your interest in our work and for the insightful question.
In this study, we restrict our analysis to the code capacity noise model and do not explicitly implement or simulate lattice surgery operations. The reference to lattice surgery is made to emphasize the compatibili
Congratulations on the nice work; it's good to see more work focusing on stationary distributions in quantum networking. I wanted to point out a typo in Eq. 7: the top-left entry of the matrix should be $(1-p)^2+p^2$ instead of $(1-p)^2+p$.
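A quick consistency check, under my assumption that the remaining off-diagonal weight in that row is $2p(1-p)$: with the corrected entry the row sums to one, $(1-p)^2 + p^2 + 2p(1-p) = 1$, whereas $(1-p)^2 + p + 2p(1-p) = 1 + p - p^2 \neq 1$ for $0 < p < 1$.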
...(continued)Thank you for the interesting paper.
How did you implement the lattice surgery between two distant surface codes? Have you used an ancilla region to directly measure the stabilizers, or do you have a logical ancilla per stabilizer and perform lattice-surgery CNOTs between each logical qubit in the s
...(continued)Thank you for the detailed comment. We are aware of the uniform norm condition on implementable polynomials via the QSVT, but length constraints prevented us from discussing this issue in this conference paper; we make no claim that $P_{2n-1}$, appropriately downscaled, is optimal. Instead, we view
...(continued)The polynomial approximation to $1/x$ here is elegant and optimal, according to its definition Problem 1, which is to minimize error w.r.t $1/x$ on $[-1,-a]$ and $[a,1]$. However there seems to be an issue in its application to QSP/QSVT methods - Problem 1 does not constrain the maximum value of the
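For reference, the constraint I have in mind (my paraphrase of the standard QSVT requirement) is $\max_{x\in[-1,1]}|P(x)|\le 1$: a polynomial that is optimal on $[-1,-a]\cup[a,1]$ but grows large on $(-a,a)$ must first be rescaled, $P \mapsto P/\max_{x\in[-1,1]}|P(x)|$, and that rescaling factor is paid back as a smaller subnormalization (e.g., a reduced success amplitude) when the polynomial is used to implement $1/x$.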
...(continued)Hi Alexander, we’re glad you found the paper interesting.
Thank you for kindly pointing out the issue of vanishing $q$. We will include the following clarification in the next revision of the manuscript.
We note that constant qubit overhead is achievable only when $d = O(k/f)$; otherwise, $q
...(continued)Hello, congratulations on this interesting paper.
I do not think the analysis on p2 in the paragraph starting "In addition to encoding overhead…” is correct for all constant rate codes. In particular, from the references it seems that asymptotically good qLDPC codes are the intended choice of ini
...(continued)There is often criticism that quantum models are compared to badly optimised classical alternatives (see e.g. https://arxiv.org/abs/2403.07059). This paper made me particularly curious why the classical benchmark only achieves 72% accuracy (compared to 50% random guessing!) on an extremely simple t
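To make my expectation concrete, here is the sort of cheaply tuned classical baseline I would want to see reported (a generic sketch of mine, with a synthetic tabular binary-classification dataset standing in for the paper's data, since I don't know the exact features used):

```python
# Generic baseline sketch (mine, on placeholder data): feature scaling plus a
# small grid search over regularization usually gets a simple classical model
# close to its ceiling on easy tabular tasks.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
grid = GridSearchCV(pipe, {"logisticregression__C": [0.01, 0.1, 1.0, 10.0, 100.0]}, cv=5)
grid.fit(X_tr, y_tr)
print("held-out accuracy:", grid.score(X_te, y_te))
```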
...(continued)Fascinating work! I have a question on how one should interpret these results.
In what sense do the authors mean that permutation unitaries act classically?
E.g. the Toffoli gate is a permutation unitary, it is non-Clifford and can generate quantum resources (indeed this manuscript is about e
...(continued)Hi, congrats on the interesting preprint! Related to a part of your paper, you might want to check out the connection to the notion of the entangling power of unitary gates -- the average entanglement created on an ensemble of separable states. It was introduced in the bipartite setting by Zanardi,
Thanks a lot!
[Repo][1]
[1]: https://github.com/jblue1/ml_bb_decoding "Repo"
...(continued)Hi Scirate, in discussions with others, a common piece of feedback I've heard is: "There have been so many papers on code surgery recently — it's hard to keep track of what each one is doing and how they differ." So for everyone who shares this thought, section 3.2 of this paper is a high-level, 2-page summar
...(continued)A few years ago I wrote a similar algorithm for the log-determinant, which can be found here: https://arxiv.org/abs/2011.06475. It seems to me that it is more general and faster: we have a linear dependence on the approximation error (instead of cubic) and we are linear in the sparsity (instead of
Great figure!
Funny, our paper from 2018 (https://scirate.com/arxiv/1803.10520), which contained essentially the same algorithm, only got 9 scites. I guess I need to get better at selling results.
Dear authors: I would like to introduce our previous paper https://arxiv.org/abs/2409.18369, where a generalized iterative integration-by-parts method is also used.
If you're using a fault-tolerant CCZ gate to perform distillation coherently, is the intended future direction to distill Clifford states (for example, to perform a logical Hadamard) instead of the non-Clifford T states that you're distilling here?
...(continued)Hi Craig, thank you for your helpful and constructive comments. I do agree, of course, that suppression $O(p^d)$ is better than $O(p^{\lceil d/2 \rceil})$. In the long term, this should be the ultimate goal. My take for the near term, however, is more in the direction of trying to find circuits that you CAN actuall
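To put rough numbers on that gap (my own back-of-the-envelope illustration): at a physical error rate $p = 10^{-3}$ and distance $d = 5$, $p^{\lceil d/2 \rceil} = 10^{-9}$ while $p^{d} = 10^{-15}$, i.e., six orders of magnitude of additional suppression already at modest distance.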
Note also that the quantum Fisher-Yates method introduced in arXiv:1711.10460 is in Appendix C, rather than in the main text, because in the main text we introduced an even more efficient coherent sorting method for preparing those sorts of states.