Thank you Jens!
Thank you for the reminder, we will cite this reference in the next revision.
...(continued)We thank Yifan Zhang for pointing out a gap in the current version: in the original version, the prefactor of Eq. (4) should be poly(|B|, |C|). To fix this, we replace our recovery map with the one for erasure noise on region C (not A), and then our conclusions in Eq. (4), including the prefact
https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.6.020302
Real-time dynamics with the sparse Pauli method was studied in this paper.
Wonderful, thanks, Ieva, for pointing out that fine distinction. We will accommodate this in the next version.
...(continued)Congrats on the results!
For the protocol in Fig. 2 (and potentially other protocols involving consecutive merges and splits along distinct boundaries), have you considered doing the merge and the split at the same time? Concretely, I think you can replace the two consecutive tri-junctions with a si
This is so cool! I am really surprised you got more than 2x reduction!!
Thanks, Mark. We will clarify that the remark applies to sparse optimization problems in the next version.
...(continued)A small note on a citation of our latest paper on quantum-enhanced optimization by warmstarts ([https://arxiv.org/abs/2508.16309][1]): in the paper it is referenced as a part of "warm-started QAOA" approaches, where the quantum algorithm is fed with a pre-computed initialisation based on classical m
...(continued)Nice paper. The hardware resources and speed/latency look comparable to [last week's paper][1]. The iteration profiles in Fig. 4 are also nice to see.
Did you also process the decoding matrices to remove short cycles, as they did? If so, do you have details on that?
Also, is the FPGA bitmap sp
...(continued)Yup. So the article that the review referenced ([https://arxiv.org/abs/2004.09002][1]) focuses on sparse graphs and likewise mentions that their technique cannot be applied to problems like the Sherrington-Kirkpatrick model, since it explores the whole graph with constant depth.
Similarly, the revie
That sounds important, do you have a reference to that?
I contacted the authors about this. They confirmed that they were referring to standard methods of concatenation and not fancy ones like blocklet concatenation and https://scirate.com/arxiv/2510.04526
...(continued)small comment on the statement that "log-depth QAOA does not have an asymptotic quantum advantage in combinatorial optimization".
To be precise, it does not have an asymptotic quantum advantage on sparse combinatorial optimization problems since, as noted in the review, there are some problems w
...(continued)I have concerns about how your method uses ancilla qubits - you state in section III.B that they are measured in the X basis and then reset. But as you point out in your example, this adds a (non-global) phase factor that depends on the measurement outcome - this will decohere the input qubits! This
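The generic mechanism behind this kind of concern can be made explicit with a minimal example (this assumes a CNOT-coupled ancilla for illustration, which may not match the paper's exact construction): take a data qubit $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ and an ancilla in $|0\rangle$, and apply a CNOT from data to ancilla:

$$(\alpha|0\rangle + \beta|1\rangle)\,|0\rangle \;\xrightarrow{\text{CNOT}}\; \alpha|0\rangle|0\rangle + \beta|1\rangle|1\rangle.$$

Measuring the ancilla in the X basis with outcome $(-1)^m$ leaves the data qubit in $\alpha|0\rangle + (-1)^m\beta|1\rangle = Z^m|\psi\rangle$, an outcome-dependent phase. If $m$ is discarded when the ancilla is reset, averaging over both outcomes gives $\tfrac12\big(|\psi\rangle\langle\psi| + Z|\psi\rangle\langle\psi|Z\big) = |\alpha|^2|0\rangle\langle 0| + |\beta|^2|1\rangle\langle 1|$, i.e., the data qubit is dephased unless the byproduct is tracked or corrected.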
Thank you very much for these detailed and helpful comments. We will carefully consider them and make the corresponding clarifications and updates in the next version.
...(continued)Congratulations on this very nice review! I am glad to see that many of our works are introduced in this paper :) I would like to kindly point out some inaccurate statements in the review, especially in the sections on quantum learning and randomized measurements. I hope these can be helpful to impro
...(continued)Thanks for the clarification.
Good to see that all reported performances are based on the same hardware architecture and parameters. I usually interpret "Real-Time Decoding" as decoding where the latency is guaranteed (as in the case of classical communication, for example); so the worst-case latency
...(continued)Hi, thanks for your comment.
The value 400 represents the maximum number of decoding iterations and is rarely reached (with probability less than the logical error rate). Our decoder uses an early-stopping criterion: at each iteration, it computes the syndrome of the estimated error and stops whe
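For readers curious what such an early-stopping rule looks like in code, here is a minimal illustrative sketch (not the authors' implementation; `bp_update` is a hypothetical stand-in for one decoder iteration):

```python
import numpy as np

def decode_with_early_stopping(H, syndrome, bp_update, max_iters=400):
    """Illustrative sketch of an iterative decoder with early stopping:
    after each iteration, compute the syndrome of the current error
    estimate and stop once it matches the measured syndrome."""
    error_est = np.zeros(H.shape[1], dtype=np.uint8)
    for it in range(max_iters):
        error_est = bp_update(error_est)          # one decoder iteration (placeholder)
        if np.array_equal((H @ error_est) % 2, syndrome):
            return error_est, it + 1              # converged early
    return error_est, max_iters                   # iteration cap reached (rare)
```

Here `max_iters=400` plays the role of the cap mentioned above, reached only when the decoder fails to converge.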
...(continued)Nice to see hardware implementations with fine details on performance and latency.
In Fig. 1 you show that for the $[[144,12,12]]$ code you get a logical error rate of ~6e-9 at a physical error rate of 0.001 (GARI decoder); but that's with 400 iterations. The FPGA implementation seems to be using 10 itera
Happy to hear about careful reading of the technical parts! Indeed, you're right about this. Actually this is a surprisingly meaningful correction, since it implies the number of resource states for distillation is $o(N)$, even if it's not poly-log in the address size.
Great job, finally we have a proof!
Hi! Nice work. What do you think about the resource estimates for entanglement and real hardware performance presented in our work https://scirate.com/arxiv/2408.02837?
**Update**: The [ldpc-post-selection repo](https://github.com/seokhyung-lee/ldpc-post-selection) has been made public. Feel free to leave any issues, comments, or pull requests!
Maybe you can call it "fast mixing"
Thanks for the feedback, we will try to clarify the terminology in an update to avoid confusion across fields:)
Thanks, makes sense! Maybe some other phrase then like “quite rapid mixing”? :-) Since thermalization also has a technical definition, the title could be misleading to a different community.
...(continued)Hi Vedika, Thanks for the clarification! As you said, we are indeed focusing only on "mixing" in the open system/Markov chain setting. We also fully agree that our result does not apply to closed-system thermalization in the context of MBL. Perhaps a technical reason we did not explicitly say "rapid
...(continued)“Thermalization” is a distinct concept from Gibbs state preparation / mixing. Thermalization refers to the ability of a system to bring its subsystems to thermal equilibrium under its own unitary dynamics. Rapid mixing is about the ability to prepare Gibbs states when the system is coupled to a bath
...(continued)We adopted the naming convention used in Jones and Montanaro's paper, but I also found it initially confusing, because "bipartite product testing" actually refers to this more general question about testing states on $(\mathbb{C}^{d})^{\otimes n}$. That said, we couldn't think of a better name, so we just d
...(continued)We are thinking about how best to approach this. Completeness will be a secondary goal, and an interesting starting point would be to try extending fault gadgets to non-Pauli noise / generalising Pauli Boxes. After this, an extension to full non-Clifford diagrams might be easier. Considering universa
...(continued)Thank you Jacob! I now understand that the difference between our results is that you are considering the case where the bipartition is unknown, while we were considering the case where the bipartition is known. Sorry for the misunderstanding. It is amazing that the lower bound results we derived are the same
...(continued)Thanks for your interest in the paper! The resolution is that each component of the circuit is considered to be noisy and Clifford. This means that you can not perform syndrome measurements throughout the circuit and store the results in noiseless classical memory to be post-processed at the end. In
...(continued)Hey Zhenhuan, thank you very much for your comment. I am so sorry I overlooked your prior work. I just took a look at the first paper and, indeed, your lower bound in Theorem 2 is quite similar to our Theorem 1. However, unless I am misunderstanding your work, you were considering the case of testin
Nice title ;) (prepare for weird emails though)
I believe that most simulations of Clifford error correction circuits so far involve offline syndrome decoding.
Their model doesn't include fast classical processing of the syndromes.
Thanks for your response! Would running a truly non-Clifford simulation (or its proxies) lead to any additional surprises?
Are there any plans to extend this framework to include non-Clifford ZX diagrams?
...(continued)Fascinating results! I’m curious how this compares to, or perhaps even contradicts, the numerous studies on Clifford error correction circuits, especially those numerically simulated to very low logical error rates, some even down to $p_{\text{logical}} = 10^{-9}$. What's the catch there? Am I missi
...(continued)Very interesting work!
I hope this comment is helpful—I would like to kindly point out that in our previous study on the complexity of entanglement detection (**Phys. Rev. Research 7, 033121 (2025), Theorem 2, D2**), we explored a closely related problem, and the results seem to align with **your
...(continued)Hi Allan,
Thanks for reaching out, and for bringing your work to our attention.
Regarding how our methods compare, the key difference is our use of a qDRIFT-style randomization combined with extrapolation. This combination is what allows us to achieve short circuit depths, eliminating the need for
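For context, the qDRIFT sampling rule referred to above (Campbell, 2019) can be sketched as follows; this is only the standard term-sampling step, not the authors' full protocol with extrapolation:

```python
import numpy as np

def qdrift_term_sequence(coeffs, time, n_samples, rng=None):
    """Sample the random sequence of term indices used by a qDRIFT channel:
    for H = sum_j h_j H_j, term j is drawn with probability |h_j| / lambda,
    where lambda = sum_j |h_j|, and every sampled unitary is applied for the
    same duration tau = lambda * t / N."""
    rng = np.random.default_rng(rng)
    coeffs = np.asarray(coeffs, dtype=float)
    lam = np.abs(coeffs).sum()           # lambda = sum_j |h_j|
    probs = np.abs(coeffs) / lam         # importance-sampling weights
    tau = lam * time / n_samples         # per-step evolution time
    return rng.choice(len(coeffs), size=n_samples, p=probs), tau
```

Strongly weighted terms are sampled more often, which is what removes the need for a block encoding of the full Hamiltonian.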
Thanks for pointing it out! We will update it ASAP.
...(continued)Congratulations on the nice paper. Your approach to randomization that allows forgoing the use of a block encoding is particularly interesting. I want to bring to your attention our paper "[Randomized semi-quantum matrix processing][1]" where we randomize the QSVT protocol by sampling from the distr
It seems that the discussion section is incomplete - the last sentence "This is consistent with o" suddenly ends. Perhaps some sentences were omitted?
...(continued)Thank you for the helpful feedback! In our setup, the magic states are transferred from the small color codes into the BB code, and our simulations focus on this transfer process, which is significantly noisier than the initial states. Rather than simulating directly from magic states, we perform Cl
...(continued)Something that I found in [my own work][1] on the Floquet code (your ref 15) that was not obvious to me: If you do $d$ rounds of $(XX\rightarrow YY\rightarrow ZZ)$ measurements on the original Hastings-Haah code, you end up with a timelike distance of $3d/4$ rather than $d$. This is because, e.g., t
Are your Stim circuits open source? Appreciate it.