Are your Stim circuits open source? Appreciate it.
...(continued)Interesting work! I had a question—it's not entirely clear to me what exactly is being simulated. From the appendices, it seems like the simulation involves a Clifford surrogate for logical performance rather than a full implementation of a non-Clifford scheme or a direct replacement of T gates with
Banger, very interesting work!
Note: The [ldpc-post-selection repo](https://github.com/seokhyung-lee/ldpc-post-selection), which implements our soft-output decoder and post-selection strategies, will be made public in a few days.
...(continued)I was worried about the interface between the lattice surgery round and the memory rounds before/after it, akin to how surface code lattice surgery needs d rounds before & during & after the surgery for fault tolerance. But in this case the memory is also single shot so it seems there could be no pr
...(continued)thanks a lot!
I'm not entirely sure I understand your question. The point of the fast surgery is to reduce the number of measurement rounds from `d` to `1` fault-tolerantly, which is why we simulate a single round for that protocol, and the curve shows that indeed the logical error rate remains
...(continued)Your (and Craig's) argument mainly appears to rest on the premise that "sampling small parts of the output is equivalent to sampling a uniformly randomly generated number." However, if the initial target state is not $\langle 1 |$ and is perturbed, the initial state effectively becomes a non-unifor
Congrats on these nice results!
I was surprised to see that the "fast 1 round" curve was below the "standard 3 rounds" curve, but then read that "For the fast scheme we simulate only a single round." Wouldn't simulating only a single round be unable to check the fault tolerance of the protocol?
...(continued)I stand by what I already wrote above, and therefore see no reason to continue spending time on this thread. Based on the arguments presented, I see no reason why the algorithm would scale efficiently. I note that Craig also wrote that he ["thinks it's wrong"](h
It seems that to achieve a lower number of T gates, a larger number of ancillae are needed.
...(continued)Your conclusion that the manuscript updates constitute an “acknowledgment” of your claims is unconvincing. The preprint v2 already stated:
"Although not presented here, we also considered factorization of larger integers (using $m_{\max} < n$) and found that our proposed modular approach works succ
I just became aware of a significant overlap with arXiv:2403.14416
An updated version with proper references to related results will be posted soon.
...(continued)Hi Tom, sorry for the late response. Yes, Nouédyn is right: As it stands, this paper is purely a bound on classical codes. With that said, it does capture certain types of unitary quantum logical gates.
It breaks down when generalizing to CZ and CCZ and the paper you linked to is a counter exampl
...(continued)No, my claim has not been refuted. You can see that it holds by analyzing the quantum algorithm mathematically, or by simulating it.
1. I note that after I first commented that one cannot pick the $m_i$ very small (for instance, fix them to $3$ or $4$, see also [this comment](https://www.reddit.com
Very useful feedback, and we'll add a mention to that paper, tysm :)
...(continued)Congratulations on this interesting work!
I've noticed a small typo in your Definition 2, Equation (17): it seems $\gamma_1$ should be $\gamma_2$.
Additionally, Appendix C: "Code surgery on qLDPC codes" in arXiv:2505.06981 presents a generalization of Definition 2, which involves simultaneou
...(continued)@Victory Omole and @Craig Gidney -- You might want to check out the authors' updated manuscript (v3, Oct 5, 2025). It includes an example of factoring a number $N > 10^6$ using a maximum of only $m_{\max} = 4$ phase qubits per block. It shows that Craig's concern about samples from small b
...(continued)Your claim that the block size must satisfy $m_i > \log_2 r'$ is clearly wrong and has been refuted by the examples included in the updated manuscript (v3, 5 Oct 2025).
The authors successfully demonstrated factoring a number $N > 10^6$ with order $r=3800$ using only $m_{\max}=4$ phase qubits. Th
That's right, even for CSS codes things become non-trivial precisely because of the degeneracy from the stabilisers, and the CSS property doesn't really help with that. So you have to find another way around it, which I believe is something Ani is actively thinking about.
...(continued)Oh, I see. Thanks for pointing this out. Is the translation to stabiliser codes more obvious if I only consider CSS codes? Naively I would expect that I can just consider each "half" of the code as a classical code on which CX acts analogously to classical CNOT, but maybe it is the equivalence of lo
...(continued)Hi Hari,
Thanks for the reply! I think I understand your confusion better now. You are correct in saying that
> It seems easier to just rotate and leave them in the original space.

This is precisely what we do :) We claim that we do not use any knowledge of the actual number $K_{\lambda,\mu}$
...(continued)Hi Dmitry,
Thanks very much for your comment. I’m still trying to understand this. It’s a little confusing because the vectors are first in a larger dimension (of $d_\lambda$). The isometry rotates them and essentially truncates the dimension to $K_{\lambda,\mu}$ but we still need to embed them in
The authors will probably have a more insightful answer, but I'd maybe highlight that the scope of this preprint is classical fault tolerance and classical linear codes (where CZ gates are not defined). And translating the machinery to stabilizer codes, say, is not immediately obvious.
...(continued)Is there an easy way to understand why the intuition and arguments used in this work don't generalise to other types of entangling gates? For example (to my understanding) https://arxiv.org/abs/2507.05392 gives a construction of asymptotically good codes where CZ and CCZ gates **are** addressable in
Hi Johnnie,
Thanks for your comment. We’ll be sure to check it out and update the citation with a clearer acknowledgement of your result.
...(continued)Hi Siddhant and Yifan, nice work! I thought I'd mention that in https://arxiv.org/abs/2504.07344 (which you do already cite as [51]) we actually do use the cluster expansion rather than the series expansion. We use the counting numbers, $c_r$, derived from the 'region graph' defined as in generalize
...(continued)Dear Hari,
Thanks a lot for your feedback! We will do our best to make the presentation clearer in the next update :)
Regarding your question, indeed, as a mathematical operation $V_{\lambda,\mu}$ is an isometry with domain of dimension $K_{\lambda,\mu}$ and codomain of dimension $d_\lambda$ (dime
...(continued)I thought I should say something here. This looks like a really nice, thorough analysis of high-dimensional Schur transforms. There's a lot of detail here and it hasn’t been easy to digest (especially for me). In fact, it took me a few days to recover after I first saw this on the arxiv. Since then,
...(continued)Nice progress! It seems that instance generation is really a bottleneck of the peaked-circuits protocol. With postselection, the success probability is exponentially small. With the variational search, computing the loss function reduces to computing the zero-to-zero amplitude of a quantum c
Great, that does clear a few things up. I’m somewhat on board now.
Great, look forward to reading that.
Thanks! No, we purely focused on the decoding problem here without much regard for the associated dual optimization problem.
...(continued)Hi, thanks for your interest in our paper and your question. The pseudo-thresholds for independent errors and erasures for the 72 qubit code are lower than those of the other codes presented. It also has logical error rate scaling suggestive of a lower distance code in the sub-threshold regime, as expected. I
...(continued)Hi KdV,
To again relay a message for William Zhong.
"I've updated the paper with additional notes on the double-checking circuit in the appendix, and also rescaled the error parameter in all of our numerics to allow for better comparison with the original cultivation paper. Unfortunately,
...(continued)I wrote that one *basically* needs $m_i > \log_2 r$ above, and I then described that there are some caveats. In particular if the order is even. I have not thought it through in complete detail, but right off the bat it should always hold that you need $m_i > \log_2 r'$ for constructive interference
...(continued)@Martin: I was able to factor 161 using a maximum block size of 6 for a=3. Since the order here is r=66, your claim that each block size $m_i$ must be larger than $\log_2 r$ does not appear to hold. It seems the effect of the shift in the unitary for all blocks beyond the first was not accounte
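For anyone wanting to sanity-check the numbers in this exchange, here is a small Python sketch (function names are my own, not from the paper) that computes the multiplicative order of a base modulo N, reproducing r=66 for a=3, N=161 and comparing it against the block-size bound being debated:

```python
from math import gcd, log2

def multiplicative_order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r == 1 (mod n); requires gcd(a, n) == 1."""
    assert gcd(a, n) == 1
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

# The example above: N = 161 = 7 * 23, base a = 3.
r = multiplicative_order(3, 161)
print(r)        # 66 (= lcm of the orders mod 7 and mod 23)
print(log2(r))  # about 6.04, so a block size of 6 sits just below log2(r)
```

This only checks the arithmetic of the claimed counterexample (r=66, block size 6 vs. $\log_2 66 \approx 6.04$); it says nothing about the interference argument itself.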
This is great progress on BPQM! Did you consider the difficulty of the optimization problem dual to decoding turbo codes?
...(continued)> a common drawback for concatenated codes is their syndrome check weight, which can grow exponentially with the number of layers `L`

Is this still a drawback in the wake of [blocklet concatenation][1], which claims to only rely on the syndrome measurements of the base code?

[1]: https://s
Very interesting. We also worked with a single-shot inference QML model in our paper https://arxiv.org/pdf/2501.02148 and found it to be pretty effective (see discussion at top of page 4)!
Very cool result! This is exactly the sort of idea that seems promising for getting DQI around some of the challenges associated with speedups in unstructured settings.
Any updates on Oscar Higgott's comment from 26 days ago?
Thanks! We're working on optimising the code and will share it.
Congratulations on this very wonderful paper! I would like to ask whether there is a GitHub repo or any code available for this paper?
...(continued)Sorry, do you mean you want references about the importance of free energy for thermalization? Or did I misunderstand your question?
The stability of ordered phases at finite temperature is controlled by the free energy, which captures the competition between the energy and entropy of excitations
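As a textbook illustration of this energy-entropy competition (a standard example, not specific to the paper under discussion), consider a single domain wall in the 1D Ising model:

```latex
% A domain wall costs energy \Delta E = 2J but can sit at any of ~L
% positions, contributing entropy \Delta S = k_B \ln L. Its free-energy cost
\Delta F \;=\; \Delta E - T\,\Delta S \;=\; 2J - k_B T \ln L
% becomes negative for large L at any T > 0, so the 1D ordered phase is
% destroyed: stability is controlled by the free energy, not the energy alone.
```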
Thanks a lot for the feedback! I'd be more than happy to correct that and add references to previous works on the topic, if there are some you'd recommend?