...(continued)These full-shape DESI DR2 cosmological constraints are an important part of the current DE discussion. Thank you for this careful analysis of dynamical dark energy using DESI DR2 BAO. A quick perspective from a framework where “Λ” actually has a controlled meaning:
In a recent series of notes I constructed a gauged constant‑vacuum (GCV) framework where the strictly spacetime‑constant part of the m
I'd like to ask how each agent's scoring works in detail: if it uses a model you trained yourselves, how was that model trained?
Why do some metrics drop so sharply once more agents are added? What value of n is used for the N-gram metric? How is data_synthesizer implemented? How is its data generated, and is the generation fully automatic?
What is the difference between $R_{step}$ and $R_{enhanced}$? Is there any comparison that combines embedding and reranking models with MARM for evaluation?
...(continued)Thank you very much for your thoughtful and detailed comment. Let me clarify our perspective and address your concerns point by point:
About the effective trainable region:
Our construction does not restrict all training dynamics to the shallow $O(\log n)$-depth region near the gadget layer. As sho
...(continued)I have concerns about whether the proposed circuit construction actually addresses the problem. The idea of the construction can be described with an analog in classical machine learning:
Suppose I have a model M for, say, image generation. I create a new model M' which consists of the following:
Outstanding work!!
interesting paper! did you find any examples that give a stabilizer or CSS code; preferably with d>2
Thanks Tom!
welcome back
...(continued)> You say that the values $r_j$ are random guesses, but this directly contradicts the paper. The paper doesn't say "choose $r_j$ randomly", it says: "The values $r_j$ are trial orders, typically a list of hypothesized orders (or their multiples/divisors) that the true order r may belong to."
Per
...(continued)I don't want to complain too harshly about doing examples, because having examples is a huge improvement over not having any. But... an example case in the tens of millions is really not large enough to be convincing here. To quote Scott Aaronson:
> We’ve had decades of experience to tell us that
...(continued)> The values rj are trial orders, typically a list of hypothesized orders (or their multiples/divisors) that the true order r may belong to.
> Basically you will need to know the entire factorization of the period for this to work, but if you knew that then you would already know the period and s
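Checking whether a given trial order works is classically trivial; the entire difficulty is producing a list that contains a multiple of the true order in the first place. A minimal sketch (the numbers here are my own toy example, not from the paper):

```python
def divides_order(a, r, N):
    """True iff r is a multiple of the multiplicative order of a mod N."""
    return pow(a, r, N) == 1

# Toy instance: the order of a = 7 modulo N = 15 is 4.
N, a = 15, 7
trial_orders = [2, 3, 4, 6, 8]  # hypothetical "hypothesized orders"
hits = [r for r in trial_orders if divides_order(a, r, N)]
print(hits)  # [4, 8]: exactly the multiples of the true order
```

Verifying a candidate takes one modular exponentiation; but to write down a trial list that actually contains a multiple of r, you would already need to know the structure of r, which is precisely the objection above.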
And yet, despite this and many other retractions and BS papers, "Nature" inexplicably continues to be treated as the pinnacle of scientific achievement...
I'm glad someone did the work of showing these representations should be safe. Solid stuff.
Thanks a lot for the reply! I'm looking forward to seeing what kind of codes you can obtain with these techniques.
...(continued)**An update on “Magic Tricycles: Efficient magic-state generation with finite block-length quantum LDPC codes”:**
We have made several major revisions to this work that are reflected in the current Arxiv version of the paper. In addition to correcting the technical issue with the original submissio
...(continued)Thank you both for your comments. I would like to see the objections formulated more precisely: which part of the proof do you see as problematic? I am not sure that I understand what the comment "The N + 1 subspaces of operator space have nothing to do with the Hilbert space you start out with" really
This was the point in reading the paper when I asked myself, "Wait, does that really follow?"
...(continued)The paper is still completely unconvincing.
For example, for equation 3 (the initial state) they say:
> The values rj are trial orders, typically a list of hypothesized orders (or their multiples/divisors) that the true order r may belong to.
By what magical process are they hoping to divin
...(continued)Thanks for the thoughtful comment and for the clear toy example. You’re absolutely right — a plain cross-swap keeps the row/column weights but doesn’t guarantee that the rank (and hence the rate) or the minimum distance is preserved. In our paper, the main goal was to inject randomness into overly s
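To make that concrete, here is a quick check on the cyclic length-4 repetition code (the specific cross-swap below is my own illustration of a row/column-weight-preserving move, not necessarily the exact move from the paper):

```python
from itertools import product

def gf2_rank(rows):
    """Rank of a binary matrix over GF(2) (rows given as lists of 0/1)."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def min_distance(H):
    """Brute-force minimum distance of the code with parity-check matrix H."""
    n = len(H[0])
    best = n + 1
    for v in product([0, 1], repeat=n):
        if any(v) and all(sum(h * x for h, x in zip(row, v)) % 2 == 0 for row in H):
            best = min(best, sum(v))
    return best

# Parity-check matrix of the length-4 cyclic repetition code: rank 3, d = 4.
H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [1, 0, 0, 1]]

# Cross-swap on rows 0, 2 and columns 0, 2: flip the 1s at (0,0), (2,2)
# and the 0s at (0,2), (2,0). All row and column weights stay equal to 2.
Hs = [r[:] for r in H]
for (i, j) in [(0, 0), (0, 2), (2, 0), (2, 2)]:
    Hs[i][j] ^= 1

print(gf2_rank(H), min_distance(H))    # 3 4
print(gf2_rank(Hs), min_distance(Hs))  # 2 2
```

After the swap the matrix has two pairs of identical rows, so the rank drops from 3 to 2 (the rate changes) and the minimum distance drops from 4 to 2, even though every row and column weight is preserved.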
Sorry, but I don't believe in the "proof". The N + 1 subspaces of operator space have nothing to do with the Hilbert space you start out with. We have to try again!
Ditto. I am not deep enough into MUBs to be the person to read this. But this is the sort of claimed resolution of a long-standing problem for which I usually see a horde of people scrambling to assess the claim. Where are the experts hiding today?
...(continued)Thanks for the interesting paper. This seems like a nice idea, but as far as I can tell the proposed procedure is not guaranteed to preserve the rate or distance of the code. A toy example with a classical code could be the length-4 cyclic repetition code with parity check matrix
$$\begin{pmatrix
Has anyone looked into this paper? It claims to resolve the MUB problem in dimension 6, which is a longstanding open problem (see [Problem 2][1]).
[1]: https://arxiv.org/abs/2002.03233
Thanks for the interesting question Jahan. After some thought, it seems the detectors still span two rounds of measurements in our leakage-reducing Floquet circuits, so they are qualitatively a bit different from yours.
The code is now available here: [https://github.com/hetenyib/dynamical_codes_on_heavy_hex][1]
[1]: https://github.com/hetenyib/dynamical_codes_on_heavy_hex
Amazing work! Will the source code be released? I couldn't find any links to GitHub in the paper.
Thanks a lot, Mark! I was not aware of this body of QAOA literature, even though it appears to be a promising direction for the field.
...(continued)Remarks on “Consequences of Undecidability in Physics on the Theory of Everything” (Faizal et al., 2025)
The authors claim that Gödel’s incompleteness theorem entails that our universe cannot be a simulation, because any simulated system would necessarily be algorithmic and thus incomplete, whi
Thank you Jens!
Thank you for the reminder, we will cite this reference in the next revision.
...(continued)We thank Yifan Zhang for pointing out a gap in the current version: In the original version, the prefactor of Eq. (4) should be poly(|B|, |C|). To fix this, we replace our recovery map to be the one for the erasure noise on region C (not A), and then our conclusions in Eq. (4), including the prefact
Real-time dynamics with the sparse Pauli method was studied in this paper:
https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.6.020302
Wonderful, thanks, Ieva, for pointing out that fine distinction. We will incorporate it in the next version.
...(continued)Congrats on the results!
For the protocol in Fig. 2 (and potentially other protocols involving consecutive merges and splits along distinct boundaries), have you considered doing both merges and splits at the same time? Concretely, I think you can replace the two consecutive tri-junctions with a si
This is so cool! I am really surprised you got more than 2x reduction!!
Thanks, Mark. We will clarify that the remark applies to sparse optimization problems in the next version.
...(continued)A small note on a citation of our latest paper on quantum-enhanced optimization by warmstarts ([https://arxiv.org/abs/2508.16309][1]): in the paper it is referenced as a part of "warm-started QAOA" approaches, where the quantum algorithm is fed with a pre-computed initialisation based on classical m
...(continued)Nice paper. The hardware resources and speed/latency look comparable to [last week's paper][1]. The iteration profiles in Fig. 4 are also nice to see.
Did you also process the decoding matrices to remove short cycles like they did? If so, do you have details on that?
Also is the FPGA bitmap sp
...(continued)Yup. So the article that the review referenced ([https://arxiv.org/abs/2004.09002][1]) focuses on sparse graph and likewise mentioned that their technique cannot be applied to problems like the Sherrington-Kirkpatrick model since it explores the whole graph with constant depth.
Similarly, the revie
That sounds important, do you have a reference to that?
I contacted the authors about this. They confirmed that they were referring to standard methods of concatenation, not fancy ones like blocklet concatenation (https://scirate.com/arxiv/2510.04526).
...(continued)small comment on the statement that "log-depth QAOA does not have an asymptotic quantum advantage in combinatorial optimization".
To be precise, it does not have an asymptotic quantum advantage on sparse combinatorial optimization problems since, as noted in the review, there are some problems w
...(continued)I have concerns about how your method uses ancilla qubits - you state in section III.B that they are measured in the X basis and then reset. But as you point out in your example, this adds a (non-global) phase factor that depends on the measurement outcome - this will decohere the input qubits! This
Thank you very much for these detailed and helpful comments. We will carefully consider them and make the corresponding clarifications and updates in the next version.
...(continued)Congratulations on this very nice review! It is great to see that many of our works are covered in this paper :) I would like to kindly point out some imprecise statements in the review, especially in the sections on quantum learning and randomized measurements. I hope these can be helpful to impro
...(continued)Thanks for the clarification.
Good to see that all reported performance figures are based on the same hardware architecture and parameters. I usually interpret "Real-Time Decoding" as decoding where the latency is guaranteed (as in the case of classical communication, for example); so the worst-case latency
...(continued)Hi, thanks for your comment.
The value 400 represents the maximum number of decoding iterations and is rarely reached (with probability less than the logical error rate). Our decoder uses an early-stopping criterion: at each iteration, it computes the syndrome of the estimated error and stops whe
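Schematically, that early-stopping loop looks like this (a sketch only; `update` stands in for the actual message-passing iteration, which is not specified here):

```python
def decode(H, syndrome, update, max_iters=400):
    """Iterative decoder with early stopping: halt as soon as the syndrome
    of the current error estimate matches the measured syndrome."""
    est = [0] * len(H[0])
    for it in range(1, max_iters + 1):
        est = update(est)
        # GF(2) syndrome of the current estimate.
        s = [sum(h * e for h, e in zip(row, est)) % 2 for row in H]
        if s == syndrome:
            return est, it  # converged; the max_iters cap is rarely reached
    return est, max_iters

# Toy run: a stand-in `update` that improves its guess each iteration.
H = [[1, 1, 0], [0, 1, 1]]
guesses = iter([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
est, iters = decode(H, [1, 1], lambda e: next(guesses))
print(iters)  # 3: stopped far below the 400-iteration cap
```

The cap only bounds the worst case; in typical runs the syndrome check succeeds after a few iterations, which is why the average iteration count (and latency) is much smaller than the maximum.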
...(continued)Nice to see hardware implementations with fine details on performance and latency.
In Fig. 1 you show that for the $[[144,12,12]]$ code you get a logical error rate of ~6e-9 at a physical error rate of 0.001 (GARI decoder); but that's with 400 iterations. The FPGA implementation seems to be using 10 itera