...(continued)Joe: Yes, thanks for clarifying—you're right that the specific mechanism I described isn't actually an issue. On second thought, my statement was really just equivalent to saying that QRAM can easily propagate an error into (a superposition of) high-weight errors. I agree that with a proper code tha
...(continued)Sam: Quantum polynomial codes allow only a bounded number of transversal Toffolis, which depends on the distance. If you choose the code distance large enough, you can support enough Toffolis to implement the routing circuit. Each time you apply a Toffoli, it reduces th
...(continued)Hi Connor, that's a really interesting observation -- I would love to incorporate it (with attribution of course) into a revision of our paper, if you would allow it. The output register need not be explicitly encoded (for QRACM you can use an arrangement of X gates, e.g.) but your argument would st
...(continued)Hi Joe, Sam,
I wanted to comment that this idea of encoding the address and only performing corrections between queries may have some additional subtleties—it’s not obvious to me that it’s quite so straightforward. In particular, it seems to me that with this approach you should also encode th
...(continued)I should add that I have not seen this approach in the literature, but it strikes me as a rather obvious one. As I think you realised in your second comment, you can make a Fredkin gate out of a Toffoli and two cnots (which are also transversal in polynomial codes), so you can implement the switchin
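The Fredkin-from-Toffoli-plus-two-CNOTs identity mentioned here can be checked on computational basis states alone, since all three gates permute basis states. A minimal sketch (helper names and index conventions are illustrative, not from any paper under discussion):

```python
import itertools

def cnot(bits, ctrl, tgt):
    """CNOT acting on a tuple of classical bits."""
    bits = list(bits)
    bits[tgt] ^= bits[ctrl]
    return tuple(bits)

def toffoli(bits, c1, c2, tgt):
    """Toffoli (CCNOT) acting on classical bits."""
    bits = list(bits)
    bits[tgt] ^= bits[c1] & bits[c2]
    return tuple(bits)

def fredkin(bits, ctrl, a, b):
    """Fredkin (controlled-SWAP), applied directly for comparison."""
    bits = list(bits)
    if bits[ctrl]:
        bits[a], bits[b] = bits[b], bits[a]
    return tuple(bits)

# Claim: Fredkin(ctrl=0; 1,2) = CNOT(2->1) . Toffoli(0,1 -> 2) . CNOT(2->1)
for s in itertools.product([0, 1], repeat=3):
    t = cnot(s, 2, 1)
    t = toffoli(t, 0, 1, 2)
    t = cnot(t, 2, 1)
    assert t == fredkin(s, 0, 1, 2)
```

Agreement on all 8 basis states fixes the full unitary, since each gate is a basis-state permutation.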
Making the code public would help put these impressive numbers into context :-)
Yes, that’s what I mean. It gives you the best of both worlds. Similar to gate level efficiency but for any finite QRAM you have an error correction threshold.
On further thought, I see what you're saying: transversal gates are closed under composition, and QRAM circuits can be built from Toffoli + X + ancilla, therefore the two cases I distinguished above are the same case.
...(continued)Thank you for your comment. In our analysis of "circuit" QRAM, we assume error correction on the bucket-brigade, but we also consider "gate" QRAM, and different approaches to implement the bucket-brigade without fault tolerance on each node (specifically in Section VII).
We'll look into quantum p
...(continued)One quick comment on the treatment of bucket-brigade QRAM in the current paper and more generally in the literature: There seems to be an inbuilt assumption that full fault-tolerance must occur within the memory (there is a reference to the distance of surface code required and to magic state distil
...(continued)You can check the distance with a program and see that it is 2. If you construct a parity-check matrix for the code James is talking about, given the stabilizers at the top of page 9, you find that the distance is 2.
For example, you can use the [compute_code_distance function in the python pack
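Since the package reference above is cut off, here is a hedged brute-force sketch of the underlying check for a small classical binary code; for a stabilizer code one would instead minimize over the weights of logical operators, but the exhaustive idea is the same. The function name is mine, not the package's:

```python
import itertools
import numpy as np

def brute_force_distance(H):
    """Minimum Hamming weight of a nonzero codeword of the binary
    linear code with parity-check matrix H (exhaustive; small n only)."""
    n = H.shape[1]
    best = n + 1
    for bits in itertools.product([0, 1], repeat=n):
        v = np.array(bits)
        # a codeword is any nonzero vector in the null space of H over GF(2)
        if v.any() and not (H.dot(v) % 2).any():
            best = min(best, int(v.sum()))
    return best

# sanity check on the [7,4,3] Hamming code (columns are 1..7 in binary)
H_hamming = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
```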
The reason for the code distance 3 of Fig. 4 is explained in the latest version of the manuscript, entitled "A genus-two code" (arXiv:2211.12695).
Just wanted to drop a line saying that I love how the term "non-Abelion" was introduced in the abstract without definition and yet it was immediately obvious what it was. Hallmark of an excellent (and apt!) name!
...(continued)Hi Koen, thanks for the question.
I would say the main purpose of the Hamiltonian is that it defines the eigenstates which we prepare. Since the Hamiltonian is a commuting projector Hamiltonian, one route would be to directly measure its terms and then correct for the outcomes. However, the proto
...(continued)Impressive results!
What I'm confused about: how does the specific Hamiltonian come into this simulation? Do I understand correctly that the applied gate sequence works well because we have a good analytical understanding of the simulated system (so that we can use a known gate sequence for state
Very interesting!
Thanks!
I've added these results to [Metriq](https://metriq.info/Submission/561) in case anyone wants to see how it compares to other platforms
Awesome! Great work!
...(continued)Hi Victory, thanks for your question.
The two works have different objectives. The Google paper you mention creates an **Abelian topological order (TO)**, namely the Z_2 toric code. On top of this, they create and control lattice defects in this state. These are one-dimensional defects in the two
...(continued)Can someone explain to me, a non-expert in this field, how this paper differs from https://scirate.com/arxiv/2210.10255 except that it's done on a different hardware platform? The abstract for this paper says
> In this work, we present the first unambiguous realization of non-Abelian TO and demonst
...(continued)Hi Jason,
Thank you for your comment.
I just realized that the type-II fusion used in the proposed scheme is a variant that measures $\{ZZ, YX\}$, which is shown in Fig. 10(a). It can be verified easily that this type of fusion is sufficient for the scheme in Fig. 4. Moreover, since $Z_1$ an
...(continued)Hi all,
I agree with Seok-Hyung that the Type II fusion scheme given in Figure 3 is actually measuring $X_1 Z_2$ and $Z_1 X_2$, and that failure leads to measurement of $X$ on one qubit and $Z$ on the other. This is problematic for the boosted fusion protocol presented here, because a $Z$ measur
...(continued)Very nice result, though I feel the statement of the main result in the abstract should be more precise. It seems the result applies to $f$ which has integer points as the extreme points of its solution set, or, better said, any $f: \mathbb{R}^n \rightarrow \mathbb{R}$ whose set of minimize
...(continued)Thank you, Craig. We intend to explore deeper circuits (hopefully in the ballpark of what you've suggested) in the future. Pushing the circuits to the brink of failure at the chosen error rate would, no doubt, be a useful experiment.
Also, our goal here was to establish the value of the DS-ZNE idea
...(continued)This paper should really consider circuit depths beyond 20 or 30. Table I should include depths 100, 1000, 10000, 100000 and 1000000. Push it until it breaks.
For scale, note that you want ~100 non-Clifford gates for the circuit to not be classically simulable. This is why you'd at least want to
...(continued)Hi,
First, congratulations on the publication of your paper! Your paper seems to be very appealing to researchers studying MBQC and photonic quantum computing like me.
I have one question regarding your work. In page 6, it is written that a type-II fusion failure leads to $Z$ measurements on both
"Another interesting feature is the intermediate connectivity between qubits, which
can be made variable by shuttling atoms during the computation" Is any neutral atom computing vendor seriously considering this _during computation_?
...(continued)Came to the discussion quite late, but I think @quantumfugazi is pointing out that the Spring Algorithm solves the problem with $\exp(n)$ springs and $O(T)$ circuit depth.
Note that we already know that every BQP problem (indeed, every PSPACE problem) can be solved using classical polynomial cir
Ah, I think I understood. Thanks for your kind answer!
Since you have analyzed the shadow norm and inversion of observables as a function of low depth d, I didn't experience any difficulty reading this paper due to the above question :)
Best regards,
Guedong Park
...(continued)Hi, thank you for the comment, and sorry for the late reply, we missed it!
The main result in the paper you mention is formulated in terms of the Haar measure on the unitary group, but since the uniform distribution on the Clifford group forms a 3-design, the $t$-th moment operator for $t \leq 3$ for this m
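The 3-design property can be spot-checked numerically for a single qubit: the moment $\mathbb{E}\,|\langle 0|U|0\rangle|^{6}$ over the Clifford group should equal the Haar value $t!(d-1)!/(d+t-1)! = 1/4$ for $t=3$, $d=2$. A rough sketch (not from the paper; the group is built by closing $\{H, S\}$ under multiplication):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])

def clifford_group():
    """Close {H, S} under multiplication. The result includes global
    phases (order 192 = 24 * 8), which don't affect |U[0,0]|."""
    seen = {}
    frontier = [np.eye(2, dtype=complex)]
    while frontier:
        new = []
        for U in frontier:
            key = tuple(np.round(U, 6).flatten())
            if key not in seen:
                seen[key] = U
                new.extend([U @ H, U @ S])
        frontier = new
    return list(seen.values())

group = clifford_group()
# third moment of |<0|U|0>|^2; Haar value for t=3, d=2 is 6/24 = 1/4
moment = np.mean([abs(U[0, 0]) ** 6 for U in group])
```

The same average with exponent 2 or 4 reproduces the Haar first and second moments ($1/2$ and $1/3$), consistent with the design property up to $t = 3$.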
Dear Aram and Francois,
thanks for your answers!
Yes, I was wondering particularly about the dequantizations for sparse matrices in Gharibian and Le Gall (STOC 2022). Your information about the degree of polynomials solved my paradox. :)
...(continued)Aram is absolutely right.
I would just like to add that one exception is the recent dequantization framework by Gharibian and Le Gall (STOC 2022), for which I give a robust dequantization in Section 6.3. This framework does work for sparse matrices. It dequantizes the Quantum Singular Value Tran
Dequantization requires low-rank matrices. That's why the canonical example is inner-product estimation, i.e. rank-1 matrices. The BQP-completeness results require high-rank matrices.
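As an illustration of why inner-product (rank-1) estimation is the canonical dequantization example: with sample-and-query access to $x$, one can estimate $\langle x, y \rangle$ classically by importance sampling. A rough sketch in the spirit of Tang-style dequantization (names and parameters are illustrative, not taken from the papers under discussion):

```python
import numpy as np

def inner_product_estimate(x, y, samples=50000, seed=0):
    """Estimate <x, y>: draw index i with probability x_i^2 / ||x||^2,
    then average ||x||^2 * y_i / x_i over the draws. The estimator is
    unbiased, with variance controlled by ||x||^2 ||y||^2 / samples."""
    rng = np.random.default_rng(seed)
    sq_norm = float(np.dot(x, x))
    idx = rng.choice(len(x), size=samples, p=x**2 / sq_norm)
    return float(np.mean(sq_norm * y[idx] / x[idx]))
```

Entries with $x_i = 0$ have zero sampling probability, so the division is safe; the point is that only a number of samples independent of the dimension is needed for a fixed additive error.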
Great work!
I have a question about the case of sparse matrices:
Isn't this the case that the original HHL algorithm considers?
You cite a reference that says this is BQP-hard, so how is it possible that this can be dequantized?
What am I missing?
...(continued)Looks super interesting! It seems to me that this paper makes progress on the question that we raised here:
https://quantum-journal.org/papers/q-2021-03-23-419/pdf/
In particular, these objects are one particular type of resource in the resource theory of local operations and shared randomness (th
...(continued)In terms of applications, there are certainly very large systems (~$10^{30}$) that can be modeled using the classical harmonic approximation and within Newtonian dynamics, and we expect that problems like that could benefit from our results. Keep in mind that many different types of things (not just
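The classical-harmonic setting under discussion can be made concrete with a tiny simulation sketch: unit masses obeying $\ddot{x} = -Kx$ for a spring-coupling matrix $K$, integrated with velocity Verlet. All parameters below are illustrative, not from the paper:

```python
import numpy as np

def simulate(K, x0, v0, dt=1e-3, steps=10000):
    """Velocity-Verlet integration of x'' = -K x (unit masses)."""
    x, v = x0.copy(), v0.copy()
    a = -K @ x
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2
        a_new = -K @ x
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

# open chain of 3 unit masses with unit springs between neighbours
K = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 1.]])
x0 = np.array([1., 0., -1.])
v0 = np.zeros(3)
x, v = simulate(K, x0, v0)
```

The conserved energy $E = \tfrac{1}{2} v^\top v + \tfrac{1}{2} x^\top K x$ provides a quick correctness check; the cost of this direct approach grows with the number of masses, which is the regime where the quantum algorithm is claimed to help.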
...(continued)Thanks Miles. So we agree there are some problems that scale as $O(2^n)$ on a classical computer and $O(2^{n/2})$ on a quantum computer, but that the ratio of constant factors between the quantum and classical scaling might be high enough that $n$ would need to be so large that the point of crossover in
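The crossover point in that comparison is simple arithmetic: with quantum cost $c_q \, 2^{n/2}$ and classical cost $c_c \, 2^n$, the two meet at $n = 2 \log_2(c_q / c_c)$. A quick sketch (the constant-factor ratio below is purely illustrative):

```python
import math

def crossover_n(c_quantum, c_classical=1.0):
    """n at which c_quantum * 2**(n/2) == c_classical * 2**n;
    for larger n the quantum side wins (assuming c_quantum > c_classical)."""
    return 2 * math.log2(c_quantum / c_classical)

# e.g. a per-step quantum overhead ratio of 2**20 (~10^6)
# pushes the break-even point out to n = 40
```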
...(continued)@quantumfugazi, your argument appears self-consistent. However, my understanding of this paper is that the problem statement (Section II) is *not* intended to be bounded by any $N_{\mathrm{max}}$, nor to incorporate the effects of relativity. To put it more bluntly, I assert that this is *not* act
...(continued)I mentioned the physics problem first because, theoretically, one does not have to shrink the mass-spring setup (i.e. the Spring computer) at all. So in that case, if the physics problem is within the Newtonian limit, the mass-spring setup will also operate in the Newtonian limit.
Now when it co
...(continued)One thing your analog (aka Spring) algorithm fails to consider is the effect of noise in your measurement apparatus. The quantum algorithm provided in the preprint gives an $\epsilon$-close state to the actual output state, which can then provide measurements that satisfy arbitrary error margins that a
...(continued)Thanks Robin, will try to give you a quick rebuttal soon. I want to emphasize that I have been right all along: My Spring algorithm solves the coupled classical-oscillator problem in $O(T)$, independent of $n$.
As you pointed out, there might be some extreme cases where its runtime suddenly depe
...(continued)Hi Vladimir, it's an interesting point, though not one ultimately affecting quantum advantage (see below). I would say that, yes, for classical simulations in the hardest cases where tensor network approaches start failing in their compression (i.e. growth of entanglement) and the classical costs st
...(continued)Hi Bibek,
Your question is an interesting one, as Alex Meiburg's answer showed. His answer is more insightful than what I would have said about the optimal number of iterations, so I'll refer to his answer. From our point of view in the paper, we were just considering two cases. One is the case wh
...(continued)Thanks, Ryan, for the constructive points which definitely help to sharpen the discussion.
One of our main points is that for any Grover-type problem too hard to solve classically, if such a problem eventually gets solved with a quantum algorithm, the algorithm used to solve it will not be Grover
...(continued)@quantumfugazi: The effects of relativity are relevant not to the *problem*, but to the practical question of how rapidly a given computer can solve that problem. The problem that we're trying to solve is a well-defined math problem, and it does indeed describe a Newtonian (not relativistic) physi
...(continued)@Robin, Thank you for your kind comment. I will read Scott's paper and look at the quantum gate details. However, since the goal is to solve Newton's equation, the assumption of the physics problem size ($N=2^n$, also $x$, $\dot{x}$, $a$) is not in the limit that requires Einstein's relativities.
...(continued)@Ryan Thanks for the comment. In my first argument, I am focusing my analysis on time complexity, in which I also talk about speedup in terms of $n$ (like Grover's quadratic speedup).
I only wrote about "resource" (or space complexity) in my second argument after Aram pointed out that the initial