...(continued)Nice work on superchannels. However, I have a simple question about the motivation of studying QSCs. Superchannels are naturally characterized by the requirement that they should map bipartite quantum channels to bipartite quantum channels even if they act on one party of the channel. On the other
...(continued)It might be worth saying more about the implementability of your encoding $R_f$.
Your Figure 1 (b) seems to suggest that one needs over 50 orders of magnitude of precision.
You mention that this leads to numerical instabilities.
However, I am more worried about experimental implementations in act
...(continued)It seems to be a nice clash between you and Nicolas - a lot to think about, thank you (both)!
> Wouldn’t we be more free if we can determine our next decisions
> based on how we are now, rather than letting them at the mercy of
> randomness?

My nearest next decision is just almost now, so I
Thanks for the clarifications, and for the nice paper!
...(continued)Hi, thanks a lot for the reply! Our paper’s results mainly differ from them as follows:
- In the two papers you mentioned, the authors consider the *exact* tensor contraction of the *approximate* QFT (AQFT) on product inputs and outputs. Specifically, they are simulating $\langle x| C |y\rangle$,
How do your results relate to [arXiv:quant-ph/0611156] and [Phys. Rev. A 76,
042321 (2007)]? On the surface, the conclusions look quite similar.
...(continued)Interesting paper! One point: I wouldn't say that current quantum NNs don't have inductive biases; my paper on QRNNs (https://arxiv.org/abs/2006.14619) has a circuit designed to mimic latent state read and write operations, as well as many circuits used in the context of many-body Hamiltonians feat
Thanks for letting us know about this relevant paper. We have not yet compared our bound with yours. We will have a closer look and write back to you.
Have you compared your new bound to that of equation (153) of https://arxiv.org/abs/0907.3386? Note that this is NOT the Petz map, and furthermore the gamma quantities do not involve anything at all similar to a Petz map.
...(continued)Thank you for pointing it out. You are right, the basis $F_{\diamond}$ satisfying our condition rarely exists unless $E$ and $F$ are already MUBs.
We will update the paper soon, but we would like to mention here that our main message is still valid: $\Delta_q$ is bounded by $\Delta_{cl, E}$, $\De
...(continued)To put this another way, choosing the computational basis to be the $F$-basis, equation (12) and the one directly above it say that the vectors in $F_\Diamond$ are formed by rescaling each coordinate of each $E$-vector (by a positive real multiple) to have magnitude $d^{1/2}$.
However the
...(continued)Unfortunately, there is a bug in equation (12), which overdetermines the phases of the coordinates of the mutually unbiased basis $F_\diamond$ in the $F$-basis.
Since generally there is no MUB satisfying all these phase conditions, the decoder of Theorem 1 does not exist for most bases E and F in dimension
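For what it's worth, the mutual-unbiasedness condition itself is easy to test numerically; here is a minimal sketch using the computational and Fourier bases as a standard MUB pair (a generic illustration, not the paper's $E$, $F$, or $F_\diamond$):

```python
import numpy as np

d = 4
E = np.eye(d)                             # computational basis
F = np.fft.fft(np.eye(d), norm='ortho')   # Fourier basis, unbiased w.r.t. E

# mutual unbiasedness: |<e_i|f_j>|^2 = 1/d for every pair i, j
overlaps = np.abs(E.conj().T @ F) ** 2
print(np.allclose(overlaps, 1 / d))  # True
```

The phase freedom the comment above refers to is visible here: multiplying any column of `F` by a phase leaves all the magnitudes $|\langle e_i|f_j\rangle|$, and hence unbiasedness, unchanged.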
...(continued)Contrary to the author, I certainly don't know that I or anyone else has free will. I just know that I exist as I keep experiencing things. I can't even imagine how free will would work. One would first have to define free will in a coherent manner to have a meaningful discussion about it. What exac
...(continued)The basic suggestion of "maybe we can combine economics with gauge theory" is at least as old as a 1994 essay by Lane Hughston, better known to physicists for his work on [density-matrix decompositions](https://en.wikipedia.org/wiki/Schr%C3%B6dinger%E2%80%93HJW_theorem):
L. P. Hughston (1994), "S
...(continued)Hi Robert
I fully agree :). Indeed, high-weight Paulis are an "expensive" observable for classical shadows in general, as they have exponential sample complexity in all three depth regimes. Nevertheless, the sample complexity can be orders of magnitude different depending on the choice of depth used
Hi, there is an error in Figure 1: your logical Z doesn't commute with one of the stabilizers.
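For readers who want to check claims like this themselves: commutation of Pauli strings reduces to the standard symplectic product. A minimal generic sketch (using the 3-qubit repetition code as a toy example, not the code from the paper's Figure 1):

```python
import numpy as np

def pauli_to_xz(p):
    """Map a Pauli string like 'XZIY' to its (x, z) binary symplectic vectors."""
    x = np.array([c in 'XY' for c in p], dtype=int)
    z = np.array([c in 'ZY' for c in p], dtype=int)
    return x, z

def commutes(p, q):
    """Two Pauli strings commute iff their symplectic product is even."""
    px, pz = pauli_to_xz(p)
    qx, qz = pauli_to_xz(q)
    return (px @ qz + pz @ qx) % 2 == 0

# toy example: logical Z of the 3-qubit repetition code vs. its stabilizers
logical_Z = 'ZZZ'
stabilizers = ['ZZI', 'IZZ']
print(all(commutes(logical_Z, s) for s in stabilizers))  # True
```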
...(continued)Hi Hakop,
Thank you for the prompt reply! That makes the advancement much clearer!
To summarize, for the computational task (1), the efficiency in your work refers to the fact that one can compute the estimated value of any linear combination of $\mathrm{poly}(n)$ general Paulis from classical sha
...(continued)Hi Robert
Thanks for your comment. There are two kinds of efficiency that are important here. The first of these is relevant to both your shadow scheme and ours. This is the sample complexity associated with producing accurate estimates. As you correctly point out, for high-weight Paulis the shad
...(continued)Thank you all for the nice work!
Should there be a constraint that the poly(n) Paulis must all be few-body (similar to random Pauli measurements) in the abstract? Prior works proved that we could not efficiently estimate many general Paulis using single-copy measurements.
Best regards,
Robert (Hs
Nice work! Can you give a short explanation for the name _Quark_ ?
...(continued)We are withdrawing this note from the arXiv -- the withdrawal will take effect at the next arXiv update.
The withdrawal is due to an uncorrectable error in the proof.
A detailed explanation of the error is hosted on my website at
https://nirkhe.github.io/simple_nlts_retraction.pdf
Apologie
Thank you for your prompt reply, Seth! Once again, congratulations on your paper to both!
...(continued)First of all, congratulations on your (very recent!) excellent preprint on the QMA1-hardness of clique homology. Quite a few papers on quantum homology have appeared on the arXiv in the last week: we are still analyzing the overlaps, connected components, and voids in this `homological hundredth mo
...(continued)Dear Alexander and Seth,
Congratulations on your paper! Your comments on when exponential quantum advantage is possible are very interesting!
You may not be aware of this but I should point out that the result reported in your **Theorem 1** (*#P-hardness of exact Betti numbers of clique-dense com
...(continued)Hi Ryan, I'll try to keep my reply short ;) (Also happy to take the discussion offline if it looks likely to continue indefinitely).
What you write is correct indeed. The hard-core fermion model on a graph considered in their paper is precisely the independence complex for that graph: i.e. the '
...(continued)Thanks Ismail. I think the new version of your abstract that you've recently uploaded is much improved. I agree that the relaxation of the problem you describe in your most recent post is likely to admit a substantial quantum speedup for many data sets. I agree it's cool. But, as you mention, the ma
...(continued)Hi everyone, I'm tickled pink by the fascinating discussions here, thank you!
Just to let everyone know, we have uploaded a new version to arxiv incorporating the above suggestions (with acknowledgements). We're still happy to make further changes as they crop up.
One further thought combining
...(continued)Hi all,
Related to your question, Chris: I agree that it is not sufficient to just have $\beta_k$ growing exponentially; it should grow exactly like $2^n/\mathrm{poly}(n)$, which indeed is fine-tuned. On the other hand, as you know well, one should keep in mind that the LGZ algorithm does not estimate the *
...(continued)Hi all,
Nice that you are having this discussion! I agree with the sentiment of Ryan's comments, in that it feels unlikely that a real-world dataset will happen to be one for which we can obtain an exponential ('proved' or otherwise) advantage over classical algorithms.
On that note: you both
...(continued)Thank you for your thoughtful reply. I think we’re basically in agreement about the facts of the matter. While these terms are a bit ambiguous, the requirement that data have exponentially many holes still seems pathological enough that I would hesitate to call it “arbitrary” and “non-handcrafted” w
...(continued)To clarify, the nice work by Gyurik-Cade-Dunjko that you mention does not claim to show DQC1-hardness of estimating normalized "Betti" numbers. What they establish, improving on work by Brandao, is DQC1-hardness of determining the low-lying spectrum of a general Hamiltonian, which has nothing to do
...(continued)Dear Ryan, Aram, and Travis
Thank you very much for this discussion and for sharing your precious time and insights. This is what we love about arXiv/SciRate: it allows us to improve our preprint before publication.
Thank you for doing a great job of getting to the heart of what seem
...(continued)Thanks for your question. We did indeed do some experiments as you suggest, but decided to keep the message simple and omit them.
For a quantum memory experiment, we found you could roughly halve the buffer region with no significant impact on the logical fidelity, but improving the decoding
...(continued)As I mentioned in my comment, the DQC1 results from Dunjko and others pertain to estimating the normalized Betti number - a quantity that exponentially concentrates to zero unless the Betti number is exponentially large. Having exponentially large Betti number is a very unusual property that we shou
...(continued)There is this work from Dunjko et al: https://arxiv.org/abs/2005.02607
From the abstract:
"In this paper, we study the quantum-algorithmic methods behind the algorithm for topological data analysis of Lloyd, Garnerone and Zanardi through this lens. We provide evidence that the problem solved by th
Is this problem known to be DQC1-complete? That would be one way to address Ryan's concern.
...(continued)The abstract of this paper suggests that the quantum topological data analysis algorithm provides a “provable exponential speedup on arbitrary classical (non-handcrafted) data”. This is a strong claim, especially in light of arguments, see e.g. [arXiv:1906.07673][1], that super-polynomial speedup is
...(continued)Did you experiment with varying the sizes of $n_{com}$ and $n_{buf}$? Intuitively, larger $n_{buf}$ means decreased error rate but increased overhead... and large $n_{com}$ decreases both but increases latency. Would be nice to see empirically the benefits or drawbacks of values other than d, especi
...(continued)A special case of your formula for the probability distribution $p(x | n, m, k, l)$ was obtained by Montanaro in [arXiv:0903.5466][1]. Namely, when $n = k$ and $m = l$ we have $p(x | k, l, k, l) = \mathrm{Pr}[x|l]$, where $\mathrm{Pr}[x|l]$ is given in Montanaro's Lemma 4. It denotes the probability
Equation (25) appears incorrect to me: you write the second-order Trotter formula as $U_2(dt) = \left[ U_1 (dt/2) U_1(dt/2)^T\right]^m$, but since $dt = t/m$, this would correspond to a Trotter formula for $U_2(t)$, not $U_2(dt)$.
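One way to sanity-check this concretely (a generic numerical sketch with random Hermitian stand-ins $A$, $B$ for the Hamiltonian terms; none of these names come from the paper): taking $m$ repetitions of the symmetrized pair of half-steps reproduces the evolution over the *total* time $t$, not over a single step $dt$.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    """Random d x d Hermitian matrix."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

def U(H, s):
    """Exact exp(-i H s) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * s)) @ V.conj().T

d, t, m = 4, 1.0, 50
A, B = rand_herm(d), rand_herm(d)
dt = t / m

def step(s):      # first-order step U1(s)
    return U(A, s) @ U(B, s)

def step_rev(s):  # same factors in reverse order
    return U(B, s) @ U(A, s)

# m repetitions of the symmetrized pair of half-steps
prod = np.linalg.matrix_power(step(dt / 2) @ step_rev(dt / 2), m)

# the product approximates the full-time evolution, not a single dt step
print(np.linalg.norm(prod - U(A + B, t)))   # small (second-order Trotter error)
print(np.linalg.norm(prod - U(A + B, dt)))  # order one
```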
...(continued)Hey, thanks for your question.
So in our work, we find that for the ansatze we consider, the onset of barren plateaus is related to the width of the causal cone of an observable.
The width itself expands via entangling gates like CNOTs in the circuit architecture. In the qMPS ansatz, the ent
...(continued)Hi, congratulations on your work. You state that the barren plateau is absent for qMERA and qTTN but present for qMPS. Previous works show that entanglement can induce barren plateaus, and I assume the ensemble of states generated by qMPS has smaller entanglement than the other two and a
Thanks! Indeed, there is a typo in the direction of the majorization symbol in Definition 1 of Appendix C.
Outstanding work. Small typo: it seems the direction of the majorization symbol in Definition 1 of Appendix C is reversed.
This paper refers to the version of RQM that existed before the introduction of "cross-perspective links" in [arXiv:2203.13342](http://arxiv.org/abs/2203.13342), a change that amounts to saying, "Well, we didn't want all those 'relative facts' anyway."
...(continued)Hi, Anthony! Thank you for pointing out the Oded Goldreich survey. I think it's really neat! I hadn't read it before, since the original paper [30] was written so well that I never needed to look anywhere else. I believe you are referring to the comment on page 3, where he indeed considers somet
Hi Pavel,
You're right that ref [30] doesn't use double covers, although the overview from Oded Goldreich that came out a few weeks later did:
https://eccc.weizmann.ac.il/report/2021/175/
Of course, we'll be happy to give you proper credit when we update the manuscript.
Best,
Anthony & Gilles
...(continued)Congratulations! A very nice result with much shorter proofs than ever before! It is great that with all these recent simplifications, each successive paper rapidly approaches the high standards of simplicity and elegance set by Sipser and Spielman in 1996. However, with all due respect, I believe you inc
Note that Eqs. 7 through 14 in the arXiv version of this paper are not correct. The correct expressions appear in the published version in Physical Review Letters. It's pretty straightforward to fix these equations if you are following the paper by hand, but be warned!