# Top arXiv papers

• The curse of dimensionality associated with the Hilbert space of spin systems provides a significant obstruction to the study of condensed matter systems. Tensor networks have proven an important tool in attempting to overcome this difficulty in both the numerical and analytic regimes. These notes form the basis for a seven-lecture course, introducing the basics of a range of common tensor networks and algorithms. In particular, we cover: introductory tensor network notation, applications to quantum information, basic properties of matrix product states, a classification of quantum phases using tensor networks, algorithms for finding matrix product states, basic properties of projected entangled pair states, and multiscale entanglement renormalisation ansatz states. The lectures are intended to be generally accessible, although the relevance of many of the examples may be lost on students without a background in many-body physics/quantum information. For each lecture, several problems are given, with worked solutions in an ancillary file.
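As a toy illustration of the matrix product state formalism these lectures cover (an editorial sketch, not from the notes themselves), the GHZ state on $n$ qubits can be written as a bond-dimension-2 MPS, so that each amplitude is a product of small matrices rather than an entry of a $2^n$-dimensional vector:

```python
import numpy as np
from itertools import product

n = 4
# GHZ state as a bond-dimension-2 MPS: the per-site tensors A[s] are 2x2 matrices
A = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
vL = np.ones(2)   # left boundary vector
vR = np.ones(2)   # right boundary vector

def amplitude(bits):
    """Contract the MPS: <bits|psi> = vL . A[s1] ... A[sn] . vR / sqrt(2)."""
    M = vL
    for s in bits:
        M = M @ A[s]
    return (M @ vR) / np.sqrt(2)

# Reassemble the full state vector to check: only |00...0> and |11...1> survive
psi = np.array([amplitude(b) for b in product([0, 1], repeat=n)])
```

The point of the formalism is that one never needs to build `psi` explicitly; the $n$ small tensors are the state.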
• Apr 27 2016 quant-ph hep-th arXiv:1604.07450v2
This is the 10th and final chapter of my book on Quantum Information, based on the course I have been teaching at Caltech since 1997. An earlier version of this chapter (originally Chapter 5) has been available on the course website since 1998, but this version is substantially revised and expanded. Topics covered include classical Shannon theory, quantum compression, quantifying entanglement, accessible information, and using the decoupling principle to derive achievable rates for quantum protocols. This is a draft, pre-publication copy of Chapter 10, which I will continue to update. See the URL on the title page for further updates and drafts of other chapters, and please send me an email if you notice errors.
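A worked micro-example of the quantum compression topic (a standard textbook ensemble, chosen here for illustration and not taken from the chapter): Schumacher's theorem says an i.i.d. source of pure states is compressible to $S(\rho)$ qubits per signal state. For an equal mixture of $|0\rangle$ and $|+\rangle$:

```python
import numpy as np

# Equal mixture of |0> and |+>; the compression rate is S(rho) qubits per signal
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)

evals = np.linalg.eigvalsh(rho)
S = -sum(p * np.log2(p) for p in evals if p > 1e-12)   # von Neumann entropy
# S ~ 0.60: the source compresses to about 0.6 qubits per emitted state
```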
• These notes are from a series of lectures given at the Universidad de Los Andes in Bogotá, Colombia on some topics of current interest in quantum information. While they aim to be self-contained, they are necessarily incomplete and idiosyncratic in their coverage. For a more thorough introduction to the subject, we recommend one of the textbooks by Nielsen and Chuang or by Wilde, or the lecture notes of Mermin, Preskill or Watrous. Our notes by contrast are meant to be a relatively rapid introduction into some more contemporary topics in this fast-moving field. They are meant to be accessible to advanced undergraduates or starting graduate students.
• Jan 19 2017 quant-ph arXiv:1701.05182v2
Quantum many-body systems exhibit a bewilderingly diverse range of behaviours. Here, we prove that all the physics of every other quantum many-body system is replicated in certain simple, "universal" quantum spin-lattice models. We first characterise precisely and in full generality what it means for one quantum many-body system to replicate the entire physics of another. We then fully classify two-qubit interactions, determining which are universal in this very strong sense and showing that certain simple spin-lattice models are already universal. Examples include the Heisenberg and XY models on a 2D square lattice (with non-uniform coupling strengths). This shows that locality, symmetry, and spatial dimension need not constrain the physics of quantum many-body systems. Our results put the practical field of analogue Hamiltonian simulation on a rigorous footing and show that far simpler systems than previously thought may be viable simulators. We also take a first step towards justifying why error correction may not be required for this application of quantum information technology.
• Tsirelson's problem asks whether the commuting operator model for two-party quantum correlations is equivalent to the tensor-product model. We give a negative answer to this question by showing that there are non-local games which have perfect commuting-operator strategies, but do not have perfect tensor-product strategies. The weak Tsirelson problem, which is known to be equivalent to Connes' embedding problem, remains open. The examples we construct are instances of (binary) linear system games. For such games, previous results state that the existence of perfect strategies is controlled by the solution group of the linear system. Our main result is that every finitely-presented group embeds in some solution group. As an additional consequence, we show that the problem of determining whether a linear system game has a perfect commuting-operator strategy is undecidable.
• The Systematic Normal Form (SysNF) is a canonical form of lattices introduced in [Eldar, Shor '16], in which the basis entries satisfy a certain co-primality condition. Using a "smooth" analysis of lattices by SysNF lattices we design a quantum algorithm that can efficiently solve the following variant of the bounded-distance-decoding problem: given a lattice $L$, a vector $v$, and numbers $b = \lambda_1(L)/n^{17}$, $a = \lambda_1(L)/n^{13}$, decide if $v$'s distance from $L$ is in the range $[a/2, a]$ or at most $b$, where $\lambda_1(L)$ is the length of $L$'s shortest non-zero vector. Improving these parameters to $a = b = \lambda_1(L)/\sqrt{n}$ would invalidate one of the security assumptions of the Learning-with-Errors (LWE) cryptosystem against quantum attacks.
• Jan 05 2017 quant-ph arXiv:1701.01062v1
An ideal system of $n$ qubits has $2^n$ dimensions. This exponential grants power, but also hinders characterizing the system's state and dynamics. We study a new problem: the qubits in a physical system might not be independent. They can "overlap," in the sense that an operation on one qubit slightly affects the others. We show that allowing for slight overlaps, $n$ qubits can fit in just polynomially many dimensions. (Defined in a natural way, all pairwise overlaps can be $\leq \epsilon$ in $n^{O(1/\epsilon^2)}$ dimensions.) Thus, even before considering issues like noise, a real system of $n$ qubits might inherently lack any potential for exponential power. On the other hand, we also provide an efficient test to certify exponential dimensionality. Unfortunately, the test is sensitive to noise. It is important to devise more robust tests on the arrangements of qubits in quantum devices.
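The dimension-counting here has a simple classical analogue (real random vectors, in the spirit of the Johnson-Lindenstrauss lemma; this is only an editorial toy, not the paper's qubit construction): far more nearly-orthogonal directions fit into $d$ dimensions than $d$ exactly orthogonal ones.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 500, 100                    # 500 directions squeezed into only 100 dimensions
V = rng.normal(size=(n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # random unit vectors

G = np.abs(V @ V.T)                # all pairwise overlaps
np.fill_diagonal(G, 0.0)
max_overlap = G.max()              # concentrates near sqrt(2 ln n / d), well below 1
```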
• We give a quantum algorithm for solving semidefinite programs (SDPs). It has worst-case running time $n^{1/2} m^{1/2} s \cdot \mathrm{poly}(\log n, \log m, R, 1/\delta)$, with $n$ and $s$ the dimension and sparsity of the input matrices, respectively, $m$ the number of constraints, $\delta$ the accuracy of the solution, and $R$ an upper bound on the trace of the optimal solution. This gives an unconditional square-root speed-up over any classical method for solving SDPs in both $n$ and $m$. We prove the algorithm cannot be substantially improved, giving an $\Omega(n^{1/2} + m^{1/2})$ quantum lower bound for solving semidefinite programs with constant $s$, $R$ and $\delta$. We then argue that in some instances the algorithm offers even exponential speed-ups. This is the case whenever the quantum Gibbs states of Hamiltonians given by linear combinations of the input matrices of the SDP can be prepared efficiently on a quantum computer. An example is SDPs in which the input matrices have low rank: for SDPs with the maximum rank of any input matrix bounded by $r$, we show the quantum algorithm runs in time $\mathrm{poly}(\log n, \log m, r, R, \delta)\,m^{1/2}$. The quantum algorithm is constructed by a combination of quantum Gibbs sampling and the multiplicative weight method. In particular, it is based on a classical algorithm of Arora and Kale for approximately solving SDPs. We present a modification of their algorithm that eliminates the need to solve an inner linear program, which may be of independent interest.
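The multiplicative weight method at the core of this algorithm can be sketched classically (a simplified editorial sketch of the matrix multiplicative weights primitive, with an illustrative alternating-loss instance; it is not the quantum algorithm itself). The maintained state $\rho_t \propto e^{-\eta \sum_{\tau<t} M_\tau}$ is exactly a Gibbs state, which is what the quantum algorithm prepares by Gibbs sampling:

```python
import numpy as np

def mmw_loop(losses, eta):
    """Matrix multiplicative weights: play rho_t ∝ exp(-eta * sum of past losses)."""
    d = losses[0].shape[0]
    S = np.zeros((d, d))
    total = 0.0
    for M in losses:
        w, V = np.linalg.eigh(-eta * S)
        w = w - w.max()                       # stabilize the matrix exponential
        W = (V * np.exp(w)) @ V.conj().T
        rho = W / np.trace(W)                 # Gibbs state of accumulated losses
        total += float(np.real(np.trace(rho @ M)))
        S = S + M
    return total, S

T, eta = 100, 0.1
# Alternating diagonal losses: the best fixed density matrix suffers loss T/2
losses = [np.diag([1.0, 0.0]) if t % 2 == 0 else np.diag([0.0, 1.0])
          for t in range(T)]
total, S = mmw_loop(losses, eta)
best_fixed = np.linalg.eigvalsh(S).min()      # loss of best fixed density matrix
regret = total - best_fixed                   # stays small (~1.25 here)
```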
• In this work we explore a correspondence between quantum circuits and low-degree polynomials over the finite field F_2. Any quantum circuit made up of Hadamard, Z, controlled-Z and controlled-controlled-Z gates gives rise to a degree-3 polynomial over F_2 such that calculating quantum circuit amplitudes is equivalent to counting zeroes of the corresponding polynomial. We exploit this connection, which is especially clean and simple for this particular gate set, in two directions. First, we give proofs of classical hardness results based on quantum circuit concepts. Second, we find efficient classical simulation algorithms for certain classes of quantum circuits based on efficient algorithms for classes of polynomials.
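A concrete instance of this correspondence, in the simplest (IQP-style) special case where the diagonal gates are sandwiched between two layers of Hadamards (interleaved Hadamards would introduce extra variables; this sketch and the particular gate choice are illustrative, not from the paper): Z contributes a linear term, CZ a quadratic term, and CCZ a cubic term, and the $\langle 0^n|$ amplitude equals $(N_0 - N_1)/2^n$, the normalized gap between zeros and ones of the polynomial.

```python
import itertools
import numpy as np

n = 3
# Diagonal layer: Z on qubit 0, CZ on (0,1), CCZ on (0,1,2), i.e. a
# degree-3 polynomial over F_2:
def f(x):
    return x[0] ^ (x[0] & x[1]) ^ (x[0] & x[1] & x[2])

xs = list(itertools.product([0, 1], repeat=n))
N0 = sum(1 for x in xs if f(x) == 0)          # count zeros of the polynomial
gap_amp = (2 * N0 - 2**n) / 2**n              # predicted <0^n| H D H |0^n>

# Direct statevector check of the same amplitude
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H1
for _ in range(n - 1):
    Hn = np.kron(Hn, H1)
phases = np.array([(-1.0) ** f(x) for x in xs])   # the diagonal layer
state = Hn @ (phases * Hn[:, 0])                  # H^n D H^n |0^n>
```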
• These are lecture notes from a weeklong course in quantum complexity theory taught at the Bellairs Research Institute in Barbados, February 21-25, 2016. The focus is quantum circuit complexity---i.e., the minimum number of gates needed to prepare a given quantum state or apply a given unitary transformation---as a unifying theme tying together several topics of recent interest in the field. Those topics include the power of quantum proofs and advice states; how to construct quantum money schemes secure against counterfeiting; and the role of complexity in the black-hole information paradox and the AdS/CFT correspondence (through connections made by Harlow-Hayden, Susskind, and others). The course was taught to a mixed audience of theoretical computer scientists and quantum gravity / string theorists, and starts out with a crash course on quantum information and computation in general.
• We address the question of whether symmetry-protected topological (SPT) order can persist at nonzero temperature, with a focus on understanding the thermal stability of several models studied in the theory of quantum computation. We present three results in this direction. First, we prove that nontrivial SPT order protected by a global on-site symmetry cannot persist at nonzero temperature, demonstrating that several quantum computational structures protected by such on-site symmetries are not thermally stable. Second, we prove that the 3D cluster state model used in the formulation of topological measurement-based quantum computation possesses a nontrivial SPT-ordered thermal phase when protected by a global generalized (1-form) symmetry. The SPT order in this model is detected by long-range localizable entanglement in the thermal state, which compares with related results characterizing SPT order at zero temperature in spin chains using localizable entanglement as an order parameter. Our third result is to demonstrate that the high error tolerance of this 3D cluster state model for quantum computation, even without a protecting symmetry, can be understood as an application of quantum error correction to effectively enforce a 1-form symmetry.
• Nov 15 2016 quant-ph arXiv:1611.04471v1
Adiabatic quantum computing (AQC) started as an approach to solving optimization problems, and has evolved into an important universal alternative to the standard circuit model of quantum computing, with deep connections to both classical and quantum complexity theory and condensed matter physics. In this review we give an account of most of the major theoretical developments in the field, while focusing on the closed-system setting. The review is organized around a series of topics that are essential to an understanding of the underlying principles of AQC, its algorithmic accomplishments and limitations, and its scope in the more general setting of computational complexity theory. We present several variants of the adiabatic theorem, the cornerstone of AQC, and we give examples of explicit AQC algorithms that exhibit a quantum speedup. We give an overview of several proofs of the universality of AQC and related Hamiltonian quantum complexity theory. We finally devote considerable space to Stoquastic AQC, the setting of most AQC work to date, where we discuss obstructions to success and their possible resolutions. To be submitted to Reviews of Modern Physics.
• The Quantum Approximate Optimization Algorithm (QAOA) is designed to run on a gate model quantum computer and has shallow depth. It takes as input a combinatorial optimization problem and outputs a string that satisfies a high fraction of the maximum number of clauses that can be satisfied. For certain problems the lowest depth version of the QAOA has provable performance guarantees although there exist classical algorithms that have better guarantees. Here we argue that beyond its possible computational value the QAOA can exhibit a form of Quantum Supremacy in that, based on reasonable complexity theoretic assumptions, the output distribution of even the lowest depth version cannot be efficiently simulated on any classical device. We contrast this with the case of sampling from the output of a quantum computer running the Quantum Adiabatic Algorithm (QADI) with the restriction that the Hamiltonian that governs the evolution is gapped and stoquastic. Here we show that there is an oracle that would allow sampling from the QADI but even with this oracle, if one could efficiently classically sample from the output of the QAOA, the Polynomial Hierarchy would collapse. This suggests that the QAOA is an excellent candidate to run on near term quantum computers not only because it may be of use for optimization but also because of its potential as a route to establishing quantum supremacy.
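A minimal depth-1 QAOA sketch on a toy MaxCut instance (statevector simulation; the triangle graph and the grid of angles are illustrative editorial choices, not from the paper): prepare the uniform superposition, apply the cost-dependent phase layer, then a transverse-field mixing layer, and read off the expected cut.

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]    # triangle: max cut = 2, random guessing gives 1.5
n, dim = 3, 2**3

def cut(z):
    return sum(1 for i, j in edges if (z >> i & 1) != (z >> j & 1))

C = np.array([cut(z) for z in range(dim)], dtype=float)   # diagonal cost operator

def expected_cut(gamma, beta):
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # |+>^n
    psi = np.exp(-1j * gamma * C) * psi                   # phase layer e^{-i gamma C}
    rx = np.cos(beta) * np.eye(2) \
         - 1j * np.sin(beta) * np.array([[0, 1], [1, 0]])
    U = rx
    for _ in range(n - 1):
        U = np.kron(U, rx)                                # mixer e^{-i beta sum X_i}
    psi = U @ psi
    return float(np.real(psi.conj() @ (C * psi)))

# Even at depth 1, some angles beat the random-guessing value of 1.5
best = max(expected_cut(g, b)
           for g in np.linspace(0, np.pi, 20)
           for b in np.linspace(0, np.pi, 20))
```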
• This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning from membership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples.
• In the near future, there will likely be special-purpose quantum computers with 40-50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible. First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Quantum AI group at Google. We show that there's a natural hardness assumption, which has nothing to do with sampling, yet implies that no efficient classical algorithm can pass a statistical test that the quantum sampling procedure's outputs do pass. Compared to previous work, the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled. Second, in an attempt to refute our hardness assumption, we give a new algorithm for simulating a general quantum circuit with n qubits and m gates in polynomial space and m^O(n) time. We then discuss why this and other known algorithms fail to refute our assumption. Third, resolving an open problem of Aaronson and Arkhipov, we show that any strong quantum supremacy theorem--of the form "if approximate quantum sampling is classically easy, then PH collapses"--must be non-relativizing. Fourth, refuting a conjecture by Aaronson and Ambainis, we show that the Fourier Sampling problem achieves a constant versus linear separation between quantum and randomized query complexities. Fifth, we study quantum supremacy relative to oracles in P/poly. Previous work implies that, if OWFs exist, then quantum supremacy is possible relative to such oracles. We show that some assumption is needed: if SampBPP=SampBQP and NP is in BPP, then quantum supremacy is impossible relative to such oracles.
• We give an introduction to the theory of multi-partite entanglement. We begin by describing the "coordinate system" of the field: Are we dealing with pure or mixed states, with single or multiple copies, what notion of "locality" is being used, do we aim to classify states according to their "type of entanglement" or to quantify it? Building on the general theory of multi-partite entanglement - to the extent that it has been achieved - we turn to explaining important classes of multi-partite entangled states, including matrix product states, stabilizer and graph states, bosonic and fermionic Gaussian states, addressing applications in condensed matter theory. We end with a brief discussion of various applications that rely on multi-partite entangled states: quantum networks, measurement-based quantum computing, non-locality, and quantum metrology.
• (Abridged abstract.) In this thesis we introduce new models of quantum computation to study the emergence of quantum speed-up in quantum computer algorithms. Our first contribution is a formalism of restricted quantum operations, named normalizer circuit formalism, based on algebraic extensions of the qubit Clifford gates (CNOT, Hadamard and $\pi/4$-phase gates): a normalizer circuit consists of quantum Fourier transforms (QFTs), automorphism gates and quadratic phase gates associated to a set $G$, which is either an abelian group or abelian hypergroup. Though Clifford circuits are efficiently classically simulable, we show that normalizer circuit models encompass Shor's celebrated factoring algorithm and the quantum algorithms for abelian Hidden Subgroup Problems. We develop classical-simulation techniques to characterize under which scenarios normalizer circuits provide quantum speed-ups. Finally, we devise new quantum algorithms for finding hidden hyperstructures. The results offer new insights into the source of quantum speed-ups for several algebraic problems. Our second contribution is an algebraic (group- and hypergroup-theoretic) framework for describing quantum many-body states and classically simulating quantum circuits. Our framework extends Gottesman's Pauli Stabilizer Formalism (PSF), wherein quantum states are written as joint eigenspaces of stabilizer groups of commuting Pauli operators: while the PSF is valid for qubit/qudit systems, our formalism can be applied to discrete- and continuous-variable systems, hybrid settings, and anyonic systems. These results enlarge the known families of quantum processes that can be efficiently classically simulated. This thesis also establishes a precise connection between Shor's quantum algorithm and the stabilizer formalism, revealing a common mathematical structure in several quantum speed-ups and error-correcting codes.
• Mar 30 2016 quant-ph cs.DS cs.IR arXiv:1603.08675v3
A recommendation system uses the past purchases or ratings of $n$ products by a group of $m$ users, in order to provide personalized recommendations to individual users. The information is modeled as an $m \times n$ preference matrix which is assumed to have a good rank-$k$ approximation, for a small constant $k$. In this work, we present a quantum algorithm for recommendation systems that has running time $O(\text{poly}(k)\text{polylog}(mn))$. All known classical algorithms for recommendation systems that work through reconstructing an approximation of the preference matrix run in time polynomial in the matrix dimension. Our algorithm provides good recommendations by sampling efficiently from an approximation of the preference matrix, without reconstructing the entire matrix. For this, we design an efficient quantum procedure to project a given vector onto the row space of a given matrix. This is the first algorithm for recommendation systems that runs in time polylogarithmic in the dimensions of the matrix and provides an example of a quantum machine learning algorithm for a real world application.
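A classical sketch of the core primitive (the quantum algorithm performs this step via quantum singular value estimation; below is only the linear-algebra analogue on synthetic data, with all sizes illustrative): project a user's row onto the top-$k$ row space of the preference matrix, then sample an item proportionally to the squared entries of the projection, mimicking how measuring a quantum state samples proportionally to squared amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_items, k = 50, 30, 3
# Synthetic preference matrix: rank k plus a little noise
P = rng.normal(size=(m, k)) @ rng.normal(size=(k, n_items)) \
    + 0.01 * rng.normal(size=(m, n_items))

_, _, Vt = np.linalg.svd(P, full_matrices=False)
proj = Vt[:k].T @ Vt[:k]                 # projector onto the top-k row space

user = P[0]
approx = user @ proj                     # the user's row, denoised by projection
probs = approx**2 / (approx**2).sum()
item = rng.choice(n_items, p=probs)      # recommend an item ∝ squared amplitude
```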
• According to quantum theory, a measurement may have multiple possible outcomes. Single-world interpretations assert that, nevertheless, only one of them "really" occurs. Here we propose a gedankenexperiment where quantum theory is applied to model an experimenter who herself uses quantum theory. We find that, in such a scenario, no single-world interpretation can be logically consistent. This conclusion extends to deterministic hidden-variable theories, such as Bohmian mechanics, for they impose a single-world interpretation.
• We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by showing that approximate symmetry operators---unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small---which possess certain mutual commutation relations can be restricted to the ground space with low distortion. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations, and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space, and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators which may be of independent interest.
• Recent progress implies that a crossover between machine learning and quantum information processing benefits both fields. Traditional machine learning has dramatically improved the benchmarking and control of experimental quantum computing systems, including adaptive quantum phase estimation and designing quantum computing gates. On the other hand, quantum mechanics offers tantalizing prospects to enhance machine learning, ranging from reduced computational complexity to improved generalization performance. The most notable examples include quantum enhanced algorithms for principal component analysis, quantum support vector machines, and quantum Boltzmann machines. Progress has been rapid, fostered by demonstrations of midsized quantum optimizers which are predicted to soon outperform their classical counterparts. Further, we are witnessing the emergence of a physical theory pinpointing the fundamental and natural limitations of learning. Here we survey the cutting edge of this merger and list several open problems.
• Oct 07 2016 quant-ph arXiv:1610.01664v1
Quantum information and computation provide a fascinating twist on the notion of proofs in computational complexity theory. For instance, one may consider a quantum computational analogue of the complexity class NP, known as QMA, in which a quantum state plays the role of a proof (also called a certificate or witness), and is checked by a polynomial-time quantum computation. For some problems, the fact that a quantum proof state could be a superposition over exponentially many classical states appears to offer computational advantages over classical proof strings. In the interactive proof system setting, one may consider a verifier and one or more provers that exchange and process quantum information rather than classical information during an interaction for a given input string, giving rise to quantum complexity classes such as QIP, QSZK, and QMIP* that represent natural quantum analogues of IP, SZK, and MIP. While quantum interactive proof systems inherit some properties from their classical counterparts, they also possess distinct and uniquely quantum features that lead to an interesting landscape of complexity classes based on variants of this model. In this survey we provide an overview of many of the known results concerning quantum proofs, computational models based on this concept, and properties of the complexity classes they define. In particular, we discuss non-interactive proofs and the complexity class QMA, single-prover quantum interactive proof systems and the complexity class QIP, statistical zero-knowledge quantum interactive proof systems and the complexity class QSZK, and multiprover interactive proof systems and the complexity classes QMIP, QMIP*, and MIP*.
• We introduce a class of so-called Markovian marginals, which gives a natural framework for constructing solutions to the quantum marginal problem. We consider a set of marginals that possess a certain internal quantum Markov chain structure. If they are equipped with such a structure and are locally consistent on their overlapping supports, there exists a global state that is consistent with all the marginals. The proof is constructive, and relies on a reduction of the marginal problem to a certain combinatorial problem. By employing an entanglement entropy scaling law, we give a physical argument that the requisite structure exists in any states with finite correlation lengths. This includes topologically ordered states as well as finite temperature Gibbs states.
• We describe two procedures which, given access to one copy of a quantum state and a sequence of two-outcome measurements, can distinguish between the case that at least one of the measurements accepts the state with high probability, and the case that all of the measurements have low probability of acceptance. The measurements cannot simply be tried in sequence, because early measurements may disturb the state being tested. One procedure is based on a variant of Marriott-Watrous amplification. The other procedure is based on the use of a test for this disturbance, which is applied with low probability. We find a number of applications. First, quantum query complexity separations in the property testing model for testing isomorphism of functions under group actions. We give quantum algorithms for testing isomorphism, linear isomorphism and affine isomorphism of boolean functions which use exponentially fewer queries than is possible classically, and a quantum algorithm for testing graph isomorphism which uses polynomially fewer queries than the best algorithm known. Second, testing properties of quantum states and operations. We show that any finite property of quantum states can be tested using a number of copies of the state which is logarithmic in the size of the property, and give a test for genuine multipartite entanglement of states of n qubits that uses O(n) copies of the state. Third, correcting an error in a result of Aaronson on de-Merlinizing quantum protocols. This result claimed that, in any one-way quantum communication protocol where two parties are assisted by an all-powerful but untrusted third party, the third party can be removed with only a modest increase in the communication cost. We give a corrected proof of a key technical lemma required for Aaronson's result.
• The Travelling Salesman Problem is one of the most famous problems in graph theory. However, little is currently known about the extent to which quantum computers could speed up algorithms for the problem. In this paper, we prove a quadratic quantum speedup when the degree of each vertex is at most 3 by applying a quantum backtracking algorithm to a classical algorithm by Xiao and Nagamochi. We then use similar techniques to accelerate a classical algorithm for graphs where the degree of each vertex is at most 4, before speeding up higher-degree graphs via reductions to these instances.
• This set of lecture notes forms the basis of a series of lectures delivered at the 48th IFF Spring School 2017 on Topological Matter: Topological Insulators, Skyrmions and Majoranas at Forschungszentrum Juelich, Germany. The first part of the lecture notes covers the basics of abelian and non-abelian anyons and their realization in Kitaev's honeycomb model. The second part discusses how to perform universal quantum computation using Majorana fermions.
• Current experiments are taking the first steps toward noise-resilient logical qubits. Crucially, a quantum computer must not merely store information, but also process it. A fault-tolerant computational procedure ensures that errors do not multiply and spread. This review compares the leading proposals for promoting a quantum memory to a quantum processor. We compare magic state distillation, color code techniques and other alternative ideas, paying attention to relative resource demands. We discuss several no-go results that hold for low-dimensional topological codes and outline the potential rewards of using high-dimensional quantum (LDPC) codes in modular architectures.
• Dec 15 2016 quant-ph arXiv:1612.04795v1
The surface code is one of the most successful approaches to topological quantum error-correction. It boasts the smallest known syndrome extraction circuits and correspondingly the largest thresholds. Defect-based logical encodings of a new variety called twists have made it possible to implement the full Clifford group without state distillation. Here we investigate a patch-based encoding involving a modified twist. In our modified formulation, the resulting codes, called triangle codes for the shape of their planar layout, have only weight-four checks and relatively simple syndrome extraction circuits that maintain a high, near surface-code-level threshold. They also use 25% fewer physical qubits per logical qubit than the surface code. Moreover, benefiting from the twist, we can implement all Clifford gates by lattice surgery without the need for state distillation. By a surgical transformation to the surface code, we also develop a scheme of doing all Clifford gates on surface code patches in an atypical planar layout, though with less qubit efficiency than the triangle code. Finally, we remark that logical qubits encoded in triangle codes are naturally amenable to logical tomography, and the smallest triangle code can demonstrate high-pseudothreshold fault-tolerance to depolarizing noise using just 13 physical qubits.
• Nov 14 2016 quant-ph arXiv:1611.03790v1
We present an algorithm that takes a CSS stabilizer code as input, and outputs another CSS stabilizer code such that the stabilizer generators all have weights $O(1)$ and such that $O(1)$ generators act on any given qubit. The number of logical qubits is unchanged by the procedure, while we give bounds on the increase in number of physical qubits and in the effect on distance and other code parameters, such as soundness (as a locally testable code) and "cosoundness" (defined later). Applications are discussed, including to codes from high-dimensional manifolds which have logarithmic weight stabilizers. Assuming a conjecture in geometry, this allows the construction of CSS stabilizer codes with generator weight $O(1)$ and almost linear distance. Another application of the construction is to increasing the distance to $X$ or $Z$ errors, whichever is smaller, so that the two distances are equal.
• One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance of Landau et al. gave a polynomial time algorithm to actually compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unresolved, including whether there exist rigorous efficient algorithms when the ground space is degenerate (and poly($n$) dimensional), or for the poly($n$) lowest energy states for 1D systems, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm for finding low energy states for 1D systems, based on a rigorously justified RG type transformation. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly($n$) degenerate ground spaces and an $n^{O(\log n)}$ algorithm for the poly($n$) lowest energy states for 1D systems (under a mild density condition). We note that for these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states for frustration-free Hamiltonians the running time is $\tilde{O}(nM(n))$, where $M(n)$ is the time required to multiply two $n\times n$ matrices.
• We present a brief review of discrete structures in a finite Hilbert space, relevant for the theory of quantum information. Unitary operator bases, mutually unbiased bases, Clifford group and stabilizer states, discrete Wigner function, symmetric informationally complete measurements, projective and unitary t--designs are discussed. Some recent results in the field are covered and several important open questions are formulated. We advocate a geometric approach to the subject and emphasize numerous links to various mathematical problems.
• Preparing quantum thermal states on a quantum computer is in general a difficult task. We provide a procedure to prepare a thermal state on a quantum computer with a logarithmic depth circuit of local quantum channels assuming that the thermal state correlations satisfy the following two properties: (i) the correlations between two regions are exponentially decaying in the distance between the regions, and (ii) the thermal state is an approximate Markov state for shielded regions. We require both properties to hold for the thermal state of the Hamiltonian on any induced subgraph of the original lattice. Assumption (ii) is satisfied for all commuting Gibbs states, while assumption (i) is satisfied for every model above a critical temperature. Both assumptions are satisfied in one spatial dimension. Moreover, both assumptions are expected to hold above the thermal phase transition for models without any topological order at finite temperature. As a building block, we show that exponential decay of correlation (for thermal states of Hamiltonians on all induced subgraph) is sufficient to efficiently estimate the expectation value of a local observable. Our proof uses quantum belief propagation, a recent strengthening of strong sub-additivity, and naturally breaks down for states with topological order.
• Using error correcting codes and fault tolerant techniques, it is possible, at least in theory, to produce logical qubits with significantly lower error rates than the underlying physical qubits. Suppose, however, that the gates that act on these logical qubits are only approximations of the desired gates. This can arise, for example, in synthesizing a single qubit unitary from a set of Clifford and $T$ gates; for a generic such unitary, any finite sequence of gates only approximates the desired target. In this case, errors in the gates can add coherently so that, roughly, the error $\epsilon$ in the unitary of each gate must scale as $\epsilon \lesssim 1/N$, where $N$ is the number of gates. If, however, one has the option of synthesizing one of several unitaries near the desired target, and if an average of these options is closer to the target, we give some elementary bounds showing cases in which the errors can be made to add incoherently by averaging over random choices, so that, roughly, one needs only $\epsilon \lesssim 1/\sqrt{N}$. We remark on one particular application to distilling magic states, where this effect happens automatically in the usual circuits.
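The coherent-versus-incoherent scaling in this abstract is easy to see numerically. Below is a minimal sketch (not from the paper; all parameters are illustrative): $N$ over-rotated $R_z$ gates with a fixed sign accumulate an error growing like $N\epsilon$, while randomizing the sign of each over-rotation yields a random-walk error of order $\sqrt{N}\epsilon$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 400, 1e-3  # number of gates, per-gate over-rotation angle

def rz(theta):
    """Single-qubit z-rotation."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def dist_to_identity(U):
    """Spectral-norm distance to the identity, up to global phase."""
    U = U * np.exp(-1j * np.angle(U[0, 0]))
    return np.linalg.norm(U - np.eye(2), 2)

# coherent case: every gate over-rotates by +eps, errors add linearly
U = np.eye(2)
for _ in range(N):
    U = rz(eps) @ U
coherent = dist_to_identity(U)

# incoherent case: each gate over-rotates by +/-eps with a random sign,
# so the accumulated angle performs a random walk of size ~sqrt(N)*eps
trials = []
for _ in range(50):
    V = np.eye(2)
    for _ in range(N):
        V = rz(eps * rng.choice([-1.0, 1.0])) @ V
    trials.append(dist_to_identity(V))
incoherent = np.mean(trials)

print(coherent, incoherent)  # coherent ~ N*eps, incoherent ~ sqrt(N)*eps
```

With $N = 400$ the coherent error is roughly $0.4$ while the sign-randomized error is roughly $0.02$, a factor of $\sqrt{N}$ smaller, matching the abstract's rough scaling.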
• Apr 12 2016 quant-ph cs.CR arXiv:1604.02804v1
Prior work has established that all problems in NP admit classical zero-knowledge proof systems, and under reasonable hardness assumptions for quantum computations, these proof systems can be made secure against quantum attacks. We prove a result representing a further quantum generalization of this fact, which is that every problem in the complexity class QMA has a quantum zero-knowledge proof system. More specifically, assuming the existence of an unconditionally binding and quantum computationally concealing commitment scheme, we prove that every problem in the complexity class QMA has a quantum interactive proof system that is zero-knowledge with respect to efficient quantum computations. Our QMA proof system is sound against arbitrary quantum provers, but only requires an honest prover to perform polynomial-time quantum computations, provided that it holds a quantum witness for a given instance of the QMA problem under consideration. The proof system relies on a new variant of the QMA-complete local Hamiltonian problem in which the local terms are described by Clifford operations and standard basis measurements. We believe that the QMA-completeness of this problem may have other uses in quantum complexity.
• We show that any classical communication protocol that can approximately simulate the result of applying an arbitrary measurement (held by one party) to a quantum state of n qubits (held by another) must transmit at least 2^n bits, up to constant factors. The argument is based on a lower bound on the classical communication complexity of a distributed variant of the Fourier sampling problem. We obtain two optimal quantum-classical separations as corollaries. First, a sampling problem which can be solved with one quantum query to the input, but which requires order-N classical queries for an input of size N. Second, a nonlocal task which can be solved using n Bell pairs, but for which any approximate classical solution must communicate 2^n bits, up to constant factors.
• I discuss a variety of issues relating to near-future experiments demonstrating fault-tolerant quantum computation. I describe a family of fault-tolerant quantum circuits that can be performed with 5 qubits arranged on a ring with nearest-neighbor interactions. I also present a criterion whereby we can say that an experiment has succeeded in demonstrating fault tolerance. Finally, I discuss the possibility of using future fault-tolerant experiments to answer important questions about the interaction of fault-tolerant protocols with real experimental errors.
• The class of commuting quantum circuits known as IQP (instantaneous quantum polynomial-time) has been shown to be hard to simulate classically, assuming certain complexity-theoretic conjectures. Here we study the power of IQP circuits in the presence of physically motivated constraints. First, we show that there is a family of sparse IQP circuits that can be implemented on a square lattice of n qubits in depth O(sqrt(n) log n), and which is likely hard to simulate classically. Next, we show that, if an arbitrarily small constant amount of noise is applied to each qubit at the end of any IQP circuit whose output probability distribution is sufficiently anticoncentrated, there is a polynomial-time classical algorithm that simulates sampling from the resulting distribution, up to constant accuracy in total variation distance. However, we show that purely classical error-correction techniques can be used to design IQP circuits which remain hard to simulate classically, even in the presence of arbitrary amounts of noise of this form. These results demonstrate the challenges faced by experiments designed to demonstrate quantum supremacy over classical computation, and how these challenges can be overcome.
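As an illustration of the circuit family in question, here is a toy IQP-style computation in NumPy (an illustrative sketch, not the paper's construction): Hadamards on every qubit, a random diagonal of eighth-root-of-unity phases standing in for a circuit of Z/CZ/T-type gates, then Hadamards again; the squared amplitudes give the distribution such a circuit would sample from.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
dim = 2 ** n

# Hadamard on every qubit
H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)

# a random diagonal of phases that are multiples of pi/4, standing in
# for the diagonal part of an IQP circuit (a simplifying assumption;
# true IQP diagonals come from Z/CZ/CCZ/T-type gates)
phases = np.exp(1j * np.pi / 4 * rng.integers(0, 8, size=dim))
D = np.diag(phases)

psi = (H @ D @ H)[:, 0]   # act on |0...0>
p = np.abs(psi) ** 2      # the output distribution being sampled
print(p.sum())
```

Classically computing `p` this way costs $2^n$ memory, which is the point: sampling from such distributions is conjectured hard to simulate classically, and the abstract asks when noise destroys (or error correction rescues) that hardness.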
• We give a simple proof of the exponential de Finetti theorem due to Renner. Like Renner's proof, ours combines the post-selection de Finetti theorem, the Gentle Measurement lemma, and the Chernoff bound, but avoids virtually all calculations, including any use of the theory of types.
• We introduce a numerical method for identifying topological order in two-dimensional models based on one-dimensional bulk operators. The idea is to identify approximate symmetries supported on thin strips through the bulk that behave as string operators associated to an anyon model. We can express these ribbon operators in matrix product form and define a cost function that allows us to efficiently optimize over this ansatz class. We test this method on spin models with abelian topological order by finding ribbon operators for $\mathbb{Z}_d$ quantum double models with local fields and Ising-like terms. In addition, we identify ribbons in the abelian phase of Kitaev's honeycomb model which serve as the logical operators of the encoded qubit for the quantum error-correcting code. We further identify the topologically encoded qubit in the quantum compass model, and show that despite this qubit, the model does not support topological order. Finally, we discuss how the method supports generalizations for detecting nonabelian topological order.
• We consider a family of quantum spin systems which includes as special cases the ferromagnetic XY model and ferromagnetic Ising model on any graph, with or without a transverse magnetic field. We prove that the partition function of any model in this family can be efficiently approximated to a given relative error E using a classical randomized algorithm with runtime polynomial in 1/E, system size, and inverse temperature. As a consequence we obtain a polynomial time algorithm which approximates the free energy or ground energy to a given additive error. We first show how to approximate the partition function by the perfect matching sum of a finite graph with positive edge weights. Although the perfect matching sum is not known to be efficiently approximable in general, the graphs obtained by our method have a special structure which facilitates efficient approximation via a randomized algorithm due to Jerrum and Sinclair.
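For intuition about the object being approximated, the sketch below computes the partition function of a small ferromagnetic Ising chain exactly, by brute force over all $2^n$ spin configurations, and checks it against the transfer-matrix formula for a cycle. The paper's point is precisely that one can avoid this exponential enumeration; the parameters here are illustrative.

```python
import itertools
import math

beta, J, n = 0.7, 1.0, 4  # ferromagnetic Ising model on a 4-cycle

def energy(spins):
    """Nearest-neighbor ferromagnetic coupling around the cycle."""
    return -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))

# brute force: sum exp(-beta * E) over all 2^n spin configurations
Z_brute = sum(math.exp(-beta * energy(s))
              for s in itertools.product([-1, 1], repeat=n))

# transfer-matrix result for the periodic chain: Z = lam1^n + lam2^n
lam1 = 2 * math.cosh(beta * J)
lam2 = 2 * math.sinh(beta * J)
Z_transfer = lam1 ** n + lam2 ** n

print(Z_brute, Z_transfer)  # the two agree exactly
```

The brute-force sum scales as $2^n$; the abstract's algorithm instead reduces the problem to a perfect matching sum with a special structure that admits the Jerrum-Sinclair randomized approximation.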
• In the early days of quantum mechanics, it was believed that the time-energy uncertainty principle (TEUP) bounds the efficiency of energy measurements, relating the duration ($\Delta t$) of the measurement and its accuracy ($\Delta E$) by $\Delta t\Delta E \ge 1/2$. In 1961 Y. Aharonov and Bohm gave a counterexample, whereas Aharonov, Massar and Popescu [2002] showed that under certain conditions the principle holds. Can we classify when and to what extent the TEUP is violated? Our main theorem asserts that such violations are in one-to-one correspondence with the ability to "fast forward" the associated Hamiltonian, namely, to simulate its evolution for time $t$ using much fewer than $t$ quantum gates. This intriguingly links precision measurements with quantum algorithms. Our theorem is stated in terms of a modified TEUP, which we call the computational TEUP (cTEUP). In this principle the time duration ($\Delta t$) is replaced by the number of quantum gates required to perform the measurement, and we argue why this is the more suitable quantity to study if one is to understand the totality of physical resources required to perform an accurate measurement. The inspiration for this result is a family of Hamiltonians we construct, based on Shor's algorithm, which exponentially violate the cTEUP (and the TEUP), thus allowing exponential fast forwarding. We further show that commuting local Hamiltonians and quadratic Hamiltonians of fermions (e.g., the Anderson localization model) can be fast forwarded. The work raises the question of finding a physical criterion for fast forwarding; in particular, can many-body localization systems be fast forwarded? We rule out a general fast-forwarding method for all physically realizable Hamiltonians (unless BQP=PSPACE). Connections to quantum metrology and to Susskind's complexification of a wormhole's length are discussed.
• Oct 18 2016 quant-ph hep-th arXiv:1610.04903v1
We study the relationship between quantum chaos and pseudorandomness by developing probes of unitary design. A natural probe of randomness is the "frame potential," which is minimized by unitary $k$-designs and measures the $2$-norm distance between the Haar random unitary ensemble and another ensemble. A natural probe of quantum chaos is out-of-time-order (OTO) four-point correlation functions. We show that the norm squared of a generalization of out-of-time-order $2k$-point correlators is proportional to the $k$th frame potential, providing a quantitative connection between chaos and pseudorandomness. Additionally, we prove that these $2k$-point correlators for Pauli operators completely determine the $k$-fold channel of an ensemble of unitary operators. Finally, we use a counting argument to obtain a lower bound on the quantum circuit complexity in terms of the frame potential. This provides a direct link between chaos, complexity, and randomness.
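The first frame potential can be estimated directly by Monte Carlo. The sketch below (illustrative, not from the paper) samples Haar-random $2\times 2$ unitaries via the QR decomposition of a complex Gaussian matrix and estimates $F_1 = \mathbb{E}\,|\mathrm{Tr}(U^\dagger V)|^2$, which a unitary 1-design attains at the Haar value of $1$; the two-element ensemble $\{I, Z\}$ gives $2$, exposing its failure to be a design.

```python
import numpy as np

rng = np.random.default_rng(1)
d, samples = 2, 10000

def haar(d):
    """Haar-random unitary via QR of a complex Gaussian matrix,
    with the standard phase fix on the diagonal of R."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    ph = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * ph  # multiplies column j of q by ph[j]

# Monte Carlo estimate of F_1 = E_{U,V} |Tr(U^dag V)|^2 for the Haar ensemble
acc = 0.0
for _ in range(samples):
    U, V = haar(d), haar(d)
    acc += abs(np.trace(U.conj().T @ V)) ** 2
F1_haar = acc / samples

# the two-element ensemble {I, Z} is far from a 1-design: F_1 = 2 > 1
ens = [np.eye(2), np.diag([1.0, -1.0])]
F1_IZ = np.mean([abs(np.trace(U.conj().T @ V)) ** 2 for U in ens for V in ens])

print(F1_haar, F1_IZ)  # ~1.0 (Haar value) versus exactly 2.0
```

The $k$-th frame potential is bounded below by its Haar value, with equality exactly for $k$-designs, which is what makes it a usable probe of pseudorandomness.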
• We give a quasi-polynomial time classical algorithm for estimating the ground state energy and for computing low energy states of quantum impurity models. Such models describe a bath of free fermions coupled to a small interacting subsystem called an impurity. The full system consists of $n$ fermionic modes and has a Hamiltonian $H=H_0+H_{imp}$, where $H_0$ is quadratic in creation-annihilation operators and $H_{imp}$ is an arbitrary Hamiltonian acting on a subset of $O(1)$ modes. We show that the ground energy of $H$ can be approximated with an additive error $2^{-b}$ in time $n^3 \exp{[O(b^3)]}$. Our algorithm also finds a low energy state that achieves this approximation. The low energy state is represented as a superposition of $\exp{[O(b^3)]}$ fermionic Gaussian states. To arrive at this result we prove several theorems concerning exact ground states of impurity models. In particular, we show that eigenvalues of the ground state covariance matrix decay exponentially with the exponent depending very mildly on the spectral gap of $H_0$. A key ingredient of our proof is Zolotarev's rational approximation to the $\sqrt{x}$ function. We anticipate that our algorithms may be used in hybrid quantum-classical simulations of strongly correlated materials based on dynamical mean field theory. We implemented a simplified practical version of our algorithm and benchmarked it using the single impurity Anderson model.
• May 04 2016 quant-ph arXiv:1605.00713v1
Randomness is both a useful way to model natural systems and a useful tool for engineered systems, e.g. in computation, communication and control. Fully random transformations require exponential time for either classical or quantum systems, but in many cases pseudorandom operations can emulate certain properties of truly random ones. Indeed, in the classical realm there is by now a well-developed theory of such pseudorandom operations. However, the construction of such objects turns out to be much harder in the quantum case. Here we show that random quantum circuits are a powerful source of quantum pseudorandomness. This gives, for the first time, a polynomial-time construction of quantum unitary designs, which can replace fully random operations in most applications, and shows that generic quantum dynamics cannot be distinguished from truly random processes. We discuss applications of our result to quantum information science, cryptography and to understanding self-equilibration of closed quantum dynamics.
• We investigate the sample complexity of Hamiltonian simulation: how many copies of an unknown quantum state are required to simulate a Hamiltonian encoded by the density matrix of that state? We show that the procedure proposed by Lloyd, Mohseni, and Rebentrost [Nat. Phys., 10(9):631--633, 2014] is optimal for this task. We further extend their method to the case of multiple input states, showing how to simulate any Hermitian polynomial of the states provided. As applications, we derive optimal algorithms for commutator simulation and orthogonality testing, and we give a protocol for creating a coherent superposition of pure states, when given sample access to those states. We also show that this sample-based Hamiltonian simulation can be used as the basis of a universal model of quantum computation that requires only partial swap operations and simple single-qubit states.
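The Lloyd-Mohseni-Rebentrost procedure referenced here simulates $e^{-i\rho t}$ by repeatedly applying a partial swap between the target and fresh copies of $\rho$, then tracing the copies out. A minimal single-qubit check in NumPy/SciPy (the particular states and step count are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm

# single-qubit states: rho encodes the Hamiltonian, sigma is the target
rho = np.array([[0.75, 0.25], [0.25, 0.25]])
sigma = np.array([[1.0, 0.0], [0.0, 0.0]])

# swap operator on two qubits
S = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        S[2 * i + j, 2 * j + i] = 1.0

t, n = 1.0, 200
delta = t / n
U_step = expm(-1j * S * delta)  # partial swap for a small time step

state = sigma
for _ in range(n):
    joint = np.kron(rho, state)                 # attach a fresh copy of rho
    joint = U_step @ joint @ U_step.conj().T    # evolve under the swap
    # trace out the rho register, keeping the target
    state = joint.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

exact = expm(-1j * rho * t) @ sigma @ expm(1j * rho * t)
err = np.linalg.norm(state - exact)
print(err)  # small, O(t^2 / n)
```

Each step reproduces $e^{-i\rho\,\delta}\,\sigma\,e^{i\rho\,\delta}$ up to $O(\delta^2)$, so $n$ steps consume $n$ copies of $\rho$ and give total error $O(t^2/n)$; the abstract's result is that this copy count is optimal.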
• Jul 08 2016 quant-ph cs.IT math.IT arXiv:1607.01796v1
We ask the question whether entropy accumulates, in the sense that the operationally relevant total uncertainty about an $n$-partite system $A = (A_1, \ldots A_n)$ corresponds to the sum of the entropies of its parts $A_i$. The Asymptotic Equipartition Property implies that this is indeed the case to first order in $n$, under the assumption that the parts $A_i$ are identical and independent of each other. Here we show that entropy accumulation occurs more generally, i.e., without an independence assumption, provided one quantifies the uncertainty about the individual systems $A_i$ by the von Neumann entropy of suitably chosen conditional states. The analysis of a large system can hence be reduced to the study of its parts. This is relevant for applications. In device-independent cryptography, for instance, the approach yields essentially optimal security bounds valid for general attacks, as shown by Arnon-Friedman et al.
• We consider a notion of relative homology (and cohomology) for surfaces with two types of boundaries. Using this tool, we study a generalization of Kitaev's code based on surfaces with mixed boundaries. This construction includes both Bravyi and Kitaev's and Freedman and Meyer's extension of Kitaev's toric code. We argue that our generalization offers a denser storage of quantum information. In a planar architecture, we obtain a three-fold overhead reduction over the standard architecture consisting of a punctured square lattice.
• Motivated by understanding the power of quantum computation with a restricted number of qubits, we give two complete characterizations of unitary quantum space bounded computation. First we show that approximating an element of the inverse of a well-conditioned efficiently encoded $2^{k(n)}\times 2^{k(n)}$ matrix is complete for the class of problems solvable by quantum circuits acting on $\mathcal{O}(k(n))$ qubits with all measurements at the end of the computation. Similarly, estimating the minimum eigenvalue of an efficiently encoded Hermitian $2^{k(n)}\times 2^{k(n)}$ matrix is also complete for this class. In the logspace case, our results improve on previous results of Ta-Shma [STOC '13] by giving new space-efficient quantum algorithms that avoid intermediate measurements, as well as showing matching hardness results. Additionally, as a consequence we show that PreciseQMA, the version of QMA with exponentially small completeness-soundness gap, is equal to PSPACE. Thus, the problem of estimating the minimum eigenvalue of a local Hamiltonian to inverse exponential precision is PSPACE-complete, which we show holds even in the frustration-free case. Finally, we can use this characterization to give a provable setting in which the ability to prepare the ground state of a local Hamiltonian is more powerful than the ability to prepare PEPS states. Interestingly, by suitably changing the parameterization of either of these problems we can completely characterize the power of quantum computation with simultaneously bounded time and space.
• May 18 2016 quant-ph arXiv:1605.05039v1
We introduce a fast and accurate heuristic for adaptive tomography that addresses many of the limitations of prior methods. Previous approaches were either too computationally intensive or tailored to handle special cases such as single qubits or pure states. By contrast, our approach combines the efficiency of online optimization with generally applicable and well-motivated data-processing techniques. We numerically demonstrate these advantages in several scenarios including mixed states, higher-dimensional systems, and restricted measurements.
• If a one-phrase summary of the subject of this thesis were required, it would be something like: miscellaneous large (but finite) dimensional phenomena in quantum information theory. That said, it could nonetheless be helpful to briefly elaborate. Starting from the observation that quantum physics unavoidably has to deal with high dimensional objects, basically two routes can be taken: either try and reduce their study to that of lower dimensional ones, or try and understand what kind of universal properties might precisely emerge in this regime. We actually do not choose which of these two attitudes to follow here, and rather oscillate between one and the other. In the first part of this manuscript, our aim is to reduce as much as possible the complexity of certain quantum processes, while of course still preserving their essential characteristics. The two types of processes we are interested in are quantum channels and quantum measurements. Oppositely, the second part of this manuscript is specifically dedicated to the analysis of high dimensional quantum systems and some of their typical features. Stress is put on multipartite systems and on entanglement-related properties of theirs. In the third part of this manuscript we eventually come back to a more dimensionality reduction state of mind. This time though, the strategy is to make use of the symmetries inherent to each particular situation we are looking at in order to derive a problem-dependent simplification.

Māris Ozols Feb 21 2017 15:35 UTC

I'm wondering if this result could have any interesting consequences for Hamiltonian complexity. The LCL problem sounds very much like a local Hamiltonian problem, with the run-time of an LCL algorithm corresponding to the range of local interactions in the Hamiltonian.

Maybe one caveat is that thi

...(continued)
Andrey Karchevsky Feb 17 2017 09:51 UTC

Dear Authors,

This is in reference of your preprint arxiv 1702.0638.

Above all I must say that I am puzzled with the level of publicity your work has got at http://www.nature.com/news/long-awaited-mathematics-proof-could-help-scan-earth-s-innards-1.21439. Is this a new way for mathematicians t

...(continued)
Karl Joulain Feb 09 2017 15:50 UTC

A **GREAT** paper. Where you learn how to extract work from the measurement of a qubit coupled to a drive. The authors build an engine with very unusual and interesting features such as efficiency of 1 (no entropy creation) arising for conditions where the power extracted is maximum! This maximum dep

...(continued)
Jānis Iraids Jan 25 2017 11:35 UTC

You are correct, that is a mistake -- it should be $\{0,1\}^n\rightarrow\{0,1\}$. Thank you for spotting it!

Christopher Thomas Chubb Jan 25 2017 02:27 UTC

In the abstract, should the domain of $f$ be $\lbrace0,1\rbrace^n$ instead of just $\lbrace0,1\rbrace$?

Robert Raussendorf Jan 24 2017 22:29 UTC

Regarding Mark's above comment on the role of the stabilizer states: Yes, all previous works on the subject have used the stabilizer states and Clifford gates as the classical backbone. This is due to the Gottesman-Knill theorem and related results. But is it a given that the free sector in quantum

...(continued)
Planat Jan 24 2017 13:09 UTC

Are you sure? Since we do not propose a conjecture, there is nothing wrong. A class of strange states underlie the pentagons in question. The motivation is to put the magic of computation in the permutation frame, one needs more work to check its relevance.

Mark Howard Jan 24 2017 09:59 UTC

It seems interesting at first sight, but after reading it the motivation is very muddled. It boils down to finding pentagons (which enable KCBS-style proofs of contextuality) within sets of projectors, some of which are stabilizer states and some of which are non-stabilizer states (called magic stat

...(continued)
Zoltán Zimborás Jan 12 2017 20:38 UTC

Here is a nice description, with additional links, about the importance of this work if it turns out to be flawless (thanks a lot to Martin Schwarz for this link): [dichotomy conjecture][1].

[1]: http://processalgebra.blogspot.com/2017/01/has-feder-vardi-dichotomy-conjecture.html

Noon van der Silk Jan 05 2017 04:51 UTC

This is a cool paper!