The main SciRate homepage is down (when not logged in). We are working to fix it. See https://github.com/scirate/scirate/issues/337 for updates.

Top arXiv papers

  • PDF
    We prove that constant-depth quantum circuits are more powerful than their classical counterparts. To this end we introduce a non-oracular version of the Bernstein-Vazirani problem which we call the 2D Hidden Linear Function problem. An instance of the problem is specified by a quadratic form q that maps n-bit strings to integers modulo four. The goal is to identify a linear boolean function which describes the action of q on a certain subset of n-bit strings. We prove that any classical probabilistic circuit composed of bounded fan-in gates that solves the 2D Hidden Linear Function problem with high probability must have depth logarithmic in n. In contrast, we show that this problem can be solved with certainty by a constant-depth quantum circuit composed of one- and two-qubit gates acting locally on a two-dimensional grid.
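
As a purely illustrative companion to this abstract: for small n the 2D Hidden Linear Function problem can be brute-forced classically. The sketch below assumes the paper's encoding q(x) = 2·x^T A x + b·x (mod 4) with A strictly upper triangular over F_2; the instance size and random seed are arbitrary choices of ours, not the paper's.

```python
import itertools
import numpy as np

n = 4
rng = np.random.default_rng(1)
A = np.triu(rng.integers(0, 2, size=(n, n)), k=1)  # strictly upper-triangular over F_2
b = rng.integers(0, 2, size=n)

def q(x):
    # quadratic form mod 4: q(x) = 2 * x^T A x + b.x  (mod 4)
    return (2 * int(x @ A @ x) + int(b @ x)) % 4

xs = [np.array(v) for v in itertools.product((0, 1), repeat=n)]

# L_q: the subset of strings on which q acts linearly with respect to XOR
Lq = [x for x in xs if all(q(x ^ y) == (q(x) + q(y)) % 4 for y in xs)]

# hidden linear function: any z with q(x) = 2*z.x (mod 4) for all x in L_q
zs = [z for z in xs if all(q(x) == (2 * int(z @ x)) % 4 for x in Lq)]
print(f"|L_q| = {len(Lq)}, hidden vectors found: {len(zs)}")
```
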
  • PDF
    Quantum many-body systems exhibit a bewilderingly diverse range of behaviours. Here, we prove that all the physics of every other quantum many-body system is replicated in certain simple, "universal" quantum spin-lattice models. We first characterise precisely and in full generality what it means for one quantum many-body system to replicate the entire physics of another. We then fully classify two-qubit interactions, determining which are universal in this very strong sense and showing that certain simple spin-lattice models are already universal. Examples include the Heisenberg and XY models on a 2D square lattice (with non-uniform coupling strengths). This shows that locality, symmetry, and spatial dimension need not constrain the physics of quantum many-body systems. Our results put the practical field of analogue Hamiltonian simulation on a rigorous footing and show that far simpler systems than previously thought may be viable simulators. We also take a first step towards justifying why error correction may not be required for this application of quantum information technology.
  • PDF
    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of photons in linear optics, which has sparked interest as a rapid way to demonstrate this quantum supremacy. Photon statistics are governed by intractable matrix functions known as permanents, which suggests that sampling from the distribution obtained by injecting photons into a linear-optical network could be solved more quickly by a photonic experiment than by a classical computer. The contrast between the apparently awesome challenge faced by any classical sampling algorithm and the apparently near-term experimental resources required for a large boson sampling experiment has raised expectations that quantum supremacy by boson sampling is on the horizon. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. While the largest boson sampling experiments reported so far are with 5 photons, our classical algorithm, based on Metropolised independence sampling (MIS), allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. We argue that the impact of experimental photon losses means that demonstrating quantum supremacy by boson sampling would require a step change in technology.
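
Since the photon statistics here are governed by matrix permanents, a minimal classical reference point is Ryser's inclusion-exclusion formula. This is a sketch of the exponential-time exact permanent, not the paper's Metropolised independence sampler:

```python
from itertools import combinations

def permanent(M):
    # Ryser: perm(M) = (-1)^n * sum over nonempty column sets S of
    #        (-1)^|S| * prod_i sum_{j in S} M[i][j]
    n = len(M)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 1], [1, 1]]))  # 2
```
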
  • PDF
    Recent work has shown that quantum computers can compute scattering probabilities in massive quantum field theories, with a run time that is polynomial in the number of particles, their energy, and the desired precision. Here we study a closely related quantum field-theoretical problem: estimating the vacuum-to-vacuum transition amplitude, in the presence of spacetime-dependent classical sources, for a massive scalar field theory in (1+1) dimensions. We show that this problem is BQP-hard; in other words, its solution enables one to solve any problem that is solvable in polynomial time by a quantum computer. Hence, the vacuum-to-vacuum amplitude cannot be accurately estimated by any efficient classical algorithm, even if the field theory is very weakly coupled, unless BQP=BPP. Furthermore, the corresponding decision problem can be solved by a quantum computer in a time scaling polynomially with the number of bits needed to specify the classical source fields, and this problem is therefore BQP-complete. Our construction can be regarded as an idealized architecture for a universal quantum computer in a laboratory system described by massive phi^4 theory coupled to classical spacetime-dependent sources.
  • PDF
    An ideal system of $n$ qubits has $2^n$ dimensions. This exponential grants power, but also hinders characterizing the system's state and dynamics. We study a new problem: the qubits in a physical system might not be independent. They can "overlap," in the sense that an operation on one qubit slightly affects the others. We show that allowing for slight overlaps, $n$ qubits can fit in just polynomially many dimensions. (Defined in a natural way, all pairwise overlaps can be $\leq \epsilon$ in $n^{O(1/\epsilon^2)}$ dimensions.) Thus, even before considering issues like noise, a real system of $n$ qubits might inherently lack any potential for exponential power. On the other hand, we also provide an efficient test to certify exponential dimensionality. Unfortunately, the test is sensitive to noise. It is important to devise more robust tests on the arrangements of qubits in quantum devices.
  • PDF
    The Systematic Normal Form (SysNF) is a canonical form of lattices introduced in [Eldar, Shor '16], in which the basis entries satisfy a certain co-primality condition. Using a "smooth" analysis of lattices by SysNF lattices we design a quantum algorithm that can efficiently solve the following variant of the bounded-distance-decoding problem: given a lattice L, a vector v, and numbers $b = \lambda_1(L)/n^{17}$, $a = \lambda_1(L)/n^{13}$, decide if v's distance from L is in the range $[a/2, a]$ or at most $b$, where $\lambda_1(L)$ is the length of L's shortest non-zero vector. Improving these parameters to $a = b = \lambda_1(L)/\sqrt{n}$ would invalidate one of the security assumptions of the Learning-with-Errors (LWE) cryptosystem against quantum attacks.
  • PDF
    We address the question of whether symmetry-protected topological (SPT) order can persist at nonzero temperature, with a focus on understanding the thermal stability of several models studied in the theory of quantum computation. We present three results in this direction. First, we prove that nontrivial SPT order protected by a global on-site symmetry cannot persist at nonzero temperature, demonstrating that several quantum computational structures protected by such on-site symmetries are not thermally stable. Second, we prove that the 3D cluster state model used in the formulation of topological measurement-based quantum computation possesses a nontrivial SPT-ordered thermal phase when protected by a global generalized (1-form) symmetry. The SPT order in this model is detected by long-range localizable entanglement in the thermal state, which compares with related results characterizing SPT order at zero temperature in spin chains using localizable entanglement as an order parameter. Our third result is to demonstrate that the high error tolerance of this 3D cluster state model for quantum computation, even without a protecting symmetry, can be understood as an application of quantum error correction to effectively enforce a 1-form symmetry.
  • PDF
    We propose an efficient scheme for verifying quantum computations in the `high complexity' regime, i.e., beyond the remit of classical computers. Previously proposed schemes remarkably provide confidence against arbitrarily malicious adversarial behaviour in the malfunctioning of the quantum computing device. Our scheme is not secure against arbitrarily adversarial behaviour, but may nevertheless be acceptable in many practical situations. With this concession we gain in manifest simplicity and transparency, and, in contrast to previous schemes, our verifier is entirely classical. It is based on the fact that adaptive Clifford circuits on general product state inputs provide universal quantum computation, while the same processes without adaptation are always classically efficiently simulatable.
  • PDF
    Adiabatic quantum computing (AQC) started as an approach to solving optimization problems, and has evolved into an important universal alternative to the standard circuit model of quantum computing, with deep connections to both classical and quantum complexity theory and condensed matter physics. In this review we give an account of most of the major theoretical developments in the field, while focusing on the closed-system setting. The review is organized around a series of topics that are essential to an understanding of the underlying principles of AQC, its algorithmic accomplishments and limitations, and its scope in the more general setting of computational complexity theory. We present several variants of the adiabatic theorem, the cornerstone of AQC, and we give examples of explicit AQC algorithms that exhibit a quantum speedup. We give an overview of several proofs of the universality of AQC and related Hamiltonian quantum complexity theory. We finally devote considerable space to stoquastic AQC, the setting of most AQC work to date, where we discuss obstructions to success and their possible resolutions. To be submitted to Reviews of Modern Physics.
  • PDF
    We construct a linear system non-local game which can be played perfectly using a limit of finite-dimensional quantum strategies, but which cannot be played perfectly on any finite-dimensional Hilbert space, or even with any tensor-product strategy. In particular, this shows that the set of (tensor-product) quantum correlations is not closed. The constructed non-local game provides another counterexample to the "middle" Tsirelson problem, with a shorter proof than our previous paper (though at the loss of the universal embedding theorem). We also show that it is undecidable to determine if a linear system game can be played perfectly with a finite-dimensional strategy, or a limit of finite-dimensional quantum strategies.
  • PDF
    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets are motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed-up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed.
  • PDF
    This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning from membership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples.
  • PDF
    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.1(1)% at a bias of 10. The performance is in fact at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
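
For reference, the hashing bound quoted above can be reproduced numerically. The sketch below assumes the usual convention bias = p_Z/(p_X + p_Y) with p_X = p_Y, and locates the error rate p where the hashing bound 1 - H(1-p, p_X, p_Y, p_Z) crosses zero:

```python
import numpy as np

def entropy_deficit(p, bias):
    # 1 - H of the Pauli error distribution (1-p, p_X, p_Y, p_Z)
    pz = p * bias / (bias + 1)
    px = py = p / (2 * (bias + 1))
    probs = np.array([1 - p, px, py, pz])
    return 1 + (probs * np.log2(probs)).sum()

def hashing_threshold(bias, lo=1e-9, hi=0.499):
    for _ in range(200):  # bisection: deficit > 0 below threshold
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if entropy_deficit(mid, bias) > 0 else (lo, mid)
    return (lo + hi) / 2

print(f"{hashing_threshold(10):.3f}")  # ~0.28, cf. the 28.1(1)% figure at bias 10
```
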
  • PDF
    We study the classical complexity of the exact Boson Sampling problem where the objective is to produce provably correct random samples from a particular quantum mechanical distribution. The computational framework was proposed by Aaronson and Arkhipov in 2011 as an attainable demonstration of `quantum supremacy', that is, a practical quantum computing experiment able to produce output at a speed beyond the reach of classical (that is, non-quantum) computer hardware. Since its introduction, Boson Sampling has been the subject of intense international research in the world of quantum computing. On the face of it, the problem is challenging for classical computation. Aaronson and Arkhipov show that exact Boson Sampling is not efficiently solvable by a classical computer unless $P^{\#P} = BPP^{NP}$ and the polynomial hierarchy collapses to the third level. The fastest previously known exact classical algorithm for the standard Boson Sampling problem takes $O({m + n -1 \choose n} n 2^n )$ time to produce samples for a system with input size $n$ and $m$ output modes, making it infeasible for anything but the smallest values of $n$ and $m$. We give an algorithm that is much faster, running in $O(n 2^n + \operatorname{poly}(m,n))$ time and $O(m)$ additional space. The algorithm is simple to implement and has low constant factor overheads. As a consequence our classical algorithm is able to solve the exact Boson Sampling problem for system sizes far beyond current photonic quantum computing experimentation, thereby significantly reducing the likelihood of achieving near-term quantum supremacy in the context of Boson Sampling.
  • PDF
    In the near future, there will likely be special-purpose quantum computers with 40-50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible. First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Quantum AI group at Google. We show that there's a natural hardness assumption, which has nothing to do with sampling, yet implies that no efficient classical algorithm can pass a statistical test that the quantum sampling procedure's outputs do pass. Compared to previous work, the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled. Second, in an attempt to refute our hardness assumption, we give a new algorithm, for simulating a general quantum circuit with n qubits and m gates in polynomial space and m^O(n) time. We then discuss why this and other known algorithms fail to refute our assumption. Third, resolving an open problem of Aaronson and Arkhipov, we show that any strong quantum supremacy theorem--of the form "if approximate quantum sampling is classically easy, then PH collapses"--must be non-relativizing. Fourth, refuting a conjecture by Aaronson and Ambainis, we show that the Fourier Sampling problem achieves a constant versus linear separation between quantum and randomized query complexities. Fifth, we study quantum supremacy relative to oracles in P/poly. Previous work implies that, if OWFs exist, then quantum supremacy is possible relative to such oracles. We show that some assumption is needed: if SampBPP=SampBQP and NP is in BPP, then quantum supremacy is impossible relative to such oracles.
  • PDF
    One of the main milestones in quantum information science is to realize quantum devices that exhibit an exponential computational advantage over classical ones without being universal quantum computers, a state of affairs dubbed quantum speedup, or sometimes "quantum computational supremacy". The known schemes heavily rely on mathematical assumptions that are plausible but unproven, prominently results on anti-concentration of random prescriptions. In this work, we aim at closing the gap by proving two anti-concentration theorems. Compared to the few other known such results, these results give rise to comparably simple, physically meaningful and resource-economical schemes showing a quantum speedup in one and two spatial dimensions. At the heart of the analysis are tools of unitary designs and random circuits that allow us to conclude that universal random circuits anti-concentrate.
  • PDF
    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D string-like and 2D sheet-like logical operators to be $p^{(1)}_\mathrm{3DCC} \simeq 1.9\%$ and $p^{(2)}_\mathrm{3DCC} \simeq 27.6\%$. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the 4- and 6-body random coupling Ising models.
  • PDF
    We give an introduction to the theory of multi-partite entanglement. We begin by describing the "coordinate system" of the field: Are we dealing with pure or mixed states, with single or multiple copies, what notion of "locality" is being used, do we aim to classify states according to their "type of entanglement" or to quantify it? Building on the general theory of multi-partite entanglement - to the extent that it has been achieved - we turn to explaining important classes of multi-partite entangled states, including matrix product states, stabilizer and graph states, bosonic and fermionic Gaussian states, addressing applications in condensed matter theory. We end with a brief discussion of various applications that rely on multi-partite entangled states: quantum networks, measurement-based quantum computing, non-locality, and quantum metrology.
  • PDF
    One of the main aims in the field of quantum simulation is to achieve a quantum speedup, often referred to as "quantum computational supremacy": the experimental realization of a quantum device that computationally outperforms classical computers. In this work, we show that one can devise versatile and feasible schemes of two-dimensional dynamical quantum simulators showing such a quantum speedup, building on intermediate problems involving non-adaptive measurement-based quantum computation. In each of the schemes, an initial product state is prepared, potentially involving an element of randomness as in disordered models, followed by a short-time evolution under a basic translationally invariant Hamiltonian with simple nearest-neighbor interactions and a mere sampling measurement in a fixed basis. The correctness of the final state preparation in each scheme is fully efficiently certifiable. We discuss experimental necessities and possible physical architectures, inspired by platforms of cold atoms in optical lattices and a number of others, as well as specific assumptions that enter the complexity-theoretic arguments. This work shows that benchmark settings exhibiting a quantum speedup may require little control in contrast to universal quantum computing. Thus, our proposal puts a convincing experimental demonstration of a quantum speedup within reach in the near term.
  • PDF
    Matrix Product Vectors form the appropriate framework to study and classify one-dimensional quantum systems. In this work, we develop the structure theory of Matrix Product Unitary operators (MPUs) which appear e.g. in the description of time evolutions of one-dimensional systems. We prove that all MPUs have a strict causal cone, making them Quantum Cellular Automata (QCAs), and derive a canonical form for MPUs which relates different MPU representations of the same unitary through a local gauge. We use this canonical form to prove an Index Theorem for MPUs which gives the precise conditions under which two MPUs are adiabatically connected, providing an alternative derivation to that of [Commun. Math. Phys. 310, 419 (2012), arXiv:0910.3675] for QCAs. We also discuss the effect of symmetries on the MPU classification. In particular, we characterize the tensors corresponding to MPUs that are invariant under conjugation, time reversal, or transposition. In the first case, we give a full characterization of all equivalence classes. Finally, we give several examples of MPUs possessing different symmetries.
  • PDF
    Recent progress implies that a crossover between machine learning and quantum information processing benefits both fields. Traditional machine learning has dramatically improved the benchmarking and control of experimental quantum computing systems, including adaptive quantum phase estimation and designing quantum computing gates. On the other hand, quantum mechanics offers tantalizing prospects to enhance machine learning, ranging from reduced computational complexity to improved generalization performance. The most notable examples include quantum enhanced algorithms for principal component analysis, quantum support vector machines, and quantum Boltzmann machines. Progress has been rapid, fostered by demonstrations of midsized quantum optimizers which are predicted to soon outperform their classical counterparts. Further, we are witnessing the emergence of a physical theory pinpointing the fundamental and natural limitations of learning. Here we survey the cutting edge of this merger and list several open problems.
  • PDF
    (Abridged abstract.) In this thesis we introduce new models of quantum computation to study the emergence of quantum speed-up in quantum computer algorithms. Our first contribution is a formalism of restricted quantum operations, named normalizer circuit formalism, based on algebraic extensions of the qubit Clifford gates (CNOT, Hadamard and $\pi/4$-phase gates): a normalizer circuit consists of quantum Fourier transforms (QFTs), automorphism gates and quadratic phase gates associated to a set $G$, which is either an abelian group or abelian hypergroup. Though Clifford circuits are efficiently classically simulable, we show that normalizer circuit models encompass Shor's celebrated factoring algorithm and the quantum algorithms for abelian Hidden Subgroup Problems. We develop classical-simulation techniques to characterize under which scenarios normalizer circuits provide quantum speed-ups. Finally, we devise new quantum algorithms for finding hidden hyperstructures. The results offer new insights into the source of quantum speed-ups for several algebraic problems. Our second contribution is an algebraic (group- and hypergroup-theoretic) framework for describing quantum many-body states and classically simulating quantum circuits. Our framework extends Gottesman's Pauli Stabilizer Formalism (PSF), wherein quantum states are written as joint eigenspaces of stabilizer groups of commuting Pauli operators: while the PSF is valid for qubit/qudit systems, our formalism can be applied to discrete- and continuous-variable systems, hybrid settings, and anyonic systems. These results enlarge the known families of quantum processes that can be efficiently classically simulated. This thesis also establishes a precise connection between Shor's quantum algorithm and the stabilizer formalism, revealing a common mathematical structure in several quantum speed-ups and error-correcting codes.
  • PDF
    Noise rates in quantum computing experiments have dropped dramatically, but reliable qubits remain precious. Fault-tolerance schemes with minimal qubit overhead are therefore essential. We introduce fault-tolerant error-correction procedures that use only two ancilla qubits. The procedures are based on adding "flags" to catch the faults that can lead to correlated errors on the data. They work for various distance-three codes. In particular, our scheme allows one to test the [[5,1,3]] code, the smallest error-correcting code, using only seven qubits total. Our techniques also apply to the [[7,1,3]] and [[15,7,3]] Hamming codes, thus allowing one to protect seven encoded qubits on a device with only 17 physical qubits.
  • PDF
    We provide $\operatorname{poly}\log$-sparse quantum codes for correcting the erasure channel arbitrarily close to the capacity. Specifically, we provide $[[n, k, d]]$ quantum stabilizer codes that correct for the erasure channel arbitrarily close to the capacity if the erasure probability is at least $0.33$, and with a generating set $\langle S_1, S_2, ... S_{n-k} \rangle$ such that $|S_i|\leq \log^{2+\zeta}(n)$ for all $i$ and for any $\zeta > 0$ with high probability. In this work we show that the result of Delfosse et al. is tight: one can construct capacity-approaching codes with weight almost $O(1)$.
  • PDF
    Lately, much attention has been given to quantum algorithms that solve pattern recognition tasks in machine learning. Many of these quantum machine learning algorithms try to implement classical models on large-scale universal quantum computers that have access to non-trivial subroutines such as Hamiltonian simulation, amplitude amplification and phase estimation. We approach the problem from the opposite direction and analyse a distance-based classifier that is realised by a simple quantum interference circuit. After state preparation, the circuit only consists of a Hadamard gate as well as two single-qubit measurements, and computes the distance between data points in quantum parallel. We demonstrate the proof-of-principle using the IBM Quantum Experience and analyse the performance of the classifier with numerical simulations, showing that it classifies surprisingly well for simple benchmark tasks.
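
The interference step at the heart of such a classifier is easy to simulate. A minimal sketch (not the paper's full state-preparation routine): put the test and training vectors in the two branches of an ancilla, apply a Hadamard, and read the Euclidean distance off the ancilla statistics.

```python
import numpy as np

def prob_ancilla_zero(psi_a, psi_b):
    # State (|0>|psi_a> + |1>|psi_b>)/sqrt(2); after a Hadamard on the
    # ancilla, the ancilla-0 branch has amplitude (psi_a + psi_b)/2.
    branch = (psi_a + psi_b) / 2
    return float(np.vdot(branch, branch).real)

a = np.array([1, 0], dtype=complex)
b = np.array([1, 1], dtype=complex) / np.sqrt(2)
p0 = prob_ancilla_zero(a, b)
# For unit vectors: P(ancilla = 0) = 1 - ||a - b||^2 / 4
assert np.isclose(p0, 1 - np.linalg.norm(a - b) ** 2 / 4)
print(p0)
```
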
  • PDF
    Suppose a large scale quantum computer becomes available over the Internet. Could we delegate universal quantum computations to this server, using only classical communication between client and server, in a way that is information-theoretically blind (i.e., the server learns nothing about the input apart from its size, with no cryptographic assumptions required)? In this paper we give strong indications that the answer is no. This contrasts with the situation where quantum communication between client and server is allowed --- where we know that such information-theoretically blind quantum computation is possible. It also contrasts with the case where cryptographic assumptions are allowed: there again, it is now known that there are quantum analogues of fully homomorphic encryption. In more detail, we observe that, if there exist information-theoretically secure classical schemes for performing universal quantum computations on encrypted data, then we get unlikely containments between complexity classes, such as ${\sf BQP} \subset {\sf NP/poly}$. Moreover, we prove that having such schemes for delegating quantum sampling problems, such as Boson Sampling, would lead to a collapse of the polynomial hierarchy. We then consider encryption schemes which allow one round of quantum communication and polynomially many rounds of classical communication, yielding a generalization of blind quantum computation. We give a complexity theoretic upper bound, namely ${\sf QCMA/qpoly} \cap {\sf coQCMA/qpoly}$, on the types of functions that admit such a scheme. This upper bound then lets us show that, under plausible complexity assumptions, such a protocol is no more useful than classical schemes for delegating ${\sf NP}$-hard problems to the server. Lastly, we comment on the implications of these results for the prospect of verifying a quantum computation through classical interaction with the server.
  • PDF
    Quantum Machine Learning is an exciting new area that was initiated by the breakthrough quantum algorithm of Harrow, Hassidim, Lloyd [HHL09] for solving linear systems of equations and has since seen many interesting developments [LMR14, LMR13a, LMR14a, KP16]. In this work, we start by providing a quantum linear system solver that outperforms the current ones for large families of matrices and provides exponential savings for any low-rank (even dense) matrix. Our algorithm uses an improved procedure for Singular Value Estimation which can be used to efficiently perform linear algebra operations, including matrix inversion and multiplication. Then, we provide the first quantum method for performing gradient descent for cases where the gradient is an affine function. Performing $\tau$ steps of the quantum gradient descent requires time $O(\tau C_S)$, where $C_S$ is the cost of quantumly performing one step of the gradient descent, which can be exponentially smaller than the cost of performing the step classically. We provide two applications of our quantum gradient descent algorithm: first, for solving positive semidefinite linear systems, and, second, for performing stochastic gradient descent for the weighted least squares problem.
  • PDF
    We show that measuring pairs of qubits in the Bell basis can be used to obtain a simple quantum algorithm for efficiently identifying an unknown stabilizer state of n qubits. The algorithm uses O(n) copies of the input state and fails with exponentially small probability.
  • PDF
    This set of lecture notes forms the basis of a series of lectures delivered at the 48th IFF Spring School 2017 on Topological Matter: Topological Insulators, Skyrmions and Majoranas at Forschungszentrum Juelich, Germany. The first part of the lecture notes covers the basics of abelian and non-abelian anyons and their realization in Kitaev's honeycomb model. The second part discusses how to perform universal quantum computation using Majorana fermions.
  • Oct 07 2016 quant-ph arXiv:1610.01664v1
    PDF
    Quantum information and computation provide a fascinating twist on the notion of proofs in computational complexity theory. For instance, one may consider a quantum computational analogue of the complexity class NP, known as QMA, in which a quantum state plays the role of a proof (also called a certificate or witness), and is checked by a polynomial-time quantum computation. For some problems, the fact that a quantum proof state could be a superposition over exponentially many classical states appears to offer computational advantages over classical proof strings. In the interactive proof system setting, one may consider a verifier and one or more provers that exchange and process quantum information rather than classical information during an interaction for a given input string, giving rise to quantum complexity classes such as QIP, QSZK, and QMIP* that represent natural quantum analogues of IP, SZK, and MIP. While quantum interactive proof systems inherit some properties from their classical counterparts, they also possess distinct and uniquely quantum features that lead to an interesting landscape of complexity classes based on variants of this model. In this survey we provide an overview of many of the known results concerning quantum proofs, computational models based on this concept, and properties of the complexity classes they define. In particular, we discuss non-interactive proofs and the complexity class QMA, single-prover quantum interactive proof systems and the complexity class QIP, statistical zero-knowledge quantum interactive proof systems and the complexity class QSZK, and multiprover interactive proof systems and the complexity classes QMIP, QMIP*, and MIP*.
  • PDF
    We introduce a class of so-called Markovian marginals, which gives a natural framework for constructing solutions to the quantum marginal problem. We consider a set of marginals that possess a certain internal quantum Markov chain structure. If they are equipped with such a structure and are locally consistent on their overlapping supports, there exists a global state that is consistent with all the marginals. The proof is constructive, and relies on a reduction of the marginal problem to a certain combinatorial problem. By employing an entanglement entropy scaling law, we give a physical argument that the requisite structure exists in any states with finite correlation lengths. This includes topologically ordered states as well as finite temperature Gibbs states.
  • PDF
    We give precise quantum resource estimates for Shor's algorithm to compute discrete logarithms on elliptic curves over prime fields. The estimates are derived from a simulation of a Toffoli gate network for controlled elliptic curve point addition, implemented within the framework of the quantum computing software tool suite LIQ$Ui|\rangle$. We determine circuit implementations for reversible modular arithmetic, including modular addition, multiplication and inversion, as well as reversible elliptic curve point addition. We conclude that elliptic curve discrete logarithms on an elliptic curve defined over an $n$-bit prime field can be computed on a quantum computer with at most $9n + 2\lceil\log_2(n)\rceil+10$ qubits using a quantum circuit of at most $448 n^3 \log_2(n) + 4090 n^3$ Toffoli gates. We are able to classically simulate the Toffoli networks corresponding to the controlled elliptic curve point addition as the core piece of Shor's algorithm for the NIST standard curves P-192, P-224, P-256, P-384 and P-521. Our approach allows gate-level comparisons to recent resource estimates for Shor's factoring algorithm. The results also support estimates given earlier by Proos and Zalka and indicate that, for current parameters at comparable classical security levels, the number of qubits required to tackle elliptic curves is less than for attacking RSA, suggesting that indeed ECC is an easier target than RSA.
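
The headline counts above are explicit enough to tabulate. A quick sketch plugging the quoted formulas (at most $9n + 2\lceil\log_2(n)\rceil+10$ qubits and $448 n^3 \log_2(n) + 4090 n^3$ Toffoli gates) into the NIST field sizes:

```python
import math

def ecc_dlog_resources(n):
    # Formulas quoted in the abstract above, for an n-bit prime field
    qubits = 9 * n + 2 * math.ceil(math.log2(n)) + 10
    toffolis = 448 * n**3 * math.log2(n) + 4090 * n**3
    return qubits, toffolis

for n in (192, 224, 256, 384, 521):  # NIST curves P-192 ... P-521
    q, t = ecc_dlog_resources(n)
    print(f"P-{n}: {q} qubits, {t:.2e} Toffoli gates")
```
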
  • PDF
    Brandão and Svore very recently gave quantum algorithms for approximately solving semidefinite programs, which in some regimes are faster than the best-possible classical algorithms in terms of the dimension $n$ of the problem and the number $m$ of constraints, but worse in terms of various other parameters. In this paper we improve their algorithms in several ways, getting better dependence on those other parameters. To this end we develop new techniques for quantum algorithms, for instance a general way to efficiently implement smooth functions of sparse Hamiltonians, and a generalized minimum-finding procedure. We also show limits on this approach to quantum SDP-solvers, for instance for combinatorial optimization problems that have a lot of symmetry. Finally, we prove some general lower bounds showing that in the worst case, the complexity of every quantum LP-solver (and hence also SDP-solver) has to scale linearly with $mn$ when $m\approx n$, which is the same as classical.
  • PDF
    We present a brief review of discrete structures in a finite Hilbert space, relevant for the theory of quantum information. Unitary operator bases, mutually unbiased bases, the Clifford group and stabilizer states, the discrete Wigner function, symmetric informationally complete measurements, and projective and unitary t-designs are discussed. Some recent results in the field are covered and several important open questions are formulated. We advocate a geometric approach to the subject and emphasize numerous links to various mathematical problems.
  • PDF
    The Travelling Salesman Problem is one of the most famous problems in graph theory. However, little is currently known about the extent to which quantum computers could speed up algorithms for the problem. In this paper, we prove a quadratic quantum speedup when the degree of each vertex is at most 3 by applying a quantum backtracking algorithm to a classical algorithm by Xiao and Nagamochi. We then use similar techniques to accelerate a classical algorithm for when the degree of each vertex is at most 4, before speeding up higher-degree graphs via reductions to these instances.
  • PDF
    We present an algorithm that takes a CSS stabilizer code as input, and outputs another CSS stabilizer code such that the stabilizer generators all have weights $O(1)$ and such that $O(1)$ generators act on any given qubit. The number of logical qubits is unchanged by the procedure, while we give bounds on the increase in number of physical qubits and in the effect on distance and other code parameters, such as soundness (as a locally testable code) and "cosoundness" (defined later). Applications are discussed, including to codes from high-dimensional manifolds which have logarithmic weight stabilizers. Assuming a conjecture in geometry [hdm], this allows the construction of CSS stabilizer codes with generator weight $O(1)$ and almost linear distance. Another application of the construction is to increasing the distance to $X$ or $Z$ errors, whichever is smaller, so that the two distances are equal.
  • PDF
    Current experiments are taking the first steps toward noise-resilient logical qubits. Crucially, a quantum computer must not merely store information, but also process it. A fault-tolerant computational procedure ensures that errors do not multiply and spread. This review compares the leading proposals for promoting a quantum memory to a quantum processor. We compare magic state distillation, color code techniques and other alternative ideas, paying attention to relative resource demands. We discuss several no-go results which hold for low-dimensional topological codes and outline the potential rewards of using high-dimensional quantum (LDPC) codes in modular architectures.
  • PDF
    The surface code is one of the most successful approaches to topological quantum error-correction. It boasts the smallest known syndrome extraction circuits and correspondingly largest thresholds. Defect-based logical encodings of a new variety called twists have made it possible to implement the full Clifford group without state distillation. Here we investigate a patch-based encoding involving a modified twist. In our modified formulation, the resulting codes, called triangle codes for the shape of their planar layout, have only weight-four checks and relatively simple syndrome extraction circuits that maintain a high, near surface-code-level threshold. They also use 25% fewer physical qubits per logical qubit than the surface code. Moreover, benefiting from the twist, we can implement all Clifford gates by lattice surgery without the need for state distillation. By a surgical transformation to the surface code, we also develop a scheme of doing all Clifford gates on surface code patches in an atypical planar layout, though with less qubit efficiency than the triangle code. Finally, we remark that logical qubits encoded in triangle codes are naturally amenable to logical tomography, and the smallest triangle code can demonstrate high-pseudothreshold fault-tolerance to depolarizing noise using just 13 physical qubits.
  • PDF
    The phenomenon of data hiding, i.e. the existence of pairs of states of a bipartite system that are perfectly distinguishable via general entangled measurements yet almost indistinguishable under LOCC, is a distinctive signature of nonclassicality. The relevant figure of merit is the maximal ratio (called data hiding ratio) between the distinguishability norms associated with the two sets of measurements we are comparing, typically all measurements vs LOCC protocols. For a bipartite $n\times n$ quantum system, it is known that the data hiding ratio scales as $n$, i.e. the square root of the dimension of the local state space of density matrices. We show that for bipartite $n_A\times n_B$ systems the maximum data hiding ratio against LOCC protocols is $\Theta\left(\min\{n_A,n_B\}\right)$. This scaling is better than the previously best obtained $\sqrt{n_A n_B}$, and moreover our intuitive argument yields constants close to optimal. In this paper, we investigate data hiding in the more general context of general probabilistic theories (GPTs), an axiomatic framework for physical theories encompassing only the most basic requirements about the predictive power of the theory. The main result of the paper is the determination of the maximal data hiding ratio obtainable in an arbitrary GPT, which is shown to scale linearly in the minimum of the local dimensions. We exhibit an explicit model achieving this bound up to additive constants, finding that quantum mechanics exhibits a data hiding ratio which is only the square root of the maximal one. Our proof rests crucially on an unexpected link between data hiding and the theory of projective and injective tensor products of Banach spaces. Finally, we develop a body of techniques to compute data hiding ratios for a variety of restricted classes of GPTs that support further symmetries.
  • PDF
    I discuss a variety of issues relating to near-future experiments demonstrating fault-tolerant quantum computation. I describe a family of fault-tolerant quantum circuits that can be performed with 5 qubits arranged on a ring with nearest-neighbor interactions. I also present a criterion whereby we can say that an experiment has succeeded in demonstrating fault tolerance. Finally, I discuss the possibility of using future fault-tolerant experiments to answer important questions about the interaction of fault-tolerant protocols with real experimental errors.
  • PDF
    The class of commuting quantum circuits known as IQP (instantaneous quantum polynomial-time) has been shown to be hard to simulate classically, assuming certain complexity-theoretic conjectures. Here we study the power of IQP circuits in the presence of physically motivated constraints. First, we show that there is a family of sparse IQP circuits that can be implemented on a square lattice of n qubits in depth O(sqrt(n) log n), and which is likely hard to simulate classically. Next, we show that, if an arbitrarily small constant amount of noise is applied to each qubit at the end of any IQP circuit whose output probability distribution is sufficiently anticoncentrated, there is a polynomial-time classical algorithm that simulates sampling from the resulting distribution, up to constant accuracy in total variation distance. However, we show that purely classical error-correction techniques can be used to design IQP circuits which remain hard to simulate classically, even in the presence of arbitrary amounts of noise of this form. These results demonstrate the challenges faced by experiments designed to demonstrate quantum supremacy over classical computation, and how these challenges can be overcome.
  • PDF
    A well-known result of Gottesman and Knill states that Clifford circuits - i.e. circuits composed of only CNOT, Hadamard, and $\pi/4$ phase gates - are efficiently classically simulable. We show that in contrast, "conjugated Clifford circuits" (CCCs) - where one additionally conjugates every qubit by the same one-qubit gate U - can perform hard sampling tasks. In particular, we fully classify the computational power of CCCs by showing that essentially any non-Clifford conjugating unitary U can give rise to sampling tasks which cannot be simulated classically to constant multiplicative error, unless the polynomial hierarchy collapses. Furthermore, we show that this hardness result can be extended to allow for the more realistic model of constant additive error, under a plausible complexity-theoretic conjecture.
  • PDF
    In this work we formulate thermodynamics as an exclusive consequence of information conservation. The framework can be applied to the most general situations, beyond the traditional assumptions in thermodynamics, where systems and thermal baths could be quantum, of arbitrary sizes, and could even possess inter-system correlations. Further, it does not require an a priori predetermined temperature associated to a thermal bath, which does not make much sense for finite-size cases. Importantly, thermal baths and systems are not treated here differently; rather, both are considered on an equal footing. This leads us to introduce a "temperature"-independent formulation of thermodynamics. We rely on the fact that, for a given amount of information, measured by the von Neumann entropy, any system can be transformed to a state that possesses minimal energy. This state is known as a completely passive state that acquires a Boltzmann-Gibbs canonical form with an intrinsic temperature. We introduce the notions of bound and free energy and use them to quantify heat and work, respectively. We explicitly use information conservation as the fundamental principle of nature, and develop universal notions of equilibrium, heat and work, universal fundamental laws of thermodynamics, and Landauer's principle that connects thermodynamics and information. We demonstrate that the maximum efficiency of a quantum engine with a finite bath is in general different from, and smaller than, that of an ideal Carnot engine. We introduce a resource-theoretic framework for our intrinsic-temperature-based thermodynamics, within which we address the problem of work extraction and inter-state transformations. We also extend the framework to the case of multiple conserved quantities.
  • PDF
    We give a new upper bound on the quantum query complexity of deciding $st$-connectivity on certain classes of planar graphs, and show the bound is sometimes exponentially better than previous results. We then show Boolean formula evaluation reduces to deciding connectivity on just such a class of graphs. Applying the algorithm for $st$-connectivity to Boolean formula evaluation problems, we match the $O(\sqrt{N})$ bound on the quantum query complexity of evaluating formulas on $N$ variables, give a quadratic speed-up over the classical query complexity of a certain class of promise Boolean formulas, and show this approach can yield superpolynomial quantum/classical separations. These results indicate that this $st$-connectivity-based approach may be the "right" way of looking at quantum algorithms for formula evaluation.
  • PDF
    We present an infinite family of protocols to distill magic states for $T$-gates that has a low space overhead and uses an asymptotic number of input magic states to achieve a given target error that is conjectured to be optimal. The space overhead, defined as the ratio between the physical qubits to the number of output magic states, is asymptotically constant, while both the number of input magic states used per output state and the $T$-gate depth of the circuit scale linearly in the logarithm of the target error $\delta$ (up to $\log \log 1/\delta$). Unlike other distillation protocols, this protocol achieves this performance without concatenation and the input magic states are injected at various steps in the circuit rather than all at the start of the circuit. The protocol can be modified to distill magic states for other gates at the third level of the Clifford hierarchy, with the same asymptotic performance. The protocol relies on the construction of weakly self-dual CSS codes with many logical qubits and large distance, allowing us to implement control-SWAPs on multiple qubits. We call this code the "inner code". The control-SWAPs are then used to measure properties of the magic state and detect errors, using another code that we call the "outer code". Alternatively, we use weakly-self dual CSS codes which implement controlled Hadamards for the inner code, reducing circuit depth. We present several specific small examples of this protocol.
  • PDF
    Tensor network methods are taking a central role in modern quantum physics and beyond. They can provide an efficient approximation to certain classes of quantum states, and the associated graphical language makes it easy to describe and pictorially reason about quantum circuits, channels, protocols, open systems and more. Our goal is to explain tensor networks and some associated methods as quickly and as painlessly as possible. Beginning with the key definitions, the graphical tensor network language is presented through examples. We then provide an introduction to matrix product states. We conclude the tutorial with tensor contractions evaluating combinatorial counting problems. The first one counts the number of solutions for Boolean formulae, whereas the second is Penrose's tensor contraction algorithm, returning the number of $3$-edge-colorings of $3$-regular planar graphs.
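
The counting application mentioned at the end fits in a few lines: represent each clause as a 0/1 tensor and contract over shared variable indices. A toy example (the formula below is our own, chosen for brevity) counting satisfying assignments of (x OR y) AND ((NOT y) OR z):

```python
import numpy as np

C1 = np.array([[0, 1], [1, 1]])  # C1[x, y] = x OR y
C2 = np.array([[1, 1], [0, 1]])  # C2[y, z] = (NOT y) OR z

# Contract over x, y, z: each assignment contributes 1 iff both clauses hold.
count = int(np.einsum('xy,yz->', C1, C2))
print(count)  # 4 of the 8 assignments satisfy the formula
```
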
  • PDF
    Preparing quantum thermal states on a quantum computer is in general a difficult task. We provide a procedure to prepare a thermal state on a quantum computer with a logarithmic depth circuit of local quantum channels assuming that the thermal state correlations satisfy the following two properties: (i) the correlations between two regions are exponentially decaying in the distance between the regions, and (ii) the thermal state is an approximate Markov state for shielded regions. We require both properties to hold for the thermal state of the Hamiltonian on any induced subgraph of the original lattice. Assumption (ii) is satisfied for all commuting Gibbs states, while assumption (i) is satisfied for every model above a critical temperature. Both assumptions are satisfied in one spatial dimension. Moreover, both assumptions are expected to hold above the thermal phase transition for models without any topological order at finite temperature. As a building block, we show that exponential decay of correlation (for thermal states of Hamiltonians on all induced subgraphs) is sufficient to efficiently estimate the expectation value of a local observable. Our proof uses quantum belief propagation, a recent strengthening of strong sub-additivity, and naturally breaks down for states with topological order.
  • PDF
    Although the emergence of a fully functional quantum computer may still be far away from today, in the near future it is possible to have medium-size, special-purpose quantum devices that can perform computational tasks not efficiently simulable with any classical computer. This status is known as quantum supremacy (or quantum advantage), and one of the promising approaches to reach it is through the sampling of chaotic quantum circuits. Sampling of ideal chaotic quantum circuits has been argued to require exponential time for classical devices. A major question is whether quantum supremacy can be maintained under noise without error correction, as the implementation of fault tolerance would cost lots of extra qubits and quantum gates. Here we show that, for a family of chaotic quantum circuits subject to Pauli errors, there exists a non-exponential classical algorithm capable of simulating the noisy chaotic quantum circuits with bounded errors. This result represents a serious challenge to a previous result in the literature suggesting the failure of classical devices in simulating noisy chaotic circuits with about 48 qubits and depth 25. Moreover, even though our model does not cover all types of experimental errors, from a practical point of view our result describes a well-defined setting where quantum supremacy can be tested against the challenges of classical rivals, in the context of chaotic quantum circuits.
  • PDF
    Although an input distribution may not majorize a target distribution, it may majorize a distribution which is close to the target. Here we introduce a notion of approximate majorization. For any distribution, and given a distance $\delta$, we find the approximate distributions which majorize (are majorized by) all other distributions within the distance $\delta$. We call these the steepest and flattest approximations. This enables one to compute how close one can get to a given target distribution under a process governed by majorization. We show that the flattest and steepest approximations preserve ordering under majorization. Furthermore, we give a notion of majorization distance. This has applications in areas ranging from thermodynamics and entanglement theory to economics.
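
Exact majorization itself is a one-line test, which makes the approximate notion easy to experiment with. A minimal sketch using the standard definition (same-length distributions assumed):

```python
import numpy as np

def majorizes(p, q):
    # p majorizes q iff every prefix sum of p sorted in descending
    # order dominates the corresponding prefix sum of q
    P = np.cumsum(np.sort(p)[::-1])
    Q = np.cumsum(np.sort(q)[::-1])
    return bool(np.all(P >= Q - 1e-12))

uniform = np.ones(4) / 4
peaked = np.array([0.7, 0.1, 0.1, 0.1])
assert majorizes(peaked, uniform) and not majorizes(uniform, peaked)
```
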
  • PDF
    Robust quantum computation requires encoding delicate quantum information into degrees of freedom that are hard for the environment to change. Quantum encodings have been demonstrated in many physical systems by observing and correcting storage errors, but applications require not just storing information; we must accurately compute even with faulty operations. The theory of fault-tolerant quantum computing illuminates a way forward by providing a foundation and collection of techniques for limiting the spread of errors. Here we implement one of the smallest quantum codes in a five-qubit superconducting transmon device and demonstrate fault-tolerant state preparation. We characterize the resulting codewords through quantum process tomography and study the free evolution of the logical observables. Our results are consistent with fault-tolerant state preparation in a protected qubit subspace.

Recent comments

Ben Criger Sep 08 2017 08:09 UTC

Oh look, there's another technique for decoding surface codes subject to X/Z correlated errors: https://scirate.com/arxiv/1709.02154

Aram Harrow Sep 06 2017 07:54 UTC

The paper only applies to conformal field theories, and such a result cannot hold for more general 1-D systems by 0705.4077 and other papers (assuming standard complexity theory conjectures).

Felix Leditzky Sep 05 2017 21:27 UTC

Thanks for the clarification, Philippe!

Philippe Faist Sep 05 2017 21:09 UTC

Hi Felix, thanks for the good question.

We've found it more convenient to consider trace-nonincreasing and $\Gamma$-sub-preserving maps (and this is justified by the fact that they can be dilated to fully trace-preserving and $\Gamma$-preserving maps on a larger system). The issue arises because

...(continued)
Felix Leditzky Sep 05 2017 19:02 UTC

What is the reason/motivation to consider trace-non-increasing maps instead of trace-preserving maps in your framework and the definition of the coherent relative entropy?

Steve Flammia Aug 30 2017 22:30 UTC

Thanks for the reference Ashley. If I understand your paper, you are still measuring stabilizers of X- and Z-type at the top layer of the code. So it might be that we can improve on the factor of 2 that you found if we tailor the stabilizers to the noise bias at the base level.

Ashley Aug 30 2017 22:09 UTC

We followed Aliferis and Preskill's approach in [https://arxiv.org/abs/1308.4776][1] and found that the fault-tolerant threshold for the surface code was increased by approximately a factor of two, from around 0.75 per cent to 1.5 per cent for a bias of 10 to 100.

[1]: https://arxiv.org/abs/1308.4776

...(continued)
Stephen Bartlett Aug 30 2017 21:55 UTC

Following on from Steve's comments, it's possible to use the bias-preserving gate set in Aliferis and Preskill directly to do the syndrome extraction, as you build up a CNOT gadget, but such a direct application of your methods would be very complicated and involve a lot of gate teleportation. If y

...(continued)
Steve Flammia Aug 30 2017 21:38 UTC

We agree that finding good syndrome extraction circuits is an important question. At the moment we do not have such circuits, though we have started to think about them. We are optimistic that this can be done in principle, but it remains to be seen if the circuits can be made sufficiently simple to

...(continued)
John Preskill Aug 30 2017 14:48 UTC

Hi Steves and David. When we wrote https://arxiv.org/abs/0710.1301 our viewpoint was that a gate with highly biased (primarily Z) noise would need to commute with Z. So we built our fault-tolerant gadgets from such gates, along with preparations and measurements in the X basis.

Can you easily ext

...(continued)