Top arXiv papers

  • PDF
    We give two new quantum algorithms for solving semidefinite programs (SDPs) providing quantum speed-ups. We consider SDP instances with $m$ constraint matrices, each of dimension $n$, rank $r$, and sparsity $s$. The first algorithm assumes an input model where one is given access to entries of the matrices at unit cost. We show that it has run time $\tilde{O}(s^2(\sqrt{m}\epsilon^{-10}+\sqrt{n}\epsilon^{-12}))$, where $\epsilon$ is the error. This gives an optimal dependence in terms of $m$ and $n$, and a quadratic improvement over previous quantum algorithms when $m\approx n$. The second algorithm assumes a fully quantum input model in which the matrices are given as quantum states. We show that its run time is $\tilde{O}(\sqrt{m}+\text{poly}(r))\cdot\text{poly}(\log m,\log n,B,\epsilon^{-1})$, with $B$ an upper bound on the trace-norm of all input matrices. In particular, the complexity depends only poly-logarithmically on $n$ and polynomially on $r$. We apply the second SDP solver to the problem of learning a good description of a quantum state with respect to a set of measurements: Given $m$ measurements and copies of an unknown state $\rho$, we show we can find in time $\sqrt{m}\cdot\text{poly}(\log m,\log n,r,\epsilon^{-1})$ a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as $\rho$ on the $m$ measurements, up to error $\epsilon$. The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data, as considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians with a poly-logarithmic dependence on its dimension, which could be of independent interest.
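
    As an illustration of the matrix multiplicative weight (MMW) method that the paper "quantizes", here is a minimal classical sketch of an MMW feasibility loop for SDPs over density matrices. The feasibility formulation and all parameter names below are our own illustrative choices, not the paper's notation.

    ```python
    # Classical MMW sketch: search for a density matrix rho with
    # Tr(A_j rho) <= b_j for all j, by repeatedly penalising the most
    # violated constraint and re-forming the Gibbs state.
    import numpy as np
    from scipy.linalg import expm

    def mmw_feasibility(A, b, eta=0.1, T=200, tol=1e-6):
        n = A[0].shape[0]
        loss = np.zeros((n, n))              # accumulated penalty matrix
        for _ in range(T):
            rho = expm(-eta * loss)          # Gibbs state of the penalties
            rho /= np.trace(rho)
            viol = [np.trace(Aj @ rho).real - bj for Aj, bj in zip(A, b)]
            j = int(np.argmax(viol))
            if viol[j] <= tol:               # all constraints satisfied
                return rho
            loss += A[j]                     # penalise the violated direction
        return rho

    # toy instance: two diagonal 2x2 constraints
    A = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
    b = [0.6, 0.6]
    print(np.round(mmw_feasibility(A, b), 3))
    ```

    The quantum speed-ups in the paper come from preparing the Gibbs state and estimating the traces on a quantum computer, rather than via a classical `expm`.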
  • PDF
    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of photons in linear optics, which has sparked interest as a rapid way to demonstrate this quantum supremacy. Photon statistics are governed by intractable matrix functions known as permanents, which suggests that sampling from the distribution obtained by injecting photons into a linear-optical network could be solved more quickly by a photonic experiment than by a classical computer. The contrast between the apparently awesome challenge faced by any classical sampling algorithm and the apparently near-term experimental resources required for a large boson sampling experiment has raised expectations that quantum supremacy by boson sampling is on the horizon. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. While the largest boson sampling experiments reported so far are with 5 photons, our classical algorithm, based on Metropolised independence sampling (MIS), allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. We argue that the impact of experimental photon losses means that demonstrating quantum supremacy by boson sampling would require a step change in technology.
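
    To make the sampling method concrete, here is a generic Metropolised independence sampling loop of the kind named in the abstract. The toy target and proposal below merely stand in for the boson sampling distribution (proportional to a squared permanent) and the distinguishable-photon proposal; they are our simplification, not the authors' code.

    ```python
    # Metropolised independence sampling (MIS): proposals are drawn from a
    # fixed distribution q, independently of the current state, and accepted
    # with probability min(1, p(y) q(x) / (p(x) q(y))).
    import numpy as np

    rng = np.random.default_rng(0)

    def mis_sample(p, q_draw, q_pdf, n_steps=10_000):
        x = q_draw()
        chain = []
        for _ in range(n_steps):
            y = q_draw()
            accept = min(1.0, (p(y) * q_pdf(x)) / (p(x) * q_pdf(y)))
            if rng.random() < accept:
                x = y
            chain.append(x)
        return np.array(chain)

    # toy example: discrete target over 5 outcomes, uniform proposal
    target = np.array([0.05, 0.10, 0.40, 0.30, 0.15])
    chain = mis_sample(lambda s: target[s],
                       lambda: rng.integers(5),
                       lambda s: 0.2)
    print(np.bincount(chain, minlength=5) / len(chain))   # approx. target
    ```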
  • PDF
    Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away --- we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing.
  • PDF
    A critical milestone on the path to useful quantum computers is quantum supremacy - a demonstration of a quantum computation that is prohibitively hard for classical computers. A leading near-term candidate, put forth by the Google/UCSB team, is sampling from the probability distributions of randomly chosen quantum circuits, which we call Random Circuit Sampling (RCS). In this paper we study both the hardness and verification of RCS. While RCS was defined with experimental realization in mind, we show complexity theoretic evidence of hardness that is on par with the strongest theoretical proposals for supremacy. Specifically, we show that RCS satisfies an average-case hardness condition - computing output probabilities of typical quantum circuits is as hard as computing them in the worst-case, and therefore #P-hard. Our reduction exploits the polynomial structure in the output amplitudes of random quantum circuits, enabled by the Feynman path integral. In addition, it follows from known results that RCS satisfies an anti-concentration property, making it the first supremacy proposal with both average-case hardness and anti-concentration.
  • PDF
    We show that there are two distinct aspects of a general quantum circuit that can make it hard to efficiently simulate with a classical computer. The first aspect, which has been well-studied, is that it can be hard to efficiently estimate the probability associated with a particular measurement outcome. However, we show that this aspect alone does not determine whether a quantum circuit can be efficiently simulated. The second aspect is that, in general, there can be an exponential number of `relevant' outcomes that are needed for an accurate simulation, and so efficient simulation may not be possible even in situations where the probabilities of individual outcomes can be efficiently estimated. We show that these two aspects are distinct, the former being necessary but not sufficient for simulability whilst the pair is jointly sufficient for simulability. Specifically, we prove that a family of quantum circuits is efficiently simulable if it satisfies two properties: one related to the efficiency of Born rule probability estimation, and the other related to the sparsity of the outcome distribution. We then prove a pair of hardness results (using standard complexity assumptions and a variant of a commonly-used average case hardness conjecture), where we identify families of quantum circuits that satisfy one property but not the other, and for which efficient simulation is not possible. To prove our results, we consider a notion of simulation of quantum circuits that we call epsilon-simulation. This notion is less stringent than exact sampling and is now in common use in quantum hardness proofs.
  • PDF
    Contextuality has been conjectured to be a super-classical resource for quantum computation, analogous to the role of non-locality as a super-classical resource for communication. We show that the presence of contextuality places a lower bound on the amount of classical memory required to simulate any quantum sub-theory, thereby establishing a quantitative connection between contextuality and classical simulability. We apply our result to the qubit stabilizer sub-theory, where the presence of state-independent contextuality has been an obstacle in establishing contextuality as a quantum computational resource. We find that the presence of contextuality in this sub-theory demands that the minimum number of classical bits of memory required to simulate a multi-qubit system must scale quadratically in the number of qubits; notably, this is the same scaling as the Gottesman-Knill algorithm. We contrast this result with the (non-contextual) qudit case, where linear scaling is possible.
  • PDF
    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets are motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed-up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed.
  • PDF
    With quantum computers of significant size now on the horizon, we should understand how to best exploit their initially limited abilities. To this end, we aim to identify a practical problem that is beyond the reach of current classical computers, but that requires the fewest resources for a quantum computer. We consider quantum simulation of spin systems, which could be applied to understand condensed matter phenomena. We synthesize explicit circuits for three leading quantum simulation algorithms, employing diverse techniques to tighten error bounds and optimize circuit implementations. Quantum signal processing appears to be preferred among algorithms with rigorous performance guarantees, whereas higher-order product formulas prevail if empirical error estimates suffice. Our circuits are orders of magnitude smaller than those for the simplest classically-infeasible instances of factoring and quantum chemistry.
  • PDF
    We study the classical complexity of the exact Boson Sampling problem where the objective is to produce provably correct random samples from a particular quantum mechanical distribution. The computational framework was proposed by Aaronson and Arkhipov in 2011 as an attainable demonstration of `quantum supremacy', that is, a practical quantum computing experiment able to produce output at a speed beyond the reach of classical (that is, non-quantum) computer hardware. Since its introduction, Boson Sampling has been the subject of intense international research in the world of quantum computing. On the face of it, the problem is challenging for classical computation. Aaronson and Arkhipov show that exact Boson Sampling is not efficiently solvable by a classical computer unless $P^{\#P} = BPP^{NP}$ and the polynomial hierarchy collapses to the third level. The fastest known exact classical algorithm for the standard Boson Sampling problem takes $O({m + n -1 \choose n} n 2^n )$ time to produce samples for a system with input size $n$ and $m$ output modes, making it infeasible for anything but the smallest values of $n$ and $m$. We give an algorithm that is much faster, running in $O(n 2^n + \operatorname{poly}(m,n))$ time and $O(m)$ additional space. The algorithm is simple to implement and has low constant factor overheads. As a consequence, our classical algorithm is able to solve the exact Boson Sampling problem for system sizes far beyond current photonic quantum computing experimentation, thereby significantly reducing the likelihood of achieving near-term quantum supremacy in the context of Boson Sampling.
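
    The per-sample cost quoted above is dominated by evaluating matrix permanents. For reference, here is a compact (unoptimised) sketch of Ryser's permanent formula; the Gray-code trick that brings the cost down to $O(n 2^n)$ is omitted for clarity.

    ```python
    # Ryser's formula: Per(A) = (-1)^n * sum over nonempty column subsets S
    # of (-1)^{|S|} * prod_i sum_{j in S} A[i, j].
    import itertools
    import numpy as np

    def permanent_ryser(A):
        n = A.shape[0]
        total = 0.0
        for r in range(1, n + 1):
            for S in itertools.combinations(range(n), r):
                total += (-1) ** r * np.prod(A[:, list(S)].sum(axis=1))
        return (-1) ** n * total

    print(permanent_ryser(np.ones((4, 4))))   # 4x4 all-ones matrix: 4! = 24.0
    ```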
  • PDF
    We consider a generic framework of optimization algorithms based on gradient descent. We develop a quantum algorithm that computes the gradient of a multi-variate real-valued function $f:\mathbb{R}^d\rightarrow \mathbb{R}$ by evaluating it at only a logarithmic number of points in superposition. Our algorithm is an improved version of Stephen Jordan's gradient computation algorithm, providing an approximation of the gradient $\nabla f$ with quadratically better dependence on the evaluation accuracy of $f$, for an important class of smooth functions. Furthermore, we show that most objective functions arising from quantum optimization procedures satisfy the necessary smoothness conditions, hence our algorithm provides a quadratic improvement in the complexity of computing their gradient. We also show that in a continuous phase-query model, our gradient computation algorithm has optimal query complexity up to poly-logarithmic factors, for a particular class of smooth functions. Moreover, we show that for low-degree multivariate polynomials our algorithm can provide exponential speedups compared to Jordan's algorithm in terms of the dimension $d$. One of the technical challenges in applying our gradient computation procedure for quantum optimization problems is the need to convert between a probability oracle (which is common in quantum optimization procedures) and a phase oracle (which is common in quantum algorithms) of the objective function $f$. We provide efficient subroutines to perform this delicate interconversion between the two types of oracles incurring only a logarithmic overhead, which might be of independent interest. Finally, using these tools we improve the runtime of prior approaches for training quantum auto-encoders, variational quantum eigensolvers (VQE), and quantum approximate optimization algorithms (QAOA).
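
    For contrast with the quantum algorithm, the standard classical baseline is a finite-difference gradient, which needs $2d$ evaluations of $f$ for a $d$-dimensional input rather than a logarithmic number of superposed evaluations. A sketch with a toy objective of our own choosing:

    ```python
    # Central-difference gradient: 2*d evaluations of f.
    import numpy as np

    def gradient_central(f, x, h=1e-5):
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    f = lambda x: np.sin(x[0]) + x[1] ** 2           # toy smooth objective
    print(gradient_central(f, np.array([0.3, -1.0])))  # approx. [cos(0.3), -2]
    ```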
  • PDF
    Schur-Weyl duality is a ubiquitous tool in quantum information. At its heart is the statement that the space of operators that commute with the tensor powers of all unitaries is spanned by the permutations of the tensor factors. In this work, we describe a similar duality theory for tensor powers of Clifford unitaries. The Clifford group is a central object in many subfields of quantum information, most prominently in the theory of fault-tolerance. The duality theory has a simple and clean description in terms of finite geometries. We demonstrate its effectiveness in several applications: (1) We resolve an open problem in quantum property testing by showing that "stabilizerness" is efficiently testable: There is a protocol that, given access to six copies of an unknown state, can determine whether it is a stabilizer state, or whether it is far away from the set of stabilizer states. We give a related membership test for the Clifford group. (2) We find that tensor powers of stabilizer states have an increased symmetry group. We provide corresponding de Finetti theorems, showing that the reductions of arbitrary states with this symmetry are well-approximated by mixtures of stabilizer tensor powers (in some cases, exponentially well). (3) We show that the distance of a pure state to the set of stabilizers can be lower-bounded in terms of the sum-negativity of its Wigner function. This gives a new quantitative meaning to the sum-negativity (and the related mana) -- a measure relevant to fault-tolerant quantum computation. The result constitutes a robust generalization of the discrete Hudson theorem. (4) We show that complex projective designs of arbitrary order can be obtained from a finite number (independent of the number of qudits) of Clifford orbits. To prove this result, we give explicit formulas for arbitrary moments of random stabilizer states.
  • PDF
    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is in fact at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
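
    The hashing bound mentioned above is easy to check numerically: for a Pauli channel the hashing rate is $R = 1 - H(p_I, p_X, p_Y, p_Z)$, and the zero-rate threshold is the total error probability at which the entropy $H$ reaches 1. The bias convention below (eta = p_Z / (p_X + p_Y), with p_X = p_Y) is our assumption for this sketch.

    ```python
    # Solve H(1-p, p_X, p_Y, p_Z) = 1 for p by bisection.
    import numpy as np

    def shannon(ps):
        ps = np.array([q for q in ps if q > 0])
        return -np.sum(ps * np.log2(ps))

    def hashing_threshold(eta, lo=1e-6, hi=0.5, iters=60):
        def H(p):
            pz = p * eta / (eta + 1)
            px = py = p / (2 * (eta + 1))
            return shannon([1 - p, px, py, pz])
        for _ in range(iters):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if H(mid) < 1 else (lo, mid)
        return (lo + hi) / 2

    for eta in [0.5, 10, 1e6]:   # depolarising, bias 10, near-pure dephasing
        print(eta, round(hashing_threshold(eta), 4))
    ```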
  • PDF
    We present the first protocol allowing a classical computer to interactively verify the result of an efficient quantum computation. We achieve this by constructing a measurement protocol, which enables a classical verifier to ensure that the quantum prover holds an n-qubit quantum state, and correctly reports the results of measuring it in a basis of the verifier's choice. This is enforced based on the assumption that the learning with errors problem is computationally intractable for efficient quantum machines.
  • PDF
    Characterising quantum processes is a key task in, and constitutes a challenge for, the development of quantum technologies, especially at the noisy intermediate scale of today's devices. One method for characterising processes is randomised benchmarking, which is robust against state preparation and measurement (SPAM) errors, and can be used to benchmark Clifford gates. A complementary approach asks for full tomographic knowledge. Compressed sensing techniques achieve full tomography of quantum channels essentially at optimal resource efficiency. So far, guarantees for compressed sensing protocols rely on unstructured random measurements and cannot be applied to the data acquired from randomised benchmarking experiments. It has been an open question whether or not the favourable features of both worlds can be combined. In this work, we give a positive answer to this question. For the important case of characterising multi-qubit unitary gates, we provide a rigorously guaranteed and practical reconstruction method that works with an essentially optimal number of average gate fidelities measured with respect to random Clifford unitaries. Moreover, for general unital quantum channels we provide an explicit expansion into a unitary 2-design, allowing for a practical and guaranteed reconstruction also in that case. As a side result, we obtain a new statistical interpretation of the unitarity -- a figure of merit that characterises the coherence of a process. In our proofs we exploit recent representation theoretic insights on the Clifford group, develop a version of Collins' calculus with Weingarten functions for integration over the Clifford group, and combine this with proof techniques from compressed sensing.
  • PDF
    We introduce the problem of *shadow tomography*: given an unknown $D$-dimensional quantum mixed state $\rho$, as well as known two-outcome measurements $E_{1},\ldots,E_{M}$, estimate the probability that $E_{i}$ accepts $\rho$, to within additive error $\varepsilon$, for each of the $M$ measurements. How many copies of $\rho$ are needed to achieve this, with high probability? Surprisingly, we give a procedure that solves the problem by measuring only $\widetilde{O}\left( \varepsilon^{-5}\cdot\log^{4} M\cdot\log D\right)$ copies. This means, for example, that we can learn the behavior of an arbitrary $n$-qubit state, on all accepting/rejecting circuits of some fixed polynomial size, by measuring only $n^{O\left( 1\right)}$ copies of the state. This resolves an open problem of the author, which arose from his work on private-key quantum money schemes, but which also has applications to quantum copy-protected software, quantum advice, and quantum one-way communication. Recently, building on this work, Brandão et al. have given a different approach to shadow tomography using semidefinite programming, which achieves a savings in computation time.
  • PDF
    We propose an efficient scheme for verifying quantum computations in the `high complexity' regime, i.e., beyond the remit of classical computers. Previously proposed schemes remarkably provide confidence against arbitrarily malicious adversarial behaviour in the malfunctioning of the quantum computing device. Our scheme is not secure against arbitrarily adversarial behaviour, but may nevertheless be sufficiently acceptable in many practical situations. With this concession we gain in manifest simplicity and transparency, and in contrast to previous schemes, our verifier is entirely classical. It is based on the fact that adaptive Clifford circuits on general product state inputs provide universal quantum computation, while the same processes without adaptation are always classically efficiently simulatable.
  • PDF
    We construct a Hamiltonian whose dynamics simulate the dynamics of every other Hamiltonian up to exponentially long times in the system size. The Hamiltonian is time-independent, local, one-dimensional, and translation invariant. As a consequence, we show (under plausible computational complexity assumptions) that the circuit complexity of the unitary dynamics under this Hamiltonian grows steadily with time up to an exponential value in system size. This result makes progress on a recent conjecture by Susskind, in the context of the AdS/CFT correspondence, that the time evolution of the thermofield double state of two conformal field theories with a holographic dual has a circuit complexity increasing linearly in time, up to exponential time.
  • PDF
    We show how to approximately represent a quantum state using the square root of the usual amount of classical memory. The classical representation of an n-qubit state $\psi$ consists of its inner products with $O(\sqrt{2^n})$ stabilizer states. A quantum state initially specified by its $2^n$ entries in the computational basis can be compressed to this form in time $O(2^n \mathrm{poly}(n))$, and, subsequently, the compressed description can be used to additively approximate the expectation value of an arbitrary observable. Our compression scheme directly gives a new protocol for the vector in subspace problem with randomized one-way communication complexity that matches (up to polylogarithmic factors) the best known upper bound, due to Raz. We obtain an exponential improvement over Raz's protocol in terms of computational efficiency.
  • PDF
    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D string-like and 2D sheet-like logical operators to be $p^{(1)}_\mathrm{3DCC} \simeq 1.9\%$ and $p^{(2)}_\mathrm{3DCC} \simeq 27.6\%$. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the 4- and 6-body random coupling Ising models.
  • PDF
    One of the main milestones in quantum information science is to realize quantum devices that exhibit an exponential computational advantage over classical ones without being universal quantum computers, a state of affairs dubbed quantum speedup, or sometimes "quantum computational supremacy". The known schemes heavily rely on mathematical assumptions that are plausible but unproven, prominently results on anti-concentration of random prescriptions. In this work, we aim at closing the gap by proving two anti-concentration theorems. Compared to the few other known such results, these results give rise to comparably simple, physically meaningful and resource-economical schemes showing a quantum speedup in one and two spatial dimensions. At the heart of the analysis are tools of unitary designs and random circuits that allow us to conclude that universal random circuits anti-concentrate.
  • PDF
    As quantum computers have become available to the general public, the need has arisen to train a cohort of quantum programmers, many of whom have been developing classical computer programs for most of their careers. While currently available quantum computers have fewer than 100 qubits, quantum computer hardware is widely expected to grow in terms of qubit counts, quality, and connectivity. Our article aims to explain the principles of quantum programming, which are quite different from classical programming, with straightforward algebra that makes understanding the underlying quantum mechanics optional (but still fascinating). We give an introduction to quantum computing algorithms and their implementation on real quantum hardware. We survey 20 different quantum algorithms, attempting to describe each in a succinct and self-contained fashion; we show how they are implemented on IBM's quantum computer; and in each case we discuss the results of the implementation with respect to differences between the simulator and the actual hardware runs. This article introduces computer scientists and engineers to quantum algorithms and provides a blueprint for their implementations.
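
    As a taste of what such implementations look like, here is a two-qubit Bell-state circuit in Qiskit, IBM's quantum SDK. The API names follow recent Qiskit and qiskit-aer releases and may differ from the versions used in the article.

    ```python
    from qiskit import QuantumCircuit
    from qiskit_aer import AerSimulator

    qc = QuantumCircuit(2, 2)
    qc.h(0)                      # Hadamard on qubit 0
    qc.cx(0, 1)                  # CNOT entangles qubits 0 and 1
    qc.measure([0, 1], [0, 1])

    counts = AerSimulator().run(qc, shots=1024).result().get_counts()
    print(counts)                # ideally ~50/50 between '00' and '11'
    ```

    On real hardware the same circuit also returns a small fraction of '01' and '10' outcomes, which is the kind of simulator-versus-hardware gap the article discusses.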
  • PDF
    We consider a problem we call StateIsomorphism: given two quantum states of n qubits, can one be obtained from the other by rearranging the qubit subsystems? Our main goal is to study the complexity of this problem, which is a natural quantum generalisation of the problem StringIsomorphism. We show that StateIsomorphism is at least as hard as GraphIsomorphism, and show that these problems have a similar structure by presenting evidence to suggest that StateIsomorphism is an intermediate problem for QCMA. In particular, we show that the complement of the problem, StateNonIsomorphism, has a two message quantum interactive proof system, and that this proof system can be made statistical zero-knowledge. We consider also StabilizerStateIsomorphism (SSI) and MixedStateIsomorphism (MSI), showing that the complement of SSI has a quantum interactive proof system that uses classical communication only, and that MSI is QSZK-hard.
  • PDF
    In order to build a large scale quantum computer, one must be able to correct errors extremely fast. We design a fast decoding algorithm for topological codes to correct Pauli errors, erasures, and combinations of both. Our algorithm has a worst case complexity of $O(n \alpha(n))$, where $n$ is the number of physical qubits and $\alpha$ is the inverse of Ackermann's function, which is very slowly growing. For all practical purposes, $\alpha(n) \leq 3$. We prove that our algorithm performs optimally for errors of weight up to $(d-1)/2$ and for loss of up to $d-1$ qubits, where $d$ is the minimum distance of the code. Numerically, we obtain a threshold of $9.9\%$ for the 2d-toric code with perfect syndrome measurements and $2.6\%$ with faulty measurements.
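
    The inverse-Ackermann factor comes from the union-find (disjoint-set) data structure with path compression and union by rank, which the decoder uses to merge growing error clusters. A minimal sketch of that primitive (not of the full decoder):

    ```python
    # Union-find with path compression and union by rank: near-constant
    # amortised time per operation, O(alpha(n)) with alpha the inverse
    # Ackermann function.
    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # compress path
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return
            if self.rank[ra] < self.rank[rb]:                 # union by rank
                ra, rb = rb, ra
            self.parent[rb] = ra
            if self.rank[ra] == self.rank[rb]:
                self.rank[ra] += 1

    uf = UnionFind(6)
    uf.union(0, 1); uf.union(1, 2)        # clusters merge as they touch
    print(uf.find(2) == uf.find(0))       # True: same cluster
    ```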
  • PDF
    We provide the first example of a symmetry protected quantum phase that has universal computational power. Throughout this phase, which lives in spatial dimension two, the ground state is a universal resource for measurement based quantum computation.
  • PDF
    Suppose we have many copies of an unknown $n$-qubit state $\rho$. We measure some copies of $\rho$ using a known two-outcome measurement $E_{1}$, then other copies using a measurement $E_{2}$, and so on. At each stage $t$, we generate a current hypothesis $\sigma_{t}$ about the state $\rho$, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that $|\operatorname{Tr}(E_{i} \sigma_{t}) - \operatorname{Tr}(E_{i}\rho) |$, the error in our prediction for the next measurement, is at least $\varepsilon$ at most $\operatorname{O}\!\left(n / \varepsilon^2 \right) $ times. Even in the "non-realizable" setting---where there could be arbitrary noise in the measurement outcomes---we show how to output hypothesis states that do significantly worse than the best possible states at most $\operatorname{O}\!\left(\sqrt {Tn}\right) $ times on the first $T$ measurements. These results generalize a 2007 theorem by Aaronson on the PAC-learnability of quantum states, to the online and regret-minimization settings. We give three different ways to prove our results---using convex optimization, quantum postselection, and sequential fat-shattering dimension---which have different advantages in terms of parameters and portability.
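
    A classical-analogue sketch of the online loop: keep a hypothesis density matrix, and apply a matrix exponentiated-gradient update whenever the prediction for the next two-outcome measurement is off by more than epsilon. The learning rate and the specific update rule below are our illustrative choices, not necessarily the paper's exact algorithm.

    ```python
    import numpy as np
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(1)
    d, eps, eta = 2 ** 3, 0.1, 0.5        # 3 qubits, tolerance, step size

    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    rho = np.outer(v, v.conj()) / np.linalg.norm(v) ** 2   # hidden true state

    def random_effect(d):
        """Two-outcome effect: projector onto a random d/2-dim subspace."""
        M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        Q, _ = np.linalg.qr(M)
        P = Q[:, : d // 2]
        return P @ P.conj().T

    sigma = np.eye(d) / d                 # start from the maximally mixed state
    mistakes = 0
    for _ in range(200):
        E = random_effect(d)
        pred = np.trace(E @ sigma).real
        truth = np.trace(E @ rho).real
        if abs(pred - truth) > eps:       # a "mistake": update the hypothesis
            mistakes += 1
            grad = E if pred > truth else -E
            sigma = expm(logm(sigma) - eta * grad)   # exponentiated gradient
            sigma = (sigma + sigma.conj().T) / 2     # re-symmetrise numerically
            sigma /= np.trace(sigma).real
    print("mistakes after 200 rounds:", mistakes)
    ```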
  • PDF
    A family of quantum Hamiltonians is said to be universal if any other finite-dimensional Hamiltonian can be approximately encoded within the low-energy space of a Hamiltonian from that family. If the encoding is efficient, universal families of Hamiltonians can be used as universal analogue quantum simulators and universal quantum computers, and the problem of approximately determining the ground-state energy of a Hamiltonian from a universal family is QMA-complete. One natural way to categorise Hamiltonians into families is in terms of the interactions they are built from. Here we prove universality of some important classes of interactions on qudits ($d$-level systems): (1) We completely characterise the $k$-qudit interactions which are universal, if augmented with arbitrary 1-local terms. We find that, for all $k \geqslant 2$ and all local dimensions $d \geqslant 2$, almost all such interactions are universal aside from a simple stoquastic class. (2) We prove universality of generalisations of the Heisenberg model that are ubiquitous in condensed-matter physics, even if free 1-local terms are not provided. We show that the $SU(d)$ and $SU(2)$ Heisenberg interactions are universal for all local dimensions $d \geqslant 2$ (spin $\geqslant 1/2$), implying that a quantum variant of the Max-$d$-Cut problem is QMA-complete. We also show that for $d=3$ all bilinear-biquadratic Heisenberg interactions are universal. One example is the general AKLT model. (3) We prove universality of any interaction proportional to the projector onto a pure entangled state.
  • PDF
    With the current rate of progress in quantum computing technologies, 50-qubit systems will soon become a reality. To assess, refine and advance the design and control of these devices, one needs a means to test and evaluate their fidelity. This in turn requires the capability of computing ideal quantum state amplitudes for devices of such sizes and larger. In this study, we present a new approach for this task that significantly extends the boundaries of what can be classically computed. We demonstrate our method by presenting results obtained from a calculation of the complete set of output amplitudes of a universal random circuit with depth 27 in a 2D lattice of $7 \times 7$ qubits. We further present results obtained by calculating an arbitrarily selected slice of $2^{37}$ amplitudes of a universal random circuit with depth 23 in a 2D lattice of $8 \times 7$ qubits. Such calculations were previously thought to be impossible due to impracticable memory requirements. Using the methods presented in this paper, the above simulations required 4.5 and 3.0 TB of memory, respectively, to store calculations, which is well within the limits of existing classical computers.
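
    The memory figures are easy to sanity-check. Assuming 8-byte single-precision complex amplitudes (our assumption), storing a full state vector is hopeless at these sizes, which is why computing only slices of the amplitudes matters:

    ```python
    def state_vector_tib(n_qubits, bytes_per_amp=8):
        return 2 ** n_qubits * bytes_per_amp / 1024 ** 4

    print(state_vector_tib(49))   # 7x7 lattice: 4096 TiB for the full state
    print(state_vector_tib(56))   # 8x7 lattice: 524288 TiB for the full state
    print(state_vector_tib(37))   # the 2^37-amplitude slice: 1 TiB
    ```

    Compare these with the 4.5 TB and 3.0 TB the paper reports for its approach.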
  • PDF
    We give a quantum algorithm to exactly solve certain problems in combinatorial optimization, including weighted MAX-2-SAT as well as problems where the objective function is a weighted sum of products of Ising variables, all terms of the same degree $D$; this problem is called weighted MAX-E$D$-LIN2. We require that the optimal solution be unique for odd $D$ and doubly degenerate for even $D$; however, we expect that the algorithm still works without this condition and we show how to reduce to the case without this assumption at the cost of an additional overhead. While the time required is still exponential, the algorithm provably outperforms Grover's algorithm assuming a mild condition on the number of low energy states of the target Hamiltonian. A detailed analysis of the runtime reveals a tradeoff between the number of such states and the speed of the algorithm: fewer such states allow a greater speedup. This leads to a natural hybrid algorithm that finds either an exact or approximate solution.
  • PDF
    We study the problem of simulating the time evolution of a lattice Hamiltonian, where the qubits are laid out on a lattice and the Hamiltonian only includes geometrically local interactions (i.e., a qubit may only interact with qubits in its vicinity). This class of Hamiltonians is very general and encompasses all physically reasonable Hamiltonians. Our algorithm simulates the time evolution of such a Hamiltonian on $n$ qubits for time $T$ up to error $\epsilon$ using $\mathcal{O}( nT \mathrm{polylog} (nT/\epsilon))$ gates with depth $\mathcal{O}(T \mathrm{polylog} (nT/\epsilon))$. Our algorithm is the first simulation algorithm that achieves gate cost quasilinear in $nT$ and polylogarithmic in $1/\epsilon$. Our algorithm also readily generalizes to time-dependent Hamiltonians and yields an algorithm with similar gate count for any piecewise slowly varying time-dependent bounded local Hamiltonian. We also prove a matching lower bound on the gate count of such a simulation, showing that any quantum algorithm that can simulate a piecewise constant bounded local Hamiltonian in one dimension to constant error requires $\tilde\Omega(nT)$ gates in the worst case. The lower bound holds even if we only require the output state to be correct on local measurements. To our best knowledge, this is the first nontrivial lower bound on the gate complexity of the simulation problem. Our algorithm is based on a decomposition of the time-evolution unitary into a product of small unitaries using Lieb-Robinson bounds. In the appendix, we prove a Lieb-Robinson bound tailored to Hamiltonians with small commutators between local terms, giving zero Lieb-Robinson velocity in the limit of commuting Hamiltonians. This improves the performance of our algorithm when the Hamiltonian is close to commuting.
  • PDF
    In quantum algorithms discovered so far for simulating scattering processes in quantum field theories, state preparation is the slowest step. We present a new algorithm for preparing particle states to use in simulation of Fermionic Quantum Field Theory (QFT) on a quantum computer, which is based on the matrix product state ansatz. We apply this to the massive Gross-Neveu model in one spatial dimension to illustrate the algorithm, but we believe the same algorithm with slight modifications can be used to simulate any one-dimensional massive Fermionic QFT. In the case where the number of particle species is one, our algorithm can prepare particle states using $O\left( \epsilon^{-3.23\ldots}\right)$ gates, which is much faster than previously known results, namely $O\left(\epsilon^{-8-o\left(1\right)}\right)$. Furthermore, unlike previous methods which were based on adiabatic state preparation, the method given here should be able to simulate quantum phases unconnected to the free theory.
  • PDF
    Noise rates in quantum computing experiments have dropped dramatically, but reliable qubits remain precious. Fault-tolerance schemes with minimal qubit overhead are therefore essential. We introduce fault-tolerant error-correction procedures that use only two ancilla qubits. The procedures are based on adding "flags" to catch the faults that can lead to correlated errors on the data. They work for various distance-three codes. In particular, our scheme allows one to test the [[5,1,3]] code, the smallest error-correcting code, using only seven qubits total. Our techniques also apply to the [[7,1,3]] and [[15,7,3]] Hamming codes, thus allowing one to protect seven encoded qubits on a device with only 17 physical qubits.
  • PDF
    We present two techniques that can greatly reduce the number of gates required for ground state preparation in quantum simulations. The first technique is based on the observation that, to prepare the ground state of some Hamiltonian, it is not necessary to implement the time-evolution operator: any unitary operator which is a function of the Hamiltonian will do. We propose one such unitary operator which can be implemented exactly, circumventing any Taylor or Trotter approximation errors. The second technique is tailored to lattice models, and is targeted at reducing the use of generic single-qubit rotations, which are very expensive to produce fault-tolerantly by distillation and synthesis. In particular, the number of generic single-qubit rotations used by our method scales with the number of parameters in the Hamiltonian, in contrast with the growth proportional to the number of lattice sites required by other techniques.
  • PDF
    We present two particular decoding procedures for reconstructing a quantum state from the Hawking radiation in the Hayden-Preskill thought experiment. We work in an idealized setting and represent the black hole and its entangled partner by $n$ EPR pairs. The first procedure teleports the state thrown into the black hole to an outside observer by post-selecting on the condition that a sufficient number of EPR pairs remain undisturbed. The probability of this favorable event scales as $1/d_{A}^2$, where $d_A$ is the Hilbert space dimension for the input state. The second procedure is deterministic and combines the previous idea with Grover's search. The decoding complexity is $\mathcal{O}(d_{A}\mathcal{C})$ where $\mathcal{C}$ is the size of the quantum circuit implementing the unitary evolution operator $U$ of the black hole. As with the original (non-constructive) decoding scheme, our algorithms utilize scrambling, where the decay of out-of-time-order correlators (OTOCs) guarantees faithful state recovery.
  • PDF
    We study how well topological quantum codes can tolerate coherent noise caused by systematic unitary errors such as unwanted $Z$-rotations. Our main result is an efficient algorithm for simulating quantum error correction protocols based on the 2D surface code in the presence of coherent errors. The algorithm has runtime $O(n^2)$, where $n$ is the number of physical qubits. It allows us to simulate systems with more than one thousand qubits and obtain the first error threshold estimates for several toy models of coherent noise. Numerical results are reported for storage of logical states subject to $Z$-rotation errors and for logical state preparation with general $SU(2)$ errors. We observe that for large code distances the effective logical-level noise is well-approximated by random Pauli errors even though the physical-level noise is coherent. Our algorithm works by mapping the surface code to a system of Majorana fermions.
  • PDF
    A well-known result of Gottesman and Knill states that Clifford circuits - i.e. circuits composed of only CNOT, Hadamard, and $\pi/4$ phase gates - are efficiently classically simulable. We show that in contrast, "conjugated Clifford circuits" (CCCs) - where one additionally conjugates every qubit by the same one-qubit gate U - can perform hard sampling tasks. In particular, we fully classify the computational power of CCCs by showing that essentially any non-Clifford conjugating unitary U can give rise to sampling tasks which cannot be simulated classically to constant multiplicative error, unless the polynomial hierarchy collapses. Furthermore, we show that this hardness result can be extended to allow for the more realistic model of constant additive error, under a plausible complexity-theoretic conjecture.
  • PDF
    We show that measuring pairs of qubits in the Bell basis can be used to obtain a simple quantum algorithm for efficiently identifying an unknown stabilizer state of n qubits. The algorithm uses O(n) copies of the input state and fails with exponentially small probability.
  • PDF
    The Harrow-Hassidim-Lloyd (HHL) quantum algorithm for sampling from the solution of a linear system provides an exponential speed-up over its classical counterpart. The problem of solving a system of linear equations has a wide scope of applications, and thus HHL constitutes an important algorithmic primitive. In these notes, we present the HHL algorithm and its improved versions in detail, including explanations of the constituent subroutines. More specifically, we discuss various quantum subroutines such as quantum phase estimation and amplitude amplification, as well as the important question of loading data into a quantum computer, via quantum RAM. The improvements to the original algorithm exploit variable-time amplitude amplification as well as a method for implementing linear combinations of unitary operations (LCUs) based on a decomposition of the operators using Fourier and Chebyshev series. Finally, we discuss a linear solver based on the quantum singular value estimation (QSVE) subroutine.
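
    The linear-algebra identity at the heart of HHL can be checked classically in a few lines: expand $b$ in the eigenbasis of a Hermitian $A$, divide each coefficient by its eigenvalue, and recover the solution of $Ax = b$. This is a numerical illustration of the spectral step, not the quantum circuit.

    ```python
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])            # Hermitian system matrix
    b = np.array([1.0, 0.0])

    lam, U = np.linalg.eigh(A)            # A = U diag(lam) U^dagger
    beta = U.conj().T @ b                 # coefficients of b in the eigenbasis
    x = U @ (beta / lam)                  # eigenvalue inversion, as in HHL
    print(np.allclose(x, np.linalg.solve(A, b)))   # True
    ```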
  • PDF
    We study the problem of approximating a quantum channel by one with as few Kraus operators as possible (in the sense that, for any input state, the output states of the two channels should be close to one another). Our main result is that any quantum channel mapping states on some input Hilbert space $\mathrm{A}$ to states on some output Hilbert space $\mathrm{B}$ can be compressed into one with order $d\log d$ Kraus operators, where $d=\max(|\mathrm{A}|,|\mathrm{B}|)$, hence much less than $|\mathrm{A}||\mathrm{B}|$. In the case where the channel's outputs are all very mixed, this can be improved to order $d$. We discuss the optimality of this result as well as some consequences.
  • PDF
    We study thermal states of strongly interacting quantum spin chains and prove that those can be represented in terms of convex combinations of matrix product states. Apart from revealing new features of the entanglement structure of Gibbs states, our results provide a theoretical justification for the use of White's algorithm of minimally entangled typical thermal states. Furthermore, we shed new light on time-dependent matrix product state algorithms which yield hydrodynamical descriptions of the underlying dynamics.
  • PDF
    We give precise quantum resource estimates for Shor's algorithm to compute discrete logarithms on elliptic curves over prime fields. The estimates are derived from a simulation of a Toffoli gate network for controlled elliptic curve point addition, implemented within the framework of the quantum computing software tool suite LIQ$Ui|\rangle$. We determine circuit implementations for reversible modular arithmetic, including modular addition, multiplication and inversion, as well as reversible elliptic curve point addition. We conclude that elliptic curve discrete logarithms on an elliptic curve defined over an $n$-bit prime field can be computed on a quantum computer with at most $9n + 2\lceil\log_2(n)\rceil+10$ qubits using a quantum circuit of at most $448 n^3 \log_2(n) + 4090 n^3$ Toffoli gates. We are able to classically simulate the Toffoli networks corresponding to the controlled elliptic curve point addition as the core piece of Shor's algorithm for the NIST standard curves P-192, P-224, P-256, P-384 and P-521. Our approach allows gate-level comparisons to recent resource estimates for Shor's factoring algorithm. The results also support estimates given earlier by Proos and Zalka and indicate that, for current parameters at comparable classical security levels, the number of qubits required to tackle elliptic curves is less than for attacking RSA, suggesting that indeed ECC is an easier target than RSA.
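
    Plugging the abstract's closed-form counts into the NIST curve sizes it mentions is a one-liner:

    ```python
    # Evaluate the stated qubit and Toffoli counts for n-bit prime fields.
    import math

    def qubits(n):
        return 9 * n + 2 * math.ceil(math.log2(n)) + 10

    def toffolis(n):
        return 448 * n ** 3 * math.log2(n) + 4090 * n ** 3

    for n in [192, 224, 256, 384, 521]:   # P-192 ... P-521
        print(f"P-{n}: {qubits(n)} qubits, {toffolis(n):.2e} Toffoli gates")
    ```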
  • PDF
    Suppose a large scale quantum computer becomes available over the Internet. Could we delegate universal quantum computations to this server, using only classical communication between client and server, in a way that is information-theoretically blind (i.e., the server learns nothing about the input apart from its size, with no cryptographic assumptions required)? In this paper we give strong indications that the answer is no. This contrasts with the situation where quantum communication between client and server is allowed --- where we know that such information-theoretically blind quantum computation is possible. It also contrasts with the case where cryptographic assumptions are allowed: there again, it is now known that there are quantum analogues of fully homomorphic encryption. In more detail, we observe that, if there exist information-theoretically secure classical schemes for performing universal quantum computations on encrypted data, then we get unlikely containments between complexity classes, such as ${\sf BQP} \subset {\sf NP/poly}$. Moreover, we prove that having such schemes for delegating quantum sampling problems, such as Boson Sampling, would lead to a collapse of the polynomial hierarchy. We then consider encryption schemes which allow one round of quantum communication and polynomially many rounds of classical communication, yielding a generalization of blind quantum computation. We give a complexity theoretic upper bound, namely ${\sf QCMA/qpoly} \cap {\sf coQCMA/qpoly}$, on the types of functions that admit such a scheme. This upper bound then lets us show that, under plausible complexity assumptions, such a protocol is no more useful than classical schemes for delegating ${\sf NP}$-hard problems to the server. Lastly, we comment on the implications of these results for the prospect of verifying a quantum computation through classical interaction with the server.
  • PDF
    This PhD thesis investigates homological quantum codes derived from curved and higher-dimensional geometries. In the first part we consider closed surfaces with constant negative curvature. We show how such surfaces can be constructed and enumerate all quantum codes derived from them which have fewer than 10,000 physical qubits. For codes that are extremal in a certain sense we perform numerical simulations to determine the value of their threshold. Furthermore, we give evidence that these codes can be used for storage that is more overhead-efficient than the surface code by orders of magnitude. We also show how to read and write the encoded qubits while keeping their connectivity low. In the second part we consider codes in which qubits are laid out according to a four-dimensional geometry. Such codes allow for much simpler decoding schemes compared to codes which are two-dimensional. In particular, measurements do not necessarily have to be repeated to obtain reliable information about the error, and the classical hardware performing the error correction is greatly simplified. We perform numerical simulations to analyze the performance of these codes using decoders based on local updates. We also introduce a novel decoder based on techniques from machine learning and image recognition to decode four-dimensional codes.
  • PDF
    In this work we formulate thermodynamics as an exclusive consequence of information conservation. The framework can be applied to the most general situations, beyond the traditional assumptions in thermodynamics, where systems and thermal baths could be quantum, of arbitrary sizes, and could even possess inter-system correlations. Further, it does not require an a priori predetermined temperature associated with a thermal bath, a notion which does not make much sense for finite-size cases. Importantly, the thermal baths and systems are not treated differently here; rather, both are considered on an equal footing. This leads us to introduce a "temperature"-independent formulation of thermodynamics. We rely on the fact that, for a given amount of information, measured by the von Neumann entropy, any system can be transformed to a state that possesses minimal energy. This state is known as a completely passive state, which acquires a Boltzmann-Gibbs canonical form with an intrinsic temperature. We introduce the notions of bound and free energy and use them to quantify heat and work, respectively. We explicitly use information conservation as the fundamental principle of nature, and develop universal notions of equilibrium, heat and work, universal fundamental laws of thermodynamics, and Landauer's principle that connects thermodynamics and information. We demonstrate that the maximum efficiency of a quantum engine with a finite bath is in general different from and smaller than that of an ideal Carnot engine. We introduce a resource theoretic framework for our intrinsic-temperature based thermodynamics, within which we address the problem of work extraction and inter-state transformations. We also extend the framework to the cases of multiple conserved quantities.
  • PDF
    These are notes on some entanglement properties of quantum field theory, aiming to make accessible a variety of ideas that are known in the literature. The main goal is to explain how to deal with entanglement when -- as in quantum field theory -- it is a property of the algebra of observables and not just of the states.
  • PDF
    Thermodynamics is traditionally constrained to the study of macroscopic systems whose energy fluctuations are negligible compared to their average energy. Here, we push beyond this thermodynamic limit by developing a mathematical framework to rigorously address the problem of thermodynamic transformations of finite-size systems. More formally, we analyse state interconversion under thermal operations and between arbitrary energy-incoherent states. We find precise relations between the optimal rate at which interconversion can take place and the desired infidelity of the final state when the system size is sufficiently large. These so-called second-order asymptotics provide a bridge between the extreme cases of single-shot thermodynamics and the asymptotic limit of infinitely large systems. We illustrate the utility of our results with several examples. We first show how thermodynamic cycles are affected by irreversibility due to finite-size effects. We then provide a precise expression for the gap between the distillable work and work of formation that opens away from the thermodynamic limit. Finally, we explain how the performance of a heat engine gets affected when one of the heat baths it operates between is finite. We find that while perfect work cannot generally be extracted at Carnot efficiency, there are conditions under which these finite-size effects vanish. In deriving our results we also clarify relations between different notions of approximate majorisation.
  • PDF
    Quantum information technologies, and intelligent learning systems, are both emergent technologies that will likely have a transforming impact on our society. The respective underlying fields of research -- quantum information (QI) versus machine learning (ML) and artificial intelligence (AI) -- have their own specific challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question to what extent these fields can learn and benefit from each other. Quantum machine learning (QML) explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Recently, we have witnessed breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups in ML, critical in our "big data" world. Conversely, ML already permeates cutting-edge technologies, and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical ML optimization used in quantum experiments, quantum enhancements have also been demonstrated for interactive learning, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of AI for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement, researchers have also broached the fundamental issue of quantum generalizations of ML/AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is described by quantum mechanics. In this review, we describe the main ideas, recent developments, and progress in a broad spectrum of research investigating machine learning and artificial intelligence in the quantum domain.
  • PDF
    Surface codes are the leading family of quantum error-correcting codes. Here, we explore the properties of the 3D surface code. We develop a new picture for visualising 3D surface codes which can be used to analyse the properties of stacks of three 3D surface codes. We then use our new picture to prove that the $CCZ$ gate is transversal in 3D surface codes. We also generalise the techniques of lattice surgery to 3D surface codes. Finally, we introduce a hybrid 2D/3D surface code architecture which supports universal quantum computation without magic state distillation.
  • PDF
    Recent work on quantum machine learning has demonstrated that quantum computers can offer dramatic improvements over classical devices for data mining, prediction and classification. However, less is known about the advantages using quantum computers may bring in the more general setting of reinforcement learning, where learning is achieved via interaction with a task environment that provides occasional rewards. Reinforcement learning can incorporate data-analysis-oriented learning settings as special cases, but also includes more complex situations where, e.g., reinforcing feedback is delayed. In a few recent works, Grover-type amplification has been utilized to construct quantum agents that achieve up-to-quadratic improvements in learning efficiency. These encouraging results have left open the key question of whether super-polynomial improvements in learning times are possible for genuine reinforcement learning problems, that is, problems that go beyond the other more restricted learning paradigms. In this work, we provide a family of such genuine reinforcement learning tasks. We construct quantum-enhanced learners which learn super-polynomially, and even exponentially, faster than any classical reinforcement learning model, and we discuss the potential impact our results may have on future technologies.
  • PDF
    We show that the maximum success probability of players sharing quantum entanglement in a two-player game with classical questions of logarithmic length and classical answers of constant length is NP-hard to approximate to within constant factors. As a corollary, the inclusion $\mathrm{NEXP}\subseteq\mathrm{MIP}^*$, first shown in [IV12] with three provers, holds with two provers only. The proof is based on a simpler, improved analysis of the low-degree test Raz and Safra (STOC'97) against two entangled provers.
  • PDF
    Brandão and Svore very recently gave quantum algorithms for approximately solving semidefinite programs, which in some regimes are faster than the best-possible classical algorithms in terms of the dimension $n$ of the problem and the number $m$ of constraints, but worse in terms of various other parameters. In this paper we improve their algorithms in several ways, getting better dependence on those other parameters. To this end we develop new techniques for quantum algorithms, for instance a general way to efficiently implement smooth functions of sparse Hamiltonians, and a generalized minimum-finding procedure. We also show limits on this approach to quantum SDP-solvers, for instance for combinatorial optimization problems that have a lot of symmetry. Finally, we prove some general lower bounds showing that in the worst case, the complexity of every quantum LP-solver (and hence also SDP-solver) has to scale linearly with $mn$ when $m\approx n$, which is the same as classical.

Recent comments

Joel Wallman Apr 18 2018 13:34 UTC

A very nice approach! Could you clarify the conclusion a little bit though? The aspirational goal for a quantum benchmark is to test how well we approximate a *specific* representation of a group (up to similarity transforms), whereas what your approach demonstrates is that without additional knowle

...(continued)
serfati philippe Mar 29 2018 14:07 UTC

see my 2 papers on direction of vorticity (nov1996 + feb1999) = https://www.researchgate.net/profile/Philippe_Serfati (published author, see also mendeley, academia.edu, orcid etc)

serfati philippe Mar 29 2018 13:34 UTC

see my 4 papers, 1998-1999, on contact and superposed vortex patches, cusps (and e.g. splashes), corners, generalized ones on R^n and (ir)regular ones = http://www.researchgate.net/profile/Philippe_Serfati/ (published author).

Luis Cruz Mar 16 2018 15:34 UTC

Related Work:

- [Performance-Based Guidelines for Energy Efficient Mobile Applications](http://ieeexplore.ieee.org/document/7972717/)
- [Leafactor: Improving Energy Efficiency of Android Apps via Automatic Refactoring](http://ieeexplore.ieee.org/document/7972807/)

Dan Elton Mar 16 2018 04:36 UTC

Comments are appreciated. Message me here or on twitter @moreisdifferent

Code is open source and available at:
[https://github.com/delton137/PIMD-F90][1]

[1]: https://github.com/delton137/PIMD-F90

Danial Dervovic Mar 01 2018 12:08 UTC

Hello again Māris, many thanks for your patience. Your comments and questions have given me much food for thought, and scope for an amended version of the paper -- please see my responses below.

Please if any of the authors of [AST17 [arXiv:1712.01609](https://arxiv.org/abs/1712.01609)] have any fu

...(continued)

Beni Yoshida Feb 13 2018 19:53 UTC

This is not a direct answer to your question, but may give some intuition to formulate the problem in a more precise language. (And I simplify the discussion drastically). Consider a static slice of an empty AdS space (just a hyperbolic space) and imagine an operator which creates a particle at some

...(continued)
Abhinav Deshpande Feb 10 2018 15:42 UTC

I see. Yes, the epsilon ball issue seems to be a thorny one in the prevalent definition, since the gate complexity to reach a target state from any of a fixed set of initial states depends on epsilon, and not in a very nice way (I imagine that it's all riddled with discontinuities). It would be inte

...(continued)
Elizabeth Crosson Feb 10 2018 05:49 UTC

Thanks for the correction Abhinav, indeed I meant that the complexity of |psi(t)> grows linearly with t.

Producing an arbitrary state |phi> exactly is also too demanding for the circuit model, by the well-known argument that given any finite set of gates, the set of states that can be reached i

...(continued)