- We prove that constant-depth quantum circuits are more powerful than their classical counterparts. To this end we introduce a non-oracular version of the Bernstein-Vazirani problem which we call the 2D Hidden Linear Function problem. An instance of the problem is specified by a quadratic form q that maps n-bit strings to integers modulo four. The goal is to identify a linear boolean function which describes the action of q on a certain subset of n-bit strings. We prove that any classical probabilistic circuit composed of bounded fan-in gates that solves the 2D Hidden Linear Function problem with high probability must have depth logarithmic in n. In contrast, we show that this problem can be solved with certainty by a constant-depth quantum circuit composed of one- and two-qubit gates acting locally on a two-dimensional grid.
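
  The task itself is easy to state. Purely as an illustration of what is being asked (not of the constant-depth circuits or the lower bound), here is a brute-force classical solver for toy instances, assuming the standard formulation in which $q(x) = (2\sum_{i<j} A_{ij} x_i x_j + b\cdot x) \bmod 4$ for a binary symmetric matrix $A$ and vector $b$, and in which the relevant subset is the binary null space of $A$, on which $q(x) = 2\, z\cdot x \bmod 4$ for the hidden $z$; the example graph is ours.

  ```python
  import itertools
  import numpy as np

  def q(A, b, x):
      """Quadratic form q(x) = (2 * sum_{i<j} A_ij x_i x_j + b.x) mod 4."""
      x = np.asarray(x)
      quad = sum(A[i][j] * x[i] * x[j]
                 for i in range(len(x)) for j in range(i + 1, len(x)))
      return (2 * quad + int(b @ x)) % 4

  def hidden_linear_function(A, b):
      """Brute force for toy sizes: find z with q(x) = 2 z.x (mod 4) on the null space of A."""
      n = len(b)
      A, b = np.asarray(A), np.asarray(b)
      L = [x for x in itertools.product((0, 1), repeat=n)
           if not np.any(A @ np.array(x) % 2)]          # binary null space of A
      for z in itertools.product((0, 1), repeat=n):
          if all(q(A, b, x) == 2 * (np.dot(z, x) % 2) for x in L):
              return z
      return None

  # Tiny example: a path graph on 3 vertices.
  A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
  b = [1, 0, 1]
  print(hidden_linear_function(A, b))   # e.g. (0, 0, 1)
  ```

  The point of the sketch is only to pin down the problem statement; the paper's contribution is that a constant-depth, geometrically local quantum circuit finds a valid $z$ while constant-depth classical circuits cannot.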
- We give semidefinite program (SDP) quantum solvers with an exponential speed-up over classical ones. Specifically, we consider SDP instances with $m$ constraint matrices of dimension $n$, each of rank at most $r$, and assume that the input matrices of the SDP are given as quantum states (after a suitable normalization). Then we show there is a quantum algorithm that solves the SDP feasibility problem with accuracy $\epsilon$ by using $\sqrt{m}\log m\cdot\text{poly}(\log n,r,\epsilon^{-1})$ quantum gates. The dependence on $n$ provides an exponential improvement over the work of Brandão and Svore and the work of van Apeldoorn et al., and demonstrates an exponential quantum speed-up when $m$ and $r$ are small. We apply the SDP solver to the problem of learning a good description of a quantum state with respect to a set of measurements: Given $m$ measurements and a supply of copies of an unknown state $\rho$, we show we can find in time $\sqrt{m}\log m\cdot\text{poly}(\log n,r,\epsilon^{-1})$ a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as $\rho$ on the $m$ measurements up to error $\epsilon$. The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data considered in Jaynes' principle. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians with a poly-logarithmic dependence on its dimension based on the techniques developed in quantum principal component analysis, which could be of independent interest.
- Jan 19 2017 quant-ph arXiv:1701.05182v3: Quantum many-body systems exhibit an extremely diverse range of phases and physical phenomena. Here, we prove that the entire physics of any other quantum many-body system is replicated in certain simple, "universal" spin-lattice models. We first characterise precisely what it means for one quantum many-body system to replicate the entire physics of another. We then show that certain very simple spin-lattice models are universal in this very strong sense. Examples include the Heisenberg and XY models on a 2D square lattice (with non-uniform coupling strengths). We go on to fully classify all two-qubit interactions, determining which are universal and which can only simulate more restricted classes of models. Our results put the practical field of analogue Hamiltonian simulation on a rigorous footing and take a significant step towards justifying why error correction may not be required for this application of quantum information technology.
- May 03 2017 quant-ph arXiv:1705.00686v1: It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of photons in linear optics, which has sparked interest as a rapid way to demonstrate this quantum supremacy. Photon statistics are governed by intractable matrix functions known as permanents, which suggests that sampling from the distribution obtained by injecting photons into a linear-optical network could be solved more quickly by a photonic experiment than by a classical computer. The contrast between the apparently awesome challenge faced by any classical sampling algorithm and the apparently near-term experimental resources required for a large boson sampling experiment has raised expectations that quantum supremacy by boson sampling is on the horizon. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. While the largest boson sampling experiments reported so far are with 5 photons, our classical algorithm, based on Metropolised independence sampling (MIS), allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. We argue that the impact of experimental photon losses means that demonstrating quantum supremacy by boson sampling would require a step change in technology.
- Jan 05 2017 quant-ph arXiv:1701.01062v1: An ideal system of $n$ qubits has $2^n$ dimensions. This exponential grants power, but also hinders characterizing the system's state and dynamics. We study a new problem: the qubits in a physical system might not be independent. They can "overlap," in the sense that an operation on one qubit slightly affects the others. We show that allowing for slight overlaps, $n$ qubits can fit in just polynomially many dimensions. (Defined in a natural way, all pairwise overlaps can be $\leq \epsilon$ in $n^{O(1/\epsilon^2)}$ dimensions.) Thus, even before considering issues like noise, a real system of $n$ qubits might inherently lack any potential for exponential power. On the other hand, we also provide an efficient test to certify exponential dimensionality. Unfortunately, the test is sensitive to noise. It is important to devise more robust tests on the arrangements of qubits in quantum devices.
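
  The packing phenomenon rests on the fact that high-dimensional spaces contain very many nearly orthogonal directions. A quick numerical illustration of that intuition (our own toy check, not the paper's construction): random unit vectors in dimension $d$ have pairwise overlaps of order $1/\sqrt{d}$.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  for d in (16, 256, 4096):
      V = rng.normal(size=(50, d))                  # 50 random directions in R^d
      V /= np.linalg.norm(V, axis=1, keepdims=True)
      G = np.abs(V @ V.T)                           # pairwise |<v_i, v_j>|
      np.fill_diagonal(G, 0.0)
      print(d, G.max())                             # max overlap shrinks roughly like 1/sqrt(d)
  ```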
- Mar 03 2017 quant-ph arXiv:1703.00454v2: Recent work has shown that quantum computers can compute scattering probabilities in massive quantum field theories, with a run time that is polynomial in the number of particles, their energy, and the desired precision. Here we study a closely related quantum field-theoretical problem: estimating the vacuum-to-vacuum transition amplitude, in the presence of spacetime-dependent classical sources, for a massive scalar field theory in (1+1) dimensions. We show that this problem is BQP-hard; in other words, its solution enables one to solve any problem that is solvable in polynomial time by a quantum computer. Hence, the vacuum-to-vacuum amplitude cannot be accurately estimated by any efficient classical algorithm, even if the field theory is very weakly coupled, unless BQP=BPP. Furthermore, the corresponding decision problem can be solved by a quantum computer in a time scaling polynomially with the number of bits needed to specify the classical source fields, and this problem is therefore BQP-complete. Our construction can be regarded as an idealized architecture for a universal quantum computer in a laboratory system described by massive $\phi^4$ theory coupled to classical spacetime-dependent sources.
- Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets are motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed-up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed.
- This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning from membership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples.
- Aug 30 2017 quant-ph arXiv:1708.08474v2: We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is in fact at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
- We consider a generic framework of optimization algorithms based on gradient descent. We develop a quantum algorithm that computes the gradient of a multi-variate real-valued function $f:\mathbb{R}^d\rightarrow \mathbb{R}$ by evaluating it at only a logarithmic number of points in superposition. Our algorithm is an improved version of Jordan's gradient calculation algorithm, providing an approximation of the gradient $\nabla f$ with quadratically better dependence on the evaluation accuracy of $f$, for an important class of smooth functions. Furthermore, we show that most objective functions arising from quantum optimization procedures satisfy the necessary smoothness conditions, hence our algorithm provides a quadratic improvement in the complexity of computing their gradient. We also show that in a continuous phase-query model, our gradient computation algorithm has optimal query complexity up to poly-logarithmic factors, for a particular class of smooth functions. Moreover, we show that for low-degree multivariate polynomials our algorithm can provide exponential speedups compared to Jordan's algorithm in terms of the dimension $d$. One of the technical challenges in applying our gradient computation procedure for quantum optimization problems is the need to convert between a probability oracle (which is common in quantum optimization procedures) and a phase oracle (which is common in quantum algorithms) of the objective function $f$. We provide efficient subroutines to perform this delicate interconversion between the two types of oracles incurring only a logarithmic overhead, which might be of independent interest. Finally, using these tools we improve the runtime of prior approaches for training quantum auto-encoders, variational quantum eigensolvers, and quantum approximate optimization algorithms (QAOA).
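
  The mechanism behind Jordan-style gradient estimation is that, for a locally linear $f$, writing function values into phases turns the gradient into a frequency which a Fourier transform reads out in one shot. A one-dimensional classical emulation of that read-out (our own toy; all parameter values are arbitrary):

  ```python
  import numpy as np

  N = 256                       # grid points (superposed on a quantum device)
  delta = 1e-4                  # grid spacing around the evaluation point x0
  x0, g_true = 0.7, 3.21        # g_true is the unknown slope we want to recover
  scale = 1000.0                # phase scaling: peak frequency bin ~ scale * g * delta * N

  f = lambda x: 5.0 + g_true * (x - x0) + 0.05 * (x - x0) ** 2   # smooth, locally nearly linear
  k = np.arange(N)
  phases = np.exp(2j * np.pi * scale * f(x0 + k * delta))        # function values as phases
  m_hat = np.argmax(np.abs(np.fft.fft(phases)))                  # dominant frequency bin
  g_hat = m_hat / (scale * delta * N)
  print(g_hat)                                                   # ~3.20, close to g_true
  ```

  On a quantum computer the phases are written by a single (phase-oracle) query over the whole superposed grid and the FFT becomes the quantum Fourier transform, which is where the query savings over classical finite differences come from.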
- May 09 2017 quant-ph arXiv:1705.02817v2: We propose an efficient scheme for verifying quantum computations in the `high complexity' regime, i.e. beyond the remit of classical computers. Previously proposed schemes remarkably provide confidence against arbitrarily malicious adversarial behaviour in the malfunctioning of the quantum computing device. Our scheme is not secure against arbitrarily adversarial behaviour, but may nevertheless be acceptable in many practical situations. With this concession we gain in manifest simplicity and transparency, and in contrast to previous schemes, our verifier is entirely classical. It is based on the fact that adaptive Clifford circuits on general product state inputs provide universal quantum computation, while the same processes without adaptation are always classically efficiently simulatable.
- We construct a linear system non-local game which can be played perfectly using a limit of finite-dimensional quantum strategies, but which cannot be played perfectly on any finite-dimensional Hilbert space, or even with any tensor-product strategy. In particular, this shows that the set of (tensor-product) quantum correlations is not closed. The constructed non-local game provides another counterexample to the "middle" Tsirelson problem, with a shorter proof than our previous paper (though at the loss of the universal embedding theorem). We also show that it is undecidable to determine if a linear system game can be played perfectly with a finite-dimensional strategy, or a limit of finite-dimensional quantum strategies.
- We study the classical complexity of the exact Boson Sampling problem where the objective is to produce provably correct random samples from a particular quantum mechanical distribution. The computational framework was proposed by Aaronson and Arkhipov in 2011 as an attainable demonstration of `quantum supremacy', that is, a practical quantum computing experiment able to produce output at a speed beyond the reach of classical (that is, non-quantum) computer hardware. Since its introduction, Boson Sampling has been the subject of intense international research in the world of quantum computing. On the face of it, the problem is challenging for classical computation. Aaronson and Arkhipov show that exact Boson Sampling is not efficiently solvable by a classical computer unless $P^{\#P} = BPP^{NP}$ and the polynomial hierarchy collapses to the third level. The fastest known exact classical algorithm for the standard Boson Sampling problem takes $O({m + n -1 \choose n} n 2^n )$ time to produce samples for a system with input size $n$ and $m$ output modes, making it infeasible for anything but the smallest values of $n$ and $m$. We give an algorithm that is much faster, running in $O(n 2^n + \operatorname{poly}(m,n))$ time and $O(m)$ additional space. The algorithm is simple to implement and has low constant factor overheads. As a consequence our classical algorithm is able to solve the exact Boson Sampling problem for system sizes far beyond current photonic quantum computing experimentation, thereby significantly reducing the likelihood of achieving near-term quantum supremacy in the context of Boson Sampling.
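
  The per-sample work in algorithms of this kind boils down to evaluating matrix permanents. For concreteness, a plain implementation of Ryser's $O(n^2 2^n)$ formula (our own sketch; a Gray-code ordering of the subsets brings the cost down to $O(n\, 2^n)$):

  ```python
  import numpy as np

  def permanent_ryser(A):
      """Ryser's formula: perm(A) = (-1)^n * sum over non-empty column subsets S of
      (-1)^|S| * prod_i (sum_{j in S} A[i, j])."""
      A = np.asarray(A)
      n = A.shape[0]
      total = 0.0 + 0.0j
      for mask in range(1, 1 << n):
          cols = [j for j in range(n) if (mask >> j) & 1]
          total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
      return (-1) ** n * total

  # Sanity check on the 3x3 all-ones matrix: permanent = 3! = 6.
  print(permanent_ryser(np.ones((3, 3))))   # (6+0j)
  ```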
- In the near future, there will likely be special-purpose quantum computers with 40-50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible. First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Quantum AI group at Google. We show that there's a natural hardness assumption, which has nothing to do with sampling, yet implies that no efficient classical algorithm can pass a statistical test that the quantum sampling procedure's outputs do pass. Compared to previous work, the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled. Second, in an attempt to refute our hardness assumption, we give a new algorithm for simulating a general quantum circuit with n qubits and m gates in polynomial space and m^O(n) time. We then discuss why this and other known algorithms fail to refute our assumption. Third, resolving an open problem of Aaronson and Arkhipov, we show that any strong quantum supremacy theorem--of the form "if approximate quantum sampling is classically easy, then PH collapses"--must be non-relativizing. Fourth, refuting a conjecture by Aaronson and Ambainis, we show that the Fourier Sampling problem achieves a constant versus linear separation between quantum and randomized query complexities. Fifth, we study quantum supremacy relative to oracles in P/poly. Previous work implies that, if OWFs exist, then quantum supremacy is possible relative to such oracles. We show that some assumption is needed: if SampBPP=SampBQP and NP is in BPP, then quantum supremacy is impossible relative to such oracles.
- Oct 10 2017 quant-ph arXiv:1710.02625v2: We construct a Hamiltonian whose dynamics simulate the dynamics of every other Hamiltonian up to exponentially long times in the system size. The Hamiltonian is time-independent, local, one-dimensional, and translation invariant. As a consequence, we show (under plausible computational complexity assumptions) that the circuit complexity of the unitary dynamics under this Hamiltonian grows steadily with time up to an exponential value in system size. This result makes progress on a recent conjecture by Susskind, in the context of the AdS/CFT correspondence, that the time evolution of the thermofield double state of two conformal field theories with a holographic dual has a circuit complexity increasing linearly in time, up to exponential time.
- Aug 25 2017 quant-ph cond-mat.dis-nn arXiv:1708.07131v1: Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D string-like and 2D sheet-like logical operators to be $p^{(1)}_\mathrm{3DCC} \simeq 1.9\%$ and $p^{(2)}_\mathrm{3DCC} \simeq 27.6\%$. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the 4- and 6-body random coupling Ising models.
- We give an introduction to the theory of multi-partite entanglement. We begin by describing the "coordinate system" of the field: Are we dealing with pure or mixed states, with single or multiple copies, what notion of "locality" is being used, do we aim to classify states according to their "type of entanglement" or to quantify it? Building on the general theory of multi-partite entanglement - to the extent that it has been achieved - we turn to explaining important classes of multi-partite entangled states, including matrix product states, stabilizer and graph states, bosonic and fermionic Gaussian states, addressing applications in condensed matter theory. We end with a brief discussion of various applications that rely on multi-partite entangled states: quantum networks, measurement-based quantum computing, non-locality, and quantum metrology.
- One of the main milestones in quantum information science is to realize quantum devices that exhibit an exponential computational advantage over classical ones without being universal quantum computers, a state of affairs dubbed quantum speedup, or sometimes "quantum computational supremacy". The known schemes heavily rely on mathematical assumptions that are plausible but unproven, prominently results on anti-concentration of random prescriptions. In this work, we aim at closing the gap by proving two anti-concentration theorems. Compared to the few other known such results, these results give rise to comparably simple, physically meaningful and resource-economical schemes showing a quantum speedup in one and two spatial dimensions. At the heart of the analysis are tools of unitary designs and random circuits that allow us to conclude that universal random circuits anti-concentrate.
- We introduce the problem of *shadow tomography*: given an unknown $D$-dimensional quantum mixed state $\rho$, as well as known two-outcome measurements $E_{1},\ldots,E_{M}$, estimate the probability that $E_{i}$ accepts $\rho$, to within additive error $\varepsilon$, for each of the $M$ measurements. How many copies of $\rho$ are needed to achieve this, with high probability? Surprisingly, we give a procedure that solves the problem by measuring only $\widetilde{O}\left( \varepsilon^{-5}\cdot\log^{4} M\cdot\log D\right)$ copies. This means, for example, that we can learn the behavior of an arbitrary $n$-qubit state, on all accepting/rejecting circuits of some fixed polynomial size, by measuring only $n^{O\left( 1\right)}$ copies of the state. This resolves an open problem of the author, which arose from his work on private-key quantum money schemes, but which also has applications to quantum copy-protected software, quantum advice, and quantum one-way communication. Recently, building on this work, Brandão et al. have given a different approach to shadow tomography using semidefinite programming, which achieves a savings in computation time.
- Mar 28 2017 cond-mat.str-el quant-ph arXiv:1703.09188v2: Matrix Product Vectors form the appropriate framework to study and classify one-dimensional quantum systems. In this work, we develop the structure theory of Matrix Product Unitary operators (MPUs) which appear e.g. in the description of time evolutions of one-dimensional systems. We prove that all MPUs have a strict causal cone, making them Quantum Cellular Automata (QCAs), and derive a canonical form for MPUs which relates different MPU representations of the same unitary through a local gauge. We use this canonical form to prove an Index Theorem for MPUs which gives the precise conditions under which two MPUs are adiabatically connected, providing an alternative derivation to that of [Commun. Math. Phys. 310, 419 (2012), arXiv:0910.3675] for QCAs. We also discuss the effect of symmetries on the MPU classification. In particular, we characterize the tensors corresponding to MPUs that are invariant under conjugation, time reversal, or transposition. In the first case, we give a full characterization of all equivalence classes. Finally, we give several examples of MPUs possessing different symmetries.
- Sep 28 2017 quant-ph arXiv:1709.09622v1: We consider a problem we call StateIsomorphism: given two quantum states of n qubits, can one be obtained from the other by rearranging the qubit subsystems? Our main goal is to study the complexity of this problem, which is a natural quantum generalisation of the problem StringIsomorphism. We show that StateIsomorphism is at least as hard as GraphIsomorphism, and show that these problems have a similar structure by presenting evidence to suggest that StateIsomorphism is an intermediate problem for QCMA. In particular, we show that the complement of the problem, StateNonIsomorphism, has a two message quantum interactive proof system, and that this proof system can be made statistical zero-knowledge. We consider also StabilizerStateIsomorphism (SSI) and MixedStateIsomorphism (MSI), showing that the complement of SSI has a quantum interactive proof system that uses classical communication only, and that MSI is QSZK-hard.
- Sep 20 2017 quant-ph arXiv:1709.06218v1: In order to build a large scale quantum computer, one must be able to correct errors extremely fast. We design a fast decoding algorithm for topological codes to correct Pauli errors, erasures, and combinations of the two. Our algorithm has a worst case complexity of $O(n \alpha(n))$, where $n$ is the number of physical qubits and $\alpha$ is the inverse of Ackermann's function, which is very slowly growing. For all practical purposes, $\alpha(n) \leq 3$. We prove that our algorithm performs optimally for errors of weight up to $(d-1)/2$ and for loss of up to $d-1$ qubits, where $d$ is the minimum distance of the code. Numerically, we obtain a threshold of $9.9\%$ for the 2D toric code with perfect syndrome measurements and $2.6\%$ with faulty measurements.
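
  The $\alpha(n)$ in the bound is the signature of a union-find (disjoint-set) data structure with union by rank and path compression, which the decoder uses to grow and merge clusters of syndrome defects. A minimal sketch of that data structure alone (the decoder's cluster-growth and peeling logic is omitted):

  ```python
  class DisjointSet:
      """Union-find with path compression and union by rank:
      near-constant amortized cost per operation, O(n alpha(n)) overall."""

      def __init__(self, n):
          self.parent = list(range(n))
          self.rank = [0] * n

      def find(self, x):
          while self.parent[x] != x:
              self.parent[x] = self.parent[self.parent[x]]   # path compression (halving)
              x = self.parent[x]
          return x

      def union(self, x, y):
          rx, ry = self.find(x), self.find(y)
          if rx == ry:
              return
          if self.rank[rx] < self.rank[ry]:
              rx, ry = ry, rx
          self.parent[ry] = rx                               # attach smaller-rank root
          if self.rank[rx] == self.rank[ry]:
              self.rank[rx] += 1

  # Example: merging clusters of defects by index.
  ds = DisjointSet(10)
  ds.union(2, 3); ds.union(3, 7)
  print(ds.find(7) == ds.find(2))   # True
  ```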
- Mar 03 2017 quant-ph cond-mat.other arXiv:1703.00466v2: One of the main aims in the field of quantum simulation is to achieve a quantum speedup, often referred to as "quantum computational supremacy": the experimental realization of a quantum device that computationally outperforms classical computers. In this work, we show that one can devise versatile and feasible schemes of two-dimensional dynamical quantum simulators showing such a quantum speedup, building on intermediate problems involving non-adaptive measurement-based quantum computation. In each of the schemes, an initial product state is prepared, potentially involving an element of randomness as in disordered models, followed by a short-time evolution under a basic translationally invariant Hamiltonian with simple nearest-neighbor interactions and a mere sampling measurement in a fixed basis. The correctness of the final state preparation in each scheme is fully efficiently certifiable. We discuss experimental necessities and possible physical architectures, inspired by platforms of cold atoms in optical lattices and a number of others, as well as specific assumptions that enter the complexity-theoretic arguments. This work shows that benchmark settings exhibiting a quantum speedup may require little control in contrast to universal quantum computing. Thus, our proposal puts a convincing experimental demonstration of a quantum speedup within reach in the near term.
- Recent progress implies that a crossover between machine learning and quantum information processing benefits both fields. Traditional machine learning has dramatically improved the benchmarking and control of experimental quantum computing systems, including adaptive quantum phase estimation and designing quantum computing gates. On the other hand, quantum mechanics offers tantalizing prospects to enhance machine learning, ranging from reduced computational complexity to improved generalization performance. The most notable examples include quantum enhanced algorithms for principal component analysis, quantum support vector machines, and quantum Boltzmann machines. Progress has been rapid, fostered by demonstrations of midsized quantum optimizers which are predicted to soon outperform their classical counterparts. Further, we are witnessing the emergence of a physical theory pinpointing the fundamental and natural limitations of learning. Here we survey the cutting edge of this merger and list several open problems.
- Oct 17 2017 quant-ph arXiv:1710.05867v1: With the current rate of progress in quantum computing technologies, 50-qubit systems will soon become a reality. To assess, refine and advance the design and control of these devices, one needs a means to test and evaluate their fidelity. This in turn requires the capability of computing ideal quantum state amplitudes for devices of such sizes and larger. In this study, we present a new approach for this task that significantly extends the boundaries of what can be classically computed. We demonstrate our method by presenting results obtained from a calculation of the complete set of output amplitudes of a universal random circuit with depth 27 in a 2D lattice of $7 \times 7$ qubits. We further present results obtained by calculating an arbitrarily selected slice of $2^{37}$ amplitudes of a universal random circuit with depth 23 in a 2D lattice of $8 \times 7$ qubits. Such calculations were previously thought to be impossible due to impracticable memory requirements. Using the methods presented in this paper, the above simulations required 4.5 and 3.0 TB of memory, respectively, to store calculations, which is well within the limits of existing classical computers.
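
  A back-of-the-envelope comparison (our own arithmetic, not from the paper) shows why such partitioned calculations matter: storing a full state vector for the $7 \times 7$ lattice is out of the question even in single precision.

  ```python
  qubits = 49                           # the 7 x 7 lattice
  bytes_per_amplitude = 8               # single-precision complex numbers
  full_state_bytes = 2 ** qubits * bytes_per_amplitude
  print(full_state_bytes / 1e15, "PB")  # ~4.5 PB, about a thousand times the 4.5 TB quoted above
  ```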
- May 08 2017 quant-ph arXiv:1705.02329v1: Noise rates in quantum computing experiments have dropped dramatically, but reliable qubits remain precious. Fault-tolerance schemes with minimal qubit overhead are therefore essential. We introduce fault-tolerant error-correction procedures that use only two ancilla qubits. The procedures are based on adding "flags" to catch the faults that can lead to correlated errors on the data. They work for various distance-three codes. In particular, our scheme allows one to test the [[5,1,3]] code, the smallest error-correcting code, using only seven qubits total. Our techniques also apply to the [[7,1,3]] and [[15,7,3]] Hamming codes, thus allowing one to protect seven encoded qubits on a device with only 17 physical qubits.
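
  For reference, the [[5,1,3]] code mentioned here is generated by cyclic shifts of the weight-4 Pauli string XZZXI. A quick consistency check (our own snippet) that the four generators pairwise commute, using the rule that two Pauli strings commute iff the number of sites where they are both non-identity and different is even:

  ```python
  gens = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]   # stabilizer generators of the [[5,1,3]] code

  def commute(p, q):
      # Count sites where the single-qubit Paulis anticommute (both non-identity and different).
      anti = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
      return anti % 2 == 0

  print(all(commute(g, h) for g in gens for h in gens))   # True
  ```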
- Mar 02 2017 quant-ph arXiv:1703.00382v3: We provide $\mathrm{poly}\log$-sparse quantum codes for correcting the erasure channel arbitrarily close to the capacity. Specifically, we provide $[[n, k, d]]$ quantum stabilizer codes that correct for the erasure channel arbitrarily close to the capacity if the erasure probability is at least $0.33$, and with a generating set $\langle S_1, S_2, ... S_{n-k} \rangle$ such that $|S_i|\leq \log^{2+\zeta}(n)$ for all $i$ and for any $\zeta > 0$ with high probability. In this work we show that the result of Delfosse et al. is tight: one can construct capacity-approaching codes with weight almost $O(1)$.
- (Abridged abstract.) In this thesis we introduce new models of quantum computation to study the emergence of quantum speed-up in quantum computer algorithms. Our first contribution is a formalism of restricted quantum operations, named normalizer circuit formalism, based on algebraic extensions of the qubit Clifford gates (CNOT, Hadamard and $\pi/4$-phase gates): a normalizer circuit consists of quantum Fourier transforms (QFTs), automorphism gates and quadratic phase gates associated to a set $G$, which is either an abelian group or abelian hypergroup. Though Clifford circuits are efficiently classically simulable, we show that normalizer circuit models encompass Shor's celebrated factoring algorithm and the quantum algorithms for abelian Hidden Subgroup Problems. We develop classical-simulation techniques to characterize under which scenarios normalizer circuits provide quantum speed-ups. Finally, we devise new quantum algorithms for finding hidden hyperstructures. The results offer new insights into the source of quantum speed-ups for several algebraic problems. Our second contribution is an algebraic (group- and hypergroup-theoretic) framework for describing quantum many-body states and classically simulating quantum circuits. Our framework extends Gottesman's Pauli Stabilizer Formalism (PSF), wherein quantum states are written as joint eigenspaces of stabilizer groups of commuting Pauli operators: while the PSF is valid for qubit/qudit systems, our formalism can be applied to discrete- and continuous-variable systems, hybrid settings, and anyonic systems. These results enlarge the known families of quantum processes that can be efficiently classically simulated. This thesis also establishes a precise connection between Shor's quantum algorithm and the stabilizer formalism, revealing a common mathematical structure in several quantum speed-ups and error-correcting codes.
- We present two particular decoding procedures for reconstructing a quantum state from the Hawking radiation in the Hayden-Preskill thought experiment. We work in an idealized setting and represent the black hole and its entangled partner by $n$ EPR pairs. The first procedure teleports the state thrown into the black hole to an outside observer by post-selecting on the condition that a sufficient number of EPR pairs remain undisturbed. The probability of this favorable event scales as $1/d_{A}^2$, where $d_A$ is the Hilbert space dimension for the input state. The second procedure is deterministic and combines the previous idea with Grover's search. The decoding complexity is $\mathcal{O}(d_{A}\mathcal{C})$ where $\mathcal{C}$ is the size of the quantum circuit implementing the unitary evolution operator $U$ of the black hole. As with the original (non-constructive) decoding scheme, our algorithms utilize scrambling, where the decay of out-of-time-order correlators (OTOCs) guarantees faithful state recovery.
- Oct 09 2017 quant-ph arXiv:1710.02270v1: We study how well topological quantum codes can tolerate coherent noise caused by systematic unitary errors such as unwanted $Z$-rotations. Our main result is an efficient algorithm for simulating quantum error correction protocols based on the 2D surface code in the presence of coherent errors. The algorithm has runtime $O(n^2)$, where $n$ is the number of physical qubits. It allows us to simulate systems with more than one thousand qubits and obtain the first error threshold estimates for several toy models of coherent noise. Numerical results are reported for storage of logical states subject to $Z$-rotation errors and for logical state preparation with general $SU(2)$ errors. We observe that for large code distances the effective logical-level noise is well-approximated by random Pauli errors even though the physical-level noise is coherent. Our algorithm works by mapping the surface code to a system of Majorana fermions.
- Jan 19 2017 quant-ph cond-mat.mes-hall arXiv:1701.05052v1: This set of lecture notes forms the basis of a series of lectures delivered at the 48th IFF Spring School 2017 on Topological Matter: Topological Insulators, Skyrmions and Majoranas at Forschungszentrum Juelich, Germany. The first part of the lecture notes covers the basics of abelian and non-abelian anyons and their realization in Kitaev's honeycomb model. The second part discusses how to perform universal quantum computation using Majorana fermions.
- Apr 03 2017 quant-ph arXiv:1703.10793v2: Lately, much attention has been given to quantum algorithms that solve pattern recognition tasks in machine learning. Many of these quantum machine learning algorithms try to implement classical models on large-scale universal quantum computers that have access to non-trivial subroutines such as Hamiltonian simulation, amplitude amplification and phase estimation. We approach the problem from the opposite direction and analyse a distance-based classifier that is realised by a simple quantum interference circuit. After state preparation, the circuit only consists of a Hadamard gate as well as two single-qubit measurements, and computes the distance between data points in quantum parallel. We demonstrate the proof-of-principle using the IBM Quantum Experience and analyse the performance of the classifier with numerical simulations, showing that it classifies surprisingly well for simple benchmark tasks.
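
  For unit-norm feature vectors, the post-selected interference circuit realises a simple distance-weighted voting rule. The following classical re-computation is our reading of that rule (the unit-norm assumption and the $1 - |x - x_m|^2/4$ weighting are our paraphrase, not a quote from the paper):

  ```python
  import numpy as np

  def classify(x, X_train, y_train):
      """Assign the class c maximizing sum over training points m in c of (1 - |x - x_m|^2 / 4),
      the kernel realised by the interference circuit for unit-norm inputs."""
      x = x / np.linalg.norm(x)
      X = X_train / np.linalg.norm(X_train, axis=1, keepdims=True)
      weights = 1.0 - np.sum((X - x) ** 2, axis=1) / 4.0
      classes = np.unique(y_train)
      scores = [weights[y_train == c].sum() for c in classes]
      return classes[int(np.argmax(scores))]

  X_train = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]])
  y_train = np.array([0, 0, 1, 1])
  print(classify(np.array([1.0, 0.0]), X_train, y_train))   # 0
  ```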
- Jul 14 2017 quant-ph arXiv:1707.04012v2: We show that measuring pairs of qubits in the Bell basis can be used to obtain a simple quantum algorithm for efficiently identifying an unknown stabilizer state of n qubits. The algorithm uses O(n) copies of the input state and fails with exponentially small probability.
- Suppose a large scale quantum computer becomes available over the Internet. Could we delegate universal quantum computations to this server, using only classical communication between client and server, in a way that is information-theoretically blind (i.e., the server learns nothing about the input apart from its size, with no cryptographic assumptions required)? In this paper we give strong indications that the answer is no. This contrasts with the situation where quantum communication between client and server is allowed --- where we know that such information-theoretically blind quantum computation is possible. It also contrasts with the case where cryptographic assumptions are allowed: there again, it is now known that there are quantum analogues of fully homomorphic encryption. In more detail, we observe that, if there exist information-theoretically secure classical schemes for performing universal quantum computations on encrypted data, then we get unlikely containments between complexity classes, such as ${\sf BQP} \subset {\sf NP/poly}$. Moreover, we prove that having such schemes for delegating quantum sampling problems, such as Boson Sampling, would lead to a collapse of the polynomial hierarchy. We then consider encryption schemes which allow one round of quantum communication and polynomially many rounds of classical communication, yielding a generalization of blind quantum computation. We give a complexity theoretic upper bound, namely ${\sf QCMA/qpoly} \cap {\sf coQCMA/qpoly}$, on the types of functions that admit such a scheme. This upper bound then lets us show that, under plausible complexity assumptions, such a protocol is no more useful than classical schemes for delegating ${\sf NP}$-hard problems to the server. Lastly, we comment on the implications of these results for the prospect of verifying a quantum computation through classical interaction with the server.
- Apr 18 2017 quant-ph arXiv:1704.04992v3: Quantum Machine Learning is an exciting new area that was initiated by the breakthrough quantum algorithm of Harrow, Hassidim, Lloyd [HHL09] for solving linear systems of equations and has since seen many interesting developments [LMR14, LMR13a, LMR14a, KP16]. In this work, we start by providing a quantum linear system solver that outperforms the current ones for large families of matrices and provides exponential savings for any low-rank (even dense) matrix. Our algorithm uses an improved procedure for Singular Value Estimation which can be used to efficiently perform linear algebra operations, including matrix inversion and multiplication. Then, we provide the first quantum method for performing gradient descent for cases where the gradient is an affine function. Performing $\tau$ steps of the quantum gradient descent requires time $O(\tau C_S)$, where $C_S$ is the cost of performing quantumly one step of the gradient descent, which can be exponentially smaller than the cost of performing the step classically. We provide two applications of our quantum gradient descent algorithm: first, for solving positive semidefinite linear systems, and, second, for performing stochastic gradient descent for the weighted least squares problem.
- We study the problem of approximating a quantum channel by one with as few Kraus operators as possible (in the sense that, for any input state, the output states of the two channels should be close to one another). Our main result is that any quantum channel mapping states on some input Hilbert space $\mathrm{A}$ to states on some output Hilbert space $\mathrm{B}$ can be compressed into one with order $d\log d$ Kraus operators, where $d=\max(|\mathrm{A}|,|\mathrm{B}|)$, hence much less than $|\mathrm{A}||\mathrm{B}|$. In the case where the channel's outputs are all very mixed, this can be improved to order $d$. We discuss the optimality of this result as well as some consequences.
- We study thermal states of strongly interacting quantum spin chains and prove that those can be represented in terms of convex combinations of matrix product states. Apart from revealing new features of the entanglement structure of Gibbs states, our results provide a theoretical justification for the use of White's algorithm of minimally entangled typical thermal states. Furthermore, we shed new light on time-dependent matrix product state algorithms which yield hydrodynamical descriptions of the underlying dynamics.
- A well-known result of Gottesman and Knill states that Clifford circuits - i.e. circuits composed of only CNOT, Hadamard, and $\pi/4$ phase gates - are efficiently classically simulable. We show that in contrast, "conjugated Clifford circuits" (CCCs) - where one additionally conjugates every qubit by the same one-qubit gate U - can perform hard sampling tasks. In particular, we fully classify the computational power of CCCs by showing that essentially any non-Clifford conjugating unitary U can give rise to sampling tasks which cannot be simulated classically to constant multiplicative error, unless the polynomial hierarchy collapses. Furthermore, we show that this hardness result can be extended to allow for the more realistic model of constant additive error, under a plausible complexity-theoretic conjecture.
- We give precise quantum resource estimates for Shor's algorithm to compute discrete logarithms on elliptic curves over prime fields. The estimates are derived from a simulation of a Toffoli gate network for controlled elliptic curve point addition, implemented within the framework of the quantum computing software tool suite LIQ$Ui|\rangle$. We determine circuit implementations for reversible modular arithmetic, including modular addition, multiplication and inversion, as well as reversible elliptic curve point addition. We conclude that elliptic curve discrete logarithms on an elliptic curve defined over an $n$-bit prime field can be computed on a quantum computer with at most $9n + 2\lceil\log_2(n)\rceil+10$ qubits using a quantum circuit of at most $448 n^3 \log_2(n) + 4090 n^3$ Toffoli gates. We are able to classically simulate the Toffoli networks corresponding to the controlled elliptic curve point addition as the core piece of Shor's algorithm for the NIST standard curves P-192, P-224, P-256, P-384 and P-521. Our approach allows gate-level comparisons to recent resource estimates for Shor's factoring algorithm. The results also support estimates given earlier by Proos and Zalka and indicate that, for current parameters at comparable classical security levels, the number of qubits required to tackle elliptic curves is less than for attacking RSA, suggesting that indeed ECC is an easier target than RSA.
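
  Plugging the NIST prime-field sizes into the two formulas quoted above gives a feel for the numbers (a straightforward evaluation of the stated expressions; the rounding is ours):

  ```python
  import math

  for n in (192, 224, 256, 384, 521):                  # NIST curves P-192 ... P-521
      qubits = 9 * n + 2 * math.ceil(math.log2(n)) + 10
      toffoli = 448 * n**3 * math.log2(n) + 4090 * n**3
      print(f"P-{n}: {qubits} qubits, ~{toffoli:.1e} Toffoli gates")
  # e.g. P-256: 2330 qubits and roughly 1.3e11 Toffoli gates
  ```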
- Quantum information technologies, and intelligent learning systems, are both emergent technologies that will likely have a transforming impact on our society. The respective underlying fields of research -- quantum information (QI) versus machine learning (ML) and artificial intelligence (AI) -- have their own specific challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question to what extent these fields can learn and benefit from each other. Quantum machine learning (QML) explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Recently, we have witnessed breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups in ML, critical in our "big data" world. Conversely, ML already permeates cutting-edge technologies, and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical ML optimization used in quantum experiments, quantum enhancements have also been demonstrated for interactive learning, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of AI for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement, researchers have also broached the fundamental issue of quantum generalizations of ML/AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is described by quantum mechanics. In this review, we describe the main ideas, recent developments, and progress in a broad spectrum of research investigating machine learning and artificial intelligence in the quantum domain.
- Jul 07 2017 quant-ph arXiv:1707.01750v1: In this work we formulate thermodynamics as an exclusive consequence of information conservation. The framework can be applied to the most general situations, beyond the traditional assumptions in thermodynamics, where systems and thermal baths can be quantum, of arbitrary size, and may even possess inter-system correlations. Further, it does not require an a priori predetermined temperature associated with a thermal bath, a notion that makes little sense for finite-size cases. Importantly, thermal baths and systems are not treated differently here; rather, both are considered on an equal footing. This leads us to introduce a "temperature"-independent formulation of thermodynamics. We rely on the fact that, for a given amount of information, measured by the von Neumann entropy, any system can be transformed to a state that possesses minimal energy. This state is known as a completely passive state, which acquires a Boltzmann-Gibbs canonical form with an intrinsic temperature. We introduce the notions of bound and free energy and use them to quantify heat and work, respectively. We explicitly use information conservation as the fundamental principle of nature, and develop universal notions of equilibrium, heat and work, universal fundamental laws of thermodynamics, and Landauer's principle that connects thermodynamics and information. We demonstrate that the maximum efficiency of a quantum engine with a finite bath is in general smaller than that of an ideal Carnot engine. We introduce a resource-theoretic framework for our intrinsic-temperature-based thermodynamics, within which we address the problem of work extraction and inter-state transformations. We also extend the framework to the case of multiple conserved quantities.
- We present a brief review of discrete structures in a finite Hilbert space, relevant for the theory of quantum information. Unitary operator bases, mutually unbiased bases, the Clifford group and stabilizer states, the discrete Wigner function, symmetric informationally complete measurements, and projective and unitary t-designs are discussed. Some recent results in the field are covered and several important open questions are formulated. We advocate a geometric approach to the subject and emphasize numerous links to various mathematical problems.
- The Travelling Salesman Problem is one of the most famous problems in graph theory. However, little is currently known about the extent to which quantum computers could speed up algorithms for the problem. In this paper, we prove a quadratic quantum speedup when the degree of each vertex is at most 3 by applying a quantum backtracking algorithm to a classical algorithm by Xiao and Nagamochi. We then use similar techniques to accelerate a classical algorithm for when the degree of each vertex is at most 4, before speeding up higher-degree graphs via reductions to these instances.
- Dec 15 2016 quant-ph arXiv:1612.04795v2: The surface code is one of the most successful approaches to topological quantum error-correction. It boasts the smallest known syndrome extraction circuits and correspondingly largest thresholds. Defect-based logical encodings of a new variety called twists have made it possible to implement the full Clifford group without state distillation. Here we investigate a patch-based encoding involving a modified twist. In our modified formulation, the resulting codes, called triangle codes for the shape of their planar layout, have only weight-four checks and relatively simple syndrome extraction circuits that maintain a high, near surface-code-level threshold. They also use 25% fewer physical qubits per logical qubit than the surface code. Moreover, benefiting from the twist, we can implement all Clifford gates by lattice surgery without the need for state distillation. By a surgical transformation to the surface code, we also develop a scheme of doing all Clifford gates on surface code patches in an atypical planar layout, though with less qubit efficiency than the triangle code. Finally, we remark that logical qubits encoded in triangle codes are naturally amenable to logical tomography, and the smallest triangle code can demonstrate high-pseudothreshold fault-tolerance to depolarizing noise using just 13 physical qubits.
- Brandão and Svore very recently gave quantum algorithms for approximately solving semidefinite programs, which in some regimes are faster than the best-possible classical algorithms in terms of the dimension $n$ of the problem and the number $m$ of constraints, but worse in terms of various other parameters. In this paper we improve their algorithms in several ways, getting better dependence on those other parameters. To this end we develop new techniques for quantum algorithms, for instance a general way to efficiently implement smooth functions of sparse Hamiltonians, and a generalized minimum-finding procedure. We also show limits on this approach to quantum SDP-solvers, for instance for combinatorial optimization problems that have a lot of symmetry. Finally, we prove some general lower bounds showing that in the worst case, the complexity of every quantum LP-solver (and hence also SDP-solver) has to scale linearly with $mn$ when $m\approx n$, which is the same as classical.
- The phenomenon of data hiding, i.e. the existence of pairs of states of a bipartite system that are perfectly distinguishable via general entangled measurements yet almost indistinguishable under LOCC, is a distinctive signature of nonclassicality. The relevant figure of merit is the maximal ratio (called data hiding ratio) between the distinguishability norms associated with the two sets of measurements we are comparing, typically all measurements vs LOCC protocols. For a bipartite $n\times n$ quantum system, it is known that the data hiding ratio scales as $n$, i.e. the square root of the dimension of the local state space of density matrices. We show that for bipartite $n_A\times n_B$ systems the maximum data hiding ratio against LOCC protocols is $\Theta\left(\min\{n_A,n_B\}\right)$. This scaling is better than the previously best obtained $\sqrt{n_A n_B}$, and moreover our intuitive argument yields constants close to optimal. In this paper, we investigate data hiding in the more general context of general probabilistic theories (GPTs), an axiomatic framework for physical theories encompassing only the most basic requirements about the predictive power of the theory. The main result of the paper is the determination of the maximal data hiding ratio obtainable in an arbitrary GPT, which is shown to scale linearly in the minimum of the local dimensions. We exhibit an explicit model achieving this bound up to additive constants, finding that quantum mechanics exhibits a data hiding ratio which is only the square root of the maximal one. Our proof rests crucially on an unexpected link between data hiding and the theory of projective and injective tensor products of Banach spaces. Finally, we develop a body of techniques to compute data hiding ratios for a variety of restricted classes of GPTs that support further symmetries.
- Dec 23 2016 quant-ph arXiv:1612.07330v1: Current experiments are taking the first steps toward noise-resilient logical qubits. Crucially, a quantum computer must not merely store information, but also process it. A fault-tolerant computational procedure ensures that errors do not multiply and spread. This review compares the leading proposals for promoting a quantum memory to a quantum processor. We compare magic state distillation, color code techniques and other alternative ideas, paying attention to relative resource demands. We discuss several no-go results that hold for low-dimensional topological codes and outline the potential rewards of using high-dimensional quantum low-density parity-check (LDPC) codes in modular architectures.
- In quantum algorithms discovered so far for simulating scattering processes in quantum field theories, state preparation is the slowest step. We present a new algorithm for preparing particle states to use in simulation of Fermionic Quantum Field Theory (QFT) on a quantum computer, which is based on the matrix product state ansatz. We apply this to the massive Gross-Neveu model in one spatial dimension to illustrate the algorithm, but we believe the same algorithm with slight modifications can be used to simulate any one-dimensional massive Fermionic QFT. In the case where the number of particle species is one, our algorithm can prepare particle states using $O\left( \epsilon^{-3.23\ldots}\right)$ gates, which is much faster than previously known results, namely $O\left(\epsilon^{-8-o\left(1\right)}\right)$. Furthermore, unlike previous methods which were based on adiabatic state preparation, the method given here should be able to simulate quantum phases unconnected to the free theory.
- Nov 06 2017 quant-ph arXiv:1711.01193v1: Thermodynamics is traditionally constrained to the study of macroscopic systems whose energy fluctuations are negligible compared to their average energy. Here, we push beyond this thermodynamic limit by developing a mathematical framework to rigorously address the problem of thermodynamic transformations of finite-size systems. More formally, we analyse state interconversion under thermal operations and between arbitrary energy-incoherent states. We find precise relations between the optimal rate at which interconversion can take place and the desired infidelity of the final state when the system size is sufficiently large. These so-called second-order asymptotics provide a bridge between the extreme cases of single-shot thermodynamics and the asymptotic limit of infinitely large systems. We illustrate the utility of our results with several examples. We first show how thermodynamic cycles are affected by irreversibility due to finite-size effects. We then provide a precise expression for the gap between the distillable work and work of formation that opens away from the thermodynamic limit. Finally, we explain how the performance of a heat engine gets affected when one of the heat baths it operates between is finite. We find that while perfect work cannot generally be extracted at Carnot efficiency, there are conditions under which these finite-size effects vanish. In deriving our results we also clarify relations between different notions of approximate majorisation.
- We give a new upper bound on the quantum query complexity of deciding $st$-connectivity on certain classes of planar graphs, and show the bound is sometimes exponentially better than previous results. We then show Boolean formula evaluation reduces to deciding connectivity on just such a class of graphs. Applying the algorithm for $st$-connectivity to Boolean formula evaluation problems, we match the $O(\sqrt{N})$ bound on the quantum query complexity of evaluating formulas on $N$ variables, give a quadratic speed-up over the classical query complexity of a certain class of promise Boolean formulas, and show this approach can yield superpolynomial quantum/classical separations. These results indicate that this $st$-connectivity-based approach may be the "right" way of looking at quantum algorithms for formula evaluation.