# Top arXiv papers

• We show that there are two distinct aspects of a general quantum circuit that can make it hard to efficiently simulate with a classical computer. The first aspect, which has been well-studied, is that it can be hard to efficiently estimate the probability associated with a particular measurement outcome. However, we show that this aspect alone does not determine whether a quantum circuit can be efficiently simulated. The second aspect is that, in general, there can be an exponential number of 'relevant' outcomes that are needed for an accurate simulation, and so efficient simulation may not be possible even in situations where the probabilities of individual outcomes can be efficiently estimated. We show that these two aspects are distinct, the former being necessary but not sufficient for simulability, whilst the pair is jointly sufficient. Specifically, we prove that a family of quantum circuits is efficiently simulable if it satisfies two properties: one related to the efficiency of Born rule probability estimation, and the other related to the sparsity of the outcome distribution. We then prove a pair of hardness results (using standard complexity assumptions and a variant of a commonly-used average case hardness conjecture), where we identify families of quantum circuits that satisfy one property but not the other, and for which efficient simulation is not possible. To prove our results, we consider a notion of simulation of quantum circuits that we call epsilon-simulation. This notion is less stringent than exact sampling and is now in common use in quantum hardness proofs.
• With quantum computers of significant size now on the horizon, we should understand how to best exploit their initially limited abilities. To this end, we aim to identify a practical problem that is beyond the reach of current classical computers, but that requires the fewest resources for a quantum computer. We consider quantum simulation of spin systems, which could be applied to understand condensed matter phenomena. We synthesize explicit circuits for three leading quantum simulation algorithms, employing diverse techniques to tighten error bounds and optimize circuit implementations. Quantum signal processing appears to be preferred among algorithms with rigorous performance guarantees, whereas higher-order product formulas prevail if empirical error estimates suffice. Our circuits are orders of magnitude smaller than those for the simplest classically-infeasible instances of factoring and quantum chemistry.
• In quantum algorithms discovered so far for simulating scattering processes in quantum field theories, state preparation is the slowest step. We present a new algorithm for preparing particle states to use in simulation of Fermionic Quantum Field Theory (QFT) on a quantum computer, which is based on the matrix product state ansatz. We apply this to the massive Gross-Neveu model in one spatial dimension to illustrate the algorithm, but we believe the same algorithm with slight modifications can be used to simulate any one-dimensional massive Fermionic QFT. In the case where the number of particle species is one, our algorithm can prepare particle states using $O\left( \epsilon^{-3.23\ldots}\right)$ gates, which is much faster than previously known results, namely $O\left(\epsilon^{-8-o\left(1\right)}\right)$. Furthermore, unlike previous methods which were based on adiabatic state preparation, the method given here should be able to simulate quantum phases unconnected to the free theory.
• We present two techniques that can greatly reduce the number of gates required for ground state preparation in quantum simulations. The first technique realizes that to prepare the ground state of some Hamiltonian, it is not necessary to implement the time-evolution operator: any unitary operator which is a function of the Hamiltonian will do. We propose one such unitary operator which can be implemented exactly, circumventing any Taylor or Trotter approximation errors. The second technique is tailored to lattice models, and is targeted at reducing the use of generic single-qubit rotations, which are very expensive to produce by distillation and synthesis fault-tolerantly. In particular, the number of generic single-qubit rotations used by our method scales with the number of parameters in the Hamiltonian, which contrasts with a growth proportional to the lattice site required by other techniques.
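The first technique rests on a simple fact that can be illustrated classically: applying almost any function of $H$ that suppresses excited states boosts the ground-state overlap of a trial state. The sketch below (a generic illustration assuming exact diagonalization, not the paper's exactly-implementable unitary) uses a filter $f(H) = e^{-\tau H}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian "Hamiltonian" on a small Hilbert space.
n = 16
A = rng.normal(size=(n, n))
H = (A + A.T) / 2

evals, evecs = np.linalg.eigh(H)   # eigenvalues in ascending order
ground = evecs[:, 0]

# Trial state with some (imperfect) ground-state overlap.
trial = ground + 0.5 * rng.normal(size=n)
trial /= np.linalg.norm(trial)

# Apply a filter f(H) = exp(-tau * H); any function of H that damps
# excited eigencomponents relative to the ground state works here.
tau = 2.0
fH = evecs @ np.diag(np.exp(-tau * evals)) @ evecs.T
filtered = fH @ trial
filtered /= np.linalg.norm(filtered)

before = abs(ground @ trial)
after = abs(ground @ filtered)
print(before, after)  # the overlap strictly improves
```

The filter amplifies the lowest eigencomponent relative to all others, which is why "any unitary operator which is a function of the Hamiltonian will do" provided it preferentially weights the ground state.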
• Building upon a recent approach pioneered by Barvinok [4, 5, 7, 8], we present a quasi-polynomial time algorithm for approximating the permanent of a typical $n \times n$ random matrix with unit variance and vanishing mean $\mu = (\ln \ln n)^{-1/6}$ to within inverse polynomial multiplicative error. This result counters the common intuition that the difficulty of computing the permanent, even approximately, stems merely from our inability to treat matrices with many opposing signs. We believe that our approach may have several implications to understanding the permanent in the context of computational complexity, anti-concentration statistics, and de-randomization.
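For context, computing the permanent exactly is #P-hard, and the best known exact algorithm, Ryser's inclusion-exclusion formula (standard material, not from this paper), still takes $O(2^n n^2)$ time; approximation schemes like the one above are measured against baselines of this kind:

```python
from itertools import combinations

def permanent_ryser(A):
    """Exact permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2)."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# The all-ones n x n matrix has permanent n!.
print(permanent_ryser([[1, 1, 1]] * 3))  # 6.0
```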
• We prove that ground states of gapped local Hamiltonians on an infinite spin chain can be efficiently approximated by matrix product states with a bond dimension which scales as $D \sim (L-1)/\epsilon$, where any local quantity on $L$ consecutive spins is approximated to accuracy $\epsilon$.
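The mechanism behind such bounds is Schmidt-rank truncation across a cut: the best bond-dimension-$D$ approximation keeps the $D$ largest singular values, and the error is the weight of those discarded. A minimal numerical sketch (a generic random state for illustration; gapped ground states, the paper's subject, have much faster-decaying Schmidt spectra):

```python
import numpy as np

rng = np.random.default_rng(1)

L = 10                                # number of spins
psi = rng.normal(size=2 ** L)
psi /= np.linalg.norm(psi)

# Schmidt decomposition across the middle cut.
M = psi.reshape(2 ** (L // 2), 2 ** (L // 2))
U, s, Vt = np.linalg.svd(M, full_matrices=False)

def truncation_error(D):
    """L2 error of the best rank-D (bond dimension D) approximation."""
    return np.sqrt(np.sum(s[D:] ** 2))

errors = [truncation_error(D) for D in (1, 4, 16, 32)]
print(errors)  # non-increasing; D = 32 (full rank) is exact
```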
• What is the energy cost of extracting entanglement from complex quantum systems? In other words, given a state of a quantum system, how much energy does it cost to extract m EPR pairs? This is an important question, particularly for quantum field theories where the vacuum is generally highly entangled. Here we build a theory to understand the energy cost of entanglement extraction. First, we apply it to a toy model, and then we define the entanglement temperature, which relates the energy cost to the amount of extracted entanglement. Next, we give a physical argument to find the energy cost of entanglement extraction in some condensed matter and quantum field systems. The energy cost for those quantum field theories depends on the spatial dimension, and in one dimension, for example, it grows exponentially with the number of EPR pairs extracted. Next, we outline some approaches for bounding the energy cost of extracting entanglement in general quantum systems. Finally, we look at the antiferromagnetic Heisenberg and transverse field Ising models numerically to calculate the entanglement temperature using matrix product states.
• Before executing a quantum algorithm, one must first decompose the algorithm into machine-level instructions compatible with the architecture of the quantum computer, a process known as quantum compiling. There are many different quantum circuit decompositions for the same algorithm but it is desirable to compile leaner circuits. A popular cost measure is the $T$ count -- the number of $T$ gates in a circuit -- since it closely approximates the full space-time cost for surface code architectures. For the single qubit case, optimal compiling is essentially a solved problem. However, multi-qubit compiling is a harder problem with optimal algorithms requiring classical runtime exponential in the number of qubits, $n$. Here, we present and compare several efficient quantum compilers for multi-qubit Clifford + $T$ circuits. We implemented our compilers in C++ and benchmarked them on random circuits, from which we determine that our TODD compiler yields the lowest $T$ counts on average. We also benchmarked TODD on a library of reversible logic circuits that appear in quantum algorithms and found an average of 34\% $T$-count reduction when compared against the best of all previous circuit decompositions.
• In topology, a torus remains invariant under certain non-trivial transformations known as modular transformations. In the context of topologically ordered quantum states of matter, these transformations encode the braiding statistics and fusion rules of emergent anyonic excitations and thus serve as a diagnostic of topological order. Moreover, modular transformations of higher genus surfaces, e.g. a torus with multiple handles, can enhance the computational power of a topological state, in many cases providing a universal fault-tolerant set of gates for quantum computation. However, due to the intrusive nature of modular transformations, which abstractly involve global operations and manifold surgery, physical implementations of them in local systems have remained elusive. Here, we show that by folding manifolds, modular transformations can be reduced to independent local unitaries, providing a novel class of transversal logic gates in topological states. Specifically, through folding, we demonstrate that multi-layer topological states with appropriate boundary conditions and twist defects allow modular transformations to be effectively implemented by a finite sequence of local SWAP gates between the layers. We further provide methods to directly measure the modular matrices, and thus the fractional statistics of anyonic excitations, providing a novel way to directly measure topological order.
• We present a quantum algorithm for simulating the wave equation under Dirichlet and Neumann boundary conditions. The algorithm uses Hamiltonian simulation and quantum linear system algorithms as subroutines. It relies on factorizations of discretized Laplacian operators to allow for improved scaling in truncation errors and improved scaling for state preparation relative to general purpose linear differential equation algorithms. We also consider using Hamiltonian simulation for Klein-Gordon equations and Maxwell's equations.
• Laboratory hardware is rapidly progressing towards a state where quantum error-correcting codes can be realised. As such, we must learn how to deal with the complex nature of the noise that may occur in real physical systems. Single qubit Pauli errors are commonly used to study the behaviour of error-correcting codes, but in general we might expect the environment to introduce correlated errors to a system. Given some knowledge of structures that errors commonly take, it may be possible to adapt the error-correction procedure to compensate for this noise, but performing full state tomography on a physical system to analyse this structure quickly becomes impossible as the size increases beyond a few qubits. Here we develop and test new methods to analyse correlated errors by making use of parametrised families of decoding algorithms. We demonstrate our method numerically using a diffusive noise model. We show that information can be learnt about the parameters of the noise model, and additionally that the logical error rates can be improved. We conclude by discussing how our method could be utilised in a practical setting.
• Even the most sophisticated artificial neural networks are built by aggregating substantially identical units called neurons. A neuron receives multiple signals, internally combines them, and applies a non-linear function to the resulting weighted sum. Several attempts to generalize neurons to the quantum regime have been proposed, but all proposals collided with the difficulty of implementing non-linear activation functions, which are essential for classical neurons, owing to the linear nature of quantum mechanics. Here we propose a solution to this roadblock in the form of a small quantum circuit that naturally simulates neurons with threshold activation. Our quantum circuit defines a building block, the "quantum neuron", that can reproduce a variety of classical neural network constructions while maintaining the ability to process superpositions of inputs and preserve quantum coherence and entanglement. In the construction of feedforward networks of quantum neurons, we provide numerical evidence that the network not only can learn a function when trained with superposition of inputs and the corresponding output, but that this training suffices to learn the function on all individual inputs separately. When arranged to mimic Hopfield networks, quantum neural networks exhibit properties of associative memory. Patterns are encoded using the simple Hebbian rule for the weights and we demonstrate attractor dynamics from corrupted inputs. Finally, the fact that our quantum model closely captures (traditional) neural network dynamics implies that the vast body of literature and results on neural networks becomes directly relevant in the context of quantum machine learning.
• A method to study strongly interacting quantum many-body systems at and away from criticality is proposed. The method is based on a MERA-like tensor network that can be efficiently and reliably contracted on a noisy quantum computer using a number of qubits that is much smaller than the system size. We prove that the outcome of the contraction is stable to noise and that the estimated energy upper bounds the ground state energy. The stability, which we numerically substantiate, follows from the positivity of operator scaling dimensions under renormalization group flow. The variational upper bound follows from a particular assignment of physical qubits to different locations of the tensor network plus the assumption that the noise model is local. We postulate a scaling law for how well the tensor network can approximate ground states of lattice regulated conformal field theories in d spatial dimensions and provide evidence for the postulate. Under this postulate, a $O(\log^{d}(1/\delta))$-qubit quantum computer can prepare a valid quantum-mechanical state with energy density $\delta$ above the ground state. In the presence of noise, $\delta = O(\epsilon \log^{d+1}(1/\epsilon))$ can be achieved, where $\epsilon$ is the noise strength.
• Matthew Fisher recently postulated a mechanism by which quantum phenomena could influence cognition: Phosphorus nuclear spins may resist decoherence for long times. The spins would serve as biological qubits. The qubits may resist decoherence longer when in Posner molecules. We imagine that Fisher's postulates are correct. How adroitly could biological systems process quantum information (QI)? We establish a framework for answering. Additionally, we apply biological qubits in quantum error correction, quantum communication, and quantum computation. First, we posit how the QI encoded by the spins transforms as Posner molecules form. The transformation points to a natural computational basis for qubits in Posner molecules. From the basis, we construct a quantum code that detects arbitrary single-qubit errors. Each molecule encodes one qutrit. Shifting from information storage to computation, we define the model of Posner quantum computation. To illustrate the model's quantum-communication ability, we show how it can teleport information incoherently: A state's weights are teleported; the coherences are not. The dephasing results from the entangling operation's simulation of a coarse-grained Bell measurement. Whether Posner quantum computation is universal remains an open question. However, the model's operations can efficiently prepare a Posner state usable as a resource in universal measurement-based quantum computation. The state results from deforming the Affleck-Lieb-Kennedy-Tasaki (AKLT) state and is a projected entangled-pair state (PEPS). Finally, we show that entanglement can affect molecular-binding rates (by 0.6% in an example). This work opens the door for the QI-theoretic analysis of biological qubits and Posner molecules.
• As far as we know, a useful quantum computer will require fault-tolerant gates, and existing schemes demand a prohibitively large space and time overhead. We argue that a first generation quantum computer will be very valuable to design, test, and optimize fault-tolerant protocols tailored to the noise processes of the hardware. Our argument is essentially a critical analysis of the current methods envisioned to optimize fault-tolerant schemes, which rely on hardware characterization, noise modelling, and numerical simulations. We show that, even within a very restricted set of noise models, error correction protocols depend strongly on the details of the noise model. Combined with the intrinsic difficulty of hardware characterization and of numerical simulations of fault-tolerant protocols, we arrive at the conclusion that the currently envisioned optimization cycle is of very limited scope. On the other hand, the direct characterization of a fault-tolerant scheme on a small quantum computer bypasses these difficulties, and could provide a bootstrapping path to full-scale fault-tolerant quantum computation.
• The number of parameters describing a quantum state is well known to grow exponentially with the number of particles. This scaling clearly limits our ability to do tomography to systems with no more than a few qubits and has been used to argue against the universal validity of quantum mechanics itself. However, from a computational learning theory perspective, it can be shown that, in a probabilistic setting, quantum states can be approximately learned using only a linear number of measurements. Here we experimentally demonstrate this linear scaling in optical systems with up to 6 qubits. Our results highlight the power of computational learning theory to investigate quantum information, provide the first experimental demonstration that quantum states can be "probably approximately learned" with access to a number of copies of the state that scales linearly with the number of qubits, and pave the way to probing quantum states at new, larger scales.
• Large-scale quantum computation is likely to require massive quantum error correction (QEC). QEC codes and circuits are described via the stabilizer formalism, which represents stabilizer states by keeping track of the operators that preserve them. Such states are obtained by stabilizer circuits (consisting of CNOT, Hadamard and Phase gates) and can be represented compactly on conventional computers using $O(n^2)$ bits, where $n$ is the number of qubits. As an additional application, the work by Aaronson and Gottesman suggests the use of superpositions of stabilizer states to represent arbitrary quantum states. To aid in such applications and improve our understanding of stabilizer states, we characterize and count nearest-neighbor stabilizer states, quantify the distribution of angles between pairs of stabilizer states, study succinct stabilizer superpositions and stabilizer bivectors, explore the approximation of non-stabilizer states by single stabilizer states and short linear combinations of stabilizer states, develop an improved inner-product computation for stabilizer states via synthesis of compact canonical stabilizer circuits, propose an orthogonalization procedure for stabilizer states, and evaluate several of these algorithms empirically.
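The $O(n^2)$-bit representation can be made concrete: each stabilizer generator is a binary symplectic $(x|z)$ vector of $2n$ bits, and Clifford gates act by simple bit operations. Below is a minimal toy tableau (sign bits omitted for brevity; the standard Heisenberg-picture update rules, not the paper's implementation), checked on Bell-state preparation:

```python
# Each stabilizer generator is a pair of bit-lists (x, z): x[i] = 1 means an X
# acting on qubit i, z[i] = 1 means a Z; both set means Y.  n generators of
# 2n bits each gives the O(n^2)-bit representation (phase bits omitted here).
def new_tableau(n):
    gens = []
    for i in range(n):
        x, z = [0] * n, [0] * n
        z[i] = 1                  # |0...0> is stabilized by Z_i
        gens.append((x, z))
    return gens

def apply_h(gens, q):
    for x, z in gens:             # Hadamard exchanges X and Z on qubit q
        x[q], z[q] = z[q], x[q]

def apply_cnot(gens, c, t):
    for x, z in gens:             # CNOT maps X_c -> X_c X_t and Z_t -> Z_c Z_t
        x[t] ^= x[c]
        z[c] ^= z[t]

gens = new_tableau(2)
apply_h(gens, 0)                  # Bell-state preparation circuit
apply_cnot(gens, 0, 1)
print(gens)                       # [([1, 1], [0, 0]), ([0, 0], [1, 1])]: XX and ZZ
```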
• This paper considers the potential impact that the nascent technology of quantum computing may have on society. It focuses on three areas: cryptography, optimization, and simulation of quantum systems. We will also discuss some ethical aspects of these developments, and ways to mitigate the risks.
• We study symmetry-enriched topological order in two-dimensional tensor network states by using graded matrix product operator algebras to represent symmetry induced domain walls. A close connection to the theory of graded unitary fusion categories is established. Tensor network representations of the topological defect superselection sectors are constructed for all domain walls. The emergent symmetry-enriched topological order is extracted from these representations, including the symmetry action on the underlying anyons. Dual phase transitions, induced by gauging a global symmetry, and condensation of a bosonic subtheory, are analyzed and the relationship between topological orders on either side of the transition is derived. Several examples are worked through explicitly.
• Simulating strongly correlated fermionic systems is notoriously hard on classical computers. An alternative approach, as proposed by Feynman, is to use a quantum computer. Here, we discuss quantum simulation of strongly correlated fermionic systems. We focus specifically on 2D and linear geometry with nearest neighbor qubit-qubit couplings, typical for superconducting transmon qubit arrays. We improve an existing algorithm to prepare an arbitrary Slater determinant by exploiting a unitary symmetry. We also present a quantum algorithm to prepare an arbitrary fermionic Gaussian state with $O(N^2)$ gates and $O(N)$ circuit depth. Both algorithms are optimal in the sense that the numbers of parameters in the quantum circuits are equal to those to describe the quantum states. Furthermore, we propose an algorithm to implement the 2-dimensional (2D) fermionic Fourier transformation on a 2D qubit array with only $O(N^{1.5})$ gates and $O(\sqrt N)$ circuit depth, which is the minimum depth required for quantum information to travel across the qubit array. We also present methods to simulate each time step in the evolution of the 2D Fermi-Hubbard model---again on a 2D qubit array---with $O(N)$ gates and $O(\sqrt N)$ circuit depth. Finally, we discuss how these algorithms can be used to determine the ground state properties and phase diagrams of strongly correlated quantum systems using the Hubbard model as an example.
• Quantum walks on graphs have been shown in certain cases to mix quadratically faster than their classical counterparts. Lifted Markov chains, consisting of a Markov chain on an extended state space which is projected back down to the original state space, also show considerable speedups in mixing time. Here, we construct a lifted Markov chain on a graph with $n^2 T^3$ vertices that mixes to the average mixing distribution of a quantum walk on any graph with $n$ vertices over $T$ timesteps. Moreover, we prove that the mixing time of this chain is $T$, the number of timesteps in the quantum walk. As an immediate consequence, for every quantum walk there is a lifted Markov chain with the same mixing time. The result is based on a lifting presented by Apers, Ticozzi and Sarlette (arXiv:1705.08253).
• We prove a characterization of $t$-query quantum algorithms in terms of the unit ball of a space of degree-$2t$ polynomials. Based on this, we obtain a refined notion of approximate polynomial degree that equals the quantum query complexity, answering a question of Aaronson et al. (CCC'16). Our proof is based on a fundamental result of Christensen and Sinclair (J. Funct. Anal., 1987) that generalizes the well-known Stinespring representation for quantum channels to multilinear forms. Using our characterization, we show that many polynomials of degree four are far from those coming from two-query quantum algorithms. We also give a simple and short proof of one of the results of Aaronson et al. showing an equivalence between one-query quantum algorithms and bounded quadratic polynomials.
• We propose a general-purpose quantum algorithm for preparing ground states of quantum Hamiltonians from a given trial state. The algorithm is based on techniques recently developed in the context of solving the quantum linear systems problem [Childs,Kothari,Somma'15]. We show that, compared to algorithms based on phase estimation, the runtime of our algorithm is exponentially better as a function of the allowed error, and at least quadratically better as a function of the overlap with the trial state. Our algorithm also has a better scaling with the spectral gap, which is quadratically better if the ground energy is known beforehand. We also show that our algorithm requires fewer ancilla qubits than existing algorithms, making it attractive for early applications of small quantum computers. Additionally, it can be used to determine an unknown ground energy faster than with phase estimation if a very high precision is required.
• We present three techniques for reducing the cost of preparing fermionic Hamiltonian eigenstates using phase estimation. First, we report a polylogarithmic-depth quantum algorithm for antisymmetrizing the initial states required for simulation of fermions in first quantization. This is an exponential improvement over the previous state-of-the-art. Next, we show how to reduce the overhead due to repeated state preparation in phase estimation when the goal is to prepare the ground state to high precision and one has knowledge of an upper bound on the ground state energy that is less than the excited state energy (often the case in quantum chemistry). Finally, we explain how one can perform the time evolution necessary for the phase estimation based preparation of Hamiltonian eigenstates with exactly zero error by using the recently introduced qubitization procedure.
• We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman's construction of fault tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of $\alpha$-percolation: for a random subset $W$ of vertices of a given graph, we consider the size of the largest connected $\alpha$-subset of $W$, where $X$ is an $\alpha$-subset of $W$ if $|X \cap W| \geq \alpha |X|$.
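The $\alpha$-subset definition translates directly into code; the following brute-force sketch (a hypothetical illustration of the definition on a tiny graph, not the paper's percolation analysis) finds the largest connected $\alpha$-subset of $W$:

```python
from itertools import combinations

def largest_connected_alpha_subset(adj, W, alpha):
    """Size of the largest connected X with |X ∩ W| >= alpha * |X|.
    adj: dict vertex -> set of neighbours; W: set of 'open' vertices."""
    def connected(X):
        X = set(X)
        seen, stack = set(), [next(iter(X))]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(adj[v] & X)
        return seen == X

    best = 0
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for X in combinations(vertices, k):
            if len(set(X) & W) >= alpha * len(X) and connected(X):
                best = max(best, k)
    return best

# Path graph 0-1-2-3 with W = {0, 3}: the whole path has |X ∩ W| / |X| = 1/2.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(largest_connected_alpha_subset(path, {0, 3}, 0.5))  # 4
```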
• Present quantum computers often work with distinguishable qubits as their computational units. In order to simulate indistinguishable fermionic particles, it is first required to map the fermionic state to the state of the qubits. The Bravyi-Kitaev Superfast (BKSF) algorithm can be used to accomplish this mapping. The BKSF mapping has connections to quantum error correction and opens the door to new ways of understanding fermionic simulation in a topological context. Here, we present the first detailed exposition of the BKSF algorithm for molecular simulation. We provide the BKSF transformed qubit operators and report on our implementation of the BKSF fermion-to-qubit transform in OpenFermion. In this initial study of the hydrogen molecule, we have compared BKSF, Jordan-Wigner and Bravyi-Kitaev transforms under the Trotter approximation. We considered different orderings of the exponentiated terms and found lower Trotter errors than previously reported for Jordan-Wigner and Bravyi-Kitaev algorithms. These results open the door to further study of the BKSF algorithm for quantum simulation.
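As background, the Jordan-Wigner transform (one of the three mappings compared) can be written and sanity-checked in a few lines of NumPy; this is a generic textbook construction, not the paper's OpenFermion code:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(p, n):
    """Jordan-Wigner a_p on n modes: Z-string on modes < p, then (X + iY)/2."""
    ops = [Z] * p + [(X + 1j * Y) / 2] + [I2] * (n - p - 1)
    return kron_all(ops)

n = 3
a = [jw_annihilation(p, n) for p in range(n)]

def anticomm(A, B):
    return A @ B + B @ A

# Canonical anticommutation relations: {a_p, a_q^dag} = delta_pq, {a_p, a_q} = 0.
ok = all(
    np.allclose(anticomm(a[p], a[q].conj().T), np.eye(2 ** n) * (p == q))
    and np.allclose(anticomm(a[p], a[q]), 0)
    for p in range(n) for q in range(n)
)
print(ok)  # True
```

The $Z$-strings are what make Jordan-Wigner operators non-local on the qubit register; BKSF trades them for locality at the cost of extra qubits.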
• We relate the amount of entanglement required to play linear-system non-local games near-optimally to the hyperlinear profile of finitely-presented groups. By calculating the hyperlinear profile of a certain group, we give an example of a finite non-local game for which the amount of entanglement required to play $\varepsilon$-optimally is at least $\Omega(1/\varepsilon^k)$, for some $k>0$. Since this function approaches infinity as $\varepsilon$ approaches zero, this provides a quantitative version of a theorem of the first author.
• Spekkens' toy theory is a non-contextual hidden variable model with an epistemic restriction, a constraint on what the observer can know about the reality. It has been shown in [3] that for qudits of odd dimensions it is operationally equivalent to stabiliser quantum mechanics by making use of Gross' theory of discrete Wigner functions. This result does not hold in the case of qubits, because of the unavoidable negativity of any Wigner function representation of qubit stabiliser quantum mechanics. In this work we define and characterise the subtheories of Spekkens' theory that are operationally equivalent to subtheories of stabiliser quantum mechanics. We use these Spekkens' subtheories as a unifying framework for the known examples of state-injection schemes where contextuality is an injected resource to reach universal quantum computation. In addition, we prove that, in the case of qubits, stabiliser quantum mechanics can be reduced to a Spekkens' subtheory in the sense that all its objects that do not belong to the Spekkens' subtheory, namely non-covariant Clifford gates, can be injected. This shows that within Spekkens' subtheories we possess the toolbox to perform state-injection of every object outside of them and it suggests that there is no need to use bigger subtheories to reach universal quantum computation via state-injection. We conclude with a novel scheme of computation suggested by our approach which is based on the injection of CCZ states and we also relate different proofs of contextuality to different state injections of non-covariant gates.
• We revisit the Corner Transfer Matrix Renormalization Group (CTMRG) method of Nishino and Okunishi for contracting 2-dimensional tensor networks, and demonstrate that its performance can be substantially improved by determining the tensors using an eigenvalue solver as opposed to the power method used in CTMRG. We also generalize the variational uniform Matrix Product State (VUMPS) ansatz for diagonalizing 1D quantum Hamiltonians to the case of 2D transfer matrices, and discuss similarities with the corner methods. These two new algorithms will be crucial in improving the performance of variational Projected Entangled Pair State (PEPS) methods.
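The power-method-versus-eigensolver distinction the authors exploit can be seen in miniature on a plain symmetric matrix: power iteration converges to the dominant eigenvector only geometrically, at a rate set by the top eigenvalue ratio, while a direct eigensolver returns it outright. A generic linear-algebra sketch (not the CTMRG update itself):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = rng.normal(size=(n, n))
T = A @ A.T            # symmetric positive semidefinite "transfer matrix"

# Power method: convergence rate is governed by lambda_2 / lambda_1.
v = rng.normal(size=n)
v /= np.linalg.norm(v)
for _ in range(1000):
    v = T @ v
    v /= np.linalg.norm(v)

# Direct eigensolver: dominant eigenvector in one call.
evals, evecs = np.linalg.eigh(T)
dominant = evecs[:, -1]

overlap = abs(dominant @ v)
print(overlap)  # close to 1 once the power method has converged
```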
• According to the received conception of physics, a valid physical theory is presumed to describe the objective evolution of a unique external world. However, this assumption is challenged by quantum theory, which indicates that physical systems do not always have objective properties which are simply revealed by measurement. Furthermore, several other conceptual puzzles in the foundations of physics and related fields point to possible limitations of the received perspective and motivate the exploration of alternatives. Thus, here I propose an alternative approach which starts with the concept of "observation" as its primary notion, and does not from the outset assume the existence of a "world" or physical laws. It can be subsumed under a single postulate: Solomonoff induction correctly predicts future observations. I show that the resulting theory suggests a possible explanation for why there are simple computable probabilistic laws in the first place. It predicts the emergence of the notion of an objective external world that has begun in a state of low entropy. It also predicts that observers will typically see the violation of Bell inequalities despite the validity of the no-signalling principle. Moreover, it resolves cosmology's Boltzmann brain problem via a "principle of persistent regularities", and it makes the unusual prediction that the emergent notion of objective external world breaks down in certain extreme situations, yielding phenomena such as "probabilistic zombies". Additionally, it makes in principle concrete predictions for some fundamental conceptual problems relating to the computer simulation of observers. This paper does not claim to exactly describe "how the world works", but it dares to raise the question of whether the first-person perspective may be a more fruitful starting point from which to address certain longstanding fundamental issues.
• We present a strong connection between quantum information and the theory of quantum permutation groups. Specifically we note that the projective permutation matrices corresponding to perfect quantum strategies for the graph isomorphism game are the same notion as the magic unitaries that are used to define the quantum automorphism group of a graph. This connection links quantum groups to the more concrete notion of nonlocal games and physically observable quantum behaviours. In this work, we exploit this by using ideas and results from quantum information in order to prove new results about quantum automorphism groups of graphs, and about quantum permutation groups more generally. In particular, we show that asymptotically almost surely all graphs have trivial quantum automorphism group. Furthermore, we use examples of quantum isomorphic graphs from previous work to construct an infinite family of graphs which are quantum vertex transitive but fail to be vertex transitive, answering a question from the quantum permutation group literature. Our main tool for proving these results is the introduction of orbits and orbitals (orbits on ordered pairs) of quantum permutation groups. We show that the orbitals of a quantum permutation group form a coherent configuration/algebra, a notion from the field of algebraic graph theory. We then prove that the elements of this quantum orbital algebra are exactly the matrices that commute with the magic unitary defining the quantum group. We furthermore show that quantum isomorphic graphs admit an isomorphism of their quantum orbital algebras which maps the adjacency matrix of one graph to that of the other. We hope that this work will encourage new collaborations among the communities of quantum information, quantum groups, and algebraic graph theory.
• As physical implementations of quantum architectures emerge, it is increasingly important to consider the cost of algorithms for practical connectivities between qubits. We show that by using an arrangement of gates that we term the fermionic swap network, we can simulate a Trotter step of the electronic structure Hamiltonian in exactly $N$ depth and with $N^2/2$ two-qubit entangling gates, and prepare arbitrary Slater determinants in at most $N/2$ depth, all assuming only a minimal, linearly connected architecture. We conjecture that no explicit Trotter step of the electronic structure Hamiltonian is possible with fewer entangling gates, even with arbitrary connectivities. These results represent significant practical improvements on the cost of all current proposed algorithms for both variational and phase estimation based simulation of quantum chemistry.
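The depth-$N$ claim rests on the combinatorial structure of the swap network: an odd-even transposition (brick-wall) pattern of nearest-neighbour swaps on a line brings every pair of $N$ labels adjacent exactly once in $N$ layers, using $N(N-1)/2$ swaps. A minimal sketch of that fact (function names are illustrative, not from the paper's code):

```python
def swap_network_layers(n):
    """Odd-even transposition network: n layers of disjoint
    nearest-neighbour swaps on a line of n labels."""
    order = list(range(n))
    layers = []
    met = set()  # unordered pairs that have been adjacent so far
    for layer in range(n):
        start = layer % 2  # alternate even/odd brick-wall layers
        swaps = []
        for i in range(start, n - 1, 2):
            a, b = order[i], order[i + 1]
            met.add(frozenset((a, b)))
            order[i], order[i + 1] = b, a
            swaps.append((i, i + 1))
        layers.append(swaps)
    return layers, met

layers, met = swap_network_layers(6)
# all C(6,2) = 15 pairs become adjacent at some point, using
# 15 = n(n-1)/2 swaps spread over n = 6 layers
```

Since the network ends with the line fully reversed and each adjacent swap removes exactly one inversion, every pair meets exactly once, which is what lets each two-body Trotter term be applied exactly once on neighbouring qubits.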
• Security for machine learning has begun to become a serious issue for present-day applications. An important question remaining is whether emerging quantum technologies will help or hinder the security of machine learning. Here we discuss a number of ways that quantum information can be used to help make quantum classifiers more secure or private. In particular, we demonstrate a form of robust principal component analysis that, under some circumstances, can provide an exponential speedup relative to robust methods used at present. To demonstrate this approach we introduce a linear combinations of unitaries Hamiltonian simulation method that we show functions when given an imprecise Hamiltonian oracle, which may be of independent interest. We also introduce a new quantum approach for bagging and boosting that can use quantum superposition over the classifiers or splits of the training set to aggregate over many more models than would be possible classically. Finally, we provide a private form of $k$--means clustering that can be used to prevent an all-powerful adversary from learning more than a small fraction of a bit from any user. These examples show the role that quantum technologies can play in the security of ML and vice versa. This illustrates that quantum computing can provide useful advantages to machine learning apart from speedups.
• Within the last two decades, Quantum Technologies (QT) have made tremendous progress, moving from Nobel Prize award-winning experiments on quantum physics into a cross-disciplinary field of applied research. Technologies are being developed now that explicitly address individual quantum states and make use of the 'strange' quantum properties, such as superposition and entanglement. The field comprises four domains: Quantum Communication, Quantum Simulation, Quantum Computation, and Quantum Sensing and Metrology. One success factor for the rapid advancement of QT is a well-aligned global research community with a common understanding of the challenges and goals. In Europe, this community has profited from several coordination projects, which have orchestrated the creation of a 150-page QT Roadmap. This article presents an updated summary of this roadmap. Besides sections on the four domains of QT, we have included sections on Quantum Theory and Software, and on Quantum Control, as both are important areas of research that cut across all four domains. Each section, after a short introduction to the domain, gives an overview on its current status and main challenges and then describes the advances in science and technology foreseen for the next ten years and beyond.
• We give an exposition of the SYK model with several new results. A non-local correction to the Schwarzian effective action is found. The same action is obtained by integrating out the bulk degrees of freedom in a certain variant of dilaton gravity. We also discuss general properties of out-of-time-order correlators.
• These notes describe representations of the universal cover of $\mathrm{SL}(2,\mathbb{R})$ with a view toward applications in physics. Spinors on the hyperbolic plane and the two-dimensional anti-de Sitter space are also discussed.
• In physics, there is the prevailing intuition that we are part of a unique external world, and that the goal of physics is to understand and describe this world. This assumption of the fundamentality of objective reality is often seen as a major prerequisite of any kind of scientific reasoning. However, here I argue that we should consider relaxing this assumption in a specific way in some contexts. Namely, there is a collection of open questions in and around physics that can arguably be addressed in a substantially more consistent and rigorous way if we consider the possibility that the first-person perspective is ultimately more fundamental than our usual notion of external world. These are questions like: which probabilities should an observer assign to future experiences if she is told that she will be simulated on a computer? How should we think of cosmology's Boltzmann brain problem, and what can we learn from the fact that measurements in quantum theory seem to do more than just reveal preexisting properties? Why are there simple computable laws of physics in the first place? This note summarizes a longer companion paper which constructs a mathematically rigorous theory along those lines, suggesting a simple and unified framework (rooted in algorithmic information theory) to address questions like those above. It is not meant as a "theory of everything" (in fact, it predicts its own limitations), but it shows how a notion of objective external world, looking very much like our own, can provably emerge from a starting point in which the first-person perspective is primary, without apriori assumptions on the existence of "laws" or a "physical world". While the ideas here are perfectly compatible with physics as we know it, they imply some quite surprising predictions and suggest that we may want to substantially revise the way we think about some foundational questions.
• It remains an open question whether near-term gate-model quantum computers will offer a quantum advantage for practical applications in the pre-fault-tolerance noise regime. A class of algorithms that has shown some promise in this regard is the so-called classical-quantum hybrid variational algorithms. Here we develop a low-depth quantum algorithm to train quantum Boltzmann machine neural networks using such variational methods. We introduce a method that employs the quantum approximate optimization algorithm as a subroutine in order to approximately sample from Gibbs states of Ising Hamiltonians. We use this approximate Gibbs sampling to train neural networks, for which we demonstrate training convergence for numerically simulated noisy circuits with depolarizing error rates of up to 4%.
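For reference, the target of the approximate Gibbs sampler is the exact Boltzmann distribution over spin configurations. A brute-force toy computation for a 3-spin Ising chain makes the target concrete (an illustrative sketch, not the paper's variational circuit; the couplings are made up):

```python
import itertools
import math

def ising_energy(spins, J, h):
    """Nearest-neighbour Ising energy E = -J sum s_i s_{i+1} - h sum s_i."""
    e = -sum(J * s1 * s2 for s1, s2 in zip(spins, spins[1:]))
    e -= sum(h * s for s in spins)
    return e

def gibbs_distribution(n, J, h, beta):
    """Exact Boltzmann distribution p(s) = exp(-beta E(s)) / Z."""
    configs = list(itertools.product([-1, 1], repeat=n))
    weights = [math.exp(-beta * ising_energy(c, J, h)) for c in configs]
    Z = sum(weights)  # partition function
    return {c: w / Z for c, w in zip(configs, weights)}

p = gibbs_distribution(3, J=1.0, h=0.5, beta=1.0)
# probabilities sum to 1; for ferromagnetic J > 0 with a positive
# field, the all-up configuration (1, 1, 1) is the most likely
```

Sampling from such distributions is easy by brute force only for tiny systems; the point of the variational approach is to approximate this for Hamiltonians where the $2^n$-term sum is out of reach.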
• Numerous works have shown that under mild assumptions unitary dynamics inevitably leads to equilibration of physical expectation values if many energy eigenstates contribute to the initial state. Here, we consider systems driven by arbitrary time-dependent Hamiltonians as a protocol to prepare systems that do not equilibrate. We introduce a measure of the resilience against equilibration of such states and show, under natural assumptions, that in order to increase the resilience against equilibration of a given system, one needs to possess a resource system which itself has a large resilience. In this way, we establish a new link between the theory of equilibration and resource theories by quantifying the resilience against equilibration and the resources that are needed to produce it. We connect these findings with insights into local quantum quenches and investigate the (im-)possibility of formulating a second law of equilibration, by studying how resilience can be either only redistributed among subsystems, if these remain completely uncorrelated, or in turn created in a catalytic process if subsystems are allowed to build up some correlations.
• We propose an experimental design for universal continuous-variable quantum computation that incorporates recent innovations in linear-optics-based continuous-variable cluster state generation and cubic-phase gate teleportation. The first ingredient is a protocol for generating the bilayer-square-lattice cluster state (a universal resource state) with temporal modes of light. With this state, measurement-based implementation of Gaussian unitary gates requires only homodyne detection. Second, we describe a measurement device that implements an adaptive cubic-phase gate, up to a random phase-space displacement. It requires a two-step sequence of homodyne measurements and consumes a (non-Gaussian) cubic-phase state.
• Local operations assisted by classical communication (LOCC) constitute the free operations in entanglement theory. Hence, the determination of LOCC transformations is crucial for the understanding of entanglement. We characterize here almost all LOCC transformations among pure states of $n>3$ $d$--level systems with $d>2$. Combined with the analogous results for $n$-qubit states shown in G. Gour, B. Kraus, N. R. Wallach, J. Math. Phys. 58, 092204 (2017) this gives a characterization of almost all local transformations among multipartite pure states. We show that non-trivial LOCC transformations among generic fully entangled pure states are almost never possible. Thus, almost all multipartite states are isolated. They can neither be deterministically obtained from local unitary (LU)-inequivalent states via local operations, nor can they be deterministically transformed to pure fully entangled LU-inequivalent states. In order to derive this result we prove a more general statement, namely that generically a state possesses no non-trivial local symmetry. We show that these results also hold for certain tripartite systems.
• To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. However, it is still challenging to experimentally generate the GKP qubit with the 14.8 dB squeezing level required by existing fault-tolerant quantum computation schemes. To reduce this requirement, we propose a high-threshold fault-tolerant quantum computation with GKP qubits using topologically protected measurement-based quantum computation with the surface code. By harnessing analog information contained in the GKP qubits, we apply analog quantum error correction to the surface code. Furthermore, we develop a method to prevent the squeezing level from decreasing during the construction of the large-scale cluster states for the topologically protected measurement-based quantum computation. We numerically show that the required squeezing level can be relaxed to less than 10 dB, which is within the reach of current experimental technology. Hence, this work considerably alleviates this experimental requirement and takes a step closer to the realization of large-scale quantum computation.
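The role of the squeezing level can be illustrated with the textbook-style estimate for a GKP qubit's per-quadrature error: a Gaussian shift of variance $\sigma^2 = 10^{-s/10}/2$ (vacuum variance $1/2$, squeezing $s$ in dB) is misidentified when it exceeds $\sqrt{\pi}/2$. This is a standard back-of-the-envelope estimate, not the paper's full threshold analysis with analog error correction:

```python
import math

def gkp_error_prob(squeezing_db):
    """Probability that a zero-mean Gaussian shift of variance
    sigma^2 = 10^(-s/10)/2 exceeds sqrt(pi)/2 in magnitude,
    i.e. the shift lands closer to a wrong GKP peak."""
    sigma2 = 0.5 * 10 ** (-squeezing_db / 10)
    a = math.sqrt(math.pi) / 2
    # P(|X| > a) for X ~ N(0, sigma^2)
    return math.erfc(a / math.sqrt(2 * sigma2))

# the error probability falls steeply with squeezing, which is why
# every dB of relaxation in the required level matters experimentally
```

For example, `gkp_error_prob(10)` is already several orders of magnitude below `gkp_error_prob(0)`, which gives a feel for why relaxing the requirement from 14.8 dB to below 10 dB is significant.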
• Alternating minimization heuristics seek to solve a (difficult) global optimization task through iteratively solving a sequence of (much easier) local optimization tasks on different parts (or blocks) of the input parameters. While popular and widely applicable, very few examples of this heuristic are rigorously shown to converge to optimality, and even fewer to do so efficiently. In this paper we present a general framework which is amenable to rigorous analysis, and expose its applicability. Its main feature is that the local optimization domains are each a group of invertible matrices, together naturally acting on tensors, and the optimization problem is minimizing the norm of an input tensor under this joint action. The solution of this optimization problem captures a basic problem in Invariant Theory, called the null-cone problem. This algebraic framework turns out to encompass natural computational problems in combinatorial optimization, algebra, analysis, quantum information theory, and geometric complexity theory. It includes and extends to high dimensions the recent advances on (2-dimensional) operator scaling. Our main result is a fully polynomial time approximation scheme for this general problem, which may be viewed as a multi-dimensional scaling algorithm. This directly leads to progress on some of the problems in the areas above, and a unified view of others. We explain how faster convergence of an algorithm for the same problem will allow resolving central open problems. Our main techniques come from Invariant Theory, and include its rich non-commutative duality theory, and new bounds on the bitsizes of coefficients of invariant polynomials. They enrich the algorithmic toolbox of this very computational field of mathematics, and are directly related to some challenges in geometric complexity theory (GCT).
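The 2-dimensional special case the authors generalize — matrix/operator scaling — already exhibits the alternating-minimization pattern: alternately normalize the rows and the columns of a positive matrix, each step solving its local subproblem exactly. A sketch of the classical Sinkhorn iteration (not the paper's tensor algorithm):

```python
def sinkhorn(A, iters=200):
    """Alternately rescale rows and columns of a positive matrix;
    the iterates converge to a doubly stochastic matrix."""
    B = [row[:] for row in A]
    n = len(B)
    for _ in range(iters):
        for i in range(n):  # local step 1: make every row sum to 1
            s = sum(B[i])
            B[i] = [x / s for x in B[i]]
        for j in range(n):  # local step 2: make every column sum to 1
            s = sum(B[i][j] for i in range(n))
            for i in range(n):
                B[i][j] /= s
    return B

B = sinkhorn([[2.0, 1.0], [1.0, 3.0]])
# row sums and column sums are all close to 1
```

Each local step is trivial, yet the composition solves a non-trivial global scaling problem; the paper's contribution is lifting this pattern from matrices acted on by row/column scalings to tensors acted on by groups of invertible matrices.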
• Here we propose a method for determining if two graphs are isomorphic in polynomial time on a quantum computer. We show that any two isomorphic graphs can be represented as states which share the same equal-angle slice of the Wigner function - a process that on a quantum computer is at most quartic, and possibly linear, in the number of nodes. We conjecture that only isomorphic graphs have this property. The method is then, for each graph: (i) create a quantum graph state using existing protocols, representing the classical graphs to be compared (ii) measure the state in a restricted phase space using a spin-Wigner function (iii) compare measurement results numbering between one and the square of the number of nodes. As soon as there is a difference (outside of experimental error) the graphs are not isomorphic and the procedure can terminate. We discuss extending this work to the subgraph isomorphism problem that is known to be NP-complete and conjecture that this could be reducible to pseudo-polynomial-time using dynamic programming methods.
• We ask whether the knowledge of a single eigenstate of a local Hamiltonian is sufficient to uniquely determine the Hamiltonian. We present evidence that the answer is "yes" for generic local Hamiltonians, given either the ground state or an excited eigenstate. In fact, knowing only the two-point equal-time correlation functions of local observables with respect to the eigenstate should generically be sufficient to exactly recover the Hamiltonian for finite-size systems, with numerical algorithms that run in a time that is polynomial in the system size. We also investigate the large-system limit, the sensitivity of the reconstruction to error, and the case when correlation functions are only known for observables on a fixed sub-region. Numerical demonstrations support the results for one-dimensional spin chains. For the purpose of our analysis, we define the "$k$-correlation spectrum" of a state, which reveals properties of local correlations in the state and may be of independent interest.
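The recovery idea can be demonstrated numerically: in the span of local operators, the covariance matrix $M_{ab} = \tfrac12\langle\{h_a,h_b\}\rangle - \langle h_a\rangle\langle h_b\rangle$ with respect to an eigenstate annihilates the coefficient vector of $H$ (an eigenstate has zero energy variance), and generically nothing else once the system is large enough for the constraint counting to work out. A toy sketch along these lines for a 5-qubit chain (my own illustration with an arbitrary seed, not the paper's code):

```python
import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
P = [X, Y, Z]

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

def local_basis(n):
    """1-local and nearest-neighbour 2-local Pauli terms on an n-qubit chain."""
    basis = []
    for site in range(n):
        for p in P:
            ops = [I2] * n
            ops[site] = p
            basis.append(kron_all(ops))
    for site in range(n - 1):
        for p, q in itertools.product(P, repeat=2):
            ops = [I2] * n
            ops[site], ops[site + 1] = p, q
            basis.append(kron_all(ops))
    return basis

rng = np.random.default_rng(7)
n = 5
basis = local_basis(n)                      # 12n - 9 = 51 operators
c_true = rng.normal(size=len(basis))
H = sum(c * h for c, h in zip(c_true, basis))

_, vecs = np.linalg.eigh(H)
psi = vecs[:, 0]                            # any eigenstate works

# covariance matrix over the local operator basis; the coefficient
# vector of H lies in its kernel because Var_psi(H) = 0 exactly
ev = np.array([np.real(psi.conj() @ (h @ psi)) for h in basis])
M = np.array([[np.real(psi.conj() @ (ha @ (hb @ psi))) for hb in basis]
              for ha in basis])
M = 0.5 * (M + M.T) - np.outer(ev, ev)      # symmetrize (numerical hygiene)

w, v = np.linalg.eigh(M)
c_rec = v[:, 0]                             # kernel direction (smallest eigenvalue)

overlap = abs(c_rec @ c_true) / np.linalg.norm(c_true)
# overlap close to 1: H is recovered up to overall scale
```

Note that very small chains would not work here: the number of zero-variance constraints must exceed the dimension of the local operator span for the kernel to be generically one-dimensional, which is the finite-size version of the paper's genericity claim.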
• Quantum information theory has considerably helped in the understanding of quantum many-body systems. Since the early 2000s various measures of quantum entanglement have been employed to characterise the features of the ground and excited states of quantum matter. Furthermore, the scaling of entanglement entropy with the size of a system has inspired modifications to numerical techniques for the simulation of many-body systems leading to the, now established, area of tensor networks. However, the knowledge and the methods brought by quantum information do not end with bipartite entanglement. There are other forms of quantum correlations that emerge "for free" in the ground and thermal states of condensed matter models and that can be exploited as resources for quantum technologies. The goal of this work is to review the most recent developments on quantum correlations in quantum many-body systems, focussing on multipartite entanglement, quantum nonlocality, quantum discord, and mutual information, but also other nonclassical resources like quantum coherence. Moreover, we also discuss applications of quantum metrology in quantum many-body systems.
• Recent progress in building large-scale quantum devices for exploring quantum computing and simulation paradigms has relied upon effective tools for achieving and maintaining good experimental parameters, i.e. tuning up devices. In many cases, including in quantum-dot based architectures, the parameter space grows substantially with the number of qubits, and may become a limit to scalability. Fortunately, machine learning techniques for pattern recognition and image classification using so-called deep neural networks have shown surprising successes for computer-aided understanding of complex systems. In this work, we use deep and convolutional neural networks to characterize states and charge configurations of semiconductor quantum dot arrays when one can only measure a current-voltage characteristic of transport (here conductance) through such a device. For simplicity, we model a semiconductor nanowire connected to leads and capacitively coupled to depletion gates using the Thomas-Fermi approximation and Coulomb blockade physics. We then generate labelled training data for the neural networks, and find at least $90\,\%$ accuracy for charge and state identification for single and double dots purely from the dependence of the nanowire's conductance upon gate voltages. Using these characterization networks, we can then optimize the parameter space to achieve a desired configuration of the array, a technique we call 'auto-tuning'. Finally, we show how such techniques can be implemented in an experimental setting by applying our approach to an experimental data set, and outline further problems in this domain, from using charge sensing data to extensions to full one and two-dimensional arrays, that can be tackled with machine learning.
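The labelled training data comes from simulated transport physics; the simplest caricature of the underlying charge configurations is a single dot in the constant-interaction picture, whose ground-state occupancy steps up one electron at a time as a plunger gate voltage is swept. A toy sketch of that staircase (my own illustration with made-up parameters, not the paper's Thomas-Fermi simulation):

```python
def dot_occupancy(vg, ec=1.0, alpha=1.0, nmax=20):
    """Electron number minimizing the constant-interaction charging
    energy E(n) = Ec * (n - alpha * Vg)^2 for a single quantum dot;
    alpha is the (dimensionless) gate lever arm."""
    return min(range(nmax + 1), key=lambda n: ec * (n - alpha * vg) ** 2)

# sweep the gate from Vg = 0 to 10 in steps of 0.1
trace = [dot_occupancy(v / 10) for v in range(0, 101)]
# occupancy forms a non-decreasing staircase in gate voltage, with
# charge transitions at half-integer values of alpha * Vg
```

It is precisely such occupancy maps (in one and two gate-voltage dimensions, seen only through the conductance) that the classification networks learn to label.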
• In this work, we show how the topological order of the Toric Code appears when the lattice on which it is defined discretizes a three-dimensional torus. To this end, we present a pedagogical review of the traditional two-dimensional Toric Code, with an emphasis on how its quasiparticles are conceived and transported. With that, we want to make clear not only how this same conception and transportation of quasiparticles carry over to the three-dimensional model, but also how topology controls the degeneracy of the ground state in this new situation.
• Ernst Specker considered a particular feature of quantum theory to be especially fundamental, namely that pairwise joint measurability implies global joint measurability for sharp measurements [vimeo.com/52923835 (2009)]. To date, it seemed that Specker's principle failed to single out quantum theory from the space of all general probabilistic theories. In particular, consistent exclusivity --- an important consequence of Specker's principle --- is satisfied by both quantum and almost quantum correlations. Here, we identify another statistical implication of Specker's principle besides consistent exclusivity, which possibly holds for almost quantum correlations. However, our main result asserts that Specker's principle cannot be satisfied in any theory that yields almost quantum models.
• We generate and characterise entangled states of a register of 20 individually-controlled qubits, where each qubit is encoded into the electronic state of a trapped atomic ion. Entanglement is generated amongst the qubits during the out-of-equilibrium dynamics of an Ising-type Hamiltonian, engineered via laser fields. Since the qubit-qubit interactions decay with distance, entanglement is generated at early times predominantly between neighbouring groups of qubits. We characterise entanglement between these groups by designing and applying witnesses for genuine multipartite entanglement (GME). Our results show that, during the dynamical evolution, all neighbouring pairs, triplets and quadruplets of qubits simultaneously develop GME. GME is detected in groups containing up to 5 qubits. Witnessing GME in larger groups of qubits in our system remains an open challenge.

Andrew W Simmons Dec 14 2017 11:40 UTC

Hi Māris, you might well be right! Stabiliser QM with more qubits, I think, is also a good candidate for further investigation to see if we can close the gap a bit more between the analytical upper bound and the example-based lower bound.

Planat Dec 14 2017 08:43 UTC

Interesting work. You don't require that the polar space has to be symplectic. In ordinary quantum mechanics the commutation of n-qudit observables is ruled by a symplectic polar space. For two qubits, it is the generalized quadrangle GQ(2,2). Incidentally, in https://arxiv.org/abs/1601.04865 this pro

...(continued)
Māris Ozols Dec 12 2017 19:41 UTC

$E_7$ also has some nice properties in this regard (in fact, it might be even better than $E_8$). See https://arxiv.org/abs/1009.1195.

Danial Dervovic Dec 10 2017 15:25 UTC

Thank you for the insightful observations, Simon.

In response to the first point, there is a very short comment in the Discussion section to this effect. I felt an explicit dependence on $T$ as opposed to the diameter would make the implications of the result more clear. Namely, lifting can mix

...(continued)
Simon Apers Dec 09 2017 07:54 UTC

Thanks for the comment, Simone. A couple of observations:

- We noticed that Danial's result can in fact be proved more directly using the theorem that is used from ([arXiv:1705.08253][1]): by choosing the quantum walk Cesaro average as the goal distribution, it can be attained with a lifted Markov

...(continued)
Simone Severini Dec 07 2017 02:51 UTC

Closely related to

Simon Apers, Alain Sarlette, Francesco Ticozzi, Simulation of Quantum Walks and Fast Mixing with Classical Processes, https://scirate.com/arxiv/1712.01609

In my opinion, lifting is a good opportunity to put on a rigorous footing the relationship between classical and quantu

...(continued)
Mark Everitt Dec 05 2017 07:50 UTC

Thank you for the helpful feedback.

Yes these are 14 pairs of graphs [This is an edit - I previously mistakenly posted that it was 7 pairs] that share the same equal angle slice. We have only just started looking at the properties of these graphs. Thank you for the link - that is a really useful r

...(continued)
Simone Severini Dec 05 2017 01:13 UTC

When looking at matrix spectra as graph invariants, it is easy to see that the spectrum of the adjacency matrix or the Laplacian fails for 4 vertices. Also, the spectrum of the adjacency matrix together with the spectrum of the adjacency matrix of the complement fail for 7 vertices. So, the algorith

...(continued)
Mark Everitt Dec 04 2017 17:52 UTC

Thank you for this - its the sort of feedback we were after.

We have found 14 examples of 8 node graphs (of the possible 12,346) that break our conjecture.

We are looking into this now to get some understanding and see if we can overcome this issue. We will check to see if the failure of our algo

...(continued)
Dave Bacon Dec 02 2017 00:08 UTC