# Top arXiv papers

• We give semidefinite program (SDP) quantum solvers with an exponential speed-up over classical ones. Specifically, we consider SDP instances with $m$ constraint matrices of dimension $n$, each of rank at most $r$, and assume that the input matrices of the SDP are given as quantum states (after a suitable normalization). Then we show there is a quantum algorithm that solves the SDP feasibility problem with accuracy $\epsilon$ by using $\sqrt{m}\log m\cdot\text{poly}(\log n,r,\epsilon^{-1})$ quantum gates. The dependence on $n$ provides an exponential improvement over the work of Brandão and Svore and the work of van Apeldoorn et al., and demonstrates an exponential quantum speed-up when $m$ and $r$ are small. We apply the SDP solver to the problem of learning a good description of a quantum state with respect to a set of measurements: Given $m$ measurements and a supply of copies of an unknown state $\rho$, we show we can find in time $\sqrt{m}\log m\cdot\text{poly}(\log n,r,\epsilon^{-1})$ a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as $\rho$ on the $m$ measurements up to error $\epsilon$. The density matrix obtained is an approximation to the maximum entropy state consistent with the measurement data considered in Jaynes' principle. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians with a poly-logarithmic dependence on its dimension based on the techniques developed in quantum principal component analysis, which could be of independent interest.
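The classical matrix multiplicative weight skeleton that such solvers "quantize" can be sketched in a few lines. This is a toy illustration, not the quantum algorithm: the constraint format (find a density matrix $X$ with $\mathrm{Tr}(A_j X) \le b_j$), the learning-rate schedule, and the iteration cap are all illustrative assumptions.

```python
import numpy as np

def mmw_feasibility(constraints, bounds, eps, iters=200):
    """Matrix multiplicative weights for SDP feasibility: look for a density
    matrix X with Tr(A_j X) <= b_j + eps for every constraint pair (A_j, b_j)."""
    n = constraints[0].shape[0]
    H = np.zeros((n, n))                      # accumulated penalty Hamiltonian
    eta = eps / 2                             # illustrative learning rate
    for _ in range(iters):
        w, V = np.linalg.eigh(H)              # Gibbs-state candidate X ~ exp(-eta*H)
        p = np.exp(-eta * (w - w.min()))
        X = (V * (p / p.sum())) @ V.conj().T
        violated = next((A for A, b in zip(constraints, bounds)
                         if np.trace(A @ X).real > b + eps), None)
        if violated is None:
            return X                          # approximately feasible point
        H = H + violated                      # penalize the violated direction
    return None                               # gave up: likely infeasible
```

Each round forms a Gibbs state of the accumulated penalty Hamiltonian; the speed-up in the paper comes from preparing exactly this kind of Gibbs state coherently for low-rank Hamiltonians.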
• We construct a Hamiltonian whose dynamics simulate the dynamics of every other Hamiltonian up to exponentially long times in the system size. The Hamiltonian is time-independent, local, one-dimensional, and translation invariant. As a consequence, we show (under plausible computational complexity assumptions) that the circuit complexity of the unitary dynamics under this Hamiltonian grows steadily with time up to an exponential value in system size. This result makes progress on a recent conjecture by Susskind, in the context of the AdS/CFT correspondence, that the time evolution of the thermofield double state of two conformal field theories with a holographic dual has circuit complexity increasing linearly in time, up to exponential time.
• Sep 28 2017 quant-ph arXiv:1709.09622v1
We consider a problem we call StateIsomorphism: given two quantum states of n qubits, can one be obtained from the other by rearranging the qubit subsystems? Our main goal is to study the complexity of this problem, which is a natural quantum generalisation of the problem StringIsomorphism. We show that StateIsomorphism is at least as hard as GraphIsomorphism, and show that these problems have a similar structure by presenting evidence to suggest that StateIsomorphism is an intermediate problem for QCMA. In particular, we show that the complement of the problem, StateNonIsomorphism, has a two message quantum interactive proof system, and that this proof system can be made statistical zero-knowledge. We consider also StabilizerStateIsomorphism (SSI) and MixedStateIsomorphism (MSI), showing that the complement of SSI has a quantum interactive proof system that uses classical communication only, and that MSI is QSZK-hard.
• In order to build a large scale quantum computer, one must be able to correct errors extremely fast. We design a fast decoding algorithm for topological codes that corrects Pauli errors, erasures, and combinations of both. Our algorithm has a worst case complexity of $O(n \alpha(n))$, where $n$ is the number of physical qubits and $\alpha$ is the inverse of Ackermann's function, which is very slowly growing. For all practical purposes, $\alpha(n) \leq 3$. We prove that our algorithm performs optimally for errors of weight up to $(d-1)/2$ and for loss of up to $d-1$ qubits, where $d$ is the minimum distance of the code. Numerically, we obtain a threshold of $9.9\%$ for the 2d-toric code with perfect syndrome measurements and $2.6\%$ with faulty measurements.
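The near-linear $O(n\,\alpha(n))$ bound is the hallmark of the union-find data structure with union by rank and path compression. A generic sketch of that primitive follows; the cluster-growing decoder logic built on top of it is not shown.

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression; m operations
    on n elements cost O(m * alpha(n)), alpha being the inverse Ackermann function."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:      # locate the root of x's tree
            root = self.parent[root]
        while self.parent[x] != root:         # path compression: reattach to root
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:     # union by rank keeps trees shallow
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```

In the decoder, merging error clusters as the syndrome is explored reduces exactly to a sequence of `union` and `find` calls, which is where the $\alpha(n)$ factor enters.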
• We study thermal states of strongly interacting quantum spin chains and prove that those can be represented in terms of convex combinations of matrix product states. Apart from revealing new features of the entanglement structure of Gibbs states our results provide a theoretical justification for the use of White's algorithm of minimally entangled typical thermal states. Furthermore, we shed new light on time dependent matrix product state algorithms which yield hydrodynamical descriptions of the underlying dynamics.
• We study how well topological quantum codes can tolerate coherent noise caused by systematic unitary errors such as unwanted $Z$-rotations. Our main result is an efficient algorithm for simulating quantum error correction protocols based on the 2D surface code in the presence of coherent errors. The algorithm has runtime $O(n^2)$, where $n$ is the number of physical qubits. It allows us to simulate systems with more than one thousand qubits and obtain the first error threshold estimates for several toy models of coherent noise. Numerical results are reported for storage of logical states subject to $Z$-rotation errors and for logical state preparation with general $SU(2)$ errors. We observe that for large code distances the effective logical-level noise is well-approximated by random Pauli errors even though the physical-level noise is coherent. Our algorithm works by mapping the surface code to a system of Majorana fermions.
• We present two particular decoding procedures for reconstructing a quantum state from the Hawking radiation in the Hayden-Preskill thought experiment. We work in an idealized setting and represent the black hole and its entangled partner by $n$ EPR pairs. The first procedure teleports the state thrown into the black hole to an outside observer by post-selecting on the condition that a sufficient number of EPR pairs remain undisturbed. The probability of this favorable event scales as $1/d_{A}^2$, where $d_A$ is the Hilbert space dimension for the input state. The second procedure is deterministic and combines the previous idea with Grover's search. The decoding complexity is $\mathcal{O}(d_{A}\mathcal{C})$ where $\mathcal{C}$ is the size of the quantum circuit implementing the unitary evolution operator $U$ of the black hole. As with the original (non-constructive) decoding scheme, our algorithms utilize scrambling, where the decay of out-of-time-order correlators (OTOCs) guarantees faithful state recovery.
• With the current rate of progress in quantum computing technologies, 50-qubit systems will soon become a reality. To assess, refine and advance the design and control of these devices, one needs a means to test and evaluate their fidelity. This in turn requires the capability of computing ideal quantum state amplitudes for devices of such sizes and larger. In this study, we present a new approach for this task that significantly extends the boundaries of what can be classically computed. We demonstrate our method by presenting results obtained from a calculation of the complete set of output amplitudes of a universal random circuit with depth 27 in a 2D lattice of $7 \times 7$ qubits. We further present results obtained by calculating an arbitrarily selected slice of $2^{37}$ amplitudes of a universal random circuit with depth 23 in a 2D lattice of $8 \times 7$ qubits. Such calculations were previously thought to be impossible due to impracticable memory requirements. Using the methods presented in this paper, the above simulations required 4.5 and 3.0 TB of memory, respectively, to store calculations, which is well within the limits of existing classical computers.
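For intuition on why memory is the bottleneck: a dense statevector simulator must store all $2^n$ amplitudes and update them gate by gate, which is what becomes infeasible near 49 qubits and what the paper's slicing technique circumvents. The helper below is an illustrative sketch, not the authors' code.

```python
import numpy as np

def apply_gate(state, gate, targets):
    """Apply a k-qubit gate (2^k x 2^k matrix) to a statevector stored as a
    (2,)*n tensor. Memory cost is the full 2^n amplitudes held in `state`."""
    k = len(targets)
    state = np.moveaxis(state, targets, list(range(k)))   # bring targets forward
    shape = state.shape
    state = (gate @ state.reshape(2**k, -1)).reshape(shape)
    return np.moveaxis(state, list(range(k)), targets)    # restore axis order
```

For example, applying a CNOT to the two-qubit state $|10\rangle$ yields $|11\rangle$; at 49 qubits the same array would already occupy several petabytes in double precision.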
• We show that the maximum success probability of players sharing quantum entanglement in a two-player game with classical questions of logarithmic length and classical answers of constant length is NP-hard to approximate to within constant factors. As a corollary, the inclusion $\mathrm{NEXP}\subseteq\mathrm{MIP}^*$, first shown in [IV12] with three provers, holds with two provers only. The proof is based on a simpler, improved analysis of the low-degree test of Raz and Safra (STOC'97) against two entangled provers.
• Sep 20 2017 quant-ph arXiv:1709.06139v1
Pure state entanglement transformations have been thought of as irreversible, with reversible transformations generally only possible in the limit of many copies. Here, we show that reversible entanglement transformations do not require processing on the many copy level, but can instead be undertaken on individual systems, provided the amount of entanglement which is produced or consumed is allowed to fluctuate. We derive necessary and sufficient conditions for entanglement manipulations in this case. As a corollary, we derive an equation which quantifies the fluctuations of entanglement, which is formally identical to the Jarzynski fluctuation equality found in thermodynamics. One can also relate a forward entanglement transformation to its reverse process in terms of the entanglement cost of such a transformation, in a manner equivalent to the Crooks relation. We show that a strong converse theorem for entanglement transformations is related to the second law of thermodynamics, while the fact that the Schmidt rank of an entangled state cannot increase is related to the third law of thermodynamics. Achievability of the protocols is shown by introducing an entanglement battery, a device which stores entanglement and uses an amount of entanglement that is allowed to fluctuate but with an average cost which is still optimal. This allows us to also solve the problem of partial entanglement recovery, and in fact, we show that entanglement is fully recovered. Allowing the amount of consumed entanglement to fluctuate also leads to improved and optimal entanglement dilution protocols.
• Proving that the parent Hamiltonian of a Projected Entangled Pair State (PEPS) is gapped remains an important open problem. We take a step forward in solving this problem by showing that if the boundary state of any rectangular subregion is a quasi-local Gibbs state of the virtual indices, then the parent Hamiltonian of the bulk 2D PEPS has a constant gap in the thermodynamic limit. The proof employs the martingale method of nearly commuting projectors, and exploits a result of Araki on the robustness of one dimensional Gibbs states. Our result provides one of the first rigorous connections between boundary theories and dynamical properties in an interacting many body system. We show that the proof can be extended to MPO-injective PEPS, and speculate that the assumption on the locality of the boundary Hamiltonian follows from exponential decay of correlations in the bulk.
• The stabilizer rewiring algorithm (SRA) recently proposed in arXiv:1707.09403 gives a method for constructing a transversal circuit mapping between any pair of stabilizer codes. As gates along this circuit are applied, the initial code is deformed through a series of intermediate codes before reaching the final code. The circuit is then fault-tolerant if the full set of intermediate codes have high distance. We propose a randomized variant of the SRA and show that with at most linear overhead, there exists a path of deformations which preserves the code distance throughout the circuit. Furthermore, we show that a random path will almost always suffice, and so the circuit can be constructed explicitly using the SRA. This allows constructive, low overhead fault-tolerant code switching between arbitrary stabilizer error-correcting codes.
• It is well known that correlations predicted by quantum mechanics cannot be explained by any classical (local-realistic) theory. The relative strength of quantum and classical correlations is usually studied in the context of Bell inequalities, but this tells us little about the geometry of the quantum set of correlations. In other words, we do not have good intuition about what the quantum set actually looks like. In this paper we study the geometry of the quantum set using standard tools from convex geometry. We find explicit examples of rather counter-intuitive features in the simplest non-trivial Bell scenario (two parties, two inputs and two outputs) and illustrate them using 2-dimensional slice plots. We also show that even more complex features appear in Bell scenarios with more inputs or more parties. Finally, we discuss the limitations that the geometry of the quantum set imposes on the task of self-testing.
• Estimation of Shannon and Rényi entropies of unknown discrete distributions is a fundamental problem in statistical property testing and an active research topic in both theoretical computer science and information theory. Tight bounds on the number of samples to estimate these entropies have been established in the classical setting, while little is known about their quantum counterparts. In this paper, we give the first quantum algorithms for estimating $\alpha$-Rényi entropies (Shannon entropy being 1-Rényi entropy). In particular, we demonstrate a quadratic quantum speedup for Shannon entropy estimation and a generic quantum speedup for $\alpha$-Rényi entropy estimation for all $\alpha\geq 0$, including a tight bound for the collision-entropy (2-Rényi entropy). We also provide quantum upper bounds for extreme cases such as the Hartley entropy (i.e., the logarithm of the support size of a distribution, corresponding to $\alpha=0$) and the min-entropy case (i.e., $\alpha=+\infty$), as well as the Kullback-Leibler divergence between two distributions. Moreover, we complement our results with quantum lower bounds on $\alpha$-Rényi entropy estimation for all $\alpha\geq 0$.
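For reference, the classical plug-in estimator that such quantum algorithms are compared against follows directly from the definition $H_\alpha(p) = \frac{1}{1-\alpha}\log\sum_i p_i^\alpha$; the function names below are ours.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """alpha-Renyi entropy of a distribution p, in nats. Recovers Shannon
    entropy as alpha -> 1, Hartley entropy at alpha=0, min-entropy at alpha=inf."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                               # restrict to the support
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))          # Shannon (1-Renyi) limit
    if alpha == 0:
        return np.log(p.size)                  # Hartley: log of support size
    if np.isinf(alpha):
        return -np.log(p.max())                # min-entropy
    return np.log(np.sum(p ** alpha)) / (1 - alpha)

def plugin_estimate(samples, alpha, k):
    """Naive plug-in estimator from i.i.d. samples over {0, ..., k-1}."""
    freq = np.bincount(samples, minlength=k) / len(samples)
    return renyi_entropy(freq, alpha)
```

The classical sample complexity of this estimator is what the quantum query upper and lower bounds in the paper are measured against.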
• A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is $2^\nu$ for a positive integer $\nu$, then one can construct a Calderbank-Shor-Steane (CSS) code, where the $X$-stabilizer space is the divisible classical code, that admits a transversal gate in the $\nu$-th level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor $2^{\nu+1}$ and code distance $d$ from any CSS code of code distance $d$ and divisor $2^\nu$ where the transversal $X$ is a nontrivial logical operator. The encoding rate of the new code is approximately $d$ times smaller than that of the old code. In particular, for large $d$ and $\nu \ge 2$, our construction yields a CSS code of parameters $[[O(d^{\nu-1}), \Omega(d),d]]$ admitting a transversal gate at the $\nu$-th level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal $T$-gates. The resulting tower of codes contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
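The basic divisibility condition is easy to verify by brute force for small classical codes; a sketch, where the generator-matrix convention and function name are ours:

```python
import itertools
import numpy as np

def is_divisible(G, divisor):
    """Check that every word of the binary linear code generated by the rows
    of G has Hamming weight divisible by `divisor` (brute force over 2^k words)."""
    k = G.shape[0]
    for coeffs in itertools.product([0, 1], repeat=k):
        word = np.mod(np.array(coeffs) @ G, 2)    # codeword for this combination
        if word.sum() % divisor != 0:
            return False
    return True
```

For instance, the length-4 repetition code is doubly even (divisor 4), while a length-3 repetition code has odd-weight words and fails even divisor 2.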
• Quantum error correction was invented to allow for fault-tolerant quantum computation. Topologically ordered systems turned out to give a natural physical realization of quantum error correcting codes (QECC) in their ground spaces. More recently, in the context of the AdS/CFT correspondence, it has been argued that eigenstates of CFTs with a holographic dual should also form QECCs. These two examples lead to the question of how generally eigenstates of many-body models form quantum codes. In this work we establish new connections between quantum chaos and translation-invariance in many-body spin systems, on the one hand, and approximate quantum error correcting codes (AQECC), on the other hand. We first observe that quantum chaotic systems exhibiting the Eigenstate Thermalization Hypothesis (ETH) have eigenstates forming quantum error-correcting codes. Then we show that AQECC can be obtained probabilistically from the spectrum of every translation-invariant spin chain, even for integrable models, by taking translation-invariant energy eigenstates. Applying this result to 1D classical systems, we show that local symmetries can be used to construct parent Hamiltonians which embed these codes into the low-energy subspace of gapless 1D spin chains. As explicit examples we obtain local AQECC in the ground space of the 1D ferromagnetic Heisenberg model and the Motzkin spin chain model with periodic boundary conditions, thereby yielding non-stabilizer codes in the ground space and low energy subspace of physically plausible 1D gapless models.
• Quantum computers promise to efficiently solve not only problems believed to be intractable for classical computers, but also problems for which verifying the solution is also considered intractable. This raises the question of how one can check whether quantum computers are indeed producing correct results. This task, known as quantum verification, has been highlighted as a significant challenge on the road to scalable quantum computing technology. We review the most significant approaches to quantum verification and compare them in terms of structure, complexity and required resources. We also comment on the use of cryptographic techniques which, for many of the presented protocols, has proven extremely useful in performing verification. Finally, we discuss issues related to fault tolerance, experimental implementations and the outlook for future protocols.
• Neural Networks Quantum States have been recently introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between Neural Networks Quantum States in the form of Restricted Boltzmann Machines and some classes of Tensor Network states in arbitrary dimension. In particular we demonstrate that short-range Restricted Boltzmann Machines are Entangled Plaquette States, while fully connected Restricted Boltzmann Machines are String-Bond States with a non-local geometry and low bond dimension. These results shed light on the underlying architecture of Restricted Boltzmann Machines and their efficiency at representing many-body quantum states. String-Bond States also provide a generic way of enhancing the power of Neural Networks Quantum States and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of Tensor Networks and the efficiency of Neural Network Quantum States into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional Tensor Networks, we show that Neural Networks Quantum States and their String-Bond States extension can describe a lattice Fractional Quantum Hall state exactly. In addition, we provide numerical evidence that Neural Networks Quantum States can approximate a chiral spin liquid with better accuracy than Entangled Plaquette States and local String-Bond States. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of String-Bond States as a tool in more traditional machine learning applications.
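The Restricted Boltzmann Machine ansatz analyzed here has a closed form once the hidden units are traced out: $\psi(\sigma) = e^{\sum_i a_i\sigma_i}\prod_j 2\cosh\!\big(b_j + \sum_i W_{ij}\sigma_i\big)$. A minimal sketch of evaluating one amplitude follows, using real parameters for simplicity (in general they are complex):

```python
import numpy as np

def rbm_amplitude(sigma, a, b, W):
    """Unnormalized RBM wave-function amplitude for a spin configuration sigma
    (entries +/-1): visible biases a, hidden biases b, couplings W (n x m).
    The product of cosh factors is the analytic trace over the hidden units."""
    theta = b + sigma @ W                     # effective field on each hidden unit
    return np.exp(a @ sigma) * np.prod(2 * np.cosh(theta))
```

A fully connected `W` gives the String-Bond-State-like non-local geometry discussed in the abstract, while restricting `W` to a local window yields the short-range (Entangled Plaquette State) case.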
• Universal fault-tolerant quantum computers will require error-free execution of long sequences of quantum gate operations, which is expected to involve millions of physical qubits. Before the full power of such machines will be available, near-term quantum devices will provide several hundred qubits and limited error correction. Still, there is a realistic prospect to run useful algorithms within the limited circuit depth of such devices. Particularly promising are optimization algorithms that follow a hybrid approach: the aim is to steer a highly entangled state on a quantum system to a target state that minimizes a cost function via variation of some gate parameters. This variational approach can be used both for classical optimization problems as well as for problems in quantum chemistry. The challenge is to converge to the target state given the limited coherence time and connectivity of the qubits. In this context, the quantum volume as a metric to compare the power of near-term quantum devices is discussed. With focus on chemistry applications, a general description of variational algorithms is provided and the mapping from fermions to qubits is explained. Coupled-cluster and heuristic trial wave-functions are considered for efficiently finding molecular ground states. Furthermore, simple error-mitigation schemes are introduced that could improve the accuracy of determining ground-state energies. Advancing these techniques may lead to near-term demonstrations of useful quantum computation with systems containing several hundred qubits.
• Oct 03 2017 quant-ph cond-mat.str-el math-ph math.MP arXiv:1710.00464v1
The theory of anyon systems, as modular functors topologically and unitary modular tensor categories algebraically, is mature. To go beyond anyons, our first step is the interplay of anyons with conventional group symmetry due to the paramount importance of group symmetry in physics. This led to the theory of symmetry-enriched topological order. Another direction is the boundary physics of topological phases, both gapless as in the fractional quantum Hall physics and gapped as in toric code. A more speculative and interesting direction is the study of Banados-Teitelboim-Zanelli black holes and quantum gravity in $3d$. The clearly defined physical and mathematical issues require a far-reaching generalization of anyons and seem to be within reach. In this short survey, I will first cover the extensions of anyon theory to symmetry defects and gapped boundaries. Then I will discuss a desired generalization of anyons to anyon-like objects---the Banados-Teitelboim-Zanelli black holes---in $3d$ quantum gravity.
• We analyze the performance of classical and quantum search algorithms from a thermodynamic perspective, focusing on resources such as time, energy, and memory size. We consider two examples that are relevant to post-quantum cryptography: Grover's search algorithm, and the quantum algorithm for collision-finding. Using Bennett's "Brownian" model of low-power reversible computation, we show classical algorithms that have the same asymptotic energy consumption as these quantum algorithms. Thus, the quantum advantage in query complexity does not imply a reduction in these thermodynamic resource costs. In addition, we present realistic estimates of the resource costs of quantum and classical search, for near-future computing technologies. We find that, if memory is cheap, classical exhaustive search can be surprisingly competitive with Grover's algorithm.
• We extend quantum Stein's lemma in asymmetric quantum hypothesis testing to composite null and alternative hypotheses. As our main result, we show that the asymptotic error exponent for testing convex combinations of quantum states $\rho^{\otimes n}$ against convex combinations of quantum states $\sigma^{\otimes n}$ is given by a regularized quantum relative entropy distance formula. We prove that in general such a regularization is needed but also discuss various settings where our formula as well as extensions thereof become single-letter. This includes a novel operational interpretation of the relative entropy of coherence in terms of hypothesis testing. For our proof, we start from the composite Stein's lemma for classical probability distributions and lift the result to the non-commutative setting by only using elementary properties of quantum entropy. Finally, our findings also imply an improved Markov type lower bound on the quantum conditional mutual information in terms of the regularized quantum relative entropy - featuring an explicit and universal recovery map.
• Studying general quantum many-body systems is one of the major challenges in modern physics because it requires an amount of computational resources that scales exponentially with the size of the system. Simulating the evolution of a state, or even storing its description, rapidly becomes intractable for exact classical algorithms. Recently, machine learning techniques, in the form of restricted Boltzmann machines, have been proposed as a way to efficiently represent certain quantum states with applications in state tomography and ground state estimation. Here, we introduce a new representation of states based on variational autoencoders. Variational autoencoders are a type of generative model in the form of a neural network. We probe the power of this representation by encoding probability distributions associated with states from different classes. Our simulations show that deep networks give a better representation for states that are hard to sample from, while providing no benefit for random states. This suggests that the probability distributions associated with hard quantum states might have a compositional structure that can be exploited by layered neural networks. Specifically, we consider the learnability of a class of quantum states introduced by Fefferman and Umans. Such states are provably hard for classical computers to sample from, but not for quantum ones, under plausible computational complexity assumptions. The good level of compression achieved for hard states suggests these methods can be suitable for characterising states of the size expected in first generation quantum hardware.
• Oct 13 2017 quant-ph arXiv:1710.04228v1
Is it always possible to explain random stochastic transitions between states of a finite-dimensional system as arising from the deterministic quantum evolution of the system? If not, then what is the minimal amount of randomness required by quantum theory to explain a given stochastic process? Here, we address this problem by studying possible coherifications of a quantum channel $\Phi$, i.e., we look for channels $\Phi^{\mathcal{C}}$ that induce the same classical transitions $T$, but are "more coherent". To quantify the coherence of a channel $\Phi$ we measure the coherence of the corresponding Jamiołkowski state $J_{\Phi}$. We show that the classical transition matrix $T$ can be coherified to reversible unitary dynamics if and only if $T$ is unistochastic. Otherwise the Jamiołkowski state $J_\Phi^{\mathcal{C}}$ of the optimally coherified channel is mixed, and the dynamics must necessarily be irreversible. To assess the extent to which an optimal process $\Phi^{\mathcal{C}}$ is indeterministic we find explicit bounds on the entropy and purity of $J_\Phi^{\mathcal{C}}$, and relate the latter to the unitarity of $\Phi^{\mathcal{C}}$. We also find optimal coherifications for several classes of channels, including all one-qubit channels. Finally, we provide a non-optimal coherification procedure that works for an arbitrary channel $\Phi$ and reduces its rank (the minimal number of required Kraus operators) from $d^2$ to $d$.
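The unistochasticity condition is concrete in the qubit case, where every bistochastic matrix is unistochastic. A sketch constructing the coherifying unitary for a symmetric $2\times 2$ transition matrix follows; for $d\ge 3$ such a unitary need not exist, which is exactly the obstruction the optimal coherification quantifies.

```python
import numpy as np

def coherify_2x2(t):
    """Given the 2x2 bistochastic matrix T = [[t, 1-t], [1-t, t]], return a
    real unitary U with |U_ij|^2 = T_ij, i.e. a reversible coherification."""
    s, c = np.sqrt(t), np.sqrt(1 - t)
    U = np.array([[s, c],
                  [c, -s]])                       # a rotation-reflection matrix
    assert np.allclose(U @ U.T, np.eye(2))        # unitarity
    assert np.allclose(np.abs(U)**2, [[t, 1-t], [1-t, t]])
    return U
```

Applying this `U` as a unitary channel reproduces the classical transitions `T` on the computational basis while keeping the dynamics fully coherent.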
• Techniques for approximately contracting tensor networks are limited in how efficiently they can make use of parallel computing resources. In this work we demonstrate and characterize a Monte Carlo approach to the tensor network renormalization group method which can be used straightforwardly on modern computing architectures. We demonstrate the efficiency of the technique and show that Monte Carlo tensor network renormalization provides an attractive path to improving the accuracy of a wide class of challenging computations while also providing useful estimates of uncertainty and a statistical guarantee of unbiased results.
• We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain with gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer including gate imperfections and ion heating. Our simulations show a consistent requirement of a two-qubit gate fidelity of > 99.9% for logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis of the error subsets from the importance sampling method used to approximate the logical error rates in this paper to gain insight into which error sources are particularly detrimental to error correction.
• Grid (or comb) states are an interesting class of bosonic states introduced by Gottesman, Kitaev and Preskill to encode a qubit into an oscillator. A method to generate or `breed' a grid state from Schrödinger cat states using beam splitters and homodyne measurements is known, but this method requires post-selection. In this paper we show how post-processing of the measurement data can be used to entirely remove the need for post-selection, making the scheme much more viable. We bound the asymptotic behavior of the breeding procedure and demonstrate the efficacy of the method numerically.
• We describe an efficient quantum algorithm for the quantum Schur transform. The Schur transform is an operation on a quantum computer that maps the standard computational basis to a basis composed of irreducible representations of the unitary and symmetric groups. We simplify and extend the algorithm of Bacon, Chuang, and Harrow, and provide a new practical construction as well as sharp theoretical and practical analyses. Our algorithm decomposes the Schur transform on $n$ qubits into $O(n^4 \log(n/\epsilon))$ operators in the Clifford+T fault-tolerant gate set. We extend our qubit algorithm to decompose the Schur transform on $n$ qudits of dimension $d$ into $O(d^{1+p} n^{2d+1} \log^p(dn/\epsilon))$ primitive operators from any universal gate set, for $p \approx 3.97$.
• Oct 11 2017 quant-ph arXiv:1710.03599v1
Quantum computing allows for the potential of significant advancements in both the speed and the capacity of widely-used machine learning algorithms. In this paper, we introduce quantum algorithms for a recurrent neural network, the Hopfield network, which can be used for pattern recognition, reconstruction, and optimization as a realization of a content addressable memory system. We show that an exponentially large network can be stored in a polynomial number of quantum bits by encoding the network into the amplitudes of quantum states. By introducing a new classical technique for operating such a network, we can leverage quantum techniques to obtain a quantum computational complexity that is logarithmic in the dimension of the data. This potentially yields an exponential speed-up in comparison to classical approaches. We present an application of our method as a genetic sequence recognizer.
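The classical network being quantized stores patterns by the Hebbian rule and recalls them by iterated thresholding. A minimal sketch for context (the quantum encoding of the network into state amplitudes is not shown):

```python
import numpy as np

def hebbian_weights(patterns):
    """Store +/-1 patterns (rows) via the Hebbian rule W = (1/n) sum_p x_p x_p^T,
    with zero diagonal so no neuron reinforces itself."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, x, steps=10):
    """Synchronous update x <- sign(W x) until a fixed point; converges to a
    stored pattern when the corruption is small enough."""
    for _ in range(steps):
        new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(new, x):
            break
        x = new
    return x
```

Storing the $n \times n$ weight matrix in quantum amplitudes, as the paper proposes, is what allows an exponentially large network to fit in polynomially many qubits.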
• In this article we show the duality between tensor networks and undirected graphical models with discrete variables. We study tensor networks on hypergraphs, which we call tensor hypernetworks. We show that the tensor hypernetwork on a hypergraph exactly corresponds to the graphical model given by the dual hypergraph. We translate various notions under duality. For example, marginalization in a graphical model is dual to contraction in the tensor network. Algorithms also translate under duality. We show that belief propagation corresponds to a known algorithm for tensor network contraction. This article is a reminder that the research areas of graphical models and tensor networks can benefit from interaction.
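The marginalization-equals-contraction dictionary can be seen in two lines of `einsum`. The toy two-factor model below is ours; shapes and names are illustrative.

```python
import numpy as np

# A two-factor graphical model p(x, y, z) ∝ f(x, y) g(y, z) is a tensor
# network in which f and g share the index y. The partition function is the
# full contraction, and marginalizing a variable = contracting its leg.
rng = np.random.default_rng(0)
f = rng.random((2, 3))                     # factor on (x, y)
g = rng.random((3, 4))                     # factor on (y, z)

Z = np.einsum('xy,yz->', f, g)             # partition function: contract all legs
p_xy = np.einsum('xy,yz->xy', f, g) / Z    # marginal p(x, y): contract out z only
```

Belief propagation on the dual hypergraph performs exactly such contractions locally, which is the correspondence the article spells out.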
• An important approach to fault-tolerant quantum computation is protecting the logical information using quantum error correction. Usually, the logical information is in the form of logical qubits, which are encoded in physical qubits using quantum error correction codes. Compared with qubit quantum computation, fermionic quantum computation has advantages in quantum simulations of fermionic systems, e.g. molecules. In this paper, we show that fermionic quantum computation can be universal and fault-tolerant if we encode logical Majorana fermions in physical Majorana fermions. We take a color code as an example to demonstrate the universal set of fault-tolerant operations on logical Majorana fermions, and we numerically find that the fault-tolerance threshold is about 0.8%.
• The Data Processing Inequality (DPI) says that the Umegaki relative entropy $S(\rho||\sigma) := {\rm Tr}[\rho(\log \rho - \log \sigma)]$ is non-increasing under the action of completely positive trace preserving (CPTP) maps. Let ${\mathcal M}$ be a finite dimensional von Neumann algebra and ${\mathcal N}$ a von Neumann subalgebra of it. Let ${\mathcal E}_\tau$ be the tracial conditional expectation from ${\mathcal M}$ onto ${\mathcal N}$. For density matrices $\rho$ and $\sigma$ in ${\mathcal M}$, let $\rho_{\mathcal N} := {\mathcal E}_\tau \rho$ and $\sigma_{\mathcal N} := {\mathcal E}_\tau \sigma$. Since ${\mathcal E}_\tau$ is CPTP, the DPI says that $S(\rho||\sigma) \geq S(\rho_{\mathcal N}||\sigma_{\mathcal N})$, and the general case is readily deduced from this. A theorem of Petz says that there is equality if and only if $\sigma = {\mathcal R}_\rho(\sigma_{\mathcal N})$, where ${\mathcal R}_\rho$ is the Petz recovery map, which is dual to the Accardi-Cecchini coarse graining operator ${\mathcal A}_\rho$ from ${\mathcal M}$ to ${\mathcal N}$. In its simplest form, our bound is $$S(\rho||\sigma) - S(\rho_{\mathcal N}||\sigma_{\mathcal N}) \geq \left(\frac{\pi}{8}\right)^4 \|\Delta_{\sigma,\rho}\|^{-2} \|{\mathcal R}_\rho(\sigma_{\mathcal N}) - \sigma\|_1^4,$$ where $\Delta_{\sigma,\rho}$ is the relative modular operator. We also prove related results for various quasi-relative entropies. Explicitly describing the solution set of the Petz equation $\sigma = {\mathcal R}_\rho(\sigma_{\mathcal N})$ amounts to determining the set of fixed points of the Accardi-Cecchini coarse graining map. Building on previous work, we provide a thoroughly detailed description of the set of solutions of the Petz equation, and obtain all of our results in a simple, self-contained manner.
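The DPI that the stated bound strengthens can be verified numerically for a simple tracial conditional expectation, the pinching onto the diagonal subalgebra. This is only a toy check of the sign of the entropy gap, not the paper's quantitative stability bound:

```python
import numpy as np

# Numerical check of the DPI for the Umegaki relative entropy under the
# pinching map (the tracial conditional expectation onto the diagonal
# subalgebra), using random full-rank density matrices.

def random_density(d, rng):
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    m = a @ a.conj().T
    return m / np.trace(m).real

def rel_entropy(rho, sigma):
    # S(rho||sigma) = Tr[rho (log rho - log sigma)] via eigendecompositions
    lr, vr = np.linalg.eigh(rho)
    ls, vs = np.linalg.eigh(sigma)
    log_rho = vr @ np.diag(np.log(lr)) @ vr.conj().T
    log_sigma = vs @ np.diag(np.log(ls)) @ vs.conj().T
    return np.trace(rho @ (log_rho - log_sigma)).real

rng = np.random.default_rng(1)
rho, sigma = random_density(4, rng), random_density(4, rng)
pinch = lambda m: np.diag(np.diag(m))   # E_tau: keep only the diagonal
gap = rel_entropy(rho, sigma) - rel_entropy(pinch(rho), pinch(sigma))
print(gap >= 0)  # True: the DPI gap is non-negative
```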
• The asymptotic restriction problem for tensors is to decide, given tensors $s$ and $t$, whether the $n$th tensor power of $s$ can be obtained from the $(n+o(n))$th tensor power of $t$ by applying linear maps to the tensor legs (this we call restriction), when $n$ goes to infinity. In this context, Volker Strassen, striving to understand the complexity of matrix multiplication, introduced in 1986 the asymptotic spectrum of tensors. Essentially, the asymptotic restriction problem for a family of tensors $X$, closed under direct sum and tensor product, reduces to finding all maps from $X$ to the reals that are monotone under restriction, normalised on diagonal tensors, additive under direct sum and multiplicative under tensor product, which Strassen named spectral points. Strassen created the support functionals, which are spectral points for oblique tensors, a strict subfamily of all tensors. Universal spectral points are spectral points for the family of all tensors. The construction of nontrivial universal spectral points has been an open problem for more than thirty years. We construct for the first time a family of nontrivial universal spectral points over the complex numbers, using quantum entropy and covariants: the quantum functionals. In the process we connect the asymptotic spectrum to the quantum marginal problem and to the entanglement polytope. To demonstrate the asymptotic spectrum, we reprove (in hindsight) recent results on the cap set problem by reducing this problem to computing the asymptotic spectrum of the reduced polynomial multiplication tensor, a prime example of Strassen. A better understanding of our universal spectral points construction may lead to further progress on related questions. We additionally show that the quantum functionals are an upper bound on the recently introduced (multi-)slice rank.
• We provide a complete set of game-theoretic conditions equivalent to the existence of a transformation from one quantum channel into another one, by means of classically correlated pre/post processing maps only. Such conditions naturally induce tests to certify that a quantum memory is capable of storing quantum information, as opposed to memories that can be simulated by measurement and state preparation (corresponding to entanglement-breaking channels). These results are formulated as a resource theory of genuine quantum memories (correlated in time), mirroring the resource theory of entanglement in quantum states (correlated spatially). As the set of conditions is complete, the corresponding tests are faithful, in the sense that any non entanglement-breaking channel can be certified. Moreover, they only require the assumption of trusted inputs, known to be unavoidable for quantum channel verification. As such, the tests we propose are intrinsically different from the usual process tomography, for which the probes of both the input and the output of the channel must be trusted. An explicit construction is provided and shown to be experimentally realizable, even in the presence of arbitrarily strong losses in the memory or detectors.
• As of today, no one can tell when a universal quantum computer with thousands of logical quantum bits (qubits) will be built. At present, most quantum computer prototypes involve fewer than ten individually controllable qubits, and exist only in laboratories, owing either to the great cost of the devices or to their professional maintenance requirements. Moreover, scientists believe that quantum computers will never replace our daily, every-minute use of classical computers, but will rather serve as a substantial addition to the classical ones when tackling certain particular problems. For these two reasons, cloud-based quantum computing is anticipated to be the most useful and accessible way for public users to experience the power of quantum computing. As an initial attempt, IBM Q launched an influential cloud service on a superconducting quantum processor in 2016, but no other platform has followed up yet. Here, we report our new cloud quantum computing service -- NMRCloudQ (http://nmrcloudq.com/zh-hans/), where nuclear magnetic resonance, one of the pioneering platforms with mature techniques in experimental quantum computing, plays the role of implementing the computing tasks. Our service provides a comprehensive software environment preconfigured with a list of quantum information processing packages, and aims to be freely accessible both to amateurs who look forward to keeping pace with this quantum era and to professionals who are interested in carrying out real quantum computing experiments in person. In the current version, four qubits are already usable, with an average 1.26% single-qubit gate error rate and 1.77% two-qubit controlled-NOT gate error rate measured via randomized benchmarking tests. Improved control precision as well as a new seven-qubit processor are also in preparation and will be available later.
• Measurement-based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feed-forward corrections, produces a random unitary ensemble that is an \epsilon-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement-based quantum computing, closely related to the brickwork state.
• We introduce the Markovian matrix product density operator, which is a special subclass of the matrix product density operator. We show that the von Neumann entropy of such an ansatz can be computed efficiently on a classical computer. This is possible because one can efficiently certify that the global state forms an approximate quantum Markov chain by verifying a set of inequalities. Each of these inequalities can be verified in time that scales polynomially with the bond dimension and the local Hilbert space dimension. The total number of inequalities scales linearly with the system size. We use this fact to study the complexity of computing the minimum free energy of local Hamiltonians at finite temperature. To this end, we introduce the free energy problem as a generalization of the local Hamiltonian problem, and study its complexity for a class of Hamiltonians that describe quantum spin chains. The corresponding free energy problem at finite temperature is in NP if the Gibbs state of such a Hamiltonian forms an approximate quantum Markov chain with an error that decays exponentially with the width of the conditioning subsystem.
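The Markov-chain condition underlying the ansatz can be illustrated by computing the conditional mutual information $I(A:C|B) = S(AB) + S(BC) - S(B) - S(ABC)$, which vanishes exactly for quantum Markov chains. The three-qubit state below is a made-up example chosen so that it vanishes:

```python
import numpy as np

# A state is an (approximate) quantum Markov chain A-B-C when
# I(A:C|B) = S(AB) + S(BC) - S(B) - S(ABC) is (near) zero. Toy check
# on three qubits where C is uncorrelated, so I(A:C|B) = 0 exactly.

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

def partial_trace(rho, keep, dims):
    # Trace out every subsystem not listed in `keep`.
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for i in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=i, axis2=i + rho.ndim // 2)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho_ab = np.outer(bell, bell)                  # entangled A-B pair
rho_c = np.diag([0.7, 0.3])                    # qubit C, uncorrelated
rho = np.kron(rho_ab, rho_c)
dims = (2, 2, 2)
cmi = (entropy(partial_trace(rho, [0, 1], dims))
       + entropy(partial_trace(rho, [1, 2], dims))
       - entropy(partial_trace(rho, [1], dims))
       - entropy(rho))
print(abs(cmi) < 1e-9)  # True: an exact quantum Markov chain
```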
• Sep 21 2017 quant-ph arXiv:1709.06648v1
We improve the number of T gates needed to perform an n-bit adder from 8n + O(1) to 4n + O(1). We do so via a "temporary logical-AND" construction, which uses four T gates to store the logical-AND of two qubits into an ancilla and zero T gates to later erase the ancilla. Temporary logical-ANDs are a generally useful tool when optimizing T-counts. They can be applied to integer arithmetic, modular arithmetic, rotation synthesis, the quantum Fourier transform, Shor's algorithm, Grover oracles, and many other circuits. Because T gates dominate the cost of quantum computation based on the surface code, and temporary logical-ANDs are widely applicable, our constructions represent a significant reduction in projected costs of quantum computation. We also present an n-bit controlled adder circuit with T-count of 8n + O(1), a temporary adder that can be computed for the same cost as the normal adder but whose result can be kept until it is later uncomputed without using T gates, and discuss some other constructions whose T-count is improved by the temporary logical-AND.
• Fault-tolerant quantum computation (FTQC) schemes that use multi-qubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement of a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data code blocks, which are generally difficult to prepare if the code size is large. Previously we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes based on classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; in reality, however, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault-tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be high for general CSS codes of arbitrary size. Ancilla preparation for the [[23,1,7]] quantum Golay code is numerically studied in detail through Monte Carlo simulation. The results support the validity of the protocol when the gate failure rate is reasonably low. To the best of our knowledge, this approach is the first attempt to prepare general large block stabilizer states free of correlated errors for FTQC in a fault-tolerant and efficient manner.
• The theory of the asymptotic manipulation of pure bipartite quantum systems can be considered completely understood: The rates at which bipartite entangled states can be asymptotically transformed into each other are fully determined by a single number each, the respective entanglement entropy. In the multi-partite setting, similar questions of the optimally achievable rates of transforming one pure state into another are notoriously open. This seems particularly unfortunate in the light of the revived interest in such questions due to the perspective of experimentally realizing multi-partite quantum networks. In this work, we report substantial progress by deriving surprisingly simple upper and lower bounds on the rates that can be achieved in asymptotic multi-partite entanglement transformations. These bounds are based on and develop ideas of entanglement combing, state merging, and assisted entanglement distillation. We identify cases where the bounds coincide and hence provide the exact rates. As an example, we bound rates at which resource states for the cryptographic scheme of quantum secret sharing can be distilled from arbitrary pure tri-partite quantum states, providing further scope for quantum internet applications beyond point-to-point.
• Recently, it has been well recognized that hypothesis testing has deep relations with other topics in quantum information theory as well as in classical information theory. These relations enable us to derive precise evaluations in the finite-length setting. However, the usefulness of hypothesis testing is not limited to information-theoretic topics. For example, it can be used for verification of entangled states and quantum computers, as well as for guaranteeing the security of keys generated via quantum key distribution. In this talk, we overview these kinds of applications of hypothesis testing.
• Quantum Markov semigroups characterize the time evolution of an important class of open quantum systems. Studying convergence properties of such a semigroup, and determining concentration properties of its invariant state, have been the focus of much research. Quantum versions of functional inequalities (like the modified logarithmic Sobolev and Poincaré inequalities) and the so-called transportation cost inequalities, have proved to be essential for this purpose. Classical functional and transportation cost inequalities are seen to arise from a single geometric inequality, called the Ricci lower bound, via an inequality which interpolates between them. The latter is called the HWI-inequality, where the letters I, W and H are, respectively, acronyms for the Fisher information (arising in the modified logarithmic Sobolev inequality), the so-called Wasserstein distance (arising in the transportation cost inequality) and the relative entropy (or Boltzmann H function) arising in both. Hence, classically, all the above inequalities and the implications between them form a remarkable picture which relates elements from diverse mathematical fields, such as Riemannian geometry, information theory, optimal transport theory, Markov processes, concentration of measure, and convexity theory. Here we consider a quantum version of the Ricci lower bound introduced by Carlen and Maas, and prove that it implies a quantum HWI inequality from which the quantum functional and transportation cost inequalities follow. Our results hence establish that the unifying picture of the classical setting carries over to the quantum one.
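As a point of reference for the quantum generalization described above, the classical HWI inequality (Otto-Villani form, stated here only as a sketch; the precise regularity hypotheses are in the literature) under a Ricci lower bound $\mathrm{Ric} \geq K$ reads $$H(\mu|\nu) \leq W_2(\mu,\nu)\sqrt{I(\mu|\nu)} - \frac{K}{2} W_2(\mu,\nu)^2,$$ where $H$ is the relative entropy, $W_2$ the quadratic Wasserstein distance, and $I$ the Fisher information. For $K>0$, maximizing the right-hand side over $W_2$ yields $H \leq I/(2K)$, the (modified) logarithmic Sobolev inequality, which is the interpolation the abstract refers to.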
• Fundamental questions in chemistry and physics may never be answered due to the exponential complexity of the underlying quantum phenomena. A desire to overcome this challenge has sparked a new industry of quantum technologies with the promise that engineered quantum systems can address these hard problems. A key step towards demonstrating such a system will be performing a computation beyond the capabilities of any classical computer, achieving so-called quantum supremacy. Here, using 9 superconducting qubits, we demonstrate an immediate path towards quantum supremacy. By individually tuning the qubit parameters, we are able to generate thousands of unique Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert-space. As the number of qubits in the algorithm is varied, the system continues to explore the exponentially growing number of states. Combining these large datasets with techniques from machine learning allows us to construct a model which accurately predicts the measured probabilities. We demonstrate an application of these algorithms by systematically increasing the disorder and observing a transition from delocalized states to localized states. By extending these results to a system of 50 qubits, we hope to address scientific questions that are beyond the capabilities of any classical computer.
• We study variants of the Mermin--Peres Magic Square and Magic Pentagram with outputs over alphabets of size d. We show that these games have unique winning strategies requiring 2 and 3 pairs of maximally entangled qudits, respectively. We also show that this uniqueness is robust to small perturbations, and we show the same for a certain n-fold product of these games. These games are the first nonlocal games which robustly self-test maximally entangled qudits for dimensions other than powers of 2. In order to prove our result, we extend the representation-theoretic framework of Cleve, Liu, and Slofstra (Journal of Mathematical Physics 58.1 (2017): 012202) to apply to linear constraint games over $\mathbb Z_d$ for $d \geq 2$. We package our main argument into a general self-testing theorem which can be applied to various linear constraint games.
• Purification is a powerful technique in quantum physics whereby a mixed quantum state is extended to a pure state on a larger system. This process is not unique, and in systems composed of many degrees of freedom, one natural purification is the one with minimal entanglement. Here we study the entropy of the minimally entangled purification, called the entanglement of purification, in three model systems: an Ising spin chain, conformal field theories holographically dual to Einstein gravity, and random stabilizer tensor networks. We conjecture values for the entanglement of purification in all these models, and we support our conjectures with a variety of numerical and analytical results. We find that such minimally entangled purifications have a number of applications, from enhancing entanglement-based tensor network methods for describing mixed states to elucidating novel aspects of the emergence of geometry from entanglement in the AdS/CFT correspondence.
• The resemblance between the methods used in studying quantum many-body physics and in machine learning has drawn considerable attention. In particular, tensor networks (TNs) and deep learning architectures bear striking similarities, to the extent that TNs can be used for machine learning. Previous results used one-dimensional TNs in image recognition, showing limited scalability and a high bond dimension. In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz (MERA). This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning. While keeping the TN unitary in the training phase, TN states can be defined that optimally encode each class of the images into a quantum many-body state. We study the quantum features of the TN states, including quantum entanglement and fidelity. We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks. Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods.
• Oct 06 2017 hep-th cond-mat.other hep-ph arXiv:1710.01791v2
I discuss gauge and global symmetries in particle physics, condensed matter physics, and quantum gravity. In a modern understanding, global symmetries are approximate and gauge symmetries may be emergent. (Based on a lecture at the April, 2016 meeting of the American Physical Society in Salt Lake City, Utah.)
• Many experiments in the field of quantum foundations seek to adjudicate between quantum theory and speculative alternatives to it. To do so, one must analyse the experimental data in a manner that does not presume the correctness of the quantum formalism. The mathematical framework of generalized probabilistic theories (GPTs) provides a means of doing so. We present a scheme for determining what GPTs are consistent with a given set of experimental data. It proceeds by performing tomography on the preparations and measurements in a self-consistent manner, i.e., without presuming a prior characterization of either. We illustrate the scheme by analyzing experimental data for a large set of preparations and measurements on the polarization degree of freedom of a single photon. We find that the smallest and largest GPT state spaces consistent with our data are a pair of polytopes, each approximating the shape of the Bloch sphere and having a volume ratio of $0.977 \pm 0.001$, which provides a quantitative bound on the scope for deviations from quantum theory. We also demonstrate how our scheme can be used to bound the extent to which nature might be more nonlocal than quantum theory predicts, as well as the extent to which it might be more or less contextual. Specifically, we find that the maximal violation of the CHSH inequality can be at most $(1.3 \pm 0.1)\%$ greater than the quantum prediction, and the maximal violation of a particular noncontextuality inequality cannot differ from the quantum prediction by more than this factor on either side.
• Correlator product states (CPS) are a powerful and very broad class of states for quantum lattice systems whose amplitudes can be sampled exactly and efficiently. They work by gluing together states of overlapping clusters of sites on the lattice, called correlators. Recently, Carleo and Troyer [Science 355, 602 (2017)] introduced a new type of sampleable ansatz called neural-network quantum states (NQS), inspired by the restricted Boltzmann machines used in machine learning. By employing the formalism of tensor networks we show that NQS are a special form of CPS with novel properties. Diagrammatically, a number of simple observations become transparent. Namely, NQS are CPS built from extensively sized GHZ-form correlators, which are related to a canonical polyadic decomposition of a tensor, making them uniquely unbiased geometrically. Another immediate implication of the equivalence to CPS is that we are able to formulate exact NQS representations for a wide range of paradigmatic states, including superpositions of weighted-graph states, the Laughlin state, toric code states, and the resonating valence bond state. These examples reveal the potential of using higher-dimensional hidden units and a second hidden layer in NQS. The major outlook of this study is the elevation of NQS to correlator operators, allowing them to enhance conventional well-established variational Monte Carlo approaches for strongly correlated fermions.
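The correlator structure comes from tracing out the binary hidden units of the RBM, which factorizes each NQS amplitude into a product of $2\cosh$ terms, one per hidden unit. The snippet below (with random placeholder weights, not a trained model) checks this closed form against a brute-force sum over hidden configurations:

```python
import numpy as np

# Tracing out the +/-1 hidden units of an RBM wavefunction gives the
# closed form psi(s) = exp(a.s) * prod_j 2*cosh(b_j + (W s)_j).

rng = np.random.default_rng(2)
n_visible, n_hidden = 4, 3
a = rng.standard_normal(n_visible)              # visible biases
b = rng.standard_normal(n_hidden)               # hidden biases
W = rng.standard_normal((n_hidden, n_visible))  # couplings

def nqs_amplitude(s):
    s = np.asarray(s, dtype=float)
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

# Brute-force sum over the 2^3 hidden configurations agrees with it.
s = np.array([1, -1, 1, -1], dtype=float)
brute = 0.0
for bits in np.ndindex(2, 2, 2):
    h = 2 * np.array(bits) - 1                  # hidden spins in {-1, +1}
    brute += np.exp(a @ s + b @ h + h @ (W @ s))
print(np.isclose(brute, nqs_amplitude(s)))  # True
```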
• Edge theories of symmetry-protected topological phases are well-known to possess global symmetry anomalies. In this work we focus on two-dimensional bosonic phases protected by an on-site symmetry and analyse the corresponding edge anomalies in more detail. Physical interpretations of the anomaly in terms of an obstruction to orbifolding and constructing symmetry-preserving boundaries are connected to the cohomology classification of symmetry-protected phases in two dimensions. Using the tensor network and matrix product state formalism we numerically illustrate our arguments and discuss computational detection schemes to identify symmetry-protected order in a ground state wave function.

Siddhartha Das Oct 06 2017 03:18 UTC

Here is a work in related direction: "Unification of Bell, Leggett-Garg and Kochen-Specker inequalities: Hybrid spatio-temporal inequalities", Europhysics Letters 104, 60006 (2013), which may be relevant to the discussions in your paper. [https://arxiv.org/abs/1308.0270]

Bin Shi Oct 05 2017 00:07 UTC

Welcome to give the comments for this paper!

Bassam Helou Sep 22 2017 17:21 UTC

The initial version of the article does not adequately and clearly explain how certain equations demonstrate whether a particular interpretation of QM violates the no-signaling condition.
A revised and improved version is scheduled to appear on September 25.

James Wootton Sep 21 2017 05:41 UTC

What does this imply for https://scirate.com/arxiv/1608.00263? I'm guessing they still regard it as valid (it is ref [14]), but just too hard to implement for now.

Ben Criger Sep 08 2017 08:09 UTC

Oh look, there's another technique for decoding surface codes subject to X/Z correlated errors: https://scirate.com/arxiv/1709.02154

Aram Harrow Sep 06 2017 07:54 UTC

The paper only applies to conformal field theories, and such a result cannot hold for more general 1-D systems by 0705.4077 and other papers (assuming standard complexity theory conjectures).

Felix Leditzky Sep 05 2017 21:27 UTC

Thanks for the clarification, Philippe!

Philippe Faist Sep 05 2017 21:09 UTC

Hi Felix, thanks for the good question.

We've found it more convenient to consider trace-nonincreasing and $\Gamma$-sub-preserving maps (and this is justified by the fact that they can be dilated to fully trace-preserving and $\Gamma$-preserving maps on a larger system). The issue arises because

...(continued)