# Top arXiv papers

• Recent work has shown that quantum computers can compute scattering probabilities in massive quantum field theories, with a run time that is polynomial in the number of particles, their energy, and the desired precision. Here we study a closely related quantum field-theoretical problem: estimating the vacuum-to-vacuum transition amplitude, in the presence of spacetime-dependent classical sources, for a massive scalar field theory in (1+1) dimensions. We show that this problem is BQP-hard; in other words, its solution enables one to solve any problem that is solvable in polynomial time by a quantum computer. Hence, the vacuum-to-vacuum amplitude cannot be accurately estimated by any efficient classical algorithm, even if the field theory is very weakly coupled, unless BQP=BPP. Furthermore, the corresponding decision problem can be solved by a quantum computer in a time scaling polynomially with the number of bits needed to specify the classical source fields, and this problem is therefore BQP-complete. Our construction can be regarded as an idealized architecture for a universal quantum computer in a laboratory system described by massive $\phi^4$ theory coupled to classical spacetime-dependent sources.
• We provide polylog-sparse quantum codes for correcting the erasure channel arbitrarily close to the capacity. Specifically, we provide $[[n, k, d]]$ quantum stabilizer codes that correct for the erasure channel arbitrarily close to the capacity if the erasure probability is at least $0.33$, and with a generating set $\langle S_1, S_2, \ldots, S_{n-k} \rangle$ such that $|S_i|\leq \log^{2+\zeta}(n)$ for all $i$ and for any $\zeta > 0$ with high probability. In this work we show that the result of Delfosse et al. is tight: one can construct capacity-approaching codes with weight almost $O(1)$.
• One of the main aims in the field of quantum simulation is to achieve what is called "quantum supremacy", referring to the experimental realization of a quantum device that computationally outperforms classical computers. In this work, we show that one can devise versatile and feasible schemes of two-dimensional dynamical quantum simulators showing such a quantum supremacy, building on intermediate problems involving IQP circuits. In each of the schemes, an initial product state is prepared, potentially involving an element of randomness as in disordered models, followed by a short time evolution under a translationally invariant Hamiltonian with nearest-neighbor interactions and a mere sampling measurement in a fixed basis. The final state preparation in each scheme is fully efficiently certifiable. We discuss experimental necessities and possible physical architectures, inspired by platforms of cold atoms in optical lattices and a number of others, as well as specific assumptions that enter the complexity-theoretic arguments. This work shows that benchmark settings exhibiting a quantum advantage may require little control in contrast to universal quantum computing.
• The phenomenon of data hiding, i.e. the existence of pairs of states of a bipartite system that are perfectly distinguishable via general entangled measurements yet almost indistinguishable under LOCC, is a distinctive signature of nonclassicality. The relevant figure of merit is the maximal ratio (called data hiding ratio) between the distinguishability norms associated with the two sets of measurements we are comparing, typically all measurements vs LOCC protocols. For a bipartite $n\times n$ quantum system, it is known that the data hiding ratio scales as $n$, i.e. the square root of the dimension of the local state space of density matrices. We show that for bipartite $n_A\times n_B$ systems the maximum data hiding ratio against LOCC protocols is $\Theta\left(\min\{n_A,n_B\}\right)$. This scaling is better than the previously best obtained $\sqrt{n_A n_B}$, and moreover our intuitive argument yields constants close to optimal. In this paper, we investigate data hiding in the more general context of general probabilistic theories (GPTs), an axiomatic framework for physical theories encompassing only the most basic requirements about the predictive power of the theory. The main result of the paper is the determination of the maximal data hiding ratio obtainable in an arbitrary GPT, which is shown to scale linearly in the minimum of the local dimensions. We exhibit an explicit model achieving this bound up to additive constants, finding that quantum mechanics exhibits a data hiding ratio which is only the square root of the maximal one. Our proof rests crucially on an unexpected link between data hiding and the theory of projective and injective tensor products of Banach spaces. Finally, we develop a body of techniques to compute data hiding ratios for a variety of restricted classes of GPTs that support further symmetries.
• We introduce a definition of the Discrete Fourier Transform on Euclidean lattices in $R^n$. This definition does not apply to every lattice, but can be defined on lattices in Systematic Normal Form (SysNF), introduced in [3]. Such lattices form a dense set in the space of $n$-dimensional lattices, so for every lattice the DFT can be applied to a nearby SysNF lattice. As an application we show a quantum algorithm for sampling from discrete distributions on lattices, which extends our ability to sample efficiently from the discrete Gaussian distribution [4] to any distribution that is sufficiently "smooth". We conjecture that studying the eigenvectors of the newly-defined lattice DFT may provide new insights into the structure of lattices, especially regarding hard computational problems.
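The quantum sampling algorithm itself is beyond a short sketch, but the classical target distribution is easy to illustrate. Below is a minimal rejection sampler for the one-dimensional discrete Gaussian $D_{\mathbb{Z},s}$ on the integer lattice; the function name and the proposal width `tail` are illustrative choices, not from the paper:

```python
import math
import random

def sample_discrete_gaussian(s, tail=20):
    """Rejection-sample from the discrete Gaussian D_{Z,s} on the
    integer lattice: Pr[x] proportional to exp(-pi * x^2 / s^2)."""
    while True:
        x = random.randint(-tail, tail)  # wide uniform proposal
        # accept with probability equal to the (unnormalized) target density
        if random.random() < math.exp(-math.pi * x * x / (s * s)):
            return x

random.seed(0)
samples = [sample_discrete_gaussian(3.0) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean)  # the distribution is symmetric about 0, so this is near 0
```

A "sufficiently smooth" distribution in the abstract's sense would replace the Gaussian weight with another slowly varying density; the rejection step carries over unchanged.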
• We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the 4,5-hyperbolic surface code in a phenomenological noise model (as compared to 2.9% for the toric code). In this code family parity checks are of weight 4 and 5 while each qubit participates in 4 different parity checks. We introduce a family of semi-hyperbolic codes which interpolate between the toric code and the 4,5-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.
• A central result in the study of Quantum Hamiltonian Complexity is that the k-Local Hamiltonian problem is QMA-complete. In that problem, we must decide whether the lowest eigenvalue of a Hamiltonian lies below one value or above another, promised that one of these is the case. Given the ground state of the Hamiltonian, a quantum computer can decide this question, even if the ground state itself may not be efficiently quantum preparable. Kitaev's proof of QMA-completeness encodes a unitary quantum circuit in QMA into the ground space of a Hamiltonian. However, we now have quantum computing models based on measurement instead of unitary evolution; furthermore, we can use post-selected measurement as an additional computational tool. In this work, we generalise Kitaev's construction to allow for non-unitary evolution including post-selection. Furthermore, we consider a type of post-selection under which the construction is consistent, which we call tame post-selection. We consider the computational complexity consequences of this construction and then consider how the probability of an event upon which we are post-selecting affects the gap between the ground state energy and the energy of the first excited state of its corresponding Hamiltonian. We provide numerical evidence that the two are not immediately related, by giving a family of circuits where the probability of an event upon which we post-select is exponentially small, but the gap in the energy levels of the Hamiltonian decreases only polynomially.
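The Feynman–Kitaev encoding underlying Kitaev's proof is small enough to check numerically. The sketch below is an illustrative reconstruction of the standard unitary construction (not the paper's non-unitary generalisation): it builds the propagation Hamiltonian for a two-gate, one-qubit circuit plus an input penalty, and verifies that the history state spans a zero-energy ground space with a spectral gap above it.

```python
import numpy as np

# one-qubit circuit: U1 = Hadamard, U2 = X
H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
gates = [H2, X]
T = len(gates)        # number of gates; the clock register has T+1 states
I2 = np.eye(2)

def clock(i, j):
    """|i><j| on the clock register."""
    m = np.zeros((T + 1, T + 1))
    m[i, j] = 1.0
    return m

# Feynman-Kitaev propagation term: penalizes states that do not evolve
# correctly from clock time t to t+1
Hprop = np.zeros((2 * (T + 1), 2 * (T + 1)))
for t, U in enumerate(gates):
    Hprop += 0.5 * (np.kron(clock(t, t), I2)
                    + np.kron(clock(t + 1, t + 1), I2)
                    - np.kron(clock(t + 1, t), U)
                    - np.kron(clock(t, t + 1), U.T.conj()))

# input penalty: at clock time 0 the qubit must be |0>
Hin = np.kron(clock(0, 0), np.diag([0.0, 1.0]))
Hfull = Hprop + Hin

evals = np.linalg.eigvalsh(Hfull)
print(evals[:2])  # unique zero-energy ground state (the history state), gapped
```

Adding an output penalty at the last clock time turns the ground energy itself into a witness for the circuit's acceptance probability, which is the step the abstract's gap discussion concerns.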
• We present an infinite family of protocols to distill magic states for $T$-gates that has a low space overhead and uses an asymptotic number of input magic states to achieve a given target error that is conjectured to be optimal. The space overhead, defined as the ratio between the physical qubits to the number of output magic states, is asymptotically constant, while both the number of input magic states used per output state and the $T$-gate depth of the circuit scale linearly in the logarithm of the target error $\delta$ (up to $\log \log 1/\delta$). Unlike other distillation protocols, this protocol achieves this performance without concatenation and the input magic states are injected at various steps in the circuit rather than all at the start of the circuit. The protocol can be modified to distill magic states for other gates at the third level of the Clifford hierarchy, with the same asymptotic performance. The protocol relies on the construction of weakly self-dual CSS codes with many logical qubits and large distance, allowing us to implement control-SWAPs on multiple qubits. We call this code the "inner code". The control-SWAPs are then used to measure properties of the magic state and detect errors, using another code that we call the "outer code". Alternatively, we use weakly-self dual CSS codes which implement controlled Hadamards for the inner code, reducing circuit depth. We present several specific small examples of this protocol.
• We consider the classical complexity of approximately simulating time evolution under spatially local quadratic bosonic Hamiltonians for time $t$. We obtain upper and lower bounds on the scaling of $t$ with the number of bosons, $n$, for which simulation, cast as a sampling problem, is classically efficient and provably hard, respectively. We view these results in the light of classifying phases of physical systems based on parameters in the Hamiltonian and conjecture a link to dynamical phase transitions. In doing so, we combine ideas from mathematical physics and computational complexity to gain insight into the behavior of condensed matter systems.
• Learning with Errors is one of the fundamental problems in computational learning theory and has in the last years become the cornerstone of post-quantum cryptography. In this work, we study the quantum sample complexity of Learning with Errors and show that there exists an efficient quantum learning algorithm (with polynomial sample and time complexity) for the Learning with Errors problem where the error distribution is the one used in cryptography. While our quantum learning algorithm does not break the LWE-based encryption schemes proposed in the cryptography literature, it does have some interesting implications for cryptography: first, when building an LWE-based scheme, one needs to be careful about the access to the public-key generation algorithm that is given to the adversary; second, our algorithm shows a possible way for attacking LWE-based encryption by using classical samples to approximate the quantum sample state, since then using our quantum learning algorithm would solve LWE.
• In this paper, we consider spin systems in three spatial dimensions, and prove that the local Hamiltonian problem for 3D lattices with face-centered cubic unit cells, 4-local translationally-invariant interactions between spin-3/2 particles and open boundary conditions is QMAEXP-complete. We go beyond a mere embedding of past hard 1D history state constructions, and utilize a classical Wang tiling problem as a binary counter in order to translate one cube side length into a binary description for the verifier input. We further make use of a recently-developed computational model especially well-suited for history state constructions, and combine it with a specific circuit encoding shown to be universal for quantum computation. These novel techniques allow us to significantly lower the local spin dimension, surpassing the best translationally-invariant result to date by two orders of magnitude (in the number of degrees of freedom per coupling). This brings our models on par with the best non-translationally-invariant construction.
• Matrix Product Vectors form the appropriate framework to study and classify one-dimensional quantum systems. In this work, we develop the structure theory of Matrix Product Unitary operators (MPUs) which appear e.g. in the description of time evolutions of one-dimensional systems. We prove that all MPUs have a strict causal cone, making them Quantum Cellular Automata (QCAs), and derive a canonical form for MPUs which relates different MPU representations of the same unitary through a local gauge. We use this canonical form to prove an Index Theorem for MPUs which gives the precise conditions under which two MPUs are adiabatically connected, providing an alternative derivation to that of [Commun. Math. Phys. 310, 419 (2012), arXiv:0910.3675] for QCAs. We also discuss the effect of symmetries on the MPU classification. In particular, we characterize the tensors corresponding to MPUs that are invariant under conjugation, time reversal, or transposition. In the first case, we give a full characterization of all equivalence classes. Finally, we give several examples of MPUs possessing different symmetries.
• We study 't Hooft anomalies of discrete groups in the framework of (1+1)-dimensional multiscale entanglement renormalization ansatz states on the lattice. Using matrix product operators, general topological restrictions on conformal data are derived. An ansatz class allowing for optimization of MERA with an anomalous symmetry is introduced. We utilize this class to numerically study a family of Hamiltonians with a symmetric critical line. Conformal data is obtained for all irreducible projective representations of each anomalous symmetry twist, corresponding to definite topological sectors. It is numerically demonstrated that this line is a protected gapless phase. Finally, we implement a duality transformation between a pair of critical lines using our subclass of MERA.
• We demonstrate that small quantum memories, realized via quantum error correction in multi-qubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10,000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison using the observed error model of a recent 7-qubit trapped-ion experiment.
• We construct a linear system non-local game which can be played perfectly using a limit of finite-dimensional quantum strategies, but which cannot be played perfectly on any finite-dimensional Hilbert space, or even with any tensor-product strategy. In particular, this shows that the set of (tensor-product) quantum correlations is not closed. The constructed non-local game provides another counterexample to the "middle" Tsirelson problem, with a shorter proof than our previous paper (though at the loss of the universal embedding theorem). We also show that it is undecidable to determine if a linear system game can be played perfectly with a limit of finite-dimensional quantum strategies.
• Recently, several intriguing conjectures have been proposed connecting symmetric informationally complete quantum measurements (SIC POVMs, or SICs) and algebraic number theory. These conjectures relate the SICs and their minimal defining algebraic number field. Testing or sharpening these conjectures requires that the SICs are expressed exactly, rather than as numerical approximations. While many exact solutions of SICs have been constructed previously using Gröbner bases, this method has probably been taken as far as is possible with current computer technology. Here we describe a method for converting high-precision numerical solutions into exact ones using an integer relation algorithm in conjunction with the Galois symmetries of a SIC. Using this method we have calculated 69 new exact solutions, including 9 new dimensions where previously only numerical solutions were known, which more than triples the number of known exact solutions. In some cases the solutions require number fields with degrees as high as 12,288. We use these solutions to confirm that they obey the number-theoretic conjectures and we address two questions suggested by the previous work.
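Integer-relation recovery of exact algebraic numbers from high-precision numerics can be illustrated at toy scale. The brute-force search below stands in for a real integer relation algorithm such as PSLQ (which handles the enormous degrees mentioned above only with very high precision arithmetic); it recovers the minimal polynomial of the golden ratio from its floating-point value. The function and its parameters are our own illustrative choices.

```python
import itertools
import math
from functools import reduce

def find_integer_relation(x, degree=2, bound=10, tol=1e-9):
    """Brute-force search for small integer coefficients (c0, ..., cd) with
    sum_k c_k * x**k ~ 0, i.e. a candidate minimal polynomial for x.
    Real integer-relation algorithms (PSLQ, LLL-based) do this far more
    efficiently and at much higher precision."""
    powers = [x ** k for k in range(degree + 1)]
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=degree + 1):
        if coeffs[-1] <= 0:  # normalize: positive leading coefficient
            continue
        if reduce(math.gcd, (abs(c) for c in coeffs)) != 1:
            continue         # skip non-primitive multiples of a relation
        if abs(sum(c * p for c, p in zip(coeffs, powers))) < tol:
            return coeffs
    return None

phi = (1 + math.sqrt(5)) / 2          # a "numerical solution"
rel = find_integer_relation(phi)
print(rel)  # -> (-1, -1, 1), i.e. x^2 - x - 1 = 0
```

In the paper's setting the numerical input is a SIC fiducial computed to hundreds of digits, and the Galois symmetries are used to shrink the search space before the integer relation step.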
• We consider Majorana fermion stabilizer codes with a small number of modes and distance (arXiv:1703.00612v1, Mar 03 2017). We give an upper bound on the number of logical qubits for distance $4$ codes, and we construct Majorana fermion codes similar to the classical Hamming code that saturate this bound. We perform numerical studies and find other distance $4$ and $6$ codes that we conjecture have the largest possible number of logical qubits for the given number of physical Majorana modes. Some of these codes have more logical qubits than any Majorana fermion code derived from a qubit stabilizer code.
• Gate model quantum computers with too many qubits to be simulated by available classical computers are about to arrive. We present a strategy for programming these devices without error correction or compilation. This means that the number of logical qubits is the same as the number of qubits on the device. The hardware determines which pairs of qubits can be addressed by unitary operators. The goal is to build quantum states that solve computational problems such as maximizing a combinatorial objective function or minimizing a Hamiltonian. These problems may not fit naturally on the physical layout of the qubits. Our algorithms use a sequence of parameterized unitaries that sit on the qubit layout to produce quantum states depending on those parameters. Measurements of the objective function (or Hamiltonian) guide the choice of new parameters with the goal of moving the objective function up (or lowering the energy). As an example we consider finding approximate solutions to MaxCut on 3-regular graphs, while the hardware consists of physical qubits laid out on a rectangular grid. We prove that the lowest depth version of the Quantum Approximate Optimization Algorithm will achieve an approximation ratio of at least 0.5293 on all large enough instances, which beats random guessing (0.5). We open up the algorithm to have different parameters for each single qubit $X$ rotation and for each $ZZ$ interaction associated with the nearest neighbor interactions on the grid. Small numerical experiments indicate that an enveloping classical algorithm can be used to find the parameters which sit on the grid to optimize an objective function with a different connectivity. We discuss strategies for finding good parameters but offer no evidence yet that the proposed approach can beat the best classical algorithms. Ultimately the strength of this approach will be determined by running on actual hardware.
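The lowest-depth (p = 1) version of the algorithm is small enough to simulate directly. The sketch below is an illustrative statevector simulation (not the paper's grid construction): it grid-searches the two p = 1 QAOA angles on a 4-cycle MaxCut instance and checks that the best expected cut clearly beats random guessing.

```python
import numpy as np
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle; max cut = 4
n = 4
dim = 2 ** n

# diagonal of the cut objective C(z) = number of cut edges, per basis state
z = np.array(list(product([0, 1], repeat=n)))
cut = sum((z[:, i] != z[:, j]).astype(float) for i, j in edges)

X = np.array([[0, 1], [1, 0]])

def x_rot(beta, qubit):
    """e^{-i beta X} on one qubit, identity elsewhere."""
    op = np.eye(1)
    for q in range(n):
        g = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X if q == qubit \
            else np.eye(2)
        op = np.kron(op, g)
    return op

plus = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # |+>^n
best = 0.0
for gamma in np.linspace(0, np.pi, 30):
    for beta in np.linspace(0, np.pi / 2, 30):
        state = np.exp(-1j * gamma * cut) * plus       # phase separator
        for q in range(n):
            state = x_rot(beta, q) @ state             # mixer on each qubit
        best = max(best, float(np.real(cut @ (np.abs(state) ** 2))))

print(best / 4)  # p = 1 approximation ratio; noticeably above 0.5
```

On hardware the same loop structure survives, except that the expectation is estimated from measurement samples and an outer classical optimizer replaces the grid search.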
• Surface codes are among the best candidates to ensure the fault-tolerance of a quantum computer. In order to avoid the accumulation of errors during a computation, it is crucial to have at our disposal a fast decoding algorithm to quickly identify and correct errors as soon as they occur. We propose a linear-time maximum likelihood decoder for surface codes over the quantum erasure channel. This decoding algorithm for dealing with qubit loss is optimal both in terms of performance and speed.
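The key property that erasure decoders exploit is that erased positions are known, so decoding reduces to finding a filling of those positions consistent with every check. The toy below illustrates this with a classical [7,4] Hamming code; it is an analogy only, since the paper's decoder works on surface codes and runs in linear time rather than by exhaustive search.

```python
import numpy as np
from itertools import product

# parity-check matrix of the [7,4] Hamming code (distance 3, so any
# two erasures are correctable: no codeword fits inside the erased set)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def decode_erasures(word, erased):
    """Try all fillings of the erased positions and keep those that
    satisfy every parity check; a unique filling means success."""
    fillings = []
    for bits in product([0, 1], repeat=len(erased)):
        trial = word.copy()
        trial[list(erased)] = bits
        if not (H @ trial % 2).any():
            fillings.append(bits)
    return fillings

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # H @ codeword = 0 mod 2
received = codeword.copy()
erased = (1, 4)                              # two known erased positions
fillings = decode_erasures(received, erased)
print(fillings)  # exactly one consistent filling: (0, 0)
```

In the quantum setting the same consistency question is posed for Pauli errors supported on the erased qubits, and the linear-time decoder finds a consistent correction by peeling rather than enumeration.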
• A prominent application of quantum cryptography is the distribution of cryptographic keys with unconditional security. Recently, such security was extended by Vazirani and Vidick (Physical Review Letters, 113, 140501, 2014) to the device-independent (DI) scenario, where the users do not need to trust the integrity of the underlying quantum devices. The protocols analyzed by them and by subsequent authors all require a sequential execution of N multiplayer games, where N is the security parameter. In this work, we prove unconditional security of a protocol where all games are executed in parallel. Our result further reduces the requirements for QKD (allowing for arbitrary information leakage within each player's lab) and opens the door to more efficient implementation. To the best of our knowledge, this is the first parallel security proof for a fully device-independent QKD protocol. Our protocol tolerates a constant level of device imprecision and achieves a linear key rate.
• We show how to obtain perfect samples from a quantum Gibbs state on a quantum computer. To do so, we adapt one of the "Coupling from the Past" algorithms proposed by Propp and Wilson. The algorithm has a probabilistic run-time and produces perfect samples without any previous knowledge of the mixing time of a quantum Markov chain. To implement it, we assume we are able to perform the phase estimation algorithm for the underlying Hamiltonian and implement a quantum Markov chain that satisfies certain conditions, implied e.g. by detailed balance, and is primitive. We analyse the expected run-time of the algorithm, which is linear in the mixing time and quadratic in the dimension. We also analyse the circuit depth necessary to implement it, which is proportional to the sum of the depth necessary to implement one step of the quantum Markov chain and one phase estimation. This algorithm is stable under noise in the implementation of different steps. We also briefly discuss how to adapt different "Coupling from the Past" algorithms to the quantum setting.
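The classical Propp-Wilson idea that the paper quantises can be sketched in a few lines. The example below is a purely classical, monotone birth-death chain with arbitrary illustrative parameters: it returns an exact stationary sample, with no prior knowledge of the mixing time, by rerunning coupled chains from ever-earlier start times while reusing the same randomness.

```python
import random

def cftp_sample(n_states=5, p_up=0.3, seed=1):
    """Classical 'Coupling from the Past' for a monotone birth-death
    chain on {0, ..., n_states-1}: returns an exact sample from the
    stationary distribution."""
    rng = random.Random(seed)
    randomness = []            # fixed random maps; randomness[t] is time -(t+1)
    T = 1
    while True:
        while len(randomness) < T:
            randomness.append(rng.random())

        def step(x, u):
            if u < p_up:
                return min(x + 1, n_states - 1)
            return max(x - 1, 0)

        # run the extreme (top and bottom) trajectories from time -T to 0
        lo, hi = 0, n_states - 1
        for t in range(T - 1, -1, -1):
            lo = step(lo, randomness[t])
            hi = step(hi, randomness[t])
        if lo == hi:           # monotonicity: all trajectories have coalesced
            return lo
        T *= 2                 # not yet coalesced: start further in the past

print(cftp_sample())
```

The quantum version replaces the step map by one step of a quantum Markov chain plus phase estimation, which is where the circuit-depth analysis in the abstract comes from.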
• The emergence of quantum computers has challenged long-held beliefs about what is efficiently computable given our current physical theories. However, going back to the work of Abrams and Lloyd, changing one aspect of quantum theory can result in yet more dramatic increases in computational power, as well as violations of fundamental physical principles. Here we focus on efficient computation within a framework of general physical theories that make good operational sense. In prior work, Lee and Barrett showed that in any theory satisfying the principle of tomographic locality (roughly, local measurements suffice for tomography of multipartite states) the complexity bound on efficient computation is AWPP. This bound holds independently of whether the principle of causality (roughly, no signalling from the future) is satisfied. In this work we show that this bound is tight: there exists a theory satisfying both the principles of tomographic locality and causality which can efficiently decide everything in AWPP, and in particular can simulate any efficient quantum computation. Thus the class AWPP has a natural physical interpretation: it is precisely the class of problems that can be solved efficiently in tomographically-local theories. This theory is built upon a model of computing involving Turing machines with quasi-probabilities, to wit, machines with transition weights that can be negative but sum to unity over all branches. In analogy with the study of non-local quantum correlations, this leads us to question what physical principles recover the power of quantum computing. Along this line, we give some computational complexity evidence that quantum computation does not achieve the bound of AWPP.
• Suppose that Alice and Bob are located in distant laboratories, which are connected by an ideal quantum channel. Suppose further that they share many copies of a quantum state $\rho_{ABE}$, such that Alice possesses the $A$ systems and Bob the $BE$ systems. In our model, there is an identifiable part of Bob's laboratory that is insecure: a third party named Eve has infiltrated Bob's laboratory and gained control of the $E$ systems. Alice, knowing this, would like to use their shared state and the ideal quantum channel to communicate a message in such a way that Bob, who has access to the whole of his laboratory ($BE$ systems), can decode it, while Eve, who has access only to a sector of Bob's laboratory ($E$ systems) and the ideal quantum channel connecting Alice to Bob, cannot learn anything about Alice's transmitted message. We call this task the conditional one-time pad, and in this paper, we prove that the optimal rate of secret communication for this task is equal to the conditional quantum mutual information $I(A;B|E)$ of their shared state. We thus give the conditional quantum mutual information an operational meaning that is different from those given in prior works, via state redistribution, conditional erasure, or state deconstruction. We also generalize the model and method in several ways, one of which demonstrates that the negative tripartite interaction information $-I_{3}(A;B;E) = I(A;BE)-I(A;B)-I(A;E)$ of a tripartite state $\rho_{ABE}$ is an achievable rate for a secret-sharing task, i.e., the case in which Alice's message should be secure from someone possessing only the $AB$ or $AE$ systems but should be decodable by someone possessing all systems $A$, $B$, and $E$.
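The rate in question is easy to evaluate on small states. The sketch below computes $I(A;B|E) = S(AE) + S(BE) - S(ABE) - S(E)$ for a Bell pair shared by Alice and Bob with Eve uncorrelated, where the optimal rate comes out to 2 bits per copy; the helper functions are our own, not from the paper.

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def ptrace(rho, keep, dims):
    """Partial trace keeping the subsystems listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for i in sorted((i for i in range(n) if i not in keep), reverse=True):
        rho = np.trace(rho, axis1=i, axis2=i + rho.ndim // 2)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

# |Phi+>_AB tensor |0>_E: Alice and Bob maximally entangled, Eve uncorrelated
phi = np.zeros(8, dtype=complex)
phi[0b000] = phi[0b110] = 1 / np.sqrt(2)   # qubit ordering: A, B, E
rho = np.outer(phi, phi.conj())
dims = [2, 2, 2]

I_AB_given_E = (vn_entropy(ptrace(rho, [0, 2], dims))    # S(AE)
                + vn_entropy(ptrace(rho, [1, 2], dims))  # S(BE)
                - vn_entropy(rho)                        # S(ABE)
                - vn_entropy(ptrace(rho, [2], dims)))    # S(E)
print(I_AB_given_E)  # -> 2.0 bits per copy
```

With a nontrivial $E$ system correlated to $AB$, the same four entropies quantify exactly how much secrecy survives Eve's share.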
• The practical success of polynomial-time tensor network methods for computing ground states of certain quantum local Hamiltonians has recently been given a sound theoretical basis by Arad, Landau, Vazirani, and Vidick. The convergence proof, however, relies on "rigorous renormalization group" (RRG) techniques which differ fundamentally from existing algorithms. We introduce an efficient implementation of the theoretical RRG procedure which finds MPS ansatz approximations to the ground spaces and low-lying excited spectra of local Hamiltonians in situations of practical interest. In contrast to other schemes, RRG does not utilize variational methods on tensor networks. Rather, it operates on subsets of the system Hilbert space by constructing approximations to the global ground space in a tree-like manner. We evaluate the algorithm numerically, finding similar performance to DMRG in the case of a gapped nondegenerate Hamiltonian. Even in challenging situations of criticality, or large ground-state degeneracy, or long-range entanglement, RRG remains able to identify candidate states having large overlap with ground and low-energy eigenstates, outperforming DMRG in some cases.
• Modeling and simulation is essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. We present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield statistical estimates of circuit outputs and other observables. As a demonstration of these techniques we simulate a Steane [[7,1,3]]-encoded logical operation with non-Clifford errors and compute its fault tolerance error threshold. We expect that the method presented here will enable studies of much larger and more realistic quantum circuits than was previously possible.
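The quasiprobability sampling idea can be demonstrated on a single qubit. The sketch below is an illustrative toy, far from the paper's Steane-code simulation: it writes the non-Clifford T-gate channel as a signed mixture of three Clifford channels and recovers $\langle X\rangle$ on $T|+\rangle$ by sign-weighted Monte Carlo; the sampling overhead is governed by the 1-norm of the quasiprobabilities.

```python
import numpy as np

rng = np.random.default_rng(7)

# Quasiprobability decomposition of the T-gate channel over three
# Clifford channels: identity, S-conjugation, Z-conjugation.  The
# weights sum to 1 but one of them is negative.
s2 = np.sqrt(2)
quasi = np.array([0.5, 1 / s2, (1 - s2) / 2])
norm = np.abs(quasi).sum()          # 1-norm ("negativity") = sqrt(2)

# exact <X> on each Clifford branch, starting from |+>:
# I|+> gives +1, S|+> gives 0, Z|+> = |-> gives -1
branch_X = np.array([1.0, 0.0, -1.0])

# Monte Carlo: sample branches with prob |q|/norm, reweight by sign * norm
n = 200_000
idx = rng.choice(3, size=n, p=np.abs(quasi) / norm)
estimate = norm * np.mean(np.sign(quasi)[idx] * branch_X[idx])

exact = 1 / s2                      # <X> of T|+> is cos(pi/4)
print(estimate, exact)
```

The estimator's variance grows with the squared 1-norm, which is the precise sense in which the cost is "weakly exponential in the degree of non-Cliffordness" of the simulated circuit.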
• Scrambling is a process by which the state of a quantum system is effectively randomized. Scrambling exhibits different complexities depending on the degree of randomness it produces. For example, the complete randomization of a pure quantum state (Haar scrambling) implies the inability to retrieve information of the initial state by measuring only parts of the system (Page/information scrambling), but the converse is not necessarily the case. Here, we formally relate scrambling complexities to the degree of randomness, by studying the behaviors of generalized entanglement entropies -- in particular Rényi entropies -- and their relationship to designs, ensembles of states or unitaries that match the completely random states or unitaries (drawn from the Haar measure) up to certain moments. The main result is that the Rényi-$\alpha$ entanglement entropies, averaged over $\alpha$-designs, are almost maximal. The result generalizes Page's theorem for the von Neumann entropies of small subsystems of random states. For designs of low orders, the average Rényi entanglement entropies can be non-maximal: we exhibit a projective 2-design such that all higher order Rényi entanglement entropies are bounded away from the maximum. However, we show that the Rényi entanglement entropies of all orders are almost maximal for state or unitary designs of order logarithmic in the dimension of the system. That is, such designs are indistinguishable from Haar-random by the entanglement spectrum. Our results establish a formal correspondence between generalized entropies and designs of the same order.
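The Haar-averaged case is easy to probe numerically. The sketch below estimates the average Rényi-2 entanglement entropy of half of a 10-qubit Haar-random state from the purity of the reduced state; consistent with the discussion above, it comes out close to, but a constant below, the maximal 5 bits.

```python
import numpy as np

rng = np.random.default_rng(0)
dA = dB = 32      # 5 qubits for subsystem A, 5 for B

def renyi2_entropy_of_random_state():
    """Rényi-2 entanglement entropy (in bits) of subsystem A of a
    Haar-random pure state, from the purity Tr[rho_A^2]."""
    psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
    psi /= np.linalg.norm(psi)          # normalized Gaussian = Haar random
    rhoA = psi @ psi.conj().T           # reduced state of A
    purity = np.real(np.trace(rhoA @ rhoA))
    return -np.log2(purity)

S2 = np.mean([renyi2_entropy_of_random_state() for _ in range(200)])
print(S2)   # close to, but about one bit below, the maximum log2(32) = 5
```

Since the Haar measure is in particular a 2-design, this also illustrates the abstract's point that design averages of higher Rényi entropies can sit a constant below maximal while still being "almost maximal".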
• Building upon work by Matsumoto, we show that the quantum relative entropy with full-rank second argument is determined by four simple axioms: i) Continuity in the first argument, ii) the validity of the data-processing inequality, iii) additivity under tensor products, and iv) super-additivity. This observation has immediate implications for quantum thermodynamics, which we discuss. Specifically, we demonstrate that, under reasonable restrictions, the free energy is singled out as a measure of athermality. In particular, we consider an extended class of Gibbs-preserving maps as free operations in a resource-theoretic framework, in which a catalyst is allowed to build up correlations with the system at hand. The free energy is the only extensive and continuous function that is monotonic under such free operations.
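Axiom (ii), the data-processing inequality, is simple to check numerically for the channel "discard subsystem B". The sketch below uses our own helper functions (entropies in nats) to evaluate $D(\rho\|\sigma)$ for random full-rank two-qubit states before and after the partial trace.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_density_matrix(d):
    """Full-rank random state from a complex Wishart matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def rel_entropy(rho, sigma):
    """Quantum relative entropy D(rho || sigma) in nats (sigma full rank)."""
    def logm(m):
        w, v = np.linalg.eigh(m)
        return v @ np.diag(np.log(w)) @ v.conj().T
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

def ptrace_B(rho, dA, dB):
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

dA = dB = 2
rho = random_density_matrix(dA * dB)
sigma = random_density_matrix(dA * dB)

# data processing: discarding B can only decrease the relative entropy
full = rel_entropy(rho, sigma)
reduced = rel_entropy(ptrace_B(rho, dA, dB), ptrace_B(sigma, dA, dB))
print(full, reduced)
```

Replacing the partial trace by any other completely positive trace-preserving map leaves the inequality intact, which is what makes axiom (ii) so constraining in the uniqueness argument.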
• The potential impact of future quantum networks hinges on high-quality quantum entanglement shared between network nodes. Unavoidable real-world imperfections necessitate means to improve remote entanglement by local quantum operations. Here we realize entanglement distillation on a quantum network primitive of distant electron-nuclear two-qubit nodes. We demonstrate the heralded generation of two copies of a remote entangled state through single-photon-mediated entangling of the electrons and robust storage in the nuclear spins. After applying local two-qubit gates, single-shot measurements herald the distillation of an entangled state with increased fidelity that is available for further use. In addition, this distillation protocol significantly speeds up entanglement generation compared to previous two-photon-mediated schemes. The key combination of generating, storing and processing entangled states demonstrated here opens the door to exploring and utilizing multi-particle entanglement on an extended quantum network.
• A renormalization group flow of Hamiltonians for two-dimensional classical partition functions is constructed using tensor networks. Similar to tensor network renormalization ([G. Evenbly and G. Vidal, Phys. Rev. Lett. 115, 180405 (2015)], [S. Yang, Z.-C. Gu, and X.-G. Wen, arXiv:1512.04938]) we obtain approximate fixed point tensor networks at criticality. Our formalism however preserves positivity of the tensors at every step and hence yields an interpretation in terms of Hamiltonian flows. We emphasize that the key difference between tensor network approaches and Kadanoff's spin blocking method can be understood in terms of a change of local basis at every decimation step, a property which is crucial to overcome the area law of mutual information. We derive algebraic relations for fixed point tensors, calculate critical exponents, and benchmark our method on the Ising model and the six-vertex model.
• The generalization of the multi-scale entanglement renormalization ansatz (MERA) to continuous systems, or cMERA [Haegeman et al., Phys. Rev. Lett. 110, 100402 (2013)], is expected to become a powerful variational ansatz for the ground state of strongly interacting quantum field theories. In this paper we investigate, in the simpler context of Gaussian cMERA for free theories, the extent to which the cMERA state $|\Psi^\Lambda\rangle$ with finite UV cut-off $\Lambda$ can capture the spacetime symmetries of the ground state $|\Psi\rangle$. For a free boson conformal field theory (CFT) in 1+1 dimensions as a concrete example, we build a quasi-local unitary transformation $V$ that maps $|\Psi\rangle$ into $|\Psi^\Lambda\rangle$ and show two main results. (i) Any spacetime symmetry of the ground state $|\Psi\rangle$ is also mapped by $V$ into a spacetime symmetry of the cMERA $|\Psi^\Lambda\rangle$. However, while in the CFT the stress-energy tensor $T_{\mu\nu}(x)$ (in terms of which all the spacetime symmetry generators are expressed) is local, the corresponding cMERA stress-energy tensor $T_{\mu\nu}^{\Lambda}(x) = V T_{\mu\nu}(x) V^{\dagger}$ is quasi-local. (ii) From the cMERA, we can extract quasi-local scaling operators $O^{\Lambda}_{\alpha}(x)$ characterized by the exact same scaling dimensions $\Delta_{\alpha}$, conformal spins $s_{\alpha}$, operator product expansion coefficients $C_{\alpha\beta\gamma}$, and central charge $c$ as the original CFT. Finally, we argue that these results should also apply to interacting theories.
• We demonstrate that perturbative expansions for quantum many-body systems can be rephrased in terms of tensor networks, thereby providing a natural framework for interpolating perturbative expansions across a quantum phase transition. This approach leads to classes of tensor-network states parameterized by few parameters with a clear physical meaning, while still providing excellent variational energies. We also demonstrate how to construct perturbative expansions of the entanglement Hamiltonian, whose eigenvalues form the entanglement spectrum, and how the tensor-network approach gives rise to order parameters for topological phase transitions.
• Certain quantum many-body ground states can be prepared by a large-depth quantum circuit consisting of geometrically local gates. In the presence of noise, local expectation values deviate from the correct value at most by an amount comparable to the noise rate. This happens if the action of the noiseless circuit, restricted to certain subsystems, rapidly mixes local observables up to a small correction. The encoding circuit for the surface code is given as an example.
• In this paper we improve the layered implementation of arbitrary stabilizer circuits introduced by Aaronson and Gottesman [Phys. Rev. A 70, 052328 (2004)]. In particular, we reduce their 11-stage computation -H-C-P-C-P-C-H-P-C-P-C- to an 8-stage computation of the form -H-C-CZ-P-H-P-CZ-C-. We give arguments in support of using -CZ- stages over the -C- stages: not only does the use of -CZ- stages allow a shorter layered expression, but -CZ- stages are simpler and appear to be easier to implement than the -C- stages. Relying on the 8-stage decomposition, we develop a two-qubit depth-$(14n-4)$ implementation of stabilizer circuits over the gate library {P, H, CNOT}, executable in the LNN architecture, improving on the best previously known depth-$25n$ circuit, also executable in the LNN architecture. Our constructions rely on folding arbitrarily long sequences $($-P-C-$)^m$ into a 3-stage computation -P-CZ-C-, as well as on efficient implementation of the -CZ- stage circuits.
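Two elementary facts behind the preference for -CZ- stages (an illustration of the gate relations only, not the paper's construction): CZ is CNOT conjugated by a Hadamard on the target, and, unlike CNOT, CZ is symmetric under exchanging its two qubits. A minimal numpy check:

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Two-qubit gates, qubit 0 as control, basis order |00>,|01>,|10>,|11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

# CZ equals CNOT conjugated by a Hadamard on the target qubit
assert np.allclose(np.kron(I, H) @ CNOT @ np.kron(I, H), CZ)

# CZ is symmetric in its two qubits; CNOT is not
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
assert np.allclose(SWAP @ CZ @ SWAP, CZ)
assert not np.allclose(SWAP @ CNOT @ SWAP, CNOT)
```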
• Recent progress in applying complex network theory to problems faced in quantum information and computation has resulted in a beneficial crossover between two fields. Complex network methods have successfully been used to characterize quantum walk and transport models, entangled communication networks, graph theoretic models of emergent space-time and in detecting community structure in quantum systems. Information physics is setting the stage for a theory of complex and networked systems with quantum information-inspired methods appearing in complex network science, including information-theoretic distance and correlation measures for network characterization. Novel quantum induced effects have been predicted in random graphs---where edges represent entangled links---and quantum computer algorithms have recently been proposed to offer super-polynomial enhancement for several network and graph theoretic problems. Here we review the results at the cutting edge, pinpointing the similarities and reconciling the differences found in the series of results at the intersection of these two fields.
• The experimental interest in realizing quantum spin-1/2 chains has increased uninterruptedly over the last decade. In many instances, the target quantum simulation belongs to the broader class of non-interacting fermionic models, constituting an important benchmark. In spite of this class being analytically efficiently tractable, no direct certification tool has yet been reported for it. In fact, in experiments, certification has almost exclusively relied on notions of quantum state tomography scaling very unfavorably with the system size. Here, we develop experimentally-friendly fidelity witnesses for all pure fermionic Gaussian target states. Their expectation value yields a tight lower bound to the fidelity and can be measured efficiently. We derive witnesses in full generality in the Majorana-fermion representation and apply them to experimentally relevant spin-1/2 chains. Among others, we show how to efficiently certify strongly out-of-equilibrium dynamics in critical Ising chains. At the heart of the measurement scheme is a variant of importance sampling specially tailored to overlaps between covariance matrices. The method is shown to be robust against finite experimental-state infidelities.
• Topological phase transitions are related to changes in the properties of quasi-particle excitations (anyons). We study this phenomenon in the framework of projected entanglement pair states (PEPS) and show how condensing and confining anyons reduces a local gauge symmetry to a global on-site symmetry. We also study the action of this global symmetry over the quasiparticle excitations. As a byproduct, we observe that this symmetry reduction effect can be applied to one-dimensional systems as well, and brings a new perspective on the classification of phases with symmetries using matrix product states (MPS), opening the door to appealing physical interpretations. The case of $\mathbb{Z}_2$ on-site symmetry is studied in detail.
• (Mar 27 2017, quant-ph, arXiv:1703.08508v1) We give an arguably simpler and more direct proof of a recent result by Miller, Jain and Shi, who proved device-independent security of a protocol for quantum key distribution in which the devices can be used in parallel. Our proof combines existing results on immunization (Kempe et al., SICOMP 2011) and parallel repetition (Bavarian et al., STOC 2017) of entangled games.
• In this thesis we study properties of open quantum dissipative evolutions of spin systems on lattices described by Lindblad generators, in a particular regime that we denote rapid mixing. We consider dissipative evolutions with a unique fixed point, and which compress the whole space of input states into increasingly small neighborhoods of the fixed point. The time scale at which this compression takes place, or in other words the time we have to wait for any input state to become almost indistinguishable from the fixed point, is called the mixing time of the process. Rapid mixing is a condition on the scaling of this mixing time with the system size: if it is logarithmic, then we have rapid mixing. The main contribution of this thesis is to show that rapid mixing has profound implications for the corresponding system: it is stable against external perturbations and its fixed point satisfies an area law for mutual information.
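A back-of-envelope illustration of the rapid-mixing condition (my toy example, not from the thesis): for $n$ non-interacting qubits each depolarizing toward the fixed point at unit rate, the global trace distance is bounded by $n\,e^{-t}$ via the triangle inequality, so the $\epsilon$-mixing time grows only like $\log(n/\epsilon)$ — logarithmic in system size:

```python
import numpy as np

def mixing_time(n, eps=1e-3):
    """epsilon-mixing time under the union-type bound dist(t) <= n * exp(-t)
    (n independently depolarizing qubits, unit decay rate): bisect for the
    smallest t with n * exp(-t) <= eps."""
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if n * np.exp(-mid) <= eps:
            hi = mid
        else:
            lo = mid
    return hi

# Logarithmic scaling with system size: t_mix(n) = log(n) + log(1/eps)
for n in (10, 100, 1000):
    assert abs(mixing_time(n) - (np.log(n) + np.log(1e3))) < 1e-6
```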
• For any function $f: X \times Y \to Z$, we prove that $Q^{*\text{cc}}(f) \cdot Q^{\text{OIP}}(f) \cdot (\log Q^{\text{OIP}}(f) + \log |Z|) \geq \Omega(\log |X|)$. Here, $Q^{*\text{cc}}(f)$ denotes the bounded-error communication complexity of $f$ using an entanglement-assisted two-way qubit channel, and $Q^{\text{OIP}}(f)$ denotes the number of quantum queries needed to determine $x$ with high probability given oracle access to the function $f_x(y) \stackrel{\text{def}}{=} f(x, y)$. We show that this tradeoff is close to the best possible. We also give a generalization of this tradeoff for distributional query complexity. As an application, we prove an optimal $\Omega(\log q)$ lower bound on the $Q^{*\text{cc}}$ complexity of determining whether $x + y$ is a perfect square, where Alice holds $x \in \mathbf{F}_q$, Bob holds $y \in \mathbf{F}_q$, and $\mathbf{F}_q$ is a finite field of odd characteristic. As another application, we give a new, simpler proof that searching an ordered size-$N$ database requires $\Omega(\log N / \log \log N)$ quantum queries. (It was already known that $\Theta(\log N)$ queries are required.)
• Atomic clocks are our most accurate indicators of time. Their applications range from GPS systems to synchronization of electronics, astronomical observations, and tests of fundamental physics. Here we propose a protocol that combines an atomic clock with a quantum memory, realizing a stopwatch that can be paused and resumed on demand. Taking advantage of the quantum memory, the stopwatch can measure the total time elapsed in a sequence of time intervals with the ultimate precision allowed by quantum mechanics, outperforming stopwatches that use only classical memories. The working principle of the quantum stopwatch is a new type of information compression, tailored to store the state of the quantum clock into a quantum memory of exponentially smaller size. Our protocol can be used to generate quantum states with Heisenberg limited sensitivity and to efficiently synchronize clocks in quantum communication networks.
• The Gottesman-Knill theorem established that stabilizer states and operations can be efficiently simulated classically. For qudits with dimension three and greater, stabilizer states and Clifford operations have been found to correspond to positive discrete Wigner functions and dynamics. We present a discrete Wigner function-based simulation algorithm for odd-$d$ qudits that has the same time and space complexity as the Aaronson-Gottesman algorithm. We show that the efficiency of both algorithms is due to the harmonic evolution in the symplectic structure of discrete phase space. The differences between the Wigner function algorithm and Aaronson-Gottesman are likely due only to the fact that the Weyl-Heisenberg group is not in $SU(d)$ for $d=2$ and that qubits have state-independent contextuality. This may provide a guide for extending the discrete Wigner function approach to qubits.
• We consider Projected Entangled Pair State (PEPS) models with a global $\mathbb Z_N$ symmetry, which are constructed from $\mathbb Z_N$-symmetric tensors and are thus $\mathbb Z_N$-invariant wavefunctions, and study the occurrence of long-range order and symmetry breaking in these systems. First, we show that long-range order in those models is accompanied by a degeneracy in the so-called transfer operator of the system. We subsequently use this degeneracy to determine the nature of the symmetry broken states, i.e., those stable under arbitrary perturbations, and provide a succinct characterization in terms of the fixed points of the transfer operator (i.e.\ the different boundary conditions) in the individual symmetry sectors. We verify our findings numerically through the study of a $\mathbb Z_3$-symmetric model, and show that the entanglement Hamiltonian derived from the symmetry broken states is quasi-local (unlike the one derived from the symmetric state), reinforcing the locality of the entanglement Hamiltonian for gapped phases.
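The link between long-range order and transfer-operator degeneracy is already visible in the simplest one-dimensional analogue (an MPS toy example of mine, not the paper's PEPS setting): the GHZ state's MPS transfer matrix has a twofold-degenerate leading eigenvalue, signaling $\mathbb Z_2$ long-range order.

```python
import numpy as np

# GHZ-state MPS tensors A[s] (physical index s, bond dimension 2)
A = np.zeros((2, 2, 2))      # A[s, left, right]
A[0] = np.diag([1.0, 0.0])
A[1] = np.diag([0.0, 1.0])

# Transfer matrix E = sum_s A[s] (x) conj(A[s])
E = sum(np.kron(A[s], A[s].conj()) for s in range(2))

evals = np.sort(np.abs(np.linalg.eigvals(E)))[::-1]
# Twofold-degenerate leading eigenvalue: long-range Z2 order
assert np.isclose(evals[0], 1.0) and np.isclose(evals[1], 1.0)
assert np.isclose(evals[2], 0.0)
```

For a symmetric short-range-correlated state (e.g. a product state), the leading eigenvalue is instead unique.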
• Zauner's conjecture asserts that $d^2$ equiangular lines exist in all $d$ complex dimensions. In quantum theory, the $d^2$ lines are dubbed a SIC, as they define a favoured standard informationally complete quantum measurement called a SIC-POVM. This note supplements A. J. Scott and M. Grassl [J. Math. Phys. 51 (2010), 042203] by extending the list of published numerical solutions. We provide a putative complete list of Weyl-Heisenberg covariant SICs with the known symmetries in dimensions $d\leq 90$, a single solution with Zauner's symmetry for every $d\leq 121$ and solutions with higher symmetry for $d=124,143,147,168,172,195,199,228,259$ and $323$.
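For $d=2$ the SIC is the regular tetrahedron of Bloch vectors, and the defining equiangularity $|\langle\psi_i|\psi_j\rangle|^2 = 1/(d+1)$ is easy to verify numerically (a sanity check on the definition, not one of the paper's new solutions):

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

# Tetrahedron Bloch vectors -> 4 pure states rho = (I + r.sigma)/2
vecs = np.array([[1, 1, 1], [1, -1, -1],
                 [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
rhos = [(I + r[0]*X + r[1]*Y + r[2]*Z) / 2 for r in vecs]

# Equiangularity: Tr(rho_i rho_j) = |<psi_i|psi_j>|^2 = 1/(d+1) = 1/3
for i in range(4):
    for j in range(4):
        overlap = np.trace(rhos[i] @ rhos[j]).real
        assert np.isclose(overlap, 1.0 if i == j else 1/3)
```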
• Many determinantal inequalities for positive definite block matrices are consequences of general entropy inequalities, specialised to Gaussian distributed vectors with prescribed covariances. In particular, strong subadditivity (SSA) yields $\ln\det V_{AC} + \ln\det V_{BC} - \ln\det V_{ABC} - \ln\det V_C \geq 0$ for all $3\times 3$-block matrices $V_{ABC}$, where subscripts identify principal submatrices. We shall refer to the above inequality as SSA of log-det entropy. In this paper we develop further insights on the properties of the above inequality and its applications to classical and quantum information theory. In the first part of the paper, we show how to find known and new necessary and sufficient conditions under which saturation with equality occurs. Subsequently, we discuss the role of the classical transpose channel (also known as Petz recovery map) in this problem and find its action explicitly. We then prove some extensions of the saturation theorem, by finding faithful lower bounds on a log-det conditional mutual information. In the second part, we focus on quantum Gaussian states, whose covariance matrices are not only positive but obey additional constraints due to the uncertainty relation. For Gaussian states, the log-det entropy is equivalent to the Rényi entropy of order $2$. We provide a strengthening of log-det SSA for quantum covariance matrices that involves the so-called Gaussian Rényi-$2$ entanglement of formation, a well-behaved entanglement measure defined via a Gaussian convex roof construction. We then employ this result to define a log-det entropy equivalent of the squashed entanglement, which is remarkably shown to coincide with the Gaussian Rényi-$2$ entanglement of formation. This allows us to establish useful properties of such measure(s), like monogamy, faithfulness, and additivity on Gaussian states.
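Since SSA of log-det entropy holds for every positive definite covariance matrix, it can be spot-checked on random instances (an illustration of the inequality, not the paper's proof):

```python
import numpy as np

rng = np.random.default_rng(0)

def logdet(M):
    """log det for a positive definite matrix, via slogdet."""
    return np.linalg.slogdet(M)[1]

# Random positive definite 3-block covariance (blocks A, B, C of size 2)
n = 6
M = rng.standard_normal((n, n))
V = M @ M.T + n * np.eye(n)

A, B, C = [0, 1], [2, 3], [4, 5]
sub = lambda idx: V[np.ix_(idx, idx)]

# SSA of log-det entropy:
# logdet V_AC + logdet V_BC - logdet V_ABC - logdet V_C >= 0
lhs = (logdet(sub(A + C)) + logdet(sub(B + C))
       - logdet(V) - logdet(sub(C)))
assert lhs >= -1e-12
```

The constants $\frac12\ln(2\pi e)$ per mode cancel because $|AC|+|BC| = |ABC|+|C|$, which is why the Gaussian entropy inequality reduces to the pure determinant statement.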
• Efficiently characterising quantum systems, verifying operations of quantum devices and validating underpinning physical models, are central challenges for the development of quantum technologies and for our continued understanding of foundational physics. Machine-learning enhanced by quantum simulators has been proposed as a route to improve the computational cost of performing these studies. Here we interface two different quantum systems through a classical channel - a silicon-photonics quantum simulator and an electron spin in a diamond nitrogen-vacancy centre - and use the former to learn the latter's Hamiltonian via Bayesian inference. We learn the salient Hamiltonian parameter with an uncertainty of approximately $10^{-5}$. Furthermore, an observed saturation in the learning algorithm suggests deficiencies in the underlying Hamiltonian model, which we exploit to further improve the model itself. We go on to implement an interactive version of the protocol and experimentally show its ability to characterise the operation of the quantum photonic device. This work demonstrates powerful new quantum-enhanced techniques for investigating foundational physical models and characterising quantum technologies.
• The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods, to validate and fully exploit quantum resources. Quantum-state tomography (QST) aims at reconstructing the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics. Brute-force approaches to QST, however, demand resources growing exponentially with the number of constituents, making it unfeasible except for small systems. Here we show that machine learning techniques can be efficiently used for QST of highly-entangled states in arbitrary dimension. Remarkably, the resulting approach allows one to reconstruct traditionally challenging many-body quantities - such as the entanglement entropy - from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultra-cold atom quantum simulators.
• We design forward and backward fault-tolerant conversion circuits, which convert between the Steane code and the 15-qubit Reed-Muller quantum code so as to provide a universal transversal gate set. In our method, only 11 out of the total 14 code stabilizers are measured, and we further enhance the circuit by simplifying some stabilizers; thus, we need only 64 CNOT gates for one round of forward-conversion stabilizer measurements and 60 CNOT gates for one round of backward-conversion stabilizer measurements. For conversion, we consider random single-qubit errors and their influence on syndromes of gauge operators, and we perform operations that yield quantum error correction and gauge fixing in a single step. We also generalize our method to conversion between any adjacent Reed-Muller quantum codes $\overline{\textsf{RM}}(1,m)$ and $\overline{\textsf{RM}}\left(1,m+1\right)$, for which we provide the necessary $3m+2$ stabilizers and the concomitant resources required.
• Defects between gapped boundaries provide a possible physical realization of projective non-abelian braid statistics. A notable example is the projective Majorana/parafermion braid statistics of boundary defects in fractional quantum Hall/topological insulator and superconductor heterostructures. In this paper, we develop general theories to analyze the topological properties and projective braiding of boundary defects of topological phases of matter in two spatial dimensions. We present commuting Hamiltonians to realize defects between gapped boundaries in any $(2+1)D$ untwisted Dijkgraaf-Witten theory, and use these to describe their topological properties such as their quantum dimension. By modeling the algebraic structure of boundary defects through multi-fusion categories, we establish a bulk-edge correspondence between certain boundary defects and symmetry defects in the bulk. Even though it is not clear how to physically braid the defects, this correspondence elucidates the projective braid statistics for many classes of boundary defects, both amongst themselves and with bulk anyons. Specifically, three such classes of importance to condensed matter physics/topological quantum computation are studied in detail: (1) A boundary defect version of Majorana and parafermion zero modes, (2) a similar version of genons in bilayer theories, and (3) boundary defects in $\mathfrak{D}(S_3)$.
• We investigate the task of $d$-level random access codes ($d$-RACs) and consider the possibility of encoding classical strings of $d$-level symbols (dits) into a quantum system of dimension $d'$ strictly less than $d$. We show that the average success probability of recovering one (randomly chosen) dit from the encoded string can be larger than that obtained in the best classical protocol for the task. Our result is intriguing as we know from Holevo's theorem (and more recently from Frenkel-Weiner's result [Commun. Math. Phys. 340, 563 (2015)]) that there exist communication scenarios wherein quantum resources prove to be of no advantage over classical resources. A distinguishing feature of our protocol is that it establishes a stronger quantum advantage in contrast to the existing quantum $d$-RACs where $d$-level quantum systems are shown to be advantageous over their classical $d$-level counterparts.
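For orientation, the familiar qubit case (the standard 2-to-1 random access code, not the $d$-level protocol of the paper): two bits are encoded into one qubit so that either bit can be recovered with probability $\cos^2(\pi/8) \approx 0.854$, beating the best classical value of $3/4$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
I = np.eye(2)

def encode(x0, x1):
    """One-qubit state with Bloch vector ((-1)^x1, 0, (-1)^x0)/sqrt(2):
    Z readout reveals x0, X readout reveals x1."""
    rx, rz = (-1)**x1 / np.sqrt(2), (-1)**x0 / np.sqrt(2)
    return (I + rx * X + rz * Z) / 2

def p_correct(rho, bit, obs):
    """Probability that measuring obs returns the encoded bit."""
    proj = (I + (-1)**bit * obs) / 2
    return np.trace(proj @ rho).real

for x0 in (0, 1):
    for x1 in (0, 1):
        rho = encode(x0, x1)
        assert np.isclose(p_correct(rho, x0, Z), np.cos(np.pi/8)**2)
        assert np.isclose(p_correct(rho, x1, X), np.cos(np.pi/8)**2)

assert np.cos(np.pi/8)**2 > 3/4   # quantum advantage over classical RAC
```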

Laura Mančinska Mar 28 2017 13:09 UTC

Great result!

For those familiar with I_3322, William here gives an example of a nonlocal game exhibiting a behaviour that many of us suspected (but couldn't prove) to be possessed by I_3322.

gae spedalieri Mar 13 2017 14:13 UTC

1) Sorry but this is false.

1a) That analysis is specifically for reducing QECC protocol to an entanglement distillation protocol over certain class of discrete variable channels. Exactly as in BDSW96. Task of the protocol is changed in the reduction.

1b) The simulation is not via a general LOCC b

...(continued)
Siddhartha Das Mar 13 2017 13:22 UTC

We feel that we have cited and credited previous works appropriately in our paper. To clarify:

1) The LOCC simulation of a channel and the corresponding adaptive reduction can be found worked out in full generality in the 2012 Master's thesis of Muller-Hermes. We have cited the original paper BD

...(continued)
gae spedalieri Mar 13 2017 08:56 UTC

This is one of those papers where the contribution of previous literature is omitted and not fairly represented.

1- the LOCC simulation of quantum channels (not necessarily teleportation based) and the corresponding general reduction of adaptive protocols was developed in PLOB15 (https://arxiv.org/

...(continued)
Noon van der Silk Mar 08 2017 04:45 UTC

I feel that while the proliferation of GUNs is unquestionably a good idea, there are many unsupervised networks out there that might use this technology in dangerous ways. Do you think Indifferential-Privacy networks are the answer? Also I fear that the extremist binary networks should be banned ent

...(continued)
Qian Wang Mar 07 2017 17:21 UTC

"To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics."

Christopher Chamberland Mar 02 2017 18:48 UTC

A good paper for learning about exRec's is this one https://arxiv.org/abs/quant-ph/0504218. Also, rigorous threshold lower bounds are obtained using an adversarial noise model approach.

Anirudh Krishna Mar 02 2017 18:40 UTC

Here's a link to a lecture from Dan Gottesman's course at PI about exRecs.
http://pirsa.org/displayFlash.php?id=07020028

You can find all the lectures here:
http://www.perimeterinstitute.ca/personal/dgottesman/QECC2007/index.html

Ben Criger Mar 02 2017 08:58 UTC

Good point, I wish I knew more about ExRecs.

Robin Blume-Kohout Feb 28 2017 09:55 UTC

I totally agree -- that part is confusing. It's not clear whether "arbitrary good precision ... using a limited amount of hardware" is supposed to mean that arbitrarily low error rates can be achieved with codes of fixed size (clearly wrong) or just that the resources required to achieve arbitraril

...(continued)