Top arXiv papers

  • PDF
    In order to build a large-scale quantum computer, one must be able to correct errors extremely fast. We design a fast decoding algorithm for topological codes that corrects Pauli errors, erasures, and combinations of the two. Our algorithm has a worst-case complexity of $O(n \alpha(n))$, where $n$ is the number of physical qubits and $\alpha$ is the inverse of Ackermann's function, which grows very slowly. For all practical purposes, $\alpha(n) \leq 3$. We prove that our algorithm performs optimally for errors of weight up to $(d-1)/2$ and for loss of up to $d-1$ qubits, where $d$ is the minimum distance of the code. Numerically, we obtain a threshold of $9.9\%$ for the 2D toric code with perfect syndrome measurements and $2.6\%$ with faulty measurements.
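    The $O(n \alpha(n))$ scaling is characteristic of a union-find (disjoint-set) data structure with union by rank and path compression. A minimal sketch of that primitive is below; the decoder's cluster-growth and peeling logic built on top of it is not reproduced here.

    ```python
    class UnionFind:
        """Disjoint-set forest with union by rank and path halving
        (a form of path compression). Each operation costs amortized
        O(alpha(n)), alpha being the inverse Ackermann function."""

        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            # Path halving: make x skip a level on every step.
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            # Union by rank: attach the shallower tree to the deeper one.
            if self.rank[rx] < self.rank[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1
    ```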
  • PDF
    Pure state entanglement transformations have been thought of as irreversible, with reversible transformations generally only possible in the limit of many copies. Here, we show that reversible entanglement transformations do not require processing on the many-copy level, but can instead be undertaken on individual systems, provided the amount of entanglement which is produced or consumed is allowed to fluctuate. We derive necessary and sufficient conditions for entanglement manipulations in this case. As a corollary, we derive an equation quantifying the fluctuations of entanglement that is formally identical to the Jarzynski fluctuation equality found in thermodynamics. One can also relate a forward entanglement transformation to its reverse process in terms of the entanglement cost of such a transformation, in a manner equivalent to the Crooks relation. We show that a strong converse theorem for entanglement transformations is related to the second law of thermodynamics, while the fact that the Schmidt rank of an entangled state cannot increase is related to the third law of thermodynamics. Achievability of the protocols is established by introducing an entanglement battery, a device which stores entanglement and uses an amount of entanglement that is allowed to fluctuate but with an average cost which is still optimal. This allows us to also solve the problem of partial entanglement recovery, and in fact, we show that entanglement is fully recovered. Allowing the amount of consumed entanglement to fluctuate also leads to improved and optimal entanglement dilution protocols.
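    For reference, the thermodynamic identities being mirrored, in their standard classical forms (the entanglement analogues replace work and free energy by entanglement quantities):

    $$\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F} \ \text{(Jarzynski)}, \qquad \frac{P_{\mathrm{F}}(W)}{P_{\mathrm{R}}(-W)} = e^{\beta (W - \Delta F)} \ \text{(Crooks)},$$

    where $W$ is the fluctuating work, $\Delta F$ the free energy difference, and $P_{\mathrm{F}}$, $P_{\mathrm{R}}$ the forward and reverse work distributions.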
  • PDF
    We study thermal states of strongly interacting quantum spin chains and prove that they can be efficiently represented as convex combinations of matrix product states. Apart from revealing new features of the entanglement structure of Gibbs states, such as an area law for the entanglement of formation, our results provide a theoretical justification for the use of White's algorithm of minimally entangled typical thermal states. Furthermore, we shed new light on time-dependent matrix product state algorithms which yield hydrodynamical descriptions of the underlying dynamics.
  • PDF
    Quantum computers promise to efficiently solve not only problems believed to be intractable for classical computers, but also problems for which verifying the solution is also considered intractable. This raises the question of how one can check whether quantum computers are indeed producing correct results. This task, known as quantum verification, has been highlighted as a significant challenge on the road to scalable quantum computing technology. We review the most significant approaches to quantum verification and compare them in terms of structure, complexity and required resources. We also comment on the use of cryptographic techniques which, for many of the presented protocols, has proven extremely useful in performing verification. Finally, we discuss issues related to fault tolerance, experimental implementations and the outlook for future protocols.
  • PDF
    We extend quantum Stein's lemma in asymmetric quantum hypothesis testing to composite null and alternative hypotheses. As our main result, we show that the asymptotic error exponent for testing convex combinations of quantum states $\rho^{\otimes n}$ against convex combinations of quantum states $\sigma^{\otimes n}$ is given by a regularized quantum relative entropy distance formula. We prove that in general such a regularization is needed but also discuss various settings where our formula as well as extensions thereof become single-letter. This includes a novel operational interpretation of the relative entropy of coherence in terms of hypothesis testing. For our proof, we start from the composite Stein's lemma for classical probability distributions and lift the result to the non-commutative setting by only using elementary properties of quantum entropy. Finally, our findings also imply an improved Markov type lower bound on the quantum conditional mutual information in terms of the regularized quantum relative entropy - featuring an explicit and universal recovery map.
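    For orientation, the original (non-composite) quantum Stein's lemma states that the optimal type-II error exponent for testing $\rho^{\otimes n}$ against $\sigma^{\otimes n}$ is the quantum relative entropy

    $$D(\rho\|\sigma) = \operatorname{Tr}\bigl[\rho (\log\rho - \log\sigma)\bigr];$$

    the composite case replaces this by a regularized relative entropy distance between the two convex sets of hypotheses.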
  • PDF
    We describe an efficient quantum algorithm for the quantum Schur transform. The Schur transform is an operation on a quantum computer that maps the standard computational basis to a basis composed of irreducible representations of the unitary and symmetric groups. We simplify and extend the algorithm of Bacon, Chuang, and Harrow, and provide a new practical construction as well as sharp theoretical and practical analyses. Our algorithm decomposes the Schur transform on $n$ qubits into $O(n^4 \log(n/\epsilon))$ operators in the Clifford+T fault-tolerant gate set. We extend our qubit algorithm to decompose the Schur transform on $n$ qudits of dimension $d$ into $O(d^{1+p} n^{2d+1} \log^p(dn/\epsilon))$ primitive operators from any universal gate set, for $p \approx 3.97$.
  • PDF
    We improve the number of T gates needed to perform an n-bit adder from 8n + O(1) to 4n + O(1). We do so via a "temporary logical-AND" construction, which uses four T gates to store the logical-AND of two qubits into an ancilla and zero T gates to later erase the ancilla. Temporary logical-ANDs are a generally useful tool when optimizing T-counts. They can be applied to integer arithmetic, modular arithmetic, rotation synthesis, the quantum Fourier transform, Shor's algorithm, Grover oracles, and many other circuits. Because T gates dominate the cost of quantum computation based on the surface code, and temporary logical-ANDs are widely applicable, our constructions represent a significant reduction in projected costs of quantum computation. We also present an n-bit controlled adder circuit with T-count of 8n + O(1), a temporary adder that can be computed for the same cost as the normal adder but whose result can be kept until it is later uncomputed without using T gates, and discuss some other constructions whose T-count is improved by the temporary logical-AND.
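    A toy tally of the leading-order T-counts quoted above (constants dropped; an illustrative sketch, not the paper's exact accounting):

    ```python
    def adder_t_count(n, temporary_and=True):
        """Leading-order T-count of an n-bit adder, O(1) terms dropped.
        With the temporary logical-AND, each of the ~n AND computations
        costs 4 T gates and each uncomputation costs 0; the conventional
        construction spends ~8 T gates per bit."""
        return 4 * n if temporary_and else 8 * n

    for n in (32, 64, 128):
        print(n, adder_t_count(n, temporary_and=False), adder_t_count(n))
    ```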
  • PDF
    An important approach to fault-tolerant quantum computation is to protect logical information using quantum error correction. Usually, the logical information is in the form of logical qubits, which are encoded in physical qubits using quantum error correction codes. Compared with qubit-based quantum computation, fermionic quantum computation has advantages in quantum simulations of fermionic systems, e.g. molecules. In this paper, we show that fermionic quantum computation can be made universal and fault-tolerant by encoding logical Majorana fermions in physical Majorana fermions. We take a color code as an example to demonstrate the universal set of fault-tolerant operations on logical Majorana fermions, and we numerically find that the fault-tolerance threshold is about 0.8%.
  • PDF
    Fundamental questions in chemistry and physics may never be answered due to the exponential complexity of the underlying quantum phenomena. A desire to overcome this challenge has sparked a new industry of quantum technologies with the promise that engineered quantum systems can address these hard problems. A key step towards demonstrating such a system will be performing a computation beyond the capabilities of any classical computer, achieving so-called quantum supremacy. Here, using 9 superconducting qubits, we demonstrate an immediate path towards quantum supremacy. By individually tuning the qubit parameters, we are able to generate thousands of unique Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits in the algorithm is varied, the system continues to explore the exponentially growing number of states. Combining these large datasets with techniques from machine learning allows us to construct a model which accurately predicts the measured probabilities. We demonstrate an application of these algorithms by systematically increasing the disorder and observing a transition from delocalized states to localized states. By extending these results to a system of 50 qubits, we hope to address scientific questions that are beyond the capabilities of any classical computer.
  • PDF
    Purification is a powerful technique in quantum physics whereby a mixed quantum state is extended to a pure state on a larger system. This process is not unique, and in systems composed of many degrees of freedom, one natural purification is the one with minimal entanglement. Here we study the entropy of the minimally entangled purification, called the entanglement of purification, in three model systems: an Ising spin chain, conformal field theories holographically dual to Einstein gravity, and random stabilizer tensor networks. We conjecture values for the entanglement of purification in all these models, and we support our conjectures with a variety of numerical and analytical results. We find that such minimally entangled purifications have a number of applications, from enhancing entanglement-based tensor network methods for describing mixed states to elucidating novel aspects of the emergence of geometry from entanglement in the AdS/CFT correspondence.
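    For orientation, the entanglement of purification of a bipartite state $\rho_{AB}$ is

    $$E_P(\rho_{AB}) = \min_{|\psi\rangle_{AA'BB'}} S(\rho_{AA'}), \qquad \operatorname{Tr}_{A'B'} |\psi\rangle\langle\psi| = \rho_{AB},$$

    i.e. the minimal entanglement entropy across the $AA':BB'$ cut over all purifications of $\rho_{AB}$.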
  • PDF
    Quantum Markov semigroups characterize the time evolution of an important class of open quantum systems. Studying convergence properties of such a semigroup, and determining concentration properties of its invariant state, have been the focus of much research. Quantum versions of functional inequalities (like the modified logarithmic Sobolev and Poincaré inequalities) and the so-called transportation cost inequalities, have proved to be essential for this purpose. Classical functional and transportation cost inequalities are seen to arise from a single geometric inequality, called the Ricci lower bound, via an inequality which interpolates between them. The latter is called the HWI-inequality, where the letters I, W and H are, respectively, acronyms for the Fisher information (arising in the modified logarithmic Sobolev inequality), the so-called Wasserstein distance (arising in the transportation cost inequality) and the relative entropy (or Boltzmann H function) arising in both. Hence, classically, all the above inequalities and the implications between them form a remarkable picture which relates elements from diverse mathematical fields, such as Riemannian geometry, information theory, optimal transport theory, Markov processes, concentration of measure, and convexity theory. Here we consider a quantum version of the Ricci lower bound introduced by Carlen and Maas, and prove that it implies a quantum HWI inequality from which the quantum functional and transportation cost inequalities follow. Our results hence establish that the unifying picture of the classical setting carries over to the quantum one.
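    In its classical form, for a reference measure $\nu$ satisfying the Ricci lower bound $\mathrm{Ric} \geq \kappa$, the HWI inequality reads

    $$H(\mu\|\nu) \;\leq\; W_2(\mu,\nu)\sqrt{I(\mu\|\nu)} - \frac{\kappa}{2} W_2(\mu,\nu)^2;$$

    for $\kappa > 0$, optimizing the right-hand side over $W_2$ yields the modified logarithmic Sobolev inequality $H \leq I/(2\kappa)$, from which the transportation cost inequality $W_2^2 \leq 2H/\kappa$ also follows.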
  • PDF
    A quantum measurement is Fisher symmetric if it provides uniform and maximal information on all parameters that characterize the quantum state of interest. Using (complex projective) 2-designs, we construct measurements on a pair of identically prepared quantum states that are Fisher-symmetric for all pure states. Such measurements are optimal in achieving the minimal statistical error without adaptive measurements. We then determine all collective measurements on a pair that are Fisher-symmetric for the completely mixed state and for all pure states simultaneously. For a qubit, these measurements are Fisher-symmetric for all states. The minimal optimal measurements are tied to the elusive symmetric informationally complete measurements, which reflects a deep connection between local symmetry and global symmetry. In the study, we derive a fundamental constraint on the Fisher information matrix of any collective measurement on a pair, which offers a useful tool for characterizing the tomographic efficiency of collective measurements.
  • PDF
    We study the problem of encrypting and authenticating quantum data in the presence of adversaries making adaptive chosen-plaintext and chosen-ciphertext queries. Classically, security games use string copying and comparison to detect adversarial cheating in such scenarios. Quantumly, this approach would violate no-cloning. We develop new techniques to overcome this problem: we use entanglement to detect cheating, and rely on recent results for characterizing quantum encryption schemes. We give definitions for (i) ciphertext unforgeability, (ii) indistinguishability under adaptive chosen-ciphertext attack, and (iii) authenticated encryption. The restriction of each definition to the classical setting is at least as strong as the corresponding classical notion: (i) implies INT-CTXT, (ii) implies IND-CCA2, and (iii) implies AE. All of our new notions also imply QIND-CPA privacy. Combining one-time authentication and classical pseudorandomness, we construct schemes for each of these new quantum security notions, and provide several separation examples. Along the way, we also give a new definition of one-time quantum authentication which, unlike all previous approaches, authenticates ciphertexts rather than plaintexts.
  • PDF
    Entanglement distribution is a prerequisite for several important quantum information processing and computing tasks, such as quantum teleportation, quantum key distribution, and distributed quantum computing. In this work, we focus on two-dimensional quantum networks based on optical quantum technologies using dual-rail photonic qubits. We lay out a quantum network architecture for entanglement distribution between distant parties, with the technological constraint that quantum repeaters equipped with quantum memories are not currently widely available. We also discuss several quantum network topologies for the building of a fail-safe quantum internet. We use percolation theory to provide figures of merit on the loss parameter of the optical medium for networks with bow-tie lattice and Archimedean lattice topologies. These figures of merit allow for comparisons of the robustness of different networks against intermittent failures of their nodes and against intermittent photon loss, which is an important consideration in the realization of the quantum internet.
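    A minimal sketch of the kind of percolation estimate involved, on a square lattice for concreteness (the paper's bow-tie and Archimedean lattices differ): each edge survives photon loss independently with probability $p$, and one asks whether a spanning cluster exists.

    ```python
    import random
    from collections import deque

    def percolates(L, p, rng=random):
        """Bond percolation on an L x L square lattice: keep each edge with
        probability p, then BFS from the left column to test for a
        left-to-right spanning cluster."""
        right = {(x, y): rng.random() < p for x in range(L - 1) for y in range(L)}
        up = {(x, y): rng.random() < p for x in range(L) for y in range(L - 1)}
        seen = {(0, y) for y in range(L)}
        queue = deque(seen)
        while queue:
            x, y = queue.popleft()
            if x == L - 1:
                return True
            steps = []
            if x + 1 < L and right[(x, y)]:
                steps.append((x + 1, y))
            if x > 0 and right[(x - 1, y)]:
                steps.append((x - 1, y))
            if y + 1 < L and up[(x, y)]:
                steps.append((x, y + 1))
            if y > 0 and up[(x, y - 1)]:
                steps.append((x, y - 1))
            for nb in steps:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        return False

    # Crossing probability near the exact square-lattice bond threshold p_c = 1/2.
    print(sum(percolates(40, 0.5) for _ in range(100)) / 100)
    ```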
  • PDF
    Simulating quantum contextuality with classical systems requires memory. A fundamental yet open question is what minimum memory is needed and, therefore, in what precise sense quantum systems outperform classical ones. Here we make rigorous the notion of classically simulating quantum state-independent contextuality (QSIC) in the case of a single quantum system submitted to an infinite sequence of measurements randomly chosen from a finite QSIC set. We obtain the minimum memory classical systems need to simulate arbitrary QSIC sets under the assumption that the simulation should not contain any oracular information. In particular, we show that, while classically simulating two qubits tested with the Peres-Mermin set requires $\log_2 24 \approx 4.585$ bits, simulating a single qutrit tested with the Yu-Oh set requires, at least, $5.740$ bits.
  • PDF
    We develop a machine learning method to construct accurate ground-state wave functions of strongly interacting and entangled quantum spin as well as fermionic models on lattices. A restricted Boltzmann machine algorithm in the form of an artificial neural network is combined with a conventional variational Monte Carlo method with pair product (geminal) wave functions and quantum number projections. The combined method substantially improves the accuracy beyond that ever achieved by each method separately, in the Heisenberg as well as Hubbard models on square lattices, thus proving its power as a highly accurate quantum many-body solver.
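    A minimal sketch of the restricted-Boltzmann-machine wavefunction ansatz underlying such methods (the standard form, with real parameters for simplicity; the paper's combination with pair-product wave functions and quantum number projections is not reproduced):

    ```python
    import numpy as np

    class RBMWavefunction:
        """psi(s) = exp(a . s) * prod_j 2 cosh(b_j + sum_i W_ji s_i)
        for spin configurations s in {-1, +1}^n."""

        def __init__(self, n_visible, n_hidden, seed=0):
            rng = np.random.default_rng(seed)
            self.a = 0.01 * rng.standard_normal(n_visible)
            self.b = 0.01 * rng.standard_normal(n_hidden)
            self.W = 0.01 * rng.standard_normal((n_hidden, n_visible))

        def log_psi(self, s):
            # Hidden units are traced out analytically, giving cosh factors.
            theta = self.b + self.W @ s
            return self.a @ s + np.sum(np.log(2.0 * np.cosh(theta)))

    psi = RBMWavefunction(n_visible=8, n_hidden=16)
    s = np.random.default_rng(1).choice([-1.0, 1.0], size=8)
    print(psi.log_psi(s))
    ```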
  • PDF
    To analyze non-Markovianity of tripartite quantum states from a resource-theoretic viewpoint, we introduce a class of quantum operations performed by three distant parties, and investigate the operational resource theory induced by it. A tripartite state is a free state if and only if it is a quantum Markov chain. We prove monotonicity of functions such as the conditional mutual information, intrinsic information, squashed entanglement, a generalization of entanglement of purification, and the relative entropy of recovery. We identify bound sets that have a clear correspondence to the monotone functions, and analyze their inclusion relations. We introduce a task of "non-Markovianity dilution", and prove that the optimal rate for the task, namely the "non-Markovianity cost", is bounded from above by the regularized entanglement of purification. We also propose a classical resource theory of non-Markovianity.
  • PDF
    Quantum bits based on individual trapped atomic ions constitute a promising technology for building a quantum computer, with all the elementary operations having been achieved with the necessary precision. However, the essential two-qubit logic gate used for generating quantum entanglement has hitherto always been performed in an adiabatic regime, where the gate is slow compared with the characteristic motional frequencies of ions in the trap, giving logic speeds of order 10 kHz. There have been numerous proposals for performing gates faster than this natural "speed limit" of the trap. We implement the method of Steane et al., which uses tailored laser pulses: these are shaped on 10 ns timescales to drive the ions' motion along trajectories designed such that the gate operation is insensitive to the initial phase of the optical field. This permits fast (MHz-rate) quantum logic which is robust to this important source of experimental error. We demonstrate entanglement generation for gate times as short as 480 ns; this is less than a single oscillation period of an ion in the trap, and 8 orders of magnitude shorter than the memory coherence time measured in similar calcium-43 hyperfine qubits. The method's power is most evident at intermediate timescales, where it yields more than an order of magnitude reduction in gate error compared with conventional techniques; for example, we achieve a 1.6 $\mu$s gate with fidelity 99.8%. Still faster gates are possible at the price of higher laser intensity. The method requires only a single amplitude-shaped pulse and one pair of beams derived from a cw laser, and offers the prospect of combining the unrivalled coherence properties, optical connectivity and operation fidelities of trapped-ion qubits with the sub-microsecond logic speeds usually associated with solid state devices.
  • PDF
    It has long been recognized that certain quantum correlations are incompatible with particular assumptions about classical causal structure. Given a causal structure of unknown classicality, the presence of such correlations certifies the non-classical nature of the causal structure in a device-independent fashion. In structures where all parties share a common resource, these non-classical correlations are also known as non-local correlations. Any constraint satisfied by all correlations which are classically compatible with a given causal structure defines a causal compatibility criterion. Such criteria were recently derived for the Triangle structure (arXiv:1609.00672) in the form of polynomial inequalities, raising the question: do any of those inequalities admit violation by quantum correlations? Numerical investigation suggests not, and we further conjecture that the set of correlations admitted by the classical Triangle structure is equivalent to the set admitted by its quantum generalization whenever the three observable variables are binary. Our main contribution in this work, however, is the derivation of new causal compatibility inequalities for the Triangle structure which do admit quantum violation. This provides the first robust-to-noise witness of quantum correlations in the Triangle structure. We conclude by considering the possibility of quantum resources potentially qualitatively different from those known previously.
  • PDF
    The quantum autoencoder is a recent paradigm in the field of quantum machine learning, which may enable an enhanced use of resources in quantum technologies. To this end, quantum neural networks with fewer nodes in the inner than in the outer layers were considered. Here, we propose a useful connection between approximate quantum adders and quantum autoencoders. Specifically, this link allows us to employ optimized approximate quantum adders, obtained with genetic algorithms, for the implementation of quantum autoencoders for a variety of initial states. Furthermore, we can also directly optimize the quantum autoencoders via genetic algorithms. Our approach opens a different path for the design of quantum autoencoders in controllable quantum platforms.
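    A minimal sketch of the genetic-algorithm loop used in such optimizations; the genome encoding, fitness function, and hyperparameters below are generic placeholders, not those of the paper.

    ```python
    import random

    def genetic_optimize(fitness, genome_len, pop=50, gens=200,
                         mut_rate=0.05, rng=random):
        """Maximize `fitness` over bit-string genomes using truncation
        selection, one-point crossover, and bit-flip mutation."""
        population = [[rng.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop)]
        for _ in range(gens):
            scored = sorted(population, key=fitness, reverse=True)
            parents = scored[:pop // 2]             # truncation selection
            children = []
            while len(children) < pop - len(parents):
                p1, p2 = rng.sample(parents, 2)
                cut = rng.randrange(1, genome_len)  # one-point crossover
                child = [1 - g if rng.random() < mut_rate else g
                         for g in p1[:cut] + p2[cut:]]  # bit-flip mutation
                children.append(child)
            population = parents + children
        return max(population, key=fitness)

    # Toy usage: recover the all-ones genome.
    print(genetic_optimize(fitness=sum, genome_len=16))
    ```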
  • PDF
    Maximum A posteriori Probability (MAP) inference in graphical models amounts to solving a graph-structured combinatorial optimization problem. Popular inference algorithms such as belief propagation (BP) and generalized belief propagation (GBP) are intimately related to linear programming (LP) relaxation within the Sherali-Adams hierarchy. Despite the popularity of these algorithms, it is well understood that the Sum-of-Squares (SOS) hierarchy based on semidefinite programming (SDP) can provide superior guarantees. Unfortunately, SOS relaxations for a graph with $n$ vertices require solving an SDP with $n^{\Theta(d)}$ variables where $d$ is the degree in the hierarchy. In practice, for $d\ge 4$, this approach does not scale beyond a few tens of variables. In this paper, we propose binary SDP relaxations for MAP inference using the SOS hierarchy with two innovations focused on computational efficiency. Firstly, in analogy to BP and its variants, we only introduce decision variables corresponding to contiguous regions in the graphical model. Secondly, we solve the resulting SDP using a non-convex Burer-Monteiro style method, and develop a sequential rounding procedure. We demonstrate that the resulting algorithm can solve problems with tens of thousands of variables within minutes, and outperforms BP and GBP on practical problems such as image denoising and Ising spin glasses. Finally, for specific graph types, we establish a sufficient condition for the tightness of the proposed partial SOS relaxation.
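    A minimal sketch of the Burer-Monteiro idea for a binary SDP, shown for the generic problem of minimizing $\langle C, X \rangle$ over PSD matrices $X$ with unit diagonal via the low-rank factorization $X = V^\top V$ (the block-coordinate update below is the "mixing method"; the paper's region-based variables and sequential rounding procedure are not reproduced):

    ```python
    import numpy as np

    def burer_monteiro(C, k, iters=200, seed=0):
        """Approximately minimize <C, V^T V> over unit columns v_i in R^k,
        i.e. the rank-k factorization X = V^T V with diag(X) = 1.
        C is a symmetric cost/coupling matrix."""
        n = C.shape[0]
        rng = np.random.default_rng(seed)
        V = rng.standard_normal((k, n))
        V /= np.linalg.norm(V, axis=0)
        for _ in range(iters):
            for i in range(n):
                g = V @ C[i] - C[i, i] * V[:, i]  # gradient w.r.t. v_i
                norm = np.linalg.norm(g)
                if norm > 0:
                    V[:, i] = -g / norm           # exact minimizer for block i
        return V

    def round_to_spins(V, seed=1):
        """Random-hyperplane rounding of the columns of V to +/-1 spins."""
        r = np.random.default_rng(seed).standard_normal(V.shape[0])
        return np.sign(r @ V)
    ```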
  • PDF
    High-dimensional encoding of quantum information provides a promising method of transcending current limitations in quantum communication. One of the central challenges in the pursuit of such an approach is the certification of high-dimensional entanglement. In particular, it is desirable to do so without resorting to inefficient full state tomography. Here, we show how carefully constructed measurements in two or more bases can be used to efficiently certify high-dimensional states and their entanglement under realistic conditions. We considerably improve upon existing criteria and introduce new entanglement dimensionality witnesses which we put to the test for photons entangled in their orbital angular momentum. In our experimental setup, we are able to verify 8-dimensional entanglement for 11-dimensional subspaces, at present the highest amount certified without assumptions on the state itself.
  • PDF
    We introduce a measurement-device-independent star network which is conveniently based on continuous-variable systems and standard linear optics. Here, an arbitrary number of users send modulated coherent states to an untrusted relay where a generalized Bell detection creates multi-partite secret correlations. These correlations are then distilled into a shared secret key to implement a completely secure quantum conference or, alternatively, a protocol of quantum secret sharing. Our scheme is composably secure and able to achieve high rates with a cheap optical implementation.
  • PDF
    We develop a framework for certifying randomness from Bell-test trials based on directly estimating the probability of the measurement outcomes with adaptive test supermartingales. The number of trials need not be predetermined, and one can stop performing trials early, as soon as the desired amount of randomness is extractable. It can be used with arbitrary, partially known and time-dependent probabilities for the random settings choices. Furthermore, it is suitable for application to experimental configurations with low Bell violation per trial, such as current optical loophole-free Bell tests. It is possible to adapt to time-varying experimental parameters. We formulate the framework for the general situation where the trial probability distributions are constrained to a known set. Randomness expansion with logarithmic settings entropy is possible for many relevant configurations. We implement probability estimation numerically and apply it to a representative settings-conditional outcome probability distribution from an atomic loophole-free Bell test [Rosenfeld et al., Phys. Rev. Lett. 119:010402 (2017), arXiv:1611.04604 (2016)] to illustrate trade-offs between the amount of randomness, error, settings entropy, unknown settings biases, and number of trials. We then show that probability estimation yields more randomness from the loophole-free Bell-test data analyzed in [Bierhorst et al., arXiv:1702.05178 (2017)] and tolerates adversarial settings probability biases.
  • PDF
    The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for quantum-proof cryptography. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than those of current classical communication systems. In an effort towards a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that is capable of generating provably-secure cryptographic keys at megabit-per-second (Mbps) rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors (SNSPDs) employed in our system that feature very high detection efficiency (of over 70%) and low timing jitter (of less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can readily be extended to free-space quantum channels. The security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system.
  • PDF
    The notion of potential output purity of a completely positive map is introduced as a generalization of the regularized output purity. An upper bound is derived for this quantity, and for several classes of maps (including CQ, QC and Hadamard channels) it is shown that potential purity does not exceed the standard output purity. As an application the potential purity is used to bound the logarithmic Sobolev constant of a product of depolarizing channel semigroups.
  • PDF
    We apply advanced methods of control theory to open quantum systems and we determine finite-time processes which are optimal with respect to thermodynamic performance. General properties and necessary conditions characterizing optimal drivings are derived, obtaining bang-bang type solutions corresponding to control strategies switching between adiabatic and isothermal transformations. A direct application of these results is the maximization of the work produced by a generic quantum heat engine, where we show that the maximum power is directly linked to a particular conserved quantity naturally emerging from the control problem. Finally we apply our general approach to the specific case of a two-level system, which can be put in contact with two different baths at fixed temperatures, identifying the processes which minimize heat dissipation. Moreover, we explicitly solve the optimization problem for a cyclic two-level heat engine driven beyond the linear-response regime, determining the corresponding optimal cycle, the maximum power, and the efficiency at maximum power.
  • PDF
    Covert communication conceals the transmission of the message from an attentive adversary. Recent work on the limits of covert communication in additive white Gaussian noise (AWGN) channels has demonstrated that a covert transmitter (Alice) can reliably transmit a maximum of $\mathcal{O}\left(\sqrt{n}\right)$ bits to a covert receiver (Bob) without being detected by an adversary (Warden Willie) in $n$ channel uses. This paper focuses on the scenario where other friendly nodes distributed according to a two-dimensional Poisson point process with density $m$ are present in the environment. We propose a strategy where the friendly node closest to the adversary, without close coordination with Alice, produces artificial noise. We show that this method allows Alice to reliably and covertly send $\mathcal{O}(\min\{{n,m^{\gamma/2}\sqrt{n}}\})$ bits to Bob in $n$ channel uses, where $\gamma$ is the path-loss exponent. Moreover, we also consider a setting where there are $N_{\mathrm{w}}$ collaborating adversaries uniformly and randomly located in the environment and show that in $n$ channel uses, Alice can reliably and covertly send $\mathcal{O}\left(\min\left\{n,\frac{m^{\gamma/2} \sqrt{n}}{N_{\mathrm{w}}^{\gamma}}\right\}\right)$ bits to Bob when $\gamma >2$, and $\mathcal{O}\left(\min\left\{n,\frac{m \sqrt{n}}{N_{\mathrm{w}}^{2}\log^2 {N_{\mathrm{w}}}}\right\}\right)$ when $\gamma = 2$. Conversely, under mild restrictions on the communication strategy, we demonstrate that no higher covert throughput is possible for $\gamma>2$.
  • PDF
    We study the least squares regression problem $\min_{\Theta \in \mathcal{S}_{\odot D,R}} \|A\Theta-b\|_2$, where $\mathcal{S}_{\odot D,R}$ is the set of $\Theta$ for which $\Theta = \sum_{r=1}^{R} \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}$ for vectors $\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$, and $\circ$ denotes the outer product of vectors. That is, $\Theta$ is a low-dimensional, low-rank tensor. This is motivated by the fact that the number of parameters in $\Theta$ is only $R \cdot \sum_{d=1}^D p_d$, which is significantly smaller than the $\prod_{d=1}^{D} p_d$ number of parameters in ordinary least squares regression. We consider the above CP decomposition model of tensors $\Theta$, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on sparse random projections $\Phi \in \mathbb{R}^{m \times n}$, with $m \ll n$, to reduce the problem to a much smaller problem $\min_{\Theta} \|\Phi A \Theta - \Phi b\|_2$, for which if $\Theta'$ is a near-optimum to the smaller problem, then it is also a near-optimum to the original problem. We obtain significantly smaller dimension and sparsity in $\Phi$ than is possible for ordinary least squares regression, and we also provide a number of numerical simulations supporting our theory.
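    A minimal sketch of one standard sparse random projection (CountSketch, a single nonzero per column of $\Phi$), applied here to a plain least squares problem; the paper's constructions for the tensor setting are more refined.

    ```python
    import numpy as np

    def countsketch(M, m, rng):
        """Apply an m x n CountSketch Phi to the rows of M without
        materializing Phi: row i is hashed to bucket h(i) and scaled
        by a random sign s(i)."""
        n = M.shape[0]
        buckets = rng.integers(0, m, size=n)
        signs = rng.choice([-1.0, 1.0], size=n)
        SM = np.zeros((m, M.shape[1]))
        np.add.at(SM, buckets, signs[:, None] * M)
        return SM

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5000, 20))
    b = rng.standard_normal((5000, 1))
    # Sketch [A | b] with the same Phi, then solve the smaller problem.
    SAb = countsketch(np.hstack([A, b]), m=400, rng=rng)
    x_sketch, *_ = np.linalg.lstsq(SAb[:, :-1], SAb[:, -1], rcond=None)
    x_exact, *_ = np.linalg.lstsq(A, b[:, 0], rcond=None)
    print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
    ```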
  • PDF
    In this paper we show that, for almost all random Erdős-Rényi ${\mathcal G}(n,p)$ graphs, all nodes can be found optimally using the continuous-time quantum spatial search procedure. This works for both adjacency and Laplacian matrices, though under different conditions. The first requires $p=\omega(\log^8(n)/n)$, while the second requires $p\geq(1+\varepsilon)\log(n)/n$, where $\varepsilon>0$. The proof proceeds by analyzing the convergence of the eigenvectors corresponding to outlying eigenvalues in the $\|\cdot\|_\infty$ norm. At the same time, for $p<(1-\varepsilon)\log(n)/n$ the property does not hold for either matrix, due to connectivity issues. Hence, our derivation concerning the Laplacian matrix is tight.
  • PDF
    In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to maintaining this rapid progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results difficult to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines, and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field, by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.
  • PDF
    Chromatic polynomials and related graph invariants are central objects in both graph theory and statistical physics. Computational difficulties, however, have so far restricted studies of such polynomials to graphs that were either very small, very sparse or highly structured. Recent algorithmic advances (Timme et al 2009 New J. Phys. 11 023001) now make it possible to compute chromatic polynomials for moderately sized graphs of arbitrary structure and number of edges. Here we present chromatic polynomials of ensembles of random graphs with up to 30 vertices, over the entire range of edge density. We specifically focus on the locations of the zeros of the polynomial in the complex plane. The results indicate that the chromatic zeros of random graphs have a very consistent layout. In particular, the crossing point, the point at which the chromatic zeros with non-zero imaginary part approach the real axis, scales linearly with the average degree over most of the density range. While the scaling laws obtained are purely empirical, if they continue to hold in general there are significant implications: the crossing points of chromatic zeros in the thermodynamic limit separate systems with zero ground state entropy from systems with positive ground state entropy, the latter an exception to the third law of thermodynamics.
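    For concreteness, exact chromatic polynomials of small graphs can be computed by deletion-contraction, $P(G,k) = P(G-e,k) - P(G/e,k)$; a minimal (exponential-time) sketch follows, which is not the algorithm of Timme et al.:

    ```python
    def chromatic_polynomial(num_vertices, edge_list, k):
        """Evaluate P(G, k) by deletion-contraction:
        P(G, k) = P(G - e, k) - P(G / e, k).
        Exponential time, so intended for small graphs only."""
        edges = frozenset(frozenset(e) for e in edge_list)

        def rec(nv, es):
            if not es:
                return k ** nv                  # no edges: k colours per vertex
            e = next(iter(es))
            u, v = sorted(e)
            rest = es - {e}
            # Contract: merge v into u; loops vanish, parallel edges collapse.
            merged = frozenset(
                frozenset(u if w == v else w for w in f) for f in rest)
            merged = frozenset(f for f in merged if len(f) == 2)
            return rec(nv, rest) - rec(nv - 1, merged)

        return rec(num_vertices, edges)

    # P(triangle, k) = k (k-1) (k-2), so P(triangle, 3) = 6.
    print(chromatic_polynomial(3, [(0, 1), (1, 2), (0, 2)], 3))
    ```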
  • PDF
    Black box variational inference (BBVI) with reparameterization gradients triggered the exploration of divergence measures other than the Kullback-Leibler (KL) divergence, such as alpha divergences. These divergences can be tuned to be more mass-covering (preventing overfitting in complex models), but are also often harder to optimize using Monte-Carlo gradients. In this paper, we view BBVI with generalized divergences as a form of biased importance sampling. The choice of divergence determines a bias-variance tradeoff between the tightness of the bound (low bias) and the variance of its gradient estimators. Drawing on variational perturbation theory of statistical physics, we use these insights to construct a new variational bound which is tighter than the KL bound and more mass covering. Compared to alpha-divergences, its reparameterization gradients have a lower variance. We show in several experiments on Gaussian Processes and Variational Autoencoders that the resulting posterior covariances are closer to the true posterior and lead to higher likelihoods on held-out data.
  • PDF
    Can a large system be fully characterized using its subsystems via inductive reasoning? Is it possible to completely reduce the behavior of a complex system to the behavior of its simplest "atoms"? In the following paper we answer these questions in the negative for a specific class of systems and measurements. We begin with a simple two-particle example, where strong correlations arise between two apparently empty boxes. This leads to surprising new effects within atomic and electromagnetic systems. A general construction based on pre- and post-selected ensembles is then suggested, where the N-body correlation can be genuinely perceived as a global property, as long as one is limited to performing a small set of measurements which we term "strictly local". We conclude that within time-symmetric quantum mechanics and under certain boundary conditions, high-order correlations can determine low-order ones, but not vice versa. Moreover, the latter seem to provide no information at all regarding the former. This supports a top-down structure in many-body quantum mechanics.
  • PDF
    Dependent rounding is a popular technique in designing approximation algorithms. It allows us to randomly round a fractional solution $x$ to an integral vector $X$ such that $E[X] = x$, all $X_i$'s are ("cylindrically") negatively correlated, and the cardinality constraint $\sum_i X_i = \sum_i x_i$ holds with probability one. One can also essentially replace the cardinality constraint by a general linear constraint $\sum_i a_i X_i = \sum_i a_i x_i$ if the $a_i$'s are non-negative. However, in certain problems, the coefficients $a_i$ may have mixed signs; furthermore, some applications require correlation inequalities that are much more general than negative correlation. In this paper, we propose a generalized dependent rounding technique, called symmetric randomized dependent rounding, which can round $x$ to an almost-integer vector so that any given linear constraint is satisfied with probability one and such that all the variables are nearly independent. We give two illustrative examples of this technique: a randomized algorithm for the knapsack center problem with new average approximation guarantees and an improved bi-criteria approximation ratio for the knapsack median problem.
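    For context, a minimal sketch of the basic dependent rounding step that preserves a cardinality constraint (the classical pairing step; the paper's symmetric randomized variant for general signed coefficients is more involved):

    ```python
    import random

    def dependent_round(x, rng=random):
        """Round fractional x in [0,1]^n so that E[X] = x and, when sum(x)
        is integral, sum(X) == sum(x) with probability one: repeatedly pick
        two fractional coordinates and shift mass between them."""
        x, eps = list(x), 1e-12
        while True:
            frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
            if len(frac) < 2:
                break
            i, j = frac[0], frac[1]
            d1 = min(1 - x[i], x[j])   # move: x_i up, x_j down
            d2 = min(x[i], 1 - x[j])   # move: x_i down, x_j up
            # Probabilities chosen so each coordinate is unbiased: E[X] = x.
            if rng.random() < d2 / (d1 + d2):
                x[i], x[j] = x[i] + d1, x[j] - d1
            else:
                x[i], x[j] = x[i] - d2, x[j] + d2
        return [round(v) for v in x]

    print(dependent_round([0.5, 0.5, 0.7, 0.3]))  # always sums to 2
    ```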
  • PDF
    We explain how asymptotic safety arises in four-dimensional supersymmetric gauge theories. We provide asymptotically safe supersymmetric gauge theories together with their superconformal fixed points, R-charges, phase diagrams, and UV-IR connecting trajectories. Strict perturbative control is achieved in a Veneziano limit. Consistency with unitarity and the a-theorem is established. We find that supersymmetry enhances the predictivity of asymptotically safe theories.
  • PDF
    One of the most interesting features of Bayesian optimization for direct policy search is that it can leverage priors (e.g., from simulation or from previous tasks) to accelerate learning on a robot. In this paper, we are interested in situations for which several priors exist but we do not know in advance which one fits best the current situation. We tackle this problem by introducing a novel acquisition function, called Most Likely Expected Improvement (MLEI), that combines the likelihood of the priors and the expected improvement. We evaluate this new acquisition function on a transfer learning task for a 5-DOF planar arm and on a possibly damaged, 6-legged robot that has to learn to walk on flat ground and on stairs, with priors corresponding to different stairs and different kinds of damages. Our results show that MLEI effectively identifies and exploits the priors, even when there is no obvious match between the current situations and the priors.
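    For reference, the standard expected improvement acquisition that MLEI reweights by the likelihood of each prior (the exact combination is the paper's; `mu` and `sigma` below denote the GP posterior mean and standard deviation at a candidate point, and the closed form is the usual one for maximization):

    ```python
    import math

    def expected_improvement(mu, sigma, f_best):
        """EI(x) = (mu - f_best) Phi(z) + sigma phi(z), z = (mu - f_best)/sigma,
        with Phi/phi the standard normal CDF/PDF."""
        if sigma <= 0.0:
            return max(mu - f_best, 0.0)
        z = (mu - f_best) / sigma
        Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # normal CDF
        phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal PDF
        return (mu - f_best) * Phi + sigma * phi

    print(expected_improvement(mu=1.2, sigma=0.5, f_best=1.0))
    ```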
  • PDF
    Quantum-secure signature schemes have received a lot of attention recently, in particular because of the NIST call to standardize quantum-safe cryptography. However, only a few signature schemes can claim concrete quantum security because of technical difficulties associated with the Quantum Random Oracle Model (QROM). In this paper, we show that code-based signature schemes based on the full domain hash paradigm can behave very well in the QROM, i.e. that we can have tight security reductions. We also study quantum algorithms related to the underlying code-based assumption. Finally, we apply our reduction to a concrete example: the SURF signature scheme. We provide parameters for 128 bits of quantum security in the QROM and show that the obtained parameters are competitive compared to other similar quantum-secure signature schemes.
  • PDF
    We develop some basic results in a higher dimensional foliated Mori theory, and show how these results can be used to prove a structure theorem for the Kleiman-Mori cone of curves in terms of the numerical properties of $K_{\mathcal{F}}$ for rank 2 foliations on threefolds. We also make progress toward realizing a minimal model program for rank 2 foliations on threefolds.
  • PDF
    Covert communication offers a method to transmit messages in such a way that it is not possible to detect that the communication is happening at all. In this work, we report an experimental demonstration of covert communication that is provably secure against unbounded quantum adversaries. The covert communication is carried out over 10 km of optical fiber, addressing the challenges associated with transmission over metropolitan distances. We deploy the protocol in a dense wavelength-division multiplexing infrastructure, where our system has to coexist with a co-propagating C-band classical channel. The noise from the classical channel allows us to perform covert communication in a neighbouring channel. We perform an optimization of all protocol parameters and report the transmission of three different messages with varying levels of security. Our results showcase the feasibility of secure covert communication in a practical setting, with several possible future improvements from both theory and experiment.
  • PDF
    Nonlinear modifications of quantum mechanics have a troubled history. They were initially studied for many promising reasons: resolving the measurement problem, testing the limits of standard quantum mechanics, and reconciling it with gravity. Two results substantially undermined the credibility of nonlinear theories. Some have been experimentally refuted, and more importantly, all deterministic nonlinear theories can be used for superluminal communication. However, these results are unconvincing because they overlook the fact that the distribution of measurement results predicted by nonlinear quantum mechanics depends on the interpretation of quantum mechanics that one uses. For instance, although the Everett and Copenhagen interpretations agree on the expression of Born's rule for the outcomes of multiple measurements in linear quantum mechanics, they disagree in nonlinear quantum mechanics. We present the range of expressions of Born's rule that can be obtained by applying different formulations of quantum mechanics to a class of nonlinear quantum theories. We then determine that many do not allow for superluminal communication but only two seem to have a reasonable justification. The first is the Everett interpretation, and the second, which we name causal-conditional, states that a measurement broadcasts its outcome to degrees of freedom in its future light-cone, which update the wavefunction that their nonlinear Hamiltonian depends on according to this new information.
  • PDF
    We analyze the generalization error of randomized learning algorithms -- focusing on stochastic gradient descent (SGD) -- using a novel combination of PAC-Bayes and algorithmic stability. Importantly, our risk bounds hold for all posterior distributions on the algorithm's random hyperparameters, including distributions that depend on the training data. This inspires an adaptive sampling algorithm for SGD that optimizes the posterior at runtime. We analyze this algorithm in the context of our risk bounds and evaluate it empirically on a benchmark dataset.
  • PDF
    In the model of gate-based quantum computation, the qubits are controlled by a sequence of quantum gates. In superconducting qubit systems, these gates can be implemented by voltage pulses. The success of implementing a particular gate can be expressed by various metrics such as the average gate fidelity, the diamond distance, and the unitarity. We analyze these metrics of gate pulses for a system of two superconducting transmon qubits coupled by a resonator, a system inspired by the architecture of the IBM Quantum Experience. The metrics are obtained by numerical solution of the time-dependent Schrödinger equation of the transmon system. We find that the metrics reflect systematic errors that are most pronounced for echoed cross-resonance gates, but that none of the studied metrics can reliably predict the performance of a gate when used repeatedly in a quantum algorithm.
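    For orientation, the average gate fidelity of a noisy channel $\mathcal{E}$ relative to an ideal unitary $U$ is related to the process (entanglement) fidelity $F_{\mathrm{pro}}$ by the standard formula

    $$F_{\mathrm{avg}} = \int d\psi\, \langle\psi| U^\dagger \mathcal{E}(|\psi\rangle\langle\psi|) U |\psi\rangle = \frac{d\, F_{\mathrm{pro}} + 1}{d + 1},$$

    with $d$ the Hilbert space dimension ($d = 4$ for the two-qubit gates considered here).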
  • PDF
    We revisit the definition of the Heisenberg category of level k. In level -1, this category was introduced originally by Khovanov, but with some additional cyclicity relations which we show here are unnecessary. In other negative levels, the definition is due to Mackaay and Savage, also with some redundant relations, while the level zero case is the affine oriented Brauer category of Brundan, Comes, Nash and Reynolds. We also discuss cyclotomic quotients.
  • PDF
    Topological phases, such as Chern insulators, are defined in terms of additive indices that are stable against the addition of trivial degrees of freedom. Also, such topology presents an obstruction to representing bands in terms of symmetric, exponentially localized Wannier functions. Here, we address the converse question: Do obstructions to Wannier representations imply stable band topology? We answer this in the negative, pointing out that some bands can also display a distinct type of "fragile topology." Bands with fragile topology admit a Wannier representation if and only if additional trivial degrees of freedom are supplied. We apply this notion to solve a puzzle in diagnosing band topology: A recent work [Nature 547, 298-305 (2017)] made the intriguing suggestion that whenever a so-called "elementary band representation" splits, the two sets of bands, separated by a band gap, are individually topological. Here, we construct a counterexample, defined on the honeycomb lattice, which features a split elementary band. We show that one of the two disconnected bands is completely trivial with exponentially localized, symmetric Wannier functions. This presents a conundrum regarding the nature of the second band, which is resolved by recognizing that it is topological, but in the fragile sense. Our model thus provides a physical example of fragile topology.
  • PDF
    We consider the reduced dynamics of a small quantum system in interaction with a reservoir when the initial state is factorized. We present a rigorous derivation of a GKLS master equation in the weak-coupling limit for a generic bath, which is not assumed to have a bosonic or fermionic nature, and whose reference state is not necessarily thermal. The crucial assumption is a reservoir state endowed with a mixing property: the n-point connected correlation function of the interaction must be asymptotically bounded by the product of two-point functions (clustering property).
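    The GKLS (Gorini-Kossakowski-Lindblad-Sudarshan) form referred to here is the standard generator of a quantum dynamical semigroup,

    $$\frac{d\rho}{dt} = -i[H, \rho] + \sum_k \gamma_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\{L_k^\dagger L_k, \rho\} \right), \qquad \gamma_k \geq 0.$$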
  • PDF
    A low-precision deep neural network training technique for producing sparse, ternary neural networks is presented. The technique incorporates hardware implementation costs during training to achieve significant model compression for inference. Training involves three stages: network training using L2 regularization and a quantization threshold regularizer, quantization pruning, and finally retraining. Resulting networks achieve improved accuracy, reduced memory footprint and reduced computational complexity compared with conventional methods, on the MNIST and CIFAR10 datasets. Our networks are up to 98% sparse and 5 and 11 times smaller than equivalent binary and ternary models, translating to significant resource and speed benefits for hardware implementations.
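    A minimal sketch of threshold-based ternarization of a weight tensor; the rule below, with $\Delta = 0.7\,\mathbb{E}|w|$, is a common heuristic from the ternary-weight-network literature, used here as an illustrative assumption rather than the paper's exact regularizer.

    ```python
    import numpy as np

    def ternarize(w, delta_scale=0.7):
        """Map float weights to {-alpha, 0, +alpha}: zero out weights below
        threshold delta, keep the signs of the rest, and set alpha to the
        mean magnitude of the surviving weights."""
        delta = delta_scale * np.mean(np.abs(w))
        mask = np.abs(w) > delta
        alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
        return alpha * np.sign(w) * mask

    w = np.random.default_rng(0).standard_normal((4, 4))
    print(ternarize(w))
    ```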
  • PDF
    We analyze bias correction methods based on the jackknife, the bootstrap, and Taylor series. We focus on the binomial model, and consider the problem of bias correction for estimating $f(p)$, where $f \in C[0,1]$ is arbitrary. We characterize the supremum norm of the bias of general jackknife and bootstrap estimators for any continuous function, and demonstrate that, in the delete-$d$ jackknife, different values of $d$ may lead to drastically different behavior. We show that in the binomial model, iterating the bootstrap bias correction infinitely many times may lead to divergence of bias and variance, and demonstrate that the bias properties of the bootstrap bias-corrected estimator after $r-1$ rounds are exactly the same as those of the $r$-jackknife estimator if a bounded coefficients condition is satisfied.
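    A minimal sketch of the delete-1 jackknife bias correction, with a placeholder plug-in estimator of $f(p) = p^2$:

    ```python
    import numpy as np

    def jackknife_bias_correct(x, estimator):
        """Delete-1 jackknife: n * theta_hat - (n-1) * mean(leave-one-out
        estimates), which removes the O(1/n) bias term."""
        x = np.asarray(x)
        n = len(x)
        loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
        return n * estimator(x) - (n - 1) * loo.mean()

    rng = np.random.default_rng(0)
    x = rng.binomial(1, 0.3, size=200)
    plug_in = lambda s: s.mean() ** 2   # biased: E[phat^2] = p^2 + p(1-p)/n
    print(plug_in(x), jackknife_bias_correct(x, plug_in))
    ```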
  • PDF
    We establish an analogy between superconductor-metal interfaces and the quantum physics of a black hole, using the proximity effect. We show that the metal-superconductor interface can be thought of as an event horizon and Andreev reflection from the interface is analogous to the Hawking radiation in black holes. We describe quantum information transfer in Andreev reflection with a final state projection model similar to the Horowitz-Maldacena model for black hole evaporation. We also propose the Andreev reflection-analogue of Hayden and Preskill's description of a black hole final state, where the black hole is described as an information mirror. The analogy between Crossed Andreev Reflections and Einstein-Rosen bridges is discussed: our proposal gives a precise mechanism for the apparent loss of quantum information in a black hole by the process of nonlocal Andreev reflection, transferring the quantum information through a wormhole and into another universe. Given these established connections, we conjecture that the final quantum state of a black hole is exactly the same as the ground state wavefunction of the superconductor/superfluid in the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity; in particular, the infalling matter and the infalling Hawking quanta, described in the Horowitz-Maldacena model, forms a Cooper pair-like singlet state inside the black hole. A black hole evaporating and shrinking in size can be thought of as the analogue of Andreev reflection by a hole where the superconductor loses a Cooper pair. Our model does not suffer from the black hole information problem since Andreev reflection is unitary. We also relate the thermodynamic properties of a black hole to that of a superconductor, and propose an experiment which can demonstrate the negative specific heat feature of black holes in a growing/evaporating condensate.
  • PDF
    Graph classification is a problem with practical applications in many different domains. Most of the existing methods take the entire graph into account when calculating graph features. In a graphlet-based approach, for instance, the entire graph is processed to get the total count of different graphlets or sub-graphs. In the real-world, however, graphs can be both large and noisy with discriminative patterns confined to certain regions in the graph only. In this work, we study the problem of attentional processing for graph classification. The use of attention allows us to focus on small but informative parts of the graph, avoiding noise in the rest of the graph. We present a novel RNN model, called the Graph Attention Model (GAM), that processes only a portion of the graph by adaptively selecting a sequence of "interesting" nodes. The model is equipped with an external memory component which allows it to integrate information gathered from different parts of the graph. We demonstrate the effectiveness of the model through various experiments.

Recent comments

James Wootton Sep 21 2017 05:41 UTC

What does this imply for https://scirate.com/arxiv/1608.00263? I'm guessing they still regard it as valid (it is ref [14]), but just too hard to implement for now.

Ben Criger Sep 08 2017 08:09 UTC

Oh look, there's another technique for decoding surface codes subject to X/Z correlated errors: https://scirate.com/arxiv/1709.02154

Aram Harrow Sep 06 2017 07:54 UTC

The paper only applies to conformal field theories, and such a result cannot hold for more general 1-D systems by 0705.4077 and other papers (assuming standard complexity theory conjectures).

Felix Leditzky Sep 05 2017 21:27 UTC

Thanks for the clarification, Philippe!

Philippe Faist Sep 05 2017 21:09 UTC

Hi Felix, thanks for the good question.

We've found it more convenient to consider trace-nonincreasing and $\Gamma$-sub-preserving maps (and this is justified by the fact that they can be dilated to fully trace-preserving and $\Gamma$-preserving maps on a larger system). The issue arises because

...(continued)
Felix Leditzky Sep 05 2017 19:02 UTC

What is the reason/motivation to consider trace-non-increasing maps instead of trace-preserving maps in your framework and the definition of the coherent relative entropy?

Steve Flammia Aug 30 2017 22:30 UTC

Thanks for the reference Ashley. If I understand your paper, you are still measuring stabilizers of X- and Z-type at the top layer of the code. So it might be that we can improve on the factor of 2 that you found if we tailor the stabilizers to the noise bias at the base level.

Ashley Aug 30 2017 22:09 UTC

We followed Aliferis and Preskill's approach in [https://arxiv.org/abs/1308.4776][1] and found that the fault-tolerant threshold for the surface code was increased by approximately a factor of two, from around 0.75 per cent to 1.5 per cent for a bias of 10 to 100.

[1]: https://arxiv.org/abs/1308.4776

...(continued)
Stephen Bartlett Aug 30 2017 21:55 UTC

Following on from Steve's comments, it's possible to use the bias-preserving gate set in Aliferis and Preskill directly to do the syndrome extraction, as you build up a CNOT gadget, but such a direct application of your methods would be very complicated and involve a lot of gate teleportation. If y

...(continued)
Steve Flammia Aug 30 2017 21:38 UTC

We agree that finding good syndrome extraction circuits is an important question. At the moment we do not have such circuits, though we have started to think about them. We are optimistic that this can be done in principle, but it remains to be seen if the circuits can be made sufficiently simple to

...(continued)