Top arXiv papers

  • PDF
    In order to build a large-scale quantum computer, one must be able to correct errors extremely fast. We design a fast decoding algorithm for topological codes that corrects Pauli errors, erasures, and combinations of the two. Our algorithm has a worst-case complexity of $O(n \alpha(n))$, where $n$ is the number of physical qubits and $\alpha$ is the inverse of Ackermann's function, which grows very slowly; for all practical purposes, $\alpha(n) \leq 3$. We prove that our algorithm performs optimally for errors of weight up to $(d-1)/2$ and for loss of up to $d-1$ qubits, where $d$ is the minimum distance of the code. Numerically, we obtain a threshold of $9.9\%$ for the 2D toric code with perfect syndrome measurements and $2.6\%$ with faulty measurements.
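    The $O(n\,\alpha(n))$ scaling is the signature of a union-find (disjoint-set) data structure with path compression and union by rank, which the decoder uses to grow and merge clusters around syndrome defects. A minimal sketch of that standard underlying structure (the decoder's cluster-growth logic itself is not reproduced here):

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by rank.
    Any sequence of n operations costs O(n * alpha(n)), the
    inverse-Ackermann bound quoted above."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point every node on the walk at the root.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: hang the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```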
  • PDF
    Pure state entanglement transformations have been thought of as irreversible, with reversible transformations generally only possible in the limit of many copies. Here, we show that reversible entanglement transformations do not require processing on the many-copy level, but can instead be undertaken on individual systems, provided the amount of entanglement which is produced or consumed is allowed to fluctuate. We derive necessary and sufficient conditions for entanglement manipulations in this case. As a corollary, we derive an equation which quantifies the fluctuations of entanglement, which is formally identical to the Jarzynski fluctuation equality found in thermodynamics. One can also relate a forward entanglement transformation to its reverse process in terms of the entanglement cost of such a transformation, in a manner equivalent to the Crooks relation. We show that a strong converse theorem for entanglement transformations is related to the second law of thermodynamics, while the fact that the Schmidt rank of an entangled state cannot increase is related to the third law of thermodynamics. Achievability of the protocols is established by introducing an entanglement battery, a device which stores entanglement and supplies an amount of entanglement that is allowed to fluctuate but whose average cost is still optimal. This allows us to also solve the problem of partial entanglement recovery, and in fact, we show that entanglement is fully recovered. Allowing the amount of consumed entanglement to fluctuate also leads to improved and optimal entanglement dilution protocols.
  • PDF
    We study thermal states of strongly interacting quantum spin chains and prove that they can be efficiently represented in terms of convex combinations of matrix product states. Apart from revealing new features of the entanglement structure of Gibbs states, such as an area law for the entanglement of formation, our results provide a theoretical justification for the use of White's algorithm of minimally entangled typical thermal states. Furthermore, we shed new light on time-dependent matrix product state algorithms which yield hydrodynamical descriptions of the underlying dynamics.
  • PDF
    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for $\chi^{(2)}$-interaction based quantum computation: the $\chi^{(2)}$ parity-check code, the $\chi^{(2)}$ embedded error-correcting code, and the $\chi^{(2)}$ binomial code, all of which detect photon-loss or photon-gain errors by means of photon-number parity measurements and then correct them via $\chi^{(2)}$ Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes. The $\chi^{(2)}$ binomial code is of special interest because, with $m\le N$ identified from channel monitoring, it can correct $m$-photon loss errors, $m$-photon gain errors, and $(m-1)$th-order dephasing errors using logical qudits that are encoded in $O(N)$ photons. In comparison, other bosonic QEC codes require $O(N^2)$ photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the $\chi^{(2)}$ parity-check code and the $\chi^{(2)}$ embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our $\chi^{(2)}$ QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
  • PDF
    Quantum computers promise to efficiently solve not only problems believed to be intractable for classical computers, but also problems for which verifying the solution is itself considered intractable. This raises the question of how one can check whether quantum computers are indeed producing correct results. This task, known as quantum verification, has been highlighted as a significant challenge on the road to scalable quantum computing technology. We review the most significant approaches to quantum verification and compare them in terms of structure, complexity and required resources. We also comment on the use of cryptographic techniques which, for many of the presented protocols, have proven extremely useful in performing verification. Finally, we discuss issues related to fault tolerance, experimental implementations and the outlook for future protocols.
  • PDF
    We extend quantum Stein's lemma in asymmetric quantum hypothesis testing to composite null and alternative hypotheses. As our main result, we show that the asymptotic error exponent for testing convex combinations of quantum states $\rho^{\otimes n}$ against convex combinations of quantum states $\sigma^{\otimes n}$ is given by a regularized quantum relative entropy distance formula. We prove that in general such a regularization is needed but also discuss various settings where our formula as well as extensions thereof become single-letter. This includes a novel operational interpretation of the relative entropy of coherence in terms of hypothesis testing. For our proof, we start from the composite Stein's lemma for classical probability distributions and lift the result to the non-commutative setting by only using elementary properties of quantum entropy. Finally, our findings also imply an improved Markov-type lower bound on the quantum conditional mutual information in terms of the regularized quantum relative entropy, featuring an explicit and universal recovery map.
  • PDF
    We describe an efficient quantum algorithm for the quantum Schur transform. The Schur transform is an operation on a quantum computer that maps the standard computational basis to a basis composed of irreducible representations of the unitary and symmetric groups. We simplify and extend the algorithm of Bacon, Chuang, and Harrow, and provide a new practical construction as well as sharp theoretical and practical analyses. Our algorithm decomposes the Schur transform on $n$ qubits into $O(n^4 \log(n/\epsilon))$ operators in the Clifford+T fault-tolerant gate set. We extend our qubit algorithm to decompose the Schur transform on $n$ qudits of dimension $d$ into $O(d^{1+p} n^{2d+1} \log^p(dn/\epsilon))$ primitive operators from any universal gate set, for $p \approx 3.97$.
  • PDF
    We improve the number of T gates needed to perform an n-bit adder from 8n + O(1) to 4n + O(1). We do so via a "temporary logical-AND" construction, which uses four T gates to store the logical-AND of two qubits into an ancilla and zero T gates to later erase the ancilla. Temporary logical-ANDs are a generally useful tool when optimizing T-counts. They can be applied to integer arithmetic, modular arithmetic, rotation synthesis, the quantum Fourier transform, Shor's algorithm, Grover oracles, and many other circuits. Because T gates dominate the cost of quantum computation based on the surface code, and temporary logical-ANDs are widely applicable, our constructions represent a significant reduction in projected costs of quantum computation. We also present an n-bit controlled adder circuit with T-count of 8n + O(1), a temporary adder that can be computed for the same cost as the normal adder but whose result can be kept until it is later uncomputed without using T gates, and discuss some other constructions whose T-count is improved by the temporary logical-AND.
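    A hedged Cirq sketch of the two halves of the gadget: a standard 4-T circuit that computes AND into a fresh ancilla, and the T-free erasure, which measures the ancilla in the X basis and repairs the phase with a classically controlled CZ. The gate ordering below follows a textbook 4-T AND identity and may differ cosmetically from the paper's figures; `with_classical_controls` assumes a recent Cirq version.

```python
import cirq

a, b, t = cirq.LineQubit.range(3)

# Compute |a, b, 0> -> |a, b, a AND b> using 4 T gates on the ancilla.
compute_and = cirq.Circuit([
    cirq.H(t),
    cirq.CNOT(b, t), cirq.T(t) ** -1,
    cirq.CNOT(a, t), cirq.T(t),
    cirq.CNOT(b, t), cirq.T(t) ** -1,
    cirq.CNOT(a, t), cirq.T(t),
    cirq.H(t), cirq.S(t),
])

# Erase with zero T gates: X-basis measurement plus a conditional CZ
# to cancel the (-1)^(a AND b) phase left behind when the outcome is 1.
erase_and = cirq.Circuit([
    cirq.H(t),
    cirq.measure(t, key='m'),
    cirq.CZ(a, b).with_classical_controls('m'),
])

# Sanity check: the compute step acts correctly on all four basis inputs.
for x in range(4):
    bits = f'{x:02b}'
    prep = cirq.Circuit(cirq.X(q) for q, bit in zip((a, b), bits) if bit == '1')
    state = cirq.Simulator().simulate(prep + compute_and).final_state_vector
    print(bits, cirq.dirac_notation(state))
```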
  • PDF
    We explore several new converse bounds for classical communication over quantum channels in both the one-shot and asymptotic regimes. First, we show that the Matthews-Wehner meta-converse bound for entanglement-assisted classical communication can be achieved by activated, no-signalling assisted codes, suitably generalizing a result for classical channels. Second, we derive a new efficiently computable meta-converse on the amount of information unassisted codes can transmit over a single use of a quantum channel. As an application, we provide a finite resource analysis of classical communication over quantum erasure channels, including the second-order and moderate deviation asymptotics. Third, we explore the asymptotic analogue of our new meta-converse, the $\Upsilon$-information of the channel. We show that its regularization is an upper bound on the capacity that is generally tighter than the entanglement-assisted capacity and other known efficiently computable strong converse bounds. For covariant channels we show that the $\Upsilon$-information is a strong converse bound.
  • PDF
    An important approach to fault-tolerant quantum computation is to protect the logical information using quantum error correction. Usually, the logical information is in the form of logical qubits, which are encoded in physical qubits using quantum error correction codes. Compared with qubit quantum computation, fermionic quantum computation has advantages in quantum simulations of fermionic systems, e.g. molecules. In this paper, we show that fermionic quantum computation can be universal and fault-tolerant if we encode logical Majorana fermions in physical Majorana fermions. We take a color code as an example to demonstrate the universal set of fault-tolerant operations on logical Majorana fermions, and we numerically find that the fault-tolerance threshold is about 0.8%.
  • PDF
    Purification is a powerful technique in quantum physics whereby a mixed quantum state is extended to a pure state on a larger system. This process is not unique, and in systems composed of many degrees of freedom, one natural purification is the one with minimal entanglement. Here we study the entropy of the minimally entangled purification, called the entanglement of purification, in three model systems: an Ising spin chain, conformal field theories holographically dual to Einstein gravity, and random stabilizer tensor networks. We conjecture values for the entanglement of purification in all these models, and we support our conjectures with a variety of numerical and analytical results. We find that such minimally entangled purifications have a number of applications, from enhancing entanglement-based tensor network methods for describing mixed states to elucidating novel aspects of the emergence of geometry from entanglement in the AdS/CFT correspondence.
  • PDF
    Entanglement not only plays a crucial role in quantum technologies, but is key to our understanding of quantum correlations in many-body systems. However, in an experiment, the only way of measuring entanglement in a generic mixed state is through reconstructive quantum tomography, which requires a number of measurements exponential in the system size. Here, we propose an operational scheme to measure the entanglement, as given by the negativity, between arbitrary subsystems of size $N_A$ and $N_B$, with $\mathcal{O}(N_A + N_B)$ measurements, and without any prior knowledge of the state. We propose how to experimentally measure the partially transposed moments of a density matrix and, using just the first few of these, extract the negativity via Chebyshev approximation or machine learning techniques. Our procedure will allow entanglement measurements in a wide variety of systems, including strongly interacting many-body systems in both equilibrium and non-equilibrium regimes.
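    On the post-processing side, the quantities involved are straightforward; a NumPy sketch of the partially transposed moments $\mathrm{tr}[(\rho^{T_B})^k]$ and the negativity they are used to estimate (the measurement scheme itself is the paper's contribution and is not reproduced here):

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Partial transpose on subsystem B of a state on C^dA tensor C^dB."""
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

# Random two-qubit mixed state (pure state with a 4-dim environment, traced out).
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho)

pt = partial_transpose(rho, 2, 2)
moments = [np.trace(np.linalg.matrix_power(pt, k)).real for k in (2, 3, 4)]
negativity = (np.abs(np.linalg.eigvalsh(pt)).sum() - 1) / 2  # ||rho^T_B||_1 = 1 + 2N
print("PT moments:", moments, "negativity:", negativity)
```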
  • PDF
    Fundamental questions in chemistry and physics may never be answered due to the exponential complexity of the underlying quantum phenomena. A desire to overcome this challenge has sparked a new industry of quantum technologies with the promise that engineered quantum systems can address these hard problems. A key step towards demonstrating such a system will be performing a computation beyond the capabilities of any classical computer, achieving so-called quantum supremacy. Here, using 9 superconducting qubits, we demonstrate an immediate path towards quantum supremacy. By individually tuning the qubit parameters, we are able to generate thousands of unique Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert-space. As the number of qubits in the algorithm is varied, the system continues to explore the exponentially growing number of states. Combining these large datasets with techniques from machine learning allows us to construct a model which accurately predicts the measured probabilities. We demonstrate an application of these algorithms by systematically increasing the disorder and observing a transition from delocalized states to localized states. By extending these results to a system of 50 qubits, we hope to address scientific questions that are beyond the capabilities of any classical computer.
  • PDF
    Quantum Markov semigroups characterize the time evolution of an important class of open quantum systems. Studying convergence properties of such a semigroup, and determining concentration properties of its invariant state, have been the focus of much research. Quantum versions of functional inequalities (like the modified logarithmic Sobolev and Poincaré inequalities) and the so-called transportation cost inequalities, have proved to be essential for this purpose. Classical functional and transportation cost inequalities are seen to arise from a single geometric inequality, called the Ricci lower bound, via an inequality which interpolates between them. The latter is called the HWI-inequality, where the letters I, W and H are, respectively, acronyms for the Fisher information (arising in the modified logarithmic Sobolev inequality), the so-called Wasserstein distance (arising in the transportation cost inequality) and the relative entropy (or Boltzmann H function) arising in both. Hence, classically, all the above inequalities and the implications between them form a remarkable picture which relates elements from diverse mathematical fields, such as Riemannian geometry, information theory, optimal transport theory, Markov processes, concentration of measure, and convexity theory. Here we consider a quantum version of the Ricci lower bound introduced by Carlen and Maas, and prove that it implies a quantum HWI inequality from which the quantum functional and transportation cost inequalities follow. Our results hence establish that the unifying picture of the classical setting carries over to the quantum one.
  • PDF
    A quantum measurement is Fisher symmetric if it provides uniform and maximal information on all parameters that characterize the quantum state of interest. Using (complex projective) 2-designs, we construct measurements on a pair of identically prepared quantum states that are Fisher-symmetric for all pure states. Such measurements are optimal in achieving the minimal statistical error without adaptive measurements. We then determine all collective measurements on a pair that are Fisher-symmetric for the completely mixed state and for all pure states simultaneously. For a qubit, these measurements are Fisher-symmetric for all states. The minimal optimal measurements are tied to the elusive symmetric informationally complete measurements, which reflects a deep connection between local symmetry and global symmetry. In the study, we derive a fundamental constraint on the Fisher information matrix of any collective measurement on a pair, which offers a useful tool for characterizing the tomographic efficiency of collective measurements.
  • PDF
    We study the problem of encrypting and authenticating quantum data in the presence of adversaries making adaptive chosen plaintext and chosen ciphertext queries. Classically, security games use string copying and comparison to detect adversarial cheating in such scenarios. Quantumly, this approach would violate no-cloning. We develop new techniques to overcome this problem: we use entanglement to detect cheating, and rely on recent results for characterizing quantum encryption schemes. We give definitions for (i) ciphertext unforgeability, (ii) indistinguishability under adaptive chosen-ciphertext attack, and (iii) authenticated encryption. The restriction of each definition to the classical setting is at least as strong as the corresponding classical notion: (i) implies INT-CTXT, (ii) implies IND-CCA2, and (iii) implies AE. All of our new notions also imply QIND-CPA privacy. Combining one-time authentication and classical pseudorandomness, we construct schemes for each of these new quantum security notions, and provide several separation examples. Along the way, we also give a new definition of one-time quantum authentication which, unlike all previous approaches, authenticates ciphertexts rather than plaintexts.
  • PDF
    Entangling gates between qubits are a crucial component for performing algorithms in quantum computers. However, any quantum algorithm will ultimately have to operate on error-protected logical qubits, which are effective qubits encoded in a high-dimensional Hilbert space. A common approach is to encode logical qubits in collective states of multiple two-level systems, but algorithms operating on multiple logical qubits are highly complex and have not yet been demonstrated. Here, we experimentally realize a controlled NOT (CNOT) gate between two multiphoton qubits in two microwave cavities. In this approach, we encode a qubit in the large Hilbert space of a single cavity mode, rather than in multiple two-level systems. We couple two such encoded qubits together through a transmon, which is driven with an RF pump to apply the CNOT gate within 190 ns. This is two orders of magnitude shorter than the decoherence time of any part of the system, enabling high-fidelity operations comparable to state-of-the-art gates between two-level systems. These results are an important step towards universal algorithms on error-corrected logical qubits.
  • PDF
    Entanglement distribution is a prerequisite for several important quantum information processing and computing tasks, such as quantum teleportation, quantum key distribution, and distributed quantum computing. In this work, we focus on two-dimensional quantum networks based on optical quantum technologies using dual-rail photonic qubits. We lay out a quantum network architecture for entanglement distribution between distant parties, with the technological constraint that quantum repeaters equipped with quantum memories are not currently widely available. We also discuss several quantum network topologies for the building of a fail-safe quantum internet. We use percolation theory to provide figures of merit on the loss parameter of the optical medium for networks with bow-tie lattice and Archimedean lattice topologies. These figures of merit allow for comparisons of the robustness of different networks against intermittent failures of their nodes and against intermittent photon loss, which is an important consideration in the realization of the quantum internet.
  • PDF
    Simulating quantum contextuality with classical systems requires memory. A fundamental yet open question is what minimum memory is needed and, therefore, in what precise sense quantum systems outperform classical ones. Here we make rigorous the notion of classically simulating quantum state-independent contextuality (QSIC) in the case of a single quantum system submitted to an infinite sequence of measurements randomly chosen from a finite QSIC set. We obtain the minimum memory that classical systems need to simulate arbitrary QSIC sets under the assumption that the simulation should not contain any oracular information. In particular, we show that, while classically simulating two qubits tested with the Peres-Mermin set requires $\log_2 24 \approx 4.585$ bits, simulating a single qutrit tested with the Yu-Oh set requires at least $5.740$ bits.
  • PDF
    In the Orthogonal Vectors (OV) problem, we wish to determine if there is an orthogonal pair of vectors among $n$ Boolean vectors in $d$ dimensions. The OV Conjecture (OVC) posits that OV requires $n^{2-o(1)}$ time to solve, for all $d=\omega(\log n)$. Assuming the OVC, optimal time lower bounds have been proved for many prominent problems in $P$. We prove that OVC is true in several computational models of interest:
    * For all sufficiently large $n$ and $d$, OV for $n$ vectors in $\{0,1\}^d$ has branching program complexity $\tilde{\Theta}(n\cdot \min(n,2^d))$. In particular, the lower bounds match the upper bounds up to polylog factors.
    * OV has Boolean formula complexity $\tilde{\Theta}(n\cdot \min(n,2^d))$, over all complete bases of $O(1)$ fan-in.
    * OV requires $\tilde{\Theta}(n\cdot \min(n,2^d))$ wires, in formulas comprised of gates computing arbitrary symmetric functions of unbounded fan-in.
    Our lower bounds basically match the best known (quadratic) lower bounds for any explicit function in those models. Analogous lower bounds hold for many related problems shown to be hard under OVC, such as Batch Partial Match, Batch Subset Queries, and Batch Hamming Nearest Neighbors, all of which have very succinct reductions to OV. The proofs use a certain kind of input restriction that is different from typical random restrictions where variables are assigned independently. We give a sense in which independent random restrictions cannot be used to show hardness, in that OVC is false in the "average case" even for $AC^0$ formulas:
    * For every fixed $p \in (0,1)$ there is an $\epsilon_p > 0$ such that for every $n$ and $d$, OV instances where input bits are independently set to $1$ with probability $p$ (and $0$ otherwise) can be solved with $AC^0$ formulas of size $O(n^{2-\epsilon_p})$, on all but a $o_n(1)$ fraction of instances.
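    For orientation, the quadratic baseline that the OVC declares essentially unbeatable once $d = \omega(\log n)$ fits in a few lines; note also that deduplicating the input leaves at most $\min(n, 2^d)$ distinct masks, the same quantity appearing in the bounds above. A sketch with vectors encoded as $d$-bit Python ints:

```python
from itertools import combinations

def has_orthogonal_pair(vectors):
    """Naive O(n^2 d) baseline for OV; vectors are d-bit masks."""
    return any(u & v == 0 for u, v in combinations(vectors, 2))

# Example in {0,1}^3: 0b101 and 0b010 share no coordinate, so the answer is True.
print(has_orthogonal_pair([0b101, 0b011, 0b110, 0b010]))
```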
  • PDF
    To analyze the non-Markovianity of tripartite quantum states from a resource-theoretic viewpoint, we introduce a class of quantum operations performed by three distant parties, and investigate the operational resource theory induced by it. A tripartite state is a free state if and only if it is a quantum Markov chain. We prove monotonicity of functions such as the conditional mutual information, intrinsic information, squashed entanglement, a generalization of entanglement of purification, and the relative entropy of recovery. We identify bound sets that have a clear correspondence to the monotone functions, and analyze their inclusion relations. We introduce a task of "non-Markovianity dilution", and prove that the optimal rate for the task, namely the "non-Markovianity cost", is bounded from above by the regularized entanglement of purification. We also propose a classical resource theory of non-Markovianity.
  • PDF
    We develop a machine learning method to construct accurate ground-state wave functions of strongly interacting and entangled quantum spin as well as fermionic models on lattices. A restricted Boltzmann machine algorithm in the form of an artificial neural network is combined with a conventional variational Monte Carlo method with pair product (geminal) wave functions and quantum number projections. The combined method substantially improves the accuracy beyond that ever achieved by each method separately, in the Heisenberg as well as Hubbard models on square lattices, thus proving its power as a highly accurate quantum many-body solver.
  • PDF
    Quantum bits based on individual trapped atomic ions constitute a promising technology for building a quantum computer, with all the elementary operations having been achieved with the necessary precision. However, the essential two-qubit logic gate used for generating quantum entanglement has hitherto always been performed in an adiabatic regime, where the gate is slow compared with the characteristic motional frequencies of ions in the trap, giving logic speeds of order 10 kHz. There have been numerous proposals for performing gates faster than this natural "speed limit" of the trap. We implement the method of Steane et al., which uses tailored laser pulses: these are shaped on 10 ns timescales to drive the ions' motion along trajectories designed such that the gate operation is insensitive to the initial phase of the optical field. This permits fast (MHz-rate) quantum logic which is robust to this important source of experimental error. We demonstrate entanglement generation for gate times as short as 480 ns; this is less than a single oscillation period of an ion in the trap, and 8 orders of magnitude shorter than the memory coherence time measured in similar calcium-43 hyperfine qubits. The method's power is most evident at intermediate timescales, where it yields more than an order of magnitude reduction in gate error compared with conventional techniques; for example, we achieve a 1.6 $\mu$s gate with fidelity 99.8%. Still faster gates are possible at the price of higher laser intensity. The method requires only a single amplitude-shaped pulse and one pair of beams derived from a cw laser, and offers the prospect of combining the unrivalled coherence properties, optical connectivity and operation fidelities of trapped-ion qubits with the sub-microsecond logic speeds usually associated with solid state devices.
  • PDF
    It has long been recognized that certain quantum correlations are incompatible with particular assumptions about classical causal structure. Given a causal structure of unknown classicality, the presence of such correlations certifies the non-classical nature of the causal structure in a device-independent fashion. In structures where all parties share a common resource, these non-classical correlations are also known as non-local correlations. Any constraint satisfied by all correlations which are classically compatible with a given causal structure defines a causal compatibility criterion. Such criteria were recently derived for the Triangle structure (arXiv:1609.00672) in the form of polynomial inequalities, raising the question: do any of those inequalities admit violation by quantum correlations? Numerical investigation suggests not, and we further conjecture that the set of correlations admitted by the classical Triangle structure is equivalent to the set of correlations admitted by its quantum generalization whenever the three observable variables are binary. Our main contribution in this work, however, is the derivation of new causal compatibility inequalities for the Triangle structure which do admit quantum violation. This provides the first robust-to-noise witness of quantum correlations in the Triangle structure. We conclude by considering the possibility of quantum resources potentially qualitatively different from those known previously.
  • PDF
    A quantum money scheme enables a trusted bank to provide untrusted users with verifiable quantum banknotes that cannot be forged. In this work, we report an experimental demonstration of the preparation and verification of unforgeable quantum banknotes. We employ a security analysis that takes experimental imperfections fully into account. We measure a total of $3.6\times 10^6$ states in one verification round, limiting the forging probability to $10^{-7}$ based on the security analysis. Our results demonstrate the feasibility of preparing and verifying quantum banknotes using currently available experimental techniques.
  • PDF
    A tight criterion under which the abstract version of the Lovász Local Lemma (abstract-LLL) holds was given by Shearer decades ago. However, little is known about the corresponding criterion for the variable version of the LLL (variable-LLL), where events are generated by independent random variables, even though this model of events covers almost all applications of the LLL. We introduce a necessary and sufficient criterion for the variable-LLL, in terms of the probabilities of the events and the event-variable graph specifying the dependency among the events. Based on this new criterion, we obtain boundaries for two families of event-variable graphs, namely, cyclic and treelike bigraphs. These are the first two non-trivial cases where the variable-LLL boundary is fully determined. As a byproduct, we also provide a universal constructive method to find a set of events whose union has the maximum probability, given the probability vector and the event-variable graph. Though it is #P-hard in general to determine variable-LLL boundaries, we can to some extent decide whether a gap exists between a variable-LLL boundary and the corresponding abstract-LLL boundary. In particular, we show that gap existence can be decided without solving Shearer's conditions or checking our variable-LLL criterion. Equipped with this powerful theorem, we show that there is no gap if the base graph of the event-variable graph is a tree, while a gap appears if the base graph has an induced cycle of length at least 4. The problem is almost completely solved except when the base graph has only 3-cliques, in which case we also obtain partial solutions. A set of reduction rules is established that facilitates inferring the gap existence of an event-variable graph from known ones. As an application, various event-variable graphs, in particular combinatorial ones, are shown to be gapful or gapless.
  • PDF
    We consider an exactly solvable model in 3+1 dimensions, based on a finite group, which is a natural generalization of Kitaev's quantum double model. The corresponding lattice Hamiltonian yields excitations located at torus-boundaries. By cutting open the three-torus, we obtain a manifold bounded by two tori which supports states satisfying a higher-dimensional version of Ocneanu's tube algebra. This defines an algebraic structure extending the Drinfel'd double. Its irreducible representations, labeled by two fluxes and one charge, characterize the torus-excitations. The tensor product of such representations is introduced in order to construct a basis for (3+1)d gauge models which relies upon the fusion of the defect excitations. This basis is defined on manifolds of the form $\Sigma \times \mathbb{S}_1$, with $\Sigma$ a two-dimensional Riemann surface. As such, our construction is closely related to dimensional reduction from (3+1)d to (2+1)d topological orders.
  • PDF
    We prove two new fundamental uncertainty relations with quantum memory for the Wehrl entropy. These are the first entropic uncertainty relations with quantum memory ever proposed for a single measurement. The first relation applies to the bipartite memory scenario and provides a lower bound to the Wehrl entropy of a quantum state conditioned on the memory quantum system in terms of the von Neumann entropy of the same quantum state conditioned on the same memory quantum system. The second relation applies to the tripartite memory scenario and provides a lower bound to the sum of the Wehrl entropy of a quantum state conditioned on the first memory quantum system with the Wehrl entropy of the same state conditioned on the second memory quantum system. The Wehrl entropy of a quantum state is the Shannon differential entropy of the outcome of a heterodyne measurement performed on the state. The heterodyne measurement is one of the main measurements in quantum optics, and lies at the basis of one of the most promising protocols for quantum key distribution. These fundamental entropic uncertainty relations will be a valuable tool in quantum information and will, for example, find application in security proofs of quantum key distribution protocols in the asymptotic regime and in entanglement witnessing in quantum optics.
  • PDF
    The quantum autoencoder is a recent paradigm in the field of quantum machine learning, which may enable an enhanced use of resources in quantum technologies. To this end, quantum neural networks with fewer nodes in the inner than in the outer layers were considered. Here, we propose a useful connection between approximate quantum adders and quantum autoencoders. Specifically, this link allows us to employ optimized approximate quantum adders, obtained with genetic algorithms, for the implementation of quantum autoencoders for a variety of initial states. Furthermore, we can also directly optimize the quantum autoencoders via genetic algorithms. Our approach opens a different path for the design of quantum autoencoders in controllable quantum platforms.
  • PDF
    Maximum A posteriori Probability (MAP) inference in graphical models amounts to solving a graph-structured combinatorial optimization problem. Popular inference algorithms such as belief propagation (BP) and generalized belief propagation (GBP) are intimately related to linear programming (LP) relaxation within the Sherali-Adams hierarchy. Despite the popularity of these algorithms, it is well understood that the Sum-of-Squares (SOS) hierarchy based on semidefinite programming (SDP) can provide superior guarantees. Unfortunately, SOS relaxations for a graph with $n$ vertices require solving an SDP with $n^{\Theta(d)}$ variables where $d$ is the degree in the hierarchy. In practice, for $d\ge 4$, this approach does not scale beyond a few tens of variables. In this paper, we propose binary SDP relaxations for MAP inference using the SOS hierarchy with two innovations focused on computational efficiency. Firstly, in analogy to BP and its variants, we only introduce decision variables corresponding to contiguous regions in the graphical model. Secondly, we solve the resulting SDP using a non-convex Burer-Monteiro style method, and develop a sequential rounding procedure. We demonstrate that the resulting algorithm can solve problems with tens of thousands of variables within minutes, and outperforms BP and GBP on practical problems such as image denoising and Ising spin glasses. Finally, for specific graph types, we establish a sufficient condition for the tightness of the proposed partial SOS relaxation.
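    A generic sketch of the Burer-Monteiro step for the simplest binary SDP, $\min\ \langle C, X\rangle$ subject to $X \succeq 0$, $X_{ii} = 1$: factor $X = VV^\top$ with unit-norm rows, run projected gradient descent, and round with a random hyperplane. This shows the template only; the paper's region-based partial SOS variables and sequential rounding procedure are more elaborate.

```python
import numpy as np

def burer_monteiro(C, k=8, steps=1000, lr=0.01, seed=0):
    """Minimize <C, V V^T> over matrices V with unit-norm rows,
    a rank-k surrogate for the SDP with diagonal constraints X_ii = 1."""
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(C.shape[0], k))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(steps):
        G = 2 * C @ V                                # Euclidean gradient
        G -= (G * V).sum(axis=1, keepdims=True) * V  # project onto sphere tangents
        V -= lr * G
        V /= np.linalg.norm(V, axis=1, keepdims=True)
    return V

# Toy use: rounding a random symmetric cost matrix to binary labels.
rng = np.random.default_rng(1)
W = rng.normal(size=(40, 40))
C = (W + W.T) / 2
V = burer_monteiro(C)
x = np.sign(V @ rng.normal(size=V.shape[1]))  # random-hyperplane rounding
print("rounded objective:", x @ C @ x)
```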
  • PDF
    We present a scheme for measuring Rényi entropies in generic atomic Hubbard and spin models using single copies of a quantum state and for partitions in arbitrary spatial dimension. Our approach is based on the generation of random unitaries from random quenches, implemented using engineered time-dependent disorder potentials, and standard projective measurements, as realized by quantum gas microscopes. By analyzing the properties of the generated unitaries and the role of statistical errors with respect to the size of the partition, we show that the protocol can be realized in existing AMO quantum simulators, and used to measure, for instance, area-law scaling of entanglement in two-dimensional spin models or the entanglement growth in many-body localized systems.
  • PDF
    We apply advanced methods of control theory to open quantum systems and determine finite-time processes which are optimal with respect to thermodynamic performance. General properties and necessary conditions characterizing optimal drivings are derived, obtaining bang-bang type solutions corresponding to control strategies switching between adiabatic and isothermal transformations. A direct application of these results is the maximization of the work produced by a generic quantum heat engine, where we show that the maximum power is directly linked to a particular conserved quantity naturally emerging from the control problem. Finally, we apply our general approach to the specific case of a two-level system, which can be put in contact with two different baths at fixed temperatures, identifying the processes which minimize heat dissipation. Moreover, we explicitly solve the optimization problem for a cyclic two-level heat engine driven beyond the linear-response regime, determining the corresponding optimal cycle, the maximum power, and the efficiency at maximum power.
  • PDF
    High-dimensional encoding of quantum information provides a promising method of transcending current limitations in quantum communication. One of the central challenges in the pursuit of such an approach is the certification of high-dimensional entanglement. In particular, it is desirable to do so without resorting to inefficient full state tomography. Here, we show how carefully constructed measurements in two or more bases can be used to efficiently certify high-dimensional states and their entanglement under realistic conditions. We considerably improve upon existing criteria and introduce new entanglement dimensionality witnesses which we put to the test for photons entangled in their orbital angular momentum. In our experimental setup, we are able to verify 8-dimensional entanglement for 11-dimensional subspaces, at present the highest amount certified without assumptions on the state itself.
  • PDF
    We introduce a measurement-device-independent star network which is conveniently based on continuous variable systems and standard linear optics. Here an arbitrary number of users send modulated coherent states to an untrusted relay where a generalized Bell detection creates multi-partite secret correlations. These correlations are then distilled into a shared secret key to implement a completely secure quantum conference or, alternatively, a protocol of quantum secret-sharing. Our scheme is composably secure and able to achieve high rates with a cheap optical implementation.
  • PDF
    In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to maintaining this rapid progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results difficult to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines, and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field, by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.
  • PDF
    We develop a framework for certifying randomness from Bell-test trials based on directly estimating the probability of the measurement outcomes with adaptive test supermartingales. The number of trials need not be predetermined, and one can stop performing trials early, as soon as the desired amount of randomness is extractable. It can be used with arbitrary, partially known and time-dependent probabilities for the random settings choices. Furthermore, it is suitable for application to experimental configurations with low Bell violation per trial, such as current optical loophole-free Bell tests. It is possible to adapt to time-varying experimental parameters. We formulate the framework for the general situation where the trial probability distributions are constrained to a known set. Randomness expansion with logarithmic settings entropy is possible for many relevant configurations. We implement probability estimation numerically and apply it to a representative settings-conditional outcome probability distribution from an atomic loophole-free Bell test [Rosenfeld et al., Phys. Rev. Lett. 119:010402 (2017), arXiv:1611.04604 (2016)] to illustrate trade-offs between the amount of randomness, error, settings entropy, unknown settings biases, and number of trials. We then show that probability estimation yields more randomness from the loophole-free Bell-test data analyzed in [Bierhorst et al., arXiv:1702.05178 (2017)] and tolerates adversarial settings probability biases.
  • PDF
    The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for quantum-proof cryptography. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than those of current classical communication systems. In an effort towards a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that is capable of generating provably-secure cryptographic keys at megabit-per-second (Mbps) rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors (SNSPDs) employed in our system, which feature very high detection efficiency (over 70%) and low timing jitter (less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can readily be extended to free-space quantum channels. The security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system.
  • PDF
    We establish a connection between orthogonal arrays and pure states of finite-dimensional quantum systems having nonnegative integer entries. This identification allows us to develop a technique to represent the full set of orthogonal arrays with a given number of columns, symbols and strength in terms of a finite set of generators. The corresponding states turn out to play an important role in quantum theory: they exhibit a high degree of multipartite entanglement and include the full set of maximum distance separable codes. We provide a simple construction of the generators and show explicit solutions for orthogonal arrays with two symbols and two, three, and four columns, which correspond to pure quantum states of two, three, and four qubits, respectively.
  • PDF
    This is an invited review of Jean Bricmont's book "Making Sense of Quantum Mechanics."
  • PDF
    Point location problems for $n$ points in $d$-dimensional Euclidean space (and $\ell_p$ spaces more generally) have typically had two kinds of running-time solutions:
    * (Nearly-Linear) less than $d^{poly(d)} \cdot n \log^{O(d)} n$ time, or
    * (Barely-Subquadratic) $f(d) \cdot n^{2-1/\Theta(d)}$ time, for various $f$.
    For small $d$ and large $n$, "nearly-linear" running times are generally feasible, while "barely-subquadratic" times are generally infeasible. For example, in the Euclidean metric, finding a Closest Pair among $n$ points in ${\mathbb R}^d$ is nearly-linear, solvable in $2^{O(d)} \cdot n \log^{O(1)} n$ time, while known algorithms for Furthest Pair (the diameter of the point set) are only barely-subquadratic, requiring $\Omega(n^{2-1/\Theta(d)})$ time. Why do these proximity problems have such different time complexities? Is there a barrier to obtaining nearly-linear algorithms for problems which are currently only barely-subquadratic? We give a novel exact and deterministic self-reduction for the Orthogonal Vectors problem on $n$ vectors in $\{0,1\}^d$ to $n$ vectors in ${\mathbb Z}^{\omega(\log d)}$ that runs in $2^{o(d)}$ time. As a consequence, barely-subquadratic problems such as Euclidean diameter, Euclidean bichromatic closest pair, ray shooting, and incidence detection do not have $O(n^{2-\epsilon})$ time algorithms (in Turing models of computation) for dimensionality $d = \omega(\log \log n)^2$, unless the popular Orthogonal Vectors Conjecture and the Strong Exponential Time Hypothesis are false. That is, while poly-log-log-dimensional Closest Pair is in $n^{1+o(1)}$ time, the analogous case of Furthest Pair can encode larger-dimensional problems conjectured to require $n^{2-o(1)}$ time. We also show that the All-Nearest Neighbors problem in $\omega(\log n)$ dimensions requires $n^{2-o(1)}$ time to solve, assuming either of the above conjectures.
  • PDF
    Although quantum coherence is a basic trait of quantum mechanics, the presence of coherences in the quantum description of a certain phenomenon does not rule out the possibility of giving an alternative description of the same phenomenon in purely classical terms. Here, we give definite criteria to determine when and to what extent quantum coherence is equivalent to non-classicality. We prove that a Markovian multi-time statistics obtained from repeated measurements of a non-degenerate observable cannot be traced back to a classical statistics if and only if the dynamics is able to generate coherences and to subsequently turn them into populations. Furthermore, we show with simple examples that such a connection between quantum coherence and non-classicality is generally absent if the statistics is non-Markovian.
  • PDF
    We prove that the set of quantum correlations for a bipartite system of 5 inputs and 2 outputs is not closed. Our proof relies on computing the correlation functions of a graph, which is a concept that we introduce.
  • PDF
    The multi-qubit GHZ state possesses tangles with elegant transformation properties under stochastic local operations and classical communication (SLOCC). Since almost all pure 3-qubit states are connected to the GHZ state via SLOCC, we derive a necessary and sufficient achievability inequality on arbitrary 3-qubit tangles, which is a strictly stronger constraint than both the monogamy inequality and the marginal eigenvalue inequality. We then show that entanglement shared with any single party in the n-qubit GHZ SLOCC equivalence class is precisely accounted for by the sum of its k-tangles, recently coined the strong monogamy equality, acknowledging competing but agreeing definitions of the k-tangle on this class, one of which is then computable for arbitrary mixed states. Strong monogamy is known not to hold in general, and so we introduce a unifying outlook on entanglement constraints in light of basic real algebraic geometry.
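    For reference, the monogamy inequality for three qubits is the Coffman-Kundu-Wootters (CKW) relation, whose gap defines the residual three-tangle:

$$ \mathcal{C}^2_{A|BC} \;\ge\; \mathcal{C}^2_{AB} + \mathcal{C}^2_{AC}, \qquad \tau_{ABC} \;=\; \mathcal{C}^2_{A|BC} - \mathcal{C}^2_{AB} - \mathcal{C}^2_{AC}, $$

    where $\mathcal{C}$ denotes concurrence. The GHZ state is the extreme case, with $\tau_{ABC} = 1$ and vanishing pairwise concurrences.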
  • PDF
    The notion of potential output purity of a completely positive map is introduced as a generalization of the regularized output purity. An upper bound is derived for this quantity, and for several classes of maps (including CQ, QC and Hadamard channels) it is shown that potential purity does not exceed the standard output purity. As an application the potential purity is used to bound the logarithmic Sobolev constant of a product of depolarizing channel semigroups.
  • PDF
    Covert communication conceals the transmission of the message from an attentive adversary. Recent work on the limits of covert communication in additive white Gaussian noise (AWGN) channels has demonstrated that a covert transmitter (Alice) can reliably transmit a maximum of $\mathcal{O}\left(\sqrt{n}\right)$ bits to a covert receiver (Bob) without being detected by an adversary (Warden Willie) in $n$ channel uses. This paper focuses on the scenario where other friendly nodes distributed according to a two-dimensional Poisson point process with density $m$ are present in the environment. We propose a strategy where the friendly node closest to the adversary, without close coordination with Alice, produces artificial noise. We show that this method allows Alice to reliably and covertly send $\mathcal{O}(\min\{{n,m^{\gamma/2}\sqrt{n}}\})$ bits to Bob in $n$ channel uses, where $\gamma$ is the path-loss exponent. Moreover, we also consider a setting where there are $N_{\mathrm{w}}$ collaborating adversaries uniformly and randomly located in the environment and show that in $n$ channel uses, Alice can reliably and covertly send $\mathcal{O}\left(\min\left\{n,\frac{m^{\gamma/2} \sqrt{n}}{N_{\mathrm{w}}^{\gamma}}\right\}\right)$ bits to Bob when $\gamma >2$, and $\mathcal{O}\left(\min\left\{n,\frac{m \sqrt{n}}{N_{\mathrm{w}}^{2}\log^2 {N_{\mathrm{w}}}}\right\}\right)$ when $\gamma = 2$. Conversely, under mild restrictions on the communication strategy, we demonstrate that no higher covert throughput is possible for $\gamma>2$.
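    The $\mathcal{O}(\sqrt{n})$ baseline follows from a standard heuristic, sketched here under the assumption of Gaussian signaling with per-symbol power $P$ and noise variance $\sigma^2$ (the paper's multi-node analysis is more involved). Pinsker's inequality bounds Willie's total-variation detection advantage over $n$ uses by the accumulated relative entropy:

$$ \mathrm{TV}\big(P_0^{\otimes n}, P_1^{\otimes n}\big) \;\le\; \sqrt{\tfrac{n}{2}\, D(P_1 \| P_0)}, \qquad D(P_1 \| P_0) \;=\; \tfrac{1}{2}\Big[\tfrac{P}{\sigma^2} - \ln\big(1 + \tfrac{P}{\sigma^2}\big)\Big] \;\approx\; \tfrac{P^2}{4\sigma^4}. $$

    Keeping the detection advantage below a constant therefore forces $P = O(1/\sqrt{n})$, so the throughput $n \cdot \tfrac{1}{2}\log(1 + P/\sigma^2) \approx nP/(2\sigma^2 \ln 2)$ is $O(\sqrt{n})$ bits. The friendly node's artificial noise inflates the effective $\sigma^2$ at the adversary, which is where the $m^{\gamma/2}$ gain enters.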
  • PDF
    We study the least squares regression problem $\min_{\Theta \in \mathcal{S}_{\odot D,R}} \|A\Theta-b\|_2$, where $\mathcal{S}_{\odot D,R}$ is the set of $\Theta$ for which $\Theta = \sum_{r=1}^{R} \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}$ for vectors $\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$, and $\circ$ denotes the outer product of vectors. That is, $\Theta$ is a low-dimensional, low-rank tensor. This is motivated by the fact that the number of parameters in $\Theta$ is only $R \cdot \sum_{d=1}^D p_d$, which is significantly smaller than the $\prod_{d=1}^{D} p_d$ number of parameters in ordinary least squares regression. We consider the above CP decomposition model of tensors $\Theta$, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on sparse random projections $\Phi \in \mathbb{R}^{m \times n}$, with $m \ll n$, to reduce the problem to a much smaller problem $\min_{\Theta} \|\Phi A \Theta - \Phi b\|_2$, for which if $\Theta'$ is a near-optimum to the smaller problem, then it is also a near optimum to the original problem. We obtain significantly smaller dimension and sparsity in $\Phi$ than is possible for ordinary least squares regression, and we also provide a number of numerical simulations supporting our theory.
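    The sketch-and-solve template that this reduction instantiates, shown for ordinary least squares with a CountSketch-style $\Phi$ (one signed nonzero per column, so applying $\Phi$ costs $O(\mathrm{nnz})$); the tensor-structured choice of $\Phi$ and its dimension/sparsity bounds are the paper's contribution and are not reproduced here:

```python
import numpy as np

def countsketch(M, m, seed=0):
    """CountSketch-style projection: each row of M is added, with a random
    sign, to one of m buckets, so Phi @ M costs O(nnz(M))."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, m, size=M.shape[0])
    signs = rng.choice([-1.0, 1.0], size=M.shape[0])
    S = np.zeros((m, M.shape[1]))
    np.add.at(S, rows, signs[:, None] * M)
    return S

# Solve the m x p sketched problem instead of the n x p original.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(20000, 20)), rng.normal(size=20000)
S = countsketch(np.column_stack([A, b]), m=500)  # one Phi applied to [A | b]
x_sk = np.linalg.lstsq(S[:, :-1], S[:, -1], rcond=None)[0]
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(A @ x_sk - b) / np.linalg.norm(A @ x_ls - b))  # close to 1
```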
  • PDF
    Can a large system be fully characterized using its subsystems via inductive reasoning? Is it possible to completely reduce the behavior of a complex system to the behavior of its simplest "atoms"? In the following paper we answer these questions in the negative for a specific class of systems and measurements. We begin with a simple two-particle example, where strong correlations arise between two apparently empty boxes. This leads to new, surprising effects within atomic and electromagnetic systems. A general construction based on pre- and post-selected ensembles is then suggested, where the N-body correlation can be genuinely perceived as a global property, as long as one is limited to performing a small set of measurements which we term "strictly local". We conclude that within time-symmetric quantum mechanics and under certain boundary conditions, high-order correlations can determine low-order ones, but not vice versa. Moreover, the latter seem to provide no information at all regarding the former. This supports a top-down structure in many-body quantum mechanics.
  • PDF
    In this paper we show that all nodes can be found optimally for almost all random Erdős-Rényi ${\mathcal G}(n,p)$ graphs using the continuous-time quantum spatial search procedure. This works for both adjacency and Laplacian matrices, though under different conditions. The first requires $p=\omega(\log^8(n)/n)$, while the second requires $p\geq(1+\varepsilon)\log(n)/n$, where $\varepsilon>0$. The proof proceeds by analyzing the convergence of the eigenvectors corresponding to outlying eigenvalues in the $\|\cdot\|_\infty$ norm. At the same time, for $p<(1-\varepsilon)\log(n)/n$ the property does not hold for either matrix, due to connectivity issues. Hence, our derivation concerning the Laplacian matrix is tight.
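    A small numerical illustration of the procedure being analyzed: evolve the uniform superposition under $H = -\gamma A - |w\rangle\langle w|$ on a dense Erdős-Rényi graph, with the heuristic normalization $\gamma \approx 1/(np) \approx 1/\lambda_{\max}$ (making such choices rigorous is what the paper does):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, p, w = 200, 0.3, 0                       # graph size, edge density, marked node
A = np.triu((rng.random((n, n)) < p).astype(float), 1)
A += A.T                                    # Erdos-Renyi adjacency matrix

H = -(1.0 / (n * p)) * A                    # gamma ~ 1/lambda_max normalizes the walk
H[w, w] -= 1.0                              # oracle term -|w><w|

psi = np.ones(n) / np.sqrt(n)               # start in the uniform superposition
times = np.linspace(0.0, 3.0 * np.sqrt(n), 60)
probs = [abs((expm(-1j * H * t) @ psi)[w]) ** 2 for t in times]
print(f"peak success probability {max(probs):.2f} at t = {times[int(np.argmax(probs))]:.1f}")
```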
  • PDF
    Chromatic polynomials and related graph invariants are central objects in both graph theory and statistical physics. Computational difficulties, however, have so far restricted studies of such polynomials to graphs that were either very small, very sparse or highly structured. Recent algorithmic advances (Timme et al 2009 New J. Phys. 11 023001) now make it possible to compute chromatic polynomials for moderately sized graphs of arbitrary structure and number of edges. Here we present chromatic polynomials of ensembles of random graphs with up to 30 vertices, over the entire range of edge density. We specifically focus on the locations of the zeros of the polynomial in the complex plane. The results indicate that the chromatic zeros of random graphs have a very consistent layout. In particular, the crossing point, the point at which the chromatic zeros with non-zero imaginary part approach the real axis, scales linearly with the average degree over most of the density range. While the scaling laws obtained are purely empirical, if they continue to hold in general there are significant implications: the crossing points of chromatic zeros in the thermodynamic limit separate systems with zero ground state entropy from systems with positive ground state entropy, the latter an exception to the third law of thermodynamics.
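    For graphs at this scale a direct deletion-contraction recursion is already instructive (it is exponential in general, which is exactly the computational difficulty mentioned above); a sketch using `sympy` for the polynomial arithmetic:

```python
import sympy as sp

k = sp.symbols('k')

def chromatic_poly(vertices, edges):
    """Deletion-contraction: P(G) = P(G - e) - P(G / e), with
    P of the empty graph on v vertices equal to k**v."""
    edges = {frozenset(e) for e in edges}
    if not edges:
        return k ** len(vertices)
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}
    # Contract v into u; parallel edges merge, which leaves P unchanged.
    contracted = {frozenset({u if w == v else w for w in f}) for f in deleted}
    return sp.expand(chromatic_poly(vertices, deleted)
                     - chromatic_poly(vertices - {v}, contracted))

# Sanity check on a triangle: k*(k-1)*(k-2) = k**3 - 3*k**2 + 2*k
print(chromatic_poly(frozenset({1, 2, 3}), {(1, 2), (2, 3), (1, 3)}))
```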
  • PDF
    We experimentally explore topological Maxwell metal bands by mapping the momentum space of condensed-matter models to the tunable parameter space of superconducting quantum circuits. An exotic band structure that is effectively described by the spin-1 Maxwell equations is imaged. Three-fold degenerate points dubbed Maxwell points are observed in the Maxwell metal bands. Moreover, we engineer and observe the topological phase transition from the topological Maxwell metal to a trivial insulator, and report the first experimental measurement of Chern numbers higher than one.

Recent comments

Bassam Helou Sep 22 2017 17:21 UTC

The initial version of the article does not adequately and clearly explain how certain equations demonstrate whether a particular interpretation of QM violates the no-signaling condition.
A revised and improved version is scheduled to appear on September 25.

James Wootton Sep 21 2017 05:41 UTC

What does this imply for https://scirate.com/arxiv/1608.00263? I'm guessing they still regard it as valid (it is ref [14]), but just too hard to implement for now.

Ben Criger Sep 08 2017 08:09 UTC

Oh look, there's another technique for decoding surface codes subject to X/Z correlated errors: https://scirate.com/arxiv/1709.02154

Aram Harrow Sep 06 2017 07:54 UTC

The paper only applies to conformal field theories, and such a result cannot hold for more general 1-D systems by 0705.4077 and other papers (assuming standard complexity theory conjectures).

Felix Leditzky Sep 05 2017 21:27 UTC

Thanks for the clarification, Philippe!

Philippe Faist Sep 05 2017 21:09 UTC

Hi Felix, thanks for the good question.

We've found it more convenient to consider trace-nonincreasing and $\Gamma$-sub-preserving maps (and this is justified by the fact that they can be dilated to fully trace-preserving and $\Gamma$-preserving maps on a larger system). The issue arises because

...(continued)
Felix Leditzky Sep 05 2017 19:02 UTC

What is the reason/motivation to consider trace-non-increasing maps instead of trace-preserving maps in your framework and the definition of the coherent relative entropy?

Steve Flammia Aug 30 2017 22:30 UTC

Thanks for the reference Ashley. If I understand your paper, you are still measuring stabilizers of X- and Z-type at the top layer of the code. So it might be that we can improve on the factor of 2 that you found if we tailor the stabilizers to the noise bias at the base level.

Ashley Aug 30 2017 22:09 UTC

We followed Aliferis and Preskill's approach in [https://arxiv.org/abs/1308.4776][1] and found that the fault-tolerant threshold for the surface code was increased by approximately a factor of two, from around 0.75 per cent to 1.5 per cent for a bias of 10 to 100.

[1]: https://arxiv.org/abs/1308.4776

...(continued)
Stephen Bartlett Aug 30 2017 21:55 UTC

Following on from Steve's comments, it's possible to use the bias-preserving gate set in Aliferis and Preskill directly to do the syndrome extraction, as you build up a CNOT gadget, but such a direct application of your methods would be very complicated and involve a lot of gate teleportation. If y

...(continued)