# Top arXiv papers

• It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of photons in linear optics, which has sparked interest as a rapid way to demonstrate this quantum supremacy. Photon statistics are governed by intractable matrix functions known as permanents, which suggests that sampling from the distribution obtained by injecting photons into a linear-optical network could be solved more quickly by a photonic experiment than by a classical computer. The contrast between the apparently awesome challenge faced by any classical sampling algorithm and the apparently near-term experimental resources required for a large boson sampling experiment has raised expectations that quantum supremacy by boson sampling is on the horizon. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. While the largest boson sampling experiments reported so far are with 5 photons, our classical algorithm, based on Metropolised independence sampling (MIS), allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. We argue that the impact of experimental photon losses means that demonstrating quantum supremacy by boson sampling would require a step change in technology.
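The link between photon statistics and permanents can be made concrete with a small sketch. Below is a minimal Python illustration (our own, not the paper's MIS sampler) of why exact classical evaluation is costly: Ryser's formula computes an $n \times n$ permanent in $O(2^n n^2)$ time, already far better than the naive $O(n!\,n)$ sum over permutations, yet still exponential.

```python
from itertools import combinations, permutations

def permanent_ryser(A):
    """Permanent via Ryser's formula in O(2^n * n^2) time:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i sum_{j in S} A[i][j]."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

def permanent_naive(A):
    """Direct sum over all n! permutations, for cross-checking."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        total += prod
    return total
```

Cross-checking the two on small matrices is a quick sanity test; for sampling-sized problems only Ryser-style evaluation (as used inside classical samplers) is feasible.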
• We propose an efficient scheme for verifying quantum computations in the `high complexity' regime i.e. beyond the remit of classical computers. Previously proposed schemes remarkably provide confidence against arbitrarily malicious adversarial behaviour in the misfunctioning of the quantum computing device. Our scheme is not secure against arbitrarily adversarial behaviour, but may nevertheless be sufficiently acceptable in many practical situations. With this concession we gain in manifest simplicity and transparency, and in contrast to previous schemes, our verifier is entirely classical. It is based on the fact that adaptive Clifford circuits on general product state inputs provide universal quantum computation, while the same processes without adaptation are always classically efficiently simulatable.
• Noise rates in quantum computing experiments have dropped dramatically, but reliable qubits remain precious. Fault-tolerance schemes with minimal qubit overhead are therefore essential. We introduce fault-tolerant error-correction procedures that use only two ancilla qubits. The procedures are based on adding "flags" to catch the faults that can lead to correlated errors on the data. They work for various distance-three codes. In particular, our scheme allows one to test the [[5,1,3]] code, the smallest quantum error-correcting code, using only seven qubits in total. Our techniques also apply to the [[7,1,3]] and [[15,7,3]] Hamming codes, thus allowing one to protect seven encoded qubits on a device with only 17 physical qubits.
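For readers who want to poke at the [[5,1,3]] code itself, here is a small check (plain NumPy, our own illustration rather than part of the paper's construction) that its four standard stabilizer generators, the cyclic shifts of XZZXI, commute pairwise and square to the identity:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
PAULI = {'I': I2, 'X': X, 'Z': Z}

def operator(word):
    """Tensor product of single-qubit Paulis, e.g. 'XZZXI'."""
    M = np.array([[1.0]])
    for ch in word:
        M = np.kron(M, PAULI[ch])
    return M

# Standard stabilizer generators of the [[5,1,3]] code:
# cyclic shifts of XZZXI.
generators = ['XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ']
stabs = [operator(w) for w in generators]

for S in stabs:                       # each generator is an involution
    assert np.allclose(S @ S, np.eye(32))
for S1 in stabs:                      # all generators commute pairwise
    for S2 in stabs:
        assert np.allclose(S1 @ S2, S2 @ S1)
```

Each pair of generators overlaps on exactly two anticommuting single-qubit positions, which is why the full five-qubit operators commute.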
• Suppose a large scale quantum computer becomes available over the Internet. Could we delegate universal quantum computations to this server, using only classical communication between client and server, in a way that is information-theoretically blind (i.e., the server learns nothing about the input apart from its size, with no cryptographic assumptions required)? In this paper we give strong indications that the answer is no. This contrasts with the situation where quantum communication between client and server is allowed --- where we know that such information-theoretically blind quantum computation is possible. It also contrasts with the case where cryptographic assumptions are allowed: there again, it is now known that there are quantum analogues of fully homomorphic encryption. In more detail, we observe that, if there exist information-theoretically secure classical schemes for performing universal quantum computations on encrypted data, then we get unlikely containments between complexity classes, such as ${\sf BQP} \subset {\sf NP/poly}$. Moreover, we prove that having such schemes for delegating quantum sampling problems, such as Boson Sampling, would lead to a collapse of the polynomial hierarchy. We then consider encryption schemes which allow one round of quantum communication and polynomially many rounds of classical communication, yielding a generalization of blind quantum computation. We give a complexity theoretic upper bound, namely ${\sf QCMA/qpoly} \cap {\sf coQCMA/qpoly}$, on the types of functions that admit such a scheme. This upper bound then lets us show that, under plausible complexity assumptions, such a protocol is no more useful than classical schemes for delegating ${\sf NP}$-hard problems to the server. Lastly, we comment on the implications of these results for the prospect of verifying a quantum computation through classical interaction with the server.
• Brandão and Svore very recently gave quantum algorithms for approximately solving semidefinite programs, which in some regimes are faster than the best-possible classical algorithms in terms of the dimension $n$ of the problem and the number $m$ of constraints, but worse in terms of various other parameters. In this paper we improve their algorithms in several ways, getting better dependence on those other parameters. To this end we develop new techniques for quantum algorithms, for instance a general way to efficiently implement smooth functions of sparse Hamiltonians, and a generalized minimum-finding procedure. We also show limits on this approach to quantum SDP-solvers, for instance for combinatorial optimization problems that have a lot of symmetry. Finally, we prove some general lower bounds showing that in the worst case, the complexity of every quantum LP-solver (and hence also SDP-solver) has to scale linearly with $mn$ when $m\approx n$, which is the same as classical.
• The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
• Transversality is a simple and effective method for implementing quantum computation fault-tolerantly. However, no quantum error-correcting code (QECC) can transversally implement a quantum universal gate set (Eastin and Knill, Phys. Rev. Lett., 102, 110502). Since reversible classical computation is often a dominating part of useful quantum computation, whether or not it can be implemented transversally is an important open problem. We show that, other than a small set of non-additive codes that we cannot rule out, no binary QECC can transversally implement a classical reversible universal gate set. In particular, no such QECC can implement the Toffoli gate transversally. We prove our result by constructing an information theoretically secure (but inefficient) quantum homomorphic encryption (ITS-QHE) scheme inspired by Ouyang et al. (arXiv:1508.00938). Homomorphic encryption allows the implementation of certain functions directly on encrypted data, i.e. homomorphically. Our scheme builds on almost any QECC, and implements that code's transversal gate set homomorphically. We observe a restriction imposed by Nayak's bound (FOCS 1999) on ITS-QHE, implying that any ITS quantum fully homomorphic scheme (ITS-QFHE) implementing the full set of classical reversible functions must be highly inefficient. While our scheme incurs exponential overhead, any such QECC implementing Toffoli transversally would still violate this lower bound through our scheme.
• We present a fault-tolerant universal gate set consisting of Hadamard and controlled-controlled-Z (CCZ) on Bacon-Shor subsystem codes. Transversal non-Clifford gates on these codes are intriguing in that higher levels of the Clifford hierarchy become accessible as the code becomes more asymmetric. For instance, in an appropriate gauge, Bacon-Shor codes on an $m\times m^k$ lattice have transversal $k$-qubit-controlled $Z$. Through a variety of tricks, including intermediate error-correction and non-Pauli recovery, we reduce the overhead required for fault-tolerant CCZ. We calculate pseudothresholds for our universal gate set on the smallest $3\times3$ Bacon-Shor code and also compare our gates with magic states within the framework of a proposed ion trap architecture.
• Reliable qubits are difficult to engineer, but standard fault-tolerance schemes use seven or more physical qubits to encode each logical qubit, with still more qubits required for error correction. The large overhead makes it hard to experiment with fault-tolerance schemes with multiple encoded qubits. The 15-qubit Hamming code protects seven encoded qubits to distance three. We give fault-tolerant procedures for applying arbitrary Clifford operations on these encoded qubits, using only two extra qubits, 17 total. In particular, individual encoded qubits within the code block can be targeted. Fault-tolerant universal computation is possible with four extra qubits, 19 total. The procedures could enable testing more sophisticated protected circuits in small-scale quantum devices. Our main technique is to use gadgets to protect gates against correlated faults. We also take advantage of special code symmetries, and use pieceable fault tolerance.
• We determine both the quantum and the private capacities of low-noise quantum channels to leading orders in the channel's distance to the perfect channel. It has been an open problem for more than 20 years to determine the capacities of some of these low-noise channels such as the depolarizing channel. We also show that both capacities are equal to the single-letter coherent information of the channel, again to leading orders. We thus find that, in the low noise regime, super-additivity and degenerate codes have negligible benefit for the quantum capacity, and shielding does not improve the private capacity beyond the quantum capacity, in stark contrast to the situation when noisier channels are considered.
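The single-letter coherent information mentioned above is easy to evaluate numerically for a concrete low-noise channel. The sketch below (our own illustration, assuming the standard depolarizing channel $N_p(\rho) = (1-p)\rho + p\,I/2$ and a maximally entangled input) computes $I_c = S(B) - S(AB)$ from the channel's Choi-type output state:

```python
import numpy as np

def coherent_info_depolarizing(p):
    """Coherent information S(B) - S(AB) of the depolarizing channel
    N_p(rho) = (1-p) rho + p I/2, applied to one half of a maximally
    entangled pair; S(B) = 1 since the reduced state is I/2."""
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)            # (|00> + |11>)/sqrt(2)
    bell = np.outer(phi, phi)
    rho_AB = (1 - p) * bell + p * np.eye(4) / 4  # (I (x) N_p)(bell)

    def entropy(rho):
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]                         # drop numerical zeros
        return -np.sum(w * np.log2(w))

    return 1.0 - entropy(rho_AB)
```

At $p=0$ this gives one qubit per channel use; it stays positive in the low-noise regime the paper studies and turns negative as the noise grows.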
• A tripartite state $\rho_{ABC}$ forms a Markov chain if there exists a recovery map $\mathcal{R}_{B \to BC}$ acting only on the $B$-part that perfectly reconstructs $\rho_{ABC}$ from $\rho_{AB}$. To achieve an approximate reconstruction, it suffices that the conditional mutual information $I(A:C|B)_{\rho}$ is small, as shown recently. Here we ask what conditions are necessary for approximate state reconstruction. This is answered by a lower bound on the relative entropy between $\rho_{ABC}$ and the recovered state $\mathcal{R}_{B\to BC}(\rho_{AB})$. The bound consists of the conditional mutual information and an entropic correction term that quantifies the disturbance of the $B$-part by the recovery map.
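The classical analogue is a useful sanity check: for a distribution $p(a,b,c) = p(a)p(b|a)p(c|b)$ forming a Markov chain, $I(A:C|B) = 0$ and $C$ can be regenerated from $B$ alone. A minimal NumPy sketch (ours, not from the paper):

```python
import numpy as np

def H(p):
    """Shannon entropy (bits) of a flattened distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cond_mutual_info(p_abc):
    """I(A:C|B) = H(AB) + H(BC) - H(B) - H(ABC)."""
    p_ab = p_abc.sum(axis=2).ravel()
    p_bc = p_abc.sum(axis=0).ravel()
    p_b = p_abc.sum(axis=(0, 2))
    return H(p_ab) + H(p_bc) - H(p_b) - H(p_abc.ravel())

# Build a random classical Markov chain p(a) p(b|a) p(c|b).
rng = np.random.default_rng(0)
p_a = rng.random(2); p_a /= p_a.sum()
p_ba = rng.random((2, 2)); p_ba /= p_ba.sum(axis=1, keepdims=True)
p_cb = rng.random((2, 2)); p_cb /= p_cb.sum(axis=1, keepdims=True)
p_markov = np.einsum('a,ab,bc->abc', p_a, p_ba, p_cb)
```

For the Markov chain the conditional mutual information vanishes to machine precision; for a non-Markov distribution (e.g. $A = C$ with $B$ constant) it is strictly positive, and the lower bound of the paper quantifies how badly any recovery map must then fail.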
• This review presents an entry-level introduction to topological quantum computation -- quantum computing with anyons. This approach is inherently resilient against errors, thus promising to overcome one of the main obstacles to the realisation of quantum computers. We introduce the concept of anyon models and review the literature on condensed matter systems where anyons can emerge. We then discuss the general steps for using anyons to encode and process quantum information, as well as various ways in which topological protection might fail. Finally, these abstract concepts are demonstrated in the concrete system of Kitaev's topological nanowire. This model supports localised Majorana zero modes -- the simplest and experimentally most tractable type of non-Abelian anyon -- and it describes the low-energy physics of several experimentally relevant settings.
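Kitaev's nanowire model lends itself to a quick numerical illustration. The sketch below (a standard Bogoliubov-de Gennes treatment, our own rather than taken from the review) builds an open Kitaev chain and exhibits the two exact zero-energy Majorana end modes at the sweet spot $\mu = 0$, $t = \Delta$, separated from a flat band of bulk excitations:

```python
import numpy as np

def kitaev_bdg_spectrum(n, mu, t, delta):
    """Sorted |eigenvalues| of the BdG matrix [[h, D], [-D, -h]] for an
    open Kitaev chain: h carries the chemical potential and hopping,
    D is the antisymmetric p-wave pairing block."""
    h = -mu * np.eye(n)
    D = np.zeros((n, n))
    for i in range(n - 1):
        h[i, i + 1] = h[i + 1, i] = -t
        D[i, i + 1] = delta
        D[i + 1, i] = -delta
    H = np.block([[h, D], [-D, -h]])
    return np.sort(np.abs(np.linalg.eigvalsh(H)))

# Sweet spot: unpaired Majorana operators sit at the two chain ends,
# giving exact zero-energy BdG modes.
eps = kitaev_bdg_spectrum(20, mu=0.0, t=1.0, delta=1.0)
```

Moving $\mu$ away from zero (while staying in the topological phase $|\mu| < 2t$) splits the zero modes by an amount exponentially small in the chain length, which is the origin of the topological protection discussed in the review.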
• Concatenation of two quantum error correcting codes with complementary sets of transversal gates can provide a means towards universal fault-tolerant computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold in order to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit level noise.
• An imperative aspect of modern science is that scientific institutions act for the benefit of a common scientific enterprise, rather than for the personal gain of individuals within them. This implies that science should not perpetuate existing or historical unequal social orders. Some scientific terminology, though, gives a very different impression. I will give two examples of terminology invented recently for the field of quantum information which use language associated with subordination, slavery, and racial segregation: 'ancilla qubit' and 'quantum supremacy'.
• [Apr 26 2017, quant-ph cs.CC, arXiv:1704.07375v1] The notions of entanglement and nonlocality are among the most striking ingredients found in quantum information theory. One tool to better understand these notions is the model of nonlocal games: a mathematical framework that abstractly models a physical system. The simplest instance of a nonlocal game involves two players, Alice and Bob, who are not allowed to communicate with each other once the game has started and who play cooperatively against an adversary referred to as the referee. The focus of this thesis is a class of games called extended nonlocal games, of which nonlocal games are a subset. In an extended nonlocal game, the players initially share a tripartite state with the referee. In such games, the winning conditions for Alice and Bob may depend on outcomes of measurements made by the referee, on its part of the shared quantum state, in addition to Alice and Bob's answers to the questions sent by the referee. We build up the framework for extended nonlocal games and study their properties and how they relate to nonlocal games.
• Finding optimal encoding strategies can be challenging for communication over quantum channels, as classical and quantum capacities may be superadditive. Entanglement assistance can often simplify this task, as the entanglement-assisted classical capacity of any channel is additive, making entanglement across channel uses unnecessary. If the entanglement assistance is limited, the picture is much less clear. If the classical capacity is superadditive, then the classical capacity with limited entanglement assistance could retain superadditivity by continuity arguments. If the classical capacity is additive, however, it is unknown whether superadditivity can still arise with limited entanglement assistance. We show that it can, by providing an example: we construct a channel whose classical capacity is additive, but whose classical capacity with limited entanglement assistance can be superadditive. This shows that entanglement plays a subtle role in communication, one we still understand very little about.
• The property of superadditivity of the quantum relative entropy states that, in a bipartite system $\mathcal{H}_{AB}=\mathcal{H}_A \otimes \mathcal{H}_B$, for every density operator $\rho_{AB}$ one has $D(\rho_{AB}||\sigma_A\otimes \sigma_B) \ge D(\rho_A || \sigma_A)+D(\rho_B ||\sigma_B)$. In this work, we provide an extension of this inequality for arbitrary density operators $\sigma_{AB}$.
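The product-$\sigma$ inequality is easy to verify numerically, since the gap $D(\rho_{AB}\|\sigma_A\otimes\sigma_B) - D(\rho_A\|\sigma_A) - D(\rho_B\|\sigma_B)$ equals the mutual information $I(A:B)_\rho \ge 0$. A small NumPy sketch (ours, using randomly generated full-rank states):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_dm(d):
    """Random full-rank density matrix (Wishart-style construction)."""
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def logm_h(M):
    """Matrix logarithm of a positive-definite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.conj().T

def rel_entropy(rho, sigma):
    """D(rho || sigma) = Tr rho (log rho - log sigma), in nats."""
    return np.trace(rho @ (logm_h(rho) - logm_h(sigma))).real

rho_AB = rand_dm(4)
sigma_A, sigma_B = rand_dm(2), rand_dm(2)
r4 = rho_AB.reshape(2, 2, 2, 2)         # indices (a, b, a', b')
rho_A = np.einsum('abcb->ac', r4)       # trace out B
rho_B = np.einsum('abac->bc', r4)       # trace out A

lhs = rel_entropy(rho_AB, np.kron(sigma_A, sigma_B))
rhs = rel_entropy(rho_A, sigma_A) + rel_entropy(rho_B, sigma_B)
# lhs - rhs equals I(A:B) of rho_AB, hence is nonnegative.
```

The paper's contribution is precisely the harder case where the second argument $\sigma_{AB}$ is not of this product form.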
• The exponential scaling of the wave function is a fundamental property of quantum systems with far reaching implications in our ability to process quantum information. A problem where these are particularly relevant is quantum state tomography. State tomography, whose objective is to obtain a full description of a quantum system, can be analysed in the framework of computational learning theory. In this model, quantum states have been shown to be Probably Approximately Correct (PAC)-learnable with sample complexity linear in the number of qubits. However, it is conjectured that in general quantum states require an exponential amount of computation to be learned. Here, using results from the literature on the efficient classical simulation of quantum systems, we show that stabiliser states are efficiently PAC-learnable. Our results solve an open problem formulated by Aaronson [Proc. R. Soc. A, 2088, (2007)] and propose learning theory as a tool for exploring the power of quantum computation.
• We study quantum algorithms on search trees of unknown structure, in a model where the tree can be discovered by local exploration. That is, we are given the root of the tree and access to a black box which, given a vertex $v$, outputs the children of $v$. We construct a quantum algorithm which, given such access to a search tree of depth at most $n$, estimates the size of the tree $T$ within a factor of $1\pm \delta$ in $\tilde{O}(\sqrt{nT})$ steps. More generally, the same algorithm can be used to estimate size of directed acyclic graphs (DAGs) in a similar model. We then show two applications of this result: a) We show how to transform a classical backtracking search algorithm which examines $T$ nodes of a search tree into an $\tilde{O}(\sqrt{T}n^{3/2})$ time quantum algorithm, improving over an earlier quantum backtracking algorithm of Montanaro (arXiv:1509.02374). b) We give a quantum algorithm for evaluating AND-OR formulas in a model where the formula can be discovered by local exploration (modeling position trees in 2-player games). We show that, in this setting, formulas of size $T$ and depth $T^{o(1)}$ can be evaluated in quantum time $O(T^{1/2+o(1)})$. Thus, the quantum speedup is essentially the same as in the case when the formula is known in advance.
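For contrast with the quantum $\tilde{O}(\sqrt{nT})$ bound, the classical baseline in this oracle model is a plain depth-first traversal that spends $\Theta(T)$ queries to count the tree exactly. A toy sketch (the black-box `children` oracle below is our own stand-in):

```python
def tree_size(root, children):
    """Exact classical node count via DFS with a black-box
    children(v) oracle: Theta(T) oracle queries for a T-node tree,
    versus the quantum estimator's ~sqrt(n*T)."""
    stack, count = [root], 0
    while stack:
        v = stack.pop()
        count += 1
        stack.extend(children(v))
    return count

# Toy oracle: a complete binary tree of depth 3, nodes indexed 0..14.
def children(v):
    return [2 * v + 1, 2 * v + 2] if v < 7 else []
```

The same traversal skeleton is what a classical backtracking search executes, which is why the quantum speedup applies to backtracking as described in the abstract.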
• Encoding schemes and error-correcting codes are widely used in information technology to improve the reliability of data transmission over real-world communication channels. Quantum information protocols can further enhance performance in data transmission by encoding a message in quantum states; however, most proposals to date have focused on the regime of a large number of uses of the noisy channel, which is unfeasible with current quantum technology. We experimentally demonstrate quantum enhanced communication over an amplitude damping noisy channel with only two uses of the channel per bit and a single entangling gate at the decoder. By simulating the channel using a photonic interferometric setup, we experimentally increase the reliability of transmitting a data bit by more than 20% for a certain damping range over classically sending the message twice. We show how our methodology can be extended to larger systems by simulating the transmission of a single bit with up to eight uses of the channel and a two-bit message with three uses of the channel, predicting a quantum enhancement in all cases.
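The amplitude damping channel used in the experiment has a simple two-Kraus-operator description. A minimal NumPy sketch (ours, not the paper's photonic encoding) applies it to a qubit density matrix:

```python
import numpy as np

def amplitude_damping(rho, gamma):
    """Amplitude damping with probability gamma, via its two Kraus
    operators: K0 preserves |0> and shrinks |1>, K1 decays |1> -> |0>."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return K0 @ rho @ K0.T + K1 @ rho @ K1.T

excited = np.array([[0.0, 0.0], [0.0, 1.0]])   # |1><1|
out = amplitude_damping(excited, 0.3)
# The excited population relaxes from 1 to 1 - gamma; trace is preserved.
```

Two uses of the channel per bit, as in the experiment, simply means applying this map to each of two qubits carrying the encoded message.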
• We consider the problem of fault-tolerant quantum computation in the presence of slow error diagnostics, either caused by slow measurement readouts or slow decoding algorithms. Our scheme offers a few improvements over previously existing solutions, for instance it does not require active error correction and results in a reduced error-correction overhead when error diagnostics is much slower than the gate time. In addition, we adapt our protocol to cases where the underlying error correction strategy chooses the optimal correction amongst all Clifford gates instead of the usual Pauli gates. The resulting Clifford frame protocol is of independent interest as it can increase error thresholds and could find applications in other areas of quantum computation.
• [May 01 2017, cs.DS quant-ph, arXiv:1704.09014v1] In this paper we introduce and formalize Substochastic Monte Carlo (SSMC) algorithms. These algorithms, originally intended as a better classical foil to quantum annealing than simulated annealing, prove to be worthy optimization algorithms in their own right. In SSMC, a population of walkers is initialized according to a known distribution on an arbitrary search space and varied into the solution of some optimization problem of interest. The first argument of this paper shows how an existing classical algorithm, "Go-With-The-Winners" (GWW), is a limiting case of SSMC when restricted to binary search and particular driving dynamics. Although it limits to GWW, SSMC is more general. We show that (1) GWW can be efficiently simulated within the SSMC framework, (2) SSMC can be exponentially faster than GWW, (3) by naturally incorporating structural information, SSMC can exponentially outperform the quantum algorithm that first inspired it, and (4) SSMC exhibits desirable search features in general spaces. Our approach combines ideas from genetic algorithms (GWW), theoretical probability (Fleming-Viot processes), and quantum computing. Not only do we demonstrate that SSMC is often more efficient than competing algorithms, but we also hope that our results connecting these disciplines will impact each independently. An implemented version of SSMC has previously enjoyed some success as a competitive optimization algorithm for Max-$k$-SAT.
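To convey the flavor of a walker population evolving under diffusion plus death/cloning, here is a deliberately simplified toy in that spirit; it is not the SSMC algorithm of the paper, just a GWW-like sketch that maximizes the number of 1-bits in a string:

```python
import random

def population_search(n_bits=10, pop=40, steps=300, seed=0):
    """Toy walker-population optimizer in the SSMC/GWW spirit:
    each step, every walker diffuses by one random bit flip, then
    below-average walkers die and are replaced by clones of
    above-average ones. Objective: maximize the number of 1-bits."""
    rng = random.Random(seed)
    fitness = sum
    walkers = [[rng.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop)]
    best = 0
    for _ in range(steps):
        for w in walkers:                     # diffusion step
            i = rng.randrange(n_bits)
            w[i] ^= 1
        scores = [fitness(w) for w in walkers]
        best = max(best, max(scores))
        avg = sum(scores) / pop
        elite = [w for w, s in zip(walkers, scores) if s >= avg]
        walkers = [list(rng.choice(elite))    # death / cloning step
                   for _ in range(pop)]
    return best
```

The real SSMC replaces the crude "clone the above-average" rule with substochastic dynamics derived from a driving Hamiltonian, which is what makes its connection to quantum annealing precise.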
• We survey various convex optimization problems in quantum information theory involving the relative entropy function. We show how to solve these problems numerically using off-the-shelf semidefinite programming solvers, via the approximation method proposed in [Fawzi, Saunderson, Parrilo, Semidefinite approximations of the matrix logarithm, arXiv:1705.00812]. In particular we use this method to provide numerical counterexamples for a proposed lower bound on the quantum conditional mutual information in terms of the relative entropy of recovery.
• We introduce and physically motivate the following problem in geometric combinatorics, originally inspired by analysing Bell inequalities. A grasshopper lands at a random point on a planar lawn of area one. It then jumps once, a fixed distance $d$, in a random direction. What shape should the lawn be to maximise the chance that the grasshopper remains on the lawn after jumping? We show that, perhaps surprisingly, a disc shaped lawn is not optimal for any $d>0$. We investigate further by introducing a spin model whose ground state corresponds to the solution of a discrete version of the grasshopper problem. Simulated annealing and parallel tempering searches are consistent with the hypothesis that for $d < \pi^{-1/2}$ the optimal lawn resembles a cogwheel with $n$ cogs, where the integer $n$ is close to $\pi ( \arcsin ( \sqrt{\pi} d /2 ) )^{-1}$. We find transitions to other shapes for $d \gtrsim \pi^{-1/2}$.
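The disc baseline is easy to estimate by Monte Carlo, which makes the claim that a disc is suboptimal straightforward to explore numerically. A sketch (ours; the sampling scheme is the standard area-uniform draw on a disc):

```python
import numpy as np

def stay_probability_disc(d, samples=200_000, seed=0):
    """Monte Carlo estimate of the probability that a grasshopper,
    starting uniformly on a disc of unit area (radius 1/sqrt(pi)) and
    jumping distance d in a uniformly random direction, stays on it."""
    rng = np.random.default_rng(seed)
    R = 1 / np.sqrt(np.pi)
    r = R * np.sqrt(rng.random(samples))       # area-uniform radius
    phi = 2 * np.pi * rng.random(samples)
    x, y = r * np.cos(phi), r * np.sin(phi)
    theta = 2 * np.pi * rng.random(samples)    # random jump direction
    x2, y2 = x + d * np.cos(theta), y + d * np.sin(theta)
    return np.mean(x2 ** 2 + y2 ** 2 <= R ** 2)
```

Comparing this disc estimate against the same estimate for candidate cogwheel shapes is exactly the kind of computation the paper's spin-model search automates.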
• At Crypto 2011, some of us had proposed a family of cryptographic protocols for key establishment capable of protecting quantum and classical legitimate parties unconditionally against a quantum eavesdropper in the query complexity model. Unfortunately, our security proofs were unsatisfactory from a cryptographically meaningful perspective because they were sound only in a worst-case scenario. Here, we extend our results and prove that for any $\varepsilon > 0$, there is a classical protocol that allows the legitimate parties to establish a common key after $O(N)$ expected queries to a random oracle, yet any quantum eavesdropper will have a vanishing probability of learning their key after $O(N^{1.5-\varepsilon})$ queries to the same oracle. The vanishing probability applies to a typical run of the protocol. If we allow the legitimate parties to use a quantum computer as well, their advantage over the quantum eavesdropper becomes arbitrarily close to the quadratic advantage that classical legitimate parties enjoyed over classical eavesdroppers in the seminal 1974 work of Ralph Merkle. Along the way, we develop new tools to give lower bounds on the number of quantum queries required to distinguish two probability distributions. This method in itself could have multiple applications in cryptography. We use it here to study average-case quantum query complexity, for which we develop a new composition theorem of independent interest.
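Merkle's 1974 idea referenced above can be sketched in a few lines. The toy below (our own simulation; a hash stands in for the random oracle, and the pseudo-random query sequences are hypothetical choices for illustration) lets two parties agree on a key after $O(N)$ queries each, while a brute-force classical eavesdropper must search a domain of size $N^2$:

```python
import hashlib

def oracle(x):
    """Random oracle modeled by a cryptographic hash."""
    return hashlib.sha256(str(x).encode()).hexdigest()

def merkle_key_exchange(N, seed_a=1, seed_b=2):
    """Alice queries N points of an N^2-sized domain and publishes the
    hashes; Bob queries points until one of his hashes matches.  The
    shared preimage is the key.  Returns (alice_key, bob_key, bob's
    query count); an eavesdropper seeing only hashes must search the
    whole domain."""
    domain = N * N
    # Alice's N distinct pseudo-random points (stride coprime to domain).
    alice_points = [(seed_a * 7919 + i * 104729) % domain
                    for i in range(N)]
    published = {oracle(x): x for x in alice_points}
    # Bob probes his own pseudo-random sequence until a collision.
    i, bob_queries = 0, 0
    while True:
        x = (seed_b * 6101 + i * 15485863) % domain
        bob_queries += 1
        if oracle(x) in published:
            return published[oracle(x)], x, bob_queries
        i += 1
```

Bob expects to find a match after about $N$ probes (birthday-style), so both honest parties spend $O(N)$ oracle queries while the classical adversary needs $\Theta(N^2)$; the paper's result is the quantum analogue of this gap.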
• Lieb and Robinson provided bounds on how fast bipartite connected correlations can arise in systems with only short-range interactions. We generalize Lieb-Robinson bounds on bipartite connected correlators to multipartite connected correlators. The bounds imply that an $n$-partite connected correlator can reach unit value in constant time. Remarkably, the bounds also allow for an $n$-partite connected correlator to reach a value that is exponentially large with system size in constant time, a feature which stands in contrast to bipartite connected correlations. We provide explicit examples of such systems.
• Quantum coherence is an essential feature of quantum mechanics which is responsible for the departure between classical and quantum world. The recently established resource theory of quantum coherence studies possible quantum technological applications of quantum coherence, and limitations which arise if one is lacking the ability to establish superpositions. An important open problem in this context is a simple characterization for incoherent operations, constituted by all possible transformations allowed within the resource theory of coherence. In this work, we contribute to such a characterization by proving several upper bounds on the maximum number of incoherent Kraus operators in a general incoherent operation. For a single qubit, we show that the number of incoherent Kraus operators is not more than 5, and it remains an open question if this number can be reduced to 4. Our results also extend to other related concepts such as translationally invariant and strictly incoherent operations. For the latter class we provide a complete solution for the mixed-state qubit conversion problem.
• Measurement-Based Quantum Computing (MBQC) is an alternative to the quantum circuit model, whereby the computation proceeds via measurements on an entangled resource state. Noise processes are a major experimental challenge to the construction of a quantum computer. Here, we investigate how noise processes affecting physical states affect the performed computation by considering MBQC on a one-dimensional cluster state. This allows us to break down the computation in a sequence of building blocks and map physical errors to logical errors. Next, we extend the Matrix Product State construction to mixed states (which is known as Matrix Product Operators) and once again map the effect of physical noise to logical noise acting within the correlation space. This approach allows us to consider more general errors than the conventional Pauli errors, and could be used in order to simulate noisy quantum computation.
• Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
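The reduction from decoding to classification can be illustrated on the 3-bit repetition code: sample (error, syndrome) pairs and record, for each syndrome, the most frequent underlying error. The lookup table below is a stand-in for the trained feedforward network (our toy, not the paper's surface-code decoder):

```python
import random
from collections import Counter, defaultdict

def syndrome(err):
    """Parity checks Z1Z2 and Z2Z3 of the 3-bit repetition code."""
    return (err[0] ^ err[1], err[1] ^ err[2])

def train_classifier(p=0.1, samples=20000, seed=0):
    """'Train' a decoder from sampled data: for every observed
    syndrome, store the most frequent bit-flip pattern.  This is the
    lookup-table analogue of fitting a neural-network classifier on
    (syndrome -> correction) examples."""
    rng = random.Random(seed)
    counts = defaultdict(Counter)
    for _ in range(samples):
        err = tuple(int(rng.random() < p) for _ in range(3))
        counts[syndrome(err)][err] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

decoder = train_classifier()
# At low error rate each syndrome maps to its single-flip explanation.
```

For the surface code the syndrome space is far too large for an explicit table, which is exactly why a neural network that generalizes from training samples becomes attractive.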
• Quantum computing is moving rapidly to the point of deployment of technology. Functional quantum devices will require the ability to correct errors in order to be scalable and effective. A leading choice of error correction, in particular for modular or distributed architectures, is the surface code with logical two-qubit operations realised via "lattice surgery". These operations consist of "merges" and "splits" acting non-unitarily on the logical states and are not easily captured by standard circuit notation. This raises the question of how best to reason about lattice surgery in order to use quantum states and operations efficiently in architectures with complex resource management issues. In this paper we demonstrate that the operations of the ZX calculus, a form of quantum diagrammatic reasoning designed using category theory, match exactly the operations of lattice surgery. Red and green "spider" nodes match rough and smooth merges and splits, and follow the axioms of a dagger special associative Frobenius algebra. Some lattice surgery operations can require non-trivial correction operations, which are captured natively in the use of the ZX calculus in the form of ensembles of diagrams. We give a first taste of the power of the calculus as a language for surgery by considering two operations (magic state use and producing a CNOT) and show how ZX diagram re-write rules give lattice surgery procedures for these operations that are novel, efficient, and highly configurable.
• Without Niels Bohr, QBism would be nothing. But QBism is not Bohr. This paper attempts to show that, despite a popular misconception, QBism is no minor tweak to Bohr's interpretation of quantum mechanics. It is something quite distinct. Along the way, we lay out three tenets of QBism in some detail: 1) The Born Rule---the foundation of what quantum theory means for QBism---is a normative statement. It is about the decision-making behavior any individual agent should strive for; it is not a descriptive "law of nature" in the usual sense. 2) All probabilities, including all quantum probabilities, are so subjective they never tell nature what to do. This includes probability-1 assignments. Quantum states thus have no "ontic hold" on the world. 3) Quantum measurement outcomes just are personal experiences for the agent gambling upon them. Particularly, quantum measurement outcomes are not, to paraphrase Bohr, instances of "irreversible amplification in devices whose design is communicable in common language suitably refined by the terminology of classical physics." Finally, an explicit comparison is given between QBism and Bohr with regard to three subjects: a) The issue of the "detached observer" as it arose in a debate between Pauli and Bohr, b) Bohr's reply to Einstein, Podolsky, and Rosen, and c) Bohr's mature notion of "quantum phenomena." At the end, we discuss how Bohr's notion of phenomena may have something to offer the philosophy of William James: A physics from which to further develop his vision of the world---call it an ontology if you will---in which "new being comes in local spots and patches."
• We develop an alternative boson sampling model operating on single-photon states followed by linear interferometry and Gaussian measurements. The hardness proof for simulating such continuous-outcome measurements is established in two main steps, making use of the symmetry of quantum evolution under time reversal. Namely, we first construct a time-symmetric version of scattershot boson sampling in which, as opposed to the original proposal, both legs of a collection of two-mode squeezed vacuum states undergo parallel linear-optical transformations. This time-symmetric scattershot model yields, as a corollary, an instance of boson sampling from Gaussian states where photon counting is hard to simulate. Then, a time-reversed setup is used to develop a boson sampling model in which the simulation of Gaussian measurements (namely the outcome of unbalanced heterodyne detection) is proven to be computationally hard. These results illustrate how time symmetry may serve as a tool for analyzing the computational complexity of novel physically-motivated computational problems.
• The problem of determining whether a given quantum state is entangled lies at the heart of quantum information processing. Despite the many methods -- such as the positive partial transpose (PPT) criterion and the $k$-symmetric extendibility criterion -- developed to tackle this problem, none enables a general, practical solution, owing to the problem's NP-hard complexity. Explicitly, the separable states form a high-dimensional convex set of vastly complicated structure. In this work, we build a new separability-entanglement classifier underpinned by machine learning techniques. Our method outperforms existing methods in generic cases in terms of both speed and accuracy, opening up new avenues for exploring quantum entanglement via machine learning.
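As background, the PPT criterion mentioned above is easy to state concretely: a separable state must remain positive semidefinite under partial transposition of one subsystem. A minimal NumPy illustration for two qubits follows; it is a sketch of the criterion only, not the paper's machine-learning classifier.

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose over the second subsystem of a bipartite state."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    # Swap the two "column" indices of subsystem B.
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_ppt(rho):
    """PPT criterion: separable states have a positive partial transpose."""
    return np.linalg.eigvalsh(partial_transpose(rho)).min() >= -1e-12

# Bell state (|00> + |11>)/sqrt(2): entangled, so PPT must fail.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
bell = np.outer(psi, psi)
print(is_ppt(bell))            # False
print(is_ppt(np.eye(4) / 4))   # True (maximally mixed state is separable)
```

For two qubits the PPT criterion is necessary and sufficient; the NP-hardness noted above bites only in higher dimensions.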
• A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed to achieve optimal performance. The long short-term memory cell of the recurrent neural network maintains this performance over a large number of error correction cycles, making it a practical decoder for forthcoming experimental realizations. On a density-matrix simulation of the 17-qubit surface code, our neural network decoder achieves a substantial performance improvement over a state-of-the-art blossom decoder.
• A quantitative assessment of the progress of small prototype quantum processors towards fault-tolerant quantum computation is a problem of current interest in experimental and theoretical quantum information science. We introduce a necessary and fair criterion for quantum error correction (QEC), which must be achieved in the development of these quantum processors before they reach sizes at which the well-known QEC threshold becomes relevant. We apply this criterion to benchmark the ongoing effort in implementing QEC with topological color codes using trapped-ion quantum processors and, more importantly, to guide the future hardware developments that shall be required in order to demonstrate beneficial QEC with small topological quantum codes. In doing so, we present a thorough description of a realistic trapped-ion toolbox for QEC, and a physically-motivated error model that goes beyond standard simplifications in the QEC literature. Our large-scale numerical analysis shows that two-species trapped-ion crystals in high-optical-aperture segmented traps, with the improvements hereby described, are a very promising candidate for fault-tolerant quantum computation.
• Free energy is energy that is available to do work. Maximizing the gain in free energy, and hence in the work that can be extracted from a system, is important for a wide variety of physical and technological processes, from energy-harvesting processes such as photosynthesis to energy-storage systems such as fuels and batteries. This paper extends recent results from non-equilibrium thermodynamics and quantum resource theory to derive closed-form solutions for the maximum possible gain in free energy and extractable work that can be obtained by varying the initial states of classical and quantum stochastic processes. Simple formulae allow the free-energy increase of the optimal procedure to be compared with that of a sub-optimal one. The problem of finding the optimal free-energy harvesting procedure is shown to be convex and solvable via gradient descent.
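Closed-form results of this kind typically rest on the standard identity that the non-equilibrium free energy of a distribution $p$ exceeds that of the thermal state $\gamma$ by exactly $T$ times their relative entropy, $F(p) - F(\gamma) = T\,D(p\Vert\gamma)$. A minimal numerical check of this identity in pure NumPy; the three-level system and temperature are illustrative choices, not taken from the paper.

```python
import numpy as np

def free_energy(p, E, T):
    """Non-equilibrium free energy F(p) = <E>_p - T S(p), with k_B = 1.
    Assumes all entries of p are strictly positive."""
    return np.dot(p, E) + T * np.sum(p * np.log(p))

# Three-level system at temperature T with thermal (Gibbs) state gamma.
E = np.array([0.0, 1.0, 2.0])
T = 1.0
gamma = np.exp(-E / T)
gamma /= gamma.sum()

p = np.array([0.5, 0.3, 0.2])        # an arbitrary non-equilibrium state
kl = np.sum(p * np.log(p / gamma))   # relative entropy D(p || gamma)

# Identity: F(p) - F(gamma) = T * D(p || gamma) >= 0.
print(np.isclose(free_energy(p, E, T) - free_energy(gamma, E, T), T * kl))
```

Since $D(p\Vert\gamma) \ge 0$, the thermal state is the unique free-energy minimiser, which is the convexity the paper's gradient-descent result exploits.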
• We compute the $\mathcal S^p \to \mathcal S^p$ norm of a general Gaussian gauge-covariant multi-mode channel for any $1\leq p<\infty$, where $\mathcal S^p$ is a Schatten space. As a consequence, we verify the Gaussian optimizer conjecture and the multiplicativity conjecture in these cases.
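The Schatten $p$-norm appearing here is simply the $\ell_p$ norm of a matrix's singular values. A short NumPy sketch, with an arbitrary illustrative matrix:

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm: the l_p norm of the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s**p) ** (1.0 / p)

A = np.array([[3.0, 0.0],
              [0.0, 4.0]])
print(schatten_norm(A, 1))   # trace norm: 3 + 4 = 7
print(schatten_norm(A, 2))   # Hilbert-Schmidt norm: sqrt(9 + 16) = 5
```

The $\mathcal S^p \to \mathcal S^p$ norm of a channel in the abstract is then the operator norm induced by this matrix norm on inputs and outputs.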
• These lecture notes were created for a graduate-level course on quantum simulation taught at Leibniz University Hannover in 2013. The first part of the course discusses various state-of-the-art methods for the numerical description of many-body quantum systems. In particular, I explain successful applications and inherent limitations due to the exponential complexity of the many-body problem. In the second part of the course, I show how using highly controllable quantum systems such as ultracold atoms provides a way to overcome the limitations of classical simulation methods. I discuss several theoretical and experimental achievements and outline the road for future developments.
• One of the main questions of research on quantum many-body systems following unitary out-of-equilibrium dynamics is to find out how local expectation values equilibrate in time. For non-interacting models, this question is rather well understood. However, the best known bounds for general quantum systems are extremely crude, scaling unfavorably with the system size. Nevertheless, empirical and numerical evidence suggests that for generic interacting many-body systems, generic local observables, and sufficiently well-behaved states, the equilibration time does not depend strongly on the system size; only the precision with which equilibration occurs does. In this discussion paper, we aim at giving very simple and plausible arguments for why this happens. While our discussion does not yield rigorous results about equilibration time scales, we believe that it helps to clarify the essential underlying mechanisms, the intuition, and the important figures of merit behind equilibration. We then connect our arguments to common assumptions and numerical results in the field of equilibration and thermalization of closed quantum systems, such as the eigenstate thermalization hypothesis, as well as to rigorous results on interacting quantum many-body systems. Finally, we complement our discussion with numerical results, covering both examples and counter-examples of equilibrating systems.
• Nonlocality and contextuality are at the root of conceptual puzzles in quantum mechanics, and are key resources for quantum advantage in information-processing tasks. Bell nonlocality is best understood as the incompatibility between quantum correlations and the classical theory of causality, applied to relativistic causal structure. Contextuality, on the other hand, rests on a more controversial foundation. In this work, I provide a common conceptual ground between nonlocality and contextuality as violations of classical causality. First, I generalise a recent work by Wood and Spekkens, who showed that all causal models for certain Bell-inequality violations require fine-tuning of their causal parameters -- regardless of the underlying causal structure. Here I show that this result holds without two of the original assumptions, applies to all (bipartite) cases of Bell nonlocality, and, remarkably, does not require any assumption related to "free choice", unlike all other derivations of Bell inequalities. As a consequence, it can be applied to contextuality scenarios: all causal models for violations of a Kochen-Specker-contextuality inequality (involving two measurements per context) require fine-tuning. Thus the quantum violation of classical causality goes beyond the case of space-like separated systems, and manifests already in scenarios involving single systems.
• Anti-de Sitter/conformal field theory (AdS/CFT) correspondence is one of the most promising realizations of the holographic principle towards quantum gravity. The recent development of a discrete version of AdS/CFT correspondence in terms of tensor networks motivates one to simulate and demonstrate AdS/CFT correspondence on quantum simulators. In this work, we achieve this goal on a six-qubit nuclear magnetic resonance quantum simulator. We demonstrate experimentally the discrete AdS/CFT correspondence, under realistic noise, by measuring the relevant entanglement entropies on the corresponding tensor network state. The fidelity of our experimentally prepared tensor network state is 85.0% via full state tomography, and rises to 93.7% if the signal's decay due to decoherence is taken into account. Our experiment serves as the basic module for simulating more complex tensor network states that exhibit AdS/CFT correspondence. As an initial experimental attempt to study AdS/CFT via quantum information processing, our work opens up new avenues for exploring quantum gravity phenomena on quantum simulators.
• Randomness is an essential resource in computer science. In most applications perfect, and sometimes private, randomness is needed, while it is not even clear that such a resource exists. It is well known that the tools of classical computer science do not allow us to create perfect and secret randomness from a single weak public source. Quantum physics, on the other hand, allows for such a process, even in the most paranoid cryptographic sense, termed "quantum device-independent cryptography". In this work we propose, and prove the security of, a new device-independent protocol that takes any single public Santha-Vazirani source as input and creates a secret, close-to-uniform string in the presence of a quantum adversary. Our work is the first to achieve randomness amplification with all the following properties: (1) amplification and "privatization" of a public Santha-Vazirani source with arbitrary bias; (2) the use of a device with only two components (compared to a polynomial number of components); (3) non-vanishing extraction rate; and (4) maximal noise tolerance. In particular, this implies that our protocol is the first that could possibly be implemented with reachable parameters. We are able to achieve these by combining three new tools: a particular family of Bell inequalities, a proof technique to lower bound entropy in the device-independent setting, and a special framework for quantum-proof multi-source extractors.
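As a toy illustration of the kind of weak source such protocols amplify, one can simulate a Santha-Vazirani source in which each bit is conditionally biased by at most $\delta$ toward repeating its predecessor. This is one hypothetical adversarial strategy, not the paper's construction; it merely shows how a bounded per-bit bias can still produce output far from uniform.

```python
import random

def sv_source(n, delta, seed=0):
    """Toy Santha-Vazirani source: each bit equals the previous one with
    probability 1/2 + delta, so every conditional bias is at most delta."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1)]
    for _ in range(n - 1):
        stay = rng.random() < 0.5 + delta
        bits.append(bits[-1] if stay else 1 - bits[-1])
    return bits

bits = sv_source(100_000, delta=0.2)
# Each conditional probability is within 0.5 +/- 0.2 by construction, yet
# the pair statistics are visibly non-uniform:
agree = sum(a == b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)
print(round(agree, 2))   # ~0.70, far from the 0.50 of a uniform string
```

A randomness-amplification protocol must turn such a string into one that is close to uniform even against an adversary who knows the source's internal strategy.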
• Recent progress in the characterization of gapped quantum phases has also triggered the search for universal resources for quantum computation in symmetric gapped phases. Prior works in one dimension suggest that, more commonly than previously thought, nontrivial 1D symmetry-protected topological (SPT) phases provide quantum computational power characterized by the algebraic structure defining these phases. Progress in two and higher dimensions so far has been limited to special fixed points in SPT phases. Here we provide two families of 2D $Z_2$ symmetric wave functions such that there exists a finite region of the parameter space in the SPT phases that supports universal quantum computation. The quantum computational power loses its universality at the boundary between the SPT and symmetry-breaking phases.
• Contextuality is a property that proves powerful in identifying genuinely quantum features, and it is argued to play a central role in quantum computing. Here we discuss its role within quantum fluctuation theorems, in the light of a recent no-go result by Perarnau et al. We show that any fluctuation theorem reproducing the two-point-measurement scheme for classical states either admits a notion of work quasi-probability or fails to describe protocols exhibiting contextuality. Conversely, we describe a protocol that smoothly interpolates between the two-point-measurement work distribution for projective measurements and Allahverdyan's work quasi-probability for weak measurements, and show that the negativity of the latter is a direct signature of contextuality.
• (arXiv:1705.04130, quant-ph, May 12 2017) In their seminal 1983 paper, Page and Wootters suggest that time evolution could be described solely in terms of correlations between systems and clocks, as a means of dealing with the "problem of time" stemming from vanishing Hamiltonian dynamics in many theories of quantum gravity. Their approach to relational time centres around the existence of a Hamiltonian and the subsequent constraint on physical states. In this paper we present a "state-centric" reformulation of the Page and Wootters model better suited to theories which intrinsically lack Hamiltonian dynamics, such as Chern--Simons theories. We describe relational time by encoding logical "clock" qubits into anyons---the topologically protected degrees of freedom in Chern--Simons theories. The timing resolution of such anyonic clocks is determined by the universality of the anyonic braid group, with non-universal models naturally exhibiting discrete time. We exemplify this approach using SU(2)$_2$ anyons and discuss generalizations to other states and models.
• We consider the problem of testing between two hypotheses about quantum operations in the setting of many uses, where an arbitrary prior distribution is given. The concept of the Chernoff bound for quantum operations is investigated to track the minimal average probability of error in discriminating two quantum operations asymptotically. We show that the Chernoff bound is faithful in the sense that it is finite if and only if the two quantum operations cannot be distinguished perfectly. More precisely, upper bounds on the Chernoff bound for quantum operations are provided. We then generalize these results to the multiple Chernoff bound for quantum operations.
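For quantum states, as opposed to the operations studied above, the Chernoff bound has the well-known closed form $\xi = -\log \min_{0\le s\le 1} \mathrm{Tr}(\rho^s \sigma^{1-s})$. A small NumPy sketch of this state version is included here only as background for the operational generalisation; the grid minimisation and the example states are illustrative choices.

```python
import numpy as np

def mat_power(rho, s):
    """Fractional power of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)
    return (v * w**s) @ v.conj().T

def chernoff_bound(rho, sigma, grid=1001):
    """Quantum Chernoff bound -log min_s Tr(rho^s sigma^(1-s)) for states,
    estimated by minimising over a uniform grid in s."""
    q = [np.trace(mat_power(rho, s) @ mat_power(sigma, 1.0 - s)).real
         for s in np.linspace(0.0, 1.0, grid)]
    return -np.log(min(q))

rho = np.diag([0.9, 0.1])
sigma = np.diag([0.5, 0.5])
print(chernoff_bound(rho, sigma) > 0)   # True: distinguishable, imperfectly
```

Faithfulness is visible here too: the bound is zero for identical states and diverges only when $\rho$ and $\sigma$ have orthogonal supports, i.e. are perfectly distinguishable.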
• The quantum hidden subgroup approach is an actively studied approach to solving combinatorial problems in quantum complexity theory. With the success of Shor's algorithm, it was hoped that a similar approach might be useful for solving other combinatorial problems. One such problem is the graph isomorphism problem, which has survived decades of effort using the hidden subgroup approach. This paper provides a systematic approach to creating arbitrarily large classes of classically efficiently solvable graph automorphism problems, or "easy" graph automorphism problems, for which the hidden subgroup approach is guaranteed to always fail, irrespective of the size of the graphs. As the graph isomorphism problem is believed to be at least as hard as the graph automorphism problem, this result entails that the hidden subgroup approach is also guaranteed to always fail for arbitrarily large classes of graph isomorphism problems. Combining these results, it is argued that the hidden subgroup approach is essentially a dead end, and that alternative quantum algorithmic approaches need to be investigated for the graph isomorphism and automorphism problems.
• Let $V=\bigotimes_{k=1}^{N} V_{k}$ be the $N$ spin-$j$ Hilbert space with $d=(2j+1)$-dimensional single-particle space. We fix an orthonormal basis $\{|m_i\rangle\}$ for each $V_{k}$, with weight $m_i\in \{-j,\ldots,j\}$. Let $V_{(w)}$ be the subspace of $V$ with constant weight $w$, with an orthonormal basis $\{|m_1,\ldots,m_N\rangle\}$ subject to $\sum_k m_k=w$. We show that the combinatorial properties of the constant weight condition impose strong constraints on the reduced density matrices for any vector $|\psi\rangle$ in the constant weight subspace, which limit the possible entanglement structures of $|\psi\rangle$. Our results find applications in the overlapping quantum marginal problems, quantum error-correcting codes, and the spin-network structures in quantum gravity.
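The dimension of the constant-weight subspace $V_{(w)}$ is a purely combinatorial quantity: the number of weight tuples $(m_1,\ldots,m_N)$ summing to $w$. A brief brute-force sketch (suitable only for small $N$; it is an illustration of the definition, not of the paper's constraints on reduced density matrices):

```python
from itertools import product

def weight_subspace_dim(N, j, w):
    """Dimension of the constant-weight-w subspace of N spin-j particles:
    the number of tuples (m_1, ..., m_N), m_k in {-j, ..., j}, with sum w.
    Works in integer units of 2*m so that half-integer j is exact."""
    levels = range(-int(2 * j), int(2 * j) + 1, 2)
    return sum(1 for ms in product(levels, repeat=N) if sum(ms) == int(2 * w))

# N = 2 spin-1 particles, total weight 0: (-1,1), (0,0), (1,-1).
print(weight_subspace_dim(2, 1, 0))   # 3
```

For two spin-1/2 particles at weight 0 the same count gives 2, matching the span of $|{\uparrow\downarrow}\rangle$ and $|{\downarrow\uparrow}\rangle$.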
• We show that the physical mechanism for the equilibration of closed quantum systems is dephasing, and identify the energy scales that determine the equilibration timescale of a given observable. For realistic physical systems (e.g. those with local Hamiltonians), our arguments imply timescales that do not increase with the system size, in contrast to previously known upper bounds. In particular, we show that, for such Hamiltonians, the matrix representation of local observables in the energy basis is banded, and that this property is crucial in order to derive equilibration times that are non-negligible in macroscopic systems. Finally, we give an intuitive interpretation to recent theorems on equilibration timescales.
• Given a certain amount of entanglement available as a resource, what is the most efficient way to accomplish a quantum task? We address this question in the relevant case of continuous variable quantum teleportation protocols implemented using two-mode Gaussian states with a limited degree of entanglement and energy. We first characterize the class of single-mode phase-insensitive Gaussian channels that can be simulated via a Braunstein--Kimble protocol with non-unit gain and minimum shared entanglement, showing that infinite energy is not necessary apart from the special case of the quantum limited attenuator. We then consider the problem of teleporting single-mode coherent states with Gaussian-distributed displacement in phase space. Performing a geometrical optimization over phase-insensitive Gaussian channels, we determine the maximum average teleportation fidelity achievable with any finite entanglement and for any finite variance of the input distribution.

Noon van der Silk May 23 2017 11:15 UTC

I think this thread has reached its end.

I've locked further comments, and I hope that the quantum computing community can thoughtfully find an approach to language that is inclusive to all and recognises the diverse background of all researchers, current and future.

...(continued)
Varun Narasimhachar May 23 2017 02:14 UTC

While I would never want to antagonize my peers or to allow myself to assume they were acting irrationally, I do share your concerns to an extent. I worry about the association of social justice and inclusivity with linguistic engineering, virtual lynching, censorship, etc. (the latter phenomena sta

...(continued)
Aram Harrow May 23 2017 01:30 UTC

I think you are just complaining about issues that arise from living with other people in the same society. If you disagree with their values, well, then some of them might have a negative opinion about you. If you express yourself in an aggressive way, and use words like "lynch" to mean having pe

...(continued)
Steve Flammia May 23 2017 01:04 UTC

I agree with Noon that the discussion is becoming largely off topic for SciRate, but that it might still be of interest to the community to discuss this. I invite people to post thoughtful and respectful comments over at [my earlier Quantum Pontiff post][1]. Further comments here on SciRate will be

...(continued)
Noon van der Silk May 23 2017 00:59 UTC

I've moderated a few comments on this post because I believe it has gone past useful discussion, and I'll continue to remove comments that I believe don't add anything of substantial value.

Thanks.

Aram Harrow May 22 2017 23:13 UTC

The problem with your argument is that no one is forcing anyone to say anything, or banning anything.

If the terms really were offensive or exclusionary or had other bad side effects, then it's reasonable to discuss as a community whether to keep them, and possibly decide to stop using them. Ther

...(continued)
stan May 22 2017 22:53 UTC

Fair enough. At the end of the day I think most of us are concerned with the strength of the result not the particular language used to describe it.

VeteranVandal May 22 2017 22:41 UTC

But how obvious is ancilla? To me it is not even remotely obvious (nor clear as a term, but as the literature used it so much, I see such word in much the same way as I see auxiliary, in fact - now if you want to take offense with auxiliary, what can I say? I won't invent words just to please you).

...(continued)
VeteranVandal May 22 2017 22:21 UTC

I don't think science can or should avoid the perpetuation of existing "historical unequal social order" by changing the language, as to me it seems that, if you try hard enough you can find problem with anything you want to be offended at - rationalizations are tricky things you can often get carri

...(continued)
Fernando Brandao May 22 2017 21:37 UTC

I am not sure if the ArXiv is the best venue for this kind of paper/rant. Also, I’m concerned that so much energy is being put into the discussion. As a non-native speaker, I might not get all nuances of the language, but I have a hard time understanding why we should drop a scientific jargon like “

...(continued)