# Top arXiv papers

• We study the classical complexity of the exact Boson Sampling problem, where the objective is to produce provably correct random samples from a particular quantum mechanical distribution. The computational framework was proposed by Aaronson and Arkhipov in 2011 as an attainable demonstration of 'quantum supremacy', that is, a practical quantum computing experiment able to produce output at a speed beyond the reach of classical (that is, non-quantum) computer hardware. Since its introduction, Boson Sampling has been the subject of intense international research in the world of quantum computing. On the face of it, the problem is challenging for classical computation. Aaronson and Arkhipov show that exact Boson Sampling is not efficiently solvable by a classical computer unless $P^{\#P} = BPP^{NP}$ and the polynomial hierarchy collapses to the third level. The fastest known exact classical algorithm for the standard Boson Sampling problem takes $O({m + n -1 \choose n} n 2^n )$ time to produce samples for a system with input size $n$ and $m$ output modes, making it infeasible for anything but the smallest values of $n$ and $m$. We give an algorithm that is much faster, running in $O(n 2^n + \operatorname{poly}(m,n))$ time and $O(m)$ additional space. The algorithm is simple to implement and has low constant-factor overheads. As a consequence, our classical algorithm is able to solve the exact Boson Sampling problem for system sizes far beyond current photonic quantum computing experimentation, thereby significantly reducing the likelihood of achieving near-term quantum supremacy in the context of Boson Sampling.
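The $n 2^n$ factor in both running times comes from evaluating a matrix permanent. As a point of reference, here is a minimal sketch of Ryser's formula, which computes the permanent of an $n \times n$ matrix in $O(n 2^n)$ time; this illustrates the cost of a single amplitude, not the paper's sampling algorithm, and the function name is our own:

```python
def permanent_ryser(M):
    """Permanent of an n x n matrix via Ryser's formula in O(n * 2^n) time."""
    n = len(M)
    total = 0
    for mask in range(1, 1 << n):  # iterate over nonempty column subsets
        prod = 1
        for row in M:
            # sum of this row's entries restricted to the chosen columns
            prod *= sum(row[j] for j in range(n) if mask >> j & 1)
        total += (-1) ** bin(mask).count("1") * prod
    return (-1) ** n * total
```

For instance, `permanent_ryser([[1, 2], [3, 4]])` gives $1\cdot4 + 2\cdot3 = 10$; naive expansion over permutations would instead take $O(n \cdot n!)$ time.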
• One of the main milestones in quantum information science is to realize quantum devices that exhibit an exponential computational advantage over classical ones without being universal quantum computers, a state of affairs dubbed "quantum computational supremacy". The known schemes heavily rely on mathematical assumptions that are plausible but unproven, prominently results on anti-concentration of random prescriptions. In this work, we aim at closing the gap by proving two anti-concentration theorems. Compared to the few other known results of this kind, ours give rise to comparably simple, physically meaningful and resource-economical schemes showing quantum computational supremacy in one and two spatial dimensions. At the heart of the analysis are tools of unitary designs and random circuits that allow us to conclude that universal random circuits anti-concentrate.
• Robust quantum computation requires encoding delicate quantum information into degrees of freedom that are hard for the environment to change. Quantum encodings have been demonstrated in many physical systems by observing and correcting storage errors, but applications require not just storing information; we must accurately compute even with faulty operations. The theory of fault-tolerant quantum computing illuminates a way forward by providing a foundation and collection of techniques for limiting the spread of errors. Here we implement one of the smallest quantum codes in a five-qubit superconducting transmon device and demonstrate fault-tolerant state preparation. We characterize the resulting codewords through quantum process tomography and study the free evolution of the logical observables. Our results are consistent with fault-tolerant state preparation in a protected qubit subspace.
• A Bell test separates quantum mechanics from a classical, local realist theory of physics. However, a Bell test cannot separate quantum physics from all classical theories. Classical devices supplemented with non-signaling correlations, e.g., the Popescu-Rohrlich "nonlocal box," can pass a Bell test with probability at least as high as any quantum devices can. After all, quantum entanglement does not allow for signaling faster than the speed of light, so quantum correlations are, in a sense, a special case of non-signaling correlations. It could be that underneath quantum mechanics is a deeper non-signaling theory. We present a test to separate quantum theory from powerful non-signaling theories. The test extends the CHSH game to involve three space-like separated devices. Quantum devices sharing a three-qubit GHZ state can pass the test with probability 5.1% higher than classical devices sharing arbitrary non-signaling correlations between pairs. More generally, we give a test that k space-like separated quantum devices can pass with higher probability than classical devices sharing arbitrary (k-1)-local non-signaling correlations.
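For readers who want the CHSH baseline that this three-device test extends: the classical bound of 75% can be recovered by brute force over deterministic local strategies, since shared randomness is a convex mixture of these and cannot raise the maximum. A small illustrative sketch, not taken from the paper:

```python
from itertools import product

def chsh_classical_value():
    """Maximum CHSH win probability over deterministic local strategies.

    Inputs x, y are uniform bits; the players win when a XOR b == x AND y.
    """
    best = 0.0
    # Alice's strategy is a pair (a0, a1): output for x=0 and x=1; same for Bob.
    for a0, a1, b0, b1 in product((0, 1), repeat=4):
        wins = 0
        for x, y in product((0, 1), repeat=2):
            a = a0 if x == 0 else a1
            b = b0 if y == 0 else b1
            if (a ^ b) == (x & y):
                wins += 1
        best = max(best, wins / 4)
    return best
```

The search returns 0.75, whereas quantum devices sharing entanglement can reach $\cos^2(\pi/8) \approx 0.854$ and non-signaling boxes can win with certainty.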
• arXiv:1706.05264v1 (quant-ph, Jun 19 2017): Although an input distribution may not majorize a target distribution, it may majorize a distribution which is close to the target. Here we introduce a notion of approximate majorization. For any distribution, and given a distance $\delta$, we find the approximate distributions which majorize (are majorized by) all other distributions within the distance $\delta$. We call these the steepest and flattest approximations. This enables one to compute how close one can get to a given target distribution under a process governed by majorization. We show that the flattest and steepest approximations preserve ordering under majorization. Furthermore, we give a notion of majorization distance. This has applications ranging from thermodynamics and entanglement theory to economics.
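As a concrete reference point for the majorization preorder the paper builds on: $p$ majorizes $q$ when every partial sum of $p$'s entries, sorted in descending order, dominates the corresponding partial sum for $q$. A minimal sketch of this standard check (function name and tolerance are our own choices):

```python
def majorizes(p, q, tol=1e-12):
    """True if distribution p majorizes q (equal totals assumed).

    Checks that partial sums of p, sorted descending, dominate those of q.
    """
    ps = sorted(p, reverse=True)
    qs = sorted(q, reverse=True)
    cp = cq = 0.0
    for a, b in zip(ps, qs):
        cp += a
        cq += b
        if cp < cq - tol:
            return False
    return True
```

The uniform distribution is majorized by every distribution on the same number of outcomes, which is why "flattest" is a natural name for the least majorizing element of a $\delta$-ball.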
• The construction of topological error correction codes requires the ability to fabricate a lattice of physical qubits embedded on a manifold with a non-trivial topology such that the quantum information is encoded in the global degrees of freedom (i.e. the topology) of the manifold. However, the manufacturing of large-scale topological devices will undoubtedly suffer from fabrication errors---permanent faulty components such as missing physical qubits or failed entangling gates---introducing permanent defects into the topology of the lattice and hence significantly reducing the distance of the code and the quality of the encoded logical qubits. In this work we investigate how fabrication errors affect the performance of topological codes, using the surface code as the testbed. A known approach to mitigate defective lattices involves the use of primitive SWAP gates in a long sequence of syndrome extraction circuits. Instead, we show that in the presence of fabrication errors the syndrome can be determined using the supercheck operator approach and the outcome of the defective gauge stabilizer generators without any additional computational overhead or the use of SWAP gates. We report numerical fault-tolerance thresholds in the presence of both qubit fabrication and gate fabrication errors using a circuit-based noise model and the minimum-weight perfect matching decoder. Our numerical analysis is most applicable to 2D chip-based technologies, but the techniques presented here can be readily extended to other topological architectures. We find that in the presence of 8% qubit fabrication errors, the surface code can still tolerate a computational error rate of up to 0.1%.
• Chaos and complexity entail an entropic and computational obstruction to describing a system, and thus are intrinsically difficult to characterize. In this paper, we consider time evolution by Gaussian Unitary Ensemble (GUE) Hamiltonians and analytically compute out-of-time-ordered correlation functions (OTOCs) and frame potentials to quantify scrambling, Haar-randomness, and circuit complexity. While our random matrix analysis gives a qualitatively correct prediction of the late-time behavior of chaotic systems, we find unphysical behavior at early times including an $\mathcal{O}(1)$ scrambling time and the apparent breakdown of spatial and temporal locality. The salient feature of GUE Hamiltonians which gives us computational traction is the Haar-invariance of the ensemble, meaning that the ensemble-averaged dynamics look the same in any basis. Motivated by this property of the GUE, we introduce $k$-invariance as a precise definition of what it means for the dynamics of a quantum system to be described by random matrix theory. We envision that the dynamical onset of approximate $k$-invariance will be a useful tool for capturing the transition from early-time chaos, as seen by OTOCs, to late-time chaos, as seen by random matrix theory.
• Quantum simulation of the electronic structure problem is one of the most researched applications of quantum computing. The majority of quantum algorithms for this problem encode the wavefunction using $N$ Gaussian orbitals, leading to Hamiltonians with ${\cal O}(N^4)$ second-quantized terms. We avoid this overhead and extend methods to the condensed phase by utilizing a dual form of the plane wave basis which diagonalizes the potential operator, leading to a Hamiltonian representation with ${\cal O}(N^2)$ second-quantized terms. Using this representation we can implement single Trotter steps of the Hamiltonians with linear gate depth on a planar lattice. Properties of the basis allow us to deploy Trotter and Taylor series based simulations with respective circuit depths of ${\cal O}(N^{7/2})$ and $\widetilde{\cal O}(N^{8/3})$ for fixed charge densities - both are large asymptotic improvements over all prior results. Variational algorithms also require significantly fewer measurements to find the mean energy in this basis, ameliorating a primary challenge of that approach. We conclude with a proposal to simulate the uniform electron gas (jellium) using a low depth variational ansatz realizable on near-term quantum devices. From these results we identify simulations of low density jellium as a promising first setting to explore quantum supremacy in electronic structure.
• We study the fundamental limits on precision for parameter estimation using a quantum probe subject to Markovian noise. The best possible scaling of precision $\delta$ with the total probing time $t$ is the Heisenberg limit (HL) $\delta \propto 1/t$, which can be achieved by a noiseless probe, but noise can reduce the precision to the standard quantum limit (SQL) $\delta \propto 1 /\sqrt{t}$. We find a condition on the probe's Markovian noise such that SQL scaling cannot be surpassed if the condition is violated, but if the condition is satisfied then HL scaling can be achieved by using quantum error correction to protect the probe from damage, assuming that noiseless ancilla qubits are available, and fast, accurate quantum processing can be performed. When the HL is achievable, the optimal quantum code can be found by solving a semidefinite program. If the condition is violated but the noise channel acting on the probe is close to one that satisfies the condition, then approximate HL scaling can be realized for an extended period before eventually crossing over to SQL scaling.
• For numerous applications of quantum theory it is desirable to be able to apply arbitrary unitary operations on a given quantum system. However, in particular situations only a subset of unitary operations is easily accessible. This raises the question of what additional unitary gates should be added to a given gate-set in order to attain physical universality, i.e., to be able to perform arbitrary unitary transformations on the relevant Hilbert space. In this work, we study this problem for three paradigmatic cases of naturally occurring restricted gate-sets: (A) particle-number preserving bosonic linear optics, (B) particle-number preserving fermionic linear optics, and (C) general (not necessarily particle-number preserving) fermionic linear optics. Using tools from group theory and control theory, we classify, in each of these scenarios, what sets of gates are generated if an additional gate is added to the set of allowed transformations. This allows us to solve the universality problem completely for an arbitrary number of particles and for arbitrary dimensions of the single-particle Hilbert space.
• We give precise quantum resource estimates for Shor's algorithm to compute discrete logarithms on elliptic curves over prime fields. The estimates are derived from a simulation of a Toffoli gate network for controlled elliptic curve point addition, implemented within the framework of the quantum computing software toolsuite LIQ$Ui|\rangle$. We determine circuit implementations for reversible modular arithmetic, including modular addition, multiplication and inversion, as well as reversible elliptic curve point addition. We conclude that elliptic curve discrete logarithms on an elliptic curve defined over an $n$-bit prime field can be computed on a quantum computer with at most $9n + 2\lceil\log_2(n)\rceil+10$ qubits using a quantum circuit of at most $448 n^3 \log_2(n) + 4090 n^3$ Toffoli gates. We are able to classically simulate the Toffoli networks corresponding to the controlled elliptic curve point addition as the core piece of Shor's algorithm for the NIST standard curves P-192, P-224, P-256, P-384 and P-521. Our approach allows gate-level comparisons to recent resource estimates for Shor's factoring algorithm. The results also confirm estimates given earlier by Proos and Zalka and indicate that, for current parameters at comparable classical security levels, the number of qubits required to tackle elliptic curves is less than for attacking RSA, suggesting that indeed ECC is an easier target than RSA.
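The closed-form upper bounds quoted above can be tabulated directly for the NIST curve sizes. A small sketch (function name is ours; the formulas are copied from the abstract's stated bounds):

```python
import math

def ecc_shor_resources(n):
    """Qubit and Toffoli upper bounds for Shor's algorithm on an n-bit
    prime-field elliptic curve, per the closed-form estimates above."""
    qubits = 9 * n + 2 * math.ceil(math.log2(n)) + 10
    toffolis = 448 * n**3 * math.log2(n) + 4090 * n**3
    return qubits, toffolis

# e.g. for P-256: at most 2330 qubits and roughly 1.3e11 Toffoli gates
```

Evaluating the formula at $n = 256$ gives $9 \cdot 256 + 2 \cdot 8 + 10 = 2330$ qubits, illustrating the comparison with RSA: attacking a 3072-bit RSA modulus at comparable classical security requires more qubits under known estimates.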
• Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum error state discrimination. Namely, we identify quantitative limits on the success probability for minimum error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios, and demonstrate a tight connection between our minimum error state discrimination scenario and a Bell scenario.
• Providing system-size independent lower bounds on the spectral gap of a local Hamiltonian is in general a hard problem. For the case of finite-range, frustration-free Hamiltonians on a spin lattice of arbitrary dimension, we show that a property of the ground state space is sufficient to obtain such a bound. We furthermore show that such a condition is necessary and equivalent to a constant spectral gap. Thanks to this equivalence, we can prove that for gapless models in any dimension, the spectral gap on regions of diameter $n$ is at most $o\left(\frac{\log(n)^{2+\epsilon}}{n}\right)$ for any positive $\epsilon$.
• Random quantum circuits yield minimally structured models for chaotic quantum dynamics, able to capture for example universal properties of entanglement growth. We provide exact results and coarse-grained models for the spreading of operators by quantum circuits made of Haar-random unitaries. We study both 1+1D and higher dimensions, and argue that the coarse-grained pictures carry over to operator spreading in generic many-body systems. In 1+1D, we demonstrate that the out-of-time-order correlator (OTOC) satisfies a biased diffusion equation, which gives exact results for the spatial profile of the OTOC, and the butterfly speed $v_{B}$. We find that in 1+1D the 'front' of the OTOC broadens diffusively, with a width scaling in time as $t^{1/2}$. We address fluctuations in the OTOC between different realizations of the random circuit, arguing that they are negligible in comparison to the broadening of the front. Turning to higher D, we show that the averaged OTOC can be understood exactly via a remarkable correspondence with a classical droplet growth problem. This implies that the width of the front of the averaged OTOC scales as $t^{1/3}$ in 2+1D and $t^{0.24}$ in 3+1D (KPZ exponents). We support our analytic argument with simulations in 2+1D. We point out that, in a lattice model, the late time shape of the spreading operator is in general not spherical. However when full spatial rotational symmetry is present in 2+1D, our mapping implies an exact asymptotic form for the OTOC in terms of the Tracy-Widom distribution. For an alternative perspective on the OTOC in 1+1D, we map it to the partition function of an Ising-like model. Thanks to special structure arising from unitarity, this partition function reduces to a random walk calculation which can be performed exactly. We also use this mapping to give exact results for entanglement growth in 1+1D circuits.
• The NSF Workshop in Quantum Information and Computation for Chemistry assembled experts from directly quantum-oriented fields such as algorithms, chemistry, machine learning, optics, simulation, and metrology, as well as experts in related fields such as condensed matter physics, biochemistry, physical chemistry, inorganic and organic chemistry, and spectroscopy. The goal of the workshop was to summarize recent progress in research at the interface of quantum information science and chemistry as well as to discuss the promising research challenges and opportunities in the field. Furthermore, the workshop hoped to identify target areas where cross fertilization among these fields would result in the largest payoff for developments in theory, algorithms, and experimental techniques. The ideas can be broadly categorized in two distinct areas of research that obviously have interactions and are not separated cleanly. The first area is quantum information for chemistry, or how quantum information tools, both experimental and theoretical can aid in our understanding of a wide range of problems pertaining to chemistry. The second area is chemistry for quantum information, which aims to discuss the several aspects where research in the chemical sciences can aid progress in quantum information science and technology. The results of the workshop are summarized in this report.
• How useful can machine learning be in a quantum laboratory? Here we raise the question of the potential of intelligent machines in the context of scientific research. We investigate this question by using the projective simulation model, a physics-oriented approach to artificial intelligence. In our approach, the projective simulation system is challenged to design complex photonic quantum experiments that produce high-dimensional entangled multiphoton states, which are of high interest in modern quantum experiments. The artificial intelligence system learns to create a variety of entangled states, in number surpassing the best previously studied automated approaches, and improves the efficiency of their realization. In the process, the system autonomously (re)discovers experimental techniques which are only becoming standard in modern quantum optical experiments - a trait which was not explicitly demanded from the system but emerged through the process of learning. Such features highlight the possibility that machines could have a significantly more creative role in future research.
• We show that qubit stabilizer states can be represented by non-negative quasi-probability distributions associated with a Wigner-Weyl-Moyal formalism where Clifford gates are positive state-independent maps. This is accomplished by generalizing the Wigner-Weyl-Moyal formalism to three generators instead of two---producing an exterior, or Grassmann, algebra---which results in Clifford group gates for qubits that act as a permutation on the finite Weyl phase space points naturally associated with stabilizer states. As a result, a non-negative probability distribution can be associated with each stabilizer state's three-generator Wigner function, and these distributions evolve deterministically to one another under Clifford gates. This corresponds to a hidden variable theory that is non-contextual and local for qubit Clifford gates, while Clifford (Pauli) measurements have a context-dependent representation. Equivalently, we show that qubit Clifford gates can be expressed as propagators within the three-generator Wigner-Weyl-Moyal formalism whose semiclassical expansion is truncated at order $\hbar^0$ with a finite number of terms. The $T$-gate, which extends the Clifford gate set to one capable of universal quantum computation, requires a semiclassical expansion of the propagator to order $\hbar^1$. We compare this approach to previous quasi-probability descriptions of qubits that relied on the two-generator Wigner-Weyl-Moyal formalism and find that the two-generator Weyl symbols of stabilizer states result in a description of evolution under Clifford gates that is state-dependent, in contrast to the three-generator formalism. We have thus extended Wigner non-negative quasi-probability distributions from the odd $d$-dimensional case to $d=2$ qubits, which describe the non-contextuality of Clifford gates and contextuality of Pauli measurements on qubit stabilizer states.
• arXiv:1706.06221v1 (quant-ph, Jun 21 2017): Non-asymptotic entanglement distillation studies the trade-off between three parameters: the distillation rate, the number of independent and identically distributed prepared states, and the fidelity of the distillation. We first study the one-shot $\epsilon$-infidelity distillable entanglement under quantum operations that completely preserve positivity of the partial transpose (PPT) and characterize it as a semidefinite program (SDP). For isotropic states, it can be further simplified to a linear program. The one-shot $\epsilon$-infidelity PPT-assisted distillable entanglement can be transformed to a quantum hypothesis testing problem. Moreover, we show efficiently computable second-order upper and lower bounds for the non-asymptotic distillable entanglement with a given infidelity tolerance. Utilizing these bounds, we obtain the second-order asymptotic expansions of the optimal distillation rates for pure states and some classes of mixed states. In particular, this result recovers the second-order expansion of LOCC distillable entanglement for pure states in [Datta/Leditzky, IEEE Trans. Inf. Theory 61:582, 2015]. Furthermore, we provide an algorithm for calculating the Rains bound and present direct numerical evidence (not involving any other entanglement measures, as in [Wang/Duan, arXiv:1605.00348]), showing that the Rains bound is not additive under tensor products.
• Recent results have shown the stability of frustration-free Hamiltonians to weak local perturbations, assuming several conditions. In this paper, we prove the stability of free fermion Hamiltonians which are gapped and local. These free fermion Hamiltonians are not necessarily frustration-free, but we are able to adapt previous work to prove stability. The key idea is to add an additional copy of the system to cancel topological obstructions. We comment on applications to quantization of Hall conductance in such systems.
• One of the most widely known building blocks of modern physics is Heisenberg's indeterminacy principle. Among the different statements of this fundamental property of the full quantum mechanical nature of physical reality, the uncertainty relation for energy and time has a special place. Its interpretation and its consequences have inspired continued research efforts for almost a century. In its modern formulation, the uncertainty relation is understood as setting a fundamental bound on how fast any quantum system can evolve. In this Topical Review we describe important milestones, such as the Mandelstam-Tamm and the Margolus-Levitin bounds on the quantum speed limit, and summarise recent applications in a variety of current research fields -- including quantum information theory, quantum computing, and quantum thermodynamics amongst several others. To bring order and to provide an access point into the many different notions and concepts, we have grouped the various approaches into the minimal time approach and the geometric approach, where the former relies on quantum control theory, and the latter arises from measuring the distinguishability of quantum states. Due to the volume of the literature, this Topical Review can only present a snapshot of the current state-of-the-art and can never be fully comprehensive. Therefore, we highlight but a few works hoping that our selection can serve as a representative starting point for the interested reader.
• Magic states can be used as a resource to circumvent the restrictions due to stabilizer-preserving operations, and magic-state conversion has been studied only in the asymptotic regime thus far. Here we solve the question of whether a stabilizer-preserving quantum operation exists that can convert between two given magic states in the single-shot regime. We first phrase this question as a feasibility problem for a semi-definite program (SDP), which provides a procedure for constructing a free channel if it exists. Then we employ a variant of Farkas' Lemma to derive necessary and sufficient conditions for existence, and this method is used to construct a complete set of magic monotones.
• To implement fault-tolerant quantum computation with continuous variables, Gottesman-Kitaev-Preskill (GKP) qubits have been recognized as an important technological element. However, the analog outcome of GKP qubits, which includes beneficial information to improve the error tolerance, has been wasted, because the GKP qubits have been treated as only discrete variables. In this paper, we propose a hybrid quantum error correction approach that combines digital information with the analog information of the GKP qubits using the maximum-likelihood method. As an example, we demonstrate that the three-qubit bit-flip code can correct double errors, whereas the conventional method based on majority voting on the binary measurement outcome can correct only a single error. As another example, a concatenated code known as Knill's C4/C6 code can achieve the hashing bound for the quantum capacity of the Gaussian quantum channel. To the best of our knowledge, this approach is the first attempt to draw both digital and analog information from a single quantum state to improve quantum error correction performance.
• This paper reports on an experiment realized on the IBM 5Q chip which demonstrates strong evidence for the advantage of using error detection and fault-tolerant design of quantum circuits. By showing that fault-tolerant quantum computation is already within our reach, the author hopes to encourage this approach.
• In this work we introduce an intermediate setting between quantum nonlocality and communication complexity problems. More precisely, we study the value of XOR games $G$ when Alice and Bob are allowed to use a limited amount of one-way classical communication $\omega_{o.w.-c}(G)$ (resp. one-way quantum communication $\omega_{o.w.-c}^*(G)$), where $c$ denotes the number of bits (resp. qubits). The key quantity here is the quotient $\omega_{o.w.-c}^*(G)/\omega_{o.w.-c}(G)$. We provide a universal way to obtain Bell inequality violations of general Bell functionals from XOR games for which the quotient $\omega_{o.w.-c}^*(G)/\omega_{o.w.-c}(G)$ is larger than 1. This allows us, in particular, to find (unbounded) Bell inequality violations from communication complexity problems in the same spirit as the recent work by Buhrman et al. (2016). We also provide an example of an XOR game for which the previous quotient is optimal (up to a logarithmic factor) in terms of the amount of information $c$. Interestingly, this game has only polynomially many inputs per player. For the related problem of separating the classical vs quantum communication complexity of a function, the known examples attaining exponential separation require exponentially many inputs per party.
• We introduce a family of tensor network states that we term quasi-injective Projected Entangled-Pair States (PEPS). They extend the class of injective PEPS and include other states, like the ground states of the AKLT and the CZX models in square lattices. We construct parent Hamiltonians for which quasi-injective PEPS are unique ground states. We also determine the necessary and sufficient conditions for two tensors to generate the same family of such states in two spatial dimensions. Using this result, we show that the third cohomology labeling of Symmetry Protected Topological phases extends to quasi-injective PEPS.
• The tensor rank of a tensor is the smallest number $r$ such that the tensor can be decomposed as a sum of $r$ simple tensors. Let $s$ be a $k$-tensor and let $t$ be an $l$-tensor. The tensor product of $s$ and $t$ is a $(k + l)$-tensor (not to be confused with the "tensor Kronecker product" used in algebraic complexity theory, which multiplies two $k$-tensors to get a $k$-tensor). Tensor rank is sub-multiplicative under the tensor product. We revisit the connection between restrictions and degenerations. It is well-known that tensor rank is not in general multiplicative under the tensor Kronecker product. A result of our study is that tensor rank is also not in general multiplicative under the tensor product. This answers a question of Draisma and Saptharishi. It remains an open question whether border rank and asymptotic rank are multiplicative under the tensor product. Interestingly, all lower bounds on border rank obtained from Young flattenings are multiplicative under the tensor product.
• We present a synthesis framework to map logic networks into quantum circuits for quantum computing. The synthesis framework is based on LUT networks (lookup-table networks), which play a key role in conventional logic synthesis. Establishing a connection between LUTs in a LUT network and reversible single-target gates in a reversible network allows us to bridge conventional logic synthesis with logic synthesis for quantum computing, despite several fundamental differences. We call our synthesis framework LUT-based Hierarchical Reversible Logic Synthesis (LHRS). Input to LHRS is a classical logic network; output is a quantum network (realized in terms of Clifford+$T$ gates). The framework offers a trade-off between the number of qubits and the number of quantum gates. In a first step, an initial network is derived that only consists of single-target gates and already completely determines the number of qubits in the final quantum network. Different methods are then used to map each single-target gate into Clifford+$T$ gates, while aiming at optimally using available resources. We demonstrate the effectiveness of our method in automatically synthesizing IEEE compliant floating point networks up to double precision. As many quantum algorithms target scientific simulation applications, they can make rich use of floating point arithmetic components. But due to the lack of quantum circuit descriptions for those components, it can be difficult to find a realistic cost estimation for the algorithms. Our synthesized benchmarks allow quantum algorithm designers to give the first complete cost estimates for a host of quantum algorithms. Thus, the benchmarks and, more generally, the LHRS framework are an essential step towards the goal of understanding which quantum algorithms will be practical in the first generations of quantum computers.
• Integer arithmetic is the underpinning of many quantum algorithms, with applications ranging from Shor's algorithm and HHL for matrix inversion to Hamiltonian simulation algorithms. A basic objective is to keep the required resources to implement arithmetic as low as possible. This applies in particular to the number of qubits required in the implementation, as for the foreseeable future this number is expected to be small. We present a reversible circuit for integer multiplication that is inspired by Karatsuba's recursive method. The main improvement over circuits that have been previously reported in the literature is an asymptotic reduction of the amount of space required from $O(n^{1.585})$ to $O(n^{1.427})$. This improvement is obtained in exchange for a small constant increase in the number of operations by a factor less than $2$ and a small asymptotic increase in depth for the parallel version. The asymptotic improvements are obtained from analyzing pebble games on complete ternary trees.
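For context, the classical Karatsuba recursion replaces four half-size products with three, which is where the $O(n^{\log_2 3}) \approx O(n^{1.585})$ operation count comes from. A textbook sketch of the classical algorithm (the paper's contribution, the space-efficient reversible version obtained via pebble games, is not shown here):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with three recursive half-size
    products instead of four: O(n^log2(3)) ~ O(n^1.585) bit operations."""
    if x < 16 or y < 16:
        return x * y  # base case: operands fit in a machine word
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)  # split x = xh * 2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b  # cross terms from one product
    return (a << (2 * m)) + (c << m) + b
```

The reversible setting must additionally uncompute intermediate results such as `xh + xl`, which is exactly the space-time trade-off the pebble-game analysis optimizes.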
• Grid states form a discrete set of mixed quantum states that can be described by graphs. We characterize the entanglement properties of these states and provide methods to evaluate entanglement criteria for grid states in a graphical way. With these ideas we find bound entangled grid states for two-particle systems of any dimension and multiparticle grid states that provide examples for the different aspects of genuine multiparticle entanglement. Our findings suggest that entanglement theory for grid states, although they form a discrete set, already has a complexity similar to that of general states.
• If two quantum players at a nonlocal game G achieve a superclassical score, then their measurement outcomes must be at least partially random from the perspective of any third player. This is the basis for device-independent quantum cryptography. In this paper we address a related question: does a superclassical score at G guarantee that one player has created randomness from the perspective of the other player? We show that for complete-support games, the answer is yes: even if the second player is given the first player's input at the conclusion of the game, he cannot perfectly recover her output. Thus some amount of local randomness (i.e., randomness possessed by only one player) is always obtained when randomness is certified from nonlocal games with quantum strategies. This is in contrast to non-signaling game strategies, which may produce global randomness without any local randomness. We discuss potential implications for cryptographic protocols between mistrustful parties.
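To illustrate what a "superclassical score" means, here is a standard CHSH example (a textbook construction, not taken from the paper): deterministic classical strategies win the CHSH game with probability at most $3/4$, while the usual qubit strategy on a maximally entangled state reaches $\cos^2(\pi/8) \approx 0.854$.

```python
import itertools
import numpy as np

# Classical value: enumerate all deterministic strategies a(x), b(y).
best_classical = 0.0
for a0, a1, b0, b1 in itertools.product([0, 1], repeat=4):
    wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
               for x in (0, 1) for y in (0, 1))
    best_classical = max(best_classical, wins / 4)

# Quantum value: standard optimal strategy on |Phi+> = (|00> + |11>)/sqrt(2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def projectors(obs):
    """Map each +/-1 eigenvalue of a Hermitian observable to its projector."""
    vals, vecs = np.linalg.eigh(obs)
    return {int(np.rint(v)): np.outer(vecs[:, i], vecs[:, i].conj())
            for i, v in enumerate(vals)}

A = [projectors(Z), projectors(X)]                       # Alice: x=0 -> Z, x=1 -> X
B = [projectors((Z + X) / np.sqrt(2)),                   # Bob:   y=0
     projectors((Z - X) / np.sqrt(2))]                   #        y=1

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for ea, Pa in A[x].items():
            for eb, Pb in B[y].items():
                a, b = (1 - ea) // 2, (1 - eb) // 2     # eigenvalue +/-1 -> bit 0/1
                if (a ^ b) == (x & y):
                    win += 0.25 * np.real(phi.conj() @ np.kron(Pa, Pb) @ phi)

print(best_classical, win)  # 0.75 vs cos^2(pi/8) ~ 0.8536
```

Any score strictly between these two values certifies that the outcomes could not have been produced deterministically, which is the starting point of the randomness-certification question the paper addresses.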
• The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g., magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramer-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where Quantum Hamiltonian Learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
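As a toy illustration of the inference problem described (a biased coin obstructed by Poisson count rates), the sketch below uses made-up rates and a simple method-of-moments estimator as a stand-in for the maximum-likelihood and Bayes estimators analyzed in the paper; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: a shot yields Poisson-distributed photon counts with
# mean `a` if the spin projects to the bright state and mean `b` if dark;
# `p_true` is the bias of the underlying "coin" we want to estimate.
a, b, p_true = 30.0, 10.0, 0.7
shots = 20000

bright = rng.random(shots) < p_true
counts = rng.poisson(np.where(bright, a, b))

# Reference experiments calibrate the two Poisson rates.
a_hat = rng.poisson(a, shots).mean()
b_hat = rng.poisson(b, shots).mean()

# Method-of-moments estimator for the coin bias: the mean count is
# p*a + (1-p)*b, so invert that linear relation.
p_hat = (counts.mean() - b_hat) / (a_hat - b_hat)
print(p_hat)  # close to 0.7
```

The paper's point is that replacing ad hoc estimators like this one with proper likelihood-based or Bayesian inference yields rigorous error bars and clean propagation into downstream analyses such as Hamiltonian learning.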
• We consider the simulation of a quantum channel by two parties who share a resource state and may apply local operations assisted by classical communication (LOCC). One specific type of such LOCC is standard teleportation, which is however limited to the simulation of Pauli channels. Here we show how we can easily enlarge this class by means of a minimal perturbation of the teleportation protocol, where we introduce noise in the classical communication channel between the remote parties. By adopting this noisy protocol, we provide a necessary condition for simulating a non-Pauli channel. In particular, we characterize the set of channels that are generated assuming the Choi matrix of an amplitude damping channel as a resource state. Within this set, we identify a class of Pauli-damping channels for which we bound the two-way quantum and private capacities. (Jun 20 2017, quant-ph, arXiv:1706.05384v1)
• Upper bounds on the secret-key-agreement capacity of a quantum channel serve as a way to assess the performance of practical quantum-key-distribution protocols conducted over that channel. In particular, if a protocol employs a quantum repeater, achieving secret-key rates exceeding these upper bounds is a witness to having a working quantum repeater. In this paper, we extend a recent advance [Liuzzo-Scorpo et al., arXiv:1705.03017] in the theory of the teleportation simulation of single-mode phase-insensitive Gaussian channels such that it now applies to the relative entropy of entanglement measure. As a consequence of this extension, we find tighter upper bounds on the non-asymptotic secret-key-agreement capacity of the lossy thermal bosonic channel than were previously known. The lossy thermal bosonic channel serves as a more realistic model of communication than the pure-loss bosonic channel, because it can model the effects of eavesdropper tampering and imperfect detectors. An implication of our result is that the previously known upper bounds on the secret-key-agreement capacity of the thermal channel are too pessimistic for the practical finite-size regime in which the channel is used a finite number of times, and so it should now be somewhat easier to witness a working quantum repeater when using secret-key-agreement capacity upper bounds as a benchmark.
• A pair of observables is called "incompatible" if and only if their commutator does not vanish; incompatibility is one of the key features of quantum mechanics. The question is, how can we characterize the incompatibility among three or more observables? Here we explore one possible route towards this goal through Heisenberg's uncertainty relations, which impose fundamental constraints on the measurement precisions for incompatible observables. Specifically, we quantify the incompatibility by the optimal state-independent bounds of additive variance-based uncertainty relations. In this way, the degree of incompatibility becomes an intrinsic property of the operators, independent of the quantum state. To justify our case, we focus on the incompatibility of spin systems. For an arbitrary setting of two or three linearly independent Pauli spin operators, the incompatibility is solved analytically; the spins are maximally incompatible if and only if they are orthogonal to each other. Moreover, the measure of incompatibility is a versatile tool for applications such as testing entanglement of bipartite states and EPR-steering criteria.
• Given an arbitrary quantum state ($\sigma$), we obtain an explicit construction of a state $\rho^*_\varepsilon(\sigma)$ (resp. $\rho_{*,\varepsilon}(\sigma)$) which has the maximum (resp. minimum) entropy among all states which lie in a specified neighbourhood ($\varepsilon$-ball) of $\sigma$. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of $\sigma$. The states $\rho^*_\varepsilon(\sigma)$ and $\rho_{*,\varepsilon}(\sigma)$ depend only on the geometry of the $\varepsilon$-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the min- and max- entropies. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an $\varepsilon$-ball around a given state $\sigma$. Examples of such a function include the von Neumann entropy, and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's Rule, and majorization theory.
• In the typical model, a discrete-time coined quantum walk searching the 2D grid for a marked vertex achieves a success probability of $O(1/\log N)$ in $O(\sqrt{N \log N})$ steps, which with amplitude amplification yields an overall runtime of $O(\sqrt{N} \log N)$. We show that making the quantum walk lackadaisical or lazy by adding a self-loop of weight $4/N$ to each vertex speeds up the search, causing the success probability to reach a constant near $1$ in $O(\sqrt{N \log N})$ steps, thus yielding an $O(\sqrt{\log N})$ improvement over the typical, loopless algorithm. This improved runtime matches the best known quantum algorithms for this search problem. Our results are based on numerical simulations, since the lackadaisical quantum walk is not amenable to analysis via the abstract search algorithm.
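The $O(\sqrt{\log N})$ saving can be checked by composing the stated bounds:

```latex
% Loopless walk: success probability p = O(1/\log N) after T = O(\sqrt{N \log N})
% steps, so amplitude amplification adds O(1/\sqrt{p}) = O(\sqrt{\log N}) repetitions:
T_{\mathrm{loopless}}
  = O\!\left(\sqrt{N \log N}\right) \cdot O\!\left(\sqrt{\log N}\right)
  = O\!\left(\sqrt{N}\,\log N\right).
% Lackadaisical walk: constant success probability in the same number of steps,
% so no amplification is needed:
T_{\mathrm{lazy}} = O\!\left(\sqrt{N \log N}\right),
\qquad
\frac{T_{\mathrm{loopless}}}{T_{\mathrm{lazy}}} = O\!\left(\sqrt{\log N}\right).
```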
• Given a direct system of Hilbert spaces $s\mapsto \mathcal H_s$ (with isometric inclusion maps $\iota_s^t:\mathcal H_s\rightarrow \mathcal H_t$ for $s\leq t$) corresponding to quantum systems on scales $s$, we define notions of scale invariant and weakly scale invariant operators. In some cases of quantum spin chains we find conditions for transfer matrices and nearest neighbour Hamiltonians to be scale invariant or weakly so. Scale invariance forces spatial inhomogeneity of the spectral parameter. But weakly scale invariant transfer matrices may be spatially homogeneous, in which case the change of spectral parameter from one scale to another is governed by a classical dynamical system exhibiting fractal behaviour.
• We show that the border support rank of the tensor corresponding to two-by-two matrix multiplication is seven over the complex numbers. We do this by constructing two polynomials that vanish on all complex tensors with format four-by-four-by-four and border rank at most six, but that do not vanish simultaneously on any tensor with the same support as the two-by-two matrix multiplication tensor. This extends the work of Hauenstein, Ikenmeyer, and Landsberg. We also give two proofs that the support rank of the two-by-two matrix multiplication tensor is seven over any field: one proof using a result of De Groote saying that the decomposition of this tensor is unique up to sandwiching, and another proof via the substitution method. These results answer a question asked by Cohn and Umans. Studying the border support rank of the matrix multiplication tensor is relevant for the design of matrix multiplication algorithms, because upper bounds on the border support rank of the matrix multiplication tensor lead to upper bounds on the computational complexity of matrix multiplication, via a construction of Cohn and Umans. Moreover, support rank has applications in quantum communication complexity.
• We introduce a framework for graphical security proofs in device-independent quantum cryptography using the methods of categorical quantum mechanics. We are optimistic that this approach will make some of the highly complex proofs in quantum cryptography more accessible, facilitate the discovery of new proofs, and enable automated proof verification. As an example of our framework, we reprove a recent result from device-independent quantum cryptography: any linear randomness expansion protocol can be converted into an unbounded randomness expansion protocol. We give a graphical exposition of a proof of this result and implement parts of it in the Globular proof assistant.
• An important class of contextuality arguments in quantum foundations are the All-versus-Nothing (AvN) proofs, generalising a construction originally due to Mermin. We present a general formulation of All-versus-Nothing arguments, and a complete characterisation of all such arguments which arise from stabiliser states. We show that every AvN argument for an n-qubit stabiliser state can be reduced to an AvN proof for a three-qubit state which is local Clifford-equivalent to the tripartite GHZ state. This is achieved through a combinatorial characterisation of AvN arguments, the AvN triple Theorem, whose proof makes use of the theory of graph states. This result enables the development of a computational method to generate all the AvN arguments in $\mathbb{Z}_2$ on n-qubit stabiliser states. We also present new insights into the stabiliser formalism and its connections with logic.
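The prototypical AvN argument generalized here is Mermin's GHZ construction, which is easy to verify numerically. The sketch below (a standard check, not code from the paper) confirms the four GHZ-state expectation values whose classical value assignments are contradictory:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)  # (|000> + |111>) / sqrt(2)

def expval(ops):
    """Expectation value of a three-qubit tensor product on the GHZ state."""
    M = np.kron(np.kron(ops[0], ops[1]), ops[2])
    return np.real(ghz.conj() @ M @ ghz)

# Mermin's four contexts: GHZ is a +1 eigenstate of XXX and a -1 eigenstate
# of XYY, YXY and YYX.
vals = [expval(c) for c in ([X, X, X], [X, Y, Y], [Y, X, Y], [Y, Y, X])]
print(vals)  # [1.0, -1.0, -1.0, -1.0]
```

The "all-versus-nothing" contradiction: in any noncontextual assignment of $\pm 1$ values, the product over the four contexts is $+1$ (each local observable appears exactly twice), yet the quantum values above multiply to $-1$, so no assignment reproduces even a single run.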
• Bose condensation is central to our understanding of quantum phases of matter. Here we review Bose condensation in topologically ordered phases (also called topological symmetry breaking), where the condensing bosons have non-trivial mutual statistics with other quasiparticles in the system. We give a non-technical overview of the relationship between the phases before and after condensation, drawing parallels with more familiar symmetry-breaking transitions. We then review two important applications of this phenomenon. First, we describe the equivalence between such condensation transitions and pairs of phases with gappable boundaries, as well as examples where multiple types of gapped boundary between the same two phases exist. Second, we discuss how such transitions can lead to global symmetries which exchange or permute anyon types. Finally we discuss the nature of the critical point, which can be mapped to a conventional phase transition in some -- but not all -- cases.
• Quantifying quantum mechanical uncertainty is vital for the increasing number of experiments that reach the uncertainty limited regime. We present a method for computing tight variance uncertainty relations, i.e., the optimal state-independent lower bound for the sum of the variances for any set of two or more measurements. The bounds come with a guaranteed error estimate, so results of pre-assigned accuracy can be obtained straightforwardly. Our method also works for POVM measurements. Therefore, it can be used for detecting entanglement in noisy environments, even in cases where conventional spin squeezing criteria fail because of detector noise.
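For concreteness, here is a brute-force toy version of the problem (a stand-in for the paper's method with certified error bounds) for the pair $\sigma_x, \sigma_z$ on a qubit, where the optimal state-independent bound on the variance sum is analytically $1$, since $\mathrm{Var}(\sigma_x)+\mathrm{Var}(\sigma_z) = 2 - \langle\sigma_x\rangle^2 - \langle\sigma_z\rangle^2$ and $\langle\sigma_x\rangle^2 + \langle\sigma_z\rangle^2 \leq 1$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def var(op, psi):
    """Variance of a Hermitian observable in a pure state."""
    mean = np.real(psi.conj() @ op @ psi)
    return np.real(psi.conj() @ (op @ op) @ psi) - mean ** 2

# Brute-force search over random pure qubit states for the minimum of
# Var(sigma_x) + Var(sigma_z); the analytic optimum is 1, attained by
# states whose Bloch vector lies in the x-z plane.
rng = np.random.default_rng(1)
best = np.inf
for _ in range(20000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    best = min(best, var(X, psi) + var(Z, psi))
print(best)  # approaches 1 from above
```

Random search like this gives no accuracy guarantee, which is precisely the gap the paper's method with pre-assigned error estimates closes.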
• Communication over a noisy channel is often conducted in a setting in which different input symbols to the channel incur a certain cost. For example, for the additive white Gaussian noise channel, the cost associated with a real number input symbol is the square of its magnitude. In such a setting, it is often useful to know the maximum amount of information that can be reliably transmitted per cost incurred. This is known as the capacity per unit cost. In this paper, we generalize the capacity per unit cost to various communication tasks involving a quantum channel; in particular, we consider classical communication, entanglement-assisted classical communication, private communication, and quantum communication. For each task, we define the corresponding capacity per unit cost and derive a formula for it analogous to that of the usual capacity. Furthermore, for the special case in which there is a zero-cost quantum state, we obtain expressions for the various capacities per unit cost in terms of an optimized relative entropy involving the zero-cost state. For each communication task, we construct an explicit pulse-position-modulation coding scheme that achieves the capacity per unit cost. Finally, we compute capacities per unit cost for various quantum Gaussian channels and introduce the notion of a blocklength-constrained capacity per unit cost.
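For orientation, the classical antecedent of these results is Verdú's capacity per unit cost (1990); under the stated assumptions (cost function $b$, and a zero-cost input symbol $x_0$ in the second expression), it reads:

```latex
\mathbf{C}
  \;=\; \sup_{P_X} \frac{I(X;Y)}{\mathbb{E}\!\left[b(X)\right]}
  \;=\; \sup_{x} \frac{D\!\left(P_{Y|x} \,\middle\|\, P_{Y|x_0}\right)}{b(x)}
  \quad \text{(when } b(x_0) = 0\text{)},
```

where $D(\cdot\|\cdot)$ is the relative entropy. The paper's results replace $D$ with an optimized quantum relative entropy involving the zero-cost state, one formula per communication task.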
• The adiabatic quantum algorithm has drawn intense interest as a potential approach to accelerating optimization tasks using quantum computation. The algorithm is most naturally realised in systems which support Hamiltonian evolution, rather than discrete gates. We explore an alternative approach in which slowly varying measurements are used to mimic adiabatic evolution. We show that for certain Hamiltonians, which remain frustration-free all along the adiabatic path, the necessary measurements can be implemented through the measurement of random terms from the Hamiltonian. This offers a new, and potentially more viable, method of realising adiabatic evolution in gate-based quantum computer architectures.
• Neural networks can efficiently encode the probability distribution of errors in an error correcting code. Moreover, these distributions can be conditioned on the syndromes of the corresponding errors. This paves a path forward for a decoder that employs a neural network to calculate the conditional distribution, then sample from the distribution - the sample will be the predicted error for the given syndrome. We present an implementation of such an algorithm that can be applied to any stabilizer code. Testing it on the toric code, it achieves a higher threshold than a number of known decoders, thanks to naturally finding the most probable error and accounting for correlations between errors.
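The decode-by-sampling idea can be shown on a code small enough to enumerate exactly. In the sketch below, an explicit lookup table of $P(\text{error} \mid \text{syndrome})$ for the 3-bit repetition code plays the role the paper assigns to the neural network (which is needed precisely because such tables are exponentially large for real codes):

```python
import itertools
import numpy as np

H = np.array([[1, 1, 0],
              [0, 1, 1]])  # parity checks of the 3-bit repetition code
p = 0.1                    # independent bit-flip probability per bit

# Enumerate every error pattern, group by syndrome, and record its probability.
table = {}
for bits in itertools.product([0, 1], repeat=3):
    e = np.array(bits)
    s = tuple(H @ e % 2)
    weight = p ** e.sum() * (1 - p) ** (3 - e.sum())
    table.setdefault(s, []).append((e, weight))

def decode(syndrome, rng):
    """Sample a predicted error from P(error | syndrome)."""
    errors, weights = zip(*table[tuple(syndrome)])
    probs = np.array(weights) / sum(weights)
    return errors[rng.choice(len(errors), p=probs)]

rng = np.random.default_rng(0)
print(decode([1, 0], rng))  # usually the single flip on the first bit
```

Sampling from the conditional distribution, rather than returning a fixed correction per syndrome, is what lets the decoder naturally weight degenerate and correlated error patterns.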
• We consider the contextual fraction as a quantitative measure of contextuality of empirical models, i.e. tables of probabilities of measurement outcomes in an experimental scenario. It provides a general way to compare the degree of contextuality across measurement scenarios; it bears a precise relationship to violations of Bell inequalities; its value, and a witnessing inequality, can be computed using linear programming; it is monotone with respect to the "free" operations of a resource theory for contextuality; and it measures quantifiable advantages in informatic tasks, such as games and a form of measurement based quantum computing.
• The violation of certain Bell inequalities allows for device-independent information processing secure against non-signalling eavesdroppers. However, this only holds for the Bell network, in which two or more agents perform local measurements on a single shared source of entanglement. To overcome the practical constraint that entangled systems can only be transmitted over relatively short distances, large-scale multi-source networks---in which entanglement is swapped between intermediate nodes---have been employed. Yet, to use Bell inequality violation to establish secure device-independent protocols between distant agents in such multi-source networks, the outcomes of intermediate entanglement swapping measurements must be post-selected---impeding the rate at which security can be established. Moreover, having multiple intermediate nodes in the network opens the door for generalised eavesdropping attacks not seen in the Bell network. Do there exist analogues of Bell inequalities for such multi-source networks whose violation is a resource for device-independent information processing without the need for post-selection? Recently, polynomial inequalities on the correlations classically achievable in multi-source networks have been derived. In this paper, the violation of these polynomial inequalities will be shown to allow for device-independent information processing without the need to condition on intermediate agents' outcomes. Moreover, their violation can prevent generalised eavesdropper attacks.
• We introduce both an exactly solvable model and a coupled-layer construction for an exotic, three-dimensional phase of matter with immobile topological excitations that carry a protected internal degeneracy. Unitary transformations on this degenerate Hilbert space may be implemented by braiding certain point-like excitations. This provides a new way of extending non-Abelian statistics to three-dimensions.
• In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
• In the study of thermodynamics for nanoscale quantum systems, a family of quantities known as generalized free energies have been derived as necessary and sufficient conditions that govern state transitions. These free energies become important especially in the regime where the system of interest consists of only a few (quantum) particles. In this work, we introduce a new family of smoothed generalized free energies, by constructing explicit smoothing procedures that maximize/minimize the free energies over an $\varepsilon$-ball of quantum states. In contrast to previously known smoothed free energies, these quantities now allow us to make an operational statement for approximate thermodynamic state transitions. We show that these newly defined smoothed quantities converge to the standard free energy in the thermodynamic limit.

Kenneth Goodenough Jun 21 2017 12:48 UTC

Ah yes I see, thank you for the clarification!

Stefano Pirandola Jun 20 2017 13:26 UTC

Hi Kenneth, more precisely that plot is for a particular "Pauli-damping" channel, i.e., a qubit channel that is decomposable into a Pauli channel (1) and an amplitude damping channel (2). This "Pauli-damping" channel can be simulated by performing noisy teleportation over a resource state that corre

...(continued)
Kenneth Goodenough Jun 20 2017 12:47 UTC

Interesting work! I was wondering, how do the new upper bounds for the amplitude-damping channel in Fig. 2 compare to previous bounds?

Barbara Terhal Jun 20 2017 07:25 UTC

It would be good if this conflict on assigning priority and credit is peacefully resolved by the parties involved (i have no opinions on the matter).

Stefano Pirandola Jun 15 2017 05:32 UTC

The secret-key capacity of the pure-loss channel -log(1-t) was proven in [9], not in the follow-up work [13] (which appeared 4 months later). Ref. [13] found that this capacity is also a strong converse bound, which is Eq. (1) here. Same story for Eq. (4) that was proven in [9], not in [13]. Again t

...(continued)
Chris Ferrie Jun 09 2017 10:06 UTC

I have posted an open review of this paper here: https://github.com/csferrie/openreviews/blob/master/arxiv.1703.09835/arxiv.1703.09835.md

Eddie Smolansky May 26 2017 05:23 UTC

Updated summary [here](https://github.com/eddiesmo/papers).

# How they made the dataset
- automated filtering with yolo and landmark detection projects
- crowd source final filtering (AMT - give 50 face images to turks and ask which don't belong)
- quality control through s

...(continued)
Felix Leditzky May 24 2017 20:43 UTC

Yes, that's right, thanks!

For (5), you use the Cauchy-Schwarz inequality $\left| \operatorname{tr}(X^\dagger Y) \right| \leq \sqrt{\operatorname{tr}(X^\dagger X)} \sqrt{\operatorname{tr}(Y^\dagger Y)}$ for the Hilbert-Schmidt inner product $\langle X,Y\rangle := \operatorname{tr}(X^\dagger Y)$ wi

...(continued)
Michael Tolan May 24 2017 20:27 UTC

Just reading over Eq (5) on P5 concerning the diamond norm.

Should the last $\sigma_1$ on the 4th line be replaced with a $\sigma_2$? I think I can see how the proof is working but not entirely certain.

Noon van der Silk May 23 2017 11:15 UTC

I think this thread has reached its end.

I've locked further comments, and I hope that the quantum computing community can thoughtfully find an approach to language that is inclusive to all and recognises the diverse background of all researchers, current and future.