# Top arXiv papers

• A critical milestone on the path to useful quantum computers is quantum supremacy - a demonstration of a quantum computation that is prohibitively hard for classical computers. A leading near-term candidate, put forth by the Google/UCSB team, is sampling from the probability distributions of randomly chosen quantum circuits, which we call Random Circuit Sampling (RCS). In this paper we study both the hardness and verification of RCS. While RCS was defined with experimental realization in mind, we show complexity theoretic evidence of hardness that is on par with the strongest theoretical proposals for supremacy. Specifically, we show that RCS satisfies an average-case hardness condition - computing output probabilities of typical quantum circuits is as hard as computing them in the worst-case, and therefore #P-hard. Our reduction exploits the polynomial structure in the output amplitudes of random quantum circuits, enabled by the Feynman path integral. In addition, it follows from known results that RCS satisfies an anti-concentration property, making it the first supremacy proposal with both average-case hardness and anti-concentration.
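To make the RCS task concrete, here is a minimal numpy sketch (not the paper's construction): a Haar-random unitary stands in for a random circuit, and we compute the output distribution $p(x) = |\langle x|U|0\rangle|^2$ whose probabilities RCS asks a device to sample from. The circuit size and seed are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim, rng):
    """Sample a Haar-random unitary via QR decomposition of a Ginibre matrix."""
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Fix the phases of the columns so the distribution is exactly Haar.
    q *= np.diag(r) / np.abs(np.diag(r))
    return q

n = 5                       # qubits (illustrative)
dim = 2 ** n
U = haar_unitary(dim, rng)  # stand-in for a random circuit
amps = U[:, 0]              # act on |0...0>
probs = np.abs(amps) ** 2   # output distribution p(x) = |<x|U|0>|^2

# The exponentially many output probabilities average 1/dim but fluctuate
# around it (Porter-Thomas-like behaviour), which is the anti-concentration
# property the abstract refers to.
print(probs.sum(), probs.max() * dim)
```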
• Contextuality has been conjectured to be a super-classical resource for quantum computation, analogous to the role of non-locality as a super-classical resource for communication. We show that the presence of contextuality places a lower bound on the amount of classical memory required to simulate any quantum sub-theory, thereby establishing a quantitative connection between contextuality and classical simulability. We apply our result to the qubit stabilizer sub-theory, where the presence of state-independent contextuality has been an obstacle in establishing contextuality as a quantum computational resource. We find that the presence of contextuality in this sub-theory demands that the minimum number of classical bits of memory required to simulate a multi-qubit system must scale quadratically in the number of qubits; notably, this is the same scaling as the Gottesman-Knill algorithm. We contrast this result with the (non-contextual) qudit case, where linear scaling is possible.
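The quadratic scaling referenced above is easy to see in the Gottesman-Knill-style tableau representation. A minimal sketch (assuming the standard binary tableau convention with destabilizer and stabilizer rows plus a sign bit):

```python
import numpy as np

def zero_state_tableau(n):
    """Binary tableau for |0...0>: rows are destabilizers X_i then stabilizers Z_i,
    each row holding 2n symplectic bits plus a sign bit.  Total memory is
    2n * (2n + 1) bits -- quadratic in the qubit number n, matching the
    lower-bound scaling discussed in the abstract."""
    t = np.zeros((2 * n, 2 * n + 1), dtype=np.uint8)
    for i in range(n):
        t[i, i] = 1          # destabilizer X_i
        t[n + i, n + i] = 1  # stabilizer Z_i
    return t

for n in (2, 4, 8):
    print(n, zero_state_tableau(n).size)  # 2n*(2n+1) bits
```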
• Characterising quantum processes is a key task in, and a challenge for, the development of quantum technologies, especially at the noisy intermediate scale of today's devices. One method for characterising processes is randomised benchmarking, which is robust against state preparation and measurement (SPAM) errors and can be used to benchmark Clifford gates. A complementary approach asks for full tomographic knowledge. Compressed sensing techniques achieve full tomography of quantum channels essentially at optimal resource efficiency. So far, guarantees for compressed sensing protocols rely on unstructured random measurements and cannot be applied to the data acquired from randomised benchmarking experiments. It has been an open question whether or not the favourable features of both worlds can be combined. In this work, we give a positive answer to this question. For the important case of characterising multi-qubit unitary gates, we provide a rigorously guaranteed and practical reconstruction method that works with an essentially optimal number of average gate fidelities measured with respect to random Clifford unitaries. Moreover, for general unital quantum channels we provide an explicit expansion into a unitary 2-design, allowing for a practical and guaranteed reconstruction also in that case. As a side result, we obtain a new statistical interpretation of the unitarity -- a figure of merit that characterises the coherence of a process. In our proofs we exploit recent representation theoretic insights on the Clifford group, develop a version of Collins' calculus with Weingarten functions for integration over the Clifford group, and combine this with proof techniques from compressed sensing.
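The data the reconstruction uses are average gate fidelities. As a reminder of the quantity involved (not the paper's reconstruction procedure), here is a minimal numpy sketch of the average gate fidelity between two unitaries via Nielsen's formula:

```python
import numpy as np

def avg_gate_fidelity(U, V):
    """Average gate fidelity between two unitaries in dimension d
    (Nielsen's formula): F = (d + |Tr(U^dag V)|^2) / (d (d + 1))."""
    d = U.shape[0]
    return (d + abs(np.trace(U.conj().T @ V)) ** 2) / (d * (d + 1))

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
print(avg_gate_fidelity(I2, I2))  # perfect implementation: 1.0
print(avg_gate_fidelity(I2, X))   # a bit flip instead of identity: 1/3
```

In the paper these fidelities are measured with respect to random Clifford unitaries and fed into a compressed-sensing reconstruction.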
• We provide the first example of a symmetry protected quantum phase that has universal computational power. Throughout this phase, which lives in spatial dimension two, the ground state is a universal resource for measurement based quantum computation.
• Feb 21 2018 quant-ph arXiv:1802.07130v1. A family of quantum Hamiltonians is said to be universal if any other finite-dimensional Hamiltonian can be approximately encoded within the low-energy space of a Hamiltonian from that family. If the encoding is efficient, universal families of Hamiltonians can be used as universal analogue quantum simulators and universal quantum computers, and the problem of approximately determining the ground-state energy of a Hamiltonian from a universal family is QMA-complete. One natural way to categorise Hamiltonians into families is in terms of the interactions they are built from. Here we prove universality of some important classes of interactions on qudits ($d$-level systems): (1) We completely characterise the $k$-qudit interactions which are universal, if augmented with arbitrary 1-local terms. We find that, for all $k \geqslant 2$ and all local dimensions $d \geqslant 2$, almost all such interactions are universal aside from a simple stoquastic class. (2) We prove universality of generalisations of the Heisenberg model that are ubiquitous in condensed-matter physics, even if free 1-local terms are not provided. We show that the $SU(d)$ and $SU(2)$ Heisenberg interactions are universal for all local dimensions $d \geqslant 2$ (spin $\geqslant 1/2$), implying that a quantum variant of the Max-$d$-Cut problem is QMA-complete. We also show that for $d=3$ all bilinear-biquadratic Heisenberg interactions are universal. One example is the general AKLT model. (3) We prove universality of any interaction proportional to the projector onto a pure entangled state.
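For the spin-1/2 case, the Heisenberg interaction whose universality the paper establishes has a simple algebraic form: it is the SWAP operator up to identity terms. A quick numpy check of that identity:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# Spin-1/2 Heisenberg exchange: X.X + Y.Y + Z.Z = 2*SWAP - I, so the
# interaction is, up to 1-local identity terms, a projector-like two-qudit
# coupling of the kind treated in the abstract.
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
print(np.allclose(H, 2 * SWAP - np.eye(4)))
```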
• Feb 27 2018 quant-ph cs.LG arXiv:1802.09025v1. Suppose we have many copies of an unknown $n$-qubit state $\rho$. We measure some copies of $\rho$ using a known two-outcome measurement $E_{1}$, then other copies using a measurement $E_{2}$, and so on. At each stage $t$, we generate a current hypothesis $\sigma_{t}$ about the state $\rho$, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that $|\operatorname{Tr}(E_{i} \sigma_{t}) - \operatorname{Tr}(E_{i}\rho) |$, the error in our prediction for the next measurement, is at least $\varepsilon$ at most $\operatorname{O}\!\left(n / \varepsilon^2 \right)$ times. Even in the "non-realizable" setting (where there could be arbitrary noise in the measurement outcomes) we show how to output hypothesis states that do significantly worse than the best possible states at most $\operatorname{O}\!\left(\sqrt {Tn}\right)$ times on the first $T$ measurements. These results generalize a 2007 theorem by Aaronson on the PAC-learnability of quantum states to the online and regret-minimization settings. We give three different ways to prove our results, using convex optimization, quantum postselection, and sequential fat-shattering dimension, which have different advantages in terms of parameters and portability.
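A hedged sketch of the online setting, using a matrix-multiplicative-weights-style update (one of several possible strategies; the constants, step size, and measurement ensemble below are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def expm_herm(H):
    """Matrix exponential of a Hermitian matrix via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(w)) @ v.conj().T

def mmw_online_learning(rho, measurements, eta=0.5):
    """Matrix-multiplicative-weights sketch of online state learning:
    after each two-outcome measurement E_t, penalise hypotheses whose
    prediction Tr(E_t sigma) missed the observed value Tr(E_t rho)."""
    d = rho.shape[0]
    G = np.zeros((d, d), dtype=complex)     # accumulated loss gradients
    errors = []
    for E in measurements:
        sigma = expm_herm(-eta * G)
        sigma /= np.trace(sigma).real       # current hypothesis state
        pred, truth = np.trace(E @ sigma).real, np.trace(E @ rho).real
        errors.append(abs(pred - truth))
        G += 2 * (pred - truth) * E         # gradient of the squared loss
    return errors

rng = np.random.default_rng(1)
d = 4
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
rho = np.outer(psi, psi.conj()); rho /= np.trace(rho).real
# Random rank-1 two-outcome measurements, one per round.
Es = []
for _ in range(200):
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    v /= np.linalg.norm(v)
    Es.append(np.outer(v, v.conj()))
errs = mmw_online_learning(rho, Es)
print(np.mean(errs[:20]), np.mean(errs[-20:]))  # prediction error shrinks
```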
• We give a quantum algorithm to exactly solve certain problems in combinatorial optimization, including weighted MAX-2-SAT as well as problems where the objective function is a weighted sum of products of Ising variables, all terms of the same degree $D$; this problem is called weighted MAX-E$D$-LIN2. We require that the optimal solution be unique for odd $D$ and doubly degenerate for even $D$; however, we expect that the algorithm still works without this condition, and we show how to reduce to the case without this assumption at the cost of an additional overhead. While the time required is still exponential, the algorithm provably outperforms Grover's algorithm assuming a mild condition on the number of low-energy states of the target Hamiltonian. The detailed runtime analysis reveals a tradeoff between the number of such states and the algorithm's speed: fewer such states allow a greater speedup. This leads to a natural hybrid algorithm that finds either an exact or approximate solution.
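For context, the Grover baseline that the paper's algorithm provably beats can be simulated exactly for small search spaces (a standard textbook sketch, not the paper's method):

```python
import numpy as np

def grover_success_prob(n_items, marked, iterations):
    """Exact simulation of Grover's algorithm: oracle phase flip followed
    by inversion about the mean, repeated `iterations` times."""
    psi = np.full(n_items, 1.0 / np.sqrt(n_items))
    for _ in range(iterations):
        psi[marked] *= -1              # oracle phase flip
        psi = 2 * psi.mean() - psi     # inversion about the mean
    return psi[marked] ** 2

N = 64
k = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~ optimal iteration count
print(k, grover_success_prob(N, 0, k))      # high success probability
```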
• The Harrow-Hassidim-Lloyd (HHL) quantum algorithm for sampling from the solution of a linear system provides an exponential speed-up over its classical counterpart. The problem of solving a system of linear equations has a wide scope of applications, and thus HHL constitutes an important algorithmic primitive. In these notes, we present the HHL algorithm and its improved versions in detail, including explanations of the constituent subroutines. More specifically, we discuss various quantum subroutines such as quantum phase estimation and amplitude amplification, as well as the important question of loading data into a quantum computer, via quantum RAM. The improvements to the original algorithm exploit variable-time amplitude amplification as well as a method for implementing linear combinations of unitary operations (LCUs) based on a decomposition of the operators using Fourier and Chebyshev series. Finally, we discuss a linear solver based on the quantum singular value estimation (QSVE) subroutine.
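What HHL gives is sample access to the normalised solution state $|x\rangle$. A classical reference sketch of that object for a small, hypothetical Hermitian instance (illustrative only; the condition number $\kappa$ shown at the end is what governs HHL's runtime):

```python
import numpy as np

# Hypothetical 4x4 Hermitian system A x = b, chosen for illustration.
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 0.0, 1.0])

x = np.linalg.solve(A, b)
state = x / np.linalg.norm(x)   # amplitudes of the quantum solution state |x>
probs = state ** 2              # measurement statistics HHL samples from

# HHL's cost scales with the condition number kappa = s_max / s_min of A.
kappa = np.linalg.cond(A)
print(probs, kappa)
```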
• These are notes on some entanglement properties of quantum field theory, aiming to make accessible a variety of ideas that are known in the literature. The main goal is to explain how to deal with entanglement when -- as in quantum field theory -- it is a property of the algebra of observables and not just of the states.
• The complexity of the commuting local Hamiltonians (CLH) problem still remains a mystery after two decades of research in quantum Hamiltonian complexity; it is only known to be contained in NP for a few small parameter settings. Of particular interest is the closely related question of understanding whether groundstates of CLHs can be generated by efficient quantum circuits. The two problems touch upon conceptual, physical and computational questions, including the centrality of non-commutation in quantum mechanics, quantum PCP and the area law. It is natural to first address the more physical case of CLHs embedded on a 2D lattice, but this problem too remained open, apart from some very specific cases. Here we consider a wide class of two-dimensional CLH instances: $k$-local CLHs, for any constant $k$, defined on qubits set on the edges of any surface complex, where we require that this surface complex is not too far from being "Euclidean". Each vertex and each face can be associated with an arbitrary term (as long as the terms commute). We show that this class is in NP, and moreover that the groundstates have an efficient quantum circuit that prepares them. This result subsumes that of Schuch [2011], which treated the special case of $4$-local Hamiltonians on a grid of qubits; it considerably generalizes that result and removes the mysterious feature of Schuch's proof, which showed containment in NP without providing a quantum circuit for the groundstate. We believe this work and the tools we develop make a significant step towards showing that 2D CLHs are in NP.
• Quantum computing exploits quantum phenomena such as superposition and entanglement to realize a form of parallelism that is not available to traditional computing. It offers the potential of significant computational speed-ups in quantum chemistry, materials science, cryptography, and machine learning. The dominant approach to programming quantum computers is to provide an existing high-level language with libraries that allow for the expression of quantum programs. This approach can permit computations that are meaningless in a quantum context, prohibits succinct expression of interaction between classical and quantum logic, and does not provide important constructs that are required for quantum programming. We present Q#, a quantum-focused domain-specific language explicitly designed to correctly, clearly and completely express quantum algorithms. Q# provides a type system; a tightly constrained environment to safely interleave classical and quantum computations; specialized syntax; symbolic code manipulation to automatically generate correct transformations of quantum operations; and powerful functional constructs which aid composition.
• Finding optimal correction of errors in generic stabilizer codes is a computationally hard problem, even for simple noise models. While this task can be simplified for codes with some structure, such as topological stabilizer codes, developing good and efficient decoders still remains a challenge. In our work, we systematically study a very versatile class of decoders based on feedforward neural networks. To demonstrate adaptability, we apply neural decoders to the triangular color and toric codes under various noise models with realistic features, such as spatially-correlated errors. We report that neural decoders provide significant improvement over leading efficient decoders in terms of the error-correction threshold. Using neural networks simplifies the process of designing well-performing decoders, and does not require prior knowledge of the underlying noise model.
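A toy illustration of the neural-decoder idea on the 3-bit repetition code rather than the colour or toric codes of the paper: a tiny feedforward network (plain numpy, hypothetical architecture and hyperparameters) learns the map from the 2-bit syndrome to the most likely single-bit error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Syndromes (s1, s2) for the errors: none, flip qubit 0, flip qubit 1, flip qubit 2.
X = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
Y = np.eye(4)  # one-hot error classes

# One hidden layer with tanh activation, trained by plain gradient descent.
W1 = rng.standard_normal((2, 16)) * 0.5; b1 = np.zeros(16)
W2 = rng.standard_normal((16, 4)) * 0.5; b2 = np.zeros(4)

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)     # softmax probabilities
    g = (p - Y) / len(X)                  # cross-entropy gradient w.r.t. logits
    gh = (g @ W2.T) * (1 - h ** 2)        # backprop through tanh
    W2 -= 0.3 * h.T @ g; b2 -= 0.3 * g.sum(axis=0)
    W1 -= 0.3 * X.T @ gh; b1 -= 0.3 * gh.sum(axis=0)

accuracy = (p.argmax(axis=1) == Y.argmax(axis=1)).mean()
print(accuracy)
```

The paper's decoders follow the same pattern at scale: syndrome in, correction class out, with the noise model learned from the training data rather than assumed.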
• The ability to implement the Quantum Fourier Transform (QFT) efficiently on a quantum computer enables the advantages offered by a variety of fundamental quantum algorithms, such as those for integer factoring, computing discrete logarithms over Abelian groups, and phase estimation. The standard fault-tolerant implementation of an $n$-qubit QFT approximates the desired transformation by removing small-angle controlled rotations and synthesizing the remaining ones into Clifford+T gates, incurring a T-count of $O(n \log^2(n))$. In this paper we show how to obtain an approximate QFT with a T-count of $O(n \log(n))$. Our approach relies on quantum circuits with measurements and feedforward, and on reusing a special quantum state that induces the phase gradient transformation. We report asymptotic analysis as well as concrete circuits, demonstrating significant advantages in both theory and practice.
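The exact transformation being approximated is the unitary with matrix entries $\omega^{jk}/\sqrt{N}$, $\omega = e^{2\pi i/N}$. A minimal numpy sketch of that target (the paper's contribution concerns its fault-tolerant circuit cost, not this matrix itself):

```python
import numpy as np

def qft_matrix(n):
    """Exact n-qubit QFT matrix: F[j, k] = omega^(j k) / sqrt(N), omega = e^(2 pi i / N)."""
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
# Sanity checks: unitarity, and QFT|0> is the uniform superposition.
print(np.allclose(F.conj().T @ F, np.eye(8)))
print(np.allclose(F[:, 0], np.full(8, 1 / np.sqrt(8))))
```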
• Short-depth algorithms are crucial for reducing computational error on near-term quantum computers, for which decoherence and gate infidelity remain important issues. Here we present a machine-learning approach for discovering such algorithms. We apply our method to a ubiquitous primitive: computing the overlap ${\rm Tr}(\rho\sigma)$ between two quantum states $\rho$ and $\sigma$. The standard algorithm for this task, known as the Swap Test, is used in many applications such as quantum support vector machines, and, when specialized to $\rho = \sigma$, quantifies the Rényi entanglement. Here, we find algorithms that have shorter depths than the Swap Test, including one that has constant depth (independent of problem size). Furthermore, we apply our approach to the hardware-specific connectivity and gate alphabets used by Rigetti's and IBM's quantum computers and demonstrate that the shorter algorithms that we derive significantly reduce the error - compared to the Swap Test - on these computers.
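The Swap Test's defining relation is that its acceptance probability encodes the overlap: $p_{\text{accept}} = (1 + \mathrm{Tr}(\rho\sigma))/2$. A quick numpy check of that relation (the baseline against which the learned circuits are compared):

```python
import numpy as np

def swap_test_prob(rho, sigma):
    """Acceptance probability of the Swap Test: p = (1 + Tr(rho sigma)) / 2,
    so the overlap is recovered as Tr(rho sigma) = 2p - 1."""
    return (1 + np.trace(rho @ sigma).real) / 2

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
rho0 = np.outer(ket0, ket0.conj())
rho1 = np.outer(ket1, ket1.conj())

print(swap_test_prob(rho0, rho0))  # identical pure states: 1.0
print(swap_test_prob(rho0, rho1))  # orthogonal states: 0.5
```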
• This work derives an analytical formula for the asymptotic state -- the quantum state resulting from an infinite number of applications of a general quantum channel on some initial state. For channels admitting multiple fixed or rotating points, conserved quantities -- the left fixed/rotating points of the channel -- determine the dependence of the asymptotic state on the initial state. The analytical formula stems from results regarding conserved quantities, including the fact that, for any channel admitting a full-rank fixed point, conserved quantities commute with that channel's Kraus operators up to a phase. In the same way that asymptotic states depend on initial states, the thermodynamic limit of (noninjective) matrix product states (MPS) depends on the MPS boundary conditions. Expectation values of local observables in such MPS are calculated in the thermodynamic limit, showing that effects due to the interaction of "twisted" boundary conditions with decaying bond degrees of freedom can persist. It is shown that one can eliminate all decaying bond degrees of freedom and calculate the same expectation values using mixtures of MPS with reduced bond dimension and multiple boundary conditions.
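The simplest instance of the asymptotic-state formula is a channel with a unique fixed point and no rotating points, where every initial state is driven to the same limit. A numpy sketch using amplitude damping (an illustrative example, not the paper's general construction):

```python
import numpy as np

def apply_channel(kraus, rho):
    """Apply a quantum channel given by its Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus)

gamma = 0.3
amp_damp = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex),
            np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)]

rho = np.eye(2, dtype=complex) / 2   # start from the maximally mixed state
for _ in range(200):                 # iterate toward the asymptotic state
    rho = apply_channel(amp_damp, rho)

# Amplitude damping has the unique fixed point |0><0|, so the asymptotic
# state is independent of the initial state here.
print(np.round(rho.real, 6))
```

Channels with multiple fixed or rotating points retain a dependence on the initial state, which is exactly what the paper's conserved quantities quantify.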
• Is entanglement an exclusive feature of quantum systems, or is it common to all non-classical theories? And if this is the case, how strong is quantum mechanical entanglement as compared to that exhibited by other theories? The first part of this thesis deals with these questions by considering quantum theory as part of a wider landscape of physical theories, collectively called general probabilistic theories (GPTs). Among other things, this manuscript contains a detailed introduction to the abstract state space formalism for GPTs. We start with a comprehensive review of the proof of a famous theorem by Ludwig that constitutes one of its cornerstones (Ch. 1). After explaining the basic rules of the game, we translate our questions into precise conjectures and present our progress toward a full solution (Ch. 2). In Ch. 3 we consider entanglement at the level of measurements instead of states, focusing on one of its main implications, i.e. data hiding. We determine the maximal data hiding strength that a quantum mechanical system can exhibit, and also the maximum value among all GPTs, finding that the former scales as the square root of the latter. In the second part of this manuscript we look into quantum entanglement. In Ch. 4 we discuss the entanglement transformation properties of a class of maps that model white noise acting either locally or globally on a bipartite system. In Ch. 5 we employ matrix analysis tools to develop a unified approach to Gaussian entanglement. The third part of this thesis concerns more general forms of non-classical correlations in bipartite continuous variable systems. In Ch. 6 we devise a general scheme that allows one to consistently classify correlations of bipartite Gaussian states into classical and quantum ones. Finally, Ch. 7 explores some problems connected with a certain strong subadditivity matrix inequality.
• We investigate randomized benchmarking in a general setting with quantum gates that form a representation, not necessarily an irreducible one, of a finite group. We derive an estimate for the average fidelity, to which experimental data may then be calibrated. Furthermore, we establish that randomized benchmarking can be achieved by the sole implementation of quantum gates that generate the group as well as one additional arbitrary group element. In this case, we need to assume that the noise is close to being covariant. This yields a more practical approach to randomized benchmarking. Moreover, we show that randomized benchmarking is stable with respect to approximate Haar sampling for the sequences of gates. This opens up the possibility of using Markov chain Monte Carlo methods to obtain the random sequences of gates more efficiently. We demonstrate these results numerically using the well-studied example of the Clifford group as well as the group of monomial unitary matrices. For the latter, we focus on the subgroup with nonzero entries consisting of n-th roots of unity, which contains T gates.
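A hedged sketch of the benchmarking decay in the simplest setting the paper generalises: random gates from a finite group (here the single-qubit Pauli group, an illustrative choice) interleaved with global depolarizing noise, for which the survival probability follows the exact decay $(1 - 1/d)(1-q)^m + 1/d$.

```python
import numpy as np

rng = np.random.default_rng(2)
d, q, m = 2, 0.05, 30   # qubit dimension, depolarizing strength, sequence length

def depolarize(rho, q):
    return (1 - q) * rho + q * np.eye(len(rho)) / len(rho)

# The Pauli group (mod phase) serves as the finite gate group.
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
seq = [paulis[i] for i in rng.integers(0, 4, size=m)]
state = rho.copy()
for U in seq:
    state = depolarize(U @ state @ U.conj().T, q)  # noisy gate
inv = np.linalg.multi_dot([U.conj().T for U in seq])  # inverse of seq[m-1]...seq[0]
state = inv @ state @ inv.conj().T                 # perfect inversion gate
survival = np.trace(rho @ state).real

# Exact decay for global depolarizing noise:
predicted = (1 - 1 / d) * (1 - q) ** m + 1 / d
print(survival, predicted)
```

Fitting this decay over many sequence lengths is what yields the average-fidelity estimate; the paper shows how far the group, the sampling, and the noise covariance assumptions can be relaxed.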
• The No Low-Energy Trivial States (NLTS) conjecture of Freedman and Hastings (Quantum Information and Computation 2014), which asserts the existence of local Hamiltonians whose low energy states cannot be generated by constant depth quantum circuits, identifies a fundamental obstacle to resolving the quantum PCP conjecture. Progress towards the NLTS conjecture was made by Eldar and Harrow (Foundations of Computer Science 2017), who proved a closely related theorem called No Low-Error Trivial States (NLETS). In this paper, we give a much simpler proof of the NLETS theorem, and use the same technique to establish superpolynomial circuit size lower bounds for noisy ground states of local Hamiltonians (assuming $\mathsf{QCMA} \neq \mathsf{QMA}$), resolving an open question of Eldar and Harrow. We discuss the new light our results cast on the relationship between NLTS and NLETS. Finally, our techniques imply the existence of $\textit{approximate quantum low-weight check (qLWC) codes}$ with linear rate, linear distance, and constant weight checks. These codes are similar to quantum LDPC codes except (1) each particle may participate in a large number of checks, and (2) errors only need to be corrected up to fidelity $1 - 1/\mathsf{poly}(n)$. This stands in contrast to the best-known stabilizer LDPC codes due to Freedman, Meyer, and Luo which achieve a distance of $O(\sqrt{n \log n})$. The principal technique used in our results is to leverage the Feynman-Kitaev clock construction to approximately embed a subspace of states defined by a circuit as the ground space of a local Hamiltonian.
• Quantum error correction can allow quantum computers to operate despite the presence of noise and imperfections. A critical component of any error correcting scheme is the mapping of error syndromes onto an ancillary measurement system. However, errors occurring in the ancilla can propagate onto the logical qubit, and irreversibly corrupt the encoded information. Here, we demonstrate a fault-tolerant syndrome measurement scheme that dramatically suppresses forward propagation of ancilla errors. We achieve an eightfold reduction of the logical error probability per measurement, while maintaining the syndrome assignment fidelity. We use the same method to prevent the propagation of thermal ancilla excitations, increasing the logical qubit dephasing time by more than an order of magnitude. Our approach is hardware-efficient, as it uses a single multilevel transmon ancilla and a cavity-encoded logical qubit, whose interaction is engineered in situ using an off-resonant sideband drive. These results demonstrate that hardware-efficient approaches which exploit system-specific error models can yield practical advances towards fault-tolerant quantum computation.
• Mar 06 2018 quant-ph arXiv:1803.01601v1. In this work, we provide a quantum algorithm for the matrix multiplication problem. When the norms of the rows and columns of the given matrices are small, of size $O(d^{0.3728639})$, our algorithm outperforms the best classical algorithm. The main techniques are the swap test and quantum state preparation.
• We investigate the tradeoff between the quality of an approximate version of a given measurement and the disturbance it induces in the measured quantum system. We prove that if the target measurement is a non-degenerate von Neumann measurement, then the optimal tradeoff can always be achieved within a two-parameter family of quantum devices that is independent of the chosen distance measures. This form of almost universal optimality holds under mild assumptions on the distance measures such as convexity and basis-independence, which are satisfied for all the usual cases that are based on norms, transport cost functions, relative entropies, fidelities, etc. for both worst-case and average-case analysis. We analyze the case of the cb-norm (or diamond norm) more generally for which we show dimension-independence of the derived optimal tradeoff for general von Neumann measurements. An SDP (semidefinite programming) solution is provided for general POVMs and shown to exist for arbitrary convex semialgebraic distance measures.
• Emerging reinforcement learning techniques using deep neural networks have shown great promise in robotic control optimization. They harness non-local regularities of noisy control trajectories and facilitate transfer learning between tasks. To leverage these powerful capabilities for quantum control, we propose a control framework and derive an analytic leakage bound to simultaneously optimize the quantum gate-time and gate-fidelity against both leakage and stochastic control errors. For a broad family of two-qubit-unitary gates important for quantum simulation of many-electron systems, we improve the control robustness by adding control noise into training environments for reinforcement learning agents trained with trusted-region-policy-optimization. The agent control solutions demonstrate a two-order-of-magnitude reduction in average-gate-infidelity over baseline stochastic-gradient-descent solutions and up to a one-order-of-magnitude reduction in gate time from optimal gate synthesis counterparts.
• Projected entangled pair states aim at describing lattice systems in two spatial dimensions that obey an area law. They are specified by associating a tensor with each site, and they are generated by patching these tensors. We consider the problem of determining whether the state resulting from this patching is null, and prove it to be NP-hard; the PEPS used to prove this claim have a boundary and are homogeneous in their bulk. A variation of this problem is next shown to be undecidable. These results have various implications: they question the possibility of a 'fundamental theorem' for PEPS; there are PEPS for which the presence of a symmetry is undecidable; there exist parent Hamiltonians of PEPS for which the existence of a gap above the ground state is undecidable. En passant, we identify a family of classical Hamiltonians, with nearest neighbour interactions, and translationally invariant in their bulk, for which the commuting 2-local Hamiltonian problem is NP-complete.
• A quantum system driven by a weak deterministic force while under strong continuous energy measurement exhibits quantum jumps between its energy levels. This celebrated phenomenon is emblematic of the special nature of randomness in quantum physics. The times at which the jumps occur are reputed to be fundamentally unpredictable. However, certain classical phenomena, like tsunamis, while unpredictable in the long term, may possess a degree of predictability in the short term, and in some cases it may be possible to prevent a disaster by detecting an advance warning signal. Can there be, despite the indeterminism of quantum physics, a possibility to know if a quantum jump is about to occur or not? In this paper, we answer this question affirmatively by experimentally demonstrating that the completed jump from the ground to an excited state of a superconducting artificial atom can be tracked, as it follows its predictable "flight," by monitoring the population of an auxiliary level coupled to the ground state. Furthermore, we show that the completed jump is continuous, deterministic, and coherent. Exploiting this coherence, we catch and reverse a quantum jump mid-flight, thus preventing its completion. This real-time intervention is based on a particular lull period in the population of the auxiliary level, which serves as our advance warning signal. Our results, which agree with theoretical predictions essentially without adjustable parameters, support the modern quantum trajectory theory and provide new ground for the exploration of real-time intervention techniques in the control of quantum systems, such as early detection of error syndromes.
• Contextuality is a fundamental non-classical property of quantum theory, which has recently been proven to be a key resource for achieving quantum speed-ups in some leading models of quantum computation. However, which of the forms of contextuality, and how much thereof, are required to obtain a speed-up in an arbitrary model of quantum computation remains unclear. In this paper, we show that the relation between contextuality and a computational advantage is more complicated than previously thought. We achieve this by proving that generalized contextuality is present even within the simplest subset of quantum operations, the so-called single-qubit stabilizer theory, which offers no computational advantage and was previously believed to be completely non-contextual. However, the contextuality of the single-qubit stabilizer theory can be confined to transformations. Therefore, our result also demonstrates that the commonly considered prepare-and-measure scenarios (which ignore transformations) do not fully capture the contextuality of quantum theory.
• We propose a scheme to perform a non-Clifford gate on a logical qudit encoded in a pair of $Z_N$ parafermionic zero modes via the Aharonov-Casher effect. This gate can be implemented by moving a half fluxon around the pair of parafermionic zero modes, which can be realized in a two-dimensional set-up via existing proposals. The half fluxon can be created as part of a half fluxon/anti-half fluxon pair in a long Josephson junction with chiral superconductors and then moved around the parafermionic zero modes. Supplementing this gate with braiding of parafermions provides an avenue for universal quantum computing with parafermions.
• This paper considers the comparison of noisy channels from the viewpoint of statistical decision theory. Various orderings are discussed, all formalizing the idea that one channel is "better" than another for information transmission. The main result is an equivalence relation that is proved for classical channels, quantum channels with classical encoding, and quantum channels with quantum encoding.
• Recent developments in quantum hardware indicate that systems featuring more than 50 physical qubits are within reach. At this scale, classical simulation will no longer be feasible and there is a possibility that such quantum devices may outperform even classical supercomputers at certain tasks. With the rapid growth of qubit numbers and coherence times comes the increasingly difficult challenge of quantum program compilation. This entails the translation of a high-level description of a quantum algorithm to hardware-specific low-level operations which can be carried out by the quantum device. Some parts of the calculation may still have to be performed manually due to the lack of efficient methods. This, in turn, may lead to a design gap that hinders the programming of quantum computers. In this paper, we discuss the challenges of fully automatic quantum compilation and motivate directions for future research to tackle them. Yet, with the algorithms and approaches that exist today, we demonstrate how to automatically perform the quantum programming flow from algorithm to physical quantum computer for a simple algorithmic benchmark, namely the hidden shift problem. We present and use two tool flows which invoke RevKit: one based on ProjectQ, which targets the IBM Quantum Experience or a local simulator, and one based on Microsoft's quantum programming language Q$\#$.
• While all bipartite pure entangled states are known to generate correlations violating a Bell inequality, and are therefore nonlocal, the quantitative relation between pure-state entanglement and nonlocality is poorly understood. In fact, some Bell inequalities are maximally violated by non-maximally entangled states and this phenomenon is also observed for other operational measures of nonlocality. In this work, we study a recently proposed measure of nonlocality defined as the probability that a pure state displays nonlocal correlations when subjected to random measurements. We first prove that this measure satisfies some natural properties for an operational measure of nonlocality. Then, we show that for pure states of two qubits the measure is monotonic with entanglement for all correlation two-outcome Bell inequalities: for all these inequalities, the more the state is entangled, the larger the probability to violate them when random measurements are performed. Finally, we extend our results to the multipartite setting.
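The nonlocality being quantified is violation of Bell inequalities such as CHSH. A numpy sketch of the maximal quantum violation (the Tsirelson bound $2\sqrt{2}$) for the maximally entangled state with the standard measurement settings; the paper's measure instead averages over random settings:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Maximally entangled two-qubit state |phi+> = (|00> + |11>) / sqrt(2).
phi = np.zeros(4, dtype=complex); phi[0] = phi[3] = 1 / np.sqrt(2)

def corr(A, B, psi):
    """Correlator <psi| A x B |psi> for +/-1-valued observables."""
    return (psi.conj() @ np.kron(A, B) @ psi).real

# Standard CHSH settings achieving the Tsirelson bound 2*sqrt(2).
A0, A1 = Z, X
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)
S = corr(A0, B0, phi) + corr(A0, B1, phi) + corr(A1, B0, phi) - corr(A1, B1, phi)
print(S)  # > 2, violating the CHSH inequality
```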
• Feb 21 2018 quant-ph arXiv:1802.06952v2. Classical simulations of quantum circuits are limited in both space and time when the qubit count exceeds 50, the realm where quantum supremacy reigns. Recently, however, this barrier was raised to 56 qubits by teams at Google and IBM. Here, we propose a simulation method that achieves a 64-qubit simulation of a universal random circuit of depth 22 using a 64-node cluster, and 56- and 42-qubit circuits on a single PC. We also estimate that a 72-qubit circuit of depth 22 can be simulated in a few days on a supercomputer identical to that used by the IBM team. Moreover, the simulation processes are exceedingly separable, hence parallelizable, involving only a few inter-process communications. Our work enables simulating more qubits with less hardware, provides a new perspective on classical simulations, and sharpens the threshold for demonstrating quantum supremacy.
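For contrast with the paper's partitioned, cluster-based approach, here is a minimal Schrödinger-style statevector simulator in numpy (illustrative only): it stores all $2^n$ amplitudes, which is exactly the memory cost that makes ~50 qubits the ceiling for direct simulation.

```python
import numpy as np

def apply_gate(state, gate, targets, n):
    """Apply a k-qubit gate to the given target qubits of an n-qubit statevector
    by reshaping it into a rank-n tensor.  Memory cost: 2^n amplitudes."""
    k = len(targets)
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, targets, list(range(k)))
    shape = psi.shape
    psi = gate.reshape(2 ** k, 2 ** k) @ psi.reshape(2 ** k, -1)
    psi = np.moveaxis(psi.reshape(shape), list(range(k)), targets)
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CX = np.eye(4)[[0, 1, 3, 2]]   # CNOT, control = first target qubit

n = 3
state = np.zeros(2 ** n); state[0] = 1.0   # |000>
state = apply_gate(state, H, [0], n)       # Hadamard on qubit 0
state = apply_gate(state, CX, [0, 1], n)   # entangle 0 -> 1
state = apply_gate(state, CX, [1, 2], n)   # entangle 1 -> 2
print(np.round(state, 6))                  # GHZ state (|000> + |111>)/sqrt(2)
```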
• Non-Gaussian states and operations are crucial for various continuous-variable quantum information processing tasks. To quantitatively understand non-Gaussianity beyond states, we establish a resource theory for non-Gaussian operations. In our framework, we consider Gaussian operations as free operations, and non-Gaussian operations as resources. We define entanglement-assisted non-Gaussianity generating power and show that it is a monotone that is non-increasing under the set of free super-operations, i.e., concatenation and tensoring with Gaussian channels. For conditional unitary maps, this monotone can be analytically calculated. As examples, we show that the non-Gaussianity of ideal photon-number subtraction and photon-number addition equals the non-Gaussianity of the single-photon Fock state. Based on our non-Gaussianity monotone, we divide non-Gaussian operations into two classes: (1) the finite non-Gaussianity class, e.g., photon-number subtraction, photon-number addition and all Gaussian-dilatable non-Gaussian channels; and (2) the diverging non-Gaussianity class, e.g., the binary phase-shift channel and the Kerr nonlinearity. This classification also implies that not all non-Gaussian channels are exactly Gaussian-dilatable, disproving a recent conjecture in [Phys. Rev. A 95, 062309 (2017)]. Our resource theory enables a quantitative characterization and a first classification of non-Gaussian operations, paving the way towards the full understanding of non-Gaussianity.
• Recently, it has been shown that for out-of-equilibrium systems, there are additional constraints on thermodynamical evolution besides the ordinary second law. These form a new family of second laws of thermodynamics, which are equivalent to the monotonicity of quantum Rényi divergences. In black hole thermodynamics, the usual second law is manifest as the area increase theorem. Hence one may ask whether these additional laws imply new restrictions for gravitational dynamics, such as for out-of-equilibrium black holes. Inspired by this question, we study these constraints within the AdS/CFT correspondence. First, we show that the Rényi divergence can be computed via a Euclidean path integral for a certain class of excited CFT states. Applying this construction to the boundary CFT, the Rényi divergence is evaluated as the renormalized action for a particular bulk solution of a minimally coupled gravity-scalar system. Further, within this framework, we show that there exist transitions which are allowed by the traditional second law, but forbidden by the additional thermodynamical constraints. We speculate on the implications of our findings.
• The HHL quantum algorithm for solving linear systems is one of the most important subroutines in many quantum machine learning algorithms. In this work, we present and analyze several further caveats of the HHL algorithm which have been overlooked in the past, and discuss their influence on the efficiency, accuracy and practicability of the HHL algorithm and several related quantum machine learning algorithms. We find that these caveats affect the HHL algorithm more deeply than the caveats already noticed. To obtain more practical quantum machine learning algorithms based on HHL with fewer assumptions, more attention should be paid to these caveats.
• Feb 28 2018 quant-ph cs.CR cs.IT math.IT arXiv:1802.10007v2
Authors' note: After posting this manuscript, we were made aware of a number of previous works along very similar lines, such as Refs [1-5] in version 2 of this manuscript, amongst others. We had formulated this concept and arrived at our results independently, and regret that as a result we did not properly cite these works and failed to put our work in a proper context. We are grateful to the individuals who brought this to our attention, and we are gratified to find that our ideas, while not as new as we had believed, are at least of interest to the community. We are currently working on assessing how our results relate to these prior works. In the meantime, we remain proud of our work, and we present the manuscript in its original form. Original abstract: We introduce the concept of a quantum "seal" and investigate its feasibility. We define a seal as a state provided by Alice to Bob along with a set of instructions which Bob can follow in order to "break the seal" and read the classical message stored inside. We define two success criteria for a seal: the probability Bob can successfully read the message if he follows Alice's instructions without any further input must be high, and if Alice asks for the state back from Bob, the probability Alice can tell if Bob broke the seal without permission must be high. We prove that within the constraints of quantum mechanics these two criteria are mutually exclusive. Finally, we derive upper and lower bounds on the achievability of a probabilistic seal for various message lengths, and show that for a 1-bit message our bound is tight.
• We present a general framework to tackle the problem of finding time-independent dynamics generating target unitary evolutions. We show that this problem is equivalently stated as a set of conditions over the spectrum of the time-independent gate generator, thus transforming the task to an inverse eigenvalue problem. We illustrate our methodology by identifying suitable time-independent generators implementing Toffoli and Fredkin gates without the need for ancillae or effective evolutions. We show how the same conditions can be used to solve the problem numerically, via supervised learning techniques. In turn, this allows us to solve problems that are not amenable, in general, to direct analytical solution, providing at the same time a high degree of flexibility over the types of gate-design problems that can be approached. As a significant example, we find generators for the Toffoli gate using only diagonal pairwise interactions, which are easier to implement in some experimental architectures. To showcase the flexibility of the supervised learning approach, we give an example of a nontrivial four-qubit gate that is implementable using only diagonal, pairwise interactions.
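A minimal numerical illustration of inverting a target gate into a time-independent generator (not the paper's inverse-eigenvalue machinery): given a unitary $U$, any branch of $H = i\log U$ satisfies $U = e^{-iH}$ for unit evolution time. The sketch below recovers such a generator for the Hadamard gate via its eigendecomposition; degenerate gates would need care with the logarithm branch and eigenvector orthogonalization.

```python
import numpy as np

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

# U is unitary, so its eigenvalues lie on the unit circle:
# lam_k = exp(-i * theta_k). Then H = V diag(theta) V^{-1} generates U.
lam, V = np.linalg.eig(U)
theta = -np.angle(lam)
H = V @ np.diag(theta) @ np.linalg.inv(V)

# Re-exponentiate H (Hermitian here, so eigh applies) and recover U.
w, W = np.linalg.eigh(H)
U_rec = W @ np.diag(np.exp(-1j * w)) @ W.conj().T
assert np.allclose(U_rec, U)
```

The paper's point is the converse and harder direction: constraining the spectrum of admissible generators, e.g. those built only from diagonal pairwise interactions, so that the target gate emerges at some fixed time.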
• Feb 26 2018 quant-ph arXiv:1802.08424v1
Contextuality describes the nontrivial dependence of measurement outcomes on particular choices of jointly measurable observables. In this work we review and generalize the bundle diagram representation introduced in [S. Abramsky et al., 24th EACSL Annual Conference on Computer Science Logic, 41, 211-228, (2015)] in order to graphically demonstrate the contextuality of diverse empirical models.
• Machine learning is a crucial aspect of artificial intelligence. This paper details an approach to quantum Hebbian learning through a batched version of quantum state exponentiation. Here, batches of quantum data interact with learning and processing quantum bits (qubits) through a series of elementary controlled partial-swap operations, resulting in a Hamiltonian simulation of the statistical ensemble of the data. We decompose this elementary operation into one- and two-qubit quantum gates from the Clifford+$T$ set and use the decomposition to perform an efficiency analysis. Our construction of quantum Hebbian learning is motivated by extension of the established classical approach, and it can be used to extract details of the data, such as eigenvalues, through phase estimation. This work contributes to the near-term development and implementation of quantum machine learning techniques.
• For every $d \geq 2$, we present a generalization of the CHSH inequality with the property that maximal violation self-tests the maximally entangled state of local dimension $d$. This is the first example of a family of inequalities with this property. Moreover, we provide a conjecture for a family of inequalities generalizing the tilted CHSH inequalities, and we conjecture that for each pure bipartite entangled state there is an inequality in the family whose maximal violation self-tests it. All of these inequalities are inspired by the self-testing correlations of [Nat. Comm. 8, 15485 (2017)].
• Quantum algorithms for diverse problems, including search and optimization problems, require the implementation of a reflection operator over a target state. Commonly, such reflections are approximately implemented using phase estimation. Here we use a linear combination of unitaries and a version of amplitude amplification to approximate reflection operators over eigenvectors of unitary operators using exponentially less ancillary qubits in terms of a precision parameter. The gate complexity of our method is also comparable to that of the phase estimation approach in a certain limit of interest. Like phase estimation, our method requires the implementation of controlled unitary operations. We then extend our results to the Hamiltonian case where the target state is an eigenvector of a Hamiltonian whose matrix elements can be queried. Our results are useful in that they reduce the resources required by various quantum algorithms in the literature. Our improvements also rely on an efficient quantum algorithm to prepare a quantum state with Gaussian-like amplitudes that may be of independent interest. We also provide a lower bound on the query complexity of implementing approximate reflection operators on a quantum computer.
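For readers unfamiliar with the object being approximated: the reflection about a state $|\psi\rangle$ is $R = 2|\psi\rangle\langle\psi| - I$, which fixes $|\psi\rangle$ and negates everything orthogonal to it. A small NumPy check of these defining properties (purely illustrative, unrelated to the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random normalized state |psi> in a 4-dimensional space.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Reflection about |psi>: R = 2|psi><psi| - I.
R = 2 * np.outer(psi, psi.conj()) - np.eye(4)

# |psi> is fixed (eigenvalue +1) ...
assert np.allclose(R @ psi, psi)

# ... and any vector orthogonal to |psi> picks up a minus sign.
phi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi -= (psi.conj() @ phi) * psi   # project out the |psi> component
assert np.allclose(R @ phi, -phi)
```

Implementing this operator on a quantum computer when $|\psi\rangle$ is only known as an eigenvector of some unitary or Hamiltonian is precisely the nontrivial task the paper addresses.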
• Although skeptical of the prohibitive power of no-hidden-variables theorems, John Bell was himself responsible for the two most important ones. I describe some recent versions of the lesser known of the two (familiar to experts as the "Kochen-Specker theorem") which have transparently simple proofs. One of the new versions can be converted without additional analysis into a powerful form of the very much better known "Bell's Theorem", thereby clarifying the conceptual link between these two results of Bell.
• We consider a variation of the well-studied quantum state redistribution task, in which the starting state is known only to the receiver Bob and not to the sender Alice. We refer to this as quantum state redistribution with a one-sided promise. In addition, we consider communication from Alice to Bob over a noisy channel $\mathcal{N}$, instead of the noiseless channel, as is usually considered in state redistribution. We take a natural approach towards solution of this problem where we "embed" the promise as part of the state and then invoke known protocols for quantum state redistribution composed with known protocols for transfer of quantum information over noisy channels. We interpret the communication primitive Alpha-bit, recently introduced in Ref. [arXiv:1706.09434], as an instance of state transfer (a sub-task of state redistribution) with a one-sided promise over noisy channels. Using our approach, we are able to reproduce the Alpha-bit capacities with or without entanglement assistance in Ref. [arXiv:1706.09434], using known protocols for quantum state redistribution and quantum communication over noisy channels. Furthermore, we generalize the entanglement assisted classical capacity of the Alpha-bit, showing that any quantum state redistribution protocol can be used as a black box to simulate classical communication.
• We study a generalization of the Wigner function to arbitrary tuples of Hermitian operators, which is a distribution uniquely characterized by the property that the marginals for all linear combinations of the given operators agree with the quantum mechanical distributions. Its role as a joint quasi-probability distribution is underlined by the property that its support always lies in the set of expectation value tuples of the operators. We characterize the set of singularities and positivity, and provide some basic examples.
• We show that for any collection of Hermitian operators $A_1, \dots, A_n$, and any quantum state, there is a unique joint distribution on $\mathbb{R}^n$ with the property that the marginals of all linear combinations of the operators coincide with their quantum counterparts. We call it the Wigner distribution, because for position and momentum this property defines the standard Wigner function. In this note we discuss the application to finite-dimensional systems, establish many basic properties and illustrate these by examples. The properties include the support, the location of singularities, positivity, the behaviour under symmetry groups, and informational completeness.
• We introduce QuEST, the Quantum Exact Simulation Toolkit, and compare it to ProjectQ, qHipster and a recent distributed implementation of Quantum++. QuEST is the first open source, OpenMP and MPI hybridised, GPU accelerated simulator written in C, capable of simulating generic quantum circuits of general single-qubit gates and many-qubit controlled gates. Using the ARCUS Phase-B and ARCHER supercomputers, we benchmark QuEST's simulation of random circuits of up to 38 qubits, distributed over up to 2048 compute nodes, each with up to 24 cores. We directly compare QuEST's performance to ProjectQ's on single machines, and discuss the differences in the distribution strategies of QuEST, qHipster and Quantum++. QuEST shows excellent scaling, both strong and weak, on multicore and distributed architectures.
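The core kernel such state-vector simulators implement is simple: a single-qubit gate touches pairs of amplitudes whose indices differ only in the target bit, and this pairing also dictates the communication pattern once the vector is split across distributed nodes. A minimal sketch of that kernel (an illustration, not QuEST's actual code):

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the target qubit of an n-qubit state vector.

    Reshaping isolates the target bit as the middle axis, so the gate
    acts on all (amplitude, partner-amplitude) pairs at once.
    """
    psi = state.reshape(2 ** (n_qubits - target - 1), 2, 2 ** target)
    psi = np.einsum('ij,ajb->aib', gate, psi)
    return psi.reshape(-1)

n = 3
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                                   # |000>
state = apply_single_qubit_gate(state, X, target=0, n_qubits=n)  # |001>
state = apply_single_qubit_gate(state, H, target=2, n_qubits=n)
# Now (|001> + |101>)/sqrt(2): equal weight on indices 1 and 5.
```

Real simulators avoid the temporary reshape copies and fuse gates for cache efficiency, but the index arithmetic is the same.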
• We derive an attainable bound on the precision of quantum state estimation for finite dimensional systems, providing a construction for the asymptotically optimal measurement. Our results hold under an assumption called local asymptotic covariance, which is weaker than unbiasedness or local unbiasedness. The derivation is based on an analysis of the limiting distribution of the estimator's deviation from the true value of the parameter, and takes advantage of quantum local asymptotic normality, a duality between sequences of identically prepared states and Gaussian states of continuous variable systems. We first prove our results for the mean square error of a special class of models, called D-invariant, and then extend the results to arbitrary models, generic cost functions, and global state estimation, where the unknown parameter is not restricted to a local neighbourhood of the true value. The extension includes a treatment of nuisance parameters, namely parameters that are not of interest to the experimenter but nevertheless affect the estimation. As an illustration of the general approach, we provide the optimal estimation strategies for the joint measurement of two qubit observables, for the estimation of qubit states in the presence of amplitude damping noise, and for noisy multiphase estimation.
• Quantum error-correcting codes can be used to protect qubits involved in quantum computation. This requires that logical operators acting on protected qubits be translated to physical operators (circuits) acting on physical quantum states. We propose a mathematical framework for synthesizing physical circuits that implement logical Clifford operators for stabilizer codes. Circuit synthesis is enabled by representing the desired physical Clifford operator in $\mathbb{C}^{N \times N}$ as a partial $2m \times 2m$ binary symplectic matrix, where $N = 2^m$. We state and prove two theorems that use symplectic transvections to efficiently enumerate all symplectic matrices that satisfy a system of linear equations. As an important corollary of these results, we prove that for an $[\![ m,m-k ]\!]$ stabilizer code every logical Clifford operator has $2^{k(k+1)/2}$ symplectic solutions. The desired physical circuits are then obtained by decomposing each solution as a product of elementary symplectic matrices. Our assembly of the possible physical realizations enables optimization over them with respect to a suitable metric. Furthermore, we show that any circuit that normalizes the stabilizer of the code can be transformed into a circuit that centralizes the stabilizer, while realizing the same logical operation. Our method of circuit synthesis can be applied to any stabilizer code, and this paper provides a proof of concept synthesis of universal Clifford gates for the $[\![ 6,4,2 ]\!]$ CSS code. We conclude with a classical coding-theoretic perspective for constructing logical Pauli operators for CSS codes. Since our circuit synthesis algorithm builds on the logical Pauli operators for the code, this paper provides a complete framework for constructing all logical Clifford operators for CSS codes. Programs implementing our algorithms can be found at https://github.com/nrenga/symplectic-arxiv18a.
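As a small illustration of the binary symplectic picture (conventions vary; here $\Omega$ is taken in off-diagonal block form): an $m$-qubit Clifford operator corresponds to a $2m \times 2m$ matrix $F$ over GF(2) satisfying $F \Omega F^T = \Omega \pmod 2$. For $m = 1$, the Hadamard gate swaps the $X$ and $Z$ components and the phase gate maps $X \mapsto Y$:

```python
import numpy as np

m = 1
# Symplectic form Omega = [[0, I], [I, 0]] over GF(2).
Omega = np.block([[np.zeros((m, m), dtype=int), np.eye(m, dtype=int)],
                  [np.eye(m, dtype=int), np.zeros((m, m), dtype=int)]])

def is_symplectic(F):
    """Check F Omega F^T = Omega (mod 2)."""
    return np.array_equal((F @ Omega @ F.T) % 2, Omega)

F_hadamard = np.array([[0, 1], [1, 0]])  # H: X <-> Z
F_phase    = np.array([[1, 1], [0, 1]])  # S: X -> Y (= XZ), Z -> Z
assert is_symplectic(F_hadamard)
assert is_symplectic(F_phase)
```

The paper works at the next level up: enumerating, via transvections, all symplectic $F$ compatible with a prescribed logical action, then lifting each back to a physical circuit.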
• We generalize the concepts of weak quantum logarithmic Sobolev inequality (LSI) and weak hypercontractivity (HC), introduced in the quantum setting by Olkiewicz and Zegarlinski, to the case of non-primitive quantum Markov semigroups (QMS). The originality of this work resides in that this new notion of hypercontractivity is given in terms of the so-called amalgamated $\mathbb{L}_p$ norms introduced recently by Junge and Parcet in the context of operator spaces theory. We make three main contributions. The first one is a version of Gross's integration lemma: we prove that (weak) HC implies (weak) LSI. Surprisingly, the converse implication differs from the primitive case as we show that LSI implies HC but with a weak constant equal to the degeneracy of the decoherence-free algebra. Building on the first implication, our second contribution is the fact that strong LSI and therefore strong HC do not hold for non-trivially primitive QMS. This implies that the amalgamated $\mathbb{L}_p$ norms are not uniformly convex for $1\leq p \leq 2$. As a third contribution, we derive universal bounds on the (weak) logarithmic Sobolev constants for a QMS on a finite dimensional Hilbert space, using a similar method as Diaconis and Saloff-Coste in the case of classical primitive Markov chains, and Temme, Pastawski and Kastoryano in the case of primitive QMS. This leads to new bounds on the decoherence rates of decohering QMS. Additionally, we apply our results to the study of the tensorization of HC in non-commutative spaces in terms of the completely bounded norms (CB norms) recently introduced by Beigi and King for unital and trace preserving QMS. We generalize their results to the case of a general primitive QMS and provide estimates on the (weak) constants.
• We introduce canonical forms for single qutrit Clifford+T circuits and prove that every single-qutrit Clifford+T operator admits a unique such canonical form. We show that our canonical forms are T-optimal in the sense that among all the single-qutrit Clifford+T circuits implementing a given operator our canonical form uses the least number of T gates. Finally, we provide an algorithm which inputs the description of an operator (as a matrix or a circuit) and constructs the canonical form for this operator. The algorithm runs in time linear in the number of T gates. Our results provide a higher-dimensional generalization of prior work by Matsumoto and Amano who introduced similar canonical forms for single-qubit Clifford+T circuits.
• What is the ultimate performance for discriminating two arbitrary quantum channels acting on a finite-dimensional Hilbert space? Here we address this basic question by deriving a fundamental lower bound. More precisely, we investigate the symmetric discrimination of two arbitrary qudit channels by means of the most general protocols based on adaptive (feedback-assisted) quantum operations. In this general scenario, we first show how port-based teleportation can be used to completely simplify these adaptive protocols into a much simpler non-adaptive form, designing a new form of teleportation stretching. Then, we prove that the minimum error probability affecting the channel discrimination cannot beat a bound determined by the Choi matrices of the channels, establishing an ultimate and elegant formula for quantum hypothesis testing. As a consequence of this bound, we derive the ultimate limits for adaptive quantum illumination and single-photon quantum optical resolution. Finally, we show that our methodology can also be applied to other tasks, such as quantum metrology, quantum communication and secret key generation.
• Mar 05 2018 quant-ph arXiv:1803.00745v1
We propose a classical-quantum hybrid algorithm for machine learning on near-term quantum processors, which we call quantum circuit learning. A quantum circuit driven by our framework learns a given task by tuning the parameters implemented on it. Iterative optimization of the parameters allows us to circumvent high-depth circuits. Theoretical investigation shows that a quantum circuit can approximate nonlinear functions, which is further confirmed by numerical simulations. By hybridizing a low-depth quantum circuit with a classical computer for machine learning, the proposed framework paves the way toward applications of near-term quantum devices to quantum machine learning.
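The hybrid loop can be caricatured with a one-parameter circuit: $R_y(\theta)|0\rangle$ measured in $Z$ gives $\langle Z\rangle = \cos\theta$, and a classical optimizer tunes $\theta$ to match a target expectation value, using the parameter-shift rule for gradients. This sketches only the outer optimization loop, not the paper's multi-qubit framework:

```python
import numpy as np

def expectation(theta):
    # <0| Ry(theta)^dag Z Ry(theta) |0> = cos(theta)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    state = np.array([c, s])            # Ry(theta)|0>
    Z = np.diag([1.0, -1.0])
    return state @ Z @ state

target = -0.5                           # desired <Z>
theta, lr = 0.3, 0.5
for _ in range(200):
    # Parameter-shift rule: d<Z>/dtheta = (f(t + pi/2) - f(t - pi/2)) / 2.
    grad_e = 0.5 * (expectation(theta + np.pi / 2)
                    - expectation(theta - np.pi / 2))
    loss_grad = 2 * (expectation(theta) - target) * grad_e
    theta -= lr * loss_grad             # classical update step

print(f"learned <Z> = {expectation(theta):.4f}")  # ~ -0.5
```

In the paper, the "circuit" is a low-depth parameterized multi-qubit ansatz and the quantum device supplies the expectation values; only the parameter update runs classically, exactly as in this toy loop.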

Luis Cruz Mar 16 2018 15:34 UTC

Related Work:

- [Performance-Based Guidelines for Energy Efficient Mobile Applications](http://ieeexplore.ieee.org/document/7972717/)
- [Leafactor: Improving Energy Efficiency of Android Apps via Automatic Refactoring](http://ieeexplore.ieee.org/document/7972807/)

Dan Elton Mar 16 2018 04:36 UTC

Code is open source and available at:
[https://github.com/delton137/PIMD-F90][1]

[1]: https://github.com/delton137/PIMD-F90

Danial Dervovic Mar 01 2018 12:08 UTC

Hello again Māris, many thanks for your patience. Your comments and questions have given me much food for thought, and scope for an amended version of the paper -- please see my responses below.

Please if any of the authors of [AST17 [arXiv:1712.01609](https://arxiv.org/abs/1712.01609)] have any fu

...(continued)
Beni Yoshida Feb 13 2018 19:53 UTC

This is not a direct answer to your question, but may give some intuition to formulate the problem in a more precise language. (And I simplify the discussion drastically). Consider a static slice of an empty AdS space (just a hyperbolic space) and imagine an operator which creates a particle at some

...(continued)
Abhinav Deshpande Feb 10 2018 15:42 UTC

I see. Yes, the epsilon ball issue seems to be a thorny one in the prevalent definition, since the gate complexity to reach a target state from any of a fixed set of initial states depends on epsilon, and not in a very nice way (I imagine that it's all riddled with discontinuities). It would be inte

...(continued)
Elizabeth Crosson Feb 10 2018 05:49 UTC

Thanks for the correction Abhinav, indeed I meant that the complexity of |psi(t)> grows linearly with t.

Producing an arbitrary state |phi> exactly is also too demanding for the circuit model, by the well-known argument that given any finite set of gates, the set of states that can be reached i

...(continued)
Abhinav Deshpande Feb 09 2018 20:21 UTC

Elizabeth, interesting comment! Did you mean to say that the complexity of $U(t)$ increases linearly with $t$ as opposed to exponentially?

Also, I'm confused about your definition. First, let us assume that the initial state is well defined and is $|\psi(0)\rangle$.
If you define the complexit

...(continued)
Elizabeth Crosson Feb 08 2018 04:27 UTC

The complexity of a state depends on the dynamics that one is allowed to use to generate the state. If we restrict the dynamics to be "evolving according a specific Hamiltonian H" then we immediately have that the complexity of U(t) = exp(i H t) grows exponentially with t, up until recurrences that

...(continued)
Danial Dervovic Feb 05 2018 15:03 UTC

Thank you Māris for the extremely well thought-out and articulated points here.

I think this very clearly highlights the need to think explicitly about the precompute time if using the lifting to directly simulate the quantum walk, amongst other things.

I wish to give a well-considered respons

...(continued)