results for quantum machine learning

- Traditional ELM and its improved variants suffer from sensitivity to outliers and noise caused by overfitting, and from class imbalance caused by the data distribution. We propose a novel hybrid adaptive fuzzy ELM (HA-FELM), which introduces a fuzzy membership function into the traditional ELM method to deal with these problems. We define the fuzzy membership function based not only on the distance between each sample and the class center but also on the density among samples, which is derived from the quantum harmonic oscillator model. The proposed fuzzy membership function overcomes the shortcomings of the traditional fuzzy membership function and adjusts itself adaptively to the specific distribution of different samples. Experiments show the proposed HA-FELM produces better performance than SVM, ELM, and RELM in text classification.
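As a rough classical sketch of the idea in the abstract above (not the paper's actual construction), the snippet below blends a distance-to-class-center term with a Gaussian local-density term into a single membership score; the density model and the mixing weight `alpha` are hypothetical stand-ins for the quantum-harmonic-oscillator density.

```python
import numpy as np

def fuzzy_membership(X, center, sigma=1.0, alpha=0.5):
    """Toy fuzzy membership: mix a distance-to-class-center term with a
    Gaussian local-density term. The density model and mixing weight alpha
    are illustrative, not the paper's quantum-harmonic-oscillator form."""
    d = np.linalg.norm(X - center, axis=1)
    dist_term = 1.0 - d / (d.max() + 1e-12)        # closer to center -> higher
    diff = X[:, None, :] - X[None, :, :]
    density = np.exp(-np.sum(diff**2, axis=2) / (2 * sigma**2)).mean(axis=1)
    return alpha * dist_term + (1 - alpha) * density / density.max()

X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])  # third point is an outlier
m = fuzzy_membership(X, center=X.mean(axis=0))
```

The outlier at (3, 3) receives the lowest membership, so a membership-weighted learner would down-weight it during training.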
- May 17 2018 cond-mat.stat-mech arXiv:1805.05961v1. Machine learning of topological phase transitions has proven to be challenging due to their inherent non-local nature. We propose an unsupervised approach based on diffusion maps that learns topological phase transitions from raw data without the need for manual feature engineering. Using bare spin configurations as input, the approach is shown to be capable of classifying samples of the two-dimensional XY model by winding number and of capturing the Berezinskii-Kosterlitz-Thouless transition. We also discuss a connection between the output of diffusion maps and the eigenstates of a quantum-well Hamiltonian.
- May 17 2018 cs.CV arXiv:1805.06260v1. Image classification is an important task in the field of machine learning and image processing. However, the commonly used K Nearest-Neighbor algorithm has high complexity, because its two main steps, similarity computation and searching, are time-consuming. The problem is especially prominent in the era of big data, when the number of images to be classified is large. In this paper, we use the powerful parallel-computing ability of quantum computers to improve the efficiency of image classification. The scheme is based on the quantum K Nearest-Neighbor algorithm. First, the feature vectors of the images are extracted on classical computers. Then the feature vectors are loaded into a quantum superposition state, which is used to compute the similarities in parallel. Next, the quantum minimum search algorithm is used to speed up the search for the most similar images. Finally, the image is classified by quantum measurement. The complexity of the quantum algorithm is only $O(\sqrt{kM})$, which is superior to that of the classical algorithms. Moreover, the measurement step is executed only once to ensure the validity of the scheme. The experimental results show that the classification accuracy is 83.1% on the Graz-01 dataset and 78% on the Caltech-101 dataset, which is close to that of existing classical algorithms. Hence, our quantum scheme achieves good classification performance while greatly improving efficiency.
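The classical baseline being accelerated can be sketched as follows; the two commented steps are exactly the ones the quantum scheme replaces with superposition-based similarity computation and quantum minimum search. The helper name and toy data are illustrative only.

```python
import numpy as np

def knn_classify(x, X_train, y_train, k=3):
    """Classical K Nearest-Neighbor baseline. The two commented lines are
    the bottlenecks the quantum scheme targets: similarity computation
    (done in superposition) and the search for the k most similar items
    (done with quantum minimum search)."""
    d = np.linalg.norm(X_train - x, axis=1)   # similarity computation, O(M)
    nearest = np.argsort(d)[:k]               # search for the k most similar
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[counts.argmax()]            # majority vote

X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
pred = knn_classify(np.array([0.05]), X, y, k=3)
```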
- May 10 2018 quant-ph arXiv:1805.03477v1. We consider the problem of correctly classifying a given quantum two-level system (qubit) which is known to be in one of two equally probable quantum states. We assume that this task should be performed by a quantum machine which does not have at its disposal a complete classical description of the two template states, but can only have partial prior information about their level of purity and mutual orthogonality. Moreover, similarly to the classical supervised learning paradigm, we assume that the machine can be trained by $n$ qubits prepared in the first template state and by $n$ more qubits prepared in the second template state. In this situation we are interested in the optimal process which correctly classifies the input qubit with the largest probability allowed by quantum mechanics. The problem is studied in its full generality for a number of different prior information scenarios and for an arbitrary size $n$ of the training data. Finite size corrections around the asymptotic limit $n\rightarrow \infty$ are also derived.
- Clustering, broadly known as unsupervised learning, is the complex process of finding relevant hidden patterns in unlabeled datasets. The support vector clustering algorithm is a well-known clustering algorithm based on support vector machines and Gaussian kernels. In this paper, we investigate the support vector clustering algorithm in the quantum paradigm. We develop a quantum algorithm based on the quantum support vector machine and quantum kernel (Gaussian and polynomial) formulations. The investigation exhibits an approximately exponential speed-up of the quantum version with respect to its classical counterpart.
- Machine learning and quantum computing are two technologies each with the potential for altering how computation is performed to address previously untenable problems. Kernel methods for machine learning are ubiquitous for pattern recognition, with support vector machines (SVMs) being the most well-known method for classification problems. However, there are limitations to the successful solution to such problems when the feature space becomes large, and the kernel functions become computationally expensive to estimate. A core element to computational speed-ups afforded by quantum algorithms is the exploitation of an exponentially large quantum state space through controllable entanglement and interference. Here, we propose and use two novel methods which represent the feature space of a classification problem by a quantum state, taking advantage of the large dimensionality of quantum Hilbert space to obtain an enhanced solution. One method, the quantum variational classifier builds on [1,2] and operates through using a variational quantum circuit to classify a training set in direct analogy to conventional SVMs. In the second, a quantum kernel estimator, we estimate the kernel function and optimize the classifier directly. The two methods present a new class of tools for exploring the applications of noisy intermediate scale quantum computers to machine learning.
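A minimal classical sketch of the quantum kernel estimator idea described above: a data point is mapped to a feature state, and the kernel entry is the overlap $|\langle\phi(x)|\phi(y)\rangle|^2$, the quantity a quantum device would estimate by measurement. The one-parameter, single-qubit feature map below is a toy assumption; the proposal uses an entangling circuit on many qubits.

```python
import numpy as np

def feature_state(x):
    """Toy single-qubit feature map |phi(x)> = (cos x, sin x); the actual
    scheme uses an entangling many-qubit feature circuit."""
    return np.array([np.cos(x), np.sin(x)])

def quantum_kernel(x, y):
    # Kernel entry as the overlap |<phi(x)|phi(y)>|^2, which is what a
    # quantum kernel estimator measures on hardware.
    return (feature_state(x) @ feature_state(y)) ** 2

xs = np.array([0.0, 0.4, 2.0])
K = np.array([[quantum_kernel(a, b) for b in xs] for a in xs])  # Gram matrix
```

The resulting Gram matrix can then be handed to any classical kernel method (e.g. an SVM solver), which is the division of labor the kernel-estimator method proposes.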
- Apr 27 2018 quant-ph arXiv:1804.10068v1. This text aims to present and explain quantum machine learning algorithms to a data scientist in an accessible and consistent way. The algorithms and equations are not written in a rigorous mathematical fashion; instead, the emphasis is placed on examples and step-by-step explanations of difficult topics. This contribution gives an overview of selected quantum machine learning algorithms; in addition, a method of score extraction for the quantum PCA algorithm is proposed, and a new cost function for feed-forward quantum neural networks is introduced. The text is divided into four parts: the first part explains basic quantum theory; quantum computation and quantum computer architecture are explained in section two; the third part presents quantum algorithms which are used as subroutines in quantum machine learning algorithms; and the fourth section describes quantum machine learning algorithms using the knowledge accumulated in the previous parts.
- Apr 25 2018 quant-ph arXiv:1804.08641v2. Quantum machine learning is expected to be one of the first potential general-purpose applications of near-term quantum devices. A major recent breakthrough in classical machine learning is the notion of generative adversarial training, where the gradients of a discriminator model are used to train a separate generative model. In this work and a companion paper, we extend adversarial training to the quantum domain and show how to construct generative adversarial networks using quantum circuits. Furthermore, we also show how to compute gradients -- a key element in generative adversarial network training -- using another quantum circuit. We give an example of a simple practical circuit ansatz to parametrize quantum machine learning models and perform a simple numerical experiment to demonstrate that quantum generative adversarial networks can be trained successfully.
- Apr 25 2018 quant-ph arXiv:1804.09139v1. Generative adversarial networks (GANs) represent a powerful tool for classical machine learning: a generator tries to create statistics for data that mimics those of a true data set, while a discriminator tries to discriminate between the true and fake data. The learning process for generator and discriminator can be thought of as an adversarial game, and under reasonable assumptions, the game converges to the point where the generator generates the same statistics as the true data and the discriminator is unable to discriminate between the true and the generated data. This paper introduces the notion of quantum generative adversarial networks (QuGANs), where the data consists either of quantum states, or of classical data, and the generator and discriminator are equipped with quantum information processors. We show that the unique fixed point of the quantum adversarial game also occurs when the generator produces the same statistics as the data. Since quantum systems are intrinsically probabilistic, the proof of the quantum case is different from - and simpler than - the classical case. We show that when the data consists of samples of measurements made on high-dimensional spaces, quantum adversarial networks may exhibit an exponential advantage over classical adversarial networks.
- Apr 23 2018 quant-ph arXiv:1804.07718v2. We reduce measurement errors in a quantum computer using machine learning techniques. We exploit a simple yet versatile neural network to classify multi-qubit quantum states, which is trained using experimental data. This flexible approach allows the incorporation of any number of features of the data with minimal modifications to the underlying network architecture. We experimentally illustrate this approach in the readout of trapped-ion qubits using additional spatial and temporal features in the data. Using this neural network classifier, we efficiently treat qubit readout crosstalk, resulting in a 30% improvement in detection error over the conventional threshold method. Our approach does not depend on the specific details of the system and can be readily generalized to other quantum computing platforms.
- Apr 23 2018 quant-ph arXiv:1804.07653v1. We introduce a new graphical framework for designing quantum error correction codes based on classical principles. A key feature of this graphical language, over previous approaches, is that it is closely related to that of factor graphs or graphical models in classical information theory and machine learning. It enables us to formulate the description of recently-introduced `coherent parity check' quantum error correction codes entirely within the language of classical information theory. This makes our construction accessible without requiring background in quantum error correction or even quantum mechanics in general. More importantly, this allows for a collaborative interplay where one can design new quantum error correction codes derived from classical codes.
- In this paper, we propose a simple neural net that requires only $O(n \log_2 k)$ qubits and $O(nk)$ quantum gates: here, $n$ is the number of input parameters, and $k$ is the number of weights applied to these parameters in the proposed neural net. We describe the network in terms of a quantum circuit, and then draw its equivalent classical neural net, which involves $O(k^n)$ nodes in the hidden layer. We then show that the network uses a periodic activation function: the cosine of a linear combination of the inputs and weights. Backpropagation is described via gradient descent, and the Iris and breast cancer datasets are used for simulations. The numerical results indicate the network can be used in machine learning problems and may provide exponential speedup over the same structured classical neural net.
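The classical side of the construction above can be illustrated with a tiny sketch: a unit whose output is the cosine of a linear combination of inputs and weights, with backpropagation through the periodic activation ($\frac{d}{dz}\cos z = -\sin z$) verified against finite differences. The loss, data, and weights are hypothetical toy choices.

```python
import numpy as np

def forward(w, X):
    # Periodic activation: cosine of a linear combination of inputs.
    return np.cos(X @ w)

def mse_loss(w, X, y):
    return 0.5 * np.mean((forward(w, X) - y) ** 2)

def grad(w, X, y):
    # Backpropagation through the cosine unit: d/dz cos(z) = -sin(z).
    r = forward(w, X) - y
    return (r * -np.sin(X @ w)) @ X / len(X)

X = np.array([[0.2, -0.5], [1.0, 0.3], [-0.4, 0.8]])
y = np.array([0.9, 0.1, 0.5])
w = np.array([0.3, -0.7])

# finite-difference check of the backpropagated gradient
eps = 1e-6
num = np.array([(mse_loss(w + eps * e, X, y) - mse_loss(w - eps * e, X, y)) / (2 * eps)
                for e in np.eye(2)])
```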
- A machine learning technique to obtain the ground states of quantum few-body systems using artificial neural networks is developed. Bosons in continuous space are considered and a neural network is optimized in such a way that when particle positions are input into the network, the ground-state wave function is output from the network. The method is applied to the Calogero-Sutherland model in one-dimensional space and Efimov bound states in three-dimensional space.
- Apr 17 2018 quant-ph arXiv:1804.05231v1. The gradient descent method, one of the major methods in numerical optimization, is a key ingredient in many machine learning algorithms. As one of the most fundamental ways to solve optimization problems, it moves the function value along the direction of steepest descent. Because of the vast resource consumption when dealing with high-dimensional problems, a quantum version of this iterative optimization algorithm was recently proposed [arXiv:1612.01789]. Here, we develop this protocol and implement it on a quantum simulator with limited resources. Moreover, a prototypical experiment is shown on a 4-qubit Nuclear Magnetic Resonance quantum processor, demonstrating the iterative optimization of a polynomial function. In each iteration, we achieved an average fidelity of 94% with respect to theoretical calculations via full-state tomography. In particular, the iterates gradually converged to the local minimum. We apply our method to the multidimensional scaling problem, further showing the potential to yield an exponential improvement over classical counterparts. Given the rapid progress of quantum information technology, our work could provide a subroutine for applications on future practical quantum computers.
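The classical iteration being ported to quantum hardware above is just steepest descent; a minimal sketch on an illustrative polynomial (my choice of function, not the paper's) looks like this:

```python
import numpy as np

def gradient_descent(grad_f, x0, eta=0.1, steps=200):
    """Plain classical gradient descent; the quantum protocol implements
    this same iteration on amplitude-encoded state vectors."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - eta * grad_f(x)   # move along the steepest-descent direction
    return x

# minimize the polynomial f(x, y) = (x - 1)^2 + 2 * (y + 0.5)^2
grad_f = lambda v: np.array([2 * (v[0] - 1.0), 4 * (v[1] + 0.5)])
x_min = gradient_descent(grad_f, [0.0, 0.0])
```

The iterates converge geometrically to the minimum at (1, -0.5), mirroring the convergence to the local minimum reported in the NMR experiment.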
- Quantum circuit Born machines are generative models which represent the probability distribution of a classical dataset as a quantum pure state. Computational complexity considerations of the quantum sampling problem suggest that quantum circuits exhibit stronger expressibility than classical neural networks. One can efficiently draw samples from the quantum circuits via projective measurements on the qubits. However, similar to leading implicit generative models in deep learning, such as generative adversarial networks, the quantum circuits cannot provide the likelihood of the generated samples, which poses a challenge for training. We devise an efficient gradient-based learning algorithm for the quantum circuit Born machine by minimizing the kerneled maximum mean discrepancy loss. We simulate generative modeling of the Bars-and-Stripes dataset and Gaussian mixture distributions using deep quantum circuits. Our experiments show the importance of circuit depth and of the gradient-based optimization algorithm. The proposed learning algorithm is runnable on near-term quantum devices and can exhibit quantum advantages for generative modeling.
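The training loss named above, the kerneled maximum mean discrepancy, can be computed from samples alone, which is why it suits an implicit model like the Born machine. A minimal 1-D sketch with a Gaussian kernel (toy data, my parameter choices):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Squared kernel maximum mean discrepancy between two 1-D samples;
    the Born machine is trained by minimizing this loss between circuit
    samples and data samples."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

a = np.array([0.0, 0.1, -0.1])
b = np.array([0.0, 0.1, -0.1])   # same sample as a
c = np.array([5.0, 5.1, 4.9])    # far-away sample
```

Identical samples give zero MMD while mismatched ones give a large value, so gradient descent on this quantity pushes the circuit distribution toward the data distribution.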
- Apr 12 2018 quant-ph arXiv:1804.03680v1. Hierarchical quantum circuits have been shown to perform binary classification of classical data encoded in a quantum state. We demonstrate that more expressive circuits in the same family achieve better accuracy and can be used to classify highly entangled quantum states, for which there is no known efficient classical method. We compare performance for several different parameterizations on two classical machine learning datasets, Iris and MNIST, and on a synthetic dataset of quantum states. Finally, we demonstrate that performance is robust to noise and deploy an Iris dataset classifier on the ibmqx4 quantum computer.
- Machine learning algorithms often take inspiration from established results and knowledge from statistical physics. A prototypical example is the Boltzmann machine algorithm for supervised learning, which utilizes knowledge of classical thermal partition functions and the Boltzmann distribution. Recent advances in the study of non-equilibrium quantum integrable systems, which never thermalize, have led to the exploration of a wider class of statistical ensembles. These systems may be described by the so-called generalized Gibbs ensemble, which incorporates a number of "effective temperatures". We propose that these generalized Gibbs ensembles can be successfully applied as the basis of a Boltzmann-machine-like learning algorithm, which operates by learning the optimal values of effective temperatures. We apply our algorithm to the classification of handwritten digits in the MNIST database. While lower error rates can be found with other state-of-the-art algorithms, we find that our algorithm reaches relatively low error rates while learning a much smaller number of parameters than would be needed in a traditional Boltzmann machine, thereby reducing computational cost.
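The ensemble structure above can be sketched in a few lines: each discrete state carries several conserved charges $Q_k(s)$, and the generalized Gibbs weights are $p(s) \propto \exp(-\sum_k \beta_k Q_k(s))$, with the effective inverse temperatures $\beta_k$ playing the role of learnable parameters. The charge values below are invented for illustration.

```python
import numpy as np

def gge_probs(charges, betas):
    """Generalized-Gibbs-ensemble probabilities over discrete states:
    p(s) ~ exp(-sum_k beta_k * Q_k(s)). `charges` is an
    (n_states, n_charges) array of conserved-quantity values; `betas`
    holds the effective inverse temperatures the algorithm would learn."""
    w = np.exp(-charges @ betas)
    return w / w.sum()

charges = np.array([[0.0, 1.0],
                    [1.0, 0.0],
                    [1.0, 1.0]])
p = gge_probs(charges, np.array([1.0, 0.5]))
```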
- Apr 11 2018 quant-ph physics.comp-ph arXiv:1804.03159v1. We introduce Strawberry Fields, an open-source quantum programming architecture for light-based quantum computers. Built in Python, Strawberry Fields is a full-stack library for design, simulation, optimization, and quantum machine learning of continuous-variable circuits. The platform consists of three main components: (i) an API for quantum programming based on an easy-to-use language named Blackbird; (ii) a suite of three virtual quantum computer backends, built in NumPy and TensorFlow, each targeting specialized uses; and (iii) an engine which can compile Blackbird programs on various backends, including the three built-in simulators, and -- in the near future -- photonic quantum information processors. The library also contains examples of several paradigmatic algorithms, including teleportation, (Gaussian) boson sampling, instantaneous quantum polynomial, Hamiltonian simulation, and variational quantum circuit optimization.
- Apr 10 2018 quant-ph cond-mat.dis-nn arXiv:1804.02926v1. A quantum computer needs the assistance of a classical algorithm to detect and identify errors that affect encoded quantum information. At this interface of classical and quantum computing the technique of machine learning has appeared as a way to tailor such an algorithm to the specific error processes of an experiment --- without the need for a priori knowledge of the error model. Here, we apply this technique to topological color codes. We demonstrate that a recurrent neural network with long short-term memory cells can be trained to reduce the error rate $\epsilon_{\rm L}$ of the encoded logical qubit to values much below the error rate $\epsilon_{\rm phys}$ of the physical qubits --- fitting the expected power law scaling $\epsilon_{\rm L} \propto \epsilon_{\rm phys}^{(d+1)/2}$, with $d$ the code distance. The neural network incorporates the information from "flag qubits" to avoid reduction in the effective code distance caused by the circuit. As a test, we apply the neural network decoder to a density-matrix based simulation of a superconducting quantum computer, demonstrating that the logical qubit has a longer lifetime than the constituent physical qubits with near-term experimental parameters.
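The power law quoted above can be written down directly; the prefactor `a` below is a hypothetical fit constant (not from the paper), included only to show how the scaling suppresses the logical error rate as the code distance grows.

```python
def logical_error_rate(eps_phys, d, a=1.0):
    """Expected below-threshold scaling eps_L = a * eps_phys**((d + 1) / 2),
    with d the code distance; the prefactor a is a hypothetical fit constant."""
    return a * eps_phys ** ((d + 1) / 2)

# increasing the code distance d suppresses the logical error rate
rates = [logical_error_rate(1e-3, d) for d in (3, 5, 7)]
```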
- Matrix product states minimize bipartite correlations to compress the classical data representing quantum states. Matrix product state algorithms and similar tools---called tensor network methods---form the backbone of modern numerical methods used to simulate many-body physics. Matrix product states have a further range of applications in machine learning. Finding matrix product states is in general computationally challenging, a task which we show quantum computers can accelerate. We present a quantum algorithm which returns a classical description of a $k$-rank matrix product state approximating an eigenvector, given black-box access to a unitary matrix. Each iteration of the optimization requires $O(n\cdot k^2)$ quantum gates, yielding sufficient conditions for our quantum variational algorithm to terminate in polynomial time.
- Variational quantum simulation of imaginary time evolution with applications in chemistry and beyond. Apr 10 2018 quant-ph arXiv:1804.03023v1. Imaginary time evolution is a powerful tool in the study of many-body quantum systems. While it is conceptually simple to simulate such evolution with a classical computer, the time and memory requirements scale exponentially with the system size. Conversely, quantum computers can efficiently simulate many-body quantum systems, but the non-unitary nature of imaginary time evolution is incompatible with canonical unitary quantum circuits. Here we propose a variational method for simulating imaginary time evolution on a quantum computer, using a hybrid algorithm that combines quantum and classical resources. We apply this technique to the problem of finding the ground state energy of many-particle Hamiltonians. We numerically test our algorithm on problems in quantum computational chemistry; specifically, finding the ground state energy of the Hydrogen molecule and Lithium Hydride. Our algorithm successfully finds the global ground state with high probability, outperforming gradient descent optimisation which commonly becomes trapped in local minima. Our method can also be applied to general optimisation problems, Gibbs state preparation, and quantum machine learning. As our algorithm is hybrid, suitable for error mitigation methods, and can exploit shallow quantum circuits, it can be implemented with current and near-term quantum computers.
- We apply the framework of block-encodings, introduced by Low and Chuang (under the name standard-form), to the study of quantum machine learning algorithms using quantum accessible data structures. We develop several tools within the block-encoding framework, including quantum linear system solvers using block-encodings. Our results give new techniques for Hamiltonian simulation of non-sparse matrices, which could be relevant for certain quantum chemistry applications, and which in turn imply an exponential improvement in the dependence on precision in quantum linear systems solvers for non-sparse matrices. In addition, we develop a technique of variable-time amplitude estimation, based on Ambainis' variable-time amplitude amplification technique, which we are also able to apply within the framework. As applications, we design the following algorithms: (1) a quantum algorithm for the quantum weighted least squares problem, exhibiting a 6-th power improvement in the dependence on the condition number and an exponential improvement in the dependence on the precision over the previous best algorithm of Kerenidis and Prakash; (2) the first quantum algorithm for the quantum generalized least squares problem; and (3) quantum algorithms for estimating electrical-network quantities, including effective resistance and dissipated power, improving upon previous work in other input models.
- The intersection between the fields of machine learning and quantum information processing is proving to be a fruitful field for the discovery of new quantum algorithms, which potentially offer an exponential speed-up over their classical counterparts. However, many such algorithms require the ability to produce states proportional to vectors stored in quantum memory. Even given access to quantum databases which store exponentially long vectors, the construction of which is considered a one-off overhead, it has been argued that the cost of preparing such amplitude-encoded states may offset any exponential quantum advantage. Here we argue that specifically in the context of machine learning applications it suffices to prepare a state close to the ideal state only in the $\infty$-norm, and that this can be achieved with only a constant number of memory queries.
- Apr 03 2018 quant-ph arXiv:1804.00633v1. The current generation of quantum computing technologies calls for quantum algorithms that require a limited number of qubits and quantum gates, and which are robust against errors. A suitable design approach is variational circuits, where the parameters of gates are learnt, an approach that is particularly fruitful for applications in machine learning. In this paper, we propose a low-depth variational quantum algorithm for supervised learning. The input feature vectors are encoded into the amplitudes of a quantum system, and a quantum circuit of parametrised single- and two-qubit gates together with a single-qubit measurement is used to classify the inputs. This circuit architecture ensures that the number of learnable parameters is poly-logarithmic in the input dimension. We propose a quantum-classical training scheme in which the analytical gradients of the model can be estimated by running several slightly adapted versions of the variational circuit. We show with simulations that the circuit-centric quantum classifier performs well on standard classical benchmark datasets while requiring dramatically fewer parameters than other methods. We also evaluate the sensitivity of the classification to state preparation and parameter noise, introduce a quantum version of dropout regularisation, and provide a graphical representation of quantum gates as highly symmetric linear layers of a neural network.
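The encoding step described above is worth seeing concretely: a feature vector is padded to length $2^n$ and normalized, so it becomes the state vector of only $n$ qubits. This sketch shows just that preprocessing step (the helper name is mine), not the variational circuit itself.

```python
import numpy as np

def amplitude_encode(x):
    """Amplitude encoding: a feature vector is padded to length 2^n and
    normalized, becoming the state vector of n qubits, so the qubit count
    grows only logarithmically with the input dimension."""
    x = np.asarray(x, dtype=float)
    n = int(np.ceil(np.log2(len(x))))
    state = np.zeros(2 ** n)
    state[: len(x)] = x
    return state / np.linalg.norm(state)

state = amplitude_encode([3.0, 0.0, 4.0])   # 3 features -> 2 qubits
```

The squared amplitudes sum to one, as required of a valid quantum state.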
- Apr 02 2018 quant-ph arXiv:1803.11278v2. We propose to generalise classical maximum likelihood learning to density matrices. As the objective function, we propose a quantum likelihood that is related to the cross entropy between density matrices. We apply this learning criterion to the quantum Boltzmann machine (QBM), previously proposed by Amin et al. (2016). We demonstrate for the first time learning a quantum Hamiltonian from quantum statistics. For the anti-ferromagnetic Heisenberg and XYZ models we recover the true ground-state wave function and Hamiltonian. The second contribution is to apply quantum learning to learn from classical data. Quantum learning uses, in addition to the classical statistics, also quantum statistics for learning. These statistics may violate the Bell inequality, as in the quantum case. Maximizing the quantum likelihood yields results that are significantly more accurate than the classical maximum likelihood approach in several cases. We give an example of how the QBM can learn a strongly non-linear problem such as the parity problem. The solution shows entanglement, quantified by the entanglement entropy.
- Machine learning is a promising application of quantum computing, but challenges remain as near-term devices will have a limited number of physical qubits and high error rates. Motivated by the usefulness of tensor networks for machine learning in the classical context, we propose quantum computing approaches to both discriminative and generative learning, with circuits based on tree and matrix product state tensor networks that could have benefits for near-term devices. The result is a unified framework where classical and quantum computing can benefit from the same theoretical and algorithmic developments, and the same model can be trained classically and then transferred to the quantum setting for additional optimization. Tensor network circuits can also provide qubit-efficient schemes where, depending on the architecture, the number of physical qubits required scales only logarithmically with, or independently of, the input or output data sizes. We demonstrate our proposals with numerical experiments, training a discriminative model to perform handwriting recognition using an optimization procedure that could be carried out on quantum hardware, and testing the noise resilience of the trained model.
- Many experimental proposals for noisy intermediate-scale quantum devices involve training a parameterized quantum circuit with a classical optimization loop. Such hybrid quantum-classical algorithms are popular for applications in quantum simulation, optimization, and machine learning. Due to their simplicity and hardware efficiency, random circuits are often proposed as initial guesses for exploring the space of quantum states. We show that the exponential dimension of Hilbert space and the gradient estimation complexity make this choice unsuitable for hybrid quantum-classical algorithms run on more than a few qubits. Specifically, we show that for a wide class of reasonable parameterized quantum circuits, the probability that the gradient along any reasonable direction is non-zero to some fixed precision is exponentially small as a function of the number of qubits. We argue that this is related to the 2-design characteristic of random circuits, and that solutions to this problem must be studied.
- The method of choice to study one-dimensional strongly interacting many-body quantum systems is based on matrix product states and operators. Such methods allow one to explore the most relevant, and numerically manageable, portion of an exponentially large space. They also allow one to describe accurately correlations between distant parts of a system, an important ingredient for accounting for context in machine learning tasks. Here we introduce a machine learning model in which matrix product operators are trained to implement sequence-to-sequence prediction, i.e., given a sequence at one time step, the model predicts the sequence at the next. We then apply our algorithm to cellular automata (for which we show exact analytical solutions in terms of matrix product operators) and to nonlinear coupled maps. We show advantages of the proposed algorithm over conditional random fields and bidirectional long short-term memory neural networks. To highlight the flexibility of the algorithm, we also show that it can readily perform classification tasks.
- We study the problem of preparing a quantum many-body system from an initial to a target state by optimizing the fidelity over the family of bang-bang protocols. We present compelling numerical evidence for a universal spin-glass-like transition controlled by the protocol time duration. The glassy critical point is marked by the occurrence of an extensive number of protocols with close-to-optimal fidelity and with a true optimum that appears exponentially difficult to locate. Using a machine learning (ML) inspired framework based on the manifold learning algorithm t-SNE, we are able to visualize the geometry of the high-dimensional control landscape in an effective low-dimensional representation. Across the glassy transition, the control landscape features a proliferation of an exponential number of attractors separated by extensive barriers, which bears a strong resemblance to replica symmetry breaking in spin glasses and random satisfiability problems. We further show that the quantum control landscape maps onto a disorder-free classical Ising model with frustrated nonlocal, multibody interactions. Our work highlights an intricate but unexpected connection between optimal quantum control and spin-glass physics, and how tools from ML can be used to visualize and understand glassy optimization landscapes.
- Mar 29 2018 quant-ph arXiv:1803.10296v1. Considering recent advancements and successes in the development of efficient quantum algorithms for electronic structure calculations --- alongside similarly impressive results using machine learning techniques for computation --- hybridizing quantum computing with machine learning for the purpose of performing electronic structure calculations is a natural progression. Here we present a hybrid quantum algorithm employing a quantum restricted Boltzmann machine to obtain accurate molecular potential energy surfaces. The Boltzmann machine trains parameters within an Ising-type model which exists in thermal equilibrium. By exploiting a quantum algorithm to optimize the underlying objective function, we obtain an efficient procedure for calculating the electronic ground-state energy of a system. Our approach achieves high accuracy for the ground-state energy of simple molecular systems such as H2, LiH, and H2O at specific locations on their potential energy surfaces. With the future availability of larger-scale quantum computers and the possible training of some machine units with simple dimensional-scaling results for electronic structure, quantum machine learning techniques are set to become powerful tools for obtaining accurate values of ground-state energies and electronic structure for molecular systems.
- Gaussian processes (GPs) are important models in supervised machine learning. Training in Gaussian processes refers to selecting the covariance functions and the associated parameters in order to improve the outcome of predictions, the core of which amounts to evaluating the logarithm of the marginal likelihood (LML) of a given model. LML gives a concrete measure of the quality of prediction that a GP model is expected to achieve. The classical computation of LML typically carries a polynomial time overhead with respect to the input size. We propose a quantum algorithm that computes the logarithm of the determinant of a Hermitian matrix, which runs in logarithmic time for sparse matrices. This is applied in conjunction with a variant of the quantum linear system algorithm that allows for logarithmic time computation of the form $\mathbf{y}^TA^{-1}\mathbf{y}$, where $\mathbf{y}$ is a dense vector and $A$ is the covariance matrix. We hence show that quantum computing can be used to estimate the LML of a GP with exponentially improved efficiency under certain conditions.
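The two quantities the quantum subroutines above accelerate — $\log|K|$ (the log-determinant of the covariance matrix) and $\mathbf{y}^T K^{-1}\mathbf{y}$ — are exactly what the classical LML evaluation needs. A minimal pure-Python sketch via Cholesky factorization (the covariance matrix and targets here are illustrative placeholders, not from the paper):

```python
import math

def cholesky(K):
    """Lower-triangular Cholesky factor of a symmetric positive-definite matrix."""
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(K[i][i] - s)
            else:
                L[i][j] = (K[i][j] - s) / L[j][j]
    return L

def solve_cholesky(L, y):
    """Solve K x = y given K = L L^T via forward then back substitution."""
    n = len(L)
    z = [0.0] * n
    for i in range(n):
        z[i] = (y[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (z[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

def log_marginal_likelihood(K, y):
    """log p(y) = -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2 pi)."""
    n = len(y)
    L = cholesky(K)
    alpha = solve_cholesky(L, y)                               # K^{-1} y
    quad = sum(yi * ai for yi, ai in zip(y, alpha))            # y^T K^{-1} y
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))    # log|K|
    return -0.5 * quad - 0.5 * logdet - 0.5 * n * math.log(2 * math.pi)
```

The Cholesky route costs $O(n^3)$ classically, which is the polynomial overhead the quantum algorithm targets.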
- It is a fundamental, but still elusive question whether methods based on quantum mechanics, in particular on quantum entanglement, can be used for classical information processing and machine learning. Even a partial answer to this question would bring important insights to both fields of machine learning and quantum mechanics. In this work, we implement simple numerical experiments, related to pattern/image classification, in which we represent the classifiers by many-qubit quantum states written as matrix product states (MPS). A classical machine learning algorithm is applied to these quantum states to learn the classical data. We explicitly show how quantum features (i.e., single-site and bipartite entanglement) can emerge in images represented this way. In particular, entanglement characterizes here the importance of data, and such information is used to guide the architecture of the MPS and improve the efficiency. The number of needed qubits can be reduced to less than $1/10$ of the original number. We expect such numerical experiments could open new paths in classical machine learning algorithms, and at the same time shed light on generic quantum simulations/computations for machine learning tasks.
- The harnessing of modern computational abilities for many-body wave-function representations is naturally placed as a prominent avenue in contemporary condensed matter physics. Specifically, highly expressive computational schemes that are able to efficiently represent the entanglement properties of many-particle systems are of interest. In the seemingly unrelated field of machine learning, deep network architectures have exhibited an unprecedented ability to tractably encompass the dependencies characterizing hard learning tasks such as image classification. However, key questions regarding deep learning architecture design still have no adequate theoretical answers. In this paper, we establish a Tensor Network (TN) based common language between the two disciplines, which allows us to offer bidirectional contributions. By showing that many-body wave-functions are structurally equivalent to mappings of ConvACs and RACs, we construct their TN equivalents, and suggest quantum entanglement measures as natural quantifiers of dependencies in such networks. Accordingly, we propose a novel entanglement based deep learning design scheme. In the other direction, we identify that an inherent re-use of information in state-of-the-art deep learning architectures is a key trait that distinguishes them from standard TNs. Therefore, we employ a TN manifestation of information re-use and construct TNs corresponding to powerful architectures such as deep recurrent and overlapping convolutional networks. This allows us to demonstrate that the entanglement scaling supported by state-of-the-art deep learning architectures matches that of MERA TN in 1D, and that they support volume law entanglement in 2D polynomially more efficiently than RBMs. We thus provide theoretical motivation to shift trending neural-network based wave-function representations closer to state-of-the-art deep learning architectures.
- Mar 23 2018 cond-mat.other arXiv:1803.08195v1 We present a machine-learning method for predicting sharp transitions in a Hamiltonian phase diagram by extrapolating the properties of quantum systems. The method is based on Gaussian Process regression with a combination of kernels chosen through an iterative procedure maximizing the predicting power of the kernels. The method is capable of extrapolating across the transition lines. The calculations within a given phase can be used to predict not only the closest sharp transition, but also a transition removed from the available data by a separate phase. This makes the present method particularly valuable for searching phase transitions in the parts of the parameter space that cannot be probed experimentally or theoretically.
- Machine learning techniques can reveal hidden structure in large amounts of data and can potentially extend or even replace analytical scientific methods. In nanophotonics, modes can increase the light yield from emitters located inside the nanostructure or near the surface. Optimizing such systems requires systematically analyzing large amounts of three-dimensional field distribution data. We present a method based on finite element simulations and machine learning for the identification of modes with large field energies and specific spatial properties. By clustering we reduce the field distribution data to a minimal subset of prototypes. The predictive power of the approach is demonstrated using an analysis of experimentally measured fluorescence enhancement of quantum dots on a photonic crystal surface. The clustering method can be used for any optimization task that depends on three-dimensional field data, and is therefore relevant for biosensing, quantum dot solar cells, or photon upconversion.
- Mar 21 2018 quant-ph arXiv:1803.07128v1 The basic idea of quantum computing is surprisingly similar to that of kernel methods in machine learning, namely to efficiently perform computations in an intractably large Hilbert space. In this paper we explore some theoretical foundations of this link and show how it opens up a new avenue for the design of quantum machine learning algorithms. We interpret the process of encoding inputs in a quantum state as a nonlinear feature map that maps data to quantum Hilbert space. A quantum computer can now analyse the input data in this feature space. Based on this link, we discuss two approaches for building a quantum model for classification. In the first approach, the quantum device estimates inner products of quantum states to compute a classically intractable kernel. This kernel can be fed into any classical kernel method such as a support vector machine. In the second approach, we can use a variational quantum circuit as a linear model that classifies data explicitly in Hilbert space. We illustrate these ideas with a feature map based on squeezing in a continuous-variable system, and visualise the working principle with $2$-dimensional mini-benchmark datasets.
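The first approach — estimating a kernel from state overlaps — can be mimicked classically for a toy feature map. Here the encoding $|\phi(x)\rangle = \cos x\,|0\rangle + \sin x\,|1\rangle$ is an illustrative single-qubit rotation, not the squeezing-based map of the paper; the kernel value is what an overlap estimate on the quantum device would return:

```python
import math

def feature_map(x):
    """Encode a scalar into a single-qubit state: |phi(x)> = cos(x)|0> + sin(x)|1>.
    (An assumed toy encoding, not the paper's continuous-variable squeezing map.)"""
    return (math.cos(x), math.sin(x))

def quantum_kernel(x1, x2):
    """k(x1, x2) = |<phi(x1)|phi(x2)>|^2 -- the quantity an overlap test estimates."""
    a, b = feature_map(x1), feature_map(x2)
    overlap = a[0] * b[0] + a[1] * b[1]
    return overlap * overlap

def gram_matrix(xs):
    """Kernel (Gram) matrix, ready to feed into any classical kernel method."""
    return [[quantum_kernel(xi, xj) for xj in xs] for xi in xs]
```

For this encoding $k(x,x') = \cos^2(x - x')$; the resulting Gram matrix could then be passed to, e.g., a support vector machine, exactly as the abstract describes.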
- Mar 20 2018 quant-ph arXiv:1803.07039v1 Machine learning is a crucial aspect of artificial intelligence. This paper details an approach for quantum Hebbian learning through a batched version of quantum state exponentiation. Here, batches of quantum data interact with learning and processing quantum bits (qubits) through a series of elementary controlled partial swap operations, resulting in a Hamiltonian simulation of the statistical ensemble of the data. We decompose this elementary operation into one- and two-qubit quantum gates from the Clifford+$T$ set and use the decomposition to perform an efficiency analysis. Our construction of quantum Hebbian learning is motivated by extension from the established classical approach, and it can be used to find details about the data, such as eigenvalues, through phase estimation. This work contributes to the near-term development and implementation of quantum machine learning techniques.
- Mar 15 2018 quant-ph arXiv:1803.05169v1 We study the application of machine learning methods based on the geometrical and time-series character of data to quantum control. We demonstrate that recurrent neural networks possess the ability to generalize the correction pulses with respect to the level of noise present in the system. We also show that the utilisation of the geometrical structure of control pulses is sufficient for achieving high fidelity in quantum control using machine learning procedures.
- Machine learning employs dynamical algorithms that mimic the human capacity to learn, among which reinforcement learning algorithms are the most human-like in this respect. On the other hand, adaptability is essential to perform any task efficiently in a changing environment, and it is fundamental for many purposes, such as natural selection. Here, we propose an algorithm based on successive measurements to adapt one quantum state to a reference unknown state, in the sense of achieving maximum overlap. The protocol naturally provides many identical copies of the reference state, such that in each measurement iteration more information about it is obtained. In our protocol, we consider a system composed of three parts: the "environment" system, which provides the reference state copies; the register, an auxiliary subsystem that interacts with the environment to acquire information from it; and the agent, which corresponds to the quantum state that is adapted by digital feedback with input corresponding to the outcome of the measurements on the register. With this proposal we can achieve an average fidelity between the environment and the agent of more than $90\%$ with fewer than $30$ iterations of the protocol. In addition, we extend the formalism to $d$-dimensional states, reaching an average fidelity of around $80\%$ in fewer than $400$ iterations for $d=11$, for a variety of genuinely quantum as well as semiclassical states. This work paves the way for the development of quantum reinforcement learning protocols using quantum data, and the future deployment of semi-autonomous quantum systems.
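The reward-and-punish flavour of such measurement-driven adaptation can be caricatured for real single-qubit states: a fresh copy of the unknown reference is projected onto the agent's basis, and a failed projection triggers a random corrective rotation whose range narrows as outcomes improve. This is a simplified illustration of the idea only, not the authors' protocol; the shrink factor and iteration count are arbitrary choices:

```python
import math
import random

def adapt(theta_ref, iters=2000, seed=0):
    """Toy measurement-feedback adaptation of a real qubit state.
    Agent state: |theta> = cos(theta)|0> + sin(theta)|1>; the reference angle
    theta_ref is unknown to the agent and only accessed via measurement outcomes."""
    rng = random.Random(seed)
    theta = 0.0            # agent state angle
    delta = math.pi / 2    # current range of the corrective feedback rotation
    for _ in range(iters):
        p_align = math.cos(theta - theta_ref) ** 2    # Born-rule projection probability
        if rng.random() < p_align:
            delta *= 0.9                              # reward: narrow the search range
        else:
            delta = min(math.pi / 2, delta / 0.9)     # punish: widen the search range...
            theta += rng.uniform(-delta, delta)       # ...and rotate the agent randomly
    return math.cos(theta - theta_ref) ** 2           # final fidelity with the reference
```

The agent only stops moving once projections keep succeeding, i.e. once its overlap with the reference is high, which is the qualitative mechanism of the abstract's protocol.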
- Mar 14 2018 quant-ph arXiv:1803.04574v1 Quantum reservoir computing provides a framework for exploiting the natural dynamics of quantum systems as a computational resource. It can implement real-time signal processing and solve temporal machine learning problems in general, which requires memory and nonlinear mapping of the recent input stream using quantum dynamics in the computational supremacy region, where classical simulation of the system is intractable. A nuclear magnetic resonance spin-ensemble system is one of the realistic candidates for such physical implementations, currently available in laboratories. In this paper, considering these realistic experimental constraints for implementing the framework, we introduce a scheme, which we call a spatial multiplexing technique, to effectively boost the computational power of the platform. This technique exploits disjoint dynamics, which originate from multiple different quantum systems driven by common input streams in parallel. Accordingly, rather than designing a single large quantum system to increase the number of qubits serving as computational nodes, it is possible to prepare a large number of qubits from multiple small quantum systems, which are operationally easy to handle in laboratory experiments. We numerically demonstrate the effectiveness of the technique using several benchmark tasks and quantitatively investigate its specifications, range of validity, and limitations in detail.
- Mar 13 2018 physics.chem-ph arXiv:1803.04395v1 We use HIP-NN, a neural network architecture that excels at predicting molecular energies, to predict atomic charges. The charge predictions are accurate over a wide range of molecules (both small and large) and for a diverse set of charge assignment schemes. To demonstrate the power of charge prediction on non-equilibrium geometries, we use HIP-NN to generate IR spectra from dynamical trajectories on a variety of molecules. The results are in good agreement with reference IR spectra produced by traditional theoretical methods. Critically, for this application, HIP-NN charge predictions are about $10^4$ times faster than direct DFT charge calculations. Thus, ML provides a pathway to greatly increase the range of feasible simulations while retaining quantum-level accuracy. In summary, our results provide further evidence that machine learning can replicate high-level quantum calculations at a tiny fraction of the computational cost.
- Mar 13 2018 quant-ph arXiv:1803.04114v1 Short-depth algorithms are crucial for reducing computational error on near-term quantum computers, for which decoherence and gate infidelity remain important issues. Here we present a machine-learning approach for discovering such algorithms. We apply our method to a ubiquitous primitive: computing the overlap ${\rm Tr}(\rho\sigma)$ between two quantum states $\rho$ and $\sigma$. The standard algorithm for this task, known as the Swap Test, is used in many applications such as quantum support vector machines, and, when specialized to $\rho = \sigma$, quantifies the Rényi entanglement. Here, we find algorithms that have shorter depths than the Swap Test, including one that has constant depth (independent of problem size). Furthermore, we apply our approach to the hardware-specific connectivity and gate alphabets used by Rigetti's and IBM's quantum computers and demonstrate that the shorter algorithms that we derive significantly reduce the error - compared to the Swap Test - on these computers.
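For reference, the target quantity and the Swap Test's output statistics are easy to state exactly: the ancilla of the standard Swap Test reads $|0\rangle$ with probability $P(0) = (1 + {\rm Tr}(\rho\sigma))/2$. The sketch below computes both for small density matrices as a classical cross-check; it is not the short-depth circuits found in the paper:

```python
def matmul(A, B):
    """Product of two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def state_overlap(rho, sigma):
    """Tr(rho sigma); for pure states this equals |<psi|phi>|^2."""
    return trace(matmul(rho, sigma)).real

def swap_test_p0(rho, sigma):
    """Probability of measuring the ancilla in |0> in the standard Swap Test."""
    return 0.5 * (1.0 + state_overlap(rho, sigma))

def density(psi):
    """Outer product |psi><psi| for a pure state vector."""
    return [[a * b.conjugate() for b in psi] for a in psi]
```

Estimating $P(0)$ from repeated runs and inverting the relation recovers the overlap, which is what any of the competing circuits must reproduce.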
- Introduction to the special issue of Phil. Trans. R. Soc. A 376, 2018, `Hilbert's Sixth Problem'. The essence of the Sixth Problem is discussed and the content of this issue is introduced. In 1900, David Hilbert presented 23 problems for the advancement of mathematical science. Hilbert's Sixth Problem proposed the expansion of the axiomatic method outside of mathematics, in physics and beyond. Its title was shocking: "Mathematical Treatment of the Axioms of Physics." Axioms of physics did not exist and were not expected. During further explanation, Hilbert specified this problem with special focus on probability and "the limiting processes, ... which lead from the atomistic view to the laws of motion of continua". The programmatic call was formulated "to treat, by means of axioms, those physical sciences in which already today mathematics plays an important part." This issue presents a modern slice of the work on the Sixth Problem, from quantum probability to fluid dynamics and machine learning, and from review of solid mathematical and physical results to opinion pieces with new ambitious ideas. Some expectations were broken: The continuum limit of atomistic kinetics may differ from the classical fluid dynamics. The "curse of dimensionality" in machine learning turns into the "blessing of dimensionality" that is closely related to statistical physics. Quantum probability facilitates the modelling of geological uncertainty and hydrocarbon reservoirs. And many other findings are presented.
- Mar 09 2018 physics.optics arXiv:1803.02875v1 Topological concepts open many new horizons for photonic devices, from integrated optics to lasers. The complexity of large scale topological devices calls for an effective solution of the inverse problem: how best to engineer the topology for a specific application? We introduce a novel machine learning approach to the topological inverse problem. We train a neural network system with the band structure of the Aubry-Andre-Harper model and then adopt the network for solving the inverse problem. Our application is able to identify the parameters of a complex topological insulator in order to obtain protected edge states at target frequencies. One challenging aspect is handling the multivalued branches of the direct problem and discarding unphysical solutions. We overcome this problem by adopting a self-consistent method to select only physically relevant solutions. We demonstrate our technique in a realistic topological laser design and by resorting to the widely available open-source TensorFlow library. Our results are general and scalable to thousands of topological components. This new inverse design technique based on machine learning potentially extends the applications of topological photonics, for example, to frequency combs, quantum sources, neuromorphic computing and metrology.
- Mar 09 2018 quant-ph arXiv:1803.02886v1 We present an algorithm for quantum-assisted cluster analysis (QACA) that makes use of the topological properties of a D-Wave 2000Q quantum processing unit (QPU). Clustering is a form of unsupervised machine learning, where instances are organized into groups whose members share similarities. The assignments are, in contrast to classification, not known a priori, but generated by the algorithm. We explain how the problem can be expressed as a quadratic unconstrained binary optimization (QUBO) problem, and show that the introduced quantum-assisted clustering algorithm is, regarding accuracy, equivalent to commonly used classical clustering algorithms. Quantum annealing algorithms belong to the class of metaheuristic tools, applicable for solving binary optimization problems. Hardware implementations of quantum annealing, such as the quantum annealing machines produced by D-Wave Systems, have been subject to multiple analyses in research, with the aim of characterizing the technology's usefulness for optimization, sampling, and clustering. Our first and foremost aim is to explain how to represent and solve parts of these problems with the help of the QPU, and not to prove supremacy over every existing classical clustering algorithm.
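A QUBO encoding of clustering can be illustrated on a toy instance: one-hot binary variables $x_{i,c}$ assign point $i$ to cluster $c$, same-cluster pairs pay their squared distance, and the one-hot constraint enters as a quadratic penalty. A brute-force minimizer stands in for the annealer here; the exact QUBO coefficients used in the paper may differ, so treat this as an illustrative formulation:

```python
import itertools

def qubo_clustering(points, k=2, penalty=10.0):
    """Brute-force the QUBO whose ground state groups nearby points together.
    Variables x[i][c] = 1 iff point i belongs to cluster c (one-hot per point)."""
    n = len(points)
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    best, best_energy = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n * k):
        x = [bits[i * k:(i + 1) * k] for i in range(n)]
        # same-cluster pairs pay their squared distance
        e = sum(dist(points[i], points[j]) * x[i][c] * x[j][c]
                for i in range(n) for j in range(i + 1, n) for c in range(k))
        # one-hot constraint enforced as a quadratic penalty
        e += penalty * sum((sum(row) - 1) ** 2 for row in x)
        if e < best_energy:
            best_energy = e
            best = [row.index(1) if 1 in row else -1 for row in x]
    return best
```

On a quantum annealer the same quadratic energy function would be handed to the QPU instead of being enumerated, which is the substitution the abstract describes.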
- Mar 06 2018 quant-ph arXiv:1803.01486v1 The HHL quantum algorithm for solving linear systems is one of the most important subroutines in many quantum machine learning algorithms. In this work, we present and analyze several further caveats in the HHL algorithm which have been ignored in the past. We discuss their influence on the efficiency, accuracy, and practicability of the HHL algorithm and of several related quantum machine learning algorithms. We also find that these caveats affect the HHL algorithm more deeply than the caveats already noticed. To obtain more practical quantum machine learning algorithms with fewer assumptions based on the HHL algorithm, more attention should be paid to these caveats.
- Mar 06 2018 cond-mat.str-el cond-mat.dis-nn arXiv:1803.01035v1 Quantum many-body systems realise many different phases of matter characterised by their exotic emergent phenomena. While some simple versions of these properties can occur in systems of free fermions, their occurrence generally implies that the physics is dictated by an interacting Hamiltonian. The interaction distance has been successfully used to quantify the effect of interactions in a variety of states of matter via the entanglement spectrum [Nat. Commun. 8, 14926 (2017), arXiv:1705.09983]. The computation of the interaction distance reduces to a global optimisation problem whose goal is to search for the free-fermion entanglement spectrum closest to the given entanglement spectrum. In this work, we employ techniques from machine learning in order to perform this same task. In a supervised learning setting, we use labelled data obtained by computing the interaction distance and predict its value via linear regression. Moving to a semi-supervised setting, we train an auto-encoder to estimate an alternative measure to the interaction distance, and we show that it behaves in a similar manner.
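The supervised regressor in this setting is standard; for one feature, ordinary least squares via the normal equations is a few lines. The data below are synthetic placeholders, not entanglement-spectrum features:

```python
def linear_regression(X, y):
    """Ordinary least squares for one feature plus intercept (normal equations)."""
    n = len(y)
    mx = sum(X) / n
    my = sum(y) / n
    sxx = sum((x - mx) ** 2 for x in X)            # centered second moment of X
    sxy = sum((x - mx) * (yi - my) for x, yi in zip(X, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept
```

In the paper's pipeline the labels $y$ would be computed interaction distances and $X$ features extracted from entanglement spectra.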
- Quantum computing exploits quantum phenomena such as superposition and entanglement to realize a form of parallelism that is not available to traditional computing. It offers the potential of significant computational speed-ups in quantum chemistry, materials science, cryptography, and machine learning. The dominant approach to programming quantum computers is to provide an existing high-level language with libraries that allow for the expression of quantum programs. This approach can permit computations that are meaningless in a quantum context; prohibits succinct expression of interaction between classical and quantum logic; and does not provide important constructs that are required for quantum programming. We present Q#, a quantum-focused domain-specific language explicitly designed to correctly, clearly and completely express quantum algorithms. Q# provides a type system; a tightly constrained environment to safely interleave classical and quantum computations; specialized syntax; symbolic code manipulation to automatically generate correct transformations of quantum operations; and powerful functional constructs which aid composition.
- Mar 05 2018 quant-ph arXiv:1803.00745v1 We propose a classical-quantum hybrid algorithm for machine learning on near-term quantum processors, which we call quantum circuit learning. A quantum circuit driven by our framework learns a given task by tuning parameters implemented on it. The iterative optimization of the parameters allows us to circumvent high-depth circuits. Theoretical investigation shows that a quantum circuit can approximate nonlinear functions, which is further confirmed by numerical simulations. Hybridizing a low-depth quantum circuit and a classical computer for machine learning, the proposed framework paves the way toward applications of near-term quantum devices for quantum machine learning.
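The parameter-tuning loop at the core of such hybrid schemes can be sketched with a one-parameter circuit: $R_y(\theta)|0\rangle$ gives $\langle Z\rangle = \cos\theta$, the gradient is available through the parameter-shift rule, and a classical optimizer updates $\theta$. This is a toy sketch; the real framework uses multi-qubit circuits and sampled (shot-noise-limited) expectation values:

```python
import math

def expectation_z(theta):
    """<Z> after R_y(theta)|0>; stands in for a measurement on the quantum device."""
    return math.cos(theta)

def parameter_shift_grad(theta):
    """Gradient of <Z> from two shifted circuit evaluations (parameter-shift rule)."""
    return 0.5 * (expectation_z(theta + math.pi / 2) - expectation_z(theta - math.pi / 2))

def train(target, theta=0.1, lr=0.5, steps=200):
    """Classically tune theta so the circuit output matches `target` (squared loss)."""
    for _ in range(steps):
        err = expectation_z(theta) - target
        theta -= lr * 2.0 * err * parameter_shift_grad(theta)   # gradient descent step
    return theta
```

Only the two circuit evaluations per gradient run on the quantum device; the update itself stays classical, which is the division of labour the abstract proposes.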
- Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.
- One of the ambitious goals of artificial intelligence is to build a machine that outperforms humans, even if limited knowledge and data are provided. Reinforcement Learning (RL) provides one such possibility to reach this goal. In this work, we consider a specific task from quantum physics, i.e. quantum state transfer in a one-dimensional spin chain. The mission for the machine is to find transfer schemes with the fastest speeds while maintaining high transfer fidelities. The first scenario we consider is when the Hamiltonian is time-independent. We update the coupling strength by minimizing a loss function dependent on both the fidelity and the speed. Compared with a scheme proven to be at the quantum speed limit for perfect state transfer, the scheme provided by RL is faster while maintaining the infidelity below $5\times 10^{-4}$. In the second scenario, where a time-dependent external field is introduced, we convert the state transfer process into a Markov decision process that can be understood by the machine. We solve it with the deep Q-learning algorithm. After training, the machine successfully finds transfer schemes with high fidelities and speeds, which are faster than previously known ones. These results show that Reinforcement Learning can be a powerful tool for quantum control problems.
- Feb 27 2018 physics.chem-ph arXiv:1802.09238v1 Molecular dynamics (MD) simulations employing classical force fields constitute the cornerstone of contemporary atomistic modeling in chemistry, biology, and materials science. However, the predictive power of these simulations is only as good as the underlying interatomic potential. Classical potentials are based on mechanistic models of interatomic interactions, which often fail to faithfully capture key quantum effects in molecules and materials. Here we enable the direct construction of flexible molecular force fields from high-level ab initio calculations by incorporating spatial and temporal physical symmetries into a gradient-domain machine learning (sGDML) model in an automatic data-driven way, thus greatly reducing the intrinsic complexity of the force field learning problem. The developed sGDML approach faithfully reproduces global force fields at quantum-chemical CCSD(T) level of accuracy [coupled cluster with single, double, and perturbative triple excitations] and for the first time allows converged molecular dynamics simulations with fully quantized electrons and nuclei for flexible molecules with up to a few dozen atoms. We present MD simulations for five molecules ranging from benzene to aspirin and demonstrate new insights into the dynamical behavior of these molecules. Our approach provides the key missing ingredient for achieving spectroscopic accuracy in molecular simulations.
- Feb 22 2018 quant-ph cond-mat.str-el arXiv:1802.07347v1 We present an algorithm that extends existing quantum algorithms for simulating fermion systems in quantum chemistry and condensed matter physics to include phonons. The phonon degrees of freedom are represented with exponential accuracy on a truncated Hilbert space with a size that increases linearly with the cutoff of the maximum phonon number. The additional number of qubits required by the presence of phonons scales linearly with the size of the system. The additional circuit depth is constant for systems with finite-range electron-phonon and phonon-phonon interactions and linear for long-range electron-phonon interactions. Our algorithm for a Holstein polaron problem was implemented on an Atos Quantum Learning Machine (QLM) quantum simulator employing the Quantum Phase Estimation method. The energy and the phonon number distribution of the polaron state agree with exact diagonalization results for weak, intermediate and strong electron-phonon coupling regimes.
- Finding efficient decoders for quantum error correcting codes adapted to realistic experimental noise in fault-tolerant devices represents a significant challenge. In this paper we introduce several decoding algorithms complemented by deep neural decoders and apply them to analyze several fault-tolerant error correction protocols such as the surface code as well as Steane and Knill error correction. Our methods require no knowledge of the underlying noise model afflicting the quantum device, making them appealing for real-world experiments. Our analysis is based on a full circuit-level noise model. It considers both distance-three and distance-five codes, and is performed near the codes' pseudo-threshold regime. Training deep neural decoders in low noise rate regimes appears to be a challenging machine learning endeavour. We provide a detailed description of our neural network architectures and training methodology. We then discuss both the advantages and limitations of deep neural decoders. Lastly, we provide a rigorous analysis of the decoding runtime of trained deep neural decoders and compare our methods with anticipated gate times in future quantum devices. Given the broad applications of our decoding schemes, we believe that the methods presented in this paper could have practical applications for near term fault-tolerant experiments.
- Feb 16 2018 quant-ph arXiv:1802.05428v3 The perceptron model is a fundamental linear classifier in machine learning and also the building block of artificial neural networks. Recently, Wiebe et al. (arXiv:1602.04799) proposed that the training of a perceptron can be sped up quadratically using Grover search on a quantum computer, which has potentially important big-data applications. In this paper, we design a quantum circuit for implementing this algorithm. The Grover oracle, the central part of the circuit, is realized by Quantum-Fourier-Transform-based arithmetic that specifies whether an input weight vector can correctly classify all training data samples. We also analyze the required number of qubits and universal gates for the algorithm, as well as the success probability, showing that uniform sampling yields a higher success probability than sampling from the spherical Gaussian distribution $N(0,1)$. The feasibility of the circuit is demonstrated by a testing example using the IBM-Q cloud quantum computer, where 16 qubits are used to classify four data samples.
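The quadratic speed-up at work here can be seen in a small classical statevector simulation of Grover search, where the oracle marks candidate weight vectors that classify all training samples. The `is_good` predicate below is a hypothetical stand-in for the paper's QFT-based arithmetic oracle:

```python
import math

def grover_search(n_items, is_good, iterations=None):
    """Classical statevector simulation of Grover search over n_items candidates.
    `is_good(i)` plays the oracle's role, e.g. "weight vector i classifies all samples"."""
    amps = [1.0 / math.sqrt(n_items)] * n_items        # uniform superposition
    if iterations is None:
        # optimal iteration count for a single marked item: ~ (pi/4) sqrt(N)
        iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        # oracle: flip the phase of good items
        amps = [-a if is_good(i) else a for i, a in enumerate(amps)]
        # diffusion operator: inversion about the mean amplitude
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]
    # measurement: return the most probable outcome
    return max(range(n_items), key=lambda i: amps[i] ** 2)
```

Roughly $\sqrt{N}$ oracle calls suffice where classical exhaustive search needs $N$, which is the quadratic training speed-up the abstract cites.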
- Feb 15 2018 quant-ph arXiv:1802.05267v2 Machine learning with artificial neural networks is revolutionizing science. While the most prevalent technique involves supervised training on queries with a known correct answer, more advanced challenges often require discovering answers autonomously. In reinforcement learning, control strategies are improved according to a reward function. The power of this approach has been highlighted by spectacular recent successes, such as playing Go. So far, it has remained an open question whether neural-network-based reinforcement learning can be successfully applied in physics. Here, we show how to use this method for finding quantum feedback schemes, where a network-based "agent" interacts with and occasionally decides to measure a quantum system. We illustrate the utility by finding gate sequences that preserve the quantum information stored in a small collection of qubits against noise. This specific application will help to find hardware-adapted feedback schemes for small quantum modules while demonstrating more generally the promise of neural-network-based reinforcement learning in physics.
- Feb 13 2018 physics.comp-ph cond-mat.dis-nn arXiv:1802.03930v2 In this letter, motivated by the question of whether the empirical fitting of data by a neural network can yield the same structure as physical laws, we apply a neural network to a simple quantum mechanical two-body scattering problem with short-range potentials, which by itself also plays an important role in many branches of physics. We train a neural network to accurately predict the $s$-wave scattering length, which governs the low-energy scattering physics, directly from the scattering potential without solving the Schrödinger equation or obtaining the wavefunction. After analyzing the neural network, it is shown that the neural network develops perturbation theory order by order as the potential increases. This provides an important benchmark for machine-assisted physics research, or even the automated machine learning of physical laws.
- Feb 13 2018 quant-ph cond-mat.str-el arXiv:1802.03738v2 Machine learning representations of many-body quantum states have recently been introduced as an ansatz to describe the ground states and unitary evolutions of many-body quantum systems. We explore one of the most important representations, the restricted Boltzmann machine (RBM) representation, in the stabilizer formalism. We give a general method for constructing RBM representations of stabilizer code states and find exact RBM representations for several types of stabilizer groups with the number of hidden neurons equal to or less than the number of visible neurons, which indicates that the representation is extremely efficient. We then analyze the surface code with boundaries, defects, domain walls and twists in full detail and find that all these models can be efficiently represented via RBM ansatz states. Besides, the case of Kitaev's $D(\mathbb{Z}_d)$ model, which is a generalized model of the surface code, is also investigated.
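An RBM ansatz assigns each visible (qubit) configuration $v \in \{0,1\}^n$ the unnormalized amplitude $\psi(v) = e^{\sum_i a_i v_i}\prod_j 2\cosh(b_j + \sum_i W_{ij} v_i)$; representing a state means choosing $a$, $b$, $W$ so these amplitudes match it. A minimal evaluator of this standard form (the parameters below are placeholders, not the stabilizer-code solutions constructed in the paper):

```python
import math

def rbm_amplitude(v, a, b, W):
    """Unnormalized RBM wavefunction amplitude for a visible configuration v:
    psi(v) = exp(sum_i a_i v_i) * prod_j 2 cosh(b_j + sum_i W_ij v_i),
    obtained by summing out the hidden units analytically."""
    visible = math.exp(sum(ai * vi for ai, vi in zip(a, v)))
    hidden = 1.0
    for j in range(len(b)):
        theta = b[j] + sum(W[i][j] * v[i] for i in range(len(v)))
        hidden *= 2.0 * math.cosh(theta)
    return visible * hidden
```

With all parameters zero the ansatz reduces to the equal-weight superposition; nontrivial stabilizer states correspond to specific (generally complex) parameter choices worked out in the paper.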
- Many important problems are characterized by the eigenvalues of a large matrix. For example, the difficulty of many optimization problems, such as those arising from the fitting of large models in statistics and machine learning, can be investigated via the spectrum of the Hessian of the empirical loss function. Network data can be understood via the eigenstructure of a graph Laplacian matrix using spectral graph theory. Quantum simulations and other many-body problems are often characterized via the eigenvalues of the solution space, as are various dynamical systems. However, naive eigenvalue estimation is computationally expensive even when the matrix can be represented; in many of these situations the matrix is so large as to only be available implicitly via products with vectors. Even worse, one may only have noisy estimates of such matrix-vector products. In this work, we combine several different techniques for randomized estimation and show that it is possible to construct unbiased estimators to answer a broad class of questions about the spectra of such implicit matrices, even in the presence of noise. We validate these methods on large-scale problems in which graph theory and random matrix theory provide ground truth.
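The core primitive can be illustrated with a Hutchinson-style estimator (a standard randomized technique, not necessarily the paper's exact construction): Rademacher probe vectors give an unbiased trace estimate from matrix-vector products alone, and the estimate stays unbiased when the products carry zero-mean noise. The matrix below is a made-up stand-in with a known spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
eigs = np.linspace(0.1, 1.0, n)
A = (Q * eigs) @ Q.T                 # implicit matrix with known spectrum

def matvec(v):
    """Noisy matrix-vector product: all we assume access to."""
    return A @ v + 0.05 * rng.normal(size=v.shape)

def hutchinson_trace(matvec, n_dim, n_probes=500):
    # E[z^T A z] = tr(A) for Rademacher z, so averaging z^T(Az) over
    # random probes estimates tr(A) without ever forming A.
    est = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n_dim)
        est += z @ matvec(z)
    return est / n_probes

trace_est = hutchinson_trace(matvec, n)
trace_true = float(eigs.sum())
```

Replacing `matvec(z)` with repeated applications (`matvec(matvec(z))`, etc.) estimates higher spectral moments tr(A^k) in the same way.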
- In this work we introduce the application of black-box quantum control as an interesting reinforcement learning problem to the machine learning community. We analyze the structure of the reinforcement learning problems arising in quantum physics and argue that agents parameterized by long short-term memory (LSTM) networks trained via stochastic policy gradients yield a general method for solving them. In this context we introduce a variant of the proximal policy optimization (PPO) algorithm called memory proximal policy optimization (MPPO), which is based on this analysis. We then show how it can be applied to specific learning tasks and present results of numerical experiments showing that our method achieves state-of-the-art results for several learning tasks in quantum control with discrete and continuous control parameters.
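The underlying idea, improving a stochastic policy from reward alone with no gradient of the black-box environment, can be sketched with plain REINFORCE on a two-action toy problem. The rewards, learning rate, and baseline below are invented for illustration; the paper's MPPO agent with LSTM memory is far more involved:

```python
import numpy as np

rng = np.random.default_rng(1)
rewards = np.array([0.2, 0.8])   # hypothetical black-box reward per "control pulse"
theta = np.zeros(2)              # policy logits
baseline = 0.0                   # running reward mean (variance reduction)
lr = 0.5

for _ in range(2000):
    p = np.exp(theta - theta.max()); p /= p.sum()   # softmax policy
    a = rng.choice(2, p=p)                          # sample an action
    r = rewards[a] + 0.1 * rng.normal()             # noisy black-box reward
    grad = -p; grad[a] += 1.0                       # d log pi(a) / d theta
    theta += lr * (r - baseline) * grad             # REINFORCE update
    baseline += 0.05 * (r - baseline)

p_final = np.exp(theta - theta.max()); p_final /= p_final.sum()
```

After training, the policy concentrates on the higher-reward action, having never differentiated through the environment itself.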
- Feb 06 2018 quant-ph arXiv:1802.01520v1 PhD thesis investigating homological quantum codes derived from curved and higher-dimensional geometries. In the first part we consider closed surfaces with constant negative curvature. We show how such surfaces can be constructed and enumerate all quantum codes derived from them with fewer than 10,000 physical qubits. For codes that are extremal in a certain sense we perform numerical simulations to determine the value of their threshold. Furthermore, we give evidence that these codes allow for storage that is more overhead-efficient than the surface code by orders of magnitude. We also show how to read and write the encoded qubits while keeping their connectivity low. In the second part we consider codes in which qubits are laid out according to a four-dimensional geometry. Such codes allow for much simpler decoding schemes than two-dimensional codes. In particular, measurements do not necessarily have to be repeated to obtain reliable information about the error, and the classical hardware performing the error correction is greatly simplified. We perform numerical simulations to analyze the performance of these codes using decoders based on local updates. We also introduce a novel decoder based on techniques from machine learning and image recognition to decode four-dimensional codes.
- Jan 31 2018 quant-ph arXiv:1801.09684v1 Machine learning is actively being explored for its potential to design, validate, and even hybridize with near-term quantum devices. Stochastic neural networks will play a central role in state tomography, due to their ability to model a quantum wavefunction. However, to be useful in real experiments such methods must be able to reconstruct general quantum mixed states. Here, we parameterize a density matrix based on a restricted Boltzmann machine that is capable of purifying an arbitrary state through auxiliary degrees of freedom embedded in the latent space of its hidden units. We implement the algorithm numerically and use it to perform tomography on some typical states of entangled photons, achieving fidelities competitive with standard techniques.
- Jan 24 2018 quant-ph arXiv:1801.07686v1 Hybrid quantum-classical approaches, such as the variational quantum eigensolver, provide ways to use near-term, pre-fault-tolerant quantum computers of intermediate size for practical applications. Expanding the portfolio of such techniques, we propose a quantum circuit learning algorithm that can be used to assist the characterization of quantum devices and to train shallow circuits for generative tasks. The procedure leverages quantum hardware capabilities to their fullest extent by using native gates and their qubit connectivity. We demonstrate that our approach can learn an optimal preparation of the maximally entangled Greenberger-Horne-Zeilinger states, also known as cat states. The circuit layout needed to prepare cat states for any number of qubits is then obtained by analyzing the resulting circuit patterns for odd and even numbers of qubits. We further demonstrate that our hybrid approach can efficiently prepare approximate representations of coherent thermal states, wave functions that encode Boltzmann probabilities in their amplitudes. Finally, complementing proposals to characterize the power or usefulness of near-term quantum devices, such as IBM's quantum volume, we provide a new hardware-independent metric called the qBAS score. It is based on the performance yield in a specific sampling task on one of the canonical machine learning data sets, known as Bars and Stripes. We show how entanglement is a key ingredient in encoding the patterns of this data set into the quantum distribution resulting from the shallow quantum circuit, making it an ideal benchmark for testing hardware proposals of four qubits and up. We provide experimental results and an evaluation of this metric to probe the trade-off between several architectural circuit designs and circuit depths on an ion-trap quantum computer.
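For reference, the Bars and Stripes patterns behind the qBAS score are easy to enumerate. This is the standard BAS construction; the qBAS score itself, a precision/recall-style yield on samples drawn from the circuit, is defined in the paper:

```python
from itertools import product

def bars_and_stripes(n, m):
    """All valid BAS patterns on an n x m grid, flattened row-major."""
    patterns = set()
    for bits in product([0, 1], repeat=n):        # stripes: one bit per row
        patterns.add(tuple(b for b in bits for _ in range(m)))
    for bits in product([0, 1], repeat=m):        # bars: one bit per column
        patterns.add(tuple(bits) * n)
    return sorted(patterns)

# An n x m grid has 2^n + 2^m - 2 distinct patterns (all-0 and all-1
# grids are both bars and stripes, hence the -2).
bas_2x2 = bars_and_stripes(2, 2)
```

A generative model scores well on this task only if it assigns high probability to exactly these patterns, which is what makes BAS a convenient sampling benchmark.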
- Detecting a change point is a crucial task in statistics that has recently been extended to the quantum realm. A source that emits a series of single photons in a default state suffers an alteration at some point and starts to emit photons in a mutated state. The problem consists in identifying the point where the change took place. In this work, we consider a learning agent that applies Bayesian inference on experimental data to solve this problem. This learning machine adjusts the measurement over each photon according to past experimental results and finds the change position in an online fashion. Our results show that the local-detection success probability can be largely improved by using such a machine learning technique. This protocol provides a tool for improvement in many applications where a sequence of identical quantum states is required.
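The Bayesian inference step has a simple classical analogue with a fixed measurement (the quantum agent additionally adapts the measurement photon by photon; the outcome probabilities `p0`, `p1` and sequence length below are made up): each outcome is 1 with probability p0 before the change and p1 after, and we keep a posterior over the change position k.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k_true = 60, 25
p0, p1 = 0.2, 0.8
data = np.concatenate([rng.random(k_true) < p0,
                       rng.random(n - k_true) < p1]).astype(int)

def change_posterior(data, p0, p1):
    """Posterior over k = number of outcomes emitted before the change."""
    n = len(data)
    logpost = np.zeros(n + 1)     # uniform prior over change positions
    for k in range(n + 1):
        pre, post = data[:k], data[k:]
        logpost[k] = (np.log(np.where(pre == 1, p0, 1 - p0)).sum()
                      + np.log(np.where(post == 1, p1, 1 - p1)).sum())
    w = np.exp(logpost - logpost.max())
    return w / w.sum()

posterior = change_posterior(data, p0, p1)
k_hat = int(np.argmax(posterior))   # MAP estimate of the change position
```

Updating this posterior after each new outcome (rather than in one batch) is what makes the procedure online.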
- Jan 24 2018 quant-ph arXiv:1801.07418v1 We present a novel and simple estimate of the minimal dimension required for an effective reservoir in open quantum systems. Using a tensor network formalism, we introduce a new object called the reservoir network (RN): a tensor network in the form of a matrix product state which contains all effects of the open dynamics. This object is especially useful for understanding memory effects. We discuss possible applications of the reservoir network and of the dimension estimate in developing new numerical and machine-learning-based methods for open quantum systems.
- Jan 18 2018 quant-ph arXiv:1801.05417v1 In recent years, along with the overwhelming advances in the field of neural information processing, quantum information processing (QIP) has shown significant progress in solving problems that are intractable on classical computers. Quantum machine learning (QML) explores the ways in which these fields can learn from one another. We propose quantum walk neural networks (QWNN), a new graph neural network architecture based on quantum random walks, the quantum analogue of classical random walks. A QWNN learns a quantum walk on a graph to construct a diffusion operator that can be applied to a signal on the graph. We demonstrate the use of the network for prediction tasks on graph-structured signals.
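The classical object a QWNN generalizes is easy to write down: a random-walk operator P = D^{-1}A applied repeatedly to a node signal (the graph and signal below are invented; the QWNN replaces P with a learned quantum-walk operator):

```python
import numpy as np

# Path graph on 4 nodes: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # row-stochastic walk matrix D^{-1} A

x = np.array([1.0, 0.0, 0.0, 0.0])     # signal concentrated on node 0
diffused = x
for _ in range(3):
    diffused = P.T @ diffused          # one diffusion step (x' = P^T x)
```

Three steps spread the mass along the path while conserving it, which is the behavior a learned diffusion operator inherits and then tunes for the prediction task.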
- Jan 18 2018 hep-lat cond-mat.dis-nn arXiv:1801.05784v1 Numerical lattice quantum chromodynamics studies of the strong interaction are important in many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. The high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
- Jan 16 2018 cond-mat.mtrl-sci physics.chem-ph arXiv:1801.04900v3 We present a proof of concept that machine learning techniques can be used to predict the properties of CNOHF energetic molecules from their molecular structures. We focus on a small but diverse dataset consisting of 109 molecular structures spread across ten compound classes. Until now, candidate molecules for energetic materials have been screened using predictions from expensive quantum simulations and thermochemical codes. We present a comprehensive comparison of machine learning models and several molecular featurization methods: sum over bonds, custom descriptors, Coulomb matrices, bag of bonds, and fingerprints. The best featurization was sum over bonds (bond counting), and the best model was kernel ridge regression. Despite the small dataset, we obtain acceptable errors and Pearson correlations for the out-of-sample prediction of detonation pressure, detonation velocity, explosive energy, heat of formation, density, and other properties. By including another dataset with 309 additional molecules in our training, we show that the error can be pushed lower, although convergence with the number of molecules is slow. Our work paves the way for future applications of machine learning in this domain, including automated lead generation and interpreting machine learning models to obtain novel chemical insights.
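A minimal kernel-ridge-regression sketch in the spirit of the sum-over-bonds featurization. The bond-count features, target weights, kernel width, and regularization below are all invented; only the model class matches the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.integers(0, 6, size=(40, 4)).astype(float)   # counts of 4 bond types
w = np.array([1.0, -0.5, 2.0, 0.3])                  # made-up property weights
y = X @ w + 0.1 * rng.normal(size=40)                # noisy training targets

def rbf_kernel(A, B, gamma=0.05):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-3
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # KRR dual weights

X_test = rng.integers(0, 6, size=(10, 4)).astype(float)
y_pred = rbf_kernel(X_test, X) @ alpha
y_true = X_test @ w
```

The closed-form dual solve is why KRR is attractive for datasets of this size: training is a single linear system in the number of molecules, not the number of features.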
- Jan 16 2018 quant-ph arXiv:1801.04377v1 Quantum error correction is an essential technique for constructing a scalable quantum computer. In order to implement quantum error correction with near-term quantum devices, a fast and near-optimal decoding method is needed. A decoder based on machine learning is considered one of the most viable solutions for this purpose, since its prediction is fast once training is done, and it is applicable to any quantum error-correcting code and any noise model. So far, various formulations of the decoding problem as a machine learning task have been proposed. Here, we discuss general constructions of machine-learning-based decoders. We find several conditions for achieving near-optimal performance and propose a criterion that should be optimized when the size of the training data set is limited. We also discuss preferable constructions of neural networks and propose a decoder that exploits the spatial structure of topological codes via a convolutional neural network. We numerically show that our method can improve the performance of machine-learning-based decoders across various topological codes and noise models.
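The supervised framing can be made concrete on the smallest example, a 3-qubit repetition code under i.i.d. bit flips (an assumption of this sketch; the paper targets topological codes and convolutional networks): enumerate (syndrome, error) pairs under the noise model and label each syndrome with its most likely error, which is exactly the mapping a trained decoder should reproduce.

```python
import numpy as np
from itertools import product

H = np.array([[1, 1, 0],     # parity checks of the 3-bit repetition code
              [0, 1, 1]])
p = 0.05                     # independent bit-flip probability per qubit

# Enumerate all errors, group by syndrome, keep the most probable error.
best = {}
for bits in product([0, 1], repeat=3):
    e = np.array(bits)
    s = tuple(H @ e % 2)                         # syndrome of this error
    prob = np.prod(np.where(e == 1, p, 1 - p))   # likelihood under the noise
    if s not in best or prob > best[s][1]:
        best[s] = (e, prob)

decoder = {s: e for s, (e, _) in best.items()}   # syndrome -> correction
```

For topological codes this table is exponentially large, which is precisely why it gets replaced by a neural network that generalizes from a limited training set.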
- Inspired by the fact that the neural network, as the mainstream tool of machine learning, has brought success in many application areas, we propose to use this approach for decoding hidden correlations among pseudo-random data and predicting events accordingly. With a simple neural network structure and a typical training procedure, we demonstrate the learning and prediction power of the neural network in an extremely random environment. Finally, we postulate that the high sensitivity and efficiency of the neural network may allow one to critically test whether there is any fundamental difference between quantum randomness and pseudo-randomness, which is equivalent to the question: does God play dice?