Search results for "quantum machine learning"

- Jul 21 2017 physics.chem-ph stat.ML arXiv:1707.06338v1 Understanding the relationship between the structure of light-harvesting systems and their excitation energy transfer properties is of fundamental importance in many applications, including the development of next-generation photovoltaics. Natural light harvesting in photosynthesis shows remarkable excitation energy transfer properties, which suggests that pigment-protein complexes could serve as blueprints for the design of nature-inspired devices. Mechanistic insights into energy transport dynamics can be gained by leveraging numerically involved propagation schemes such as the hierarchical equations of motion (HEOM). Solving these equations, however, is computationally costly due to the adverse scaling with the number of pigments. Therefore, virtual high-throughput screening, which has become a powerful tool in materials discovery, is less readily applicable to the search for novel excitonic devices. We propose the use of artificial neural networks to bypass the computational limitations of established techniques for exploring the structure-dynamics relation in excitonic systems. Once trained, our neural networks reduce computational costs by several orders of magnitude. Our predicted transfer times and transfer efficiencies exhibit similar or even higher accuracies than frequently used approximate methods such as secular Redfield theory.
- Jul 05 2017 quant-ph arXiv:1707.00986v3 A network of driven nonlinear oscillators without dissipation has recently been proposed for solving combinatorial optimization problems via quantum adiabatic evolution through its bifurcation point. Here we investigate the behavior of the quantum bifurcation machine in the presence of dissipation. Our numerical study suggests that the output probability distribution of the dissipative quantum bifurcation machine is Boltzmann-like, where the energy in the Boltzmann distribution corresponds to the cost function of the optimization problem. We explain the Boltzmann distribution by generalizing the concept of quantum heating in a single oscillator to the case of multiple coupled oscillators. The present result also suggests that such driven dissipative nonlinear oscillator networks can be applied to Boltzmann sampling, which is used, e.g., for Boltzmann machine learning in the field of artificial intelligence.
- The application of state-of-the-art machine learning techniques to statistical physics problems has seen a surge of interest for their ability to discriminate phases of matter by extracting essential features in the many-body wavefunction or the ensemble of correlators sampled in Monte Carlo simulations. Here we introduce a generalization of supervised machine learning approaches that allows one to accurately map out phase diagrams of interacting many-body systems without any prior knowledge, e.g. of their general topology or the number of distinct phases. To substantiate the versatility of this approach, which combines convolutional neural networks with quantum Monte Carlo sampling, we map out the phase diagrams of interacting boson and fermion models at both zero and finite temperatures and show that first-order, second-order, and Kosterlitz-Thouless phase transitions can all be identified. We explicitly demonstrate that our approach is capable of identifying the phase transition to non-trivial many-body phases such as superfluids or topologically ordered phases without supervision.
- Jul 04 2017 quant-ph arXiv:1707.00360v1 With the significant advancement in quantum computation in the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers. Our algorithm shows that a continuous-variable quantum computer can achieve a dramatic speed-up in computing Gaussian process regression, with the possibility of exponentially reducing the computation time. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for non-sparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.
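The classical computation being accelerated in the abstract above is the Gaussian process posterior mean, which requires solving a linear system in the kernel matrix. A minimal classical sketch of that baseline, not the paper's quantum algorithm; the kernel choice, data, and parameters are illustrative:

```python
import math

def rbf(x1, x2, ell=1.0):
    # Squared-exponential (RBF) kernel
    return math.exp(-((x1 - x2) ** 2) / (2 * ell ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (fine for small systems)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6):
    # Posterior mean: k_*^T (K + noise*I)^{-1} y -- the linear system is
    # exactly the step a quantum linear-algebra subroutine would speed up
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    return sum(rbf(x_star, xs[i]) * alpha[i] for i in range(n))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.sin(x) for x in xs]
print(gp_predict(xs, ys, 1.0))  # close to sin(1.0)
```

With near-zero noise the posterior mean interpolates the training points, which makes the role of the kernel-matrix solve easy to see.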
- Determining the best method for training a machine learning algorithm is critical to maximizing its ability to classify data. In this paper, we compare the standard "fully supervised" approach (that relies on knowledge of event-by-event truth-level labels) with a recent proposal that instead utilizes class ratios as the only discriminating information provided during training. This so-called "weakly supervised" technique has access to less information than the fully supervised method and yet is still able to yield impressive discriminating power. In addition, weak supervision seems particularly well suited to particle physics since quantum mechanics is incompatible with the notion of mapping an individual event onto any single Feynman diagram. We examine the technique in detail -- both analytically and numerically -- with a focus on the robustness to issues of mischaracterizing the training samples. Weakly supervised networks turn out to be remarkably insensitive to systematic mismodeling. Furthermore, we demonstrate that the event level outputs for weakly versus fully supervised networks are probing different kinematics, even though the numerical quality metrics are essentially identical. This implies that it should be possible to improve the overall classification ability by combining the output from the two types of networks. For concreteness, we apply this technology to a signature of beyond the Standard Model physics to demonstrate that all these impressive features continue to hold in a scenario of relevance to the LHC. Example code is provided at https://github.com/bostdiek/PublicWeaklySupervised/tree/master.
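The core idea of learning from class ratios alone can be illustrated with a toy one-dimensional version: fit a decision threshold so that the predicted signal fraction in each training batch matches the known batch-level proportion, without ever using per-event labels. This is a hypothetical sketch, not the paper's neural-network setup; the distributions and numbers are invented:

```python
import random

random.seed(0)

# Two classes: background ~ N(0, 0.5), signal ~ N(3, 0.5); true boundary near 1.5
def make_batch(n, signal_frac):
    n_sig = int(n * signal_frac)
    return ([random.gauss(3.0, 0.5) for _ in range(n_sig)]
            + [random.gauss(0.0, 0.5) for _ in range(n - n_sig)])

# Only the per-batch signal fractions are known, never event-level labels
batches = [(make_batch(500, f), f) for f in (0.2, 0.7)]

def proportion_loss(t):
    # Squared error between predicted and known signal fractions per batch
    loss = 0.0
    for xs, frac in batches:
        pred = sum(1 for x in xs if x > t) / len(xs)
        loss += (pred - frac) ** 2
    return loss

# Grid-search the threshold that best reproduces the known class ratios
best_t = min((t * 0.01 for t in range(-100, 500)), key=proportion_loss)
print(best_t)  # lands near the true boundary at 1.5
```

The recovered threshold sits near the true class boundary even though no individual event was ever labeled, which is the essence of the weak-supervision claim.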
- Jun 27 2017 cond-mat.stat-mech cond-mat.quant-gas arXiv:1706.07977v1 This work asks whether artificial intelligence can recognize phase transitions without prior human knowledge. If successful, this capability could be applied, for instance, to analyzing data from quantum simulations of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and check whether the outputs are consistent with our prior knowledge, which serves as a benchmark of this approach. In this work, we feed the computer with data generated by classical Monte Carlo simulations of the XY model on frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of principal component analysis agree very well with our understanding of the different orders in the different phases, and that the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and neural network methods.
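A hedged sketch of the kind of analysis described in the abstract above: run principal component analysis (here via power iteration) on synthetic "ordered" versus "disordered" configurations standing in for Monte Carlo data, and observe that the leading component separates the two phases. The toy data generator is an assumption for illustration, not the paper's XY-model sampler:

```python
import math, random

random.seed(1)

# Toy stand-in for Monte Carlo data: low-T configs are nearly aligned spins,
# high-T configs have random angles (we store one cosine component per spin)
def config(n_spins, ordered):
    if ordered:
        return [math.cos(random.gauss(0.0, 0.3)) for _ in range(n_spins)]
    return [math.cos(random.uniform(0, 2 * math.pi)) for _ in range(n_spins)]

data = [config(8, True) for _ in range(50)] + [config(8, False) for _ in range(50)]

# Center the data and form the covariance matrix
d = len(data[0])
mean = [sum(row[j] for row in data) / len(data) for j in range(d)]
X = [[row[j] - mean[j] for j in range(d)] for row in data]
cov = [[sum(r[i] * r[j] for r in X) / len(X) for j in range(d)] for i in range(d)]

# Power iteration for the leading principal component
v = [1.0] * d
for _ in range(200):
    w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
    norm = math.sqrt(sum(c * c for c in w))
    v = [c / norm for c in w]

# Project each configuration onto the first PC; the two "phases" separate
proj = [sum(v[j] * r[j] for j in range(d)) for r in X]
ordered_mean = sum(proj[:50]) / 50
disordered_mean = sum(proj[50:]) / 50
print(ordered_mean, disordered_mean)
```

The two phase means land far apart along the first principal component, which is the unsupervised signal the paper exploits (their real analysis applies this to genuine Monte Carlo configurations and tracks the components versus temperature).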
- Jun 27 2017 quant-ph cond-mat.dis-nn arXiv:1706.08470v2 Quantum annealers aim at solving non-convex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists in designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable quantum transverse field to generate tunneling processes. A key challenge is to identify classes of non-convex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause an exponential slowdown of classical thermal annealers, while quantum annealing converges efficiently to rare dense regions of optimal solutions.
- Quantum computing for machine learning attracts increasing attention, and recent technological developments suggest that adiabatic quantum computing in particular may soon be of practical interest. In this paper, we therefore consider this paradigm and discuss how to adapt it to the problem of binary clustering. Numerical simulations demonstrate the feasibility of our approach and illustrate how systems of qubits adiabatically evolve towards a solution.
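Adiabatic formulations of binary clustering typically encode cluster assignments as spins and within-cluster distances as an energy to be minimized. A brute-force classical sketch of such an encoding, with an illustrative cost function and data rather than the paper's exact formulation:

```python
from itertools import product

# Toy data: two well-separated 1-D clusters
points = [0.0, 0.2, 0.4, 5.0, 5.2, 5.4]

def sq(a, b):
    return (a - b) ** 2

# Ising-style cost: pay the squared distance whenever two points share a
# cluster, so the minimum-energy spin assignment splits distant points apart
def energy(spins):
    return sum(sq(points[i], points[j])
               for i in range(len(points))
               for j in range(i + 1, len(points))
               if spins[i] == spins[j])

# Brute-force stand-in for the adiabatic evolution: enumerate all assignments
best = min(product((-1, 1), repeat=len(points)), key=energy)
print(best)  # the three low points share one spin, the three high points the other
```

On a quantum annealer the same cost function would be written as couplings between qubits and the ground state found by adiabatic evolution instead of enumeration.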
- Jun 20 2017 quant-ph physics.chem-ph arXiv:1706.05413v2 The NSF Workshop in Quantum Information and Computation for Chemistry assembled experts from directly quantum-oriented fields such as algorithms, chemistry, machine learning, optics, simulation, and metrology, as well as experts in related fields such as condensed matter physics, biochemistry, physical chemistry, inorganic and organic chemistry, and spectroscopy. The goal of the workshop was to summarize recent progress in research at the interface of quantum information science and chemistry and to discuss the promising research challenges and opportunities in the field. Furthermore, the workshop aimed to identify target areas where cross-fertilization among these fields would yield the largest payoff for developments in theory, algorithms, and experimental techniques. The ideas can be broadly categorized into two distinct areas of research that naturally interact and are not cleanly separated. The first area is quantum information for chemistry: how quantum information tools, both experimental and theoretical, can aid our understanding of a wide range of problems pertaining to chemistry. The second area is chemistry for quantum information, which discusses the several aspects where research in the chemical sciences can aid progress in quantum information science and technology. The results of the workshop are summarized in this report.
- Given a quantum (or statistical) system with a very large number of degrees of freedom and a preferred tensor product factorization of the Hilbert space (or of a space of distributions), we describe how it can be approximated with a very low-dimensional field theory with geometric degrees of freedom. The geometric approximation procedure consists of three steps. The first step is to construct weighted graphs (which we call information graphs) with vertices representing subsystems (e.g. qubits or random variables) and edges representing mutual information (or the flow of information) between subsystems. The second step is to deform the adjacency matrices of the information graphs to that of a (locally) low-dimensional lattice using the graph flow equations introduced in the paper. (Note that the graph flow produces very sparse adjacency matrices and thus might also be used, for example, in machine learning or network science, where the task of graph sparsification is of central importance.) The third step is to define an emergent metric and to derive an effective description of the metric and possibly other degrees of freedom. To illustrate the procedure we analyze (numerically and analytically) two information graph flows with geometric attractors (towards locally one- and two-dimensional lattices) and metric perturbations obeying a geometric flow equation. Our analysis also suggests a possible approach to (a non-perturbative) quantum gravity in which the geometry (a secondary object) emerges directly from a quantum state (a primary object) due to the flow of the information graphs.
- Jun 07 2017 quant-ph arXiv:1706.01561v1 The primary questions in the emerging field of quantum machine learning (QML) are, "Is it possible to achieve quantum speed-up of machine learning?" and "What quantum effects contribute to the speed-up, and how?" Satisfactory answers to such questions have recently been provided by the quantum support vector machine (QSVM), which classifies many quantum data vectors by exploiting their quantum parallelism. However, this demands the realization of a full-scale quantum computer that can process big data to learn quantum-mechanically, while nearly all the real-world data that users recognize are measured, and hence classical, data. The following important question then arises: "Can quantum learning speed-up be attained even with user-recognizable (classical) data?" Here, we provide an affirmative answer to this question by performing proof-of-principle experiments.
- How useful can machine learning be in a quantum laboratory? Here we raise the question of the potential of intelligent machines in the context of scientific research. We investigate this question using the projective simulation model, a physics-oriented approach to artificial intelligence. In our approach, the projective simulation system is challenged to design complex photonic quantum experiments that produce high-dimensional entangled multiphoton states, which are of high interest in modern quantum experiments. The artificial intelligence system learns to create a variety of entangled states, surpassing in number the best previously studied automated approaches, and improves the efficiency of their realization. In the process, the system autonomously (re)discovers experimental techniques which are only now becoming standard in modern quantum optical experiments - a trait which was not explicitly demanded of the system but emerged through the process of learning. Such features highlight the possibility that machines could play a significantly more creative role in future research.
- Recent theoretical and experimental results suggest the possibility of using current and near-future quantum hardware in challenging sampling tasks. In this paper, we introduce free energy-based reinforcement learning (FERL) as an application of quantum hardware. We propose a method for processing a quantum annealer's measured qubit spin configurations in approximating the free energy of a quantum Boltzmann machine (QBM). We then apply this method to perform reinforcement learning on the grid-world problem using the D-Wave 2000Q quantum annealer. The experimental results show that our technique is a promising method for harnessing the power of quantum sampling in reinforcement learning tasks.
- Jun 02 2017 cond-mat.mtrl-sci physics.chem-ph arXiv:1706.00179v1 Determining the stability of molecules and condensed phases is the cornerstone of atomistic modelling, underpinning our understanding of chemical and materials properties and transformations. Here we show that a machine learning model, based on a local description of chemical environments and Bayesian statistical learning, provides a unified framework to predict atomic-scale properties. It captures the quantum mechanical effects governing the complex surface reconstructions of silicon, predicts the stability of different classes of molecules with chemical accuracy, and distinguishes active and inactive protein ligands with more than 99% reliability. The universality and the systematic nature of our framework provide new insight into the potential energy surface of materials and molecules.
- In this paper, we propose a classification method based on a learning paradigm we call Quantum Low Entropy based Associative Reasoning, or QLEAR learning. The approach is based on the idea that classification can be understood as supervised clustering, where quantum entropy, in the context of a quantum probabilistic model, is used as a "capturer" (measure, or external index) of the "natural structure" of the data. By using quantum entropy we make no assumption about the linear separability of the data to be classified. The basic idea is to find close neighbors of a query sample and then use the relative change in quantum entropy as a measure of the similarity of the newly arrived sample to the representatives of interest. In other words, the method is based on calculating the quantum entropy of a referent system and its relative change upon the addition of the newly arrived sample. The referent system consists of the vectors that represent the individual classes and that are the most similar, in the Euclidean-distance sense, to the vector being analyzed. Here, we analyze the classification problem in the context of measuring similarities to prototype examples of categories. While nearest neighbor classifiers are natural in this setting, they suffer from high variance (in the bias-variance decomposition) in the case of limited sampling. Alternatively, one could use machine learning techniques such as support vector machines, but they involve time-consuming optimization. Here we propose a hybrid of nearest neighbor and machine learning techniques which deals naturally with the multi-class setting, has reasonable computational complexity both in training and at run time, and yields excellent results in practice.
- May 23 2017 quant-ph cond-mat.dis-nn arXiv:1705.07855v1 A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed to achieve optimal performance. The long short-term memory cell of the recurrent neural network maintains this performance over a large number of error correction cycles, making it a practical decoder for forthcoming experimental realizations. On a density-matrix simulation of the 17-qubit surface code, our neural network decoder achieves a substantial performance improvement over a state-of-the-art blossom decoder.
- May 18 2017 physics.chem-ph arXiv:1705.05907v1 Machine learning has emerged as an invaluable tool in many research areas. In the present work, we harness this power to predict highly accurate molecular infrared spectra with unprecedented computational efficiency. To account for vibrational anharmonic and dynamical effects -- typically neglected by conventional quantum chemistry approaches -- we base our machine learning strategy on ab initio molecular dynamics simulations. While these simulations are usually extremely time consuming even for small molecules, we overcome these limitations by leveraging the power of a variety of machine learning techniques, not only accelerating simulations by several orders of magnitude, but also greatly extending the size of systems that can be treated. To this end, we develop a molecular dipole moment model based on environment dependent neural network charges and combine it with the neural network potentials of Behler and Parrinello. Contrary to the prevalent big data philosophy, we are able to obtain very accurate machine learning models for the prediction of infrared spectra based on only a few hundreds of electronic structure reference points. This is made possible through the introduction of a fully automated sampling scheme and the use of molecular forces during neural network potential training. We demonstrate the power of our machine learning approach by applying it to model the infrared spectra of a methanol molecule, n-alkanes containing up to 200 atoms and the protonated alanine tripeptide, which at the same time represents the first application of machine learning techniques to simulate the dynamics of a peptide. In all these case studies we find excellent agreement between the infrared spectra predicted via machine learning models and the respective theoretical and experimental spectra.
- Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, the learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with in the present information-processing era.
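As a concrete reference point for the gradient-descent baseline discussed above, here is a minimal from-scratch backpropagation loop for a one-hidden-layer FNN on the XOR problem. The architecture and hyperparameters are arbitrary illustrative choices, not from the survey:

```python
import math, random

random.seed(42)

# XOR: the classic problem a single-layer network cannot solve
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sig(sum(w1[i][j] * x[j] for j in range(2)) + b1[i]) for i in range(H)]
    return h, sig(sum(w2[i] * h[i] for i in range(H)) + b2)

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

initial = loss()
lr = 0.5
for _ in range(3000):
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)          # output-layer delta
        for i in range(H):
            d_h = d_out * w2[i] * h[i] * (1 - h[i])  # hidden-layer delta
            w2[i] -= lr * d_out * h[i]
            for j in range(2):
                w1[i][j] -= lr * d_h * x[j]
            b1[i] -= lr * d_h
        b2 -= lr * d_out
print(initial, loss())  # loss drops substantially from its initial value
```

The survey's point is that this kind of local gradient update can stall in poor regions of weight space, which is precisely what motivates the metaheuristic alternatives it catalogues.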
- Correlated many-body problems ubiquitously appear in various fields of physics such as condensed matter physics, nuclear physics, and statistical physics. However, due to the interplay of the large number of degrees of freedom, it is generically impossible to treat these problems from first principles. Thus the construction of a proper model, namely an effective Hamiltonian, is essential. Here, we propose a simple scheme for constructing Hamiltonians from given energy or entanglement spectra with machine learning. Taking the Hubbard model at half-filling as an example, we show that we can optimize the parameters of a trial Hamiltonian and automatically find a reduced description of the original model in a way that the estimation bias and error are well controlled. The same approach can be used to construct the entanglement Hamiltonian of a quantum many-body state from its entanglement spectrum. We exemplify this using the ground states of the $S=1/2$ two-leg Heisenberg ladders and point out the importance of multi-spin interactions in the entanglement Hamiltonian. We observe a qualitative difference between the entanglement Hamiltonians of the two phases (the Haldane phase and the Rung Singlet phase) of the model, though their field-theoretical descriptions are almost equivalent. Possible applications to the study of strongly-correlated systems and the model construction from experimental data are discussed.
- This work demonstrates how to accelerate dense linear algebra computations using CLBlast, an open-source OpenCL BLAS library providing optimized routines for a wide variety of devices. It is targeted at machine learning and HPC applications and thus provides a fast matrix-multiplication routine (GEMM) to accelerate the core of many applications (e.g. deep learning, iterative solvers, astrophysics, computational fluid dynamics, quantum chemistry). CLBlast has four main advantages over other BLAS libraries: 1) it is optimized for and tested on a large variety of OpenCL devices including less commonly used devices such as embedded and low-power GPUs, 2) it can be explicitly tuned for specific problem-sizes on specific hardware platforms, 3) it can perform operations in half-precision floating-point FP16 saving precious bandwidth, time and energy, 4) and it can combine multiple operations in a single batched routine, accelerating smaller problems significantly. This paper describes the library and demonstrates the advantages of CLBlast experimentally for different use-cases on a wide variety of OpenCL hardware.
- After decades of progress and effort, obtaining a phase diagram for a strongly-correlated topological system still remains a challenge. Although in principle one could turn to Wilson loops and long-range entanglement, evaluating these non-local observables at many points in phase space can be prohibitively costly. With growing excitement over topological quantum computation comes the need for an efficient approach for obtaining topological phase diagrams. Here we turn to machine learning using quantum loop topography (QLT), a notion we have recently introduced. Specifically, we propose a construction of QLT that is sensitive to quasi-particle statistics. We then use the mutual statistics between spinons and visons to detect a $\mathbb Z_2$ quantum spin liquid in a multi-parameter phase space. We successfully obtain the quantum phase boundary between the topological and trivial phases using a simple feed-forward neural network. Furthermore, we demonstrate how our approach can speed up evaluation of the phase diagram by orders of magnitude. Such statistics-based machine learning of topological phases opens new efficient routes to studying topological phase diagrams in strongly correlated systems.
- Second-Harmonic Scattering (SHS) experiments provide a unique approach to probe non-centrosymmetric environments in aqueous media, from bulk solutions to interfaces, living cells and tissue. A central assumption made in analyzing SHS experiments is that each molecule scatters light according to a constant molecular hyperpolarizability tensor $\boldsymbol{\beta}^{(2)}$. Here, we investigate the dependence of the molecular hyperpolarizability of water on its environment and internal geometric distortions, in order to test the hypothesis of a constant $\boldsymbol{\beta}^{(2)}$. We use quantum chemistry calculations of the hyperpolarizability of a molecule embedded in point-charge environments obtained from simulations of bulk water. We demonstrate that both the heterogeneity of the solvent configurations and the quantum mechanical fluctuations of the molecular geometry introduce large variations in the non-linear optical response of water. This finding has the potential to change the way SHS experiments are interpreted: in particular, isotopic differences between H$_2$O and D$_2$O could explain recent second-harmonic scattering observations. Finally, we show that a simple machine-learning framework can accurately predict the fluctuations of the molecular hyperpolarizability. This model accounts for the microscopic inhomogeneity of the solvent and represents a first step towards quantitative modelling of SHS experiments.
- May 04 2017 quant-ph arXiv:1705.01523v3 The problem of determining whether a given quantum state is entangled lies at the heart of quantum information processing and is known to be NP-hard in general. Despite many proposed methods, such as the positive partial transpose (PPT) criterion and the k-symmetric extendibility criterion, for tackling this problem in practice, none of them enables a general, effective solution even for small dimensions. Explicitly, separable states form a high-dimensional convex set that exhibits a vastly complicated structure. In this work, we build a new separability-entanglement classifier underpinned by machine learning techniques. Our method outperforms the existing methods in generic cases in terms of both speed and accuracy, opening up new avenues for exploring quantum entanglement via the machine learning approach.
- Quantum information science has profoundly changed the ways we understand, store, and process information. A major challenge in this field is to find an efficient means of classifying quantum states. For instance, one may want to determine whether a given quantum state is entangled or not. However, the process of completely characterizing a quantum state, known as quantum state tomography, is a resource-consuming operation in general. An attractive proposal is the use of Bell's inequalities as an entanglement witness, where only partial information about the quantum state is needed. The problem is that entanglement is necessary but not sufficient for violating Bell's inequalities, making them an unreliable state classifier. Here we aim at solving this problem with the methods of machine learning. More precisely, given a family of quantum states, we randomly pick a subset of it to construct a quantum-state classifier that accepts only partial information about each quantum state. Our results indicate that the transformed Bell-type inequalities can perform significantly better than the original Bell's inequalities in classifying entangled states. We further extend our analysis to three-qubit and four-qubit systems, performing classification of quantum states into multiple species. These results demonstrate how tools from machine learning can be applied to solve problems in quantum information science.
- The ability to prepare a physical system in a desired quantum state is central to many areas of physics such as nuclear magnetic resonance, cold atoms, and quantum computing. However, preparing a quantum state quickly and with high fidelity remains a formidable challenge. Here we tackle this problem by applying cutting edge Machine Learning (ML) techniques, including Reinforcement Learning, to find short, high-fidelity driving protocols from an initial to a target state in complex many-body quantum systems of interacting qubits. We show that the optimization problem undergoes a spin-glass like phase transition in the space of protocols as a function of the protocol duration, indicating that the optimal solution may be exponentially difficult to find. However, ML allows us to identify a simple, robust variational protocol, which yields nearly optimal fidelity even in the glassy phase. Our study highlights how ML offers new tools for understanding nonequilibrium physics.
- Apr 21 2017 quant-ph arXiv:1704.06174v2 Solving linear systems of equations is a frequently encountered problem in machine learning and optimisation. Given a matrix $A$ and a vector $\mathbf b$ the task is to find the vector $\mathbf x$ such that $A \mathbf x = \mathbf b$. We describe a quantum algorithm that achieves a sparsity-independent runtime scaling of $\mathcal{O}(\kappa^2 \|A\|_F \text{polylog}(n)/\epsilon)$, where $n\times n$ is the dimensionality of $A$ with Frobenius norm $\|A\|_F$, $\kappa$ denotes the condition number of $A$, and $\epsilon$ is the desired precision parameter. When applied to a dense matrix with spectral norm bounded by a constant, the runtime of the proposed algorithm is bounded by $\mathcal{O}(\kappa^2\sqrt{n} \text{polylog}(n)/\epsilon)$, which is a quadratic improvement over known quantum linear system algorithms. Our algorithm is built upon a singular value estimation subroutine, which makes use of a memory architecture that allows for efficient preparation of quantum states that correspond to the rows and row Frobenius norms of $A$.
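For context, the classical iterative baselines such quantum solvers are measured against also scale with the condition number $\kappa$. A minimal conjugate-gradient solver for a symmetric positive-definite system, as a generic classical sketch rather than the quantum algorithm itself:

```python
import math

# Conjugate gradient for a symmetric positive-definite system A x = b.
# Its iteration count grows roughly like sqrt(kappa), which is the classical
# point of comparison for condition-number-dependent quantum runtimes.
def cg(A, b, iters=50, tol=1e-10):
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual b - A x (x starts at 0)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if math.sqrt(rs_new) < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
print(x)  # solves the 2x2 system exactly (x = [1/11, 7/11])
```

On an $n \times n$ system each CG iteration costs a matrix-vector product, which is where the claimed $\text{polylog}(n)$ quantum scaling would represent an exponential saving.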
- Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and often fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how it works in MLE and show that DQAEM outperforms EM.
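For reference, the classical EM baseline that DQAEM extends can be sketched in a few lines for a two-component one-dimensional Gaussian mixture. The data and initialization below are illustrative; the paper's quantum-annealing modification itself is not reproduced here:

```python
import math, random

random.seed(0)

# Data from two well-separated Gaussians; EM should recover means near 0 and 5
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Initialise the two components from the data range
mu = [min(data), max(data)]
var = [1.0, 1.0]
pi = [0.5, 0.5]

for _ in range(50):
    # E-step: responsibility of each component for each point
    r = []
    for x in data:
        w = [pi[k] * norm_pdf(x, mu[k], var[k]) for k in range(2)]
        s = sum(w)
        r.append([wk / s for wk in w])
    # M-step: re-estimate weights, means, and variances
    for k in range(2):
        nk = sum(ri[k] for ri in r)
        pi[k] = nk / len(data)
        mu[k] = sum(ri[k] * x for ri, x in zip(r, data)) / nk
        var[k] = sum(ri[k] * (x - mu[k]) ** 2 for ri, x in zip(r, data)) / nk

print(sorted(mu))  # close to the true means 0 and 5
```

With a poor initialization the same loop can converge to a suboptimal local maximum of the likelihood, which is the failure mode the annealing extension is designed to mitigate.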
- Apr 18 2017 quant-ph arXiv:1704.04992v3Quantum Machine Learning is an exciting new area that was initiated by the breakthrough quantum algorithm of Harrow, Hassidim, Lloyd \citeHHL09 for solving linear systems of equations and has since seen many interesting developments \citeLMR14, LMR13a, LMR14a, KP16. In this work, we start by providing a quantum linear system solver that outperforms the current ones for large families of matrices and provides exponential savings for any low-rank (even dense) matrix. Our algorithm uses an improved procedure for Singular Value Estimation which can be used to perform efficiently linear algebra operations, including matrix inversion and multiplication. Then, we provide the first quantum method for performing gradient descent for cases where the gradient is an affine function. Performing $\tau$ steps of the quantum gradient descent requires time $O(\tau C_S)$, where $C_S$ is the cost of performing quantumly one step of the gradient descent, which can be exponentially smaller than the cost of performing the step classically. We provide two applications of our quantum gradient descent algorithm: first, for solving positive semidefinite linear systems, and, second, for performing stochastic gradient descent for the weighted least squares problem.
- Quantum machine learning witnesses an increasing amount of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which -- similar to Bayesian learning -- the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighted according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.
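A small classical analogue clarifies the accuracy-weighted ensemble being analysed: every classifier votes, and votes are weighted by training-set accuracy. In the quantum version exponentially many classifiers are evaluated in superposition; the sketch below (threshold classifiers on 1-D data, all details invented) is purely illustrative:

```python
import random

# Accuracy-weighted ensemble vote, classical analogue. The "classifiers" are
# threshold rules h_t(x) = sign(x - t) for a grid of thresholds t; each vote
# is weighted by the classifier's accuracy on the training set.
random.seed(0)
train = [(x, 1 if x > 0 else -1) for x in [random.uniform(-1, 1) for _ in range(50)]]

thresholds = [i / 10 for i in range(-9, 10)]

def h(t, x):
    return 1 if x - t > 0 else -1

def accuracy(t):
    return sum(h(t, x) == y for x, y in train) / len(train)

def ensemble_predict(x):
    score = sum(accuracy(t) * h(t, x) for t in thresholds)
    return 1 if score > 0 else -1

print(ensemble_predict(0.7), ensemble_predict(-0.7))
```

None of the individual threshold rules is trained; only the weighting depends on the data, mirroring the Bayesian-learning flavour noted in the abstract.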
- D-Wave quantum annealers represent a novel computational architecture and have attracted significant interest, but have been used for few real-world computations. Machine learning has been identified as an area where quantum annealing may be useful. Here, we show that the D-Wave 2X can be effectively used as part of an unsupervised machine learning method, which can be applied to analyze large datasets. The only limitation the D-Wave imposes is on the number of features that can be extracted from the dataset. We apply this method to learn the features from a set of facial images.
- Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding of what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as well-defined quantifiers of a deep network's expressive ability to model intricate correlation structures of its inputs. Most importantly, the construction of a deep ConvAC in terms of a Tensor Network is made available. This description enables us to carry out a graph-theoretic analysis of a convolutional network, with which we demonstrate a direct control over the inductive bias of the deep network via its channel numbers, which are related to the min-cut in the underlying graph. This result is relevant to any practitioner designing a network for a specific task. We theoretically analyze ConvACs, and empirically validate our findings on more common ConvNets which involve ReLU activations and max pooling. Beyond the results described above, the description of a deep convolutional network in well-defined graph-theoretic tools and the formal connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work.
- Apr 03 2017 quant-ph arXiv:1703.10793v1 Lately, much attention has been given to quantum algorithms that solve pattern recognition tasks in machine learning. Many of these quantum machine learning algorithms try to implement classical models on large-scale universal quantum computers that have access to non-trivial subroutines such as Hamiltonian simulation, amplitude amplification and phase estimation. We approach the problem from the opposite direction and analyse a distance-based classifier that is realised by a simple quantum interference circuit. After state preparation, the circuit only consists of a Hadamard gate as well as two single-qubit measurements and can be implemented with small-scale setups available today. We demonstrate this using the IBM Quantum Experience and analyse the classifier with numerical simulations.
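The decision rule the interference circuit implements can be evaluated classically for intuition: as we read the paper, the new input is labelled by a kernel vote $\hat y = \mathrm{sgn}\big(\sum_m y_m\,[1 - |\tilde x - \tilde x_m|^2/(4M)]\big)$ over $M$ unit-normalised training vectors. The sketch below (with made-up 2-D data) evaluates that rule directly:

```python
import math

# Classical evaluation of the distance-based classifier realised by the
# interference circuit: a kernel vote over M unit-normalised training vectors.
def normalise(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

train = [([1.0, 0.1], 1), ([0.9, 0.2], 1), ([0.1, 1.0], -1), ([0.2, 0.8], -1)]
train = [(normalise(x), y) for x, y in train]
M = len(train)

def classify(x):
    x = normalise(x)
    score = sum(y * (1 - sum((a - b)**2 for a, b in zip(x, xm)) / (4 * M))
                for xm, y in train)
    return 1 if score > 0 else -1

print(classify([1.0, 0.0]), classify([0.0, 1.0]))
```

On the quantum device the same vote is obtained from a Hadamard gate and two single-qubit measurements after state preparation.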
- We present a feature functional theory - binding predictor (FFT-BP) for the protein-ligand binding affinity prediction. The underpinning assumptions of FFT-BP are as follows: i) representability: there exists a microscopic feature vector that can uniquely characterize and distinguish one protein-ligand complex from another; ii) feature-function relationship: the macroscopic features, including binding free energy, of a complex are functionals of the microscopic feature vectors; and iii) similarity: molecules with similar microscopic features have similar macroscopic features, such as binding affinity. Physical models, such as implicit solvent models and quantum theory, are utilized to extract microscopic features, while machine learning algorithms are employed to rank the similarity among protein-ligand complexes. A large variety of numerical validations and tests confirms the accuracy and robustness of the proposed FFT-BP model. The root mean square errors (RMSEs) of FFT-BP blind predictions of a benchmark set of 100 complexes, the PDBBind v2007 core set of 195 complexes and the PDBBind v2015 core set of 195 complexes are 1.99, 2.02 and 1.92 kcal/mol, respectively. Their corresponding Pearson correlation coefficients are 0.75, 0.80, and 0.78, respectively.
- Mar 17 2017 quant-ph arXiv:1703.05402v1 Efficiently characterising quantum systems, verifying operations of quantum devices and validating underpinning physical models, are central challenges for the development of quantum technologies and for our continued understanding of foundational physics. Machine learning enhanced by quantum simulators has been proposed as a route to improve the computational cost of performing these studies. Here we interface two different quantum systems through a classical channel - a silicon-photonics quantum simulator and an electron spin in a diamond nitrogen-vacancy centre - and use the former to learn the latter's Hamiltonian via Bayesian inference. We learn the salient Hamiltonian parameter with an uncertainty of approximately $10^{-5}$. Furthermore, an observed saturation in the learning algorithm suggests deficiencies in the underlying Hamiltonian model, which we exploit to further improve the model itself. We go on to implement an interactive version of the protocol and experimentally show its ability to characterise the operation of the quantum photonic device. This work demonstrates powerful new quantum-enhanced techniques for investigating foundational physical models and characterising quantum technologies.
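The Bayesian Hamiltonian-learning loop can be sketched on a grid: projective measurements with outcome probability $P(1) = \sin^2(\omega t/2)$ update a posterior over the Rabi frequency $\omega$. In the experiment the likelihood is estimated by the photonic simulator; here it is replaced by the exact formula, and every number below is an invented illustration, not data from the paper:

```python
import numpy as np

# Grid-based Bayesian inference of a single Hamiltonian parameter omega from
# simulated single-shot measurements with P(1) = sin^2(omega * t / 2).
rng = np.random.default_rng(0)
omega_true = 1.3
grid = np.linspace(0.5, 2.0, 301)
posterior = np.ones_like(grid) / len(grid)       # flat prior

for _ in range(200):
    t = rng.uniform(0, 10)                        # random evolution time
    p1 = np.sin(omega_true * t / 2)**2
    outcome = rng.random() < p1                   # one projective measurement
    lik = np.sin(grid * t / 2)**2                 # likelihood over the grid
    posterior *= lik if outcome else (1 - lik)    # Bayes update
    posterior /= posterior.sum()

estimate = grid[np.argmax(posterior)]
print(f"estimated omega = {estimate:.3f} (true {omega_true})")
```

The posterior sharpens as measurements accumulate; a saturation of this sharpening is what the authors exploit as a signature of model deficiency.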
- The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods, to validate and fully exploit quantum resources. Quantum-state tomography (QST) aims at reconstructing the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics. Brute-force approaches to QST, however, demand resources growing exponentially with the number of constituents, making it unfeasible except for small systems. Here we show that machine learning techniques can be efficiently used for QST of highly-entangled states in arbitrary dimension. Remarkably, the resulting approach allows one to reconstruct traditionally challenging many-body quantities - such as the entanglement entropy - from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultra-cold atom quantum simulators.
- Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations, and in particular graph convolutional networks, are powerful tools for molecular machine learning and broadly offer the best performance. However, for quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be significantly more important than choice of particular learning algorithm.
- Feb 21 2017 cond-mat.mtrl-sci physics.chem-ph arXiv:1702.05771v1 High-throughput computational screening has emerged as a critical component of materials discovery. Direct density functional theory (DFT) simulation of inorganic materials and molecular transition metal complexes is often used to describe subtle trends in inorganic bonding and spin-state ordering, but these calculations are computationally costly and properties are sensitive to the exchange-correlation functional employed. To begin to overcome these challenges, we trained artificial neural networks (ANNs) to predict quantum-mechanically-derived properties, including spin-state ordering, sensitivity to Hartree-Fock exchange, and spin-state-specific bond lengths in transition metal complexes. Our ANN is trained on a small set of inorganic-chemistry-appropriate empirical inputs that are both maximally transferable and do not require precise three-dimensional structural information for prediction. Using these descriptors, our ANN predicts spin-state splittings of single-site transition metal complexes (i.e., Cr-Ni) at arbitrary amounts of Hartree-Fock exchange to within 3 kcal/mol accuracy of DFT calculations. Our exchange-sensitivity ANN enables improved predictions on a diverse test set of experimentally-characterized transition metal complexes by extrapolation from semi-local DFT to hybrid DFT. The ANN also outperforms other machine learning models (i.e., support vector regression and kernel ridge regression), demonstrating particularly improved performance in transferability, as measured by prediction errors on the diverse test set. We establish the value of new uncertainty quantification tools to estimate ANN prediction uncertainty in computational chemistry, and we provide additional heuristics for identification of when a compound of interest is likely to be poorly predicted by the ANN.
- Feb 21 2017 physics.chem-ph arXiv:1702.05532v2 We investigate the impact of choosing regressors and molecular representations for the construction of fast machine learning (ML) models of thirteen electronic ground-state properties of organic molecules. The performance of each regressor/representation/property combination is assessed using learning curves which report out-of-sample errors as a function of training set size with up to $\sim$117k distinct molecules. Molecular structures and properties at hybrid density functional theory (DFT) level of theory used for training and testing come from the QM9 database [Ramakrishnan et al., Scientific Data 1, 140022 (2014)] and include dipole moment, polarizability, HOMO/LUMO energies and gap, electronic spatial extent, zero point vibrational energy, enthalpies and free energies of atomization, heat capacity and the highest fundamental vibrational frequency. Various representations from the literature have been studied (Coulomb matrix, bag of bonds, BAML and ECFP4, molecular graphs (MG)), as well as newly developed distribution based variants including histograms of distances (HD), and angles (HDA/MARAD), and dihedrals (HDAD). Regressors include linear models (Bayesian ridge regression (BR) and linear regression with elastic net regularization (EN)), random forest (RF), kernel ridge regression (KRR) and two types of neural networks, graph convolutions (GC) and gated graph networks (GG). We present numerical evidence that ML model predictions deviate from DFT less than DFT deviates from experiment for all properties. Furthermore, our out-of-sample prediction errors with respect to hybrid DFT reference are on par with, or close to, chemical accuracy. Our findings suggest that ML models could be more accurate than hybrid DFT if explicitly electron-correlated quantum (or experimental) data was available.
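The learning curves at the heart of this study are easy to reproduce in miniature: train one of the regressors (here kernel ridge regression) at several training-set sizes and record the out-of-sample error at each. The sketch below uses a synthetic 1-D "property" rather than QM9 data, and all hyperparameters are illustrative assumptions:

```python
import numpy as np

# Toy learning curve: out-of-sample MAE of a Gaussian-kernel ridge regression
# as a function of training-set size, on a synthetic target sin(3x).
rng = np.random.default_rng(0)

def target(x):
    return np.sin(3 * x)

def krr_error(n_train):
    Xtr = rng.uniform(-1, 1, n_train)
    Xte = rng.uniform(-1, 1, 200)
    gamma, lam = 10.0, 1e-6                       # kernel width, regulariser
    K = np.exp(-gamma * (Xtr[:, None] - Xtr[None, :])**2)
    alpha = np.linalg.solve(K + lam * np.eye(n_train), target(Xtr))
    Kte = np.exp(-gamma * (Xte[:, None] - Xtr[None, :])**2)
    return np.mean(np.abs(Kte @ alpha - target(Xte)))  # out-of-sample MAE

errors = {n: krr_error(n) for n in (10, 40, 160)}
print(errors)  # the error shrinks as the training set grows
```

Plotting such errors against training-set size on log-log axes gives the learning curves used to compare representations and regressors in the paper.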
- This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning from membership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples.
- Jan 19 2017 cond-mat.dis-nn quant-ph arXiv:1701.05039v1 The challenge of quantum many-body problems comes from the difficulty to represent large-scale quantum states, which in general requires an exponentially large number of parameters. Recently, a connection has been made between quantum many-body states and the neural network representation (arXiv:1606.02318). An important open question is what characterizes the representational power of deep and shallow neural networks, which is of fundamental interest due to the popularity of deep learning methods. Here, we give a rigorous proof that a deep neural network can efficiently represent most physical states, including those generated by any polynomial size quantum circuits or ground states of many-body Hamiltonians with polynomial-size gaps, while a shallow network through a restricted Boltzmann machine cannot efficiently represent those states unless the polynomial hierarchy in computational complexity theory collapses.
- Superconducting circuit technologies have recently achieved quantum protocols involving closed feedback loops. Quantum artificial intelligence and quantum machine learning are emerging fields inside quantum technologies which may enable quantum devices to acquire information from the outer world and improve themselves via a learning process. Here we propose the implementation of basic protocols in quantum reinforcement learning, with superconducting circuits employing feedback-loop control. We introduce diverse scenarios for proof-of-principle experiments with state-of-the-art superconducting circuit technologies and analyze their feasibility in the presence of imperfections. The field of quantum artificial intelligence implemented with superconducting circuits paves the way for enhanced quantum control and quantum computation protocols.
- The restricted Boltzmann machine (RBM) is one of the fundamental building blocks of deep learning. RBMs find wide application in dimensionality reduction, feature extraction, and recommender systems via modeling the probability distributions of a variety of input data including natural images, speech signals, and customer ratings. We build a bridge between RBM and tensor network states (TNS) widely used in quantum many-body physics research. We devise efficient algorithms to translate an RBM into the commonly used TNS. Conversely, we give sufficient and necessary conditions to determine whether a TNS can be transformed into an RBM of given architectures. Revealing these general and constructive connections can cross-fertilize both deep learning and quantum many-body physics. Notably, by exploiting the entanglement entropy bound of TNS, we can rigorously quantify the expressive power of RBM on complex datasets. Insights into TNS and its entanglement capacity can guide the design of more powerful deep learning architectures. On the other hand, RBM can represent quantum many-body states with fewer parameters compared to TNS, which may allow more efficient classical simulations.
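The RBM distribution referenced throughout is obtained by tracing out the binary hidden units, giving $p(v) \propto e^{a\cdot v}\prod_j\big(1 + e^{b_j + v\cdot W_{:,j}}\big)$. A minimal sketch of this marginal (with random weights, purely for illustration; not the paper's translation algorithm) follows:

```python
import numpy as np

# Minimal RBM marginal over visible units: p(v) proportional to
# exp(a.v) * prod_j (1 + exp(b_j + v.W[:, j])), hidden units traced out.
rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 3
W = rng.normal(0, 0.5, (n_visible, n_hidden))   # couplings
a = rng.normal(0, 0.1, n_visible)               # visible biases
b = rng.normal(0, 0.1, n_hidden)                # hidden biases

def unnormalised_p(v):
    v = np.asarray(v, dtype=float)
    return np.exp(a @ v) * np.prod(1 + np.exp(b + v @ W))

# Normalising over all 2^4 visible configurations yields a valid distribution
configs = [[(i >> k) & 1 for k in range(n_visible)] for i in range(2**n_visible)]
Z = sum(unnormalised_p(v) for v in configs)
probs = [unnormalised_p(v) / Z for v in configs]
print(f"Z = {Z:.3f}, sum of probabilities = {sum(probs):.3f}")
```

The product-over-hidden-units structure is exactly what the paper maps onto tensor-network contractions.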
- Machine learning, one of today's most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states is recently becoming highly desirable in the applications of machine learning techniques to quantum many-body physics. Here, we study the quantum entanglement properties of neural-network states, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing efficiently quantum states with massive entanglement. We further examine generic RBM states with random weight parameters. We find that their averaged entanglement entropy obeys volume-law scaling and meanwhile strongly deviates from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears a Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our results uncover the unparalleled power of artificial neural networks in representing quantum many-body states, paving a new way to bridge computer-science-based machine learning techniques to outstanding problems in quantum condensed matter physics.
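The entanglement-entropy computation used throughout studies like this one is standard: reshape a bipartite pure state's amplitude vector into a matrix, take its singular values, and evaluate the von Neumann entropy of the Schmidt spectrum. For a Haar-random state the result sits close to the Page entropy, the benchmark the paper compares RBM states against:

```python
import numpy as np

# Entanglement entropy of a random bipartite pure state via the Schmidt
# decomposition, compared with Page's estimate ln(dA) - dA/(2*dB) for dA <= dB.
rng = np.random.default_rng(0)
nA, nB = 5, 5                           # qubits in each half of the bipartition
dA, dB = 2**nA, 2**nB

psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)              # normalised random state

s = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)
p = s**2                                # Schmidt spectrum (sums to 1)
entropy = -np.sum(p * np.log(p))        # von Neumann entropy in nats

page = np.log(dA) - dA / (2 * dB)       # Page's estimate
print(f"S = {entropy:.3f} nats, Page estimate = {page:.3f} nats")
```

The paper's finding is that random-weight RBM states, while also volume-law, deviate strongly from this Page value.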
- Jan 18 2017 stat.ML arXiv:1701.04503v1 The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure prediction, quantum chemistry, materials design and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of state-of-the-art non-neural-network models across disparate research topics, and deep neural network based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry.
- We investigate whether quantum annealers with select chip layouts can outperform classical computers in reinforcement learning tasks. We associate a transverse field Ising spin Hamiltonian with a layout of qubits similar to that of a deep Boltzmann machine (DBM) and use simulated quantum annealing (SQA) to numerically simulate quantum sampling from this system. We design a reinforcement learning algorithm in which the set of visible nodes representing the states and actions of an optimal policy are the first and last layers of the deep network. In the absence of a transverse field, our simulations show that DBMs train more effectively than restricted Boltzmann machines (RBM) with the same number of weights. Since sampling from Boltzmann distributions of a DBM is not classically feasible, this is evidence of the advantage of a non-Turing sampling oracle. We then develop a framework for training the network as a quantum Boltzmann machine (QBM) in the presence of a significant transverse field for reinforcement learning. This further improves the reinforcement learning method using DBMs.
- We propose a quantum machine learning algorithm for efficiently solving a class of problems encoded in quantum controlled unitary operations. The central physical mechanism of the protocol is the iteration of a quantum time-delayed equation that introduces feedback in the dynamics and eliminates the necessity of intermediate measurements. The performance of the quantum algorithm is analyzed by comparing the results obtained in numerical simulations with the outcome of classical machine learning methods for the same problem. The use of time-delayed equations enhances the toolbox of the field of quantum machine learning, which may enable unprecedented applications in quantum technologies.
- Dec 16 2016 cond-mat.str-el stat.ML arXiv:1612.04895v1 We present a machine learning approach to the inversion of Fredholm integrals of the first kind. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned. It also provides an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. We apply machine learning to this database to generate a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. We also derive and present uncertainty estimates. We illustrate the approach by applying it to the analytical continuation problem of quantum many-body physics, which involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved.
- Dec 16 2016 quant-ph arXiv:1612.05204v1 The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide new methods of training quantum Boltzmann machines, which are a class of recurrent quantum neural networks. Our work generalizes existing methods and provides new approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of quantum state tomography that not only estimates a state but provides a prescription for generating copies of the reconstructed state. Classical Boltzmann machines are incapable of this. Finally we compare small non-stoquastic quantum Boltzmann machines to traditional Boltzmann machines for generative tasks and observe evidence that quantum models outperform their classical counterparts.
- Dec 13 2016 quant-ph arXiv:1612.03713v2 The support vector machine (SVM) is a popular machine learning classification method which produces a nonlinear decision boundary in a feature space by constructing linear boundaries in a transformed Hilbert space. It is well known that these algorithms when executed on a classical computer do not scale well with the size of the feature space both in terms of data points and dimensionality. One of the most significant limitations of classical algorithms using non-linear kernels is that the kernel function has to be evaluated for all pairs of input feature vectors which themselves may be of substantially high dimension. This can lead to computationally excessive times during training and during the prediction process for a new data point. Here, we propose using both canonical and generalized coherent states to rapidly calculate specific nonlinear kernel functions. The key link will be the reproducing kernel Hilbert space (RKHS) property for SVMs that naturally arise from canonical and generalized coherent states. Specifically, we discuss the fast evaluation of radial kernels through a positive operator valued measure (POVM) on a quantum optical system based on canonical coherent states. A similar procedure may also lead to fast calculations of kernels not usually used in classical algorithms such as those arising from generalized coherent states.
- The task of reconstructing a low rank matrix from incomplete linear measurements arises in areas such as machine learning, quantum state tomography and in the phase retrieval problem. In this note, we study the particular setup that the measurements are taken with respect to rank one matrices constructed from the elements of a random tight frame. We consider a convex optimization approach and show both robustness of the reconstruction with respect to noise on the measurements as well as stability with respect to passing to approximately low rank matrices. This is achieved by establishing a version of the null space property of the corresponding measurement map.
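The measurement model analysed here is concrete enough to simulate: each measurement is $y_i = a_i^\top X a_i$, the inner product of $X$ with the rank-one matrix $a_i a_i^\top$. The paper studies a convex (nuclear-norm) reconstruction; the sketch below instead uses a simple non-convex projected-gradient iteration with a hard rank-$r$ projection, chosen only to illustrate the measurement model with stdlib-plus-numpy code:

```python
import numpy as np

# Low-rank recovery from rank-one measurements y_i = a_i^T X a_i via
# projected gradient descent with a hard rank-r (eigenvalue) projection.
# (Illustrative non-convex stand-in for the paper's convex program.)
rng = np.random.default_rng(0)
n, r, m = 8, 1, 120
U = rng.normal(size=(n, r))
X_true = U @ U.T                                   # rank-r PSD ground truth
A = rng.normal(size=(m, n))                        # measurement vectors a_i (rows)
y = np.einsum("mi,ij,mj->m", A, X_true, A)

X = np.zeros((n, n))
step = 0.1 / m                                     # conservative step size
for _ in range(300):
    resid = np.einsum("mi,ij,mj->m", A, X, A) - y  # measurement residuals
    grad = np.einsum("m,mi,mj->ij", resid, A, A)   # gradient of 0.5*||resid||^2
    X = X - step * grad
    # project onto symmetric rank-r matrices (keep r largest |eigenvalues|)
    w, V = np.linalg.eigh((X + X.T) / 2)
    idx = np.argsort(np.abs(w))[-r:]
    X = (V[:, idx] * w[idx]) @ V[:, idx].T

rel_err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
print("relative error:", rel_err)
```

With $m \gg nr$ Gaussian measurements the iteration recovers $X$ accurately; the paper's contribution is proving robustness and stability for the convex formulation via a null space property.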
- Dec 07 2016 quant-ph arXiv:1612.01789v1 Solving optimization problems in disciplines such as machine learning is commonly done by iterative methods. Gradient descent algorithms find local minima by moving along the direction of steepest descent while Newton's method takes into account curvature information and thereby often improves convergence. Here, we develop quantum versions of these iterative optimization algorithms and apply them to homogeneous polynomial optimization with a unit norm constraint. In each step, multiple copies of the current candidate are used to improve the candidate using quantum phase estimation, an adapted quantum principal component analysis scheme, as well as quantum matrix multiplications and inversions. The required operations scale polylogarithmically in the dimension of the solution vector, an exponential speed-up over classical algorithms, which scale polynomially. The quantum algorithm can therefore be beneficial for high dimensional problems where a relatively small number of iterations is sufficient.
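The two classical iterations being "quantised" are worth seeing side by side. On a convex quadratic $f(x) = \tfrac12 x^\top Q x - c^\top x$ (whose minimiser solves $Qx = c$), gradient descent contracts geometrically while Newton's method, using the Hessian $Q$, lands on the minimiser in a single step. The example problem below is our own:

```python
import numpy as np

# Gradient descent vs Newton's method on f(x) = 0.5 x^T Q x - c^T x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([1.0, 1.0])
x_star = np.linalg.solve(Q, c)                     # exact minimiser

def grad(x):
    return Q @ x - c

x_gd = np.zeros(2)
for _ in range(100):
    x_gd = x_gd - 0.2 * grad(x_gd)                 # steepest descent, fixed step

x_nt = np.zeros(2)
for _ in range(3):
    x_nt = x_nt - np.linalg.solve(Q, grad(x_nt))   # Newton step (exact here)

print(np.allclose(x_gd, x_star, atol=1e-4), np.allclose(x_nt, x_star))
```

The quantum versions replace the gradient and Hessian-inverse applications with phase estimation, quantum PCA, and quantum matrix multiplication/inversion, at polylogarithmic cost in the dimension.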
- Martingale concentration inequalities constitute a powerful mathematical tool in the analysis of problems in a wide variety of fields ranging from probability and statistics to information theory and machine learning. Here we apply techniques borrowed from this field to quantum hypothesis testing, which is the problem of discriminating quantum states belonging to two different sequences $\{\rho_n\}_{n}$ and $\{\sigma_n\}_n$. We obtain upper bounds on the finite blocklength type II Stein- and Hoeffding errors, which, for i.i.d. states, are in general tighter than the corresponding bounds obtained by Audenaert, Mosonyi and Verstraete [Journal of Mathematical Physics, 53(12), 2012]. We also derive finite blocklength bounds and moderate deviation results for pairs of sequences of correlated states satisfying a (non-homogeneous) factorization property. Examples of such sequences include Gibbs states of spin chains with translation-invariant finite range interaction, as well as finitely correlated quantum states. We apply our results to find bounds on the capacity of a certain class of classical-quantum channels with memory, which satisfy a so-called channel factorization property, both in the finite blocklength and moderate deviation regimes.
- Recent progress implies that a crossover between machine learning and quantum information processing benefits both fields. Traditional machine learning has dramatically improved the benchmarking and control of experimental quantum computing systems, including adaptive quantum phase estimation and designing quantum computing gates. On the other hand, quantum mechanics offers tantalizing prospects to enhance machine learning, ranging from reduced computational complexity to improved generalization performance. The most notable examples include quantum enhanced algorithms for principal component analysis, quantum support vector machines, and quantum Boltzmann machines. Progress has been rapid, fostered by demonstrations of midsized quantum optimizers which are predicted to soon outperform their classical counterparts. Further, we are witnessing the emergence of a physical theory pinpointing the fundamental and natural limitations of learning. Here we survey the cutting edge of this merger and list several open problems.
- Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
- Nov 23 2016 physics.comp-ph physics.chem-ph arXiv:1611.07435v2The training of molecular models of quantum mechanical properties based on statistical machine learning requires large datasets which exemplify the map from chemical structure to molecular property. Intelligent a priori selection of training examples is often difficult or impossible to achieve, as prior knowledge may be sparse or unavailable. Ordinarily, representative selection of training molecules from such datasets is achieved through random sampling. We use genetic algorithms to optimize the composition of training sets consisting of tens of thousands of small organic molecules. The resulting machine learning models are considerably more accurate than those trained on small randomly selected training sets: mean absolute errors for out-of-sample predictions are reduced to ~25% for enthalpies, free energies, and zero-point vibrational energy, to ~50% for heat capacity, electron spread, and polarizability, and by more than ~20% for electronic properties such as frontier orbital eigenvalues or dipole moments. We discuss and present optimized training sets consisting of 10 molecular classes for all molecular properties studied. We show that these classes can be used to design improved training sets for the generation of machine learning models of the same properties in similar but unrelated molecular sets.
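The idea of genetic training-set selection can be sketched in a few lines. This is our own minimal illustration, not the paper's implementation: the sine "property", the subset size, and all GA hyperparameters are assumptions.

```python
import numpy as np

# Toy genetic-algorithm optimization of training-set composition for a
# kernel ridge model: individuals are index subsets, fitness is the
# validation error of a model trained on the subset.

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 200)
y = np.sin(X)                              # stand-in molecular property
pool, X_val, y_val = np.arange(150), X[150:], y[150:]

def mae(subset):
    """Validation error of a Gaussian-kernel ridge model on `subset`."""
    Xs, ys = X[subset], y[subset]
    K = np.exp(-(Xs[:, None] - Xs[None, :]) ** 2)
    alpha = np.linalg.solve(K + 1e-8 * np.eye(subset.size), ys)
    pred = np.exp(-(X_val[:, None] - Xs[None, :]) ** 2) @ alpha
    return np.abs(pred - y_val).mean()

pop = [rng.choice(pool, 15, replace=False) for _ in range(20)]
initial_best = min(mae(s) for s in pop)
for _ in range(30):
    pop.sort(key=mae)
    survivors = pop[:10]                   # elitism: keep the best half
    children = []
    for _ in range(10):
        i, j = rng.choice(10, 2, replace=False)
        genes = np.union1d(survivors[i], survivors[j])   # crossover
        child = rng.choice(genes, 15, replace=False)
        new_gene = rng.choice(pool)                      # point mutation
        if new_gene not in child:
            child[rng.integers(15)] = new_gene
        children.append(child)
    pop = survivors + children
final_best = min(mae(s) for s in pop)      # never worse than the start
```

Elitism guarantees the best subset is never lost, so the optimized composition can only match or beat the initial random selection.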
- Nov 16 2016 physics.chem-ph arXiv:1611.04678v4Using conservation of energy - a fundamental property of closed classical and quantum mechanical systems - we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal $\text{mol}^{-1}$ for energies and 1 kcal $\text{mol}^{-1}$ $\text{\AA}^{-1}$ for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.
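GDML proper learns forces with a Hessian kernel in a space of conservative vector fields; as a minimal 1D illustration of the underlying idea only (forces derived exactly from one learned energy model, so the field is conservative by construction), one can fit E(x) by kernel ridge regression and differentiate the kernel analytically. The double-well "PES" and all hyperparameters below are our own assumptions.

```python
import numpy as np

# Fit a 1D energy surface by Gaussian-kernel ridge regression, then
# obtain forces as the exact analytic negative gradient of the model.

sig, reg = 0.4, 1e-8
xs = np.linspace(-1.5, 1.5, 25)            # training geometries
E = (xs ** 2 - 1.0) ** 2                   # reference double-well energies

K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2 * sig ** 2))
alpha = np.linalg.solve(K + reg * np.eye(xs.size), E)

def energy(x):
    k = np.exp(-(x - xs) ** 2 / (2 * sig ** 2))
    return k @ alpha

def force(x):
    # analytic kernel derivative: exactly -dE/dx of the learned model
    k = np.exp(-(x - xs) ** 2 / (2 * sig ** 2))
    return ((x - xs) / sig ** 2 * k) @ alpha
```

Because `force` is the exact derivative of `energy`, the model cannot violate energy conservation along a trajectory, which is the property the GDML construction enforces in the full vector-valued setting.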
- Nov 15 2016 physics.chem-ph cond-mat.mtrl-sci arXiv:1611.03877v2We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian Process (GP) Regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe and Si crystalline systems.
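The covariance requirement can be written compactly. Schematically (our notation, a sketch of the construction rather than the paper's exact formula), a base scalar kernel $k_b$ is turned into a matrix-valued covariant kernel by integrating over the rotation group:

```latex
% Schematic covariant kernel: integrating a base scalar kernel k_b over
% rotations R guarantees predicted forces rotate with the configuration.
\mathbf{K}(\rho, \rho') \;=\; \int_{\mathrm{SO}(d)} \mathrm{d}R \; R \, k_b(\rho, R\rho')
```

The abstract's two results then read naturally: for special base kernels the Haar integral admits a closed form, and replacing the integral by a sum over a finite point group is often a sufficient approximation.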
- Despite rapidly growing interest in harnessing machine learning in the study of quantum many-body systems, training neural networks to identify quantum phases is a nontrivial challenge. The key difficulty is efficiently extracting essential information from the many-body Hamiltonian or wave function and turning that information into an image that can be fed into a neural network. When targeting topological phases, this task becomes particularly challenging as topological phases are defined in terms of non-local properties. Here we introduce quantum loop topography (QLT): a procedure for constructing a multi-dimensional image from the "sample" Hamiltonian or wave function by evaluating two-point operators that form loops at independent Monte Carlo steps. The loop configuration is guided by the characteristic response that defines the phase, which is the Hall conductivity for the cases at hand. Feeding QLT to a fully-connected neural network with a single hidden layer, we demonstrate that the architecture can be effectively trained to distinguish Chern insulator and fractional Chern insulator from trivial insulators with high fidelity. In addition to establishing the first case of obtaining a phase diagram featuring a topological quantum phase transition via machine learning, the perspective of bridging traditional condensed matter theory with machine learning will be broadly valuable.
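The classifier named above is a fully-connected network with one hidden layer. A minimal sketch of such a network, trained on synthetic two-class Gaussian data as a stand-in for QLT images (layer sizes, data, and hyperparameters are our own illustrative choices):

```python
import numpy as np

# Single-hidden-layer classifier trained by full-batch gradient descent
# on binary cross-entropy; the two Gaussian clouds stand in for images
# of the two phases.

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)),      # "trivial" class
               rng.normal(2.0, 1.0, (100, 4))])     # "topological" class
t = np.concatenate([np.zeros(100), np.ones(100)])

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
w2 = rng.normal(0, 0.5, 8);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)                        # hidden layer
    return h, 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid output

lr = 0.1
for _ in range(2000):
    h, p = forward(X)
    g = (p - t) / t.size                  # d(cross-entropy)/d(logit)
    gh = np.outer(g, w2) * (1.0 - h ** 2) # backprop through tanh
    w2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

_, p = forward(X)
acc = float(((p > 0.5) == (t == 1)).mean())
```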
- The Laplacian eigenmap algorithm is a typical nonlinear model for dimensionality reduction in classical machine learning. We propose an efficient quantum Laplacian eigenmap algorithm that exponentially speeds up its classical counterpart. In our work, we demonstrate that the Hermitian chain product proposed in quantum linear discriminant analysis (arXiv:1510.00113, 2015) can be applied to implement the quantum Laplacian eigenmap algorithm. While the classical Laplacian eigenmap algorithm requires polynomial time to solve the eigenvector problem, our algorithm exponentially speeds up nonlinear dimensionality reduction.
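For reference, the classical eigenvector problem the quantum algorithm targets fits in a few lines of numpy. Data and bandwidth are illustrative choices of ours:

```python
import numpy as np

# Plain-numpy Laplacian eigenmap: build a Gaussian affinity graph, form
# the graph Laplacian, and embed with its lowest nontrivial eigenvectors.

def laplacian_eigenmap(X, n_components=1, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))     # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W              # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    # skip the trivial constant eigenvector at eigenvalue ~0
    return vecs[:, 1:1 + n_components]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(2.0, 0.1, (10, 2))])
Y = laplacian_eigenmap(X)                  # 1D embedding separates clusters
```

The dense eigendecomposition costs polynomial time in the number of points, which is the step the proposed quantum routine would accelerate.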
- The emerging field of quantum machine learning has the potential to substantially aid in the problems and scope of artificial intelligence. This is only enhanced by recent successes in the field of classical machine learning. In this work we propose an approach for the systematic treatment of machine learning, from the perspective of quantum information. Our approach is general and covers all three main branches of machine learning: supervised, unsupervised and reinforcement learning. While quantum improvements in supervised and unsupervised learning have been reported, reinforcement learning has received much less attention. Within our approach, we tackle the problem of quantum enhancements in reinforcement learning as well, and propose a systematic scheme for providing improvements. As an example, we show that quadratic improvements in learning efficiency, and exponential improvements in performance over limited time periods, can be obtained for a broad class of learning problems.
- Oct 18 2016 cond-mat.mtrl-sci arXiv:1610.04684v1Surface phenomena are increasingly becoming important in exploring nanoscale materials growth and characterization. Consequently, the need for atomistic simulations is increasing. Nevertheless, relying entirely on quantum mechanical methods limits the length and time scales one can consider, resulting in an ever-increasing dependence on alternative machine-learning-based force fields. Recently, we proposed a machine learning approach, known as AGNI, that allows fast and accurate atomic force predictions given the atom's neighborhood environment. Here, we make use of such force fields to study and characterize the nanoscale diffusion and growth processes occurring on an Al (111) surface. In particular, we focus on adatom ripening phenomena, using molecular dynamics simulations alone to confirm past experimental findings wherein low- and high-temperature growth regimes were observed.
- Classifying phases of matter is a central problem in physics. For quantum mechanical systems, this task can be daunting owing to the exponentially large Hilbert space. Thanks to the available computing power and access to ever larger data sets, classification problems are now routinely solved using machine learning techniques. Here, we propose a neural-network-based approach that locates phase transitions from the performance of the network after training it with deliberately incorrectly labelled data. We demonstrate the success of this method on the topological phase transition in the Kitaev chain, the thermal phase transition in the classical Ising model, and the many-body-localization transition in a disordered quantum spin chain. Our method does not depend on order parameters, knowledge of the topological content of the phases, or any other specifics of the transition at hand. It therefore paves the way to a generic tool to identify unexplored phase transitions.
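The confusion scheme can be sketched directly: train a classifier for every proposed critical point and look for the peak in accuracy. The single synthetic "measurement" feature with a jump at Tc = 0.5, and the optimal-threshold classifier standing in for a neural network, are our own illustrative assumptions.

```python
import numpy as np

# Learning-by-confusion sketch: accuracy of the best classifier is
# highest when the proposed labelling boundary matches the true Tc.

rng = np.random.default_rng(0)
Ts = np.linspace(0.0, 1.0, 21)           # control-parameter grid
T_all = np.repeat(Ts, 20)                # 20 samples per parameter value
x = np.where(T_all < 0.5, 0.0, 1.0) + rng.normal(0, 0.1, T_all.size)

def best_accuracy(labels):
    # best 1D threshold classifier, trying either polarity
    best = 0.0
    for c in np.linspace(x.min(), x.max(), 200):
        acc = float(((x < c) == labels).mean())
        best = max(best, acc, 1.0 - acc)
    return best

trial = Ts[3:-3]                         # interior proposed boundaries
accs = [best_accuracy(T_all < tc) for tc in trial]
tc_est = float(trial[int(np.argmax(accs))])   # accuracy peaks at true Tc
```

Mislabelling a chunk of data on either side of the true transition forces classification errors, so the interior accuracy maximum singles out the correct critical point without any order parameter.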
- Oct 10 2016 cond-mat.mtrl-sci arXiv:1610.02098v2Force fields developed with machine learning methods in tandem with quantum mechanics are beginning to find merit, given their (i) low cost, (ii) accuracy, and (iii) versatility. Recently, we proposed one such approach, wherein the vectorial force on an atom is computed directly from its environment. Here, we discuss the multi-step workflow required for their construction, which begins with generating diverse reference atomic environments and force data, continues with choosing a numerical representation for the atomic environments and down-selecting a representative training set, and ends with the learning method itself, here for the case of Al. The constructed force field is then validated by simulating complex materials phenomena such as surface melting and stress-strain behavior, phenomena that truly go beyond the realm of $ab\ initio$ methods in both length and time scales. To make such force fields truly versatile, we put forth an attempt to estimate the uncertainty in force predictions, allowing one to identify areas of poor performance and paving the way for continual improvement.
- Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model which represents data as an ordered network of sub-tensors of order-2 or order-3 has, so far, not been widely considered in these fields, although this so-called tensor network decomposition has been long studied in quantum physics and scientific computing. In this study, we present novel algorithms and applications of tensor network decompositions, with a particular focus on the tensor train decomposition and its variants. The novel algorithms developed for the tensor train decomposition update, in an alternating way, one or several core tensors at each iteration, and exhibit enhanced mathematical tractability and scalability to exceedingly large-scale data tensors. The proposed algorithms are tested in classic paradigms of blind source separation from a single mixture, denoising, and feature extraction, and achieve superior performance over the widely used truncated algorithms for tensor train decomposition.
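The tensor train format referenced above is easy to demonstrate with the textbook TT-SVD, which sequentially splits a tensor into a train of order-3 cores via truncated SVDs of its unfoldings. This is a generic sketch, not the paper's alternating-update algorithms; the tolerance and test tensor are illustrative.

```python
import numpy as np

# TT-SVD: peel off one order-3 core per mode using SVDs of unfoldings.

def tt_svd(T, eps=1e-12):
    dims = T.shape
    cores, r_prev, M = [], 1, np.asarray(T)
    for k in range(len(dims) - 1):
        M = M.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int((s > eps * s[0]).sum()))   # truncation rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = s[:r, None] * Vt[:r]
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=(out.ndim - 1, 0))
    return out[0, ..., 0]

rng = np.random.default_rng(0)
T = rng.normal(size=(2, 3, 4))
cores = tt_svd(T)                 # three order-3 cores
T_rec = tt_to_full(cores)         # exact reconstruction without truncation
```

With a loose tolerance the same routine yields the truncated decompositions that the paper's alternating core updates aim to improve on at scale.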
- Artificial neural networks play a prominent role in the rapidly growing field of machine learning and have recently been introduced to quantum many-body systems to tackle complex problems. Here, we find that even topological states with long-range quantum entanglement can be represented with classical artificial neural networks. This is demonstrated by using two concrete spin systems, the one-dimensional (1D) symmetry-protected topological cluster state and the 2D toric code state with an intrinsic topological order. For both cases we show rigorously that the topological ground states can be represented by short-range neural networks in an exact fashion. This neural network representation, in addition to being exact, is surprisingly efficient as the required number of hidden neurons is as small as the number of physical spins. Our exact neural-network construction of topologically ordered states demonstrates explicitly the exceptional power of neural networks in describing exotic quantum states, and at the same time provides valuable topological data for supervised machine learning of topological quantum orders in generic lattice models.
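The neural-network representation in question is the restricted-Boltzmann-machine (RBM) wavefunction ansatz. A generic evaluation of that ansatz looks as follows; the weights below are random placeholders, not the paper's exact cluster-state or toric-code constructions, and serve only to show the functional form.

```python
import numpy as np
from itertools import product

# RBM wavefunction ansatz:
#   psi(s) = exp(a.s) * prod_j 2 cosh(b_j + sum_i W_ij s_i)
# with visible spins s_i = +/-1 and the hidden units traced out.

def rbm_amplitude(s, a, b, W):
    theta = b + W.T @ s
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(2)
n_vis = n_hid = 4          # "efficient": one hidden neuron per spin
a = rng.normal(0, 0.1, n_vis)
b = rng.normal(0, 0.1, n_hid)
W = rng.normal(0, 0.1, (n_vis, n_hid))

# norm over all 2^4 spin configurations s_i = +/-1
norm = sum(rbm_amplitude(np.array(s), a, b, W) ** 2
           for s in product([-1, 1], repeat=n_vis))
```

Short-range, exact representations as in the abstract correspond to specific sparse choices of W rather than the dense random weights used here.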
- Sep 28 2016 physics.chem-ph arXiv:1609.08259v4Learning from data has led to paradigm shifts in a multitude of disciplines, including web, text, and image search, speech recognition, as well as bioinformatics. Can machine learning enable similar breakthroughs in understanding quantum many-body systems? Here we develop an efficient deep learning approach that enables spatially and chemically resolved insights into quantum-mechanical observables of molecular systems. We unify concepts from many-body Hamiltonians with purpose-designed deep tensor neural networks (DTNN), which leads to size-extensive and uniformly accurate (1 kcal/mol) predictions in compositional and configurational chemical space for molecules of intermediate size. As an example of chemical relevance, the DTNN model reveals a classification of aromatic rings with respect to their stability -- a useful property that is not contained as such in the training dataset. Further applications of DTNN for predicting atomic energies and local chemical potentials in molecules, reliable isomer energies, and molecules with peculiar electronic structure demonstrate the high potential of machine learning for revealing novel insights into complex quantum-chemical systems.
- There is an enormous amount of information that can be extracted from the data of a quantum gas microscope that has yet to be fully explored. The quantum gas microscope has been used to directly measure magnetic order, dynamic correlations, Pauli blocking, and many other physical phenomena in several recent groundbreaking experiments. However, the analysis of the data from a quantum gas microscope can be pushed much further, and when used in conjunction with theoretical constructs it is possible to measure virtually any observable of interest in a wide range of systems. We focus on how to measure quantum entanglement in large interacting quantum systems. In particular, we show that quantum gas microscopes can be used to measure the entanglement of interacting boson systems exactly, where previously it had been thought this was only possible for non-interacting systems. We consider algorithms that can work for large experimental data sets, which are similar to theoretical variational Monte Carlo techniques, as well as algorithms for more data-limited sets that use properties of correlation functions.
- The current work addresses quantum machine learning in the context of Quantum Artificial Neural Networks, dividing the networks' processing into two stages: the learning stage, where the network converges to a specific quantum circuit, and the backpropagation stage, where the network effectively works as a self-programming quantum computing system that selects the quantum circuits needed to solve computing problems. The results are extended to general architectures, including recurrent networks that interact with an environment, coupling to it through the activation order of the neural links, and self-organizing in a dynamical regime that intermixes patterns of dynamical stochasticity with persistent quasiperiodic dynamics, giving rise to a form of noise-resilient dynamical record.
- We use density-matrix renormalization group, applied to a one-dimensional model of continuum Hamiltonians, to accurately solve chains of hydrogen atoms of various separations and numbers of atoms. We train and test a machine-learned approximation to $F[n]$, the universal part of the electronic density functional, to within quantum chemical accuracy. Our calculation (a) bypasses the standard Kohn-Sham approach, avoiding the need to find orbitals, (b) includes the strong correlation of highly-stretched bonds without any specific difficulty (unlike all standard DFT approximations) and (c) is so accurate that it can be used to find the energy in the thermodynamic limit to quantum chemical accuracy.
- Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields, ranging from materials science to biochemistry to astrophysics. Machine learning holds the promise of learning the kinetic energy functional via examples, bypassing the need to solve the Kohn-Sham equations. This should yield substantial savings in computer time, allowing either larger systems or longer time-scales to be tackled, but attempts to machine-learn this functional have been limited by the need to find its derivative. The present work overcomes this difficulty by directly learning the density-potential and energy-density maps for test systems and various molecules. Both improved accuracy and lower computational cost with this method are demonstrated by reproducing DFT energies for a range of molecular geometries generated during molecular dynamics simulations. Moreover, the methodology could be applied directly to quantum chemical calculations, allowing construction of density functionals of quantum-chemical accuracy.
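The energy-density map idea can be sketched in miniature with kernel ridge regression. This is our own toy illustration in the spirit of ML-DFT, not the paper's method: the 1D harmonic-well family, the finite-difference solver, and all hyperparameters are assumptions.

```python
import numpy as np

# Learn an energy-from-density map: solve a family of 1D wells exactly
# (finite differences), then regress ground-state energy on the density.

def ground_state(k, x):
    dx = x[1] - x[0]
    # finite-difference Hamiltonian H = -0.5 d^2/dx^2 + 0.5 k x^2
    main = 1.0 / dx ** 2 + 0.5 * k * x ** 2
    off = -0.5 / dx ** 2 * np.ones(x.size - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    E, V = np.linalg.eigh(H)
    return V[:, 0] ** 2 / dx, E[0]        # normalized density, energy

x = np.linspace(-5.0, 5.0, 101)
ks_train = np.linspace(0.5, 2.0, 20)
pairs = [ground_state(k, x) for k in ks_train]
N = np.array([p[0] for p in pairs])       # densities as feature vectors
E = np.array([p[1] for p in pairs])

# Gaussian-kernel ridge regression with a median-distance bandwidth
d2 = ((N[:, None, :] - N[None, :, :]) ** 2).sum(-1)
sigma2 = np.median(d2[d2 > 0])
Kmat = np.exp(-d2 / (2 * sigma2))
alpha = np.linalg.solve(Kmat + 1e-8 * np.eye(ks_train.size), E)

# predict the energy of an unseen well directly from its density
n_test, E_true = ground_state(1.3, x)
k_vec = np.exp(-((N - n_test) ** 2).sum(-1) / (2 * sigma2))
E_pred = k_vec @ alpha
```

Once trained, the regression bypasses the eigenvalue solve entirely, which is the computational saving the abstract describes for the Kohn-Sham setting.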