Results for quantum machine learning

- A common challenge in many branches of quantum physics is finding the extremal eigenvalues and eigenvectors of a Hamiltonian matrix too large to store in computer memory. Numerous efficient methods have been developed for this task, but they generally fail when some control parameter in the Hamiltonian matrix, such as an interaction coupling, exceeds a threshold value. In this work we present a new technique called eigenvector continuation that can extend the reach of these methods. Borrowing concepts from machine learning, the key insight is that while an eigenvector resides in a linear space of enormous dimension, the eigenvector trajectory traced out by the one-parameter family of Hamiltonian matrices has small effective dimensionality. We prove this statement using analytic function theory and propose an algorithm to solve for the extremal eigenvectors. We benchmark the method using several examples from quantum many-body theory.
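The projection step at the heart of eigenvector continuation can be sketched in a few lines: exact ground states computed at a few small training couplings span a subspace, and the Hamiltonian at a larger target coupling is diagonalized within that subspace (a generalized eigenvalue problem, since the training vectors are not orthogonal). The toy Hamiltonians and couplings below are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy one-parameter family H(c) = H0 + c * H1 of real symmetric matrices.
n = 200
A = rng.standard_normal((n, n)); H0 = (A + A.T) / 2
B = rng.standard_normal((n, n)); H1 = (B + B.T) / 2

def ground_state(c):
    """Exact lowest eigenpair of H(c), for training and for comparison."""
    vals, vecs = np.linalg.eigh(H0 + c * H1)
    return vals[0], vecs[:, 0]

# "Training": exact ground states at a few small couplings.
train_c = [0.0, 0.1, 0.2]
basis = np.column_stack([ground_state(c)[1] for c in train_c])

# "Continuation": project H at the target coupling into the span of the
# training vectors and solve the small generalized eigenvalue problem.
c_target = 0.8
H = H0 + c_target * H1
h = basis.T @ H @ basis      # projected Hamiltonian (3x3)
s = basis.T @ basis          # overlap matrix of the non-orthogonal basis
approx = eigh(h, s, eigvals_only=True)[0]

exact = ground_state(c_target)[0]
print(approx, exact)  # Rayleigh-Ritz: approx is an upper bound on exact
```

By the variational principle the projected estimate is always an upper bound on the true ground-state energy; how tight it is depends on how smoothly the eigenvector varies with the coupling.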
- Nov 20 2017 physics.chem-ph arXiv:1711.06376v1 A quantum description of a chemical reaction requires knowledge of an accurate potential energy surface. For typical reactive systems of three atoms or more, such potential energy surfaces are constructed by computing the potential energy at thousands of points in the configuration space and fitting the computed points with an analytical function or a neural network. Computing and fitting the potential energy points is a very laborious and time-consuming task. Here, we demonstrate that accurate potential energy surfaces for quantum reactive scattering calculations can be obtained with a very small number of potential energy points and without analytical fits. We show that accurate results for the reaction probabilities in a wide range of energies can be obtained with only 30 ab initio points for the three-dimensional H + H$_2$ $\rightarrow$ H$_2$ + H reaction and 290 ab initio points for the six-dimensional OH + H$_2$ $\rightarrow$ H$_2$O + H reaction. To obtain these results, we represent the scattering observables and the potential surfaces by Gaussian processes, which are trained on the results of rigorous quantum scattering calculations and optimized by means of Bayesian optimization, producing and trying hundreds of surfaces. In this approach, the construction of the surfaces is completely automated. This work demonstrates that a combination of machine learning with quantum dynamics calculations allows one to ask new questions, such as: what is the smallest number of ab initio points sufficient to describe the quantum dynamics of polyatomic systems with the desired accuracy?
- Security for machine learning has begun to become a serious issue for present-day applications. An important open question is whether emerging quantum technologies will help or hinder the security of machine learning. Here we discuss a number of ways that quantum information can be used to help make quantum classifiers more secure or private. In particular, we demonstrate a form of robust principal component analysis that, under some circumstances, can provide an exponential speedup relative to robust methods used at present. To demonstrate this approach we introduce a linear-combinations-of-unitaries Hamiltonian simulation method that we show functions even when given an imprecise Hamiltonian oracle, which may be of independent interest. We also introduce a new quantum approach to bagging and boosting that can use quantum superposition over the classifiers or splits of the training set to aggregate over many more models than would be possible classically. Finally, we provide a private form of $k$-means clustering that can be used to prevent an all-powerful adversary from learning more than a small fraction of a bit from any user. These examples show the role that quantum technologies can play in the security of machine learning, and vice versa. This illustrates that quantum computing can provide useful advantages to machine learning apart from speedups.
- We discuss practical methods to ensure near-wirespeed performance from clusters with either one or two Intel(R) Omni-Path host fabric interfaces (HFI) per node and Intel(R) Xeon Phi(TM) 72xx (Knights Landing) processors, running the Linux operating system. The study evaluates the achievable performance improvements and the required programming approaches in two distinct example problems: firstly, Cartesian communicator halo exchange, appropriate for structured-grid PDE solvers such as those arising in quantum chromodynamics simulations of particle physics; and secondly, gradient reduction, appropriate to synchronous stochastic gradient descent for machine learning. As an example, we accelerate a published Baidu Research reduction code and obtain a factor of ten speedup over the original code using the techniques discussed in this paper. This shows how a factor of ten speedup in strongly scaled distributed machine learning could be achieved when synchronous stochastic gradient descent is massively parallelised with a fixed mini-batch size. We find a significant improvement in performance robustness when memory is obtained using carefully allocated 2MB "huge" virtual memory pages, implying that non-standard allocation routines should be used for communication buffers. These can be accessed via an LD\_PRELOAD override in the manner suggested by libhugetlbfs. We make use of the Intel(R) MPI 2019 library "Technology Preview" and underlying software to enable thread concurrency throughout the communication software stack via multiple PSM2 endpoints per process and use of multiple independent MPI communicators. When using a single MPI process per node, we find that this greatly accelerates delivered bandwidth on many-core Intel(R) Xeon Phi processors.
- Nov 15 2017 cond-mat.dis-nn arXiv:1711.04252v1 Machine learning has been successfully applied to identify phases and phase transitions in condensed matter systems. However, quantitative characterization of the critical fluctuations near phase transitions is lacking. In this study we extract the critical behavior of a quantum Hall plateau transition with a convolutional neural network. We introduce a finite-size scaling approach and show that the localization length critical exponent learned by the neural network is consistent with the value obtained by conventional approaches. We illustrate the physics behind the approach by a cross-examination of the inverse participation ratios.
- Nov 08 2017 quant-ph arXiv:1711.02249v1 Current approaches to fault-tolerant quantum computation will not enable useful quantum computation on near-term devices of 50 to 100 qubits. Leading proposals, such as the color code and surface code schemes, must devote a large fraction of their physical quantum bits to quantum error correction. Building from recent quantum machine learning techniques, we propose an alternative approach to quantum error correction aimed at reducing this overhead, which can be implemented in existing quantum hardware and on a myriad of quantum computing architectures. This method aims to optimize the average fidelity of encoding and recovery circuits with respect to the actual noise in the device, as opposed to that of an artificial or approximate noise model. The quantum variational error corrector (QVECTOR) algorithm employs a quantum circuit with parameters that are variationally-optimized according to processed data originating from quantum sampling of the device, so as to learn encoding and error-recovery gate sequences. We develop this approach for the task of preserving quantum memory and analyze its performance with simulations. We find that, subject to phase damping noise, the simulated QVECTOR algorithm learns a three-qubit encoding and recovery which extend the effective T2 of a quantum memory six-fold. Subject to a continuous-time amplitude- plus phase-damping noise model on five qubits, the simulated QVECTOR algorithm learns encoding and decoding circuits which exploit the coherence among Pauli errors in the noise model to outperform the five-qubit stabilizer code and any other scheme that does not leverage such coherence. Both of these schemes can be implemented with existing hardware.
- A central task in the field of quantum computing is to find applications where a quantum computer could provide exponential speedup over any classical computer. Machine learning represents an important field with broad applications where quantum computers may offer significant speedup. Several quantum algorithms for discriminative machine learning have been found, based on efficient solution of linear algebraic problems, with potential exponential speedup in runtime under the assumption of effective input from a quantum random access memory. In machine learning, generative models represent another large class which is widely used for both supervised and unsupervised learning. Here, we propose an efficient quantum algorithm for machine learning based on a quantum generative model. We prove that our proposed model is exponentially more powerful at representing probability distributions than classical generative models and achieves exponential speedup in training and inference, at least for some instances, under a reasonable assumption in computational complexity theory. Our result opens a new direction for quantum machine learning and offers a remarkable example in which a quantum algorithm shows exponential improvement over any classical algorithm in an important application field.
- Nov 07 2017 cs.LG arXiv:1711.01464v1 The Gaussian kernel is a very popular kernel function used in many machine learning algorithms, especially in support vector machines (SVMs). For nonlinear training instances in machine learning, it often outperforms polynomial kernels in model accuracy. The Gaussian kernel is heavily used in formulating nonlinear classical SVMs. A very elegant quantum version of the least-squares support vector machine, exponentially faster than its classical counterpart, was discussed in the literature using a quantum polynomial kernel. In this paper, we demonstrate a quantum version of the Gaussian kernel and analyze its complexity, which is $O(\epsilon^{-1}\log N)$ for $N$-dimensional instances and accuracy $\epsilon$. The Gaussian kernel is not only more efficient than the polynomial kernel but also has a broader application range.
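For reference, the classical Gaussian (RBF) kernel that the quantum construction emulates is straightforward to compute; a minimal sketch:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
K = gaussian_kernel(X, X)
print(K.shape)                        # (5, 5)
print(np.allclose(np.diag(K), 1.0))   # True: k(x, x) = 1
```

The resulting matrix is symmetric and positive semi-definite, which is what makes it usable as an SVM kernel.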
- Recent work on quantum machine learning has demonstrated that quantum computers can offer dramatic improvements over classical devices for data mining, prediction and classification. However, less is known about the advantages that using quantum computers may bring in the more general setting of reinforcement learning, where learning is achieved via interaction with a task environment that provides occasional rewards. Reinforcement learning can incorporate data-analysis-oriented learning settings as special cases, but also includes more complex situations where, e.g., reinforcing feedback is delayed. In a few recent works, Grover-type amplification has been utilized to construct quantum agents that achieve up-to-quadratic improvements in learning efficiency. These encouraging results have left open the key question of whether super-polynomial improvements in learning times are possible for genuine reinforcement learning problems, that is, problems that go beyond the other, more restricted learning paradigms. In this work, we provide a family of such genuine reinforcement learning tasks and construct quantum-enhanced learners which learn super-polynomially faster than any classical reinforcement learning model.
- Oct 31 2017 cond-mat.mtrl-sci arXiv:1710.10475v1 The allotropes of boron continue to challenge structural elucidation and solid-state theory. Here we use machine learning combined with random structure searching (RSS) algorithms to systematically construct an interatomic potential for boron. Starting from ensembles of randomized atomic configurations, we use alternating single-point quantum-mechanical energy and force computations, Gaussian approximation potential (GAP) fitting, and GAP-driven RSS to iteratively generate a representation of the element's potential-energy surface. Beyond the total energies of the very different boron allotropes, our model readily provides atom-resolved, local energies and thus deepened insight into the frustrated $\beta$-rhombohedral boron structure. Our results open the door for the efficient and automated generation of GAPs and other machine-learning-based interatomic potentials, and suggest their usefulness as a tool for materials discovery.
- Oct 30 2017 quant-ph arXiv:1710.10158v1 In this paper we address the problem of using one probability space for estimating parameters and predicting future data when the observed data come from multiple contexts and thus from distinct spaces. We explain that a set-based probabilistic space might be suboptimal in the case of multiple contexts. To overcome suboptimality and reconcile multiple contexts in one space, the paper introduces the Quantum Probability Space (QPS). We also present an algorithm to calculate the QPS for data observed from multiple contexts and provide a web application that implements the algorithm. The QPS has application in Information Retrieval (IR), Machine Learning (ML) and in any domain where items should optimally be ranked and classified by some properties but under conditions of uncertainty due to context.
- We propose a new statistical model suitable for machine learning of systems with long-distance correlations, such as natural languages. The model is based on a directed acyclic graph decorated with multi-linear tensor maps at the vertices and vector spaces on the edges, called a tensor network. Such tensor networks have previously been employed for effective numerical computation of the renormalization group flow on the space of effective quantum field theories and lattice models of statistical mechanics. We provide an explicit algebro-geometric analysis of the parameter moduli space for tree graphs, and discuss model properties and applications such as statistical translation.
- Oct 27 2017 quant-ph arXiv:1710.09489v2 Machine learning has the potential to become an important tool in quantum error correction, as it allows the decoder to adapt to the error distribution of a quantum chip. An additional motivation for using neural networks is the fact that they can be evaluated by dedicated hardware which is very fast and consumes little power. Machine learning has previously been applied to decode the surface code. However, these approaches are not scalable, as the training has to be redone for every system size, which becomes increasingly difficult. In this work the existence of local decoders for higher-dimensional codes leads us to use a low-depth convolutional neural network to locally assign a likelihood of error to each qubit. For noiseless syndrome measurements, numerical simulations show that the decoder has a threshold of around $7.1\%$ when applied to the 4D toric code. When the syndrome measurements are noisy, the decoder performs better for larger code sizes when the error probability is low. We also give theoretical and numerical analysis to show how a convolutional neural network differs from the 1-nearest-neighbor algorithm, a baseline machine learning method.
- Oct 24 2017 quant-ph arXiv:1710.07871v1 Nested quantum annealing correction (NQAC) is an error-correcting scheme for quantum annealing that allows for the encoding of a logical qubit into an arbitrarily large number of physical qubits. The encoding replaces each logical qubit by a complete graph of degree $C$. The nesting level $C$ represents the distance of the error-correcting code and controls the amount of protection against thermal and control errors. Theoretical mean-field analyses and empirical data obtained with a D-Wave Two quantum annealer (supporting up to $512$ qubits) showed that NQAC has the potential to achieve a scalable effective temperature reduction, $T_{\rm eff} \sim C^{-\eta}$, with $\eta \leq 2$. We confirm that this scaling is preserved when NQAC is tested on a D-Wave 2000Q device (supporting up to $2048$ qubits). In addition, we show that NQAC can also be used in sampling problems to lower the effective temperature of a quantum annealer. Such effective temperature reduction is relevant for machine-learning applications. Since we demonstrate that NQAC achieves error correction via an effective reduction of the temperature of the quantum annealing device, our results address the "temperature scaling law for quantum annealers", which requires the temperature of quantum annealers to be reduced as larger problem sizes are attempted.
- Oct 23 2017 quant-ph arXiv:1710.07405v1 Anomaly detection is used for identifying data that deviate from 'normal' data patterns. Its usage on classical data finds diverse applications in many important areas such as fraud detection, medical diagnosis, data cleaning and surveillance. With the advent of quantum technologies, anomaly detection of quantum data, in the form of quantum states, may become an important component of quantum applications. Machine learning algorithms are playing pivotal roles in anomaly detection on classical data. Two widely used algorithms are kernel principal component analysis and the one-class support vector machine. We find corresponding quantum algorithms to detect anomalies in quantum states. We show that these two quantum algorithms can be performed using resources logarithmic in the dimensionality of the quantum states. For pure quantum states, these resources can also be logarithmic in the number of quantum states used for training the machine learning algorithm. This makes these algorithms potentially applicable to big quantum data applications.
- We demonstrate identification of the position, material, orientation and shape of objects imaged by an $^{85}$Rb atomic magnetometer performing electromagnetic induction imaging supported by machine learning. Machine learning maximizes the information extracted from the images created by the magnetometer, demonstrating the use of hidden data. Localization 2.6 times better than the spatial resolution of the imaging system and successful classification at rates up to 97$\%$ are obtained. This circumvents the need to solve the inverse problem, and demonstrates the extension of machine learning to diffusive systems such as low-frequency electrodynamics in media. Automated collection of task-relevant information from quantum-based electromagnetic imaging will have a relevant impact in areas from biomedicine to security.
- The resemblance between the methods used in studying quantum many-body physics and in machine learning has drawn considerable attention. In particular, tensor networks (TNs) and deep learning architectures bear striking similarities, to the extent that TNs can be used for machine learning. Previous results used one-dimensional TNs for image recognition, showing limited scalability and requiring a high bond dimension. In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz (MERA). This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning. By keeping the TN unitary in the training phase, TN states can be defined which optimally encode each class of images into a quantum many-body state. We study the quantum features of the TN states, including quantum entanglement and fidelity. We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks. Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods.
- Oct 13 2017 physics.chem-ph quant-ph arXiv:1710.04535v1 A challenge for molecular quantum dynamics (QD) calculations is the curse of dimensionality with respect to the nuclear degrees of freedom. A common approach that works especially well for fast reactive processes is to reduce the dimensionality of the system to the few most relevant coordinates. Identifying these can become a very difficult task, since they are often highly unintuitive. We present a machine learning approach that uses an autoencoder trained to find a low-dimensional representation of a set of molecular configurations. These configurations are generated by trajectory calculations performed on the reactive molecular systems of interest. The resulting low-dimensional representation can be used to generate a potential energy surface grid in the desired subspace. Using the G-matrix formalism to calculate the kinetic energy operator, QD calculations can be carried out on this grid. In addition to step-by-step instructions for the grid construction, we present an application to a test system.
- Machine learning, the core of artificial intelligence and big data science, is one of today's most rapidly growing interdisciplinary fields. Recently, its tools and techniques have been adopted to tackle intricate quantum many-body problems. In this work, we introduce machine learning techniques to the detection of quantum nonlocality in many-body systems, with a focus on the restricted-Boltzmann-machine (RBM) architecture. Using reinforcement learning, we demonstrate that the RBM is capable of finding the maximum quantum violations of multipartite Bell inequalities with given measurement settings. Our results build a novel bridge between computer-science-based machine learning and quantum many-body nonlocality, which will benefit future studies in both areas.
- Neural-Network Quantum States have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between Neural-Network Quantum States in the form of Restricted Boltzmann Machines and some classes of Tensor Network states in arbitrary dimension. In particular, we demonstrate that short-range Restricted Boltzmann Machines are Entangled Plaquette States, while fully connected Restricted Boltzmann Machines are String-Bond States with a non-local geometry and low bond dimension. These results shed light on the underlying architecture of Restricted Boltzmann Machines and their efficiency at representing many-body quantum states. String-Bond States also provide a generic way of enhancing the power of Neural-Network Quantum States and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them. This allows us to bring both the entanglement structure of Tensor Networks and the efficiency of Neural-Network Quantum States into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional Tensor Networks, we show that Neural-Network Quantum States and their String-Bond State extension can describe a lattice Fractional Quantum Hall state exactly. In addition, we provide numerical evidence that Neural-Network Quantum States can approximate a chiral spin liquid with better accuracy than Entangled Plaquette States and local String-Bond States. Our results demonstrate the efficiency of neural networks at describing complex quantum wave functions and pave the way towards the use of String-Bond States as a tool in more traditional machine learning applications.
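The Restricted Boltzmann Machine wavefunction underlying this correspondence assigns each spin configuration $s$ the amplitude $\psi(s) = e^{\sum_i a_i s_i} \prod_j 2\cosh(b_j + \sum_i W_{ij} s_i)$; a few lines of code make this concrete (random toy parameters, not a trained state):

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """RBM wavefunction amplitude for visible spins s in {-1, +1}^n:
    psi(s) = exp(a . s) * prod_j 2 cosh(b_j + (W^T s)_j)."""
    theta = b + W.T @ s
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(2)
n, m = 4, 3                     # visible spins, hidden units
a = 0.1 * rng.standard_normal(n)
b = 0.1 * rng.standard_normal(m)
W = 0.1 * rng.standard_normal((n, m))

# Amplitudes over all 2^n spin configurations define an (unnormalised) state.
configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(n)]
                    for k in range(2**n)])
psi = np.array([rbm_amplitude(s, a, b, W) for s in configs])
print(psi.shape)  # (16,)
```

With real parameters the amplitudes are strictly positive; complex parameters are what allow the ansatz to carry the phases needed for, e.g., chiral states.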
- Oct 11 2017 quant-ph arXiv:1710.03599v1 Quantum computing allows for the potential of significant advancements in both the speed and the capacity of widely-used machine learning algorithms. In this paper, we introduce quantum algorithms for a recurrent neural network, the Hopfield network, which can be used for pattern recognition, reconstruction, and optimization as a realization of a content addressable memory system. We show that an exponentially large network can be stored in a polynomial number of quantum bits by encoding the network into the amplitudes of quantum states. By introducing a new classical technique for operating such a network, we can leverage quantum techniques to obtain a quantum computational complexity that is logarithmic in the dimension of the data. This potentially yields an exponential speed-up in comparison to classical approaches. We present an application of our method as a genetic sequence recognizer.
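For orientation, the classical Hopfield network that the quantum algorithm encodes works by Hebbian storage of patterns in a weight matrix followed by iterated sign updates; a minimal sketch with two hand-picked patterns:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: W = (1/n) sum_p p p^T, with zeroed diagonal."""
    P = np.array(patterns, dtype=float)
    n = P.shape[1]
    W = (P.T @ P) / n
    np.fill_diagonal(W, 0.0)          # no self-coupling
    return W

def recall(W, state, steps=10):
    """Content-addressable recall by iterated sign updates (synchronous,
    for simplicity; asynchronous updates guarantee convergence)."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

patterns = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
W = train_hopfield(patterns)

corrupted = [1, -1, 1, -1, 1, 1]      # first pattern with one flipped bit
print(recall(W, corrupted))           # recovers [1, -1, 1, -1, 1, -1]
```

The quantum construction in the abstract stores such a network in amplitudes, so the number of qubits grows only logarithmically with the number of neurons.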
- Oct 11 2017 quant-ph cond-mat.str-el arXiv:1710.03545v1 Correlator product states (CPS) are a powerful and very broad class of states for quantum lattice systems whose amplitudes can be sampled exactly and efficiently. They work by gluing together states of overlapping clusters of sites on the lattice, called correlators. Recently, Carleo and Troyer [Science 355, 602 (2017)] introduced a new type of sampleable ansatz called neural-network quantum states (NQS), inspired by the restricted Boltzmann machine used in machine learning. By employing the formalism of tensor networks we show that NQS are a special form of CPS with novel properties. Diagrammatically, a number of simple observations become transparent. Namely, NQS are CPS built from extensively sized GHZ-form correlators, which are related to a canonical polyadic decomposition of a tensor, making them uniquely unbiased geometrically. Another immediate implication of the equivalence to CPS is that we are able to formulate exact NQS representations for a wide range of paradigmatic states, including superpositions of weighted-graph states, the Laughlin state, toric code states, and the resonating valence bond state. These examples reveal the potential of using higher-dimensional hidden units and a second hidden layer in NQS. The major outlook of this study is the elevation of NQS to correlator operators, allowing them to enhance conventional well-established variational Monte Carlo approaches for strongly correlated fermions.
- Inspired by the recent work of Carleo and Troyer [1], we apply machine learning methods to quantum mechanics in this article. A radial basis function network in a discrete basis is used as the variational wavefunction for the ground state of a quantum system. Variational Monte Carlo (VMC) calculations are carried out for some simple Hamiltonians. The results are in good agreement with theoretical values. The smallest eigenvalue of a Hermitian matrix can also be obtained using VMC calculations. Our results demonstrate that machine learning techniques are capable of solving quantum mechanical problems.
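The "smallest eigenvalue of a Hermitian matrix" task has a simple deterministic analogue of the variational approach: gradient descent on the Rayleigh quotient (here without Monte Carlo sampling or a network ansatz, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                     # a real symmetric (Hermitian) matrix

# Minimise the Rayleigh quotient E(v) = <v|H|v>/<v|v> by projected
# gradient descent; by the variational principle, min_v E(v) = lambda_min.
v = rng.standard_normal(n)
lr = 0.05
for _ in range(3000):
    v /= np.linalg.norm(v)
    e = v @ H @ v                     # current energy estimate
    grad = 2.0 * (H @ v - e * v)      # gradient on the unit sphere
    v -= lr * grad

estimate = v @ H @ v / (v @ v)
exact = np.linalg.eigvalsh(H)[0]
print(estimate, exact)
```

In VMC the same minimisation is done stochastically, with the energy and its gradient estimated by sampling configurations from the trial wavefunction.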
- Oct 06 2017 quant-ph arXiv:1710.01794v1 Heterogeneous high-performance computing (HPC) systems offer novel architectures accommodating specialized processors that accelerate specific workloads. Near-term quantum computing technologies are poised to benefit applications as wide-ranging as quantum chemistry, machine learning, and optimization. A novel approach to scale these applications with future heterogeneous HPC is to enhance conventional computing systems with quantum processor accelerators. We present the eXtreme-scale ACCelerator programming model (XACC) to enable near-term quantum acceleration within existing conventional HPC applications and workflows. We design and demonstrate the XACC programming model within the C++ language by following a coprocessor machine model akin to the design of OpenCL or CUDA for GPUs. However, we take into account the subtleties and complexities inherent to the interplay between conventional and quantum processing hardware. The XACC framework provides a high-level API that enables applications to offload computational work represented as quantum kernels for execution on an attached quantum accelerator. Our approach is agnostic to the quantum programming language and the quantum processor hardware, which enables quantum programs to be ported to multiple processors for benchmarking, verification and validation, and performance studies. This includes a set of virtual numerical simulators as well as actual quantum processing units. The XACC programming model and its reference implementation may serve as a foundation for future HPC-ready applications, data structures, and libraries using conventional-quantum hybrid computing.
- Studying general quantum many-body systems is one of the major challenges in modern physics, because it requires an amount of computational resources that scales exponentially with the size of the system. Simulating the evolution of a state, or even storing its description, rapidly becomes intractable for exact classical algorithms. Recently, machine learning techniques, in the form of restricted Boltzmann machines, have been proposed as a way to efficiently represent certain quantum states, with applications in state tomography and ground state estimation. Here, we introduce a new representation of states based on variational autoencoders. Variational autoencoders are a type of generative model in the form of a neural network. We probe the power of this representation by encoding probability distributions associated with states from different classes. Our simulations show that deep networks give a better representation for states that are hard to sample from, while providing no benefit for random states. This suggests that the probability distributions associated with hard quantum states might have a compositional structure that can be exploited by layered neural networks. Specifically, we consider the learnability of a class of quantum states introduced by Fefferman and Umans. Such states are provably hard to sample for classical computers, but not for quantum ones, under plausible computational complexity assumptions. The good level of compression achieved for hard states suggests these methods can be suitable for characterising states of the size expected in first-generation quantum hardware.
- We study quantum synchronization between a pair of two-level systems inside two coupled cavities. Using a digital-analog decomposition of the master equation that rules the system dynamics, we show that this approach leads to quantum synchronization between both two-level systems. Moreover, we can identify in this digital-analog block decomposition the fundamental elements of a quantum machine learning protocol, in which the agent and the environment (learning units) interact through a mediating system, namely, the register. If we additionally equip this algorithm with a classical feedback mechanism, consisting of projective measurements in the register, reinitialization of the register state, and local conditional operations on the agent and register subspace, a powerful and flexible quantum machine learning protocol emerges. Indeed, numerical simulations show that this protocol enhances the synchronization process even when each subsystem experiences different loss/decoherence mechanisms, and gives us the flexibility to choose the synchronization state. Finally, we propose an implementation based on current technologies in superconducting circuits.
- We propose a protocol to perform generalized quantum reinforcement learning with quantum technologies. At variance with recent results on quantum reinforcement learning with superconducting circuits [L. Lamata, Sci. Rep. 7, 1609 (2017)], in our current protocol coherent feedback during the learning process is not required, enabling its implementation in a wide variety of quantum systems. We consider diverse possible scenarios for an agent, an environment, and a register that connects them, involving multiqubit and multilevel systems, as well as open-system dynamics. We finally propose possible implementations of this protocol in trapped ions and superconducting circuits. The field of quantum reinforcement learning with quantum technologies will enable enhanced quantum control, as well as more efficient machine learning calculations.
- The quantum autoencoder is a recent paradigm in the field of quantum machine learning, which may enable an enhanced use of resources in quantum technologies. To this end, quantum neural networks with fewer nodes in the inner than in the outer layers were considered. Here, we propose a useful connection between approximate quantum adders and quantum autoencoders. Specifically, this link allows us to employ optimized approximate quantum adders, obtained with genetic algorithms, for the implementation of quantum autoencoders for a variety of initial states. Furthermore, we can also directly optimize the quantum autoencoders via genetic algorithms. Our approach opens a different path for the design of quantum autoencoders in controllable quantum platforms.
- Sep 21 2017 quant-ph arXiv:1709.06678v1 Fundamental questions in chemistry and physics may never be answered due to the exponential complexity of the underlying quantum phenomena. A desire to overcome this challenge has sparked a new industry of quantum technologies with the promise that engineered quantum systems can address these hard problems. A key step towards demonstrating such a system will be performing a computation beyond the capabilities of any classical computer, achieving so-called quantum supremacy. Here, using 9 superconducting qubits, we demonstrate an immediate path towards quantum supremacy. By individually tuning the qubit parameters, we are able to generate thousands of unique Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits in the algorithm is varied, the system continues to explore the exponentially growing number of states. Combining these large datasets with techniques from machine learning allows us to construct a model which accurately predicts the measured probabilities. We demonstrate an application of these algorithms by systematically increasing the disorder and observing a transition from delocalized states to localized states. By extending these results to a system of 50 qubits, we hope to address scientific questions that are beyond the capabilities of any classical computer.
- We develop a machine learning method to construct accurate ground-state wave functions of strongly interacting and entangled quantum spin as well as fermionic models on lattices. A restricted Boltzmann machine algorithm in the form of an artificial neural network is combined with a conventional variational Monte Carlo method with pair product (geminal) wave functions and quantum number projections. The combined method substantially improves the accuracy beyond that ever achieved by each method separately, in the Heisenberg as well as Hubbard models on square lattices, thus proving its power as a highly accurate quantum many-body solver.
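The restricted-Boltzmann-machine ansatz used above can be sketched in a few lines. This is a minimal toy with arbitrary illustrative parameters (not those of the paper, and without the variational Monte Carlo, geminal, or projection machinery), just evaluating the unnormalized RBM amplitudes for two spins and normalizing by brute force:

```python
import math
from itertools import product

# Toy restricted Boltzmann machine (RBM) wavefunction for two spins s_i = ±1:
#   psi(s) = exp(sum_i a_i s_i) * prod_j 2*cosh(b_j + sum_i W_ji s_i)
# All parameter values below are arbitrary illustrative choices.
a = [0.1, -0.2]          # visible biases
b = [0.3]                # hidden bias (single hidden unit)
W = [[0.5, -0.4]]        # hidden-to-visible couplings

def rbm_amplitude(s):
    visible = math.exp(sum(ai * si for ai, si in zip(a, s)))
    hidden = 1.0
    for bj, Wj in zip(b, W):
        hidden *= 2.0 * math.cosh(bj + sum(w * si for w, si in zip(Wj, s)))
    return visible * hidden

configs = list(product([-1, 1], repeat=2))
norm = sum(rbm_amplitude(s) ** 2 for s in configs)
probs = {s: rbm_amplitude(s) ** 2 / norm for s in configs}
```

In the actual method, this exact enumeration is replaced by Monte Carlo sampling, since the configuration space grows exponentially.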
- Sep 19 2017 quant-ph arXiv:1709.05381v2 NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We present results of experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
- We develop a variational method to obtain many-body ground states of the Bose-Hubbard model using feedforward artificial neural networks. A fully-connected network with a single hidden layer works better than a fully-connected network with multiple hidden layers, and a multi-layer convolutional network is more efficient than a fully-connected network. Among optimization methods, we find that AdaGrad and Adam work well. Moreover, we show that many-body ground states with different numbers of atoms can be generated by a single network.
- In this letter, we apply the artificial neural network in a supervised manner to map out the quantum phase diagram of a disordered topological superconductor in class DIII. Given disorder that keeps the discrete symmetries of the ensemble as a whole, translational symmetry, which is broken in each individual quasiparticle distribution, is recovered statistically by taking an ensemble average. Using this, we classify the phases with an artificial neural network that learned the quasiparticle distribution in the clean limit, and show that the result is fully consistent with calculations by the transfer matrix method or the noncommutative geometry approach. If all three phases, namely the $\mathbb{Z}_2$, trivial, and thermal metal phases, appear in the clean limit, the machine can classify them with high confidence over the entire phase diagram. If only the former two phases are present, we find that the machine remains confused in a certain region, leading us to conclude that it has detected an unknown phase, which is eventually identified as the thermal metal phase. In our method, only the first moment of the quasiparticle distribution is used for input, but application to a wider variety of systems is expected through the inclusion of higher moments.
- Entanglement not only plays a crucial role in quantum technologies, but is key to our understanding of quantum correlations in many-body systems. However, in an experiment, the only way of measuring entanglement in a generic mixed state is through reconstructive quantum tomography, requiring an exponential number of measurements in the system size. Here, we propose an operational scheme to measure the entanglement --- as given by the negativity --- between arbitrary subsystems of size $N_A$ and $N_B$, with $\mathcal{O}(N_A + N_B)$ measurements, and without any prior knowledge of the state. We propose how to experimentally measure the partially transposed moments of a density matrix, and using just the first few of these, extract the negativity via Chebyshev approximation or machine learning techniques. Our procedure will allow entanglement measurements in a wide variety of systems, including strongly interacting many body systems in both equilibrium and non-equilibrium regimes.
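The partially transposed moments at the heart of the scheme above can be illustrated classically. The sketch below (our own toy check, not the measurement protocol or the Chebyshev/machine-learning reconstruction step) builds the two-qubit Bell state, takes the partial transpose, and verifies that its trace moments match the known PT spectrum $\{1/2, 1/2, 1/2, -1/2\}$, whose negative eigenvalue gives negativity $1/2$:

```python
# Toy check of partially transposed (PT) moments for the two-qubit Bell state.
# For |Phi+> the PT spectrum is {1/2, 1/2, 1/2, -1/2}, so
#   tr[(rho^T_B)^n] = 3*(1/2)^n + (-1/2)^n.

def idx(a, b):
    return 2 * a + b   # basis index of |a>|b>

# Density matrix of |Phi+> = (|00> + |11>)/sqrt(2)
rho = [[0.0] * 4 for _ in range(4)]
for ab in [(0, 0), (1, 1)]:
    for apbp in [(0, 0), (1, 1)]:
        rho[idx(*ab)][idx(*apbp)] = 0.5

# Partial transpose on subsystem B: sigma[(a,b),(a',b')] = rho[(a,b'),(a',b)]
sigma = [[0.0] * 4 for _ in range(4)]
for aa in range(2):
    for bb in range(2):
        for ap in range(2):
            for bp in range(2):
                sigma[idx(aa, bb)][idx(ap, bp)] = rho[idx(aa, bp)][idx(ap, bb)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

P, moments = sigma, []
for n in range(1, 5):
    moments.append(sum(P[i][i] for i in range(4)))  # tr[(rho^T_B)^n]
    P = matmul(P, sigma)
```

The point of the paper is that these few moments can be measured with $\mathcal{O}(N_A + N_B)$ resources and the negativity extracted from them, without ever diagonalizing the state.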
- Sep 18 2017 quant-ph arXiv:1709.05015v3 Quantum walks on graphs have shown benefits and applications in a wide range of areas. In some scenarios, however, it may be more natural and accurate to model high-order relationships with hypergraphs, due to the density of the information they inherently store. Therefore, we can explore the potential of quantum walks on hypergraphs. In this paper, by presenting the one-to-one correspondence between regular uniform hypergraphs and bipartite graphs, we construct a model for quantum walks on the bipartite graphs of regular uniform hypergraphs with Szegedy's quantum walks, which gives rise to a quadratic speed-up. Furthermore, we deliver spectral properties of the transition matrix, given that the cardinalities of the two disjoint sets are different in the bipartite graph. Our model provides the foundation for building quantum algorithms on the strength of quantum walks, such as quantum walk search, quantized Google's PageRank, and quantum machine learning, based on hypergraphs.
- Sep 13 2017 quant-ph arXiv:1709.03617v1 This paper introduces the forest algorithm, an algorithm that can detect entanglement through the use of decision trees generated by machine learning. Tests against similar tomography-based detection algorithms using experimental data and numerical simulations indicate that, once trained, the proposed algorithm outperforms previous approaches. The results identify entanglement detection as another area of quantum information where machine learning can play a helpful role.
- Quantum information technologies, and intelligent learning systems, are both emergent technologies that will likely have a transforming impact on our society. The respective underlying fields of research -- quantum information (QI) versus machine learning (ML) and artificial intelligence (AI) -- have their own specific challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question to what extent these fields can learn and benefit from each other. Quantum machine learning (QML) explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Recently, we have witnessed breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups in ML, critical in our "big data" world. Conversely, ML already permeates cutting-edge technologies, and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical ML optimization used in quantum experiments, quantum enhancements have also been demonstrated for interactive learning, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of AI for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement, researchers have also broached the fundamental issue of quantum generalizations of ML/AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is described by quantum mechanics. In this review, we describe the main ideas, recent developments, and progress in a broad spectrum of research investigating machine learning and artificial intelligence in the quantum domain.
- Sep 08 2017 physics.soc-ph cond-mat.quant-gas cs.CY physics.atom-ph physics.data-an quant-ph arXiv:1709.02230v1 Despite recent advances driven by machine learning algorithms, experts agree that such algorithms are still often unable to match the experience-based and intuitive problem solving skills of humans in highly complex settings. Recent studies have demonstrated how the intuition of lay people in citizen science games [1] and the experience of fusion scientists [2] have assisted automated search algorithms by restricting the size of the active search space, leading to optimized results. Humans, thus, have an uncanny ability to detect patterns and solution strategies based on observations, calculations, or physical insight. Here we explore the fundamental question: Are these strategies truly distinct or merely labels we attach to different points in a high dimensional continuum of solutions? In the latter case, our human desire to identify patterns may lead us to terminate search too early. We demonstrate that this is the case in a theoretical study of single atom transport in an optical tweezer, where more than 200,000 citizen scientists helped probe the Quantum Speed Limit [1]. With this insight, we develop a novel, entirely deterministic global search methodology yielding dramatically improved results. We demonstrate that this "bridging" of solution strategies can also be applied to closed-loop optimization of the production of Bose-Einstein condensates. Here we find improved solutions using two implementations of a novel remote interface. First, a team of theoretical optimal control researchers employed a remote version of their dCRAB optimization algorithm (RedCRAB); second, a gamified interface allowed 600 citizen scientists from around the world to participate in the optimization.
Finally, the "real world" nature of such problems allow for an entirely novel approach to the study of human problem solving, enabling us to run a hypothesis-driven social science experiment "in the wild".
- Generative modeling, which learns a joint probability distribution from training data and generates samples according to it, is an important task in machine learning and artificial intelligence. Inspired by the probabilistic interpretation of quantum physics, we propose a generative model using matrix product states, a tensor network originally proposed for describing (particularly one-dimensional) entangled quantum states. Our model enjoys efficient learning by utilizing the density matrix renormalization group method, which allows dynamically adjusting the dimensions of the tensors, and offers an efficient direct sampling approach, Zipper, for generative tasks. We apply our method to generative modeling of several standard datasets, including the principled Bars and Stripes, random binary patterns, and the MNIST handwritten digits, to illustrate the ability of our model, and discuss features as well as drawbacks of our model compared with popular generative models such as the Hopfield model, Boltzmann machines, and generative adversarial networks. Our work sheds light on many interesting directions for future exploration of quantum-inspired algorithms for unsupervised machine learning, which could potentially be realized by a quantum device.
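The core idea of an MPS-based generative model can be sketched very compactly: a bitstring's amplitude is a product of matrices, one per site, and probabilities follow the Born rule. This is a minimal toy with fixed hand-picked tensors (not a trained model, and without the DMRG learning or Zipper sampling of the paper):

```python
from itertools import product

# Minimal matrix product state (MPS) over three binary sites with fixed toy
# tensors. Amplitude: psi(x) = L[x0] · A[x1] · R[x2] (bond dimension 2);
# probabilities follow the Born rule p(x) = psi(x)^2 / Z.
L = {0: [1.0, 0.2], 1: [0.3, 0.9]}               # left boundary vectors
A = {0: [[0.8, 0.1], [0.0, 0.5]],                # bulk matrices
     1: [[0.2, 0.7], [0.6, 0.1]]}
R = {0: [0.9, 0.1], 1: [0.2, 1.0]}               # right boundary vectors

def amplitude(x):
    v = L[x[0]]
    v = [sum(v[i] * A[x[1]][i][j] for i in range(2)) for j in range(2)]
    return sum(v[j] * R[x[2]][j] for j in range(2))

bitstrings = list(product([0, 1], repeat=3))
Z = sum(amplitude(x) ** 2 for x in bitstrings)   # normalization
p = {x: amplitude(x) ** 2 / Z for x in bitstrings}
```

For longer chains the brute-force normalization above is replaced by tensor contractions, and direct sampling proceeds site by site rather than by enumeration.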
- Motivated by the close relations of the renormalization group with both the holographic duality and deep learning, we propose that holographic geometry can emerge from deep learning the entanglement feature of a quantum many-body state. We develop a concrete algorithm, called entanglement feature learning (EFL), based on the random tensor network (RTN) model for tensor network holography. We show that each RTN can be mapped to a Boltzmann machine, trained by the entanglement entropies over all subregions of a given quantum many-body state. The goal is to construct the optimal RTN that best reproduces the entanglement feature. The RTN geometry can then be interpreted as the emergent holographic geometry. We demonstrate the EFL algorithm on a 1D free fermion system and observe the emergence of hyperbolic geometry (AdS$_3$ spatial geometry) as we tune the fermion system towards the gapless critical point (CFT$_2$ point).
- We report a proof-of-principle experimental demonstration of the quantum speed-up for learning agents utilizing a small-scale quantum information processor based on radiofrequency-driven trapped ions. The decision-making process of a quantum learning agent within the projective simulation paradigm for machine learning is implemented in a system of two qubits. The latter are realized using hyperfine states of two frequency-addressed atomic ions exposed to a static magnetic field gradient. We show that the deliberation time of this quantum learning agent is quadratically improved with respect to comparable classical learning agents. The performance of this quantum-enhanced learning agent highlights the potential of scalable quantum processors taking advantage of machine learning.
- Fundamental open questions about the two-dimensional Holstein model of electrons coupled to quantum phonons are addressed by continuous-time quantum Monte Carlo simulations. The critical temperature of the charge-density-wave transition is determined by a finite-size scaling of a renormalization-group-invariant correlation ratio. $T_c$ is finite for any nonzero coupling for classical phonons, and suppressed by quantum lattice fluctuations. The phase transition---also detectable via the fidelity susceptibility and machine learning---is demonstrated to be in the universality class of the two-dimensional Ising model. We discuss the possibility of $T_c=0$ at weak coupling and present evidence for a spin-gapped, bipolaronic metal above $T_c$.
- With quantum computing technologies nearing the era of commercialization and quantum supremacy, machine learning (ML) appears as one of the promising "killer" applications. Despite significant effort, there has been a disconnect between most quantum machine learning proposals, the needs of ML practitioners, and the capabilities of near-term quantum devices to demonstrate quantum enhancement in the near future. In this contribution to the focus collection on "What would you do with 1000 qubits?", we provide concrete examples of intractable ML tasks that could be enhanced with near-term devices. We argue that to reach this target, the focus should be on areas where ML researchers are still struggling, such as generative models in unsupervised or semisupervised learning, instead of the popular and much more tractable ML techniques. We also highlight the case of classical datasets with potential quantum-like statistical correlations where quantum models could be more suitable. We focus on hybrid quantum-classical approaches and illustrate some of the key challenges we foresee for near-term implementations. Finally, we introduce the quantum-assisted Helmholtz machine (QAHM); an attempt to use near-term quantum devices to tackle high-resolution datasets on continuous variables. Instead of using quantum computers to assist deep learning, as previous approaches do, the QAHM uses deep learning to extract a low-dimensional binary representation of data, suitable for relatively small quantum processors which can assist the training of an unsupervised generative model. Although we illustrate this concept on a quantum annealer, other quantum platforms could benefit as well from this hybrid quantum-classical framework.
- In this Letter we train neural networks in a supervised manner to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results demonstrate, remarkably, that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.
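The discrete winding number formula referred to above can be sketched directly. The example below uses an SSH-like chiral Hamiltonian $h(k) = t_1 + t_2 e^{ik}$ (our own illustrative choice, not necessarily the paper's training models) and sums wrapped phase increments of $h(k)$ around the Brillouin zone:

```python
import math

def winding_number(t1, t2, nk=400):
    # Discrete winding number of h(k) = t1 + t2*exp(i k): sum the phase
    # increments of h around the Brillouin zone, each wrapped into (-pi, pi].
    total = 0.0
    for n in range(nk):
        k0 = 2.0 * math.pi * n / nk
        k1 = 2.0 * math.pi * (n + 1) / nk
        th0 = math.atan2(t2 * math.sin(k0), t1 + t2 * math.cos(k0))
        th1 = math.atan2(t2 * math.sin(k1), t1 + t2 * math.cos(k1))
        d = th1 - th0
        if d <= -math.pi:        # wrap the increment into (-pi, pi]
            d += 2.0 * math.pi
        elif d > math.pi:
            d -= 2.0 * math.pi
        total += d
    return total / (2.0 * math.pi)

# Topological phase (|t2| > |t1|) winds once around the origin; the trivial
# phase (|t1| > |t2|) does not.
w_topo = winding_number(0.5, 1.0)
w_triv = winding_number(1.0, 0.5)
```

The claim of the paper is that a trained network effectively internalizes a formula of this kind from local Hamiltonian inputs.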
- Machine learning has been presented as one of the key applications for near-term quantum technologies, given its high commercial value and wide range of applicability. In this work, we introduce the quantum-assisted Helmholtz machine: a hybrid quantum-classical framework with the potential of tackling high-dimensional real-world machine learning datasets on continuous variables. Instead of using quantum computers to only assist deep learning, as previous approaches have suggested, we use deep learning to extract a low-dimensional binary representation of data, suitable for relatively small quantum processors which can assist the training of an unsupervised generative model. To demonstrate this concept on a real-world dataset, we used 1644 quantum bits of a noisy non-fault-tolerant quantum device, the D-Wave 2000Q, to assist the training of a sub-sampled version of the MNIST handwritten digit dataset with 16 x 16 continuous valued pixels. Although we illustrate this concept on a quantum annealer, adaptations to other quantum platforms, such as ion-trap technologies or superconducting gate-model architectures, could be explored within this flexible framework.
- Aug 25 2017 physics.gen-ph hep-th arXiv:1708.07408v1 In this essay we conjecture that quantum fields such as the Higgs field are related to restricted Boltzmann machines for deep neural networks. An accelerating Rindler observer in a flat spacetime sees the quantum fields having a thermal distribution from the quantum entanglement, and a renormalization group process for the thermal fields on a lattice is similar to a deep learning algorithm. This correspondence can be generalized to the KMS states of quantum fields in a curved spacetime like a black hole.
- Aug 22 2017 cond-mat.mtrl-sci arXiv:1708.06017v1 Machine learning (ML) of quantum mechanical properties shows promise for accelerating chemical discovery. For transition metal chemistry, where accurate calculations are computationally costly and available training data sets are small, the molecular representation becomes a critical ingredient in ML model predictive accuracy. We introduce a series of revised autocorrelation functions (RACs) that encode relationships between heuristic atomic properties (e.g., size, connectivity, and electronegativity) on a molecular graph. We alter the starting point, scope, and nature of the quantities evaluated in standard ACs to make these RACs amenable to inorganic chemistry. On an organic molecule set, we first demonstrate superior standard AC performance to other presently-available topological descriptors for ML model training, with mean unsigned errors (MUEs) for atomization energies on set-aside test molecules as low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs on set-aside test molecules in spin-state splitting, in comparison to 15-20x higher errors from feature sets that encode whole-molecule structural information. Systematic feature selection methods including univariate filtering, recursive feature elimination, and direct optimization (e.g., random forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5x smaller than RAC-155 produce sub- to 1-kcal/mol spin-splitting MUEs, with good transferability to metal-ligand bond length prediction (0.004-5 Å MUE) and redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature selection results across property sets reveals the relative importance of local, electronic descriptors (e.g., electronegativity, atomic number) in spin-splitting and distal, steric effects in redox potential and bond lengths.
- Aug 22 2017 quant-ph arXiv:1708.05753v1 Clustering is a powerful machine learning technique that groups "similar" data points based on their characteristics. Many clustering algorithms work by approximating the minimization of an objective function, namely the sum of within-the-cluster distances between points. The straightforward approach involves examining all the possible assignments of points to each of the clusters. This approach guarantees the solution will be a global minimum; however, the number of possible assignments scales quickly with the number of data points and becomes computationally intractable even for very small datasets. In order to circumvent this issue, cost function minima are found using popular local-search-based heuristic approaches such as k-means and hierarchical clustering. Due to their greedy nature, such techniques do not guarantee that a global minimum will be found and can lead to sub-optimal clustering assignments. Other classes of global-search-based techniques, such as simulated annealing, tabu search, and genetic algorithms, may offer better quality results but can be too time consuming to implement. In this work, we describe how quantum annealing can be used to carry out clustering. We map the clustering objective to a quadratic unconstrained binary optimization (QUBO) problem and discuss two clustering algorithms which are then implemented on commercially-available quantum annealing hardware, as well as on a purely classical solver, "qbsolv." The first algorithm assigns N data points to K clusters, and the second one can be used to perform binary clustering in a hierarchical manner. We present our results in the form of benchmarks against the well-known k-means clustering and discuss the advantages and disadvantages of the proposed techniques.
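A QUBO formulation of clustering in this spirit can be sketched on a toy instance. Here binary variables one-hot encode cluster membership, the objective sums within-cluster squared distances plus a constraint penalty, and a brute-force minimization stands in for the annealer or "qbsolv" (our own illustrative data and penalty weight, not the paper's exact encoding):

```python
from itertools import product

# Toy clustering as a QUBO: x[i][k] = 1 assigns point i to cluster k.
# Objective: within-cluster squared distances + one-hot penalty per point.
points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
N, K, PENALTY = len(points), 2, 1000.0

def sqdist(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def energy(bits):
    x = [bits[i * K:(i + 1) * K] for i in range(N)]
    e = 0.0
    for k in range(K):                       # within-cluster distances
        for i in range(N):
            for j in range(i + 1, N):
                e += sqdist(points[i], points[j]) * x[i][k] * x[j][k]
    for i in range(N):                       # one-hot constraint penalty
        e += PENALTY * (sum(x[i]) - 1) ** 2
    return e

# Brute-force global minimum (2^(N*K) states) stands in for the annealer.
best = min(product([0, 1], repeat=N * K), key=energy)
labels = [max(range(K), key=lambda k: best[i * K + k]) for i in range(N)]
```

On this instance the minimum-energy assignment groups the two nearby pairs of points into separate clusters, as expected.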
- We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
- Aug 02 2017 quant-ph cond-mat.mes-hall arXiv:1708.00238v1 In the near future, more and more laborious tasks will be replaced by machines. In the context of quantum control, the question is: can machines replace human beings in designing reliable quantum control methods? Here we investigate the performance of machine learning in composing composite pulse sequences, which are indispensable for universal control of singlet-triplet spin qubits. Subject to the control constraints, one can in principle construct a sequence of composite pulses to achieve an arbitrary rotation of a singlet-triplet spin qubit. In the absence of noise, such sequences are required to perform arbitrary single-qubit operations due to the special control constraint of a singlet-triplet qubit; furthermore, even in a noisy environment, it is possible to develop sophisticated pulse sequences to dynamically compensate the errors. However, tailoring these sequences is in general a resource-consuming process, in which a numerical search for the solution of certain non-linear equations is required. Here we demonstrate that these composite-pulse sequences can be efficiently generated by a well-trained, double-layer neural network. For sequences designed for the noise-free case, the trained neural network is capable of producing almost exactly the same pulses developed in the literature. For more complicated noise-correcting sequences, the neural network produces pulses with a slightly different line-shape, but the robustness against noise remains about the same. These results indicate that the neural network can be a judicious and powerful alternative to existing techniques for developing pulse sequences for universal fault-tolerant quantum computation.
- Aug 01 2017 quant-ph arXiv:1707.09524v2 In recent years, quantum computing has been shown to be powerful in efficiently solving various machine learning problems, one of the most representative examples being linear regression. However, all the previous quantum linear regression algorithms only consider ordinary linear regression (OLR), which is very unsatisfactory in practice when encountering multicollinearity of independent variables and overfitting. In this letter, we address a more general version of OLR---ridge regression---on a quantum computer, which can effectively circumvent these two difficulties by introducing a regularization parameter ($\alpha$) into the optimization of the fitting parameters. In particular, we suggest two quantum algorithms for tackling the two main and fundamental problems in ridge regression, namely estimating the optimal fitting parameters and choosing a good $\alpha$. The first algorithm generates a quantum state that encodes the optimal fitting parameters in its amplitudes, and the second one adopts $K$-fold cross validation to choose a good $\alpha$ with which ridge regression achieves good predictive performance. Both algorithms are shown to be exponentially faster than their classical counterparts when the design matrix admits low-rank approximation and is of low condition number.
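As classical context for the two quantum subroutines above, a minimal sketch of ridge regression and cross-validated selection of $\alpha$ for a single feature without intercept (toy data of our own, where the closed form reduces to $w(\alpha) = (x \cdot y)/(x \cdot x + \alpha)$):

```python
# Classical ridge regression baseline, single feature, no intercept:
#   w(alpha) = (x·y) / (x·x + alpha)
# so the regularization parameter alpha shrinks the fitted slope toward zero.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]        # roughly y = 2x plus noise

def ridge_slope(xs, ys, alpha):
    return sum(a * b for a, b in zip(xs, ys)) / (sum(a * a for a in xs) + alpha)

def loo_cv_error(alpha):
    # leave-one-out cross validation (K-fold with K = number of samples)
    err = 0.0
    for i in range(len(x)):
        xs, ys = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        w = ridge_slope(xs, ys, alpha)
        err += (y[i] - w * x[i]) ** 2
    return err / len(x)

slopes = [ridge_slope(x, y, al) for al in (0.0, 1.0, 10.0)]   # shrinkage
best_alpha = min((0.0, 0.1, 1.0, 10.0), key=loo_cv_error)     # choose alpha
```

The quantum algorithms of the paper target the high-dimensional analogue, where the matrix inverse in $w = (X^T X + \alpha I)^{-1} X^T y$ dominates the classical cost.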
- With the extensive applications of machine learning, the issue of private or sensitive data in the training examples becomes more and more serious: during the training process, personal information or habits may be disclosed to unexpected persons or organisations, which can cause serious privacy problems or even financial loss. In this paper, we present a quantum privacy-preserving algorithm for machine learning with perceptron. There are mainly two steps to protect original training examples. Firstly when checking the current classifier, quantum tests are employed to detect data user's possible dishonesty. Secondly when updating the current classifier, private random noise is used to protect the original data. The advantages of our algorithm are: (1) it protects training examples better than the known classical methods; (2) it requires no quantum database and thus is easy to implement.
- Aug 01 2017 cond-mat.dis-nn cond-mat.quant-gas arXiv:1707.09723v1 Motivated by the recent successful application of artificial neural networks to quantum many-body problems [G. Carleo and M. Troyer, Science 355, 602 (2017)], a method to calculate the ground state of the Bose-Hubbard model using a feedforward neural network is proposed. The results are in good agreement with those obtained by exact diagonalization and the Gutzwiller approximation. The method of neural-network quantum states is promising for solving quantum many-body problems of ultracold atoms in optical lattices.
- Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets are motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed-up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed.
- Jul 21 2017 physics.chem-ph stat.ML arXiv:1707.06338v1 Understanding the relationship between the structure of light-harvesting systems and their excitation energy transfer properties is of fundamental importance in many applications, including the development of next generation photovoltaics. Natural light harvesting in photosynthesis shows remarkable excitation energy transfer properties, which suggests that pigment-protein complexes could serve as blueprints for the design of nature-inspired devices. Mechanistic insights into energy transport dynamics can be gained by leveraging numerically involved propagation schemes such as the hierarchical equations of motion (HEOM). Solving these equations, however, is computationally costly due to the adverse scaling with the number of pigments. Therefore virtual high-throughput screening, which has become a powerful tool in material discovery, is less readily applicable for the search of novel excitonic devices. We propose the use of artificial neural networks to bypass the computational limitations of established techniques for exploring the structure-dynamics relation in excitonic systems. Once trained, our neural networks reduce computational costs by several orders of magnitude. Our predicted transfer times and transfer efficiencies exhibit similar or even higher accuracies than frequently used approximate methods such as secular Redfield theory.
- Jul 14 2017 physics.chem-ph arXiv:1707.04146v3 Given sufficient examples, recently introduced machine learning models enable rapid, yet accurate, predictions of properties of new molecules. Extrapolation to larger molecules with differing composition is prohibitive due to all the specific chemistries which would be required for training. We address this problem by exploiting redundancies due to the chemical similarity of repeating building blocks, each represented by an effective "atom in molecule": the "amon". In analogy to the DNA sequence in a gene encoding its function, the constituting amons encode a query molecule's properties. The use of amons affords highly accurate machine learning predictions of quantum properties of arbitrary query molecules in real time. We investigate this approach for predicting energies of various covalently and non-covalently bonded systems. After training on the few amons detected, very low prediction errors can be reached, on par with experimental uncertainty. Systems studied include two dozen large biomolecules, eleven thousand medium-sized organic molecules, large common polymers, water clusters, doped $h$BN sheets, bulk silicon, and Watson-Crick DNA base pairs. Conceptually, the amons extend Mendeleev's table to account for the chemical environments of elements. They represent an important stepping stone toward machine learning based virtual chemical space exploration campaigns.
- Jul 05 2017 quant-ph arXiv:1707.00986v3 A network of driven nonlinear oscillators without dissipation has recently been proposed for solving combinatorial optimization problems via quantum adiabatic evolution through its bifurcation point. Here we investigate the behavior of the quantum bifurcation machine in the presence of dissipation. Our numerical study suggests that the output probability distribution of the dissipative quantum bifurcation machine is Boltzmann-like, where the energy in the Boltzmann distribution corresponds to the cost function of the optimization problem. We explain the Boltzmann distribution by generalizing the concept of quantum heating in a single oscillator to the case of multiple coupled oscillators. The present result also suggests that such driven dissipative nonlinear oscillator networks can be applied to Boltzmann sampling, which is used, e.g., for Boltzmann machine learning in the field of artificial intelligence.
- The application of state-of-the-art machine learning techniques to statistical physics problems has seen a surge of interest for their ability to discriminate phases of matter by extracting essential features in the many-body wavefunction or the ensemble of correlators sampled in Monte Carlo simulations. Here we introduce a generalization of supervised machine learning approaches that allows one to accurately map out phase diagrams of interacting many-body systems without any prior knowledge, e.g. of their general topology or the number of distinct phases. To substantiate the versatility of this approach, which combines convolutional neural networks with quantum Monte Carlo sampling, we map out the phase diagrams of interacting boson and fermion models at both zero and finite temperatures and show that first-order, second-order, and Kosterlitz-Thouless phase transitions can all be identified. We explicitly demonstrate that our approach is capable of identifying the phase transition to non-trivial many-body phases such as superfluids or topologically ordered phases without supervision.
- Jul 04 2017 quant-ph arXiv:1707.00360v1 With the significant advances in quantum computation over the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers. Our algorithm shows that a continuous-variable quantum computer can achieve a dramatic speed-up in computing Gaussian process regression, i.e., the potential to exponentially reduce the computation time. Furthermore, our results include a continuous-variable quantum-assisted singular value decomposition method for non-sparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.
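For orientation, the classical Gaussian process regression that such a quantum algorithm aims to accelerate reduces to solving a kernel linear system, which is the cubic-cost classical bottleneck. A minimal NumPy sketch (the RBF kernel, its length scale, and the noise level are illustrative choices, not drawn from the paper):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean of a zero-mean GP; the kernel solve below is the
    O(n^3) classical step that a quantum linear-system routine targets."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)

# Toy 1-D regression: recover sin(x) from 20 noiseless samples.
X = np.linspace(0.0, 2.0 * np.pi, 20)[:, None]
y = np.sin(X).ravel()
mean = gp_predict(X, y, np.array([[np.pi / 2]]))
```

The `n × n` kernel solve is exactly where an exponential reduction in time would matter as the training set grows.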
- Determining the best method for training a machine learning algorithm is critical to maximizing its ability to classify data. In this paper, we compare the standard "fully supervised" approach (that relies on knowledge of event-by-event truth-level labels) with a recent proposal that instead utilizes class ratios as the only discriminating information provided during training. This so-called "weakly supervised" technique has access to less information than the fully supervised method and yet is still able to yield impressive discriminating power. In addition, weak supervision seems particularly well suited to particle physics since quantum mechanics is incompatible with the notion of mapping an individual event onto any single Feynman diagram. We examine the technique in detail -- both analytically and numerically -- with a focus on the robustness to issues of mischaracterizing the training samples. Weakly supervised networks turn out to be remarkably insensitive to systematic mismodeling. Furthermore, we demonstrate that the event level outputs for weakly versus fully supervised networks are probing different kinematics, even though the numerical quality metrics are essentially identical. This implies that it should be possible to improve the overall classification ability by combining the output from the two types of networks. For concreteness, we apply this technology to a signature of beyond the Standard Model physics to demonstrate that all these impressive features continue to hold in a scenario of relevance to the LHC.
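To make the weakly supervised idea concrete, here is a toy sketch under stated assumptions: two mixed batches with known signal fractions (0.2 and 0.8), Gaussian toy features, and a logistic model fit by a squared proportion loss. None of these specifics come from the paper; only the principle (training on class ratios, never on event-level labels) does.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_batch(n, frac_signal):
    """Mixed sample: only the overall signal fraction is known, not the
    per-event truth labels (returned here only to score the result)."""
    labels = rng.random(n) < frac_signal
    x = np.where(labels, rng.normal(1.0, 0.5, n), rng.normal(-1.0, 0.5, n))
    return x, labels

(xa, ya), (xb, yb) = make_batch(500, 0.2), make_batch(500, 0.8)

# Fit a logistic model so its mean output on each batch matches that
# batch's known class fraction -- no event-level labels are used.
w, b = 0.0, 0.0
for _ in range(2000):
    for x, frac in ((xa, 0.2), (xb, 0.8)):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        err = p.mean() - frac          # squared proportion loss, 0.5*err**2
        sgrad = p * (1.0 - p)          # d sigmoid / d logit
        w -= 0.5 * err * (sgrad * x).mean()
        b -= 0.5 * err * sgrad.mean()

# Event-level accuracy, even though training never saw event labels.
pred = (w * np.concatenate([xa, xb]) + b) > 0
acc = float((pred == np.concatenate([ya, yb])).mean())
```

A constant output cannot satisfy both batch fractions at once, so the model is forced to discriminate on the feature itself.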
- Quantum annealers aim at solving non-convex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists in designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable quantum transverse field to generate tunneling processes. A key challenge is to identify classes of non-convex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions.
- Jun 27 2017 cond-mat.stat-mech cond-mat.quant-gas arXiv:1706.07977v2 This work addresses the question of whether artificial intelligence can recognize phase transitions without prior human knowledge. If successful, this approach could be applied, for instance, to analyze data from quantum simulations of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and check whether the outputs are consistent with our prior knowledge, which serves as a benchmark for the approach. In this work, we feed the computer data generated by classical Monte Carlo simulations of the XY model on frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of principal component analysis agree very well with our understanding of the different orders in the different phases, and that the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and neural network methods.
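The spirit of the approach can be sketched on synthetic data. Below, toy XY-like configurations (a random global spin direction plus temperature-dependent angular noise, a crude stand-in for real frustrated-lattice Monte Carlo samples) are analyzed with principal component analysis; the leading components pick out the ordered phase. All parameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_samples = 100, 200

def xy_config(T):
    """Toy XY-like configuration: a random global spin direction plus
    thermal angular noise -- a stand-in for real Monte Carlo data."""
    theta = rng.uniform(0.0, 2.0 * np.pi) + rng.normal(0.0, T, n_sites)
    return np.concatenate([np.cos(theta), np.sin(theta)])

temps = np.linspace(0.05, 3.0, n_samples)
data = np.array([xy_config(T) for T in temps])
data -= data.mean(axis=0)

# Principal components via SVD; the leading pair tracks the two
# magnetization directions, while the rest is thermal noise.
_, s, _ = np.linalg.svd(data, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()
```

The temperature dependence of the projections onto the leading pair then locates where the order disappears, which is the signal the paper exploits.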
- Quantum computing for machine learning attracts increasing attention, and recent technological developments suggest that especially adiabatic quantum computing may soon be of practical interest. In this paper, we therefore consider this paradigm and discuss how to adapt it to the problem of binary clustering. Numerical simulations demonstrate the feasibility of our approach and illustrate how systems of qubits adiabatically evolve towards a solution.
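One common way to pose binary clustering for an adiabatic machine (a plausible reading of this setup, though the paper's exact encoding may differ) is as an Ising cost whose ground state assigns similar points the same spin. The couplings below, pair distances shifted by their mean, are an illustrative choice; the toy brute-forces the ground state that an annealer would evolve toward:

```python
import itertools
import numpy as np

# Toy data: two well-separated groups of points in the plane.
points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                   [5.0, 5.0], [5.1, 5.2], [4.9, 5.1]])

# Ising couplings: subtracting the mean pair distance makes nearby pairs
# attract (prefer equal spins) and distant pairs repel (opposite spins).
d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
J = d - d[np.triu_indices_from(d, 1)].mean()

def cost(spins):
    s = np.array(spins)
    return 0.5 * s @ J @ s  # the classical energy an annealer would minimize

# Brute-force the ground state; an adiabatic machine anneals to it instead.
best = min(itertools.product([-1, 1], repeat=len(points)), key=cost)
```

The spin values of `best` then read off the two clusters directly (up to a global spin flip, which is an exact degeneracy of the cost).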
- Jun 20 2017 quant-ph physics.chem-ph arXiv:1706.05413v2 The NSF Workshop in Quantum Information and Computation for Chemistry assembled experts from directly quantum-oriented fields such as algorithms, chemistry, machine learning, optics, simulation, and metrology, as well as experts in related fields such as condensed matter physics, biochemistry, physical chemistry, inorganic and organic chemistry, and spectroscopy. The goal of the workshop was to summarize recent progress in research at the interface of quantum information science and chemistry and to discuss the most promising research challenges and opportunities in the field. Furthermore, the workshop aimed to identify target areas where cross-fertilization among these fields would result in the largest payoff for developments in theory, algorithms, and experimental techniques. The ideas can be broadly categorized into two distinct areas of research that naturally interact and are not cleanly separated. The first area is quantum information for chemistry: how quantum information tools, both experimental and theoretical, can aid our understanding of a wide range of problems pertaining to chemistry. The second area is chemistry for quantum information, which aims to discuss the several aspects where research in the chemical sciences can aid progress in quantum information science and technology. The results of the workshop are summarized in this report.
- Given a quantum (or statistical) system with a very large number of degrees of freedom and a preferred tensor product factorization of the Hilbert space (or of a space of distributions), we describe how it can be approximated with a very low-dimensional field theory with geometric degrees of freedom. The geometric approximation procedure consists of three steps. The first step is to construct weighted graphs (which we call information graphs) with vertices representing subsystems (e.g. qubits or random variables) and edges representing mutual information (or the flow of information) between subsystems. The second step is to deform the adjacency matrices of the information graphs toward that of a (locally) low-dimensional lattice using the graph flow equations introduced in the paper. (Note that the graph flow produces very sparse adjacency matrices and thus might also be used, for example, in machine learning or network science, where the task of graph sparsification is of central importance.) The third step is to define an emergent metric and to derive an effective description of the metric and possibly other degrees of freedom. To illustrate the procedure we analyze (numerically and analytically) two information graph flows with geometric attractors (towards locally one- and two-dimensional lattices) and metric perturbations obeying a geometric flow equation. Our analysis also suggests a possible approach to (a non-perturbative) quantum gravity in which the geometry (a secondary object) emerges directly from a quantum state (a primary object) due to the flow of the information graphs.
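The first step, building an information graph whose edge weights are pairwise mutual information between subsystems, can be sketched classically for binary random variables. The plain empirical plug-in estimator below is an illustrative stand-in, not the paper's construction:

```python
import numpy as np

def pairwise_mi(samples):
    """Weighted adjacency matrix: empirical mutual information (in nats)
    between every pair of binary columns of `samples`."""
    n, k = samples.shape
    A = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            mi = 0.0
            for a in (0, 1):
                for b in (0, 1):
                    pab = np.mean((samples[:, i] == a) & (samples[:, j] == b))
                    pa = np.mean(samples[:, i] == a)
                    pb = np.mean(samples[:, j] == b)
                    if pab > 0:
                        mi += pab * np.log(pab / (pa * pb))
            A[i, j] = A[j, i] = mi
    return A

# Three binary "subsystems": columns 0 and 1 are copies, column 2 is independent.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, (2000, 1))
samples = np.hstack([x, x, rng.integers(0, 2, (2000, 1))])
A = pairwise_mi(samples)
```

Strongly correlated subsystems get heavy edges (near ln 2 for perfect copies of a fair bit) and independent ones near-zero edges; the subsequent graph flow then deforms this adjacency matrix toward a low-dimensional lattice.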
- Jun 07 2017 quant-ph arXiv:1706.01561v1 The primary questions in the emerging field of quantum machine learning (QML) are, "Is it possible to achieve a quantum speed-up of machine learning?" and "What quantum effects contribute to the speed-up, and how?" Satisfactory answers to such questions have recently been provided by the quantum support vector machine (QSVM), which classifies many quantum data vectors by exploiting their quantum parallelism. However, this demands the realization of a full-scale quantum computer that can process big data to learn quantum-mechanically, while nearly all the real-world data that users recognize are measured, i.e., classical, data. The following important question then arises: "Can a quantum learning speed-up be attained even with user-recognizable (classical) data?" Here, we provide an affirmative answer to this question by performing proof-of-principle experiments.
- How useful can machine learning be in a quantum laboratory? Here we raise the question of the potential of intelligent machines in the context of scientific research. We investigate this question by using the projective simulation model, a physics-oriented approach to artificial intelligence. In our approach, the projective simulation system is challenged to design complex photonic quantum experiments that produce high-dimensional entangled multiphoton states, which are of high interest in modern quantum experiments. The artificial intelligence system learns to create a variety of entangled states, in number surpassing the best previously studied automated approaches, and improves the efficiency of their realization. In the process, the system autonomously (re)discovers experimental techniques which are only becoming standard in modern quantum optical experiments - a trait which was not explicitly demanded from the system but emerged through the process of learning. Such features highlight the possibility that machines could have a significantly more creative role in future research.
- Recent theoretical and experimental results suggest the possibility of using current and near-future quantum hardware in challenging sampling tasks. In this paper, we introduce free energy-based reinforcement learning (FERL) as an application of quantum hardware. We propose a method for processing a quantum annealer's measured qubit spin configurations in approximating the free energy of a quantum Boltzmann machine (QBM). We then apply this method to perform reinforcement learning on the grid-world problem using the D-Wave 2000Q quantum annealer. The experimental results show that our technique is a promising method for harnessing the power of quantum sampling in reinforcement learning tasks.
- Jun 02 2017 cond-mat.mtrl-sci physics.chem-ph arXiv:1706.00179v1 Determining the stability of molecules and condensed phases is the cornerstone of atomistic modelling, underpinning our understanding of chemical and materials properties and transformations. Here we show that a machine learning model, based on a local description of chemical environments and Bayesian statistical learning, provides a unified framework to predict atomic-scale properties. It captures the quantum mechanical effects governing the complex surface reconstructions of silicon, predicts the stability of different classes of molecules with chemical accuracy, and distinguishes active and inactive protein ligands with more than 99% reliability. The universality and the systematic nature of our framework provide new insight into the potential energy surface of materials and molecules.
- In this paper, we propose a classification method based on a learning paradigm we call Quantum Low Entropy based Associative Reasoning, or QLEAR learning. The approach is based on the idea that classification can be understood as supervised clustering, where quantum entropy, in the context of a quantum probabilistic model, is used as a "capturer" (a measure, or external index) of the "natural structure" of the data. By using quantum entropy we make no assumption about the linear separability of the data to be classified. The basic idea is to find close neighbors of a query sample and then use the relative change in quantum entropy as a measure of the similarity of the newly arrived sample to the class representatives of interest. In other words, the method is based on calculating the quantum entropy of a referent system and its relative change upon the addition of the newly arrived sample. The referent system consists of vectors that represent the individual classes and that are most similar, in the Euclidean-distance sense, to the vector being analyzed. Here, we analyze the classification problem in the context of measuring similarities to prototype examples of categories. While nearest neighbor classifiers are natural in this setting, they suffer from high variance (in the bias-variance decomposition) under limited sampling. Alternatively, one could use machine learning techniques such as support vector machines, but these involve time-consuming optimization. We therefore propose a hybrid of nearest neighbor and machine learning techniques which deals naturally with the multi-class setting, has reasonable computational complexity both in training and at run time, and yields excellent results in practice.
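One illustrative reading of entropy-change classification is sketched below. The density-matrix construction used here (the trace-normalized Gram matrix of the row-normalized referent vectors) is our assumption, not the paper's definition; only the decision rule (assign the class whose entropy changes least when the query is appended) follows the description above:

```python
import numpy as np

def von_neumann_entropy(vectors):
    """Entropy of a density-matrix-like object: the trace-normalized Gram
    matrix of the row-normalized vectors. This construction is an
    illustrative assumption, not taken from the paper."""
    V = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    rho = V.T @ V
    rho /= np.trace(rho)
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def classify(query, class_refs):
    """Assign the class whose referent set's entropy changes least when
    the query vector is appended to it."""
    deltas = {}
    for name, refs in class_refs.items():
        base = von_neumann_entropy(refs)
        grown = von_neumann_entropy(np.vstack([refs, query]))
        deltas[name] = abs(grown - base)
    return min(deltas, key=deltas.get)

# Two classes of nearly parallel referent vectors; the query aligns with "A",
# so appending it barely perturbs A's entropy but sharply raises B's.
class_refs = {"A": np.array([[1.0, 0.0], [0.9, 0.1], [1.0, 0.1]]),
              "B": np.array([[0.0, 1.0], [0.1, 0.9], [0.1, 1.0]])}
label = classify(np.array([0.95, 0.05]), class_refs)
```

In a full pipeline the referent sets would be the query's nearest neighbors per class rather than fixed prototypes.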