Top arXiv papers

  • PDF
    We consider the classical complexity of approximately simulating time evolution under spatially local quadratic bosonic Hamiltonians for time $t$. We obtain upper and lower bounds on the scaling of $t$ with the number of bosons, $n$, for which simulation, cast as a sampling problem, is classically efficient and provably hard, respectively. We view these results in the light of classifying phases of physical systems based on parameters in the Hamiltonian and conjecture a link to dynamical phase transitions. In doing so, we combine ideas from mathematical physics and computational complexity to gain insight into the behavior of condensed matter systems.
  • PDF
    A prominent application of quantum cryptography is the distribution of cryptographic keys with unconditional security. Recently, such security was extended by Vazirani and Vidick (Physical Review Letters, 113, 140501, 2014) to the device-independent (DI) scenario, where the users do not need to trust the integrity of the underlying quantum devices. The protocols analyzed by them and by subsequent authors all require a sequential execution of N multiplayer games, where N is the security parameter. In this work, we prove unconditional security of a protocol where all games are executed in parallel. Our result further reduces the requirements for QKD (allowing for arbitrary information leakage within each player's lab) and opens the door to more efficient implementations. To the best of our knowledge, this is the first parallel security proof for a fully device-independent QKD protocol. Our protocol tolerates a constant level of device imprecision and achieves a linear key rate.
  • PDF
    Efficiently characterising quantum systems, verifying operations of quantum devices and validating underpinning physical models are central challenges for the development of quantum technologies and for our continued understanding of foundational physics. Machine-learning enhanced by quantum simulators has been proposed as a route to improve the computational cost of performing these studies. Here we interface two different quantum systems through a classical channel - a silicon-photonics quantum simulator and an electron spin in a diamond nitrogen-vacancy centre - and use the former to learn the latter's Hamiltonian via Bayesian inference. We learn the salient Hamiltonian parameter with an uncertainty of approximately $10^{-5}$. Furthermore, an observed saturation in the learning algorithm suggests deficiencies in the underlying Hamiltonian model, which we exploit to further improve the model itself. We go on to implement an interactive version of the protocol and experimentally show its ability to characterise the operation of the quantum photonic device. This work demonstrates powerful new quantum-enhanced techniques for investigating foundational physical models and characterising quantum technologies.
  • PDF
    The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum-state tomography (QST) aims at reconstructing the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics. Brute-force approaches to QST, however, demand resources growing exponentially with the number of constituents, making it infeasible except for small systems. Here we show that machine learning techniques can be efficiently used for QST of highly-entangled states in arbitrary dimension. Remarkably, the resulting approach allows one to reconstruct traditionally challenging many-body quantities - such as the entanglement entropy - from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultra-cold atom quantum simulators.
  • PDF
    We study the properties of the set of marginal distributions of infinite translation-invariant systems in the 2D square lattice. In cases where the local variables can only take a small number $d$ of possible values, we completely solve the marginal or membership problem for nearest-neighbors distributions ($d=2,3$) and nearest and next-to-nearest neighbors distributions ($d=2$). Remarkably, all these sets form convex polytopes in probability space. This allows us to devise an algorithm to compute the minimum energy per site of any TI Hamiltonian in these scenarios exactly. We also devise a simple algorithm to approximate the minimum energy per site up to arbitrary accuracy for the cases not covered above. For variables of a higher (but finite) dimensionality, we prove two no-go results. To begin, the exact computation of the energy per site of arbitrary TI Hamiltonians with only nearest-neighbor interactions is an undecidable problem. In addition, in scenarios with $d\geq 2947$, the boundary of the set of nearest-neighbor marginal distributions contains both flat and smoothly curved surfaces and the set itself is not semi-algebraic. This implies, in particular, that it cannot be characterized via semidefinite programming, even if we allow the input of the program to include polynomials of nearest-neighbor probabilities.
  • PDF
    We present a family of easily computable upper bounds for the Holevo quantity of an ensemble of quantum states, depending on a reference state as a free parameter. These upper bounds are obtained by combining probabilistic and metric characteristics of the ensemble. We show that an appropriate choice of the reference state gives tight upper bounds for the Holevo quantity which in many cases improve existing estimates in the literature. We also present an upper bound for the Holevo quantity of a generalized ensemble of quantum states with finite average energy, depending on the metric divergence of the ensemble. The specification of this upper bound for the multi-mode quantum oscillator is tight for large energy. The above results are used to obtain tight upper bounds for the Holevo capacity of finite-dimensional and infinite-dimensional energy-constrained quantum channels, depending on metric characteristics of the channel output.
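    For reference, the quantity bounded throughout is the standard Holevo quantity of an ensemble $\{p_i,\rho_i\}$ (a textbook definition, not a result of the paper):

```latex
% Holevo quantity of an ensemble {p_i, rho_i}:
%   the average state is rho_bar = sum_i p_i rho_i,
%   and S(rho) = -tr(rho log rho) is the von Neumann entropy.
\chi\bigl(\{p_i,\rho_i\}\bigr)
  = S\Bigl(\sum_i p_i \rho_i\Bigr) \;-\; \sum_i p_i\, S(\rho_i)
```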
  • PDF
    A class of Adaptive Decoders (ADs) for coherent-state sequences is studied, including in particular the most common technologies for optical-signal processing, e.g., interferometers, coherent displacements and photon-counting detectors. More generally, we consider ADs comprising adaptive procedures based on passive multi-mode Gaussian unitaries and arbitrary single-mode destructive measurements. For classical communication on quantum phase-insensitive Gaussian channels with a coherent-state encoding, we show that the ADs' optimal information transmission rate is not greater than that of a single-mode decoder. Our result also implies that the ultimate classical capacity of quantum phase-insensitive Gaussian channels is unlikely to be achieved with the considered class of ADs.
  • PDF
    In this brief paper, we go through the theoretical steps of spectral clustering on quantum computers by employing the phase estimation and amplitude amplification algorithms. To speed up the amplitude amplification, we introduce a biased version of the phase estimation algorithm. In addition, when the circuit representation of a data matrix of order $N$ is produced through an ancilla-based circuit in which the matrix is written as a sum of $L$ Householder matrices, we show that the computational complexity of the whole process is bounded by $O(c2^mLN)$ quantum gates. Here, $m$ represents the number of qubits involved in the phase register of the phase estimation algorithm and $c$ represents the number of trials done to find the best clustering.
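    For context, the classical routine being accelerated is ordinary spectral clustering. Below is a minimal plain-NumPy sketch of two-way spectral clustering via the Fiedler vector; it is a generic illustration, not the paper's quantum algorithm, and all names are ours:

```python
import numpy as np

def spectral_bipartition(A):
    """Classical spectral clustering into two groups via the Fiedler vector.

    A : (N, N) symmetric adjacency/affinity matrix.
    Returns one boolean label per node.
    """
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
    _, eigvecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]              # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= 0                  # sign pattern gives the 2-way split

# Toy graph: two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(spectral_bipartition(A))   # separates {0,1,2} from {3,4,5}
```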
  • PDF
    This is a redacted transcript of a course given by the author at Harvard in spring semester 2016. It contains a pedagogical overview of recent developments connecting the subjects of soft theorems, the memory effect and asymptotic symmetries in four-dimensional QED, nonabelian gauge theory and gravity with applications to black holes. The lectures may be viewed online at https://goo.gl/3DJdOr. Please send typos or corrections to strominger@physics.harvard.edu.
  • PDF
    Superconducting electronic devices have re-emerged as contenders for both classical and quantum computing due to their fast operation speeds, low dissipation and long coherence times. An ultimate demonstration of coherence is lasing. We use one of the fundamental aspects of superconductivity, the ac Josephson effect, to demonstrate a laser made from a Josephson junction strongly coupled to a multi-mode superconducting cavity. A dc voltage bias to the junction provides a source of microwave photons, while the circuit's nonlinearity allows for efficient down-conversion of higher-order Josephson frequencies down to the cavity's fundamental mode. The simple fabrication and operation enables easy integration with a range of quantum devices, allowing for efficient on-chip generation of coherent microwave photons at low temperatures.
  • PDF
    Current Deep Learning approaches have been very successful using convolutional neural networks (CNNs) trained on large graphical processing unit (GPU)-based computers. Three limitations of this approach are: 1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; 2) the networks are manually configured to achieve optimal results; and 3) the implementation of the neuron model is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. We use the MNIST dataset for our experiment, due to input size limitations of current quantum computers. Our results show the feasibility of using the three architectures in tandem to address the above deep learning limitations. We show that a quantum computer can find high quality values of intra-layer connection weights in a tractable time as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware.
  • PDF
    In this essay, I look at (bemoan) the issues surrounding simulating quantum systems in order to design quantum devices for quantum technologies. The program runs into a natural difficulty: simulating quantum systems really requires a proper quantum simulator. The problem is likened to the "tyranny of numbers" that faced computer engineers in the 1960s.
  • PDF
    Free-hand sketch-based image retrieval (SBIR) is a specific cross-view retrieval task, in which queries are abstract and ambiguous sketches while the retrieval database is formed with natural images. Work in this area mainly focuses on extracting representative and shared features for sketches and natural images. However, these can neither cope well with the geometric distortion between sketches and images nor be feasible for large-scale SBIR due to the heavy continuous-valued distance computation. In this paper, we speed up SBIR by introducing a novel binary coding method, named Deep Sketch Hashing (DSH), where a semi-heterogeneous deep architecture is proposed and incorporated into an end-to-end binary coding framework. Specifically, three convolutional neural networks are utilized to encode free-hand sketches, natural images and, especially, the auxiliary sketch-tokens which are adopted as bridges to mitigate the sketch-image geometric distortion. The learned DSH codes can effectively capture the cross-view similarities as well as the intrinsic semantic correlations between different categories. To the best of our knowledge, DSH is the first hashing work specifically designed for category-level SBIR with an end-to-end deep architecture. The proposed DSH is comprehensively evaluated on two large-scale datasets of TU-Berlin Extension and Sketchy, and the experiments consistently show DSH's superior SBIR accuracies over several state-of-the-art methods, while achieving significantly reduced retrieval time and memory footprint.
  • PDF
    In 1961 Landauer pointed out that resetting a binary memory requires a minimum energy of $k_{B}T \ln(2)$ where $k_{B}$ is the Boltzmann constant and $T$ the absolute temperature of the memory device. Any memory, however, is doomed to lose its content as time proceeds if no action is taken. In order to avoid memory loss, a refresh procedure is periodically performed with time interval $t_R$. In this paper we show that there exists a fundamental bound on the minimum energy required to preserve one bit of information for a time $\bar{t}$, with probability of error less than $P_E$, and that this energy is a monotonically decreasing function of $t_R$. Two main conclusions are drawn: the good news is that, in principle, the cost of remembering can be arbitrarily reduced if the refresh procedure is performed often enough. The bad news is that no memory can be preserved forever, no matter how much energy is invested.
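    For a sense of scale, the classic erasure bound itself is simple arithmetic (this computes only $k_B T \ln 2$, not the paper's refresh-dependent bound):

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
T = 300.0                   # room temperature, K

E_min = k_B * T * math.log(2)
print(f"Landauer bound at 300 K: {E_min:.3e} J")
# ~2.87e-21 J, about 0.018 eV, orders of magnitude below
# the energy dissipated per bit by today's memory hardware.
```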
  • PDF
    We prove that conformal curved spacetime can be encoded into the initial wave function and that curved propagation can be simulated on a two-dimensional regular lattice with a finite set of homogeneous unitary operators. We generalize recent results shown in [1], where the author transforms flat spacetime into curved static spacetime via non-unitary rotations. In particular, encoding the metric in the initial quantum state via unitary local rotations can be naturally useful for quantum simulators due to the efficiency and simplicity of the implementation scheme. We validate our model by proving that the continuum limit converges to the Dirac equation in curved spacetime.
  • PDF
    Planck's law for black-body radiation marks the origin of quantum theory and is discussed in all introductory (or advanced) courses on this subject. However, the question whether Planck really implied quantisation is debated among historians of physics. We present a simplified account of this debate which also sheds light on the issue of indistinguishability and Einstein's light quantum hypothesis. We suggest that the teaching of quantum mechanics could benefit from including this material beyond the question of historical accuracy.
  • PDF
    We consider the problem of efficient exploration in finite horizon MDPs. We show that an optimistic modification to model-based value iteration can achieve a regret bound $\tilde{O}( \sqrt{HSAT} + H^2S^2A+H\sqrt{T})$ where $H$ is the time horizon, $S$ the number of states, $A$ the number of actions and $T$ the time elapsed. This result improves over the best previously known bound $\tilde{O}(HS \sqrt{AT})$ achieved by the UCRL2 algorithm. The key significance of our new results is that when $T\geq H^3S^3A$ and $SA\geq H$, it leads to a regret of $\tilde{O}(\sqrt{HSAT})$ that matches the established lower bound of $\Omega(\sqrt{HSAT})$ up to a logarithmic factor. Our analysis contains two key insights. We use careful application of concentration inequalities to the optimal value function as a whole, rather than to the transition probabilities (to improve scaling in $S$), and we use "exploration bonuses" based on Bernstein's inequality, together with a recursive (Bellman-type) law of total variance (to improve scaling in $H$).
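    The stated improvement is a factor of $\sqrt{HS}$, since $HS\sqrt{AT} / \sqrt{HSAT} = \sqrt{HS}$. A quick numerical check with illustrative (made-up) parameter values:

```python
import math

H, S, A, T = 50, 1000, 10, 10**8    # horizon, states, actions, elapsed time

old_bound = H * S * math.sqrt(A * T)   # UCRL2-style: ~ HS * sqrt(AT)
new_bound = math.sqrt(H * S * A * T)   # this paper:  ~ sqrt(HSAT)

print(old_bound / new_bound)   # 223.6...
print(math.sqrt(H * S))        # same factor, sqrt(HS)
```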
  • PDF
    In this paper, for the first time, we introduce a multiple instance (MI) deep hashing technique for learning discriminative hash codes with weak bag-level supervision suited for large-scale retrieval. We learn such hash codes by aggregating deeply learnt hierarchical representations across bag members through a dedicated MI pool layer. For better trainability and retrieval quality, we propose a two-pronged approach that includes robust optimization and training with an auxiliary single instance hashing arm which is down-regulated gradually. We pose retrieval for tumor assessment as an MI problem because tumors often coexist with benign masses and could exhibit complementary signatures when scanned from different anatomical views. Experimental validations on benchmark mammography and histology datasets demonstrate improved retrieval performance over the state-of-the-art methods.
  • PDF
    Texture is an essential property of physical objects that affects aesthetics, usability, and functionality. However, designing and applying textures to 3D objects with existing tools remains difficult and time-consuming; it requires proficient 3D modeling skills. To address this, we investigated an auto-completion approach for efficient texture creation that automates the tedious, repetitive process of applying texture while allowing flexible customization. We developed techniques for users to select a target surface, sketch and manipulate a texture with 2D drawings, and then generate 3D printable textures onto an arbitrary curved surface. In a controlled experiment our tool sped up texture creation by 80% over conventional tools, a performance gain that is higher with more complex target surfaces. This result confirms that auto-completion is powerful for creating 3D textures.
  • PDF
    We present a data-driven approach to the problem of inductive computer program synthesis. Our method learns a probabilistic model for real-world programs from a corpus of existing code. It uses this model during synthesis to automatically infer a posterior distribution over sketches, or syntactic models of the problem to be synthesized. Sketches sampled from this posterior are then used to drive combinatorial synthesis of a program in a high-level programming language. The key technical innovation of our approach --- embodied in a system called Bayou --- is utilizing user-supplied evidence as to the program's desired behavior, along with a Bayesian update, to obtain a posterior distribution over the program's true, latent specification (indicating user intent), which in turn produces a posterior over possible sketches. As we show experimentally, explicitly modeling uncertainty in specification significantly increases the accuracy of the synthesis algorithm. We evaluate Bayou's ability to synthesize Java and Android methods. We find that using just a few example API sequences to communicate user intent, Bayou can synthesize complex method bodies, some implementing tasks never encountered during training.
  • PDF
    This paper proposes the SVDNet for retrieval problems, with a focus on the application of person re-identification (re-ID). We view each weight vector within a fully connected (FC) layer in a convolutional neural network (CNN) as a projection basis. It is observed that the weight vectors are usually highly correlated. This problem leads to correlations among entries of the FC descriptor, and compromises the retrieval performance based on the Euclidean distance. To address the problem, this paper proposes to optimize the deep representation learning process with Singular Value Decomposition (SVD). Specifically, with the restraint and relaxation iteration (RRI) training scheme, we are able to iteratively integrate the orthogonality constraint into CNN training, yielding the so-called SVDNet. We conduct experiments on the Market-1501, CUHK03, and Duke datasets, and show that RRI effectively reduces the correlation among the projection vectors, produces more discriminative FC descriptors, and significantly improves the re-ID accuracy. On the Market-1501 dataset, for instance, rank-1 accuracy is improved from 55.3% to 80.5% for CaffeNet, and from 73.8% to 82.3% for ResNet-50.
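    A minimal sketch of the orthogonalization step at the heart of this idea: decompose the FC weight matrix as $W = USV^\top$ and replace it with $US$, whose columns (the projection vectors) are mutually orthogonal. This illustrates only the decorrelation direction, not the full restraint-and-relaxation (RRI) training schedule:

```python
import numpy as np

def orthogonalize_fc(W):
    """Replace an FC weight matrix W (in_dim x out_dim) by U @ diag(s).

    Since (U S)^T (U S) = S^2 is diagonal, the new weight vectors are
    mutually orthogonal while the singular values are preserved.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U * s                       # same as U @ np.diag(s)

rng = np.random.default_rng(0)
W = rng.normal(size=(2048, 512))       # illustrative layer shape
W_orth = orthogonalize_fc(W)
gram = W_orth.T @ W_orth
print(np.abs(gram - np.diag(np.diag(gram))).max())  # numerically zero off-diagonal
```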
  • PDF
    Shared vehicle usage and car sharing have become increasingly attractive in recent years. Especially in urban environments with limited parking possibilities and a higher risk of traffic jams, car rentals and sharing services may save time and money. But a rented vehicle may already be damaged (e.g., scratches or bumps inflicted by a previous user) without the damage having been noticed by the service provider. In order to address such problems, we present an automated, motion-based system for impact detection that uses a common smartphone as its sensor platform. The system is capable of detecting the impact segment and the point in time of an impact event on a vehicle's surface, as well as its direction of origin. With this additional specific knowledge, it may be possible to reconstruct the circumstances of an impact event, e.g., to prove the possible innocence of a service's customer.
  • PDF
    Precise predictions for Higgs production via vector-boson fusion play an important role when testing the properties of the Higgs boson and probing new-physics effects. While the inclusive cross section changes little when including NNLO and N3LO QCD corrections, a differential NNLO calculation with typical VBF cuts shows large corrections of up to 10% in distributions. In this article, we investigate the dependence of the differential NNLO QCD calculation on the jet definition. Starting from the known results at a fixed jet clustering choice, we use the electroweak H+3 jets production cross section at NLO precision to derive NNLO results for H+2 jets production for other jet definitions. We find that larger clustering radii significantly reduce the impact of NNLO corrections. The sizable NNLO corrections for distributions are largely caused by the broader energy flow inside jets at NNLO and are to be expected generally for processes with jets at LO.
  • PDF
    This is a pedagogical introduction to scattering amplitudes in gauge theories. It proceeds from the Dirac equation and Weyl fermions to the two pivot points of current developments: the recursion relations of Britto, Cachazo, Feng and Witten, and the unitarity cut method pioneered by Bern, Dixon, Dunbar and Kosower. In ten lectures, it covers the basic elements of on-shell methods.
  • PDF
    Structured Prediction Energy Networks (SPENs; Belanger and McCallum, 2016) are a simple, yet expressive family of structured prediction models. An energy function over candidate structured outputs is given by a deep network, and predictions are formed by gradient-based optimization. Unfortunately, we have struggled to apply the structured SVM (SSVM) learning method of Belanger and McCallum (2016) to applications with more complex structure than multi-label classification. In general, SSVMs are unreliable whenever exact energy minimization is intractable. In response, we present end-to-end learning for SPENs, where the energy function is discriminatively trained by back-propagating through gradient-based prediction. This paper presents a collection of methods necessary to apply the technique to problems with complex structure. For example, we avoid vanishing gradients when learning SPENs for convex relaxations of discrete prediction problems and explicitly train models such that energy minimization converges quickly in practice. Using end-to-end learning, we demonstrate the power of SPENs on 7-Scenes depth image denoising and CoNLL-2005 semantic role labeling tasks. In both, we outperform competitive baselines that employ simpler energy functions but perform exact energy minimization. In particular, for denoising we achieve 40 PSNR, outperforming the previous state-of-the-art of 36.
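    A toy 1-D illustration of the core mechanism, backpropagating an outer loss through an unrolled inner energy-minimization loop; the energy $E(y) = \tfrac12 (y - \theta x)^2$ and all details are ours, not the paper's architecture:

```python
def predict_and_grad(theta, x, y_star, eta=0.1, T=50):
    """Predict by T gradient steps on E(y) = 0.5*(y - theta*x)^2, then
    differentiate the outer loss L = 0.5*(y_T - y_star)^2 w.r.t. theta
    by carrying d(y_t)/d(theta) through each unrolled step."""
    y, dy_dtheta = 0.0, 0.0
    for _ in range(T):
        grad_y = y - theta * x                         # dE/dy
        y = y - eta * grad_y                           # inner gradient step
        dy_dtheta = (1 - eta) * dy_dtheta + eta * x    # chain rule through step
    return y, (y - y_star) * dy_dtheta

# End-to-end training: make energy minimization map x=2.0 to y*=6.0.
theta, x, y_star = 0.0, 2.0, 6.0
for _ in range(200):
    _, g = predict_and_grad(theta, x, y_star)
    theta -= 0.05 * g
print(theta)   # converges near 3.0, since y* = theta * x at the energy minimum
```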
  • PDF
    We develop a new formalism of gravitoelectromagnetism from a recently introduced new class of scalar-tensor theories of gravity formulated on Weyl-Integrable geometry. The gravitational sector is described by both a scalar and a tensor metric field. The electromagnetic potential appears as a consequence of the invariance of the action under Weyl symmetry transformations. A feature of this formalism is that both the scalar field and the electromagnetic vector field have a geometrical origin. We have derived the field equations from the perspective of two frames: the Weyl and the Einstein-Riemann frames. We find that when a constant Brans-Dicke parameter is assumed in the Weyl frame, photons appear to be massive, while in the Einstein-Riemann frame a theory of gravitoelectromagnetism is recovered for massless photons.
  • PDF
    Mechanism design for fully strategic agents commonly assumes a broadcast nature of communication between agents of the system. Moreover, for mechanism design, the stability of Nash equilibrium (NE) is demonstrated by showing convergence of specific pre-designed learning dynamics, rather than for a class of learning dynamics. In this paper we consider two common resource allocation problems: sharing $K$ infinitely divisible resources among strategic agents for their private consumption (private goods), and determining the level of an infinitely divisible public good with $P$ features that is shared between strategic agents. For both cases, we present a distributed mechanism for a set of agents who communicate through a given network. In a distributed mechanism, agents' messages are not broadcast to all other agents as in the standard mechanism design framework, but are exchanged only in the local neighborhood of each agent. The presented mechanisms produce a unique NE and fully implement the social welfare maximizing allocation. In addition, the mechanisms are budget-balanced at NE. It is also shown that the mechanisms induce a game with contractive best-response, leading to guaranteed convergence for all learning dynamics within the Adaptive Best-Response dynamics class, including dynamics such as Cournot best-response, $k$-period best-response and Fictitious Play. We also present a numerical study of convergence under repeated play, for various communication graphs and learning dynamics.
  • PDF
    Robust environment perception is essential for decision-making on robots operating in complex domains. Intelligent task execution requires principled treatment of uncertainty sources in a robot's observation model. This is important not only for low-level observations (e.g., accelerometer data), but also for high-level observations such as semantic object labels. This paper formalizes the concept of macro-observations in Decentralized Partially Observable Semi-Markov Decision Processes (Dec-POSMDPs), allowing scalable semantic-level multi-robot decision making. A hierarchical Bayesian approach is used to model noise statistics of low-level classifier outputs, while simultaneously allowing sharing of domain noise characteristics between classes. Classification accuracy of the proposed macro-observation scheme, called Hierarchical Bayesian Noise Inference (HBNI), is shown to exceed existing methods. The macro-observation scheme is then integrated into a Dec-POSMDP planner, with hardware experiments running onboard a team of dynamic quadrotors in a challenging domain where noise-agnostic filtering fails. To the best of our knowledge, this is the first demonstration of a real-time, convolutional neural net-based classification framework running fully onboard a team of quadrotors in a multi-robot decision-making domain.
  • PDF
    Designing and building automated systems with which people can interact naturally is one of the emerging objectives of Mechatronics. In this perspective, multimodality and adaptivity represent focal issues, enabling users to communicate more freely and naturally with automated systems. One of the basic problems of multimodal interaction is the fusion process. There are currently two main approaches to fusion: the first implements multimodal fusion at the dialogue-management level, whereas the second does so at the grammar level. In this paper, we propose a multimodal attribute grammar that provides constructions both for representing input symbols from different modalities and for modeling semantic and temporal features of multimodal input symbols, enabling the specification of multimodal languages. Moreover, we give an application of the proposed approach in the context of a multimodal language specification for controlling a driver assistance system, such as robots using different integrated interaction modalities.
  • PDF
    Programming-language design and run-time implementation require detailed knowledge about the programs that users want to implement. Acquiring this knowledge is hard, and there is little tool support to effectively estimate whether a proposed tradeoff actually makes sense in the context of real-world applications. Ideally, knowledge about the behaviour of "typical" programs is 1) easily obtainable, 2) easily reproducible, and 3) easily sharable. We present Spencer, a web service and API framework for dynamic analysis of a continuously growing set of traces of standard program corpora. Users do not obtain traces on their own, but can instead send queries to the web service that will be executed on a set of program traces. Queries are built in terms of a set of query combinators that present a high-level interface for working with trace data. Since the framework is high level, and there is a hosted collection of recorded traces, queries are easy to implement. Since the data sets are shared by the research community, results are reproducible. Since the actual queries run on one (or many) servers that provide analysis as a service, obtaining results is possible on commodity hardware. Data in Spencer is meant to be obtained once and analysed often, making the overhead of data collection mostly irrelevant. This allows Spencer to collect more data than traditional tracing tools can afford within their performance budget. Results in Spencer are cached, making complicated analyses that build on cached primitive queries speedy.
  • PDF
    We address the problem of determining correspondences between two images in agreement with a geometric model such as an affine or thin-plate spline transformation, and estimating its parameters. The contributions of this work are three-fold. First, we propose a convolutional neural network architecture for geometric matching. The architecture is based on three main components that mimic the standard steps of feature extraction, matching and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end. Second, we demonstrate that the network parameters can be trained from synthetically generated imagery without the need for manual annotation and that our matching layer significantly increases generalization capabilities to never seen before images. Finally, we show that the same model can perform both instance-level and category-level matching giving state-of-the-art results on the challenging Proposal Flow dataset.
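    A minimal sketch of the kind of matching component described: a dense correlation between L2-normalized feature maps of the two images, whose output would feed a regressor for the transformation parameters. Shapes and names here are illustrative assumptions:

```python
import numpy as np

def correlation_layer(fA, fB, eps=1e-8):
    """All-pairs feature correlation between two conv feature maps.

    fA, fB : (C, H, W) features of images A and B.
    Returns an (H*W, H, W) array: entry [q, i, j] is the cosine similarity
    between flattened position q in B and position (i, j) in A.
    """
    C, H, W = fA.shape
    a = fA.reshape(C, H * W)
    b = fB.reshape(C, H * W)
    a = a / (np.linalg.norm(a, axis=0, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=0, keepdims=True) + eps)
    corr = b.T @ a                    # (H*W, H*W) similarity matrix
    return corr.reshape(H * W, H, W)

fA = np.random.rand(64, 15, 15)
fB = np.random.rand(64, 15, 15)
print(correlation_layer(fA, fB).shape)   # (225, 15, 15)
```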
  • PDF
    Late Winter Lecture Notes, Short Course (10 hours) of Relativistic Astrophysics held at the Department of Physics and Astronomy of the University of Padova, March 13-17, 2017.
  • PDF
    The Bag-of-Visual-Words (BoVW) is a visual description technique that aims at shortening the semantic gap by partitioning a low-level feature space into regions that potentially correspond to visual concepts, and by giving more value to this space. In this paper we present a conceptual analysis of three major properties of language grammar and how they can be adapted to the computer vision and image understanding domain based on the bag of visual words paradigm. Evaluation of the visual grammar shows that a positive impact on classification accuracy and/or descriptor size is obtained when the proposed techniques are applied.
  • PDF
    Machine learning is increasingly used in security-critical applications, such as autonomous driving, face recognition and malware detection. Most learning methods, however, have not been designed with security in mind and thus are vulnerable to different types of attacks. This problem has motivated the research field of adversarial machine learning that is concerned with attacking and defending learning methods. Concurrently, a different line of research has tackled a very similar problem: in digital watermarking, information is embedded in a signal in the presence of an adversary. As a consequence, this research field has also extensively studied techniques for attacking and defending watermarking methods. The two research communities have worked in parallel so far, unknowingly developing similar attack and defense strategies. This paper is a first effort to bring these communities together. To this end, we present a unified notation of black-box attacks against machine learning and watermarking that reveals the similarity of both settings. To demonstrate the efficacy of this unified view, we apply concepts from watermarking to machine learning and vice versa. We show that countermeasures from watermarking can mitigate recent model-extraction attacks and, similarly, that techniques for hardening machine learning can fend off oracle attacks against watermarks. Our work provides a conceptual link between two research fields and thereby opens novel directions for improving the security of both machine learning and digital watermarking.
  • PDF
    We study the maximal rank in affine subspaces of symmetric or alternating matrices, in terms of the matching numbers of certain associated graphs. Applications include simple proofs of upper bounds on the dimension of such subspaces in terms of their maximal rank.
  • PDF
    We introduce an architecture based on deep hierarchical decompositions to learn effective representations of large graphs. Our framework extends classic R-decompositions used in kernel methods, enabling nested "part-of-part" relations. Unlike recursive neural networks, which unroll a template on input graphs directly, we unroll a neural network template over the decomposition hierarchy, allowing us to deal with the high degree variability that typically characterizes social network graphs. Deep hierarchical decompositions are also amenable to domain compression, a technique that reduces both space and time complexity by exploiting symmetries. We show empirically that our approach is competitive with current state-of-the-art graph classification methods, particularly when dealing with social network datasets.
  • PDF
    Dynamic Textures (DTs) are sequences of images of moving scenes that exhibit certain stationarity properties in time such as smoke, vegetation and fire. The analysis of DT is important for recognition, segmentation, synthesis or retrieval for a range of applications including surveillance, medical imaging and remote sensing. Deep learning methods have shown impressive results and are now the new state of the art for a wide range of computer vision tasks including image and video recognition and segmentation. In particular, Convolutional Neural Networks (CNNs) have recently proven to be well suited for texture analysis with a design similar to a filter bank approach. In this paper, we develop a new approach to DT analysis based on a CNN method applied on three orthogonal planes $xy$, $xt$ and $yt$. We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their outputs to obtain a competitive DT classifier. Our results on a wide range of commonly used DT classification benchmark datasets prove the robustness of our approach. Significant improvement of the state of the art is shown on the larger datasets.
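    The slicing itself is simple to state in code. A short sketch (array layout is our assumption) of extracting the $xy$, $xt$ and $yt$ planes from a video tensor so that each can be fed to its own CNN:

```python
import numpy as np

video = np.random.rand(100, 64, 64)   # (T, H, W): frames of a dynamic texture

xy = video[50]         # spatial frame at a fixed time t0, shape (H, W)
xt = video[:, 32, :]   # one image row over time (fixed y), shape (T, W)
yt = video[:, :, 32]   # one image column over time (fixed x), shape (T, H)

# Each 2-D slice goes to a CNN trained on that plane; the three outputs are
# then combined (e.g. by averaging class scores) into one DT classifier.
print(xy.shape, xt.shape, yt.shape)   # (64, 64) (100, 64) (100, 64)
```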
  • PDF
    Steganography is a collection of methods to hide secret information ("payload") within non-secret information ("container"). Its counterpart, steganalysis, is the practice of determining if a message contains a hidden payload, and recovering it if possible. The presence of hidden payloads is typically detected by a binary classifier. In the present study, we propose a new model for generating image-like containers based on Deep Convolutional Generative Adversarial Networks (DCGAN). This approach allows one to generate more steganalysis-secure message embeddings using standard steganography algorithms. Experimental results demonstrate that the new model successfully deceives the steganography analyzer, and for this reason can be used in steganographic applications.
  • PDF
    This paper demonstrates a data-driven control approach for demand response in real-life residential buildings. The objective is to optimally schedule the heating cycles of the Domestic Hot Water (DHW) buffer to maximize the self-consumption of the local photovoltaic (PV) production. A model-based reinforcement learning technique is used to tackle the underlying sequential decision-making problem. The proposed algorithm learns the stochastic occupant behavior, predicts the PV production and takes into account the dynamics of the system. A real-life experiment with six residential buildings is performed using this algorithm. The results show that the self-consumption of the PV production is significantly increased, compared to the default thermostat control.
  • PDF
    In today's databases, previous query answers rarely benefit answering future queries. For the first time, to the best of our knowledge, we change this paradigm in an approximate query processing (AQP) context. We make the following observation: the answer to each query reveals some degree of knowledge about the answer to another query because their answers stem from the same underlying distribution that has produced the entire dataset. Exploiting and refining this knowledge should allow us to answer queries more analytically, rather than by reading enormous amounts of raw data. Also, processing more queries should continuously enhance our knowledge of the underlying distribution, and hence lead to increasingly faster response times for future queries. We call this novel idea---learning from past query answers---Database Learning. We exploit the principle of maximum entropy to produce answers, which are in expectation guaranteed to be more accurate than existing sample-based approximations. Empowered by this idea, we build a query engine on top of Spark SQL, called Verdict. We conduct extensive experiments on real-world query traces from a large customer of a major database vendor. Our results demonstrate that Verdict supports 73.7% of these queries, speeding them up by up to 23.0x for the same accuracy level compared to existing AQP systems.
  • PDF
    With a large influx of dermoscopy images and a growing shortage of dermatologists, automatic dermoscopic image analysis plays an essential role in skin cancer diagnosis. In this paper, a new deep fully convolutional neural network (FCNN) is proposed to automatically segment melanoma out of skin images by end-to-end learning with only pixels and labels as inputs. Our proposed FCNN is capable of using both local and global information to segment melanoma by adopting skipping layers. The public benchmark database consisting of 150 validation images, 600 test images and 2000 training images in the melanoma detection challenge 2017 at the International Symposium on Biomedical Imaging 2017 is used to test the performance of our algorithm. All large images (for example, $4000\times 6000$ pixels) are reduced to much smaller images with $384\times 384$ pixels (more than 10 times smaller). We obtained and submitted preliminary results to the challenge without any pre- or post-processing. The performance of our proposed method could be further improved by data augmentation and by avoiding image size reduction.
  • PDF
    This paper describes a neural-network model which performed competitively (top 6) at the SemEval 2017 cross-lingual Semantic Textual Similarity (STS) task. Our system employs an attention-based recurrent neural network model that optimizes the sentence similarity. In this paper, we describe our participation in the multilingual STS task which measures similarity across English, Spanish, and Arabic.
  • PDF
    Machine learning is a field of computer science that builds algorithms that learn. In many cases, machine learning algorithms are used to recreate a human ability like adding a caption to a photo, driving a car, or playing a game. While the human brain has long served as a source of inspiration for machine learning, little effort has been made to directly use data collected from working brains as a guide for machine learning algorithms. Here we demonstrate a new paradigm of "neurally-weighted" machine learning, which takes fMRI measurements of human brain activity from subjects viewing images, and infuses these data into the training process of an object recognition learning algorithm to make it more consistent with the human brain. After training, these neurally-weighted classifiers are able to classify images without requiring any additional neural data. We show that our neural-weighting approach can lead to large performance gains when used with traditional machine vision features, as well as to significant improvements with already high-performing convolutional neural network features. The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.
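    One plausible reading of "infusing" neural data into training is to weight each training example's loss by an fMRI-derived score; the sketch below shows that idea for a hinge loss, with the weighting scheme and all names being our assumptions rather than the paper's exact recipe:

```python
import numpy as np

def neurally_weighted_hinge_loss(scores, labels, neural_weights):
    """Mean per-example hinge loss scaled by an fMRI-derived weight.

    scores         : (N,) classifier margins f(x_i)
    labels         : (N,) in {-1, +1}
    neural_weights : (N,) nonnegative scores derived from brain activity
                     recorded while subjects viewed each image (hypothetical)
    """
    per_example = np.maximum(0.0, 1.0 - labels * scores)
    return np.mean(neural_weights * per_example)

scores = np.array([0.8, -0.2, 1.5, 0.1])
labels = np.array([1, 1, -1, -1])
w = np.array([1.0, 2.0, 0.5, 1.0])   # upweight examples salient to human subjects
print(neurally_weighted_hinge_loss(scores, labels, w))   # 1.2375
```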
  • PDF
    As virtual reality (VR) emerges as a mainstream platform, designers have started to experiment with new interaction techniques to enhance the user experience. This is a challenging task because designers not only strive to provide designs with good performance but also carefully ensure not to disrupt users' immersive experience. There is a dire need for a new evaluation tool that extends beyond traditional quantitative measurements to assist designers in the design process. We propose an EEG-based experiment framework that evaluates interaction techniques in VR by measuring intentionally elicited cognitive conflict. Through the analysis of the feedback-related negativity (FRN) as well as other quantitative measurements, this framework allows designers to evaluate the effect of the variables of interest. We studied the framework by applying it to the fundamental task of 3D object selection using direct 3D input, i.e., tracked hand in VR. The cognitive conflict is intentionally elicited by manipulating the selection radius of the target object. Our first behavior experiment validated the framework in line with the findings of conflict-induced behavior adjustments like those reported in other classical psychology experiment paradigms. Our second EEG-based experiment examines the effect of the appearance of virtual hands. We found that the amplitude of the FRN correlates with the level of realism of the virtual hands, which concurs with the Uncanny Valley theory.
  • PDF
    The machining process is the most common method for metal cutting, especially in the finishing of machined parts. In modern industry the goal of production is to manufacture products at low cost, with high quality, in the shortest time. In this research, different biomaterials, machinability properties, surface characteristics, cutting tools, cutting fluids and machining conditions for machinable biomaterials are reviewed. In the first step, a prosthetic acetabular (PA) hip is designed and printed using a selective laser melting (SLM) process; current limitations on fabrication are then analyzed to optimize the production process and obtain samples of higher quality. The feasibility of artificial intelligence (AI) in machining is assessed, and the effect of tool path on tool deflection is modelled in order to calculate dimensional deviation. The main focus of this research is determining the effect of machining conditions on surface quality and osseointegration, along with work hardening and force analysis of the PA. The effect of heat treatment on the machinability and mechanical properties of the produced parts is also determined.
  • PDF
    We consider the optimal value of information (VoI) problem, where the goal is to sequentially select a set of tests with a minimal cost, so that one can efficiently make the best decision based on the observed outcomes. Existing algorithms are either heuristics with no guarantees, or scale poorly (with exponential run time in terms of the number of available tests). Moreover, these methods assume a known distribution over the test outcomes, which is often not the case in practice. We propose an efficient sampling-based online learning framework to address the above issues. First, assuming the distribution over hypotheses is known, we propose a dynamic hypothesis enumeration strategy, which allows efficient information gathering with strong theoretical guarantees. We show that with a sufficient number of samples, one can identify a near-optimal decision with high probability. Second, when the parameters of the hypotheses distribution are unknown, we propose an algorithm which learns the parameters progressively via posterior sampling in an online fashion. We further establish a rigorous bound on the expected regret. We demonstrate the effectiveness of our approach on a real-world interactive troubleshooting application and show that one can efficiently make high-quality decisions with low cost.
  • PDF
    Studies show that refining real-world categories into semantic subcategories contributes to better image modeling and classification. Previous image sub-categorization work relying on labeled images and WordNet's hierarchy is not only labor-intensive, but also restricted to classifying images into noun subcategories. To tackle these problems, in this work we exploit general corpus information to automatically select and subsequently classify web images into semantically rich (sub-)categories. The following two major challenges are well studied: 1) noise in the labels of subcategories derived from the general corpus; 2) noise in the labels of images retrieved from the web. Specifically, we first obtain the semantic refinement subcategories from the text perspective and remove the noise by a relevance-based approach. To suppress search-error-induced noisy images, we then formulate image selection and classifier learning as a multi-class multi-instance learning problem and propose to solve it with the cutting-plane algorithm. The experiments show significant performance gains from using the data generated by our approach on both image categorization and sub-categorization tasks. The proposed approach also consistently outperforms existing weakly supervised and web-supervised approaches.
  • PDF
    Human parsing has recently attracted a lot of research interest due to its huge application potential. However, existing datasets have a limited number of images and annotations, and lack the variety of human appearances and the coverage of challenging cases in unconstrained environments. In this paper, we introduce a new benchmark "Look into Person (LIP)" that makes a significant advance in terms of scalability, diversity and difficulty, a contribution that we feel is crucial for future developments in human-centric analysis. This comprehensive dataset contains over 50,000 elaborately annotated images with 19 semantic part labels, which are captured from a wider range of viewpoints, occlusions and background complexity. Given these rich annotations, we perform detailed analyses of the leading human parsing approaches, gaining insights into the successes and failures of these methods. Furthermore, in contrast to the existing efforts on improving feature discriminative capability, we solve human parsing by exploring a novel self-supervised structure-sensitive learning approach, which imposes human pose structures on parsing results without resorting to extra supervision (i.e., no need for specifically labeling human joints in model training). Our self-supervised learning framework can be injected into any advanced neural network to help incorporate rich high-level knowledge regarding human joints from a global perspective and improve the parsing results. Extensive evaluations on our LIP and the public PASCAL-Person-Part dataset demonstrate the superiority of our method.
  • PDF
    This paper presents a study of employing Ranking SVM and Convolutional Neural Networks for two tasks: legal information retrieval and question answering in the Competition on Legal Information Extraction/Entailment. For the first task, our proposed model uses a triple of features (LSI, Manhattan, Jaccard), and is based on the paragraph level instead of the article level as in previous studies. In fact, each single-paragraph article corresponds to a particular paragraph in a huge multiple-paragraph article. For the legal question answering task, additional statistical features from the information retrieval task integrated into the Convolutional Neural Network contribute to higher accuracy.
  • PDF
    Given a poset $P$ we say a family $\mathcal{F}\subseteq P$ is centered if it is obtained by `taking sets as close to the middle layer as possible'. A poset $P$ is said to have the centeredness property if for any $M$, among all families of size $M$ in $P$, centered families contain the minimum number of comparable pairs. Kleitman showed that the Boolean lattice $\{0,1\}^n$ has the centeredness property. It was conjectured by Noel, Scott, and Sudakov, and by Balogh and Wagner, that the poset $\{0,1,\ldots,k\}^n$ also has the centeredness property, provided $n$ is sufficiently large compared to $k$. We show that this conjecture is false for all $k\geq 2$ and investigate the range of $M$ for which it holds. Further, we improve a result of Noel, Scott, and Sudakov by showing that the poset of subspaces of $\mathbb{F}_q^n$ has the centeredness property. Several open questions are also given.
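    To make the object concrete: elements of $\{0,\ldots,k\}^n$ are compared componentwise, and a centered family is meant to minimize the number of comparable pairs. A brute-force counter for small cases (illustrative only; the middle layer of $\{0,1,2\}^2$ is an antichain, so it has zero comparable pairs):

```python
from itertools import product

def comparable(u, v):
    """Componentwise order: u and v are comparable iff u <= v or v <= u."""
    return all(a <= b for a, b in zip(u, v)) or all(a >= b for a, b in zip(u, v))

def comparable_pairs(family):
    fam = list(family)
    return sum(comparable(fam[i], fam[j])
               for i in range(len(fam)) for j in range(i + 1, len(fam)))

# Middle layer of {0,1,2}^2: elements with coordinate sum 2.
middle = [p for p in product(range(3), repeat=2) if sum(p) == 2]
print(middle, comparable_pairs(middle))   # [(0, 2), (1, 1), (2, 0)] 0
```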

Recent comments

Felix Leditzky May 24 2017 20:43 UTC

Yes, that's right, thanks!

For (5), you use the Cauchy-Schwarz inequality $\left| \operatorname{tr}(X^\dagger Y) \right| \leq \sqrt{\operatorname{tr}(X^\dagger X)} \sqrt{\operatorname{tr}(Y^\dagger Y)}$ for the Hilbert-Schmidt inner product $\langle X,Y\rangle := \operatorname{tr}(X^\dagger Y)$ wi

...(continued)
Michael Tolan May 24 2017 20:27 UTC

Just reading over Eq (5) on P5 concerning the diamond norm.

Should the last $\sigma_1$ on the 4th line be replaced with a $\sigma_2$? I think I can see how the proof is working but not entirely certain.

Noon van der Silk May 23 2017 11:15 UTC

I think this thread has reached its end.

I've locked further comments, and I hope that the quantum computing community can thoughtfully find an approach to language that is inclusive to all and recognises the diverse background of all researchers, current and future.

I direct your attention t

...(continued)
Varun Narasimhachar May 23 2017 02:14 UTC

While I would never want to antagonize my peers or to allow myself to assume they were acting irrationally, I do share your concerns to an extent. I worry about the association of social justice and inclusivity with linguistic engineering, virtual lynching, censorship, etc. (the latter phenomena sta

...(continued)
Aram Harrow May 23 2017 01:30 UTC

I think you are just complaining about issues that arise from living with other people in the same society. If you disagree with their values, well, then some of them might have a negative opinion about you. If you express yourself in an aggressive way, and use words like "lynch" to mean having pe

...(continued)
Steve Flammia May 23 2017 01:04 UTC

I agree with Noon that the discussion is becoming largely off topic for SciRate, but that it might still be of interest to the community to discuss this. I invite people to post thoughtful and respectful comments over at [my earlier Quantum Pontiff post][1]. Further comments here on SciRate will be

...(continued)
Noon van der Silk May 23 2017 00:59 UTC

I've moderated a few comments on this post because I believe it has gone past useful discussion, and I'll continue to remove comments that I believe don't add anything of substantial value.

Thanks.

Aram Harrow May 22 2017 23:13 UTC

The problem with your argument is that no one is forcing anyone to say anything, or banning anything.

If the terms really were offensive or exclusionary or had other bad side effects, then it's reasonable to discuss as a community whether to keep them, and possibly decide to stop using them. Ther

...(continued)
stan May 22 2017 22:53 UTC

Fair enough. At the end of the day I think most of us are concerned with the strength of the result not the particular language used to describe it.

VeteranVandal May 22 2017 22:41 UTC

But how obvious is ancilla? To me it is not even remotely obvious (nor clear as a term, but as the literature used it so much, I see such word in much the same way as I see auxiliary, in fact - now if you want to take offense with auxiliary, what can I say? I won't invent words just to please you).

...(continued)