- Aug 30 2016 quant-ph arXiv:1608.07977v1 Information geometrical structure $(g^{(D_\alpha)}, \nabla^{(D_\alpha)},\nabla^{(D_\alpha)*})$ induced from the sandwiched Rényi $\alpha$-divergence $D_\alpha(\rho\|\sigma):=\frac{1}{\alpha (\alpha-1)}\log\,{\rm Tr} \left(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\right)^{\alpha}$ on a finite quantum state space $\mathcal{S}$ is studied. It is shown that the Riemannian metric $g^{(D_\alpha)}$ is monotone if and only if $\alpha\in(-\infty, -1]\cup [\frac{1}{2},\infty)$, and that the quantum statistical manifold $({\mathcal{S}}, g^{(D_\alpha)}, \nabla^{(D_\alpha)},\nabla^{(D_\alpha)*})$ is dually flat if and only if $\alpha=1$.
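For intuition about the divergence above, here is a minimal numerical sketch for the special case of commuting (simultaneously diagonal) states with eigenvalue lists $p$ and $q$, where the trace term reduces to $\sum_i p_i^\alpha q_i^{1-\alpha}$; the eigenvalue lists are invented for illustration:

```python
import math

def sandwiched_renyi(p, q, alpha):
    """Sandwiched Renyi alpha-divergence with the 1/(alpha*(alpha-1))
    normalization used in the abstract, restricted to commuting
    (diagonal) density matrices with eigenvalues p and q.  In that
    case Tr(sigma^{(1-a)/2a} rho sigma^{(1-a)/2a})^a collapses to
    sum_i p_i^alpha * q_i^(1-alpha)."""
    assert alpha not in (0.0, 1.0), "normalization is singular at alpha = 0, 1"
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q))
    return math.log(s) / (alpha * (alpha - 1))

# D_alpha(rho || rho) vanishes for any state
print(sandwiched_renyi([0.6, 0.4], [0.6, 0.4], 0.5))  # ~ 0
print(sandwiched_renyi([0.6, 0.4], [0.5, 0.5], 2.0))  # positive
```

For non-commuting states the matrix powers would need an eigendecomposition; the commuting case already suffices to see how the divergence depends on $\alpha$.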
- Recently the theory of communication developed by Shannon has been extended to the quantum realm by exploiting the rules of quantum theory. The latter rests on complex vector spaces. However, complex (as well as real) numbers are just idealizations and are not available in practice, where we can only deal with rational numbers. This fact naturally leads to the question of whether the notions of capacity developed for quantum channels truly capture their ability to transmit information. Here we answer this question for the quantum capacity. To this end we resort to the notion of semi-computability in order to approximately (by rational numbers) describe quantum states and quantum channel maps. Then we introduce algorithmic entropies (such as algorithmic quantum coherent information) and derive relevant properties for them. Finally we define the algorithmic quantum capacity and prove that it equals the standard one.
- State-of-the-art machine learning techniques promise to become a powerful tool in statistical mechanics via their capacity to distinguish different phases of matter in an automated way. Here we demonstrate that convolutional neural networks (CNN) can be optimized for quantum many-fermion systems such that they correctly identify and locate quantum phase transitions in such systems. Using auxiliary-field quantum Monte Carlo (QMC) simulations to sample the many-fermion system, we show that the Green's function (but not the auxiliary field) holds sufficient information to allow for the distinction of different fermionic phases via a CNN. We demonstrate that this QMC + machine learning approach works even for systems exhibiting a severe fermion sign problem where conventional approaches to extract information from the Green's function, e.g.~in the form of equal-time correlation functions, fail. We expect that this capacity of hierarchical machine learning techniques to circumvent the fermion sign problem will drive novel insights into some of the most fundamental problems in statistical physics.
- Umbral moonshine describes an unexpected relation between 23 finite groups arising from lattice symmetries and special mock modular forms. It includes the Mathieu moonshine as a special case and can itself be viewed as an example of the more general moonshine phenomenon which connects finite groups and distinguished modular objects. In this paper we introduce the notion of generalised umbral moonshine, which includes the generalised Mathieu moonshine as a special case, and provide supporting data for it. A central role is played by the deformed Drinfel'd (or quantum) double of each umbral finite group $G$, specified by a cohomology class in $H^3(G,U(1))$. We conjecture that in each of the 23 cases there exists a rule to assign an infinite-dimensional module for the deformed Drinfel'd double of the umbral finite group underlying the mock modular forms of umbral moonshine and generalised umbral moonshine. We also discuss the possible origin of the generalised umbral moonshine.
- The human face is a complex trait under strong genetic control, as evidenced by the striking visual similarity between twins. Nevertheless, heritability estimates of facial traits have often been surprisingly low or difficult to replicate. Furthermore, the construction of facial phenotypes that correspond to naturally perceived facial features remains largely a mystery. We present here a large-scale heritability study of face geometry that aims to address these issues. High-resolution, three-dimensional facial models have been acquired on a cohort of $952$ twins recruited from the TwinsUK registry, and processed through a novel landmarking workflow, GESSA (Geodesic Ensemble Surface Sampling Algorithm). The algorithm places thousands of landmarks throughout the facial surface and automatically establishes point-wise correspondence across faces. These landmarks enabled us to intuitively characterize facial geometry at a fine level of detail through curvature measurements, yielding accurate heritability maps of the human face (www.heritabilitymaps.info).
- Aug 30 2016 cs.CY arXiv:1608.08196v1 We argue that there already exists de facto artificial intelligence policy - a patchwork of policies impacting the field of AI's development in myriad ways. The key question related to AI policy, then, is not whether AI should be governed at all, but how it is currently being governed, and how that governance might become more informed, integrated, effective, and anticipatory. We describe the main components of de facto AI policy and make some recommendations for how AI policy can be improved, drawing on lessons from other scientific and technological domains.
- Visual question answering (VQA) systems are emerging from a desire to empower users to ask any natural language question about visual content and receive a valid answer in response. However, close examination of the VQA problem reveals an unavoidable, entangled problem: multiple humans may not always agree on a single answer to a visual question. We train a model to automatically predict from a visual question whether a crowd would agree on a single answer. We then propose how to exploit this system in a novel application to efficiently allocate human effort to collect answers to visual questions. Specifically, we propose a crowdsourcing system that automatically solicits fewer human responses when answer agreement is expected and more human responses when answer disagreement is expected. Our system improves upon existing crowdsourcing systems, typically eliminating at least 20% of human effort with no loss to the information collected from the crowd.
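The allocation policy described above can be sketched in a few lines. This is only a plausible reading of the abstract, not the authors' implementation; `predict_agreement`, `ask_crowd`, and the response counts are hypothetical stand-ins:

```python
def collect_answers(questions, predict_agreement, ask_crowd,
                    few=1, many=5, threshold=0.7):
    """Solicit fewer human responses when the model predicts the crowd
    would agree on one answer, and more when it predicts disagreement.
    predict_agreement and ask_crowd are placeholder callables, not the
    paper's actual model or crowdsourcing backend."""
    answers = {}
    for q in questions:
        n = few if predict_agreement(q) >= threshold else many
        answers[q] = ask_crowd(q, n)
    return answers

# Toy demo: color questions "agree", counting questions "disagree".
pred = lambda q: 0.9 if q.startswith("what color") else 0.3
crowd = lambda q, n: ["answer_%d" % i for i in range(n)]
out = collect_answers(["what color is the car?", "how many people?"], pred, crowd)
print(len(out["what color is the car?"]), len(out["how many people?"]))  # -> 1 5
```

The savings come from spending the `many - few` extra responses only on the fraction of questions predicted to be contentious.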
- Recommendation and collaborative filtering systems are important in modern information and e-commerce applications. As these systems become increasingly popular in industry, their outputs can affect business decision making, introducing incentives for an adversarial party to compromise the availability or integrity of such systems. We introduce a data poisoning attack on collaborative filtering systems. We demonstrate how a powerful attacker with full knowledge of the learner can generate malicious data so as to maximize his/her malicious objectives, while at the same time mimicking normal user behavior to avoid being detected. While the complete-knowledge assumption seems extreme, it enables a robust assessment of the vulnerability of collaborative filtering schemes to highly motivated attacks. We present efficient solutions for two popular factorization-based collaborative filtering algorithms: the alternating minimization formulation and the nuclear norm minimization method. Finally, we test the effectiveness of our proposed algorithms on real-world data and discuss potential defensive strategies.
- Topic modeling finds human-readable structures in large sets of unstructured SE data. A widely used topic modeler is Latent Dirichlet Allocation (LDA). When run on SE data, LDA suffers from "order effects", i.e. different topics may be generated if the training data is shuffled into a different order. Such order effects introduce a systematic error into any study that uses topics to draw conclusions. This paper introduces LDADE, a search-based SE tool that tunes LDA's parameters using DE (Differential Evolution). LDADE has been tested on data from a programmer information exchange site (Stackoverflow), title and abstract text of thousands of SE papers, and software defect reports from NASA. Results were collected across different implementations of LDA (Python+Scikit-Learn, Scala+Spark), across different platforms (Linux, Macintosh) and for different kinds of LDAs (the traditional VEM method, or using Gibbs sampling). In all tests, the pattern was the same: LDADE's tunings dramatically reduce topic instability. The implications of this study for other software analytics tasks are now an open and pressing issue. In how many domains can search-based SE dramatically improve software analytics?
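To illustrate the tuning loop at the heart of a tool like LDADE, here is a minimal DE/rand/1/bin optimizer applied to a toy stand-in objective. The real tool would score an actual LDA run (e.g. topic stability across shuffled orderings); the quadratic surface, parameter bounds, and settings below are assumptions for illustration only:

```python
import random

def differential_evolution(score, bounds, pop_size=20, gens=60,
                           F=0.8, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin minimizer.  `score` maps a parameter
    vector (e.g. LDA's k and a Dirichlet prior) to a scalar to be
    minimized; `bounds` is a list of (lo, hi) pairs per parameter."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [score(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct individuals other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated gene
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            f = score(trial)
            if f <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy "instability" surface with its minimum at k = 30, beta = 0.1.
toy = lambda x: (x[0] - 30) ** 2 + 100 * (x[1] - 0.1) ** 2
params, val = differential_evolution(toy, [(10, 100), (0.01, 1.0)])
print(params)  # close to [30, 0.1]
```

Swapping `toy` for a function that trains LDA and measures topic overlap across shuffled data orders gives the LDADE-style workflow.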
- Aug 30 2016 cs.CV arXiv:1608.08173v1 Correlation filtering based tracking models have received lots of attention and achieved great success in real-time tracking; however, the loss function in the current correlation filtering paradigm cannot reliably respond to the appearance changes caused by occlusion and illumination variations. This study intends to promote the robustness of correlation filter learning. By exploiting the anisotropy of the filter response, three sparsity-related loss functions are proposed to alleviate the overfitting issue of previous methods and improve the overall tracking performance. As a result, three real-time trackers are implemented. Extensive experiments in various challenging situations demonstrate that the robustness of the learned correlation filter has been greatly improved via the designed loss functions. In addition, the study reveals, from an experimental perspective, how different loss functions essentially influence the tracking performance. An important conclusion is that the sensitivity of the peak values of the filter in successive frames is consistent with the tracking performance. This is a useful reference criterion in designing a robust correlation filter for visual tracking.
- Aug 30 2016 cs.CV arXiv:1608.08171v1 A fundamental component of modern trackers is an online learned tracking model, which is typically modeled either globally or locally. The two kinds of models perform differently in terms of effectiveness and robustness under different challenging situations. This work exploits the advantages of both models. A subspace model, from a global perspective, is learned from previously obtained targets via rank-minimization to address the tracking, and a pixel-level local observation is leveraged simultaneously, from a local point of view, to augment the subspace model. A matrix completion method is employed to integrate the two models. Unlike previous tracking methods, which locate the target among all fully observed target candidates, the proposed approach first estimates an expected target via the matrix completion through partially observed target candidates, and then identifies the target according to the estimation accuracy with respect to the target candidates. Specifically, the tracking is formulated as a problem of target appearance estimation. Extensive experiments on various challenging video sequences verify the effectiveness of the proposed approach and demonstrate that the proposed tracker outperforms other popular state-of-the-art trackers.
- Aug 30 2016 stat.AP arXiv:1608.08168v1 Choice decisions made by users of online applications can suffer from biases due to the users' level of engagement. For instance, low engagement users may make random choices with no concern for the quality of items offered. This biased choice data can corrupt estimates of user preferences for items. However, one can correct for these biases if additional behavioral data is utilized. To do this we construct a new choice engagement time model which captures the impact of user engagement on choice decisions and response times associated with these choice decisions. Response times are the behavioral data we choose because they are easily measured by online applications and reveal information about user engagement. To test our model we conduct online polls with subject populations that have different levels of engagement and measure their choice decisions and response times. We have two main empirical findings. First, choice decisions and response times are correlated, with strong preferences having faster response times than weak preferences. Second, low user engagement is manifested through more random choice data and faster response times. Both of these phenomena are captured by our choice engagement time model and we find that this model fits the data better than traditional choice models. Our work has direct implications for online applications. It lets these applications remove the bias of low engagement users when estimating preferences for items. It also allows for the segmentation of users according to their level of engagement, which can be useful for targeted advertising or marketing campaigns.
- The aim of this paper is to define certain algebraic structures arising from the generalized Reidemeister moves of singular knot theory. We give examples and show that the set of colorings by these algebraic structures is an invariant of singular links. As an application we distinguish several singular knots and links.
- Aug 30 2016 cs.CV arXiv:1608.08149v1 We aim to track the endoscope location inside the surgical scene and provide 3D reconstruction, in real time, from the sole input of the image sequence captured by the monocular endoscope. This information offers new possibilities for developing surgical navigation and augmented reality applications. The main benefit of this approach is the lack of extra tracking elements, which can disturb the surgeon's performance in the clinical routine. Our first contribution is to exploit ORBSLAM, one of the best performing monocular SLAM algorithms, to estimate both the endoscope location and the 3D structure of the surgical scene. However, the reconstructed 3D map poorly describes textureless soft organ surfaces such as the liver. Our second contribution is to extend ORBSLAM to reconstruct a semi-dense map of soft organs. Experimental results on in-vivo pigs show robust endoscope tracking even with organ deformations and partial instrument occlusions. They also show the reconstruction density, and accuracy against a ground truth surface obtained from CT.
- Aug 30 2016 cs.AI arXiv:1608.08144v1 This paper describes an approach to the methodology of answer set programming (ASP) that can facilitate the design of encodings that are easy to understand and provably correct. Under this approach, after appending a rule or a small group of rules to the emerging program we include a comment that states what has been "achieved" so far. This strategy allows us to set out our understanding of the design of the program by describing the roles of small parts of the program in a mathematically precise way.
- This work presents a retrieval pipeline and evaluation scheme for the problem of finding the last appearance of personal objects in a large dataset of images captured from a wearable camera. Each personal object is modelled by a small set of images that define a query for a visual search engine. The retrieved results are reranked considering the temporal timestamps of the images to increase the relevance of the later detections. Finally, a temporal interleaving of the results is introduced for robustness against false detections. The Mean Reciprocal Rank is proposed as a metric to evaluate this problem. This application could help in developing personal assistants capable of helping users when they do not remember where they left their personal belongings.
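The proposed metric is standard and easy to state in code: the Mean Reciprocal Rank averages 1/rank of the first relevant result over queries. The object names below are made up for illustration:

```python
def mean_reciprocal_rank(ranked_lists, relevant):
    """Mean Reciprocal Rank: for each query, take 1/rank of the first
    relevant item in its ranked result list (0 if none is retrieved),
    then average over queries."""
    total = 0.0
    for results, target in zip(ranked_lists, relevant):
        for rank, item in enumerate(results, start=1):
            if item == target:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

queries = [["keys", "wallet", "phone"],   # relevant item at rank 1
           ["wallet", "keys", "phone"],   # relevant item at rank 2
           ["phone", "wallet", "book"]]   # relevant item missing
print(mean_reciprocal_rank(queries, ["keys", "keys", "keys"]))  # -> 0.5
```

Because only the first relevant hit counts, MRR fits a task whose goal is a single correct answer per query: the last location of one object.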
- Aug 30 2016 cs.CV arXiv:1608.08128v1 This thesis explores different approaches using Convolutional and Recurrent Neural Networks to classify and temporally localize activities in videos, and proposes an implementation to achieve this. As a first step, features are extracted from video frames using a state-of-the-art 3D Convolutional Neural Network. These features are fed into a recurrent neural network that solves the activity classification and temporal localization tasks in a simple and flexible way. Different architectures and configurations were tested in order to achieve the best performance and learning on the video dataset provided. In addition, different kinds of post-processing over the trained network's output were studied to achieve better results on the temporal localization of activities in the videos. The results produced by the neural network developed in this thesis were submitted to the ActivityNet Challenge 2016 of CVPR, achieving competitive results using a simple and flexible architecture.
- Aug 30 2016 astro-ph.EP arXiv:1608.08125v1 Sagan et al. (1993) used the Galileo space probe data and first principles to find evidence of life on Earth. Here we ask whether Sagan et al. (1993) could also have detected whether life on Earth had three-dimensional structure, based on the Galileo space probe data. We reanalyse the data from this probe to see if structured vegetation could have been detected in regions with abundant photosynthetic pigments through the anisotropy of reflected shortwave radiation. We compare the changing brightness of the Amazon forest (a region where Sagan et al. (1993) noted a red edge in the reflectance spectrum, indicative of photosynthesis) as the planet rotates to a common model of reflectance anisotropy, and find a measured increase in surface reflectance of 0.019 versus 0.007 predicted from anisotropic effects alone. We hypothesize that the difference was due to minor cloud contamination. However, the Galileo dataset had only a small change in phase angle (sun-satellite position), which reduced the observed anisotropy signal, and we demonstrate that, theoretically, if the probe had had a variable phase angle between 0 and 20 degrees, there would have been a much larger predicted change in surface reflectance of 0.06; under such a scenario three-dimensional vegetation structure on Earth could possibly have been detected. These results suggest that anisotropic effects may be useful in the future to help determine whether exoplanets have three-dimensional vegetation structure, but further comparisons between empirical and theoretical results are first necessary.
- Aug 30 2016 hep-th arXiv:1608.08116v1 The Pentagon Operator Product Expansion represents polygonal Wilson loops in planar $\mathcal{N}=4$ super Yang-Mills in terms of a series of flux tube excitations for finite coupling. We demonstrate how to re-sum this series at the one loop level for the hexagonal Wilson loop dual to the six-point MHV amplitude. By summing over a series of effective excitations we find expressions which integrate to logarithms and polylogarithms, reproducing the known one-loop result.
- Wasserstein Discriminant Analysis (WDA) is a new supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower dimensional subspace. Following the blueprint of classical Linear Discriminant Analysis (LDA), WDA selects the projection matrix that maximizes the ratio of two quantities: the dispersion of projected points coming from different classes, divided by the dispersion of projected points coming from the same class. To quantify dispersion, WDA uses regularized Wasserstein distances, rather than the cross-variance measures which have usually been considered, notably in LDA. Thanks to the underlying principles of optimal transport, WDA is able to capture both global (at distribution scale) and local (at samples scale) interactions between classes. Regularized Wasserstein distances can be computed using the Sinkhorn matrix scaling algorithm; we show that the optimization of WDA can be tackled using automatic differentiation of Sinkhorn iterations. Numerical experiments show promising results both in terms of prediction and visualization on toy examples and real-life datasets such as MNIST and on deep features obtained from a subset of the Caltech dataset.
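The Sinkhorn matrix scaling primitive mentioned above is small enough to sketch directly. This is a generic entropy-regularized optimal transport solver, not the paper's WDA code; the histograms, cost matrix, and regularization strength are illustrative choices:

```python
import math

def sinkhorn(a, b, C, eps=0.1, iters=200):
    """Entropy-regularized optimal transport via Sinkhorn scaling.
    a, b are marginal histograms, C a cost matrix; returns the
    transport plan P and its transport cost <P, C>."""
    K = [[math.exp(-c / eps) for c in row] for row in C]  # Gibbs kernel
    n, m = len(a), len(b)
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # Alternate scaling so P = diag(u) K diag(v) matches a, then b.
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    P = [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
    cost = sum(P[i][j] * C[i][j] for i in range(n) for j in range(m))
    return P, cost

# Two 2-point distributions; the plan's marginals match a and b.
P, cost = sinkhorn([0.5, 0.5], [0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]])
print(round(sum(P[0]), 6), round(cost, 3))
```

In WDA this distance would be evaluated between projected within-class and between-class point clouds, and the projection optimized by differentiating through the scaling iterations.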
- Clustering high-dimensional data often requires some form of dimensionality reduction, where clustered variables are separated from "noise-looking" variables. We cast this problem as finding a low-dimensional projection of the data which is well-clustered. This yields a one-dimensional projection in the simplest situation with two clusters, and extends naturally to a multi-label scenario for more than two clusters. In this paper, (a) we first show that this joint clustering and dimension reduction formulation is equivalent to previously proposed discriminative clustering frameworks, thus leading to convex relaxations of the problem, (b) we propose a novel sparse extension, which is still cast as a convex relaxation and allows estimation in higher dimensions, (c) we propose a natural extension for the multi-label scenario, (d) we provide a new theoretical analysis of the performance of these formulations with a simple probabilistic model, leading to scalings of the form $d=O(\sqrt{n})$ for the affine invariant case and $d=O(n)$ for the sparse case, where $n$ is the number of examples and $d$ the ambient dimension, and finally, (e) we propose an efficient iterative algorithm with running-time complexity proportional to $O(nd^2)$, improving on earlier algorithms which had quadratic complexity in the number of examples.
- Aug 30 2016 cs.CV arXiv:1608.08049v1 Tree-like structures such as retinal images are widely studied in computer-aided diagnosis systems in large-scale screening programs. Despite several segmentation and tracking methods proposed in the literature, there still exist several limitations specifically when two or more curvilinear structures cross or bifurcate, or in the presence of interrupted lines or highly curved blood vessels. In this paper, we propose a novel approach based on multi-orientation scores augmented with a contextual affinity matrix, which both are inspired by the geometry of the primary visual cortex (V1) and their contextual connections. The connectivity is described with a four-dimensional kernel obtained as the fundamental solution of the Fokker-Planck equation modelling the cortical connectivity in the lifted space of positions, orientations and curvatures. It is further used in a self-tuning spectral clustering step to identify the main perceptual units in the stimuli. The proposed method has been validated on several easy and challenging structures in a set of artificial images and actual retinal patches. Supported by quantitative and qualitative results, the method is capable of overcoming the limitations of current state-of-the-art techniques.
- Aug 30 2016 quant-ph arXiv:1608.08046v1 Asymmetry of quantum states is a useful resource in applications such as quantum metrology, but it tends to be corrupted for open systems under covariant operations. A challenge in exploiting the resource is to protect the asymmetry of states from the degradation caused by covariant operations. In this paper, we investigate under which dynamical conditions the asymmetry of a state is totally unaffected by covariant operations. We find that all asymmetry measures are frozen for a state under a covariant operation if and only if the relative entropy of asymmetry is frozen for the state. Our finding reveals the existence of the universal freezing of asymmetry for the first time, and provides a necessary and sufficient condition under which the asymmetry of an open system is totally unaffected by covariant operations.
- Aug 30 2016 cs.HC arXiv:1608.08041v1 The application of psychophysiology in human-computer interaction is a growing field with significant potential for future smart personalised systems. Working in this emerging field requires comprehension of an array of physiological signals and analysis techniques. Electromyography (EMG) is a useful signal for estimating the emotional context of individuals, because it is relatively robust, and simple to record and analyze. Common uses are to infer emotional valence in response to some stimulus, and to index some symptoms of stress. However, in order to interpret EMG signals, they must be considered alongside data on physical, social and intentional context. Here we present a short review on the application of EMG in human-computer interaction. This paper aims to serve as a primer for the novice, enabling rapid familiarisation with the latest core concepts. We put special emphasis on everyday human-computer interface applications to distinguish them from the more common clinical or sports uses of psychophysiology. This paper is an extract from a comprehensive review of the entire field of ambulatory psychophysiology, including 12 similar chapters, plus application guidelines and systematic review. Thus any citation should be made using the following reference: B. Cowley, M. Filetti, K. Lukander, J. Torniainen, A. Henelius, L. Ahonen, O. Barral, I. Kosunen, T. Valtonen, M. Huotilainen, N. Ravaja, G. Jacucci. The Psychophysiology Primer: a guide to methods and a broad review with a focus on human-computer interaction. Foundations and Trends in Human-Computer Interaction, vol. 9, no. 3-4, pp. 150--307, 2016.
- Aug 30 2016 q-bio.NC physics.bio-ph arXiv:1608.08040v1 Existing models of birdsong learning assume that brain area LMAN introduces variability into song for trial-and-error learning. Recent data suggest that LMAN also encodes a corrective bias driving short-term improvements in song. These later consolidate in area RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using a stochastic gradient descent approach, we derive how 'tutor' circuits should match plasticity mechanisms in 'student' circuits for efficient learning. We further describe a reinforcement learning framework with which the tutor can build its teaching signal. We show that mismatching the tutor signal and plasticity mechanism can impair or abolish learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.
- Aug 30 2016 cs.CV arXiv:1608.08029v1 In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects since they deal with each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these, we first propose an end-to-end edge-preserved neural network based on the Fast R-CNN framework (named RegionNet) to efficiently generate saliency maps with sharp object boundaries. Later, to further improve it, multi-scale spatial context is attached to RegionNet to consider the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection by depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves an optimized performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method outperforms previous methods by a large margin; in particular, we achieve relative improvements of 6.1% and 10.1% in F-measure on the ECSSD and DUT-OMRON datasets, respectively.
- Aug 30 2016 cs.AI arXiv:1608.08028v1 We show how, under certain conditions, the asymptotic behaviour of an Ordinary Differential Equation under non-constant interventions can be modelled using Dynamic Structural Causal Models. In contrast to earlier work, we study not only the effect of interventions on equilibrium states; rather, we model asymptotic behaviour that is dynamic under interventions that vary in time, and include as a special case the study of static equilibria.
- Aug 30 2016 cs.CV arXiv:1608.08021v1 This paper presents how we can achieve the state-of-the-art accuracy in multi-category object detection task while minimizing the computational cost by adapting and combining recent technical innovations. Following the common pipeline of "CNN feature extraction + region proposal + RoI classification", we mainly redesign the feature extraction part, since region proposal part is not computationally expensive and classification part can be efficiently compressed with common techniques like truncated SVD. Our design principle is "less channels with more layers" and adoption of some building blocks including concatenated ReLU, Inception, and HyperNet. The designed network is deep and thin and trained with the help of batch normalization, residual connections, and learning rate scheduling based on plateau detection. We obtained solid results on well-known object detection benchmarks: 81.8% mAP (mean average precision) on VOC2007 and 82.5% mAP on VOC2012 (2nd place), while taking only 750ms/image on Intel i7-6700K CPU with a single core and 46ms/image on NVIDIA Titan X GPU. Theoretically, our network requires only 12.3% of the computational cost compared to ResNet-101, the winner on VOC2012.
- Aug 30 2016 cs.AI arXiv:1608.08015v1 Explanations were introduced in the previous century. Their value in reducing the search space is no longer questioned. Yet, their efficient implementation into CSP solvers is still a challenge. In this paper, we introduce ESeR, an Event Selection Rules algorithm that filters events generated during propagation. This dynamic selection enables an efficient computation of explanations for intelligent backtracking algorithms. We show the effectiveness of our approach on the instances of the last three MiniZinc challenges.
- Aug 30 2016 quant-ph arXiv:1608.08006v1 The Hamilton operator of an open quantum system is non-Hermitian. Its eigenvalues are, generally, complex and provide not only the energies but also the lifetimes of the states of the system. The states may couple via the common environment of scattering wavefunctions into which the system is embedded. This causes an external mixing (EM) of the states. Mathematically, EM is related to the existence of singular (the so-called exceptional) points (EPs). The eigenfunctions of a non-Hermitian operator are biorthogonal, in contrast to the orthogonal eigenfunctions of a Hermitian operator. A quantitative measure for the ratio between biorthogonality and orthogonality is the phase rigidity of the wavefunctions. At and near an EP, the phase rigidity takes its minimum value. The lifetimes of two nearby eigenstates of a quantum system bifurcate under the influence of an EP. At the parameter value of maximum width bifurcation, the phase rigidity approaches the value one, meaning that the two eigenfunctions become orthogonal. However, the eigenfunctions are externally mixed at this parameter value. The S-matrix, and therewith the cross section, contains in the one-channel case almost no information on the EM of the states. The situation is completely different in the case with two (or more) channels, where the resonance structure is strongly influenced by the EM of the states and interesting features of non-Hermitian quantum physics are revealed. We provide numerical results for two and three nearby eigenstates of a non-Hermitian Hamilton operator which are embedded in one common continuum and influenced by two adjoining EPs. The results are discussed. They are of interest for an experimental test of non-Hermitian quantum physics as well as for applications.
- Neural avalanches exhibit power-law distributions of size and duration, appearing as straight lines when plotted on logarithmic scales. The power-law exponent is interpreted as a signature of criticality, and it is assumed that the resting brain operates near criticality. However, there is no clear evidence that supports this assumption, and extensive research studies conflict with one another. The model of the current paper is an extension of a previous publication wherein we used an integrate-and-fire model on a regular lattice with periodic boundary conditions and introduced temporal complexity as a genuine signature of criticality. However, in that model the power-law distribution of neural avalanches was a manifestation of super-criticality rather than criticality. Here, however, we show that replacing the discrete noise in the model with a Gaussian noise and a continuous-time solution of the equation leads to the coincidence of temporal complexity and spatiotemporal patterns of neural avalanches at a control parameter which is assumed to be the critical value of the model.
- Aug 30 2016 cs.CV arXiv:1608.07973v1 We present a novel bi-directional mapping neural network architecture for the task of matching vectors from two data sources. Our approach employs two tied neural network channels that project the two views into a common, maximally correlated space using the Euclidean loss. To achieve maximally correlated projections, we build an encoder-decoder framework composed of two parallel networks, each capturing the features of one of the views. We show a direct link between the correlation loss and the Euclidean loss, enabling the use of the Euclidean loss to optimize the correlation maximization problem. To overcome common Euclidean regression optimization problems, we incorporate batch-normalization layers and dropout layers adapted to the model at hand. We show state-of-the-art results on a number of computer vision matching tasks, including MNIST image matching and sentence-image matching on the Flickr8k and Flickr30k datasets.
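The stated link between the correlation loss and the Euclidean loss can be made concrete: for two projections that are centered and scaled to unit norm (which batch normalization approximately enforces), the squared Euclidean distance is an affine decreasing function of the correlation, so minimizing one maximizes the other. A minimal numerical sketch of this identity, not the paper's derivation, with all variable names hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 0.8 * x + 0.2 * rng.normal(size=100)   # two correlated "views"

def normalize(v):
    """Center and scale to unit norm, as batch normalization roughly does."""
    v = v - v.mean()
    return v / np.linalg.norm(v)

u, w = normalize(x), normalize(y)
corr = u @ w                        # Pearson correlation of x and y
sq_dist = np.sum((u - w) ** 2)      # Euclidean loss between the projections
# identity: ||u - w||^2 = ||u||^2 + ||w||^2 - 2 u.w = 2 * (1 - corr)
assert np.isclose(sq_dist, 2 * (1 - corr))
```

The identity holds exactly for any pair of centered, unit-norm vectors, which is what makes Euclidean regression a usable proxy for correlation maximization.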
- Aug 30 2016 cs.IR arXiv:1608.07952v1 Fine-grained user profile generation approaches have made it increasingly feasible to display on a profile page the topics in which a user has expertise or interest. Earlier work on topical user profiling has been directed at enhancing search and personalization functionality, but making such profiles useful for human consumption presents new challenges. With this work, we take a first step toward a semantic layout method for topical user profiles. We have developed a topical generalization approach that forms clusters of topics based on their association with broader topics in the Wikipedia category graph. This approach was evaluated with a task-based user study that reveals several of its current limitations and indicates directions for its improvement.
- Aug 30 2016 cs.CV arXiv:1608.07951v1 Computational color constancy refers to the problem of computing the illuminant color so that the images of a scene under varying illumination can be normalized to an image under the canonical illumination. In this paper, we adopt a deep learning framework for the illumination estimation problem. The proposed method works under the assumption of uniform illumination over the scene and aims for accurate illuminant color computation. Specifically, we train a convolutional neural network to solve the problem by casting color constancy as an illumination classification problem. We design the architecture so that the output of the network can be used directly to compute the color of the illumination. Experimental results show that our deep network is able to extract useful features for illumination estimation, and our method outperforms all previous color constancy methods on multiple test datasets.
- Explosive growth in the use of smart wireless devices has necessitated the provision of higher data rates and always-on connectivity, which are the main motivators for designing fifth generation (5G) systems. To achieve higher system efficiency, massive antenna deployment with tight coordination is one potential strategy for designing 5G systems, but it has two types of associated system overhead. First is the synchronization overhead, which can be reduced by implementing a cloud radio access network (CRAN)-based architecture that separates the baseband processing and radio access functionality to achieve better system synchronization. Second is the overhead for acquiring channel state information (CSI) of the users present in the system, which increases tremendously when instantaneous CSI is used to serve high-mobility users. To serve a large number of users, a CRAN system with a dense deployment of remote radio heads (RRHs) is considered, such that each user has a line-of-sight (LOS) link with the corresponding RRH. Since the trajectory of high-mobility users is predictable, fairly accurate position estimates for those users can be obtained and used for resource allocation. The resource allocation depends on various correlated system parameters, and these correlations can be learned using well-known machine learning algorithms. This paper proposes a novel learning-based resource allocation scheme for time division duplex (TDD) based 5G CRAN systems with dense RRH deployment that uses only the users' position estimates for resource allocation, thus avoiding the need for CSI acquisition. This reduces the overall system overhead significantly while still achieving near-optimal system performance, so that better (effective) system efficiency is achieved. (See the paper for the full abstract)
- Feature selection is an important task in many problems arising in pattern recognition, bioinformatics, machine learning and data mining applications. Feature selection enables us to reduce the computational burden and the loss of accuracy caused by dealing with a huge number of features in typical learning problems. There is a variety of techniques for feature selection in supervised learning problems, based on different selection metrics. In this paper, we propose a novel unified framework for feature selection built on graphical models and information-theoretic tools. The proposed approach exploits structure learning among features to select features that are more relevant and less redundant for the predictive modeling problem, according to a novel likelihood-based criterion. Alongside selecting the optimal subset of features, the method provides a Bayesian network classifier without the additional cost of model training on the selected subset. The optimality properties of our method are established through empirical studies and computational complexity analysis. Furthermore, the proposed approach is evaluated on a set of benchmark datasets using well-known classification algorithms. Extensive experiments confirm significant improvements over earlier works.
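The paper's criterion is likelihood-based and tied to Bayesian network structure learning; as a generic stand-in, the relevance-versus-redundancy trade-off it describes can be sketched with an mRMR-style greedy selector over discrete features. Everything below (function names, data, scoring rule) is a hypothetical illustration, not the paper's method:

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information (in nats) of two discrete arrays."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def select_features(X, y, k):
    """Greedy selection: at each step pick the feature with the best
    relevance-minus-redundancy score (mRMR-style criterion)."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        def score(j):
            relevance = mutual_info(X[:, j], y)
            redundancy = (np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
                          if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.column_stack([y, rng.integers(0, 2, 200), rng.integers(0, 2, 200)])
print(select_features(X, y, 2))  # feature 0 (a copy of the label) is chosen first
```

The informative feature (column 0) carries maximal mutual information with the label, so the greedy selector picks it before the noise columns.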
- This paper introduces a novel technique to track structures in time-varying graphs. The method uses a maximum a posteriori approach to adjust a three-dimensional co-clustering of the source vertices, the destination vertices and the time to the data under study, in a way that does not require any hyper-parameter tuning. The three dimensions are segmented simultaneously in order to build clusters of source vertices, destination vertices and time segments where the edge distributions across clusters of vertices follow the same evolution over the time segments. The main novelty of this approach is that the time segments are inferred directly from the evolution of the edge distribution between the vertices, so the user is not required to perform any a priori quantization. Experiments conducted on artificial data illustrate the good behavior of the technique, and a study of a real-life data set shows the potential of the proposed approach for exploratory data analysis.
- Ihara initiated the study of a certain Galois representation that may be seen as an arithmetic analogue of the Artin representation of a pure braid group. We pursue the analogies in Ihara theory further, following several topics and their interrelations in the theory of braids and links, such as Milnor invariants, Johnson homomorphisms, Magnus-Gassner cocycles and Alexander invariants, and study their relations with arithmetic in Ihara theory.
- Aug 30 2016 stat.ME arXiv:1608.07921v1This study extends the Bayesian nonparametric instrumental variable regression model to determine the structural effects of covariates on the conditional quantile of the response variable. The error distribution is nonparametrically modelled using a Dirichlet mixture of bivariate normal distributions. The mean functions include the smooth effects of the covariates represented using the spline functions in an additive manner. The conditional variance of the second-stage error is also modelled using the spline functions such that it varies smoothly with the covariates. Accordingly, the proposed model allows for considerable flexibility in the shape of the quantile function while correcting for an endogeneity effect. The posterior inference for the proposed model is based on the Markov chain Monte Carlo method that requires no Metropolis-Hastings update. The approach is demonstrated using simulated and real data on the death rate in Japan during the inter-war period.
- Convolutional network techniques have recently achieved great success in vision-based detection tasks. This paper introduces recent developments in our research on transplanting the fully convolutional network technique to detection tasks on 3D range scan data. Specifically, the scenario is vehicle detection from the range data of a Velodyne 64E lidar. We propose to present the data in a 2D point map and use a single 2D end-to-end fully convolutional network to predict the objectness confidence and the bounding boxes simultaneously. By carefully designing the bounding box encoding, the network is able to predict full 3D bounding boxes even though it is a 2D convolutional network. Experiments on the KITTI dataset show the state-of-the-art performance of the proposed method.
- We study the decoherence of massive fields during inflation based on Zurek's density matrix approach. With a cubic interaction between the inflaton and massive fields, the reduced density matrix for the massive fields can be calculated in the Schrödinger picture; it is related to the variance of the non-Gaussian exponent in the wave functional. The decoherence rate is computed at one loop via functional integration. For heavy fields with $m\gtrsim \mathcal{O}(H)$, quantum fluctuations easily remain in their quantum state and decoherence is unlikely. For light fields with mass smaller than $\mathcal{O}(H)$, by contrast, quantum fluctuations are easily decohered within $5\sim10$ e-folds after Hubble crossing. Thus heavy fields can play a key role in problems involving inflationary quantum information.
- Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because, compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) that constrains the output tokens to come from the input sequence. We propose two ways of using Pointer Net for our task. Our experiments show that both of our models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features.
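The pointer mechanism this abstract relies on can be sketched in a few lines: attention scores over the encoder states are normalized into a distribution whose support is exactly the set of input positions, so the model can only ever predict indices into the input sequence. A minimal NumPy sketch with hypothetical dimensions and randomly initialized weights, not the paper's trained model:

```python
import numpy as np

def pointer_step(encoder_states, query, W_enc, W_q, v):
    """One Pointer Net decoding step: the attention distribution over the
    input tokens *is* the output distribution, so every prediction is an
    index into the input sequence."""
    scores = np.tanh(encoder_states @ W_enc + query @ W_q) @ v
    probs = np.exp(scores - scores.max())   # numerically stable softmax
    probs /= probs.sum()
    return probs, int(np.argmax(probs))     # distribution + pointed position

rng = np.random.default_rng(0)
seq_len, d, h = 6, 8, 4                 # toy sizes, chosen arbitrarily
H = rng.normal(size=(seq_len, d))       # stand-in for encoder hidden states
q = rng.normal(size=d)                  # stand-in for the decoder query
probs, idx = pointer_step(H, q,
                          rng.normal(size=(d, h)),
                          rng.normal(size=(d, h)),
                          rng.normal(size=h))
```

For SQuAD-style answer extraction, two such steps (one for the answer's start position, one for its end) suffice, which is what makes the pointer formulation a natural fit for variable-length answers drawn from the passage.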
- We discuss social network analysis from the perspective of economics. We organize the presentation around the theme of externalities: the effects that one's behavior has on others' well-being. Externalities underlie the interdependencies that make networks interesting. We discuss network formation, as well as interactions between people's behaviors within a given network, and the implications in a variety of settings. Finally, we highlight some empirical challenges inherent in the statistical analysis of network-based data.
- Early supervised machine learning algorithms relied on reliable expert labels to build predictive models. However, the gates of data generation have recently been opened to a wider base of users, who participate increasingly through casual labeling, rating, annotating, etc. The increased online presence and participation of humans has led not only to a democratization of unchecked inputs to algorithms, but also to a wide democratization of the "consumption" of machine learning algorithms' outputs by general users. Hence, these algorithms, many of which are becoming essential building blocks of recommender systems and other information filters, have started interacting with users at unprecedented rates. The result is machine learning algorithms that consume more and more data that are unchecked or, at the very least, do not fit the conventional assumptions made by various machine learning algorithms. These include biased samples, biased labels, diverging training and testing sets, and cyclical interaction between algorithms, humans, the information consumed by humans, and the data consumed by algorithms. Yet, the continuous interaction between humans and algorithms is rarely taken into account in machine learning algorithm design and analysis. In this paper, we present a preliminary theoretical model and analysis of the mutual interaction between humans and algorithms, based on an iterated learning framework inspired by the study of human language evolution. We also define the concepts of human and algorithm blind spots and outline machine learning approaches to mend iterated bias through two novel notions: antidotes and reactive learning.
- The architecture of a recurrent neural network (RNN) is a key factor influencing its performance. We propose algorithms to optimize hidden sizes under a running-time constraint. We convert the discrete optimization into a subset selection problem. Through novel transformations, the objective function becomes submodular and the constraint becomes supermodular. A greedy algorithm with bounds is suggested to solve the transformed problem, and we show how the transformations influence the bounds. To speed up optimization, we propose surrogate functions that balance exploration and exploitation. Experiments show that our algorithms can find more accurate or faster models than manually tuned state-of-the-art architectures and random search. We also compare popular RNN architectures using our algorithms.
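The greedy step this abstract refers to can be illustrated on a stand-in objective: a coverage function is monotone submodular, and a cost-benefit greedy rule picks the element with the largest marginal gain per unit cost until the budget is exhausted. A sketch under these assumptions; the paper's actual objective, transformations, and bounds differ, and the sets and costs below are purely illustrative:

```python
def greedy_submodular(candidates, f, budget, cost):
    """Greedy maximization of a monotone submodular set function f under a
    budget: repeatedly pick the element with the largest marginal gain per
    unit cost, a classic heuristic with constant-factor guarantees."""
    chosen, spent = [], 0.0
    remaining = list(candidates)
    while remaining:
        def gain(c):
            return (f(chosen + [c]) - f(chosen)) / cost(c)
        best = max(remaining, key=gain)
        if spent + cost(best) > budget or gain(best) <= 0:
            break
        chosen.append(best)
        spent += cost(best)
        remaining.remove(best)
    return chosen

# Toy stand-in: a coverage function (monotone submodular) with "runtime" costs.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}}
costs = {"a": 1.0, "b": 1.0, "c": 2.0}
f = lambda S: len(set().union(*(sets[s] for s in S))) if S else 0
picked = greedy_submodular(sets, f, budget=3.0, cost=costs.get)
print(picked)  # ["a", "c"]: best coverage per cost within the budget
```

In the paper's setting the candidates would be hidden-size choices, the objective a transformed accuracy measure, and the budget the running-time constraint.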
- Aug 30 2016 cs.NI arXiv:1608.07891v1 Energy-efficient mobility management is an important problem in modern wireless networks with heterogeneous cell sizes and increased node densities. We show that optimization-based mobility protocols cannot achieve long-term optimal energy consumption, particularly in ultra-dense networks (UDN). To address the complex dynamics of UDN, we propose a non-stochastic online-learning approach that makes no assumption on the statistical behavior of the small base station (SBS) activities. In addition, we introduce a handover cost into the overall energy consumption, which forces the resulting solution to explicitly minimize frequent handovers. The proposed Batched Randomization with Exponential Weighting (BREW) algorithm relies on batching to explore in bulk, and hence reduces unnecessary handovers. We prove that the regret of BREW is sublinear in time, thus guaranteeing its convergence to the optimal SBS selection. We further study the robustness of the BREW algorithm to delayed or missing feedback. Moreover, we study the setting where SBSs can be dynamically turned on and off. We prove that sublinear regret is impossible with respect to arbitrary SBS on/off patterns, and then develop a novel learning strategy, called ranking expert (RE), that simultaneously takes into account the handover cost and the availability of SBSs. To address the high complexity of RE, we propose a contextual ranking expert (CRE) algorithm that only assigns experts in a given context. Rigorous regret bounds are proved for both RE and CRE with respect to the best expert. Simulations show that the proposed mobility algorithms not only greatly reduce system energy consumption but are also robust to various dynamics that are common in practical ultra-dense wireless networks.
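The batching idea behind BREW can be sketched with an EXP3-style exponential-weighting learner that commits to one arm per batch, so a switch (a handover, in the mobility setting) can occur at most once per batch. The parameters, reward model, and update below are a hypothetical illustration; the actual BREW algorithm and its regret analysis are in the paper:

```python
import numpy as np

def batched_exp3(n_arms, rewards, batch, eta, rng):
    """Exponential weighting that commits to one arm (base station) for a
    whole batch of rounds, limiting switches to at most one per batch."""
    weights = np.ones(n_arms)
    choices = []
    for start in range(0, len(rewards), batch):
        probs = weights / weights.sum()
        arm = int(rng.choice(n_arms, p=probs))
        batch_reward = 0.0
        for t in range(start, min(start + batch, len(rewards))):
            choices.append(arm)
            batch_reward += rewards[t][arm]          # rewards lie in [0, 1]
        # importance-weighted update for the played arm only
        weights[arm] *= np.exp(eta * batch_reward / (batch * probs[arm]))
        weights /= weights.max()                     # numerical safety
    return choices, weights

rng = np.random.default_rng(0)
T, batch = 500, 10
rewards = [np.array([0.0, 1.0])] * T   # arm 1 is always better
choices, weights = batched_exp3(2, rewards, batch, eta=0.3, rng=rng)
switches = sum(choices[t] != choices[t - 1] for t in range(1, T))
```

With a batch size of 10 over 500 rounds, at most 49 switches are possible by construction, which is the mechanism by which batching trades a little exploration granularity for far fewer handovers.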
- This paper presents a new framework for analyzing and designing no-regret algorithms for dynamic (possibly adversarial) systems. The proposed framework generalizes the popular online convex optimization framework and extends it to its natural limit, allowing it to capture a notion of regret that is intuitive for more general problems such as those encountered in game theory and variational inequalities. The framework hinges on a special choice of a system-wide loss function we have developed. Using this framework, we prove that a simple update scheme provides a no-regret algorithm for monotone systems. While previous results in game theory prove that individual agents can enjoy unilateral no-regret guarantees, our result proves that monotonicity is sufficient to guarantee no-regret when considering the adjustments of multiple agent strategies in parallel. Furthermore, to our knowledge, this is the first framework to provide a suitable notion of regret for variational inequalities. Most importantly, the proposed framework establishes monotonicity as a sufficient condition for employing multiple online learners safely in parallel.
- Aug 30 2016 cs.GT arXiv:1608.07886v1 We consider crowdsourcing problems where users are asked to provide evaluations for items; the user evaluations are then used directly, or aggregated into a consensus value. Lacking an incentive scheme, users have no motive to put effort into the evaluations and may instead provide inaccurate answers. We propose incentive schemes that are truthful and cheap: truthful because the optimal user behavior consists in providing accurate evaluations, and cheap because truthfulness is achieved with little overhead cost. We consider both discrete evaluation tasks, where an evaluation can be done either correctly or incorrectly, with no degrees of approximation in between, and quantitative evaluation tasks, where evaluations are real numbers and the error is measured as the distance from the correct value. For both types of tasks, we propose hierarchical incentive schemes that can be implemented with a small number of additional evaluations and that scale to arbitrarily large crowd sizes: they have the property that the strength of the incentive does not weaken with increasing hierarchy depth. Interestingly, we show that for these schemes to work, the only requisite is that workers know their place in the hierarchy in advance.
- Accountability aims to provide explanations for why unwanted situations occurred, thus providing means to assign responsibility and liability. As such, accountability has slightly different meanings across the sciences. In computer science, our focus is on providing explanations for technical systems, in particular if they interact with their physical environment using sensors and actuators and may do serious harm. Accountability is relevant when considering safety, security and privacy properties and we realize that all these incarnations are facets of the same core idea. Hence, in this paper we motivate and propose a model for accountability infrastructures that is expressive enough to capture all of these domains. At its core, this model leverages formal causality models from the literature in order to provide a solid reasoning framework. We show how this model can be instantiated for several real-world use cases.
- The theory of actual causality defined by Halpern and Pearl, together with its quantitative measure, the degree of responsibility, has been shown to be extremely useful in various areas of computer science due to a good match between the results it produces and our intuition. In this paper, I describe applications of causality to formal verification, namely the explanation of counterexamples, the refinement of coverage metrics, and symbolic trajectory evaluation. I also briefly discuss recent applications of causality to legal reasoning.