- Mar 30 2017 cs.CR arXiv:1703.09827v1 Recently, the integration of geographical coordinates into pictures has become more and more popular. Indeed, almost all smartphones and many cameras today have a built-in GPS receiver that stores the location information in the Exif header when a picture is taken. Although the automatic embedding of geotags in pictures is often overlooked by smartphone users, and can lead to endless discussions about its privacy implications, these geotags can be very useful to investigators analysing criminal activity. There are currently many free and commercial tools on the market that help computer forensics investigators cover a wide range of geographic information related to crime scenes or activities. However, no forensic tools are specifically designed to deal with the geolocation of pictures taken by smartphones or cameras. In this paper, we propose and develop an image scanning and mapping tool for investigators. The tool scans all the files in a given directory and then displays matching photos, based on optional filters (date, time, device, location), on Google Maps. The file scanning process is based not on the file extension but on the file header. The tool can also efficiently show users whether more than one image on the map shares the same GPS coordinates, or whether images without GPS coordinates were taken by the same device in the same timeline. Moreover, this new tool is portable: investigators can run it on any operating system without installation. Another useful feature is its ability to work in a read-only environment, so that forensic results are not modified. We also present and evaluate the tool's real-world application in this paper.
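The abstract notes that files are identified by their headers rather than their extensions. A minimal sketch of such magic-number detection follows; the byte signatures are the standard ones for these formats, but the function and table names are illustrative, not the tool's actual API:

```python
# Standard leading-byte signatures ("magic numbers") for common image formats.
MAGIC_NUMBERS = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"II*\x00": "tiff",  # little-endian TIFF
    b"MM\x00*": "tiff",  # big-endian TIFF
}

def detect_image_type(data: bytes):
    """Return the image type implied by the file's leading bytes, or None.

    This deliberately ignores the file extension: a renamed .txt file
    whose content is a JPEG is still identified as a JPEG.
    """
    for magic, kind in MAGIC_NUMBERS.items():
        if data.startswith(magic):
            return kind
    return None
```

A scanner would read the first few bytes of each file in the directory and pass them to `detect_image_type`, only parsing Exif GPS tags for files that are actually images.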
- Among several quantitative invariants found in evolutionary genomics, one of the most striking is the scaling of the overall abundance of proteins, or protein domains, sharing a specific functional annotation across genomes of a given size. The sizes of these functional categories change, on average, as power laws in the total number of protein-coding genes. Here, we show that such regularities are not restricted to the overall behavior of high-level functional categories, but also exist systematically at the level of single evolutionary families of protein domains. Specifically, the number of proteins within each family follows family-specific scaling laws with genome size. Functionally similar sets of families tend to follow similar scaling laws, but this is not always the case. To understand this systematically, we provide a comprehensive classification of families based on their scaling properties. Additionally, we develop a quantitative score for the heterogeneity of the scaling of families belonging to a given category or predefined group. Under the common and reasonable assumption that selection is driven solely or mainly by biological function, these findings point to fine-tuned and interdependent functional roles of specific protein domains, beyond our current functional annotations. This analysis provides a deeper view of the links between the evolutionary expansion of protein families and the functional constraints shaping the gene repertoire of bacterial genomes.
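A family-specific scaling law of the kind described, n ∝ N^α with N the number of protein-coding genes, is conventionally fit by least squares in log-log space. The sketch below shows that standard fit; the data and function name are illustrative, not the paper's pipeline:

```python
import math

def fit_power_law(genome_sizes, family_counts):
    """Fit log(count) = alpha * log(size) + const by ordinary least
    squares and return the scaling exponent alpha.

    A straight line in log-log space corresponds to a power law
    count ~ size**alpha.
    """
    xs = [math.log(n) for n in genome_sizes]
    ys = [math.log(c) for c in family_counts]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    variance = sum((x - mean_x) ** 2 for x in xs)
    return covariance / variance
```

For an exact quadratic law (counts proportional to the square of genome size), the fit recovers α = 2.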
- Mar 30 2017 cs.CL arXiv:1703.09817v1 A significant source of errors in Automatic Speech Recognition (ASR) systems is pronunciation variation, which occurs in spontaneous and conversational speech. ASR systems usually use a finite lexicon that provides one or more pronunciations for each word. In this paper, we focus on learning a similarity function between two pronunciations, which can be the canonical and surface pronunciations of the same word, or surface pronunciations of two different words. This task generalizes problems such as lexical access (the problem of learning the mapping between words and their possible pronunciations) and defining word neighborhoods. It can also be used to dynamically increase the size of the pronunciation lexicon, or to predict ASR errors. We propose two methods, both based on recurrent neural networks, to learn the similarity function: the first is based on binary classification, and the second on learning to rank the pronunciations. We demonstrate the effectiveness of our approach on the task of lexical access using a subset of the Switchboard conversational speech corpus. Results suggest that our method outperforms previous approaches based on graphical Bayesian models.
- Mar 30 2017 cond-mat.str-el physics.comp-ph arXiv:1703.09814v1 We propose a numerical approach for simulating the ground states of infinite quantum many-body lattice models in higher dimensions. Our method, built on tensor networks, is efficient, simple, flexible, and free of the standard finite-size errors. The basic principle is to transform the Hamiltonian on an infinite lattice into an effective one for a finite-size cluster embedded in an "entanglement bath". This effective Hamiltonian can be simulated efficiently by finite-size algorithms such as exact diagonalization or the density matrix renormalization group. The reduced density matrix of the ground state is then optimally approximated by that of the finite effective Hamiltonian, obtained by tracing over all the "entanglement bath" degrees of freedom. We explain and benchmark this approach with the Heisenberg antiferromagnet on the honeycomb lattice, and apply it to the simple cubic lattice, for which we investigate the ground-state properties of the Heisenberg antiferromagnet and the quantum phase transition of the transverse-field Ising model. Our approach, in addition to possessing high flexibility and simplicity, is free of the infamous "negative sign problem" and can readily be applied to simulate other strongly correlated models in higher dimensions, including those with strong geometrical frustration.
- Mar 30 2017 cs.DB arXiv:1703.09807v1 The data mining field is an important source of large-scale applications and datasets, which are becoming more and more common. In this paper, we present grid-based approaches for two basic data mining applications, together with a performance evaluation on an experimental grid environment that provides interesting monitoring capabilities and configuration tools. We propose a new distributed clustering approach and a distributed frequent-itemset generation method, both well adapted to grid environments. The performance evaluation is done using the Condor system and its workflow manager DAGMan. We also compare this performance analysis to a simple analytical model to evaluate the overheads related to the workflow engine and the underlying grid system. This shows, in particular, that realistic performance expectations are currently difficult to achieve on the grid.
- Mar 30 2017 cs.CV arXiv:1703.09788v1 We propose a temporal segmentation and procedure learning model for long, untrimmed and unconstrained videos, e.g., videos from YouTube. The proposed model segments a video into segments that constitute a procedure and learns the underlying temporal dependencies among the procedure segments. The output procedure segments can be applied to other tasks, such as video description generation or activity recognition. Two aspects distinguish our work from the existing literature. First, we introduce the problem of learning long-range temporal structure for procedure segments within a video, in contrast to the majority of efforts, which focus on understanding short-range temporal structure. Second, the proposed model segments an unseen video with only visual evidence and can automatically determine the number of segments to predict. Since no large-scale dataset with annotated procedure steps is available for evaluation, we collect a new cooking video dataset, named YouCookII, with the procedure steps localized and described. Our ProcNets model achieves state-of-the-art performance in procedure segmentation.
- Mar 30 2017 stat.ML arXiv:1703.09772v1 Automatic Music Transcription (AMT) consists in automatically estimating the notes in an audio recording through three attributes: onset time, duration, and pitch. Probabilistic Latent Component Analysis (PLCA) has become very popular for this task. PLCA is a spectrogram factorization method that models a magnitude spectrogram as a linear combination of spectral vectors from a dictionary. Such methods use the Expectation-Maximization (EM) algorithm to estimate the parameters of the acoustic model. This algorithm has well-known inherent shortcomings (local convergence, initialization dependency) that limit EM-based systems in their application to AMT, particularly with regard to the mathematical form and number of priors. To overcome these limits, we propose in this paper a different estimation framework based on Particle Filtering (PF), which consists in sampling the posterior distribution over larger parameter ranges. This framework proves to be more robust in parameter estimation, and more flexible and unifying in the integration of prior knowledge into the system. Note-level transcription accuracies of 61.8% and 59.5% were achieved on evaluation sound datasets from two different instrument repertoires, the classical piano (from the MAPS dataset) and the marovany zither, and direct comparisons to previous PLCA-based approaches are provided. Steps for further development are also outlined.
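As a toy illustration of the particle filtering framework the paper adopts in place of EM, the sketch below infers a single static parameter by sequential importance resampling. The model (a Gaussian observation of one unknown) and every parameter choice here are ours, for illustration only; they are not the paper's acoustic model:

```python
import math
import random

def particle_filter_1d(observations, n_particles=2000,
                       prior=(0.0, 5.0), noise=1.0, seed=0):
    """Estimate a static parameter theta observed through y ~ N(theta, noise^2)
    by sequential importance resampling.

    Unlike EM, this samples the posterior over a wide prior range, so it
    does not depend on a single initialization.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(*prior) for _ in range(n_particles)]
    for y in observations:
        # Weight each particle by the likelihood of the new observation.
        weights = [math.exp(-0.5 * ((y - p) / noise) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample: particles in high-likelihood regions are duplicated.
        particles = rng.choices(particles, weights=weights, k=n_particles)
        # Small jitter avoids degeneracy for a static parameter.
        particles = [p + rng.gauss(0.0, 0.05) for p in particles]
    return sum(particles) / n_particles
```

With repeated observations near a true value, the particle cloud concentrates around it, and the posterior mean estimate converges accordingly.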
- Mar 30 2017 cs.CV arXiv:1703.09771v1 We present a temporal 6-DOF tracking method that leverages deep learning to achieve state-of-the-art performance on challenging datasets of real-world captures. Our method is both more accurate and more robust to occlusions than the best existing approaches, while maintaining real-time performance. To assess its efficacy, we evaluate our approach on several challenging RGBD sequences of real objects in a variety of conditions. Notably, we systematically evaluate robustness to occlusions through a series of sequences in which the object to be tracked is increasingly occluded. Finally, our approach is purely data-driven and does not require any hand-designed features: robust tracking is automatically learned from data.
- Intrusion detection for computer network systems has become one of the most critical tasks for network administrators today. It plays an important role for organizations, governments, and our society due to the valuable resources hosted on computer networks. Traditional misuse detection strategies are unable to detect new and unknown intrusions. Anomaly detection in network security, by contrast, aims to distinguish illegal or malicious events from the normal behavior of network systems. Anomaly detection can be considered a classification problem: a model of normal network behavior is built and used to detect new patterns that deviate significantly from the model. Most current research on anomaly detection is based on learning normal and anomalous behaviors; it does not take the previous, recent events into account when classifying a new incoming one. In this paper, we propose a real-time collective anomaly detection model based on neural network learning and feature operating. Normally, a Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is trained only on normal data and is capable of predicting several time steps ahead of an input. In our approach, an LSTM RNN is trained on normal time series data before performing a live prediction for each time step. Instead of considering each time step separately, we propose observing the prediction errors over a certain number of time steps as a new idea for detecting collective anomalies: prediction errors from a number of the latest time steps above a threshold indicate a collective anomaly. The model is built on a time series version of the KDD 1999 dataset. The experiments demonstrate that the model offers reliable and efficient collective anomaly detection.
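The core detection rule, judging a run of prediction errors rather than any single step, can be sketched as follows. This is a simplified stand-in: in the paper the errors come from an LSTM's live predictions, whereas here the error sequence is simply given, and the window and threshold values are illustrative:

```python
def collective_anomalies(errors, window=5, threshold=1.0):
    """Flag a collective anomaly at step t when the mean prediction error
    over the last `window` steps exceeds `threshold`.

    A single spike is averaged away; only a sustained run of large
    errors trips the detector.
    """
    flags = []
    for t in range(len(errors)):
        recent = errors[max(0, t - window + 1): t + 1]
        flags.append(sum(recent) / len(recent) > threshold)
    return flags
```

With a window of 3 and threshold 1.0, one large error among small ones is ignored, while three consecutive large errors raise the flag.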
- Mar 30 2017 cs.CV arXiv:1703.09746v1 Very large-scale Deep Neural Networks (DNNs) have achieved remarkable success in a large variety of computer vision tasks. However, the high computational intensity of DNNs makes it challenging to deploy these models on resource-limited systems. Some studies have used low-rank approaches that approximate the filters by a low-rank basis to accelerate testing; those works directly decompose the pre-trained DNNs by Low-Rank Approximation (LRA). How to train DNNs toward a lower-rank space for more efficient DNNs, however, remains an open problem. To solve this issue, we propose Force Regularization, which applies attractive forces to filters so as to coordinate more weight information into a lower-rank space. We mathematically and empirically show that, after applying our technique, standard LRA methods can reconstruct filters using a much lower-rank basis and thus yield faster DNNs. The effectiveness of our approach is comprehensively evaluated on ResNets, AlexNet, and GoogLeNet. On AlexNet, for example, Force Regularization gains a 2x speedup on a modern GPU without accuracy loss and a 4.05x speedup on a CPU at the cost of a small accuracy degradation. Moreover, Force Regularization better initializes the low-rank DNNs, so that fine-tuning converges faster toward higher accuracy. The obtained lower-rank DNNs can be further sparsified, demonstrating that Force Regularization can be integrated with state-of-the-art sparsity-based acceleration methods.
- Mar 30 2017 cs.CR arXiv:1703.09745v1 Users of electronic devices such as laptops and smartphones have characteristic behaviors while surfing the Web. Profiling this behavior can help identify the person using a given device. In this paper, we introduce a technique to profile users based on their web transactions. We compute several features extracted from a sequence of web transactions and use them with one-class classification techniques to profile a user. We assess the efficacy and speed of our method at differentiating 25 users on a dataset representing 6 months of web traffic monitoring from a small company network.
- Deep learning-based approaches have been widely used for training controllers for autonomous vehicles, due to their powerful ability to approximate nonlinear functions and policies. However, the training process usually requires large labeled datasets and takes a long time. In this paper, we analyze the influence of features on the performance of controllers trained using convolutional neural networks (CNNs), which provides a guideline for feature selection to reduce computation cost. We collect a large dataset using The Open Racing Car Simulator (TORCS) and classify the image features into three categories (sky-related, roadside-related, and road-related features). We then design two experimental frameworks to investigate the importance of each single feature for training a CNN controller. The first framework uses training data with all three features included to train a controller, which is then tested with data that has one feature removed, to evaluate that feature's effects. The second framework is trained with data that has one feature excluded, while all three features are included in the test data. Different driving scenarios are selected to test and analyze the trained controllers under the two experimental frameworks. The experimental results show that (1) the road-related features are indispensable for training the controller, (2) the roadside-related features are useful for improving the generalizability of the controller to scenarios with complicated roadside information, and (3) the sky-related features contribute little to training an end-to-end autonomous vehicle controller.
- String theory models of axion monodromy inflation exhibit scalar potentials that are quadratic for small values of the inflaton field and evolve into a more complicated function for large field values. Oftentimes the large-field behaviour is gentler than quadratic, lowering the tensor-to-scalar ratio. This effect, known as flattening, has been observed in the string theory context through the properties of the DBI+CS D-brane action. We revisit such flattening effects in type IIB flux compactifications with mobile D7-branes, with the inflaton identified with the D7-brane position. We observe that, with a generic choice of background fluxes, flattening effects are larger than previously observed, allowing these models to fit within current experimental bounds. In particular, we compute the cosmological observables in scenarios compatible with closed-string moduli stabilisation, finding tensor-to-scalar ratios as low as r ~ 0.04. These are models of single-field inflation in which the inflaton is much lighter than the other scalars through a mild tuning of the compactification data.
- The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large datasets. Gaussian Processes are a popular class of models used for this purpose but, since their computational cost scales as the cube of the number of data points, their application has been limited to relatively small datasets. In this paper, we present a method for Gaussian Process modeling in one dimension where the computational requirements scale linearly with the size of the dataset. We demonstrate the method by applying it to simulated and real astronomical time series datasets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators (providing a physical motivation for, and interpretation of, this choice), but we also demonstrate that it is effective in many other cases. We present a mathematical description of the method, the details of the implementation, and a comparison to existing scalable Gaussian Process methods. The method is flexible, fast, and, most importantly, interpretable, with a wide range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
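The covariance form the method exploits, a mixture of exponentially damped sinusoids, can be written down directly. This sketch only evaluates the kernel for given term parameters (which are illustrative); it says nothing about the linear-time solver, which is where the paper's contribution lies:

```python
import math

def mixture_kernel(tau, terms):
    """Evaluate k(tau) = sum_j exp(-c_j*|tau|) * (a_j*cos(d_j*|tau|)
                                                  + b_j*sin(d_j*|tau|)),
    a real-valued covariance built from a mixture of complex exponentials.

    Each term is a tuple (a, b, c, d): amplitudes a and b, damping rate c,
    and oscillation frequency d. The kernel depends only on the time lag
    tau, and is symmetric in tau by construction.
    """
    tau = abs(tau)
    return sum(
        math.exp(-c * tau) * (a * math.cos(d * tau) + b * math.sin(d * tau))
        for a, b, c, d in terms
    )
```

A single term with b = d = 0 reduces to a plain exponential kernel; adding nonzero d gives the damped-oscillator behavior the paper motivates physically.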
- Inverse reinforcement learning (IRL) aims to explain observed complex behavior by fitting reinforcement learning models to behavioral data. However, traditional IRL methods are only applicable when the observations are in the form of state-action paths. This is a problem in many real-world modelling settings, where only more limited observations are easily available. To address this issue, we extend the traditional IRL problem formulation. We call this new formulation the inverse reinforcement learning from summary data (IRL-SD) problem, where instead of state-action paths, only summaries of the paths are observed. We propose exact and approximate methods for both maximum likelihood and full posterior estimation for IRL-SD problems. Through case studies we compare these methods, demonstrating that the approximate methods can be used to solve moderate-sized IRL-SD problems in reasonable time.
- Causal ordering of key events in the cell cycle is essential for the proper functioning of an organism. Yet it remains a mystery how a specific temporal program of events is maintained despite the ineluctable stochasticity in the biochemical dynamics that dictate the timing of cellular events. We propose that if a change of cell fate is triggered by the time-integral of the underlying stochastic biochemical signal, rather than by the original signal, then a dramatic improvement in temporal specificity results. Exact analytical results for stochastic models of hourglass timers and pendulum clocks, two important paradigms for biological timekeeping, elucidate how temporal specificity is achieved through time-integration. En route, we introduce a natural representation for time-integrals of stochastic processes, provide an analytical prescription for evaluating the corresponding first-passage-time distributions, and uncover a mechanism by which a population of identical cells can spontaneously bifurcate into subpopulations of early and late responders, depending on the hierarchy of timescales in the dynamics. Moreover, our approach reveals how the time-integration of stochastic signals may be realized biochemically, through a simple chemical reaction scheme.
- Despite the rapid progress of techniques for image classification, video annotation has remained a challenging task. Automated video annotation would be a breakthrough technology, enabling users to search within videos. Recently, Google introduced the Cloud Video Intelligence API for video analysis. As per the website, the system "separates signal from noise, by retrieving relevant information at the video, shot or per frame." A demonstration website has also been launched, which allows anyone to select a video for annotation. The API then detects the video labels (objects within the video) as well as shot labels (descriptions of the video events over time). In this paper, we examine the usability of Google's Cloud Video Intelligence API in adversarial environments. In particular, we investigate whether an adversary can manipulate a video in such a way that the API returns only the adversary-desired labels. To do so, we select an image that is different from the content of the video and insert it, periodically and at a very low rate, into the video. We found that if we insert one image every two seconds, the API is deceived into annotating the entire video as if it contained only the inserted image. Note that the modification to the video is hardly noticeable: for a typical frame rate of 25, for instance, we insert only one image per 50 video frames. We also found that, by inserting one image per second, all the shot labels returned by the API are related to the inserted image. We perform the experiments on the sample videos provided by the API demonstration website and show that our attack succeeds with different videos and images.
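The insertion schedule described above is simple arithmetic on the frame rate; a sketch (the function name is ours, not from the paper):

```python
def insertion_indices(n_frames, fps, every_seconds):
    """Indices of video frames to replace with the adversarial image:
    one frame per `every_seconds` of video.

    At 25 fps with an image every 2 seconds, this replaces 1 frame in
    every 50, matching the rate the paper reports as sufficient to
    deceive the annotation while staying hardly noticeable.
    """
    step = int(round(fps * every_seconds))
    return list(range(0, n_frames, step))
```

For a 6-second clip at 25 fps (150 frames) with an insertion every 2 seconds, only 3 of 150 frames are modified.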
- In this paper we follow our previous research in the area of Computerized Adaptive Testing (CAT). We present three different methods for CAT. One of them, item response theory, is a well-established method, while the other two, Bayesian and neural networks, are new to the area of educational testing. In the first part of this paper, we present the concept of CAT and its advantages and disadvantages. We collected data from paper tests performed with grammar school students, and we provide a summary of the data used for our experiments in the second part. Next, we present three different model types for CAT, based on item response theory, Bayesian networks, and neural networks. The general theory associated with each type is briefly explained, and the utilization of these models for CAT is analyzed. Future research is outlined in the concluding part of the paper; it shows many interesting research paths that are important not only for CAT but also for other areas of artificial intelligence.
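For the item response theory method, the standard ingredients of a CAT loop can be sketched directly: a 3-parameter logistic response curve and maximum-information item selection. This is the textbook formulation, not necessarily the paper's exact model:

```python
import math

def irt_3pl(theta, a, b, c):
    """3PL item response model: probability of a correct answer given
    examinee ability theta, item discrimination a, difficulty b, and
    guessing parameter c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def next_item(theta, items):
    """CAT item selection: pick the (a, b) item with maximal Fisher
    information at the current ability estimate (2PL information, c=0).

    Information a^2 * p * (1-p) peaks when the item difficulty b matches
    the examinee's ability theta, which is what makes the test adaptive.
    """
    def information(item):
        a, b = item
        p = irt_3pl(theta, a, b, 0.0)
        return a * a * p * (1.0 - p)
    return max(items, key=information)
```

An examinee of ability 0 thus gets an item of difficulty near 0 rather than a much easier or much harder one; after each response, theta is re-estimated and the selection repeats.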
- This paper investigates a novel task: generating texture images from perceptual descriptions. Previous work on texture generation has focused on either synthesis from examples or generation from procedural models; generating textures from perceptual attributes has not yet been well studied. Meanwhile, perceptual attributes such as directionality, regularity, and roughness are important factors for human observers when describing a texture. In this paper, we propose a joint deep network model that combines adversarial training and perceptual feature regression for texture generation, requiring only random noise and user-defined perceptual attributes as input. In this model, a pre-trained convolutional neural network is integrated with the adversarial framework, which can drive the generated textures to possess the given perceptual attributes. An important aspect of the proposed model is that if we change one of the input perceptual features, the corresponding appearance of the generated textures also changes. We design several experiments to validate the effectiveness of the proposed method. The results show that it can produce high-quality texture images with the desired perceptual properties.
- Mar 30 2017 cs.CV arXiv:1703.09779v1 Deep Neural Networks are becoming the de facto standard models for image understanding and, more generally, for computer vision tasks. As they involve highly parallelizable computations, CNNs are well suited to current fine-grained programmable logic devices, and multiple CNN accelerators have been successfully implemented on FPGAs. Unfortunately, FPGA resources such as logic elements or DSP units remain limited. This work presents a holistic method relying on approximate computing and design space exploration to optimize the DSP block utilization of a CNN implementation on an FPGA. The method was tested by implementing a reconfigurable OCR convolutional neural network on an Altera Stratix V device, varying both the data representation and the CNN topology in order to find the best combination in terms of DSP block utilization and classification accuracy. This exploration generated dataflow architectures for 76 CNN topologies with 5 different fixed-point representations. The most efficient implementation performs 883 classifications/sec at 256 x 256 resolution using 8% of the available DSP blocks.