Top arXiv papers

  • PDF
    Recently, the embedding of geographical coordinates in pictures has become increasingly common. Almost all smartphones and many cameras today have a built-in GPS receiver that stores the location information in the Exif header when a picture is taken. Although smartphone users often ignore the automatic embedding of geotags in pictures, despite its privacy implications, these geotags can be very useful to investigators analysing criminal activity. There are currently many free and commercial tools that help computer forensics investigators cover a wide range of geographic information related to crime scenes or activities. However, no specific forensic tools are available to deal with the geolocation of pictures taken by smartphones or cameras. In this paper, we propose and develop an image scanning and mapping tool for investigators. This tool scans all the files in a given directory and then displays selected photos, based on optional filters (date, time, device, localisation), on Google Maps. The file scanning process is based not on the file extension but on the file header. The tool can also efficiently show users whether more than one image on the map shares the same GPS coordinates, or whether images without GPS coordinates were taken by the same device within the same time frame. Moreover, the tool is portable: investigators can run it on any operating system without installation. Another useful feature is its ability to work in a read-only environment, so that forensic results are not modified. We also present and evaluate a real-world application of this tool.
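    A minimal sketch of the two core mechanisms described above, identifying files by their header bytes rather than their extension and pulling GPS coordinates from the Exif block, assuming the Pillow library; this is an illustration, not the authors' tool:

```python
# Header-based image scanning plus Exif GPS extraction (sketch, assumes Pillow).
import os
from PIL import Image
from PIL.ExifTags import GPSTAGS

JPEG_MAGIC = b"\xff\xd8\xff"  # identify files by header, not extension

def is_jpeg(path):
    with open(path, "rb") as f:
        return f.read(3) == JPEG_MAGIC

def to_degrees(dms, ref):
    # dms is a (degrees, minutes, seconds) triple of rationals
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ("S", "W") else deg

def gps_coordinates(path):
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(34853)  # 34853 is the GPSInfo Exif tag
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
    try:
        lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
        lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    except KeyError:
        return None
    return lat, lon

def scan(directory):
    # walk the directory tree, yielding (path, coordinates-or-None)
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if is_jpeg(path):
                yield path, gps_coordinates(path)
```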
  • PDF
    Among the several quantitative invariants found in evolutionary genomics, one of the most striking is the scaling of the overall abundance of proteins, or protein domains, sharing a specific functional annotation across genomes of a given size. The sizes of these functional categories change, on average, as power laws in the total number of protein-coding genes. Here, we show that such regularities are not restricted to the overall behavior of high-level functional categories, but also exist systematically at the level of single evolutionary families of protein domains. Specifically, the number of proteins within each family follows a family-specific scaling law with genome size. Functionally similar sets of families tend to follow similar scaling laws, but this is not always the case. To understand this systematically, we provide a comprehensive classification of families based on their scaling properties. Additionally, we develop a quantitative score for the heterogeneity of the scaling of families belonging to a given category or predefined group. Under the common and reasonable assumption that selection is driven solely or mainly by biological function, these findings point to fine-tuned and interdependent functional roles of specific protein domains, beyond our current functional annotations. This analysis provides a deeper view of the links between the evolutionary expansion of protein families and the functional constraints shaping the gene repertoire of bacterial genomes.
  • PDF
    A significant source of errors in Automatic Speech Recognition (ASR) systems is the pronunciation variation that occurs in spontaneous and conversational speech. ASR systems usually use a finite lexicon that provides one or more pronunciations for each word. In this paper, we focus on learning a similarity function between two pronunciations. The two pronunciations can be the canonical and surface pronunciations of the same word, or the surface pronunciations of two different words. This task generalizes problems such as lexical access (the problem of learning the mapping between words and their possible pronunciations) and defining word neighborhoods. It can also be used to dynamically increase the size of the pronunciation lexicon, or to predict ASR errors. We propose two methods, both based on recurrent neural networks, to learn the similarity function. The first is based on binary classification, and the second on learning a ranking of the pronunciations. We demonstrate the efficiency of our approach on the task of lexical access using a subset of the Switchboard conversational speech corpus. The results suggest that our method is superior to previous approaches based on graphical Bayesian models.
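    A sketch of what the binary-classification variant could look like in PyTorch, with a siamese LSTM encoder over phone sequences; the architecture sizes and layout are illustrative assumptions, not the authors' configuration:

```python
# Siamese LSTM similarity scorer for phone sequences (illustrative sketch).
import torch
import torch.nn as nn

class PronunciationSimilarity(nn.Module):
    def __init__(self, n_phones=50, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_phones, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.classify = nn.Linear(2 * hidden, 1)  # binary: same word or not

    def encode(self, seq):
        _, (h, _) = self.encoder(self.embed(seq))
        return h[-1]  # final hidden state as a fixed-size pronunciation code

    def forward(self, seq_a, seq_b):
        code = torch.cat([self.encode(seq_a), self.encode(seq_b)], dim=-1)
        return torch.sigmoid(self.classify(code)).squeeze(-1)

model = PronunciationSimilarity()
a = torch.randint(0, 50, (4, 7))   # batch of 4 phone sequences, length 7
b = torch.randint(0, 50, (4, 9))
score = model(a, b)                # similarity in [0, 1]
```

    The ranking variant could drop the sigmoid and train the raw score with a margin ranking loss (e.g. torch.nn.MarginRankingLoss) over pairs of pronunciations.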
  • PDF
    We propose a numerical approach for simulating the ground states of infinite quantum many-body lattice models in higher dimensions. Our method, built on tensor networks, is efficient, simple, flexible, and free of the standard finite-size errors. The basic principle is to transform the Hamiltonian on an infinite lattice into an effective one for a finite-size cluster embedded in an "entanglement bath". This effective Hamiltonian can be simulated efficiently by finite-size algorithms, such as exact diagonalization or the density matrix renormalization group. The reduced density matrix of the ground state is then optimally approximated by that of the finite effective Hamiltonian, by tracing over all the "entanglement bath" degrees of freedom. We explain and benchmark this approach with the Heisenberg antiferromagnet on the honeycomb lattice, and apply it to the simple cubic lattice, on which we investigate the ground-state properties of the Heisenberg antiferromagnet and the quantum phase transition of the transverse Ising model. Our approach, in addition to possessing high flexibility and simplicity, is free of the infamous "negative sign problem" and can readily be applied to simulate other strongly correlated models in higher dimensions, including those with strong geometrical frustration.
  • PDF
    The data mining field is an important source of large-scale applications and datasets, which are becoming more and more common. In this paper, we present grid-based approaches for two basic data mining applications, together with a performance evaluation on an experimental grid environment that provides interesting monitoring capabilities and configuration tools. We propose a new distributed clustering approach and a distributed frequent-itemset generation method, both well adapted to grid environments. The performance evaluation uses the Condor system and its workflow manager DAGMan. We also compare this performance analysis to a simple analytical model to evaluate the overheads of the workflow engine and the underlying grid system. This shows, in particular, that realistic performance expectations are currently difficult to achieve on the grid.
  • PDF
    We propose a temporal segmentation and procedure learning model for long untrimmed and unconstrained videos, e.g., videos from YouTube. The proposed model divides a video into segments that constitute a procedure and learns the underlying temporal dependency among the procedure segments. The output procedure segments can be applied to other tasks, such as video description generation or activity recognition. Two aspects distinguish our work from the existing literature. First, we introduce the problem of learning long-range temporal structure over procedure segments within a video, in contrast to the majority of efforts that focus on understanding short-range temporal structure. Second, the proposed model segments an unseen video with only visual evidence and can automatically determine the number of segments to predict. Since no large-scale dataset with annotated procedure steps is available for evaluation, we collect a new cooking video dataset, named YouCookII, with the procedure steps localized and described. Our ProcNets model achieves state-of-the-art performance in procedure segmentation.
  • PDF
    Automatic Music Transcription (AMT) consists of automatically estimating the notes in an audio recording through three attributes: onset time, duration and pitch. Probabilistic Latent Component Analysis (PLCA) has become very popular for this task. PLCA is a spectrogram factorization method able to model a magnitude spectrogram as a linear combination of spectral vectors from a dictionary. Such methods use the Expectation-Maximization (EM) algorithm to estimate the parameters of the acoustic model. This algorithm has well-known inherent drawbacks (local convergence, initialization dependency), making EM-based systems limited in their application to AMT, particularly with regard to the mathematical form and number of priors. To overcome these limits, we propose in this paper a different estimation framework based on Particle Filtering (PF), which consists of sampling the posterior distribution over larger parameter ranges. This framework proves to be more robust in parameter estimation, and more flexible and unifying in the integration of prior knowledge into the system. Note-level transcription accuracies of 61.8% and 59.5% were achieved on evaluation datasets from two different instrument repertoires, classical piano (from the MAPS dataset) and the marovany zither, and direct comparisons to previous PLCA-based approaches are provided. Steps for further development are also outlined.
  • PDF
    We present a temporal 6-DOF tracking method which leverages deep learning to achieve state-of-the-art performance on challenging datasets of real-world capture. Our method is both more accurate and more robust to occlusions than the best existing approaches, while maintaining real-time performance. To assess its efficacy, we evaluate our approach on several challenging RGBD sequences of real objects in a variety of conditions. Notably, we systematically evaluate robustness to occlusions through a series of sequences where the object to be tracked is increasingly occluded. Finally, our approach is purely data-driven and does not require any hand-designed features: robust tracking is automatically learned from data.
  • PDF
    Intrusion detection for computer network systems has become one of the most critical tasks for network administrators today. It plays an important role for organizations, governments and our society due to the valuable resources hosted on computer networks. Traditional misuse detection strategies are unable to detect new and unknown intrusions. Anomaly detection in network security, by contrast, aims to distinguish between illegal or malicious events and the normal behavior of network systems. Anomaly detection can be considered a classification problem: it builds models of normal network behavior and uses them to detect new patterns that significantly deviate from the model. Most current research on anomaly detection is based on learning normal and anomalous behaviors; it does not take the previous, recent events into account when detecting a new incoming one. In this paper, we propose a real-time collective anomaly detection model based on neural network learning and feature operating. Normally, a Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is trained only on normal data and is capable of predicting several time steps ahead of an input. In our approach, an LSTM RNN is trained with normal time series data before performing a live prediction for each time step. Instead of considering each time step separately, we propose observing the prediction errors from a certain number of time steps as a new idea for detecting collective anomalies: prediction errors from a number of the latest time steps above a threshold indicate a collective anomaly. The model is built on a time series version of the KDD 1999 dataset. The experiments demonstrate that it offers reliable and efficient collective anomaly detection.
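    The windowed decision rule is easy to state in code. A minimal numpy sketch, with the LSTM predictor itself omitted and the window size and threshold as placeholder values:

```python
# Collective-anomaly rule: flag an alarm when the prediction errors of the
# last w steps exceed a threshold on average (sustained, not isolated, error).
import numpy as np

def collective_anomalies(errors, window=5, threshold=0.3):
    """errors: per-step prediction errors from a model trained on normal data."""
    errors = np.asarray(errors)
    flags = np.zeros(len(errors), dtype=bool)
    for t in range(window - 1, len(errors)):
        recent = errors[t - window + 1 : t + 1]
        flags[t] = recent.mean() > threshold
    return flags
```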
  • PDF
    Very large-scale Deep Neural Networks (DNNs) have achieved remarkable successes in a large variety of computer vision tasks. However, the high computational intensity of DNNs makes it challenging to deploy these models on resource-limited systems. Some studies have used low-rank approaches that approximate the filters by a low-rank basis to accelerate testing. Those works directly decompose pre-trained DNNs by Low-Rank Approximation (LRA). How to train DNNs toward a lower-rank space for more efficient DNNs, however, remains an open area. To address this issue, we propose Force Regularization, which uses attractive forces on filters to coordinate more weight information into a lower-rank space. We mathematically and empirically show that, after applying our technique, standard LRA methods can reconstruct filters using a much lower-rank basis and thus yield faster DNNs. The effectiveness of our approach is comprehensively evaluated on ResNets, AlexNet, and GoogLeNet. On AlexNet, for example, Force Regularization gains a 2x speedup on a modern GPU without accuracy loss and a 4.05x speedup on a CPU at the cost of a small accuracy degradation. Moreover, Force Regularization better initializes the low-rank DNNs so that fine-tuning converges faster toward higher accuracy. The obtained lower-rank DNNs can be further sparsified, showing that Force Regularization can be integrated with state-of-the-art sparsity-based acceleration methods.
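    For context, the LRA step that Force Regularization is designed to assist can be illustrated with a truncated SVD of a reshaped filter matrix; this shows the baseline decomposition, not the proposed regularizer:

```python
# Truncated-SVD low-rank approximation (LRA) of a reshaped conv weight matrix.
import numpy as np

def low_rank_factors(W, rank):
    """W: filters reshaped to 2-D, e.g. (out_channels, in_channels * k * k)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]     # (out_channels, rank)
    B = Vt[:rank]                  # (rank, in_channels * k * k)
    return A, B                    # two cheaper layers replacing one

W = np.random.randn(64, 3 * 3 * 32)
A, B = low_rank_factors(W, rank=8)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative error
```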
  • PDF
    Users of electronic devices such as laptops and smartphones have characteristic behaviors while surfing the Web. Profiling this behavior can help identify the person using a given device. In this paper, we introduce a technique to profile users based on their web transactions. We compute several features extracted from a sequence of web transactions and use them with one-class classification techniques to profile a user. We assess the efficacy and speed of our method at differentiating 25 users on a dataset representing 6 months of web traffic monitoring from a small company network.
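    A sketch of the one-class setup with scikit-learn's OneClassSVM; the features named in the comments are hypothetical stand-ins for the paper's web-transaction features:

```python
# One-class profiling: fit on one user's sessions, flag others as outliers.
import numpy as np
from sklearn.svm import OneClassSVM

# rows: per-session feature vectors for ONE user (e.g. request rate,
# mean inter-request time, distinct hosts, bytes up/down -- hypothetical)
X_train = np.random.rand(200, 4)
profile = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)

X_test = np.random.rand(10, 4)
is_same_user = profile.predict(X_test) == 1  # +1 inlier, -1 outlier
```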
  • PDF
    Deep learning-based approaches have been widely used for training controllers for autonomous vehicles due to their powerful ability to approximate nonlinear functions or policies. However, the training process usually requires large labeled data sets and takes a lot of time. In this paper, we analyze the influence of features on the performance of controllers trained using convolutional neural networks (CNNs), which provides a guideline for feature selection to reduce computation cost. We collect a large set of data using The Open Racing Car Simulator (TORCS) and classify the image features into three categories (sky-related, roadside-related, and road-related features). We then design two experimental frameworks to investigate the importance of each single feature for training a CNN controller. The first framework uses training data with all three features included to train a controller, which is then tested with data that has one feature removed, to evaluate that feature's effects. The second framework is trained with data that has one feature excluded, while all three features are included in the test data. Different driving scenarios are selected to test and analyze the trained controllers using the two experimental frameworks. The experimental results show that (1) the road-related features are indispensable for training the controller, (2) the roadside-related features are useful for improving the generalizability of the controller to scenarios with complicated roadside information, and (3) the sky-related features contribute little to training an end-to-end autonomous vehicle controller.
  • PDF
    String theory models of axion monodromy inflation exhibit scalar potentials which are quadratic for small values of the inflaton field and evolve to a more complicated function for large field values. Oftentimes the large-field behaviour is gentler than quadratic, lowering the tensor-to-scalar ratio. This effect, known as flattening, has been observed in the string theory context through the properties of the DBI+CS D-brane action. We revisit such flattening effects in type IIB flux compactifications with mobile D7-branes, with the inflaton identified with the D7-brane position. We observe that, with a generic choice of background fluxes, flattening effects are larger than previously observed, allowing these models to fit within current experimental bounds. In particular, we compute the cosmological observables in scenarios compatible with closed-string moduli stabilisation, finding tensor-to-scalar ratios as low as r ~ 0.04. These are models of single-field inflation in which the inflaton is much lighter than the other scalars thanks to a mild tuning of the compactification data.
  • PDF
    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large datasets. Gaussian Processes are a popular class of models used for this purpose, but since the computational cost scales as the cube of the number of data points, their application has been limited to relatively small datasets. In this paper, we present a method for Gaussian Process modeling in one dimension where the computational requirements scale linearly with the size of the dataset. We demonstrate the method by applying it to simulated and real astronomical time series datasets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically-driven damped harmonic oscillators - providing a physical motivation for and interpretation of this choice - but we also demonstrate that it is effective in many other cases. We present a mathematical description of the method, the details of the implementation, and a comparison to existing scalable Gaussian Process methods. The method is flexible, fast, and most importantly, interpretable, with a wide range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
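    The covariance family at the heart of the method can be written down directly. A naive numpy illustration, assuming a single cosine term; the paper's contribution is an O(N) solver for exactly this family, whereas the Cholesky solve below is the standard O(N^3) reference, for exposition only:

```python
# Covariance of the form k(tau) = sum_j a_j * exp(-c_j*|tau|) * cos(d_j*|tau|),
# evaluated naively, plus a reference GP log-likelihood via Cholesky.
import numpy as np

def kernel(tau, a, c, d):
    tau = np.abs(tau)
    return sum(aj * np.exp(-cj * tau) * np.cos(dj * tau)
               for aj, cj, dj in zip(a, c, d))

def gp_log_likelihood(t, y, yerr, a, c, d):
    K = kernel(t[:, None] - t[None, :], a, c, d)
    K[np.diag_indices_from(K)] += yerr ** 2          # measurement noise
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * (y @ alpha) - np.log(np.diag(L)).sum()
            - 0.5 * len(t) * np.log(2 * np.pi))

t = np.sort(np.random.uniform(0, 10, 100))   # unevenly spaced, as allowed
y = np.sin(t) + 0.1 * np.random.randn(100)
print(gp_log_likelihood(t, y, 0.1, a=[1.0], c=[0.5], d=[1.0]))
```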
  • PDF
    Inverse reinforcement learning (IRL) aims to explain observed complex behavior by fitting reinforcement learning models to behavioral data. However, traditional IRL methods are only applicable when the observations are in the form of state-action paths. This is a problem in many real-world modelling settings, where only more limited observations are easily available. To address this issue, we extend the traditional IRL problem formulation. We call this new formulation the inverse reinforcement learning from summary data (IRL-SD) problem, where instead of state-action paths, only summaries of the paths are observed. We propose exact and approximate methods for both maximum likelihood and full posterior estimation for IRL-SD problems. Through case studies we compare these methods, demonstrating that the approximate methods can be used to solve moderate-sized IRL-SD problems in reasonable time.
  • PDF
    Causal ordering of key events in the cell cycle is essential for the proper functioning of an organism. Yet it remains a mystery how a specific temporal program of events is maintained despite the ineluctable stochasticity of the biochemical dynamics that dictate the timing of cellular events. We propose that if a change of cell fate is triggered by the time-integral of the underlying stochastic biochemical signal, rather than by the original signal, then a dramatic improvement in temporal specificity results. Exact analytical results for stochastic models of hourglass timers and pendulum clocks, two important paradigms for biological timekeeping, elucidate how temporal specificity is achieved through time-integration. En route, we introduce a natural representation for time-integrals of stochastic processes, provide an analytical prescription for evaluating the corresponding first-passage-time distributions, and uncover a mechanism by which a population of identical cells can spontaneously bifurcate into subpopulations of early and late responders, depending on the hierarchy of timescales in the dynamics. Moreover, our approach reveals how time-integration of stochastic signals may be realized biochemically, through a simple chemical reaction scheme.
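    A toy simulation of the central claim, that thresholding the time-integral of a noisy signal yields sharper timing than thresholding the signal itself; the Poisson birth process used as the signal is an assumption for the demo, not the paper's model:

```python
# Compare timing spread (CV of first-passage times) for two trigger rules:
# signal crosses a threshold vs. its time-integral crosses a threshold.
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(n_cells=500, rate=5.0, dt=0.01,
                        sig_thresh=20.0, int_thresh=40.0):
    t_sig, t_int = [], []
    for _ in range(n_cells):
        x, integral, t = 0, 0.0, 0.0
        ts, ti = None, None
        while ts is None or ti is None:
            x += rng.poisson(rate * dt)      # stochastic birth events
            integral += x * dt               # time-integral of the signal
            t += dt
            if ts is None and x >= sig_thresh:
                ts = t
            if ti is None and integral >= int_thresh:
                ti = t
        t_sig.append(ts); t_int.append(ti)
    return np.array(t_sig), np.array(t_int)

t_sig, t_int = first_passage_times()
print("CV signal-threshold:  ", t_sig.std() / t_sig.mean())
print("CV integral-threshold:", t_int.std() / t_int.mean())  # smaller
```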
  • PDF
    Despite the rapid progress of techniques for image classification, video annotation remains a challenging task. Automated video annotation would be a breakthrough technology, enabling users to search within videos. Recently, Google introduced the Cloud Video Intelligence API for video analysis. According to the website, the system "separates signal from noise, by retrieving relevant information at the video, shot or per frame." A demonstration website has also been launched, which allows anyone to select a video for annotation. The API then detects the video labels (objects within the video) as well as shot labels (descriptions of the video events over time). In this paper, we examine the usability of Google's Cloud Video Intelligence API in adversarial environments. In particular, we investigate whether an adversary can manipulate a video so that the API returns only the adversary-desired labels. To do so, we select an image that differs from the video content and insert it, periodically and at a very low rate, into the video. We found that if we insert one image every two seconds, the API is deceived into annotating the entire video as if it contained only the inserted image. Note that the modification to the video is hardly noticeable: for a typical frame rate of 25 fps, we insert only one image per 50 video frames. We also found that, by inserting one image per second, all the shot labels returned by the API relate to the inserted image. We perform experiments on the sample videos provided by the API demonstration website and show that our attack succeeds with different videos and images.
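    A sketch of the insertion attack with OpenCV, overwriting one frame per chosen period; file names are placeholders and this is an illustration, not the authors' exact procedure:

```python
# Overwrite one frame per `period_s` seconds of video with the adversary's image.
import cv2

def insert_image(video_in, image_path, video_out, period_s=1.0):
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(video_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    adv = cv2.resize(cv2.imread(image_path), (w, h))
    period = max(1, int(round(fps * period_s)))  # e.g. every 25th frame at 25 fps
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(adv if i % period == 0 else frame)
        i += 1
    cap.release()
    out.release()
```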
  • PDF
    In this paper, we follow up on our previous research in the area of Computerized Adaptive Testing (CAT). We present three different methods for CAT. One of them, item response theory, is a well-established method, while the other two, Bayesian and neural networks, are new in the area of educational testing. In the first part of this paper, we present the concept of CAT and its advantages and disadvantages. We collected data from paper tests performed with grammar school students; a summary of the data used in our experiments is given in the second part. Next, we present three different model types for CAT, based on item response theory, Bayesian networks, and neural networks. The general theory associated with each type is briefly explained, and the use of these models for CAT is analyzed. Future research is outlined in the concluding part of the paper; it points to many interesting research paths that are important not only for CAT but also for other areas of artificial intelligence.
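    For the item response theory model, the standard CAT step can be sketched directly: under the two-parameter logistic (2PL) model, select the item with maximal Fisher information at the current ability estimate. The item bank below is made up:

```python
# 2PL item response model and maximum-information item selection.
import numpy as np

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response curve

def information(theta, a, b):
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)                   # Fisher information, 2PL

# item bank: discrimination a and difficulty b per item (hypothetical values)
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-1.0, 0.0, 0.5, 1.5])
theta_hat = 0.3                                     # current ability estimate
next_item = int(np.argmax(information(theta_hat, a, b)))
print("administer item", next_item)
```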
  • PDF
    This paper investigates a novel task of generating texture images from perceptual descriptions. Previous work on texture generation focused on either synthesis from examples or generation from procedural models; generating textures from perceptual attributes has not been well studied yet. Meanwhile, perceptual attributes such as directionality, regularity and roughness are important factors for human observers when describing a texture. In this paper, we propose a joint deep network model that combines adversarial training and perceptual feature regression for texture generation, requiring only random noise and user-defined perceptual attributes as input. In this model, a pre-trained convolutional neural network is integrated with the adversarial framework, which drives the generated textures to possess the given perceptual attributes. An important aspect of the proposed model is that if we change one of the input perceptual features, the corresponding appearance of the generated textures also changes. We design several experiments to validate the effectiveness of the proposed method. The results show that it can produce high-quality texture images with the desired perceptual properties.
  • PDF
    Deep Neural Networks are becoming the de facto standard models for image understanding and, more generally, for computer vision tasks. As they involve highly parallelizable computations, CNNs are well suited to current fine-grained programmable logic devices. Thus, multiple CNN accelerators have been successfully implemented on FPGAs. Unfortunately, FPGA resources such as logic elements or DSP units remain limited. This work presents a holistic method relying on approximate computing and design space exploration to optimize the DSP block utilization of a CNN implementation on an FPGA. The method was tested by implementing a reconfigurable OCR convolutional neural network on an Altera Stratix V device and varying both the data representation and the CNN topology in order to find the best combination in terms of DSP block utilization and classification accuracy. This exploration generated dataflow architectures for 76 CNN topologies with 5 different fixed-point representations. The most efficient implementation performs 883 classifications/sec at 256 x 256 resolution using 8% of the available DSP blocks.
  • PDF
    Leptonic rare decays of $B^0_{s,d}$ mesons offer a powerful tool to search for physics beyond the Standard Model. The $B^0_{s}\to\mu^+\mu^-$ decay has been observed at the Large Hadron Collider and the first measurement of the effective lifetime of this channel was presented, in accordance with the Standard Model. On the other hand, $B^0_{s}\to\tau^+\tau^-$ and $B^0_{s}\to e^+e^-$ have received considerably less attention: while LHCb has recently reported a first upper limit of $6.8\times10^{-3}$ (95% C.L.) for the $B^0_s\to\tau^+\tau^-$ branching ratio, the upper bound $2.8\times 10^{-7}$ (90% C.L.) for the branching ratio of $B^0_s\to e^+e^-$ was reported by CDF back in 2009. We discuss the current status of the interpretation of the measurement of $B^0_{s}\to\mu^+\mu^-$, and explore the space for New-Physics effects in the other $B^0_{s,d}\to\ell^+\ell^-$ decays in a scenario with heavy new particles and the feature of Minimal Flavour Violation, including in particular the corresponding version of the MSSM. While the New-Physics effects are strongly suppressed by the ratio $m_\mu/m_\tau$ of the lepton masses in $B^0_s\to\tau^+\tau^-$, they are hugely enhanced by $m_e/m_\mu$ in $B^0_s\to e^+e^-$, and may result in a $B^0_s\to e^+e^-$ branching ratio about twice as large as the one of $B^0_{s}\to\mu^+\mu^-$, which is about a factor of 50 below the CDF bound; a similar feature arises in $B^0_{d}\to e^+e^-$. Consequently, it would be most interesting to search for the $B^0_{s,d}\to e^+e^-$ channels at the LHC and Belle II, which may result in an unambiguous signal for physics beyond the Standard Model.
  • PDF
    A general principle of statistical mechanics is that low energy excitations of a thermal state change expectation values of observables by a very small amount. However, some observables in the vicinity of the horizon of a large black hole in anti-de Sitter space naively seem to violate this bound. This potential violation is related to the question of whether the black hole interior can be described in AdS/CFT. Here we point out that if the possible excitations are limited to those produced by a simple local source on the boundary, and the possible observables are limited to products of field operators in a single causal patch in the bulk, then these violations disappear.
  • PDF
    We compute the spectrum of the tensor and scalar bound states along the baryonic branch of the Klebanov-Strassler (KS) field theory. We exploit the dual gravity description in terms of the 1-parameter family of regular background solutions of type-IIB supergravity that interpolate between the KS background and the Maldacena-Nunez one (CVMN). We make use of the five-dimensional consistent truncation on T^1,1 corresponding to the Papadopoulos-Tseytlin ansatz, and adopt a gauge invariant formalism in the treatment of the fluctuations, that we interpret in terms of bound states of the field theory. The tensor spectrum interpolates between the discrete spectrum of the KS background and the continuum spectrum of the CVMN background, in particular showing the emergence of a finite energy range containing a dense set of states, as expected from dimensional deconstruction. The scalar spectrum shows analogous features, and in addition it contains one state that becomes parametrically light far from the origin along the baryonic branch.
  • PDF
    We present an extraction of unpolarized partonic transverse momentum distributions (TMDs) from a simultaneous fit of available data measured in semi-inclusive deep-inelastic scattering, Drell-Yan and Z-boson production. To connect data at different scales, we use TMD evolution at next-to-leading logarithmic accuracy. The analysis is restricted to the low-transverse-momentum region, with no matching to fixed-order calculations at high transverse momentum. We introduce specific choices to deal with TMD evolution at low scales, of the order of 1 GeV$^2$. This could be considered as a first attempt at a global fit of TMDs.
  • PDF
    We report a comparison of the $1/T_1$ spin-lattice relaxation rates (SLR) for $^9$Li and $^8$Li in Pt and SrTiO$_{3}$, in order to differentiate between magnetic and electric quadrupolar relaxation mechanisms. In Pt, the ratio of the $1/T_{1}$ spin relaxation rates $R_{Pt}$ was found to be 6.82(29), which is close to but less than the theoretical limit of $\sim7.68$ for pure magnetic relaxation. In SrTiO$_{3}$, this ratio was found to be 2.7(3), which is close to but larger than the theoretical limit of $\sim2.14$ expected for pure electric quadrupolar relaxation. These results bring new insight into the nature of the fluctuations in the local environment of implanted $^8$Li observed by $\beta$-NMR.
  • PDF
    We have investigated the spin interaction and the gravitational radiation thermally allowed in a head-on collision of two rotating Hayward black holes. The Hayward black hole is a regular black hole in the modified Einstein equation, so it is an appropriate model for describing how much the quantum effect near the horizon affects the interaction and the radiation. If one of the black holes is assumed to be much smaller than the other, the potential of the spin interaction can be obtained analytically and depends on the alignment of the angular momenta of the black holes. For the collision of massive black holes, the gravitational radiation is obtained numerically as an upper bound using the laws of thermodynamics. The effect of the Hayward black hole tends to increase the radiation energy, but we can constrain the effect by comparison with the gravitational-wave events GW150914 and GW151226.
  • PDF
    A semigroup prime of a commutative ring $R$ is a prime ideal of the semigroup $(R,\cdot)$. One of the purposes of this paper is to study, from a topological point of view, the space $\mathcal{S}(R)$ of semigroup primes of $R$. We show that, under a natural topology introduced by B. Olberding in 2010, $\mathcal{S}(R)$ is a spectral space (after Hochster), a spectral extension of $\mathrm{Spec}(R)$, and that the assignment $R\mapsto\mathcal{S}(R)$ induces a contravariant functor. We then relate -- in the case where $R$ is an integral domain -- the topology on $\mathcal{S}(R)$ with the Zariski topology on the set of overrings of $R$. Furthermore, we investigate the relationship between $\mathcal{S}(R)$ and the space $\boldsymbol{\mathcal{X}}(R)$ consisting of all nonempty inverse-closed subspaces of $\mathrm{Spec}(R)$, which has been introduced and studied in C.A. Finocchiaro, M. Fontana and D. Spirito, "The space of inverse-closed subsets of a spectral space is spectral" (submitted). In this context, we show that $\mathcal{S}(R)$ is a spectral retract of $\boldsymbol{\mathcal{X}}(R)$ and we characterize when $\mathcal{S}(R)$ is canonically homeomorphic to $\boldsymbol{\mathcal{X}}(R)$, both in general and when $\mathrm{Spec}(R)$ is a Noetherian space. In particular, we obtain that, when $R$ is a Bézout domain, $\mathcal{S}(R)$ is canonically homeomorphic both to $\boldsymbol{\mathcal{X}}(R)$ and to the space $\mathrm{Overr}(R)$ of the overrings of $R$ (endowed with the Zariski topology). Finally, we compare the space $\boldsymbol{\mathcal{X}}(R)$ with the space $\mathcal{S}(R(T))$ of semigroup primes of the Nagata ring $R(T)$, providing a canonical spectral embedding $\boldsymbol{\mathcal{X}}(R)\hookrightarrow\mathcal{S}(R(T))$ which makes $\boldsymbol{\mathcal{X}}(R)$ a spectral retract of $\mathcal{S}(R(T))$.
  • PDF
    The lack of evidence for weak-scale supersymmetry from LHC Run-I and Run-II results, along with null results from direct and indirect dark matter detection experiments, has caused a paradigm shift in the expected phenomenology of SUSY models. The SUSY dark matter candidate, the neutralino $\widetilde{Z}_1$, can only satisfy the measured dark matter abundance via resonance and co-annihilations in the cMSSM model. Moreover, the viable parameter space of the cMSSM is highly fine-tuned. In models that can still satisfy the naturalness condition, such as NUHM2, the neutralino is underproduced due to its higgsino nature. A neutralino combined with an axion that solves the strong CP problem can explain the observed dark matter in the universe. Here I briefly discuss the implications of the mixed axion-neutralino scenario.
  • PDF
    We study the generalization of quasipositive links from the three-sphere to arbitrary closed, orientable three-manifolds. As in the classical case, we see that this generalization of quasipositivity is intimately connected to contact and complex geometry. The main result is an essentially three-dimensional proof that every link in the boundary of a Stein domain which bounds a complex curve in the interior is quasipositive, generalizing a theorem of Boileau-Orevkov.
  • PDF
    Star-forming galaxies at high redshifts are ideal targets for probing the hypothetical variation of the fine-structure constant $\alpha$ over cosmological time scales. We propose a modification of the alkali-doublet method which allows us to search for variation in $\alpha$ by combining far-infrared and submillimeter spectroscopic observations. This variation manifests as velocity offsets between the observed positions of the fine-structure and gross-structure transitions when compared to laboratory wavelengths. Here we describe our method, whose sensitivity limit to fractional changes in $\alpha$ is about $5\times10^{-7}$. We also demonstrate that current spectral observations of hydrogen and [C II] 158 micron lines provide an upper limit of $|\Delta\alpha/\alpha| < 6\times10^{-5}$ at redshifts z = 3.1 and z = 4.7.
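    A sketch of the relation the method rests on, under the standard assumption that fine-structure transition frequencies scale as $\alpha^2$ relative to gross-structure ones:

```latex
% Velocity offset between fine-structure and gross-structure lines
% induced by a fractional change in alpha (standard alpha^2 scaling):
\[
  \frac{\Delta V}{c} \;\simeq\; 2\,\frac{\Delta\alpha}{\alpha},
\]
% so the quoted sensitivity of 5e-7 in Delta alpha / alpha corresponds
% to velocity offsets of order 0.3 km/s between the [C II] and H lines.
```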
  • PDF
    We extend Solovay's theorem about definable subsets of the Baire space to the generalized Baire space ${}^\lambda\lambda$, where $\lambda$ is an uncountable cardinal with $\lambda^{<\lambda}=\lambda$. In the first main theorem, we show that the perfect set property for all subsets of ${}^{\lambda}\lambda$ that are definable from elements of ${}^\lambda\mathrm{Ord}$ is consistent relative to the existence of an inaccessible cardinal above $\lambda$. In the second main theorem, we introduce a Banach-Mazur type game of length $\lambda$ and show that the determinacy of this game, for all subsets of ${}^\lambda\lambda$ that are definable from elements of ${}^\lambda\mathrm{Ord}$ as winning conditions, is consistent relative to the existence of an inaccessible cardinal above $\lambda$. We further obtain some related results about definable functions on ${}^\lambda\lambda$ and consequences of resurrection axioms for definable subsets of ${}^\lambda\lambda$.
  • PDF
    We set up a tree-level six-point scattering process in which two strings are separated longitudinally such that they could only interact directly via a non-local spreading effect such as that predicted by light cone gauge calculations and the Gross-Mende saddle point. One string, the `detector', is produced at a finite time with energy $E$ by an auxiliary $2\to 2$ sub-process, with kinematics such that it has sufficient resolution to detect the longitudinal spreading of an additional incoming string, the `source'. We test this hypothesis in a gauge-invariant S-matrix calculation convolved with an appropriate wavepacket peaked at a separation $X$ between the central trajectories of the source and produced detector. The amplitude exhibits support for scattering at the predicted longitudinal separation $X\sim\alpha' E$, in sharp contrast to the analogous quantum field theory amplitude (whose support manifestly traces out a tail of the position-space wavefunction). The effect arises in a regime in which the string amplitude is not obtained as a convergent sum of such QFT amplitudes, and has larger amplitude than similar QFT models (with the same auxiliary four-point amplitude). In a linear dilaton background, the amplitude depends on the string coupling as expected if the scattering is not simply occurring on the wavepacket tail in string theory. This manifests the scale of longitudinal spreading in a gauge-invariant S-matrix amplitude, in a calculable process with significant amplitude. It simulates a key feature of the dynamics of time-translated horizon infallers.
  • PDF
    The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and generally provides fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a. detection). The main results discussed are the phase transition for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters, and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest to achieve these limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, and classical and nonbacktracking spectral methods. A few open problems are also discussed.
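    A small numpy sketch of the two-community symmetric SBM and the Kesten-Stigum condition for weak recovery discussed above; parameters are illustrative:

```python
# Sample a symmetric two-community SBM with expected degrees a (inside) and
# b (across), and evaluate the Kesten-Stigum condition
# SNR = (a - b)^2 / (2 (a + b)) > 1 under which weak recovery is possible.
import numpy as np

def sample_sbm(n, a, b, rng=np.random.default_rng(0)):
    labels = rng.integers(0, 2, n)
    same = labels[:, None] == labels[None, :]
    P = np.where(same, a / n, b / n)          # edge probabilities
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1); A = A + A.T            # simple undirected graph
    return A, labels

a, b = 10.0, 2.0
snr = (a - b) ** 2 / (2 * (a + b))
print("KS threshold exceeded:", snr > 1)      # True here: 64/24 ~ 2.67
A, labels = sample_sbm(1000, a, b)
```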
  • PDF
    The rigging technique introduced in \cite{bi0} is a convenient way to address the study of null hypersurfaces. It additionally offers the benefit of inducing a Riemannian structure on the null hypersurface, which can be used to study its geometric and topological properties. In this paper we develop this technique, showing new properties and applications. We first discuss the very existence of rigging fields under prescribed geometric and topological constraints. We then consider the completeness of the induced rigged Riemannian structure. This is potentially important because it allows one to use most of the usual Riemannian techniques.
  • PDF
    If $(X,d)$ is a Polish metric space of dimension $0$, then by Wadge's lemma, no more than two Borel subsets of $X$ can be incomparable with respect to continuous reducibility. In contrast, our main result shows that for any metric space $(X,d)$ of positive dimension, there are uncountably many Borel subsets of $(X,d)$ that are pairwise incomparable with respect to continuous reducibility. The reducibility given by the collection of continuous functions on a topological space $(X,\tau)$ is called the Wadge quasi-order for $(X,\tau)$. As an application of the main result, we further show that this quasi-order, restricted to the Borel subsets of a Polish space $(X,\tau)$, is a well-quasiorder (wqo) if and only if $(X,\tau)$ has dimension $0$. Moreover, we give further examples of applications of the technique, which is based on a construction of graph colorings.
  • PDF
    We provide some comments on the article `High-dimensional simultaneous inference with the bootstrap' by Ruben Dezeure, Peter Buhlmann and Cun-Hui Zhang.
  • PDF
    We revisit the problem of local moment formation in graphene due to chemisorption of individual atomic hydrogen or other analogous sp$^3$ covalent functionalizations. We describe graphene with the single-orbital Hubbard model, so that H chemisorption is equivalent to a vacancy in the honeycomb lattice. In order to circumvent artefacts related to periodic unit cells, we use either huge simulation cells of up to $8\times10^5$ sites or an embedding scheme that allows the modelling of a single vacancy in an otherwise pristine infinite honeycomb lattice. We find three results that stress the anomalous nature of the magnetic moment ($m$) in this system. First, in the non-interacting ($U=0$), zero-temperature ($T=0$) case, $m(B)$ is a continuous smooth curve with divergent susceptibility, different from the step-wise constant function found for a single unpaired spin in a gapped system. Second, for $U=0$ and $T>0$, the linear susceptibility follows a power law $\propto{T}^{-\alpha}$ with an exponent $\alpha=0.77$, different from the conventional Curie law. Third, for $U>0$, in the mean-field approximation, the integrated moment is smaller than $m=1\mu_B$, in contrast with results using periodic unit cells. These three results highlight that the magnetic response of the local moment induced by sp$^3$ functionalizations in graphene differs both from that of local moments in gapped systems, for which the magnetic moment is quantized and follows a Curie law, and from Pauli paramagnetism in conductors, for which a linear susceptibility can be defined at $T=0$.
  • PDF
    Aims. The purpose of this investigation is to characterize the temporal evolution of an emerging flux region, the associated photospheric and chromospheric flow fields, and the properties of the accompanying arch filament system. Methods. This study is based on imaging spectroscopy with the Göttingen Fabry-Pérot Interferometer at the Vacuum Tower Telescope, on 2008 August 7. Cloud model (CM) inversions of line scans in the strong chromospheric absorption H$\alpha$ line yielded CM parameters, which describe the cool plasma contained in the arch filament system. Results. The observations cover the decay and convergence of two micro-pores with diameters of less than one arcsecond and provide decay rates for intensity and area. The photospheric horizontal flow speed is suppressed near the two micro-pores, indicating that the magnetic field is sufficiently strong to affect the convective energy transport. The micro-pores are accompanied by an arch filament system, where small-scale loops connect two regions with H$\alpha$ line-core brightenings containing an emerging flux region with opposite polarities. The chromospheric velocity of the cloud material is predominantly directed downwards near the footpoints of the loops, with velocities of up to 12 km/s, whereas loop tops show upward motions of about 3 km/s. Conclusions. Micro-pores are the smallest magnetic field concentrations leaving a photometric signature in the photosphere. In the observed case, they are accompanied by a miniature arch filament system indicative of newly emerging flux in the form of $\Omega$-loops. Flux emergence and decay take place on a time scale of about two days, whereas the photometric decay of the micro-pores is much more rapid (a few hours), which is consistent with the incipient submergence of $\Omega$-loops. The results are representative of the smallest emerging flux regions still recognizable as such.
  • PDF
    In this paper, we introduce a design principle to develop novel soft modular robots based on tensegrity structures and inspired by the cytoskeleton of living cells. We describe a novel strategy to realize tensegrity structures using planar manufacturing techniques, such as 3D printing. We use this strategy to develop icosahedron tensegrity structures with programmable variable stiffness that can deform in a three-dimensional space. We also describe a tendon-driven contraction mechanism to actively control the deformation of the tensegrity modules. Finally, we validate the approach in a modular locomotory worm as a proof of concept.
  • PDF
    We analyze, using computer simulations, the evolution of opinions in a population of individuals who constantly interact with a common source of user-generated content (i.e., the internet) and are also subject to propaganda. The model is based on the bounded confidence approach. In the absence of propaganda, computer simulations show that the online population as a whole is either fragmented, polarized or in perfect harmony on a certain issue or ideology, depending on the uncertainty of individuals in accepting opinions not close to theirs. On applying the model to simulate radicalization, we observe that a proportion of the online population subject to extremist propaganda radicalizes, depending on their pre-conceived opinions and opinion uncertainty. We also observe that an optimal counter-propaganda that prevents radicalization is not necessarily centrist.
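    A minimal sketch of a bounded-confidence (Deffuant-style) update with an external propaganda source; all parameter values are illustrative assumptions, not the paper's settings:

```python
# Bounded-confidence opinion dynamics with a fixed propaganda source at x_p.
import numpy as np

rng = np.random.default_rng(1)
n, steps = 500, 20000
eps, mu = 0.2, 0.5            # confidence bound and convergence rate
x_p, p_rate = 1.0, 0.05       # propaganda opinion and exposure probability
x = rng.uniform(-1, 1, n)     # initial opinions

for _ in range(steps):
    i, j = rng.integers(0, n, 2)
    if abs(x[i] - x[j]) < eps:                 # pairwise Deffuant update
        x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    k = rng.integers(0, n)
    if rng.random() < p_rate and abs(x[k] - x_p) < eps:
        x[k] += mu * (x_p - x[k])              # pull toward the propaganda

print("fraction radicalized:", np.mean(np.abs(x - x_p) < 0.1))
```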
  • PDF
    We study the existence of universal measuring comodules Q(M,N) for a pair of modules M,N in a braided monoidal closed category, and the associated enrichment of the global category of modules over the monoidal global category of comodules. In the process, we use results for general fibred adjunctions encompassing the fibred structure of modules over monoids and the opfibred structure of comodules over comonoids. We also explore applications to the theory of Hopf modules.
  • PDF
    Deciphering potential associations between network structures and the corresponding nodal attributes of interest is a core problem in network science. As network topology is structured and often high-dimensional, many nonparametric statistical tests are not directly applicable, and model-based approaches dominate network inference. In this paper, we propose a model-free approach to testing independence between network topology and nodal attributes, via diffusion maps and distance-based correlations. We prove that diffusion maps based on the adjacency matrix of an infinitely exchangeable graph provide a set of conditionally independent coordinates for each node in the graph, which, combined with distance-based correlations, yields a consistent test statistic for network dependence testing. The new approach excels at capturing nonlinear and high-dimensional network dependencies and is robust against parameter choices and noise, as demonstrated by superior testing power across various popular network models. An application to brain data illustrates its advantage and utility.
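    The distance-correlation ingredient is compact enough to sketch in numpy; the diffusion-map coordinates that the paper feeds into it are elided here:

```python
# Sample distance correlation between two multivariate samples (V-statistic).
import numpy as np

def _centered_dist(Z):
    D = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
    return D - D.mean(0) - D.mean(1)[:, None] + D.mean()

def distance_correlation(X, Y):
    A, B = _centered_dist(X), _centered_dist(Y)
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

# X: diffusion-map coordinates per node; Y: nodal attributes per node
X = np.random.randn(100, 3)
Y = X[:, :1] ** 2 + 0.1 * np.random.randn(100, 1)   # nonlinear dependence
print(distance_correlation(X, Y))
```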
  • PDF
    Concrete two-set (module-like and algebra-like) algebraic structures are investigated from the viewpoint that the initial arities of all operations are arbitrary. The relations between operations appearing from the structure definitions lead to restrictions, which determine their arity shape and lead to the partial arity freedom principle. In this manner, polyadic vector spaces and algebras, dual vector spaces, direct sums, tensor products and inner pairing spaces are reconsidered. As one application, elements of polyadic operator theory are outlined: multistars and polyadic analogs of adjoints, operator norms, isometries and projections, as well as polyadic $C^{*}$-algebras, Toeplitz algebras and Cuntz algebras represented by polyadic operators are introduced. Another application is connected with number theory, and it is shown that the congruence classes are polyadic rings of a special kind. Polyadic numbers are introduced, see Definition 6.16. Diophantine equations over these polyadic rings are then considered. Polyadic analogs of the Lander-Parkin-Selfridge conjecture and Fermat's last theorem are formulated. For the derived polyadic ring operations neither of these holds, and counterexamples are given. A procedure for obtaining new solutions to the equal sums of like powers equation over polyadic rings by applying Frolov's theorem for the Tarry-Escott problem is presented.
  • PDF
    We analyze the short-cadence K2 light curve of the TRAPPIST-1 system. Fourier analysis of the data suggests $P_\mathrm{rot}=3.295\pm0.003$ days. The light curve shows several flares, of which we analyzed 42 events; these have integrated flare energies of $1.26\times10^{30}-1.24\times10^{33}$ erg. Approximately 12% of the flares were complex, multi-peaked eruptions. The flaring and the possible rotational modulation show no obvious correlation. The flaring activity of TRAPPIST-1 probably continuously alters the atmospheres of the orbiting exoplanets, making them less favorable for hosting life.
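    A sketch of recovering a rotation period of this kind with astropy's Lomb-Scargle periodogram, using a synthetic signal in place of the K2 photometry:

```python
# Period search via Lomb-Scargle; the sinusoid stands in for real photometry.
import numpy as np
from astropy.timeseries import LombScargle

t = np.sort(np.random.default_rng(2).uniform(0, 80, 3000))       # days
flux = (1.0 + 0.01 * np.sin(2 * np.pi * t / 3.295)
            + 0.002 * np.random.default_rng(3).standard_normal(t.size))
frequency, power = LombScargle(t, flux).autopower(
    minimum_frequency=1 / 30, maximum_frequency=1 / 0.5)
print("P_rot ~", 1 / frequency[np.argmax(power)], "days")
```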
  • PDF
    We investigate the magnetic and transport properties of diluted magnetic semiconductors using a spin-fermion Monte Carlo method on a 3D lattice in the intermediate coupling regime. The ferromagnetic transition temperature $T_c$ shows optimization behavior, first increasing and then decreasing with the absolute carrier density $p_{abs}$ for a given magnetic impurity concentration $x$, as seen in experiments. Our calculations also show an insulator-metal-insulator transition across the optimum $p_{abs}$ where $T_c$ is maximal. Remarkably, the optimum $p_{abs}$ values lie in a narrow range around 0.11 for all $x$ values, and the ferromagnetic $T_c$ increases with $x$. We explain our results using the polaron percolation mechanism and outline a new route to enhancing the ferromagnetic transition temperature in experiments.
  • PDF
    A microscopic configuration-interaction (CI) methodology is introduced to enable bottom-up Schroedinger-equation emulation of unconventional superconductivity in ultracold optical traps. We illustrate the method by exploring the properties of Lithium-6 atoms in a single square plaquette in the hole-pairing regime, and by analyzing the entanglement (symmetry-preserving) and disentanglement physics (via symmetry-breaking, associated with the separation of charge and spin density waves) of two coupled plaquettes in the same regime. The single-occupancy RVB states contribute only partially to the exact many-body solutions, and the CI results map onto a Hubbard Hamiltonian, but not onto the double-occupancy-excluding t-J one. For the double-plaquette case, effects brought about by breaking the symmetry between two weakly-interacting plaquettes, either by distorting, or by tilting and detuning, one of the plaquettes with respect to the other, as well as spectral changes caused by increased coupling between the two plaquettes, are explored.
  • PDF
    We develop differentially private hypothesis testing methods for the small sample regime. Given a sample $\cal D$ from a categorical distribution $p$ over some domain $\Sigma$, an explicitly described distribution $q$ over $\Sigma$, some privacy parameter $\varepsilon$, accuracy parameter $\alpha$, and requirements $\beta_{\rm I}$ and $\beta_{\rm II}$ for the type I and type II errors of our test, the goal is to distinguish between $p=q$ and $d_{\rm{TV}}(p,q) \geq \alpha$. We provide theoretical bounds for the sample size $|{\cal D}|$ so that our method both satisfies $(\varepsilon,0)$-differential privacy, and guarantees $\beta_{\rm I}$ and $\beta_{\rm II}$ type I and type II errors. We show that differential privacy may come for free in some regimes of parameters, and we always beat the sample complexity resulting from running the $\chi^2$-test with noisy counts, or standard approaches such as repetition for endowing non-private $\chi^2$-style statistics with differential privacy guarantees. We experimentally compare the sample complexity of our method to that of recently proposed methods for private hypothesis testing.
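    The noisy-count baseline that the paper improves upon can be sketched directly: privatize the histogram with Laplace noise and compute a chi-squared-style statistic on the noisy counts; the noise scale 2/epsilon below assumes sensitivity 2 under sample replacement, an assumption for this sketch:

```python
# Baseline: (epsilon, 0)-DP chi^2-style statistic on Laplace-noised counts.
import numpy as np

def private_chi2_stat(sample, q, eps, rng=np.random.default_rng(0)):
    k = len(q)
    counts = np.bincount(sample, minlength=k).astype(float)
    noisy = counts + rng.laplace(scale=2.0 / eps, size=k)  # Laplace mechanism
    expected = len(sample) * np.asarray(q)
    return np.sum((noisy - expected) ** 2 / np.maximum(expected, 1e-9))

q = np.ones(10) / 10                                   # reference distribution
sample = np.random.default_rng(1).integers(0, 10, 500)  # observed sample
print(private_chi2_stat(sample, q, eps=1.0))
```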
  • PDF
    The Swift test was originally proposed as a formability test to reproduce the conditions observed in deep drawing operations. It consists of forming a cylindrical cup from a circular blank using a flat-bottomed cylindrical punch, and has been extensively studied using both analytical and numerical methods. The test can also be combined with the Demeri test, which consists of cutting a ring from the wall of a cylindrical cup and then opening it to measure springback. This combination allows their use as a benchmark test to improve knowledge of numerical simulation models, through comparison between experimental and numerical results. The focus of this study is the experimental and numerical analysis of the Swift cup test, followed by the Demeri test, performed with an AA5754-O alloy at room temperature. In this context, a detailed analysis of the punch force evolution, the thickness evolution along the cup wall, the earing profile, the strain paths and their evolution, and the ring opening is performed. The numerical simulation is performed using the finite element code ABAQUS, with solid and solid-shell elements, in order to compare the computational efficiency of these types of elements. The results show that the solid-shell element is more cost-effective than the solid element, giving globally accurate predictions except in the thinning zones. Both the von Mises and Hill48 yield criteria predict the strain distributions in the final cup quite accurately. However, improved knowledge of the stress states is still required, because the Hill48 criterion had difficulty correctly predicting the springback, whatever the type of finite element adopted.
  • PDF
    We propose a framework for Google Maps-aided UAV navigation in GPS-denied environments. Geo-referenced navigation provides drift-free localization and does not require loop closures. The UAV position is initialized via correlation, which is simple and efficient. We then use optical flow to predict its position in subsequent frames. During pose tracking, we obtain inter-frame translation either by motion field or by homography decomposition, and we use HOG features for registration on the Google Map. We employ a particle filter to conduct a coarse-to-fine search to localize the UAV. Offline tests using aerial images collected by our quadrotor platform show promising results: our approach eliminates the drift of dead-reckoning, and the small localization error indicates the superiority of our approach as a supplement to GPS.
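    The correlation-based initialization step could look like simple template matching in OpenCV; file names are placeholders and this is an illustration, not the authors' pipeline:

```python
# Locate the current aerial view inside a geo-referenced map tile.
import cv2

gmap = cv2.imread("map_tile.png", cv2.IMREAD_GRAYSCALE)   # placeholder files
view = cv2.imread("uav_view.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(gmap, view, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)   # best-match location
x, y = top_left
print("initial UAV pixel position on map:", x, y, "score:", score)
```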
  • PDF
    Collapsin response mediator protein 2 (CRMP2; gene: DPYSL2) is crucial for neuronal development. The homotetrameric CRMP2 complex is regulated via two mechanisms: first by phosphorylation, and second by reduction and oxidation of the Cys504 residues of two adjacent subunits. Here, we analyzed the effects of this redox switch on the protein in vitro, combined with force-field molecular dynamics (MD). Earlier X-ray data contain the structure of the rigid body of the molecule but lack the flexible C-terminus with the important sites for phosphorylation and redox regulation. An in silico model for this part was established by replica exchange simulations and homology modelling, consistent with results from CD spectroscopy of the recombinant protein. Thermofluor data indicated that the protein aggregates at divalent ion concentrations below 200 mM. In simulations, the protein surface was covered under these conditions by large amounts of ions, which most likely prevent aggregation. A tryptophan residue (Trp295) in close proximity to the forming disulfide allowed measurement, by fluorescence quenching, of the structural relaxation of the rigid body upon reduction. We were also able to determine the second-order rate constant of CRMP2 oxidation by H2O2. The simulated solvent-accessible surface of the hydroxyl group of Ser518 increased significantly upon reduction of the disulfide bond. Our results give a first detailed insight into the profound structural changes of tetrameric CRMP2 due to oxidation and indicate a tightly connected regulation by phosphorylation and redox modification.

Recent comments

Steve Flammia Mar 30 2017 20:12 UTC

Yes, I did indeed mean that the results of the previous derivations are correct and that predictions from experiments lie within the stated error bounds. To me, it is a different issue if someone derives something with a theoretical guarantee that might have sufficient conditions that are too strong

...(continued)
Robin Blume-Kohout Mar 30 2017 16:55 UTC

I agree with much of your comment. But, the assertion you're disagreeing with isn't really mine. I was trying to summarize the content of the present paper (and 1702.01853, hereafter referred to as [PRYSB]). I'll quote a few passages from the present paper to support my interpretation:

1. "[T

...(continued)
Steve Flammia Mar 30 2017 15:41 UTC

I disagree with the assertion (1) that the previous theory didn't give "the right answers." The previous theory was sound; no one is claiming that there are any mistakes in any of the proofs. However, there were nonetheless some issues.

The first issue is that the previous analysis of gate-depe

...(continued)
Robin Blume-Kohout Mar 30 2017 12:07 UTC

That's a hard question to answer. I suspect that on any questions that aren't precisely stated (and technical), there's going to be some disagreement between the authors of the two papers. After one read-through, my tentative view is that each of the two papers addresses three topics which are pre

...(continued)
LogiQ Mar 30 2017 03:23 UTC

So what is the deal?

Does this negate all the problems with https://scirate.com/arxiv/1702.01853 ?

Laura Mančinska Mar 28 2017 13:09 UTC

Great result!

For those familiar with I_3322, William here gives an example of a nonlocal game exhibiting a behaviour that many of us suspected (but couldn't prove) to be possessed by I_3322.

gae spedalieri Mar 13 2017 14:13 UTC

1) Sorry but this is false.

1a) That analysis is specifically for reducing a QECC protocol to an entanglement distillation protocol over a certain class of discrete-variable channels. Exactly as in BDSW96. The task of the protocol is changed in the reduction.

1b) The simulation is not via a general LOCC b

...(continued)
Siddhartha Das Mar 13 2017 13:22 UTC

We feel that we have cited and credited previous works appropriately in our paper. To clarify:

1) The LOCC simulation of a channel and the corresponding adaptive reduction can be found worked out in full generality in the 2012 Master's thesis of Muller-Hermes. We have cited the original paper BD

...(continued)
gae spedalieri Mar 13 2017 08:56 UTC

This is one of those papers where the contribution of previous literature is omitted and not fairly represented.

1- the LOCC simulation of quantum channels (not necessarily teleportation based) and the corresponding general reduction of adaptive protocols was developed in PLOB15 (https://arxiv.org/

...(continued)
Noon van der Silk Mar 08 2017 04:45 UTC

I feel that while the proliferation of GUNs is unquestionably a good idea, there are many unsupervised networks out there that might use this technology in dangerous ways. Do you think Indifferential-Privacy networks are the answer? Also I fear that the extremist binary networks should be banned ent

...(continued)