# Top arXiv papers

• We generalise some well-known graph parameters to operator systems by considering their underlying quantum channels. In particular, we introduce the quantum complexity as the dimension of the smallest co-domain Hilbert space a quantum channel requires to realise a given operator system as its non-commutative confusability graph. We describe quantum complexity as a generalised minimum semidefinite rank and, in the case of a graph operator system, as a quantum intersection number. The quantum complexity and a closely related quantum version of orthogonal rank turn out to be upper bounds for the Shannon zero-error capacity of a quantum channel, and we construct examples for which these bounds beat the best previously known general upper bound for the capacity of quantum channels, given by the quantum Lovász theta number.
• The additivity problem asks if the use of entanglement can boost the information-carrying capacity of a given channel beyond what is achievable by coding with simple product states only. This has recently been shown not to be the case for phase-insensitive one-mode Gaussian channels, but remains unresolved in general. Here we consider two general classes of bosonic noise channels, which include phase-insensitive Gaussian channels as special cases: these are beamsplitters with general, potentially non-Gaussian environment states and classical noise channels with general probabilistic noise. We show that additivity violations, if existent, are rather minor for all these channels: the maximal gain in classical capacity is bounded by a constant independent of the input energy. Our proof shows that coding by simple classical modulation of coherent states is close to optimal.
• Robustness to outliers is a central issue in real-world machine learning applications. While replacing a model with a heavy-tailed one (e.g., from Gaussian to Student-t) is a standard approach to robustification, it can only be applied to simple models. In this paper, based on Zellner's optimization and variational formulation of Bayesian inference, we propose an outlier-robust pseudo-Bayesian variational method by replacing the Kullback-Leibler divergence used for data fitting with a robust divergence such as the beta- and gamma-divergences. An advantage of our approach is that complex models such as deep networks can be handled. We theoretically prove that, for deep networks with ReLU activation functions, the influence function in our proposed method is bounded, while it is unbounded in ordinary variational inference. This implies that our proposed method is robust to both input and output outliers, while the ordinary variational method is not. We experimentally demonstrate that our robust variational method outperforms ordinary variational inference in regression and classification with deep networks.
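The influence-function contrast above can be illustrated with a toy sketch (not the paper's implementation): for a Gaussian location model, the gradient of the negative log-likelihood (the KL-based fit) grows linearly in an outlier's distance, while the beta-divergence loss gradient carries a factor $p(x)^\beta$ that vanishes for far outliers.

```python
import math

def gauss_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def kl_influence(x, mu=0.0, sigma=1.0):
    # d/dmu of the negative log-likelihood: grows linearly in |x - mu| (unbounded)
    return (x - mu) / sigma ** 2

def beta_influence(x, beta=0.5, mu=0.0, sigma=1.0):
    # d/dmu of the beta-divergence fitting term -(1/beta) p(x)^beta:
    # the score is weighted by p(x)^beta, which vanishes for far outliers
    return gauss_pdf(x, mu, sigma) ** beta * (x - mu) / sigma ** 2

for x in (1.0, 5.0, 20.0):
    print(x, abs(kl_influence(x)), abs(beta_influence(x)))
```

The KL influence keeps growing with the outlier position, while the beta influence peaks near the bulk of the data and decays to zero, which is the boundedness property the paper proves for deep ReLU networks.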
• Light storage is the controlled and reversible mapping of light fields onto long-lived states of matter [1], forming the basis for quantum memories in optical quantum networks [2-6]. Prominent storage media are warm alkali gases due to their strong optical coupling and long-lived spin states [7-8]. In a dense gas, the random atomic collisions dominate the lifetime of the spin coherence, limiting the storage time to a few milliseconds [9-10]. Here we present and experimentally demonstrate a storage scheme that is insensitive to spin-exchange collisions, thus enabling long storage times at high atomic densities. This unique property is achieved by mapping the light field onto spin orientation, rather than onto higher spin moments. We report on a record storage lifetime for a warm system of 150 milliseconds (1/e) in cesium vapor. Furthermore, our scheme lays the foundations for hour-long quantum memories using rare-gas nuclear spins.
• Tensor decomposition methods are popular tools for learning latent variables given only lower-order moments of the data. However, the standard assumption is that we have sufficient data to estimate these moments to high accuracy. In this work, we consider the case in which certain dimensions of the data are not always observed---common in applied settings, where not all measurements may be taken for all observations---resulting in moment estimates of varying quality. We derive a weighted tensor decomposition approach that is computationally as efficient as the non-weighted approach, and demonstrate that it outperforms methods that do not appropriately leverage these less-observed dimensions.
• We prove the equivalence between CP-divisibility and the lack of information backflow for an arbitrary dynamical map. This equivalence is universal, that is, does not depend upon the invertibility of the map. Our approach reconciles various attempts where the invertibility was essential (divisibility and/or distinguishability) or inessential (guessing probability). Interestingly this result sheds new light into the structure of the time-local generators giving rise to CP-divisible evolutions. We show that if the map is not invertible then positivity of dissipation/decoherence rates is no longer necessary for CP-divisibility.
• The information of quantum pathways can be extracted in the framework of the Hamiltonian-encoding and Observable-decoding method. For closed quantum systems, only the off-diagonal elements of the Hamiltonian in the Hilbert space are required to be encoded to obtain the desired transitions. For open quantum systems, environment-related terms appear in the diagonal elements of the Hamiltonian in the Liouville space. Therefore, diagonal encodings have to be performed to differentiate different pathways, which leads to self-to-self transitions and an inconsistency of the pathway amplitudes with the Dyson expansion. In this work, a well-designed transformation is proposed to avoid the counter-intuitive transitions and the inconsistency, with or without control fields. A three-level open quantum system is employed for illustration, and numerical simulations show that the method is consistent with the Dyson expansion.
• Recently, feature representation by learning algorithms has drawn great attention. In the music domain, it is either unsupervised or supervised by semantic labels such as music genre. However, finding discriminative features in an unsupervised way is challenging, and supervised feature learning using semantic labels may involve noisy or expensive annotation. In this paper, we present a feature learning approach that utilizes the artist labels attached to every music track as objective metadata. To this end, we train a deep convolutional neural network to classify audio tracks into a large number of artists. We regard it as a general feature extractor and apply it to artist recognition, genre classification and music auto-tagging in transfer learning settings. The results show that the proposed approach outperforms or is comparable to previous state-of-the-art methods, indicating that it effectively captures general music audio features.
• Oct 19 2017 math.CO arXiv:1710.06470v1
Let $D$ be a knot diagram, and let ${\mathcal D}$ denote the set of diagrams that can be obtained from $D$ by crossing exchanges. If $D$ has $n$ crossings, then ${\mathcal D}$ consists of $2^n$ diagrams. A folklore argument shows that at least one of these $2^n$ diagrams is unknot, from which it follows that every diagram has finite unknotting number. It is easy to see that this argument can be used to show that actually ${\mathcal D}$ has more than one unknot diagram, but it cannot yield more than $4n$ unknot diagrams. We improve this linear bound to a superpolynomial bound, by showing that at least $2^{\sqrt[3]{n}}$ of the diagrams in ${\mathcal D}$ are unknot. We also show that either all the diagrams in ${\mathcal D}$ are unknot, or there is a diagram in ${\mathcal D}$ that is a diagram of the trefoil knot.
• Homogeneous oscillations of the inflaton after inflation can be unstable to small spatial perturbations even without coupling to other fields. We show that for inflaton potentials $\propto |\phi|^{2n}$ near $|\phi|=0$ and flatter beyond some $|\phi|=M$, the inflaton condensate oscillations can lead to self-resonance, followed by its complete fragmentation. We find that for non-quadratic minima ($n>1$), shortly after backreaction, the equation of state parameter $w\rightarrow 1/3$. If $M\ll m_{pl}$, radiation domination is established within less than an e-fold of expansion after the end of inflation. In this case self-resonance is efficient and the condensate fragments into transient, localised spherical objects which are unstable and decay, leaving behind them a virialized field with mean kinetic and gradient energies much greater than the potential energy. This end-state yields $w=1/3$. When $M\sim m_{pl}$, we observe slow and steady self-resonance that can last many e-folds before backreaction eventually shuts it off, followed by fragmentation and $w\rightarrow 1/3$. We provide analytical estimates for the time to reach $w\rightarrow 1/3$ after inflation, which can be used as an upper bound (under certain assumptions) on the duration of the transition between the inflationary and radiation-dominated states of expansion. This upper bound can reduce uncertainties in CMB observables such as the spectral tilt $n_{\rm{s}}$ and the tensor-to-scalar ratio $r$. For quadratic minima ($n=1$), $w\rightarrow 0$ regardless of the value of $M$.
• Statistics are bleak for youth aging out of the United States foster care system. They are often left with few resources, are likely to experience homelessness, and are at increased risk of incarceration and exploitation. The Think of Us platform is a service for foster youth and their advocates to create personalized goals and access curated content specific to aging out of the foster care system. In this paper, we propose the use of a machine learning algorithm within the Think of Us platform to better serve youth transitioning to life outside of foster care. The algorithm collects and collates publicly available figures and data to inform caseworkers and other mentors chosen by the youth on how to best assist foster youth. It can then provide valuable resources for the youth and their advocates targeted directly towards their specific needs. Finally, we examine machine learning as a support system and aid for caseworkers to buttress and protect vulnerable young adults during their transition to adulthood.
• The City of Detroit maintains an active fleet of over 2500 vehicles, spending an annual average of over \$5 million on new vehicle purchases and over \$7.7 million on maintaining this fleet. Understanding the existence of patterns and trends in this data could be useful to a variety of stakeholders, particularly as Detroit emerges from Chapter 9 bankruptcy, but the patterns in such data are often complex and multivariate and the city lacks dedicated resources for detailed analysis of this data. This work, a data collaboration between the Michigan Data Science Team (http://midas.umich.edu/mdst) and the City of Detroit's Operations and Infrastructure Group, seeks to address this unmet need by analyzing data from the City of Detroit's entire vehicle fleet from 2010-2017. We utilize tensor decomposition techniques to discover and visualize unique temporal patterns in vehicle maintenance; apply differential sequence mining to demonstrate the existence of common and statistically unique maintenance sequences by vehicle make and model; and, after showing these time-dependencies in the dataset, demonstrate an application of a predictive Long Short Term Memory (LSTM) neural network model to predict maintenance sequences. Our analysis shows both the complexities of municipal vehicle fleet data and useful techniques for mining and modeling such data.
• In the realm of multimodal communication, sign language is, and continues to be, one of the most understudied areas. In line with recent advances in the field of deep learning, there are far-reaching implications and applications that neural networks can have for sign language interpretation. In this paper, we present a method for using deep convolutional networks to classify images of both the letters and digits in American Sign Language.
• The principal goal of computational mechanics is to define pattern and structure so that the organization of complex systems can be detected and quantified. Computational mechanics developed from efforts in the 1970s and early 1980s to identify strange attractors as the mechanism driving weak fluid turbulence via the method of reconstructing attractor geometry from measurement time series and in the mid-1980s to estimate equations of motion directly from complex time series. In providing a mathematical and operational definition of structure it addressed weaknesses of these early approaches to discovering patterns in natural systems. Since then, computational mechanics has led to a range of results from theoretical physics and nonlinear mathematics to diverse applications---from closed-form analysis of Markov and non-Markov stochastic processes that are ergodic or nonergodic and their measures of information and intrinsic computation to complex materials and deterministic chaos and intelligence in Maxwellian demons to quantum compression of classical processes and the evolution of computation and language. This brief review clarifies several misunderstandings and addresses concerns recently raised regarding early works in the field (1980s). We show that misguided evaluations of the contributions of computational mechanics are groundless and stem from a lack of familiarity with its basic goals and from a failure to consider its historical context. For all practical purposes, its modern methods and results largely supersede the early works. This not only renders recent criticism moot and shows the solid ground on which computational mechanics stands but, most importantly, shows the significant progress achieved over three decades and points to the many intriguing and outstanding challenges in understanding the computational nature of complex dynamic systems.
• Using the recent theory of Krein--von Neumann extensions for positive functionals we present several simple criteria to decide whether a given positive functional on the full operator algebra is normal. We also characterize those functionals defined on the left ideal of finite rank operators that have a normal extension.
• The finite-difference time-domain (FDTD) method is a well-established method for solving the time evolution of Maxwell's equations. Unfortunately the scheme introduces numerical dispersion, and therefore phase and group velocities which deviate from the correct values. In more than one dimension, this leads to non-physical predictions such as numerical Cherenkov radiation emitted by a relativistic electron beam propagating in vacuum. Improved solvers, which keep the staggered Yee-type grid for electric and magnetic fields, generally modify the spatial derivative operator in the Maxwell-Faraday equation by increasing the computational stencil. These modified solvers can be characterized by different sets of coefficients, leading to different dispersion properties. In this work we introduce a norm function to recast the choice of coefficients as a minimization problem. We solve this problem numerically and show that the minimization procedure leads to phase and group velocities that are considerably closer to $c$ than those of schemes with manually set coefficients available in the literature. Depending on the specific problem at hand (e.g. electron beam propagation in plasma, high-order harmonic generation from plasma surfaces, etc.), the norm function can be chosen accordingly, for example to minimize the numerical dispersion in a certain propagation direction. Particle-in-cell simulations of an electron beam propagating in vacuum using our solver are provided.
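A minimal illustration of the baseline problem (standard textbook notation, not the paper's optimized solver): the 1D Yee scheme obeys the discrete dispersion relation $\sin(\omega\Delta t/2) = S\sin(k\Delta x/2)$ with Courant number $S = c\Delta t/\Delta x$, giving a numerical phase velocity below $c$ at short wavelengths.

```python
import math

def numerical_phase_velocity(k, dx, courant=0.5, c=1.0):
    """Phase velocity of the standard 1D Yee/FDTD scheme.

    Discrete dispersion relation:
        sin(w*dt/2) = S * sin(k*dx/2),  with S = c*dt/dx (Courant number).
    """
    dt = courant * dx / c
    w = (2.0 / dt) * math.asin(courant * math.sin(k * dx / 2.0))
    return w / k

dx = 0.1
for k in (0.1, 1.0, 10.0):
    print(k, numerical_phase_velocity(k, dx))
```

Long wavelengths ($k\Delta x \ll 1$) propagate at essentially $c$, while short wavelengths lag behind; the paper's norm-minimization chooses extended-stencil coefficients precisely to shrink this gap.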
• Mild traumatic brain injury (mTBI) is a growing public health problem with an estimated incidence of one million people annually in the US. Neurocognitive tests are used both to assess the patient's condition and to monitor the patient's progress. This work aims to directly use MR images taken shortly after injury to detect whether a patient suffers from mTBI, by incorporating machine learning and computer vision techniques to learn features suitable for discriminating between mTBI and normal patients. We focus on three regions of the brain, extract multiple patches from them, and use a bag-of-visual-words technique to represent each subject as a histogram of representative patterns derived from patches from all training subjects. After extracting the features, we use greedy forward feature selection to choose a subset of features that achieves the highest accuracy. We show through experimental studies that BoW features perform better than the simple mean-value features used previously.
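A schematic version of the bag-of-visual-words pipeline described above (toy pure-Python sketch with a tiny k-means codebook; the paper's patch sizes, brain regions, and clustering details are not reproduced here):

```python
import random

def extract_patches(image, size=3):
    """Collect all size x size patches (flattened) from a 2-D grid of intensities."""
    h, w = len(image), len(image[0])
    patches = []
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            patches.append([image[i + a][j + b] for a in range(size) for b in range(size)])
    return patches

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means to build the visual-word codebook from pooled training patches."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[idx].append(p)
        for c, members in enumerate(clusters):
            if members:  # keep the old center if a cluster empties out
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return centers

def bow_histogram(image, centers):
    """Represent one subject as a normalized histogram of nearest visual words."""
    hist = [0] * len(centers)
    for p in extract_patches(image):
        idx = min(range(len(centers)),
                  key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        hist[idx] += 1
    total = sum(hist)
    return [h / total for h in hist]
```

In the paper's setting the codebook is learned from patches pooled across all training subjects, and each subject's histogram then feeds the greedy forward feature selection.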
• In this work, we pose the question of whether, by considering qualitative information such as a sample target image as input, one can produce a rendered image of scientific data that is similar to the target. The algorithm resulting from our research allows one to ask the question of whether features like those in the target image exists in a given dataset. In that way, our method is one of imagery query or reverse engineering, as opposed to manual parameter tweaking of the full visualization pipeline. For target images, we can use real-world photographs of physical phenomena. Our method leverages deep neural networks and evolutionary optimization. Using a trained similarity function that measures the difference between renderings of a phenomenon and real-world photographs, our method optimizes rendering parameters. We demonstrate the efficacy of our method using a superstorm simulation dataset and images found online. We also discuss a parallel implementation of our method, which was run on NCSA's Blue Waters.
• University curricula, both on a campus level and on a per-major level, are affected in a complex way by many decisions of many administrators and faculty over time. As universities across the United States share an urgency to significantly improve student success and retention, there is a pressing need to better understand how the student population is progressing through the curriculum, how to provide better supporting infrastructure, and how to refine the curriculum for the purpose of improving student outcomes. This work has developed a visual knowledge discovery system called eCamp that pulls together a variety of population-scale data products, including student grades, major descriptions, and graduation records. These datasets were previously disconnected and only available to and maintained by independent campus offices. The framework models and analyzes the multi-level relationships hidden within these data products, and visualizes student flow patterns through individual majors as well as through a hierarchy of majors. These results support analytical tasks involving student outcomes, student retention, and curriculum design. It is shown how eCamp has revealed student progression information that was previously unavailable.
• Despite the appeal of deep neural networks that largely replace the traditional handmade filters, they still suffer from isolated cases that cannot be properly handled only by the training of convolutional filters. Abnormal factors, including real-world noise, blur, or other quality degradations, ruin the output of a neural network. These unexpected problems can produce critical complications, and it is surprising that there has only been minimal research into the effects of noise in the deep neural network model. Therefore, we present an exhaustive investigation into the effect of noise in image classification and suggest a generalized architecture of a dual-channel model to treat quality degraded input images. We compare the proposed dual-channel model with a simple single model and show it improves the overall performance of neural networks on various types of quality degraded input datasets.
• We apply machine learning techniques in an attempt to predict and classify stellar properties from noisy and sparse time-series data. We preprocessed over 94 GB of Kepler light curves from MAST to classify according to ten distinct physical properties using both representation learning and feature engineering approaches. Studies using machine learning in the field have primarily been done on simulated data, making our study one of the first to use real light curve data for machine learning approaches. We tuned our data using previous work with simulated data as a template and achieved mixed results between the two approaches. Representation learning using a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) produced no successful predictions, but our work with feature engineering was successful for both classification and regression. In particular, we were able to recover stellar density, stellar radius, and effective temperature with low error (~2-4%) and achieved good accuracy (~75%) in classifying the number of transits for a given star. The results show promise for improvement for both approaches upon using larger datasets with a larger minority class. This work has the potential to provide a foundation for future tools and techniques to aid in the analysis of astrophysical data.
• Stabilization, by deformation, of the Poincaré-Heisenberg algebra requires both the introduction of a fundamental length and the noncommutativity of translations, which is associated with the gravitational field. The noncommutative geometry structure that follows from the deformed algebra is studied both for the noncommutative tangent space and the full space with gravity. The contact points of this approach with the work of David Finkelstein are emphasized.
• Maddah-Ali and Niesen's original coded caching scheme for shared-link broadcast networks is now known to be optimal to within a factor of two, and has been applied to other types of networks. For practical reasons, this paper considers a server that communicates with cache-aided users through $H$ intermediate relays. In particular, it focuses on combination networks where each of the $K = \binom{H}{r}$ users is connected to a distinct $r$-subset of relays. By leveraging the symmetric topology of the network, this paper proposes a novel method to generate multicast messages and deliver them to the users. By numerical evaluations, the proposed scheme is shown to reduce the download time compared to the schemes available in the literature. The idea is then extended to decentralized combination networks, more general relay networks, and combination networks with cache-aided relays and users. In these cases as well, the proposed scheme outperforms known ones.
• We propose in this paper a construction of a diffusion process on the space $\mathcal {P}_2(\mathbb{R})$ of probability measures with a second-order moment. This process was introduced in several papers by Konarovskyi (see e.g. "A system of coalescing heavy diffusion particles on the real line", 2017) and consists of the limit as $N$ tends to $+\infty$ of a system of $N$ coalescing and mass-carrying particles. It has properties analogous to those of a standard Euclidean Brownian motion, in a sense that we will make precise in this paper. We also compare it to the Wasserstein diffusion on $\mathcal {P}_2(\mathbb{R})$ constructed by von Renesse and Sturm (see "Entropic measure and Wasserstein diffusion"). We obtain this process by constructing a system of particles having short-range interactions and by letting the range of interactions tend to zero. This construction can be seen as an approximation of the singular process of Konarovskyi by a sequence of smoother processes.
• We present an algorithm that generates a collision-free configuration-space path that closely follows, according to the discrete Fréchet metric, a desired path in task space. By leveraging the Fréchet metric and other tools from computational geometry, we approximate the search space using a cross-product graph. This structure allows us to efficiently search for the solution using a simple variant of Dijkstra's graph search algorithm. Additionally, we can incrementally update and improve the solution in an anytime fashion. We compare multiple proposed densification strategies and show that our algorithm outperforms a previously proposed optimization-based approach.
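The discrete Fréchet metric the algorithm builds on can be computed with the classical Eiter-Mannila dynamic program over the cross-product of the two paths; a minimal sketch (points as coordinate tuples; this is only the metric, not the paper's planner or its anytime search):

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polylines P and Q (lists of 2-D points),
    via the Eiter-Mannila dynamic program on the P x Q cross-product grid."""
    def d(i, j):
        return math.dist(P[i], Q[j])

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): min over all couplings of the max leash length needed
        # to jointly walk P[0..i] and Q[0..j]
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)
```

The same cross-product grid, with configuration-space vertices replacing one of the paths, is essentially the structure the algorithm searches with a Dijkstra variant.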
• Deep neural networks (DNNs) have become increasingly important due to their excellent empirical performance on a wide range of problems. However, regularization is generally achieved by indirect means, largely due to the complex set of functions defined by a network and the difficulty in measuring function complexity. There exists no method in the literature for additive regularization based on a norm of the function, as is classically considered in statistical learning theory. In this work, we propose sampling-based approximations to weighted function norms as regularizers for deep neural networks. We provide, to the best of our knowledge, the first proof in the literature of the NP-hardness of computing function norms of DNNs, motivating the necessity of a stochastic optimization strategy. Based on our proposed regularization scheme, stability-based bounds yield a $\mathcal{O}(N^{-\frac{1}{2}})$ generalization error for our proposed regularizer when applied to convex function sets. We demonstrate broad conditions for the convergence of stochastic gradient descent on our objective, including for non-convex function sets such as those defined by DNNs. Finally, we empirically validate the improved performance of the proposed regularization strategy for both convex function sets as well as DNNs on real-world classification and segmentation tasks.
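A minimal sketch of the sampling idea (hypothetical names; the paper's weighted norms, network parameterization, and convergence analysis are richer than this): the weighted $L_2$ function norm $\|f\|_\mu^2 = \mathbb{E}_{x\sim\mu}[f(x)^2]$ is approximated by Monte Carlo sampling, which is what makes it usable as an additive regularizer inside stochastic optimization.

```python
import math
import random

def sampled_l2_norm(f, sampler, n=5000, seed=0):
    """Monte Carlo estimate of the weighted L2 function norm
    sqrt(E_{x~mu}[f(x)^2]), with mu given implicitly by `sampler`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sampler(rng)
        total += f(x) ** 2
    return math.sqrt(total / n)
```

In training, such an estimate (or its square) would be scaled by a regularization weight and added to the data-fitting loss at each stochastic gradient step.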
• In this paper, we propose an approach for the detection of clickbait posts in online social media (OSM). Clickbait posts are short catchy phrases that entice a user to click through to an article. The approach is based on a machine learning (ML) classifier capable of distinguishing between clickbait and legitimate posts published in OSM. The suggested classifier is based on a variety of features, including image-related features, linguistic analysis, and methods for abuser detection. In order to evaluate our method, we used two datasets provided by the Clickbait Challenge 2017. The best performance obtained by the ML classifier was an AUC of 0.8, an accuracy of 0.812, precision of 0.819, and recall of 0.966. In addition, as opposed to previous studies, we found that clickbait post titles are statistically significantly shorter than legitimate post titles. Finally, we found that counting the number of formal English words in the given content is useful for clickbait detection.
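The title-length finding suggests that even a single linguistic feature carries signal; a toy sketch (not the paper's classifier, which combines many feature families) that picks the best word-count threshold on a labeled sample:

```python
def title_length_feature(title):
    """Word count of a post title, one simple linguistic feature."""
    return len(title.split())

def threshold_classifier(titles, labels):
    """Pick the word-count threshold that best separates the two classes
    (clickbait = 1, legitimate = 0) on the given training titles."""
    lengths = sorted(set(title_length_feature(t) for t in titles))
    best_t, best_acc = lengths[0], 0.0
    for t in lengths:
        preds = [1 if title_length_feature(x) <= t else 0 for x in titles]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

A real system would feed this feature, together with image and abuser-detection features, into a trained classifier rather than thresholding it alone.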
• Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Such optima have been well characterized in several biologically relevant cases over the past decade. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the `capacity') achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single modulator controls N targets. Assuming (i) sufficiently large N, (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
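For a discrete channel, the capacity being maximized here (mutual information over the input distribution) can be computed with the standard Blahut-Arimoto iteration; a minimal sketch in nats, for a row-stochastic channel matrix (a generic illustration of channel capacity, not the paper's field-theoretic calculation):

```python
import math

def blahut_arimoto(W, iters=300):
    """Capacity (in nats) of a discrete channel p(y|x) given as a
    row-stochastic matrix W, via the Blahut-Arimoto iteration."""
    n, m = len(W), len(W[0])
    p = [1.0 / n] * n  # input distribution, initialized uniform
    for _ in range(iters):
        # output distribution induced by the current input distribution
        q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
        # relative entropy D(W_x || q) for each input symbol x
        d = [sum(W[x][y] * math.log(W[x][y] / q[y]) for y in range(m) if W[x][y] > 0)
             for x in range(n)]
        z = [p[x] * math.exp(d[x]) for x in range(n)]
        s = sum(z)
        p = [v / s for v in z]
    # mutual information at the (near-)optimal input distribution
    q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
    return sum(p[x] * W[x][y] * math.log(W[x][y] / q[y])
               for x in range(n) for y in range(m) if W[x][y] > 0)
```

For the binary symmetric channel with flip probability $\epsilon$ this recovers the textbook value $\ln 2 - H(\epsilon)$ nats.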
• Oct 19 2017 math.CO arXiv:1710.06680v1
A vertex $u$ of a graph $t$-dominates a vertex $v$ if there are at most $t$ vertices different from $u, v$ that are adjacent to $v$ and not to $u$; and a graph is $t$-dominating if for every pair of distinct vertices, one of them $t$-dominates the other. Our main result says that if a graph is $t$-dominating, then it is close (in an appropriate sense) to being $0$-dominating. We also show that an analogous statement for digraphs is false; and discuss some connections with the Erdős-Hajnal conjecture.
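The definition can be checked directly on small graphs; a short sketch (adjacency given as a dict mapping each vertex to its set of neighbours; names are illustrative):

```python
from itertools import combinations

def t_dominates(adj, u, v, t):
    """u t-dominates v if at most t vertices other than u and v are
    adjacent to v but not to u."""
    bad = [w for w in adj if w not in (u, v) and w in adj[v] and w not in adj[u]]
    return len(bad) <= t

def is_t_dominating(adj, t):
    """Every pair of distinct vertices has one member t-dominating the other."""
    return all(t_dominates(adj, u, v, t) or t_dominates(adj, v, u, t)
               for u, v in combinations(adj, 2))
```

For example, complete graphs are 0-dominating, while the 4-vertex path fails at $t=0$ (its two endpoints do not 0-dominate each other) but is 1-dominating.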
• Dropout Variational Inference, or Dropout Sampling, has been recently proposed as an approximation technique for Bayesian Deep Learning and evaluated for image classification and regression tasks. This paper investigates the utility of Dropout Sampling for object detection for the first time. We demonstrate how label uncertainty can be extracted from a state-of-the-art object detection system via Dropout Sampling. We show that this uncertainty can be utilized to increase object detection performance under the open-set conditions that are typically encountered in robotic vision. We evaluate this approach on a large synthetic dataset with 30,000 images, and a real-world dataset captured by a mobile robot in a versatile campus environment.
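The core of Dropout Sampling is cheap to sketch: run several stochastic forward passes with dropout left active at test time, and read label uncertainty off the spread of the softmax outputs (toy single-layer scorer in pure Python, not the paper's object detector):

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def forward(x, W, keep=0.5, rng=random):
    """One stochastic forward pass with (inverted) dropout on the inputs."""
    mask = [1.0 / keep if rng.random() < keep else 0.0 for _ in x]
    xd = [a * m for a, m in zip(x, mask)]
    return softmax([sum(w * a for w, a in zip(row, xd)) for row in W])

def mc_dropout(x, W, samples=200, seed=0):
    """Predictive mean and per-class variance from repeated dropout passes."""
    rng = random.Random(seed)
    runs = [forward(x, W, rng=rng) for _ in range(samples)]
    mean = [sum(r[c] for r in runs) / samples for c in range(len(W))]
    var = [sum((r[c] - mean[c]) ** 2 for r in runs) / samples for c in range(len(W))]
    return mean, var
```

High per-class variance flags detections the model is unsure about, which is the signal the paper exploits to reject unknown objects under open-set conditions.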
• The vertical displacement of a tokamak plasma is modelled during its non-linear phase by considering a free-moving current-carrying rod inductively coupled to a set of fixed conducting wires or a cylindrical conducting shell. The models capture the leading term in a Taylor expansion of the Green's function for the interaction between the plasma column and the surrounding vacuum vessel. The plasma shape and profiles are assumed not to vary during the vertical drifting phase such that the plasma column behaves as a rigid body. In the limit of perfectly conducting structures, the plasma is prevented from coming into contact with the wall by steep effective potential barriers created by the induced eddy currents. Consequently, the plasma wire oscillates at Alfvénic frequencies about a given force-free position. In addition to damping these oscillations, resistivity in the wall allows the equilibrium point to drift towards the vessel on the slow timescale of flux penetration. The initial exponential motion of the plasma, understood as a resistive vertical instability, is succeeded by a non-linear "sinking" behaviour, which is analytically shown to be algebraic and decelerating. The acceleration of the plasma column often observed in experiments is thus conjectured to originate from an early sharing of toroidal current between the core, the halo plasma and the wall, or from the thermal quench dynamics precipitating loss of plasma current.
• We introduce a novel approach to visualizing temporal clickstream behaviour in the context of a degree-satisfying online course, Habitable Worlds, offered through Arizona State University. The current practice for visualizing behaviour within a digital learning environment has been to utilize state space graphs and other plots of descriptive statistics on resource transitions. While these forms can be visually engaging, they rely on conditional frequency tabulations which lack contextual depth and require assumptions about the patterns being sought. Skip-grams and other representation learning techniques position elements into a vector space which can capture a wide scope of regularities in the data. These regularities can then be projected onto a two-dimensional perceptual space using dimensionality reduction techniques designed to retain the relationship information encoded in the learned representations. While these visualization techniques have been used before in the broader machine learning community to better understand the makeup of a neural network hidden layer or the relationship between word vectors, we apply them to online behavioral learner data and go a step further, exploring the impact of the parameters of the model on producing tangible, non-trivial observations of behaviour that are illuminating and suggestive of pedagogical improvement to the course designers and instructors. The methodology introduced in this paper led to an improved understanding of passing and non-passing student behavior in the course and is widely applicable to other datasets of clickstream activity where investigators and stakeholders wish to organically surface principal behavioral patterns.
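The first step of the skip-gram treatment of clickstreams is simply to turn each session into (center, context) pairs within a window, exactly as word2vec does for sentences; a minimal sketch (resource names are illustrative, and the embedding training itself is omitted):

```python
def skip_gram_pairs(sequence, window=2):
    """(center, context) training pairs from one clickstream of resource ids,
    pairing each event with its neighbours within the given window."""
    pairs = []
    for i, center in enumerate(sequence):
        for j in range(max(0, i - window), min(len(sequence), i + window + 1)):
            if j != i:
                pairs.append((center, sequence[j]))
    return pairs
```

These pairs would then train an embedding whose vectors are projected to 2-D (e.g. by t-SNE or PCA) for the visualizations the paper describes.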
• Inverse problems appear in many applications, such as image deblurring and inpainting. The common approach is to design a specific algorithm for each problem. The recently introduced Plug-and-Play (P&P) framework allows general inverse problems to be solved by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, burdensome parameter tuning is often required to obtain high-quality results. In this work, we propose an alternative method for solving inverse problems using denoising algorithms that requires less parameter tuning. We demonstrate that it is competitive with task-specific techniques and the P&P approach for image inpainting and deblurring.
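The authors' alternative method is not detailed in the abstract; as a rough illustration of the Plug-and-Play idea it builds on, here is a minimal half-quadratic-splitting sketch for 1-D inpainting, with a moving-average filter standing in for a learned denoiser:

```python
import numpy as np

def smooth_denoiser(x, k=5):
    """Stand-in denoiser: simple moving-average smoothing. A strong
    denoiser (e.g. BM3D or a CNN) would be plugged in here instead."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def pnp_inpaint(y, mask, rho=0.5, n_iter=50):
    """Plug-and-Play via half-quadratic splitting for inpainting y = mask * x.

    x-step: closed-form minimizer of ||mask*x - y||^2 + rho*||x - v||^2.
    v-step: the plugged-in denoiser acts as the prior.
    """
    x = y.copy()
    v = y.copy()
    for _ in range(n_iter):
        x = (mask * y + rho * v) / (mask + rho)   # data-fidelity step
        v = smooth_denoiser(x)                    # prior step via the denoiser
    return x

# Recover a smooth ramp signal with roughly 40% of samples missing.
rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 100)
mask = (rng.random(100) > 0.4).astype(float)
y = mask * truth
x_hat = pnp_inpaint(y, mask)
```

The penalty weight `rho` is exactly the kind of parameter whose tuning the paper aims to reduce.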
• Many experiments in physics involve searching for a localized excess over background expectations in an observed spectrum. If the background is known and the noise is Gaussian, the amount of excess in successive observations can be quantified by the runs statistic, which accounts for the look-elsewhere effect. The distribution of the runs statistic under the background model is known analytically, but the computation becomes too expensive beyond about a hundred observations. This work demonstrates a principled high-precision extrapolation from a few dozen up to millions of data points. It is most precise in the interesting regime, when an excess is present. The method is verified on benchmark cases and successfully applied to real data from an axion search. The code that implements our method is available at https://github.com/fredRos/runs.
• Lexical ambiguity can impede NLP systems from accurately understanding semantics. Despite its potential benefits, the integration of sense-level information into NLP systems has remained understudied. By incorporating a novel disambiguation algorithm into a state-of-the-art classification model, we create a pipeline to integrate sense-level information into downstream NLP applications. We show that a simple disambiguation of the input text can lead to consistent performance improvements on multiple topic categorization and polarity detection datasets, particularly when the fine granularity of the underlying sense inventory is reduced and the documents are sufficiently large. Our results also point to the need for sense representation research to focus more on in vivo evaluations, which target performance in downstream NLP applications rather than artificial benchmarks.
• We demonstrate identification of the position, material, orientation and shape of objects imaged by an $^{85}$Rb atomic magnetometer performing electromagnetic induction imaging, supported by machine learning. Machine learning maximizes the information extracted from the images created by the magnetometer, demonstrating the use of hidden data. We obtain localization 2.6 times better than the spatial resolution of the imaging system and successful classification at rates up to 97$\%$. This circumvents the need to solve the inverse problem and demonstrates the extension of machine learning to diffusive systems such as low-frequency electrodynamics in media. Automated collection of task-relevant information from quantum-based electromagnetic imaging will have an impact on fields ranging from biomedicine to security.
• The ICDAR Robust Reading Competition (RRC), initiated in 2003 and re-established in 2011, has become the de facto evaluation standard for the international community. Concurrent with its second incarnation in 2011, a continuous effort started to develop an online framework to facilitate the hosting and management of competitions. This short paper briefly outlines the Robust Reading Competition Annotation and Evaluation Platform, the backbone of the Robust Reading Competition, comprising a collection of tools and processes that aim to simplify the management and annotation of data, and to provide online and offline performance evaluation and analysis services.
• At VAST 2016, a characterization of guidance has been presented. It includes a definition of guidance and a model of guidance based on van Wijk's model of visualization. This note amends the original characterization of guidance in two aspects. First, we provide a clarification of what guidance actually is (and is not). Second, we insert into the model a conceptually relevant link that was missing in the original version.
• We consider the problem of minimizing a convex function on a simple set with a convex non-smooth inequality constraint, and describe first-order methods to solve such problems in different situations: smooth or non-smooth objective function; convex or strongly convex objective and constraint; deterministic or randomized information about the objective and constraint. We hope that it is convenient for a reader to have all the methods for the different settings in one place. The described methods are based on the Mirror Descent algorithm and the switching subgradient scheme. One of our focuses is to propose, for the listed settings, a Mirror Descent method with adaptive stepsizes and an adaptive stopping rule: neither the stepsizes nor the stopping rule requires knowledge of the Lipschitz constant of the objective or constraint. We also construct Mirror Descent for problems whose objective function is not Lipschitz continuous, e.g. a quadratic function. Besides that, we address the problem of recovering the solution of the dual problem.
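A minimal Euclidean-prox sketch of the adaptive switching subgradient idea described above (the function and parameter names are ours, and the Bregman machinery of general Mirror Descent is omitted):

```python
import numpy as np

def switching_subgradient(f, g, subf, subg, x0, eps, n_iter=5000, radius=2.0):
    """Adaptive switching subgradient scheme (Euclidean prox) for
    min f(x) s.t. g(x) <= 0 over the ball ||x|| <= radius.

    The stepsizes eps / ||d||^2 require no Lipschitz constants.
    """
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    weight = 0.0
    for _ in range(n_iter):
        if g(x) <= eps:              # "productive" step: decrease the objective
            d = subf(x)
            h = eps / (d @ d + 1e-12)
            avg += h * x             # output a weighted average of productive iterates
            weight += h
        else:                        # "non-productive" step: restore feasibility
            d = subg(x)
            h = eps / (d @ d + 1e-12)
        x = x - h * d
        norm = np.linalg.norm(x)
        if norm > radius:            # projection onto the simple set (a ball here)
            x *= radius / norm
    return avg / weight

# Toy problem: minimize ||x - (1,1)||_1 subject to x_0 + x_1 <= 1.
f = lambda x: abs(x[0] - 1.0) + abs(x[1] - 1.0)   # optimal value is 1
g = lambda x: x[0] + x[1] - 1.0
subf = lambda x: np.sign(x - 1.0)
subg = lambda x: np.ones(2)
x_hat = switching_subgradient(f, g, subf, subg, [0.0, 0.0], eps=0.01)
```

The returned point is approximately feasible and approximately optimal, with accuracy governed by `eps` rather than by any Lipschitz constant supplied up front.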
• Given a real-world graph, how can we measure relevance scores for ranking and link prediction? Random walk with restart (RWR) provides an excellent measure for this and has been applied to various applications such as friend recommendation, community detection, anomaly detection, etc. However, RWR suffers from two problems: 1) using the same restart probability for all the nodes limits the expressiveness of the random walk, and 2) the restart probability needs to be manually chosen for each application without theoretical justification. We have two main contributions in this paper. First, we propose Random Walk with Extended Restart (RWER), a random walk based measure which improves the expressiveness of random walks by using a distinct restart probability for each node. The improved expressiveness leads to superior accuracy for ranking and link prediction. Second, we propose SuRe (Supervised Restart for RWER), an algorithm for learning the restart probabilities of RWER from a given graph. SuRe eliminates the need to heuristically and manually select the restart parameter for RWER. Extensive experiments show that our proposed method provides the best performance for ranking and link prediction tasks, improving MAP (Mean Average Precision) by up to 15.8% over the best competitor.
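Based on the description above, a power-iteration sketch of RWER with a per-node restart vector might look as follows (the abstract does not give the exact formulation, so this is an illustrative reading):

```python
import numpy as np

def rwer_scores(A, c, q, tol=1e-10, max_iter=1000):
    """Random Walk with Extended Restart: per-node restart probabilities.

    A : (n, n) adjacency matrix (A[i, j] = edge i -> j)
    c : (n,) restart probability at each node (ordinary RWR uses one scalar)
    q : (n,) restart (seed) distribution, summing to 1
    """
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)  # row-normalized transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        # From node i: restart with probability c[i], else follow an out-edge.
        r_new = P.T @ ((1.0 - c) * r) + (c * r).sum() * q
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Tiny 4-node example with a high restart probability at node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
c = np.array([0.15, 0.15, 0.15, 0.9])
q = np.array([1.0, 0.0, 0.0, 0.0])  # seed at node 0
r = rwer_scores(A, c, q)
```

SuRe would then tune the vector `c` from supervision instead of fixing it by hand, which is the paper's second contribution.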
• Automated segmentation approaches are crucial to quantitatively analyze large-scale 3D microscopy images. Particularly in deep tissue regions, automatic methods still fail to provide error-free segmentations. To improve the segmentation quality throughout imaged samples, we present a new supervoxel-based 3D segmentation approach that outperforms current methods and reduces the manual correction effort. The algorithm consists of gentle preprocessing and a conservative supervoxel generation method, followed by supervoxel agglomeration based on local signal properties and a postprocessing step that fixes under-segmentation errors using a Convolutional Neural Network. We validate the functionality of the algorithm on manually labeled 3D confocal images of the plant Arabidopsis thaliana and compare the results to a state-of-the-art meristem segmentation algorithm.
• We demonstrate strong coupling between travelling magnons in an Yttrium Iron Garnet film and 3D microwave cavity photons at milli-Kelvin temperatures. A coupling strength of $350$ MHz, or $7.3\%$ of the resonance frequency, is observed. The magnonic subsystem is represented by the Damon-Eshbach magnetostatic surface wave, whose distribution of wave numbers gives a linewidth of 15 MHz; ways to improve this parameter are discussed. The energy gap in the spectrum, set by the Zeeman energy and the shape-anisotropy energy in the film geometry, gives rise to a significant asymmetry of the double-peak structure of the photon-magnon avoided level crossing. A structure of two parallel YIG films is investigated using the same re-entrant magnetostatic surface wave transducer, revealing higher-order magnon modes existing in both films. The combination of a multi-post re-entrant cavity and multiple films is a potential basis for engineering both magnon and photon spectra.
• In this short communication paper, we propose, for the first time, a convergent bounded-degree hierarchy of second-order cone programming (SOCP) relaxations for solving nonconvex polynomial optimization problems. A key feature of the proposed SOCP hierarchy is that the size and the number of the second-order cone constraints of the relaxations are independent of the step of the relaxation in the hierarchy. Using the Krivine-Stengle certificate of positivity in real algebraic geometry, we establish the asymptotic convergence of the hierarchy under a simple condition which, for instance, is automatically satisfied for box constraints. Finally, by establishing new structured sums-of-squares representation results for nonnegative polynomials, we show that finite convergence is achieved at the first step of the hierarchy for two classes of polynomial optimization problems: polynomial optimization problems satisfying a new numerically tractable convexity condition, called SOCP-convexity, and polynomial problems with essentially non-positive coefficients. Numerical examples are also provided to illustrate the significance of the results.
• We consider random Schrödinger operators with Dirichlet boundary conditions outside lattice approximations of a smooth Euclidean domain and study the behavior of their lowest-lying eigenvalues in the limit when the lattice spacing tends to zero. Under a suitable moment assumption on the random potential and regularity of the spatial dependence of its mean, we prove that the eigenvalues of the random operator converge to those of a deterministic Schrödinger operator. Assuming also regularity of the variance, the fluctuations of the random eigenvalues around their means are shown to obey a multivariate central limit theorem. This extends the authors' recent work, where similar conclusions have been obtained for bounded random potentials.
• Learning social media data embedding by deep models has attracted extensive research interest and spurred numerous applications, such as link prediction, classification, and cross-modal search. However, for social images, which contain both link information and multimodal contents (e.g., text description and visual content), simply employing the embedding learnt from the network structure or the data content results in sub-optimal social image representation. In this paper, we propose a novel social image embedding approach called Deep Multimodal Attention Networks (DMAN), which employs a deep model to jointly embed multimodal contents and link information. Specifically, to effectively capture the correlations between multimodal contents, we propose a multimodal attention network to encode the fine-granularity relation between image regions and textual words. To leverage the network structure for embedding learning, a novel Siamese-Triplet neural network is proposed to model the links among images. With the joint deep model, the learnt embedding can capture both the multimodal contents and the nonlinear network information. Extensive experiments are conducted to investigate the effectiveness of our approach in the applications of multi-label classification and cross-modal search. Compared to state-of-the-art image embeddings, our proposed DMAN achieves significant improvement in the tasks of multi-label classification and cross-modal search.
• Experience replay is a key technique behind many recent advances in deep reinforcement learning. Allowing the agent to learn from earlier memories can speed up learning and break undesirable temporal correlations. Despite its widespread application, very little is understood about the properties of experience replay. How does the amount of memory kept affect learning dynamics? Does it help to prioritize certain experiences? In this paper, we address these questions by formulating a dynamical-systems ODE model of Q-learning with experience replay. We derive analytic solutions of the ODE for a simple setting. We show that even in this very simple setting, the amount of memory kept can substantially affect the agent's performance: too much or too little memory both slow down learning. Moreover, we characterize regimes where prioritized replay harms the agent's learning. We show that our analytic solutions are in excellent agreement with experiments. Finally, we propose a simple algorithm for adaptively changing the memory buffer size which achieves consistently good empirical performance.
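As a toy illustration of the mechanism being analyzed (not the paper's ODE model), here is tabular Q-learning on a tiny chain MDP with a bounded replay memory; the environment and parameter choices are ours:

```python
import random
from collections import deque

def q_learning_with_replay(n_states=5, buffer_size=50, episodes=300,
                           batch=8, alpha=0.1, gamma=0.9, seed=0):
    """Tabular Q-learning on a small chain MDP, updating only from a
    bounded replay memory. States 0..n_states-1; action 0 moves left,
    action 1 moves right; reward 1 on reaching the right end."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    buffer = deque(maxlen=buffer_size)  # the experience-replay memory
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = rng.randrange(2)  # uniform exploration (Q-learning is off-policy)
            s2 = max(0, s - 1) if a == 0 else s + 1
            reward = 1.0 if s2 == n_states - 1 else 0.0
            buffer.append((s, a, reward, s2))
            # Learn from a random minibatch of stored transitions.
            for bs, ba, br, bs2 in rng.sample(list(buffer), min(batch, len(buffer))):
                target = br + (0.0 if bs2 == n_states - 1 else gamma * max(Q[bs2]))
                Q[bs][ba] += alpha * (target - Q[bs][ba])
            s = s2
    return Q

Q = q_learning_with_replay()
```

The `buffer_size` argument is exactly the quantity whose effect on learning dynamics the paper studies analytically.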
• A number of recent papers have provided evidence that practical design questions about neural networks may be tackled theoretically by studying the behavior of random networks. However, until now the tools available for analyzing random neural networks have been relatively ad-hoc. In this work, we show that the distribution of pre-activations in random neural networks can be exactly mapped onto lattice models in statistical physics. We argue that several previous investigations of stochastic networks actually studied a particular factorial approximation to the full lattice model. For random linear networks and random rectified linear networks we show that the corresponding lattice models in the wide network limit may be systematically approximated by a Gaussian distribution with covariance between the layers of the network. In each case, the approximate distribution can be diagonalized by Fourier transformation. We show that this approximation accurately describes the results of numerical simulations of wide random neural networks. Finally, we demonstrate that in each case the large scale behavior of the random networks can be approximated by an effective field theory.
• In this paper we study Liouville type properties of solutions to the steady incompressible Navier-Stokes equations in $\mathbf{R}^{3}$. It is shown that any solution to the steady Navier-Stokes equations in $\mathbf{R}^{3}$ with finite Dirichlet integral and velocity field vanishing at far fields must be trivial. This solves an open problem. The key ingredients of the proof include a Hodge decomposition of the energy-flux and the observation that the square of the deformation matrix lies in the local Hardy space. As a by-product, we also obtain a Liouville type theorem for the steady density-dependent Navier-Stokes equations.
• If X and Y are real valued random variables such that the first moments of X, Y, and XY exist and the conditional expectation of Y given X is an affine function of X, then the intercept and slope of the conditional expectation equal the intercept and slope of the least squares linear regression function, even though Y may not have a finite second moment. As a consequence, the affine in X form of the conditional expectation and zero covariance imply mean independence.
• An increasing number of sensors on mobile, Internet of things (IoT), and wearable devices generate time-series measurements of physical activities. Though access to the sensory data is critical to the success of many beneficial applications such as health monitoring or activity recognition, a wide range of potentially sensitive information about the individuals can also be discovered through these datasets, and this cannot easily be protected using traditional privacy approaches. In this paper, we propose an integrated sensing framework for managing access to personal time-series data in order to provide utility while protecting individuals' privacy. We introduce the Replacement AutoEncoder, a novel feature-learning algorithm which learns how to transform discriminative features of multidimensional time-series that correspond to sensitive inferences into features more commonly observed in non-sensitive inferences, in order to protect users' privacy. The main advantage of the Replacement AutoEncoder is its ability to keep important features of desired inferences unchanged, preserving the utility of the data. We evaluate the efficacy of the algorithm with an activity recognition task in a multi-sensing environment using extensive experiments on three benchmark datasets. We show that it can retain the recognition accuracy of state-of-the-art techniques while simultaneously preserving the privacy of sensitive information. We use a Generative Adversarial Network to attempt to detect the replacement of sensitive data with fake non-sensitive data. We show that this approach does not detect the replacement unless the network can train using the users' original unmodified data.

Siddhartha Das Oct 06 2017 03:18 UTC

Here is a work in related direction: "Unification of Bell, Leggett-Garg and Kochen-Specker inequalities: Hybrid spatio-temporal inequalities", Europhysics Letters 104, 60006 (2013), which may be relevant to the discussions in your paper. [https://arxiv.org/abs/1308.0270]

Bin Shi Oct 05 2017 00:07 UTC

Comments on this paper are welcome!

Bassam Helou Sep 22 2017 17:21 UTC

The initial version of the article does not adequately and clearly explain how certain equations demonstrate whether a particular interpretation of QM violates the no-signaling condition.
A revised and improved version is scheduled to appear on September 25.

James Wootton Sep 21 2017 05:41 UTC

What does this imply for https://scirate.com/arxiv/1608.00263? I'm guessing they still regard it as valid (it is ref [14]), but just too hard to implement for now.

Ben Criger Sep 08 2017 08:09 UTC

Oh look, there's another technique for decoding surface codes subject to X/Z correlated errors: https://scirate.com/arxiv/1709.02154

Aram Harrow Sep 06 2017 07:54 UTC

The paper only applies to conformal field theories, and such a result cannot hold for more general 1-D systems by 0705.4077 and other papers (assuming standard complexity theory conjectures).

Felix Leditzky Sep 05 2017 21:27 UTC

Thanks for the clarification, Philippe!

Philippe Faist Sep 05 2017 21:09 UTC

Hi Felix, thanks for the good question.

We've found it more convenient to consider trace-nonincreasing and $\Gamma$-sub-preserving maps (and this is justified by the fact that they can be dilated to fully trace-preserving and $\Gamma$-preserving maps on a larger system). The issue arises because

...(continued)