# Top arXiv papers

• Providing system-size independent lower bounds on the spectral gap of a local Hamiltonian is in general a hard problem. For the case of finite-range, frustration-free Hamiltonians on a spin lattice of arbitrary dimension, we show that a certain property of the ground-state space is sufficient to obtain such a bound. We furthermore show that this condition is necessary, and equivalent to a constant spectral gap. Thanks to this equivalence, we can prove that for gapless models in any dimension, the spectral gap on regions of diameter $n$ is at most $o\left(\frac{\log(n)^{2+\epsilon}}{n}\right)$ for any positive $\epsilon$.
• The tensor rank of a tensor is the smallest number $r$ such that the tensor can be decomposed as a sum of $r$ simple tensors. Let $s$ be a $k$-tensor and let $t$ be an $l$-tensor. The tensor product of $s$ and $t$ is a $(k + l)$-tensor (not to be confused with the "tensor Kronecker product" used in algebraic complexity theory, which multiplies two $k$-tensors to get a $k$-tensor). Tensor rank is sub-multiplicative under the tensor product. We revisit the connection between restrictions and degenerations. It is well known that tensor rank is not in general multiplicative under the tensor Kronecker product. A result of our study is that tensor rank is also not in general multiplicative under the tensor product. This answers a question of Draisma and Saptharishi. It remains an open question whether border rank and asymptotic rank are multiplicative under the tensor product. Interestingly, all lower bounds on border rank obtained from Young flattenings are multiplicative under the tensor product.
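As a concrete illustration of the distinction drawn above (our own sketch, not from the paper), the two products can be compared in numpy, where `np.tensordot` with `axes=0` forms the tensor product of two matrices (2-tensors) and `np.kron` forms their Kronecker product:

```python
import numpy as np

# A 2-tensor s (matrix) and a 2-tensor t.
s = np.arange(4).reshape(2, 2)
t = np.arange(4).reshape(2, 2)

# Tensor product: a (2+2)=4-tensor with entries s[i,j]*t[k,l].
tensor_prod = np.tensordot(s, t, axes=0)

# Tensor Kronecker product: multiplies two 2-tensors to get another 2-tensor.
kron_prod = np.kron(s, t)

# The two constructions contain the same entries, arranged differently:
# kron(s, t)[2*i + k, 2*j + l] == s[i, j] * t[k, l].
```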
• We show that the border support rank of the tensor corresponding to two-by-two matrix multiplication is seven over the complex numbers. We do this by constructing two polynomials that vanish on all complex tensors with format four-by-four-by-four and border rank at most six, but that do not vanish simultaneously on any tensor with the same support as the two-by-two matrix multiplication tensor. This extends the work of Hauenstein, Ikenmeyer, and Landsberg. We also give two proofs that the support rank of the two-by-two matrix multiplication tensor is seven over any field: one proof using a result of De Groote saying that the decomposition of this tensor is unique up to sandwiching, and another proof via the substitution method. These results answer a question asked by Cohn and Umans. Studying the border support rank of the matrix multiplication tensor is relevant for the design of matrix multiplication algorithms, because upper bounds on the border support rank of the matrix multiplication tensor lead to upper bounds on the computational complexity of matrix multiplication, via a construction of Cohn and Umans. Moreover, support rank has applications in quantum communication complexity.
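For readers unfamiliar with the object in question, the two-by-two matrix multiplication tensor can be written down explicitly. The following numpy sketch (our own construction, using the standard flattening of index pairs) builds it as a four-by-four-by-four array and counts its support:

```python
import numpy as np

# Build the 2x2 matrix multiplication tensor as a 4x4x4 array:
# T[(i,j), (j,k), (k,i)] = 1, flattening each index pair (a, b) as 2*a + b.
T = np.zeros((4, 4, 4))
for i in range(2):
    for j in range(2):
        for k in range(2):
            T[2 * i + j, 2 * j + k, 2 * k + i] = 1.0

# The support (set of nonzero positions) has exactly 2^3 = 8 elements;
# support rank asks for the minimal rank over tensors sharing this support.
```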
• In the lore of quantum metrology, one often hears (or reads) the following no-go theorem: If you put vacuum into one input port of a balanced Mach-Zehnder interferometer, then no matter what you put into the other input port, and no matter what your detection scheme, the sensitivity can never be better than the shot noise limit (SNL). Often the proof of this theorem is cited to be in Ref. [C. Caves, Phys. Rev. D 23, 1693 (1981)], but upon further inspection, no such claim is made there. A quantum-Fisher-information-based argument suggestive of this no-go theorem appears in Ref. [M. Lang and C. Caves, Phys. Rev. Lett. 111, 173601 (2013)], but is not stated in its full generality. Here we thoroughly explore this no-go theorem and give the rigorous statement: the no-go theorem holds whenever the unknown phase shift is split between both arms of the interferometer, but remarkably does not hold when only one arm has the unknown phase shift. In the latter scenario, we provide an explicit measurement strategy that beats the SNL. We also point out that these two scenarios are physically different and correspond to different types of sensing applications.
• We study the relations between the recently proposed machine-independent quantum complexity of P. Gacs [Gacs] and the entropy of classical and quantum systems. On the one hand, by restricting Gacs complexity to ergodic classical dynamical systems, we retrieve the equality between the Kolmogorov complexity rate and the Shannon entropy rate derived by A. A. Brudno [Brudno]. On the other hand, using the quantum Shannon-McMillan theorem [BSM], we show that such an equality holds densely in the case of ergodic quantum spin chains.
• Neural networks can efficiently encode the probability distribution of errors in an error-correcting code. Moreover, these distributions can be conditioned on the syndromes of the corresponding errors. This paves a path toward a decoder that employs a neural network to calculate the conditional distribution and then sample from it: the sample is the predicted error for the given syndrome. We present an implementation of such an algorithm that can be applied to any stabilizer code. Testing it on the toric code, it achieves a higher threshold than a number of known decoders, thanks to naturally finding the most probable error and accounting for correlations between errors.
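The decoding principle can be made exact on a toy code. The sketch below (our own stand-in, not the paper's implementation) tabulates the conditional distribution of errors given the syndrome for a 3-bit repetition code by brute-force enumeration; the paper's neural network plays the role of this table for codes where enumeration is infeasible:

```python
from itertools import product

p = 0.1  # independent bit-flip probability (an assumed toy noise model)

def syndrome(e):
    # Parity checks Z1Z2 and Z2Z3 of the 3-bit repetition code.
    return (e[0] ^ e[1], e[1] ^ e[2])

# Tabulate P(error | syndrome) by enumerating all 2^3 bit-flip patterns.
cond = {}
for e in product([0, 1], repeat=3):
    w = sum(e)
    cond.setdefault(syndrome(e), {})[e] = p**w * (1 - p)**(3 - w)

# Normalize each conditional distribution; decode by its most probable error.
decode = {}
for s, dist in cond.items():
    total = sum(dist.values())
    for e in dist:
        dist[e] /= total
    decode[s] = max(dist, key=dist.get)
```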
• Zero-shot learning (ZSL), which studies the problem of object classification for categories for which we have no training examples, is gaining increasing attention from the community. Most existing ZSL methods exploit deterministic transfer learning via an in-between semantic embedding space. In this paper, we attack this problem from a generative probabilistic modelling perspective. We assume that, for any category, the observed representation, e.g. images or texts, is developed from a unique prototype in a latent space, in which the semantic relationships among prototypes are encoded via linear reconstruction. Taking advantage of this assumption, virtual instances of unseen classes can be generated from the corresponding prototypes, giving rise to a novel ZSL model that can alleviate the domain-shift problem inherent in direct transfer learning. Extensive experiments on three benchmark datasets show that our proposed model achieves state-of-the-art results.
• The ground states of topological orders condense extended objects and support topological excitations. This nontrivial property leads to nonzero topological entanglement entropy $S_{topo}$ for conventional topological orders. Fracton topological order is an exotic class of topological order which lies beyond the description of TQFT. With some assumptions about the condensates and the topological excitations, we derive a lower bound on the nonlocal entanglement entropy $S_{nonlocal}$ (a generalization of $S_{topo}$). The lower bound applies to Abelian stabilizer models, including conventional topological orders as well as type I and type II fracton models. For fracton models, the lower bound shows that $S_{nonlocal}$ can take geometry-dependent values, and that $S_{nonlocal}$ is extensive for certain choices of subsystems. The stability of the lower bound under local perturbations is discussed. A variant of our method for non-Abelian models will be presented in an upcoming work.
• This paper focuses on style transfer on the basis of non-parallel text. This is an instance of a broader family of problems including machine translation, decipherment, and sentiment modification. The key technical challenge is to separate the content from desired text characteristics such as sentiment. We leverage refined alignment of latent representations across mono-lingual text corpora with different characteristics. We deliberately modify encoded examples according to their characteristics, requiring the reproduced instances to match available examples with the altered characteristics as a population. We demonstrate the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order.
• We study causal inference in a multi-environment setting, in which the functional relations producing the variables from their direct causes remain the same across environments, while the distribution of exogenous noises may vary. We introduce the idea of using the invariance of these functional relations across a set of environments. We define a notion of completeness for a causal inference algorithm in this setting and prove the existence of such an algorithm by proposing a baseline algorithm. Additionally, we present an alternate algorithm that has significantly improved computational and sample complexity compared to the baseline algorithm. Experimental results show that the proposed algorithm outperforms existing algorithms.
• A photodetector may be characterized by various figures of merit such as response time, bandwidth, dark count rate, efficiency, wavelength resolution, and photon-number resolution. On the other hand, quantum theory says that any measurement device is fully described by its POVM, which stands for Positive-Operator-Valued Measure, and which generalizes the textbook notion of the eigenstates of the appropriate Hermitian operator (the "observable") as measurement outcomes. Here we show how to define a multitude of photodetector figures of merit in terms of a given POVM. We distinguish classical and quantum figures of merit and issue a conjecture regarding trade-off relations between them. We discuss the relationship between POVM elements and photodetector clicks, and how models of photodetectors may be tested by measuring either POVM elements or figures of merit. Finally, the POVM is advertised as an excellent platform-independent way of comparing different types of photodetectors, since any such POVM refers to the Hilbert space of the incoming light, and not to any Hilbert space internal to the detector.
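As a minimal concrete example of the POVM formalism (our own illustration, not taken from the paper), the following numpy sketch builds the three-outcome "trine" measurement on a qubit and checks the two defining properties, positivity and completeness:

```python
import numpy as np

# Trine POVM: three elements proportional to projectors onto real qubit
# states 120 degrees apart on the Bloch circle. A detector outcome k is
# modeled by the operator E_k; the constraints are E_k >= 0, sum_k E_k = I.
def trine_state(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

povm = []
for m in range(3):
    psi = trine_state(2 * np.pi * m / 3)
    povm.append((2 / 3) * np.outer(psi, psi))
```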
• We present a semiclassical study of the spectrum of a few-body system consisting of two short-range interacting bosonic particles in one dimension, a particular case of a general class of integrable many-body systems where the energy spectrum is given by the solution of algebraic transcendental equations. By an exact mapping between $\delta$-potentials and boundary conditions on the few-body wavefunctions, we are able to extend previous semiclassical results for single-particle systems with mixed boundary conditions to the two-body problem. The semiclassical approach allows us to derive explicit analytical results for the smooth part of the two-body density of states that are in excellent agreement with numerical calculations. It further enables us to include the effect of bound states in the attractive case. Remarkably, for the particular case of two particles in one dimension, the discrete energy levels obtained through a requantization condition of the smooth density of states are essentially in perfect agreement with the exact ones.
• The batch exponentiated gradient (EG) method provides a principled approach to convex smooth minimization on the probability simplex or the space of quantum density matrices. However, it is not always guaranteed to converge. Existing convergence analyses of the EG method require certain quantitative smoothness conditions on the loss function, e.g., Lipschitz continuity of the loss function or its gradient, but those conditions may not hold in important applications. In this paper, we prove that the EG method with Armijo line search always converges for any convex loss function with a locally Lipschitz continuous gradient. Because of our convergence guarantee, the EG method with Armijo line search becomes the fastest guaranteed-to-converge algorithm for maximum-likelihood quantum state estimation on the real datasets we have tested.
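A minimal sketch of the method on a toy problem (our own choice, not the paper's quantum-state-estimation experiments): exponentiated gradient steps with Armijo backtracking, minimizing a multinomial negative log-likelihood over the probability simplex, whose known minimizer is the normalized count vector:

```python
import numpy as np

w = np.array([5.0, 2.0, 3.0])  # observed counts (toy data)

def loss(x):
    return -np.sum(w * np.log(x))

def grad(x):
    return -w / x

def eg_step(x, eta):
    # Multiplicative (exponentiated gradient) update, renormalized.
    z = -eta * grad(x)
    y = x * np.exp(z - z.max())  # shift the exponent for numerical stability
    return y / y.sum()

x = np.ones(3) / 3
for _ in range(500):
    g = grad(x)
    eta = 1.0
    # Armijo backtracking: halve the step until sufficient decrease holds.
    while True:
        x_new = eg_step(x, eta)
        if loss(x_new) <= loss(x) + 0.5 * g @ (x_new - x) or eta < 1e-12:
            break
        eta *= 0.5
    x = x_new
```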
• We present a technique for learning control Lyapunov (potential) functions, which are used in turn to synthesize controllers for nonlinear dynamical systems. The learning framework uses a demonstrator that implements a black-box, untrusted strategy presumed to solve the problem of interest, a learner that poses finitely many queries to the demonstrator to infer a candidate function, and a verifier that checks whether the current candidate is a valid control Lyapunov function. The overall learning framework is iterative, eliminating a set of candidates on each iteration using the counterexamples discovered by the verifier and the demonstrations over these counterexamples. We prove its convergence using ellipsoidal approximation techniques from convex optimization. We also implement this scheme using nonlinear MPC controllers to serve as demonstrators for a set of state and trajectory stabilization problems for nonlinear dynamical systems. Our approach is able to synthesize relatively simple polynomial control Lyapunov functions, and in that process replace the MPC with a guaranteed and computationally less expensive controller.
• Despite recent advances in 3D pose estimation of human hands, especially thanks to the advent of CNNs and depth cameras, this task is still far from being solved. This is mainly due to the highly non-linear dynamics of fingers, which makes hand model training a challenging task. In this paper, we exploit a novel hierarchical tree-like structured CNN, in which branches are trained to become specialized in predefined subsets of hand joints, called local poses. We further fuse local pose features, extracted from hierarchical CNN branches, to learn higher-order dependencies among joints in the final pose by end-to-end training. Lastly, the loss function is defined to incorporate appearance and physical constraints on feasible hand motion and deformation. Experimental results suggest that feeding a tree-shaped CNN, specialized in local poses, into a fusion network for modeling joint correlations helps to increase the precision of final estimates, outperforming state-of-the-art results on the NYU and MSRA datasets.
• (May 29 2017, cs.CV, arXiv:1705.09602v1) Object tracking is an essential task in computer vision that has been studied since the early days of the field. Being able to follow objects that undergo different transformations in the video sequence, including changes in scale, illumination, shape and occlusions, makes the problem extremely difficult. One of the real challenges is to keep track of the changes in object appearance and not drift towards the background clutter. Different from previous approaches, we obtain robustness against background clutter with a tracker model that is composed of many different parts. They are classifiers that respond at different scales and locations. The tracker system functions as a society of parts, each having its own role and level of credibility. Reliable classifiers decide the tracker's next move, while newcomers are first monitored before gaining the necessary level of reliability to participate in the decision process. Some parts that lose their consistency are rejected, while others that show consistency for a sufficiently long time are promoted to permanent roles. The tracker system, as a whole, could also go through different phases, from the usual, normal functioning to states of weak agreement and even crisis. The tracker system has different governing rules in each state. What truly distinguishes our work from others is not necessarily the strength of individual tracking parts, but the way in which they work together and build a strong and robust organization. We also propose an efficient way to learn many tracking parts simultaneously, with a single closed-form formulation. We obtain a fast and robust tracker with state-of-the-art performance on the challenging OTB50 dataset.
• As a competitive alternative to least squares regression, quantile regression is popular in analyzing heterogeneous data. For a quantile regression model specified for one single quantile level $\tau$, the major difficulties of semiparametric efficient estimation are the unavailability of a parametric efficient score and the conditional density estimation. In this paper, with the help of the least favorable submodel technique, we first derive the semiparametric efficient scores for linear quantile regression models that are assumed for a single quantile level, multiple quantile levels and all the quantile levels in $(0,1)$ respectively. Our main discovery is a one-step (nearly) semiparametric efficient estimator for the regression coefficients of quantile regression models assumed for multiple quantile levels, which has several advantages: it can be regarded as an optimal way to pool information across multiple/other quantiles for efficiency gain; it is computationally feasible and easy to implement, as the initial estimator is easily available; and, due to the nature of the quantile regression models under investigation, the conditional density estimation is straightforward by plugging in an initial estimator. The resulting estimator is proved to achieve the corresponding semiparametric efficiency lower bound under regularity conditions. Numerical studies, including simulations and an example on birth weight of children, confirm that the proposed estimator leads to higher efficiency compared with the Koenker-Bassett quantile regression estimator for all quantiles of interest.
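The basic object here is the check (pinball) loss underlying the Koenker-Bassett estimator. As a small illustration (ours, not the paper's), minimizing it over a constant recovers an empirical tau-quantile; with covariates, the same loss is minimized over linear predictors:

```python
import numpy as np

# Check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0}).
def check_loss(u, tau):
    return np.sum(u * (tau - (u < 0)))

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
tau = 0.25

# A minimizer of the check loss over a constant is attained at a sample
# point, so a search over the data suffices for this illustration.
best = min(y, key=lambda q: check_loss(y - q, tau))
```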
• Vasculature is known to be of key biological significance, especially in the study of cancer. As such, considerable effort has been focused on the automated measurement and analysis of vasculature in medical and pre-clinical images. In tumors in particular, the vascular networks may be extremely irregular and the appearance of the individual vessels may not conform to classical descriptions of vascular appearance. Typically, vessels are extracted by either a segmentation and thinning pipeline, or by direct tracking. Neither of these methods is well suited to microscopy images of tumor vasculature. In order to address this we propose a method to directly extract a medial representation of the vessels using Convolutional Neural Networks. We then show that these two-dimensional centerlines can be meaningfully extended into 3D in anisotropic and complex microscopy images using the recently popularized Convolutional Long Short-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this hybrid convolutional-recurrent architecture over both 2D and 3D convolutional comparators.
• We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in both accuracy and speed. The performance of a deep network is known to improve as the number of feature maps increases. However, it is difficult to improve performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing the weights in the classifier networks, a property by which training can be faster with better generalization power. For the Pascal VOC 2007 test set trained with the VOC 2007 and VOC 2012 training sets, the proposed network with an input size of 300 x 300 achieved 78.5% mAP (mean average precision) at a speed of 35.0 FPS (frames per second), while the network with a 512 x 512 sized input achieved 80.8% mAP at 16.6 FPS using an Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN, and it is faster than Faster-RCNN and RFCN.
• (May 29 2017, cs.CL, arXiv:1705.09585v1) Individuals on social media may reveal themselves to be in various states of crisis (e.g. suicide, self-harm, abuse, or eating disorders). Detecting crisis from social media text automatically and accurately can have profound consequences. However, detecting a general state of crisis without explaining why has limited applications. An explanation in this context is a coherent, concise subset of the text that rationalizes the crisis detection. We explore several methods to detect and explain crisis using a combination of neural and non-neural techniques. We evaluate these techniques on a unique data set obtained from Koko, an anonymous emotional support network available through various messaging applications. We annotate a small subset of the crisis-labeled samples with corresponding explanations. Our best technique significantly outperforms the baseline for both detection and explanation.
• Many data producers seek to provide users access to confidential data without unduly compromising data subjects' privacy and confidentiality. When intense redaction is needed to do so, one general strategy is to require users to do analyses without seeing the confidential data, for example, by releasing fully synthetic data or by allowing users to query remote systems for disclosure-protected outputs of statistical models. With fully synthetic data or redacted outputs, the analyst never really knows how much to trust the resulting findings. In particular, if the user did the same analysis on the confidential data, would regression coefficients of interest be statistically significant or not? We present algorithms for assessing this question that satisfy differential privacy. We describe conditions under which the algorithms should give accurate answers about statistical significance. We illustrate the properties of the methods using artificial and genuine data.
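For context (our own illustration, not the paper's algorithms), the standard building block for differentially private releases of the kind described above is the Laplace mechanism, which perturbs a statistic with noise scaled to its sensitivity divided by the privacy budget epsilon:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, sens, epsilon):
    # Release value + Laplace(sens / epsilon) noise: epsilon-DP for a
    # statistic whose value changes by at most `sens` when one record changes.
    return value + rng.laplace(scale=sens / epsilon)

# Example: privately release a proportion computed from n records. Changing
# one record moves the proportion by at most 1/n, so the sensitivity is 1/n.
n = 1000
true_prop = 0.37
private_prop = laplace_mechanism(true_prop, sens=1 / n, epsilon=1.0)
```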
• (May 29 2017, stat.ML cs.AI cs.CV cs.LG, arXiv:1705.09558v1) Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs, in conjunction with stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as feature matching or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode collapse, produces interpretable candidate samples with notable variability, and in particular provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.
• Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we propose the first quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved models). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exist shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties.
• The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary. Through a systematic empirical investigation, we show that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions. We further draw an essential connection between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations in the inputs, and the curvature of their decision boundary. The directions where the decision boundary is curved in fact remarkably characterize the directions to which the classifier is the most vulnerable. We finally leverage a fundamental asymmetry in the curvature of the decision boundary of deep nets, and propose a method to discriminate between original images, and images perturbed with small adversarial examples. We show the effectiveness of this purely geometric approach for detecting small adversarial perturbations in images, and for recovering the labels of perturbed images.
• Purpose: Atrial fibrillation (AF) is the most common cardiac arrhythmia and is correlated with increased morbidity and mortality. It is associated with atrial fibrosis, which may be assessed non-invasively using late gadolinium-enhanced (LGE) magnetic resonance imaging (MRI), where scar tissue is visualised as a region of signal enhancement. In this study, we propose a novel fully automatic pipeline to achieve accurate and objective atrial scarring segmentation and assessment of LGE MRI scans for AF patients. Methods: Our fully automatic pipeline uniquely combines: (1) a multi-atlas based whole heart segmentation (MA-WHS) to determine the cardiac anatomy from an MRI Roadmap acquisition, which is then mapped to LGE MRI, and (2) a super-pixel and supervised learning based approach to delineate the distribution and extent of atrial scarring in LGE MRI. Results: Both our MA-WHS and atrial scarring segmentation showed accurate delineations of cardiac anatomy (mean Dice = 89%) and atrial scarring (mean Dice = 79%) respectively, compared to the established ground truth from manual segmentation. Compared with previously studied methods involving manual interventions, our pipeline demonstrated comparable results, but was computed fully automatically. Conclusion: The proposed segmentation methods allow LGE MRI to be used as an objective assessment tool for localisation, visualisation and quantification of atrial scarring.
• The Bonferroni adjustment, or the union bound, is commonly used to study rate optimality properties of statistical methods in high-dimensional problems. However, in practice, the Bonferroni adjustment is overly conservative. Extreme value theory has been shown to provide more accurate multiplicity adjustments in a number of settings, but only on an ad hoc basis. Recently, Gaussian approximation has been used to justify bootstrap adjustments in large-scale simultaneous inference in some general settings when $n \gg (\log p)^7$, where $p$ is the multiplicity of the inference problem and $n$ is the sample size. The thrust of this theory is the validity of the Gaussian approximation for maxima of sums of independent random vectors in high dimension. In this paper, we reduce the sample size requirement to $n \gg (\log p)^5$ for the consistency of the empirical bootstrap and the multiplier/wild bootstrap in the Kolmogorov-Smirnov distance, possibly in the regime where the Gaussian approximation is not available. New comparison and anti-concentration theorems, which are of considerable interest in and of themselves, are developed, as existing ones, interwoven with the Gaussian approximation, are no longer applicable.
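The object under study can be simulated directly. The following numpy sketch (our own illustration, with arbitrary toy dimensions) computes a multiplier-bootstrap critical value for the maximum of normalized sums:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))  # toy data: n i.i.d. p-dimensional vectors

# Statistic of interest: the maximum (in absolute value) of normalized sums.
stat = np.max(np.abs(X.sum(axis=0))) / np.sqrt(n)

# Multiplier (wild) bootstrap: re-weight centered rows with N(0,1) draws.
Xc = X - X.mean(axis=0)
boot = []
for _ in range(500):
    e = rng.standard_normal(n)
    boot.append(np.max(np.abs(e @ Xc)) / np.sqrt(n))

# Bootstrap estimate of the 95% critical value for the max statistic.
crit = np.quantile(boot, 0.95)
```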
• In several physical systems, important properties characterizing the system itself are theoretically related to specific degrees of freedom. Although standard Monte Carlo (MC) simulations provide an effective tool to accurately reconstruct the physical configurations of the system, they are unable to isolate the different contributions corresponding to different degrees of freedom. Here we show that unsupervised deep learning can become a valid support to MC simulation, coupling useful insights in the phase-detection task with good reconstruction performance. As a testbed we consider the 2D XY model, showing that a deep neural network based on variational autoencoders can detect the continuous Kosterlitz-Thouless (KT) transition, and that, if endowed with the appropriate constraints, it generates configurations with meaningful physical content.
• Graph-based methods have been quite successful in solving unsupervised and semi-supervised learning problems, as they provide a means to capture the underlying geometry of the dataset. It is often desirable for the constructed graph to satisfy two properties: first, data points that are similar in the feature space should be strongly connected on the graph, and second, the class label information should vary smoothly with respect to the graph, where smoothness is measured using the spectral properties of the graph Laplacian matrix. Recent works have justified some of these smoothness conditions by showing that they are strongly linked to the semi-supervised smoothness assumption and its variants. In this work, we reinforce this connection by viewing the problem from a graph sampling theoretic perspective, where class indicator functions are treated as bandlimited graph signals (in the eigenvector basis of the graph Laplacian) and label prediction as a bandlimited reconstruction problem. Our approach involves analyzing the bandwidth of class indicator signals generated from statistical data models with separable and nonseparable classes. These models are quite general and mimic the nature of most real-world datasets. Our results show that in the asymptotic limit, the bandwidth of any class indicator is also closely related to the geometry of the dataset. This allows one to theoretically justify the assumption of bandlimitedness of class indicator signals, thereby providing a sampling theoretic interpretation of graph-based semi-supervised classification.
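The bandlimitedness viewpoint is easy to see on a toy graph (our own example, not from the paper): for two weakly linked clusters, a class indicator signal concentrates its energy on the lowest Laplacian eigenvectors:

```python
import numpy as np

# Two triangles joined by one weak edge: a toy graph with two clusters.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.1  # weak inter-cluster link

L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order

f = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # class indicator signal
coef = vecs.T @ f                              # graph Fourier coefficients

# Fraction of the signal's energy on the two lowest-frequency eigenvectors.
energy_low = np.sum(coef[:2] ** 2) / np.sum(coef ** 2)
```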
• We study the complexity of computing the VC Dimension and Littlestone's Dimension. Given an explicit description of a finite universe and a concept class (a binary matrix whose $(x,C)$-th entry is $1$ iff element $x$ belongs to concept $C$), both can be computed exactly in quasi-polynomial time ($n^{O(\log n)}$). Assuming the randomized Exponential Time Hypothesis (ETH), we prove nearly matching lower bounds on the running time, that hold even for approximation algorithms.
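The explicit input model makes the brute-force computation easy to state. The sketch below (ours, not the paper's algorithm) computes the VC dimension of a concept-class matrix by searching for shattered sets, using the fact that shattering is downward closed; since shattering $d$ elements requires at least $2^d$ concepts, the search never needs sets larger than $\log_2 m$, which is the source of the quasi-polynomial running time mentioned above:

```python
from itertools import combinations

# Brute-force VC dimension of a concept class given as a binary matrix M,
# where M[x][c] = 1 iff element x belongs to concept c. A set S of elements
# is shattered if the concepts induce all 2^|S| labelings on it.
def vc_dimension(M):
    n, m = len(M), len(M[0])
    vc = 0
    for size in range(1, n + 1):
        found = False
        for S in combinations(range(n), size):
            patterns = {tuple(M[x][c] for x in S) for c in range(m)}
            if len(patterns) == 2 ** size:
                found = True
                break
        if not found:
            break  # shattering is downward closed, so no larger set works
        vc = size
    return vc

# Concept class of all singletons over a 3-element universe: VC dimension 1.
M = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
```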
• Biomedical events describe complex interactions between various biomedical entities. An event trigger is a word or phrase which typically signifies the occurrence of an event. Event trigger identification is an important first step in all event extraction methods. However, many of the current approaches either rely on complex hand-crafted features or consider features only within a window. In this paper we propose a method that takes advantage of a recurrent neural network (RNN) to extract higher-level features present across the sentence. The hidden-state representation of the RNN, along with word and entity-type embeddings as features, thus avoids reliance on the complex hand-crafted features generated using various NLP toolkits. Our experiments achieve a state-of-the-art F1-score on the Multi Level Event Extraction (MLEE) corpus. We have also performed a category-wise analysis of the results and discussed the importance of various features in the trigger identification task.
• This paper addresses the problem of automatic speech recognition (ASR) error detection and its use for improving spoken language understanding (SLU) systems. In this study, the SLU task consists in automatically extracting, from ASR transcriptions, semantic concepts and concept/value pairs in, e.g., a touristic information system. An approach is proposed for enriching the set of semantic labels with error-specific labels, and for using a recently proposed neural approach based on word embeddings to compute well-calibrated ASR confidence measures. Experimental results are reported showing that it is possible to significantly decrease the Concept/Value Error Rate with a state-of-the-art system, outperforming previously published performance on the same experimental data. It is also shown that, by combining an SLU approach based on conditional random fields with a neural encoder/decoder attention-based architecture, it is possible to effectively identify confidence islands and uncertain semantic output segments, useful for deciding appropriate error handling actions by the dialogue manager strategy.
• It is known that the Cardassian universe is successful in describing the accelerated expansion of the universe, but its dynamical equations are hard to derive from an action principle. In this paper, we establish a connection between the Cardassian universe and $f(T, \mathcal{T})$ gravity, where $T$ is the torsion scalar and $\mathcal{T}$ is the trace of the matter energy-momentum tensor. For dust matter, we find that the modified Friedmann equations of $f(T, \mathcal{T})$ gravity can correspond to those of the Cardassian models, and thus a possible origin of the Cardassian universe is given. We obtain the original Cardassian model, the modified polytropic Cardassian model, and the exponential Cardassian model from Lagrangians of $f(T,\mathcal{T})$ theory. Furthermore, by adding an additional term to the corresponding Lagrangians, we give three generalized Cardassian models from $f(T,\mathcal{T})$ theory. Using observational data from type Ia supernovae, the cosmic microwave background radiation, and baryon acoustic oscillations, we obtain fits for the cosmological parameters and constraints on the model parameters of all these models.
• Chapter 7 in High-Luminosity Large Hadron Collider (HL-LHC): Preliminary Design Report. The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value, and the integrated luminosity (total collisions created) by a factor of ten. The LHC is already a highly complex and exquisitely optimised machine, so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation, and 300-metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.
• Traditional approaches to stereo visual SLAM rely on point features to estimate the camera trajectory and build a map of the environment. In low-textured environments, though, it is often difficult to find a sufficient number of reliable point features and, as a consequence, the performance of such algorithms degrades. This paper proposes PL-SLAM, a stereo visual SLAM system that combines both points and line segments to work robustly in a wider variety of scenarios, particularly in those where point features are scarce or poorly distributed in the image. PL-SLAM leverages both points and segments at all stages of the process: visual odometry, keyframe selection, bundle adjustment, etc. We also contribute a loop-closure procedure based on a novel bag-of-words approach that exploits the combined descriptive power of the two kinds of features. Additionally, the resulting map is richer and more diverse in 3D elements, which can be exploited to infer valuable, high-level scene structures such as planes, empty spaces, and the ground plane (not addressed in this work). Our proposal has been tested on several popular datasets (such as KITTI and EuRoC) and compared to state-of-the-art methods such as ORB-SLAM, revealing superior performance in most of the experiments while still running in real time. An open-source version of the PL-SLAM C++ code will be released for the benefit of the community.
• Automatically learning features, especially robust features, has attracted much attention in the machine learning community. In this paper, we propose a new method to learn non-linear robust features by taking advantage of the data manifold structure. We first follow the common trick of the trade, namely learning robust features from artificially corrupted data, i.e., training samples with manually injected noise. Following the idea of the auto-encoder, we assume that good features should contain enough information to reconstruct the input well from its corrupted copies. However, merely reconstructing the clean input from its noisy copies can make the data manifold in the feature space noisy. To address this problem, we propose a new method, called Incremental Auto-Encoders, to iteratively denoise the extracted features. We assume the noisy manifold structure is caused by a diffusion process, and consequently reverse this diffusion process to further contract the noisy manifold, which results in an incremental optimization of the model parameters. Furthermore, we show that these learned non-linear features can be stacked into a hierarchy of features. Experimental results on real-world datasets demonstrate that the proposed method achieves better classification performance.
• Predicting human interaction is challenging, as the on-going activity has to be inferred from a partially observed video. Essentially, a good algorithm should effectively model the mutual influence between the two interacting subjects; moreover, only a small region in the scene is discriminative for identifying the on-going interaction. In this work, we propose a relative attention model to explicitly address these difficulties. Built on a tri-coupled deep recurrent structure representing both interacting subjects and the global interaction status, the proposed network collects spatio-temporal information from each subject, rectified with global interaction information, yielding an effective interaction representation. Moreover, the proposed network incorporates an attention module to assign higher importance to the regions relevant to the on-going action. Extensive experiments have been conducted on two public datasets, and the results demonstrate that the proposed relative attention network successfully predicts informative regions between interacting subjects, which in turn yields superior human interaction prediction accuracy.
• May 29 2017 cs.SI arXiv:1705.09462v1
Compared to the well-studied topic of human mobility in real geographic space, very few studies focus on human mobility in virtual space, such as the space of interests, knowledge, and ideas. Yet such mobility relates to the management of public opinion, knowledge diffusion, and innovation. In this paper, we assume that the interests of a group of online users span a Euclidean space, called the interest space, and that transfers of user interests can be modeled as Lévy flights in this space. To account for interaction between users, we assume the random walkers are not independent but interact with each other indirectly via the digital resources in the interest space. The model successfully reproduces a set of scaling laws describing the growth of the attention flow networks of real online communities, and the ranges of the scaling exponents are similar to those of the empirical data. Further, we can infer parameters describing the individual behaviors of users from the scaling laws of the empirical attention flow network. Our model not only provides theoretical understanding of human online behavior but also has wide potential applications, such as the dissemination and management of public opinion, online recommendation, etc.
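As a toy illustration of the mobility model (not the authors' code), here is a minimal Lévy flight in a two-dimensional "interest space" with Pareto-distributed step lengths. The exponent `alpha` and minimum step `s_min` are invented for the example, and the paper's indirect interaction between walkers via digital resources is omitted:

```python
import math, random

def levy_step(s_min=1.0, alpha=1.5, rng=random):
    # Pareto-distributed step length: P(S > s) = (s / s_min)^(-alpha),
    # sampled by inverse transform; direction is isotropic.
    u = rng.random()
    length = s_min * (1.0 - u) ** (-1.0 / alpha)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return length * math.cos(theta), length * math.sin(theta)

def levy_flight(n_steps, s_min=1.0, alpha=1.5, seed=0):
    """Trajectory of one walker in a 2-D interest space."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        dx, dy = levy_step(s_min, alpha, rng)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = levy_flight(1000)
```

The heavy-tailed step distribution is what produces occasional long jumps between interest clusters, in contrast to a Gaussian random walk.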
• Watermarking techniques have been proposed over the last 10 years as an approach to tracing network flows for intrusion detection purposes. These techniques aim to impress a hidden signature on a traffic flow. A central property of network flow watermarking is invisibility, i.e., the ability to go unidentified by an unauthorized third party. Although widely sought after, the development of an invisible watermark is a challenging task that has not yet been accomplished. In this paper we take a step forward in addressing the invisibility problem with DROPWAT, an active network flow watermarking technique developed for tracing Internet flows directed to the staging server that is the final destination in a data exfiltration attack, even in the presence of several intermediate stepping stones or an anonymity network. DROPWAT is a timing-based technique that indirectly modifies interpacket delays by exploiting network reaction to packet loss. We empirically demonstrate that the watermark embedded by means of DROPWAT is invisible to a third party observing the watermarked traffic. We also validate DROPWAT and analyze its performance in a controlled experimental framework on the Internet, using web proxy servers running on several Amazon Web Services instances as stepping stones, as well as the Tor anonymity network in place of the stepping stones. Our results show that the detection algorithm is able to identify an embedded watermark with over 95% accuracy while remaining invisible.
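To make the mechanism concrete, here is a hand-rolled toy simulation (not the authors' implementation) of the key idea: dropping selected packets inflates the following inter-packet delay via retransmission, and a detector recovers the watermark bits by thresholding those delays. The bit positions, delay constants, and threshold are all invented for the example:

```python
import random

BIT_POSITIONS = [5, 12, 20, 33, 41]   # hypothetical packet positions carrying bits
BASE_GAP = 0.05                        # nominal inter-packet gap (seconds)
RETX_DELAY = 0.20                      # extra delay added by loss + retransmission

def embed(n_packets, bits, rng):
    """Return arrival timestamps; a '1' bit drops the packet at that position,
    so its retransmission inflates the observed inter-packet delay."""
    t, times = 0.0, []
    drops = {p for p, b in zip(BIT_POSITIONS, bits) if b == 1}
    for i in range(n_packets):
        t += BASE_GAP + rng.uniform(0, 0.005)   # small network jitter
        if i in drops:
            t += RETX_DELAY
        times.append(t)
    return times

def detect(times, threshold=0.15):
    """Recover bits by thresholding the delay observed before each bit position."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    gaps.insert(0, times[0])
    return [1 if gaps[p] > threshold else 0 for p in BIT_POSITIONS]

rng = random.Random(42)
bits = [1, 0, 1, 1, 0]
times = embed(60, bits, rng)
recovered = detect(times)
```

The real technique works indirectly through the network's own reaction to induced loss (which is what makes it hard to observe), rather than by directly delaying packets as this sketch does.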
• Chapter 2 in High-Luminosity Large Hadron Collider (HL-LHC): Preliminary Design Report. (The general HL-LHC description accompanying this entry is identical to that of the Chapter 7 entry above.)
• Online music services are increasing in popularity. They enable us to analyze people's music listening behavior based on play logs. Although it is known that people listen to music based on topic (e.g., rock or jazz), we assume that when a user is addicted to an artist, s/he chooses the artist's songs regardless of topic. Based on this assumption, in this paper, we propose a probabilistic model to analyze people's music listening behavior. Our main contributions are three-fold. First, to the best of our knowledge, this is the first study modeling music listening behavior by taking into account the influence of addiction to artists. Second, by using real-world datasets of play logs, we showed the effectiveness of our proposed model. Third, we carried out qualitative experiments and showed that taking addiction into account enables us to analyze music listening behavior from a new viewpoint in terms of how people listen to music according to the time of day, how an artist's songs are listened to by people, etc. We also discuss the possibility of applying the analysis results to applications such as artist similarity computation and song recommendation.
• Identifying the underlying models in a set of data points contaminated by noise and outliers leads to a highly complex multi-model fitting problem. This problem can be posed as a clustering problem by projecting higher-order affinities between data points onto a graph, which can then be clustered using spectral clustering. Calculating all possible higher-order affinities is computationally expensive, hence in most cases only a subset is used. In this paper, we propose an effective sampling method to obtain a highly accurate approximation of the full graph required to solve multi-structural model fitting problems in computer vision. The proposed method is based on the observation that the usefulness of a graph for segmentation improves as the distribution of hypotheses (used to build the graph) approaches the distribution of actual parameters for the given data. We approximate this actual parameter distribution using a k-th order statistics based cost function, and the samples are generated using a greedy algorithm coupled with a data sub-sampling strategy. The experimental analysis shows that the proposed method is both accurate and computationally efficient compared to state-of-the-art robust multi-model fitting techniques. The code is publicly available from https://github.com/RuwanT/model-fitting-cbs.
• Trajectory prediction of dynamic objects is a widely studied topic in the field of artificial intelligence. Thanks to a large number of applications, such as predicting abnormal events and navigation systems for the blind, there have been many attempts to learn patterns of motion directly from data, using a wide variety of techniques ranging from hand-crafted features to sophisticated deep learning models for unsupervised feature learning. All of these approaches have been limited by problems such as inefficient hand-crafted features, large error propagation across the predicted trajectory, and the lack of information about static artefacts around the dynamic moving objects. We propose an end-to-end deep learning model that learns the motion patterns of humans in different navigational modes directly from data, using the popular sequence-to-sequence model coupled with a soft attention mechanism. We also propose a novel approach to model the static artefacts in a scene and use them to predict the dynamic trajectories. The proposed method, tested on trajectories of pedestrians, consistently outperforms previously proposed state-of-the-art approaches on a variety of large-scale data sets. We also show how our architecture can be naturally extended to handle multiple modes of movement (say, pedestrians, skaters, bikers, and buses) simultaneously.
• We present a deep learning framework for computer-aided lung cancer diagnosis. Our multi-stage framework detects nodules in 3D lung CAT scans, determines if each nodule is malignant, and finally assigns a cancer probability based on these results. We discuss the challenges and advantages of our framework. In the Kaggle Data Science Bowl 2017, our framework ranked 41st out of 1972 teams.
• May 29 2017 cs.DB arXiv:1705.09427v1
Data-driven workflows, of which IBM's Business Artifacts are a prime exponent, have been successfully deployed in practice, adopted in industrial standards, and have spawned a rich body of research in academia, focused primarily on static analysis. In previous work we obtained theoretical results on the verification of a rich model incorporating core elements of IBM's successful Guard-Stage-Milestone (GSM) artifact model. The results showed decidability of verification of temporal properties of a large class of GSM workflows, and established its complexity. Following up on these results, the present paper presents a practical implementation of a verifier relying on Spin, the classical model-checking tool. The implementation includes nontrivial optimizations and achieves satisfactory performance on real-world business process examples. Our results shed light on the capabilities and limitations of off-the-shelf verifiers in the context of data-driven workflows.
• Saliency detection, finding the most important parts of an image, has become increasingly popular in computer vision. In this paper, we introduce Hierarchical Cellular Automata (HCA) -- a temporally evolving model to intelligently detect salient objects. HCA consists of two main components: Single-layer Cellular Automata (SCA) and Cuboid Cellular Automata (CCA). As an unsupervised propagation mechanism, Single-layer Cellular Automata can exploit the intrinsic relevance of similar regions through interactions with neighbors. Low-level image features as well as high-level semantic information extracted from deep neural networks are incorporated into the SCA to measure the correlation between different image patches. With these hierarchical deep features, an impact factor matrix and a coherence matrix are constructed to balance the influences on each cell's next state. The saliency values of all cells are iteratively updated according to a well-defined update rule. Furthermore, we propose CCA to integrate multiple saliency maps generated by SCA at different scales in a Bayesian framework. Therefore, single-layer propagation and multi-layer integration are jointly modeled in our unified HCA. Surprisingly, we find that the SCA can improve all existing methods that we applied it to, resulting in a similar precision level regardless of the original results. The CCA can act as an efficient pixel-wise aggregation algorithm that can integrate state-of-the-art methods, resulting in even better results. Extensive experiments on four challenging datasets demonstrate that the proposed algorithm outperforms state-of-the-art conventional methods and is competitive with deep learning based approaches.
• In this paper, a 3D Convolutional Neural Network (3D-CNN) architecture has been utilized for text-independent speaker verification. At the development phase, a CNN is trained to classify speakers at the utterance-level. In the enrollment stage, the trained network is utilized to directly create a speaker model for each speaker based on the extracted features. Finally, in the evaluation phase, the extracted features from the test utterance will be compared to the stored speaker model to verify the claimed identity. One of the main challenges is the creation of the speaker models. Previously-reported approaches create speaker models based on averaging the extracted features from utterances of the speaker, which is known as a d-vector system. In our paper, we propose to use the 3D-CNNs for direct speaker model creation in which, for both development and enrollment phases, an identical number of speaker utterances is fed to the network for representing the speaker utterances and creation of the speaker model. This leads to simultaneously capturing the speaker-related information and building a more robust system to cope with within-speaker variation. We demonstrate that the proposed method significantly outperforms the d-vector verification system.
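As a toy illustration of the d-vector baseline the paper compares against (averaging per-utterance embeddings into a speaker model, then verifying a test utterance by cosine similarity), here is a minimal sketch. The embeddings, dimensionality, and threshold are invented for the example; the paper's 3D-CNN feature extractor is not modeled:

```python
import math

def average_embedding(utterance_embeddings):
    """d-vector style speaker model: the mean of per-utterance embeddings."""
    n = len(utterance_embeddings)
    dim = len(utterance_embeddings[0])
    return [sum(e[i] for e in utterance_embeddings) / n for i in range(dim)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def verify(test_embedding, speaker_model, threshold=0.8):
    """Accept the claimed identity if the test utterance's embedding
    is close enough to the enrolled speaker model."""
    return cosine(test_embedding, speaker_model) >= threshold

# Hypothetical enrollment utterance embeddings for one speaker.
enroll = [[1.0, 0.1, 0.0], [0.9, 0.0, 0.1], [1.1, 0.1, 0.1]]
model = average_embedding(enroll)
```

The paper's contribution replaces this post-hoc averaging with a network trained to build the speaker model directly from a fixed number of utterances.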
• Applying standard statistical methods after model selection may yield inefficient estimators and hypothesis tests that fail to achieve nominal type-I error rates. The main issue is the fact that the post-selection distribution of the data differs from the original distribution. In particular, the observed data is constrained to lie in a subset of the original sample space that is determined by the selected model. This often makes the post-selection likelihood of the observed data intractable and maximum likelihood inference difficult. In this work, we get around the intractable likelihood by generating noisy unbiased estimates of the post-selection score function and using them in a stochastic ascent algorithm that yields correct post-selection maximum likelihood estimates. We apply the proposed technique to the problem of estimating linear models selected by the lasso. In an asymptotic analysis the resulting estimates are shown to be consistent for the selected parameters and to have a limiting truncated normal distribution. Confidence intervals constructed based on the asymptotic distribution obtain close to nominal coverage rates in all simulation settings considered, and the point estimates are shown to be superior to the lasso estimates when the true model is sparse.
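The core idea here (stochastic ascent on noisy unbiased estimates of the post-selection score) can be sketched on the simplest intractable-likelihood case: maximum likelihood for the mean of a unit-variance normal sample observed only when every value exceeds a cutoff. This is a hand-rolled toy, not the authors' lasso application; the data values, step sizes, and iteration counts are invented:

```python
import math, random

def sample_truncated(mu, c, rng):
    """Rejection-sample X ~ N(mu, 1) conditioned on X > c."""
    while True:
        x = rng.gauss(mu, 1.0)
        if x > c:
            return x

def noisy_score(data, mu, c, rng, m=8):
    """Unbiased estimate of the truncated log-likelihood's gradient in mu:
    sum_i (x_i - mu) - n * E[X - mu | X > c], with the (intractable)
    expectation replaced by a Monte Carlo average."""
    n = len(data)
    exp_term = sum(sample_truncated(mu, c, rng) - mu for _ in range(m)) / m
    return sum(x - mu for x in data) - n * exp_term

def post_selection_mle(data, c, iters=4000, seed=0):
    """Stochastic ascent with decreasing steps and Polyak averaging."""
    rng = random.Random(seed)
    mu, avg = 0.0, 0.0
    for t in range(1, iters + 1):
        step = 0.5 / (len(data) * math.sqrt(t))
        mu += step * noisy_score(data, mu, c, rng)
        avg += (mu - avg) / t
    return avg

data = [0.3, 0.8, 1.2, 0.5, 1.9, 0.7, 1.1, 0.4]  # hypothetical values, all > c = 0
mu_hat = post_selection_mle(data, c=0.0)
```

Because the selection event inflates the sample mean, the resulting estimate lies below the naive average of the data, as the truncation correction requires.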
• Blocks-based programming has become the lingua franca for introductory coding. Studies have found that experience with blocks-based programming can help beginners learn more traditional text-based languages. We explore how blocks environments improve learnability for novices by 1) favoring recognition over recall, 2) reducing cognitive load, and 3) preventing errors. Increased usability of blocks programming has led to widespread adoption within introductory programming contexts across a range of ages. Ongoing work explores further reducing barriers to programming, supporting novice programmers in expanding their programming skills, and transitioning to textual programming. New blocks frameworks are making it easier to access a variety of APIs through blocks environments, opening the doors to a greater diversity of programming domains and supporting greater experimentation for novices and professionals alike.
• For decades, optimization has played a central role in addressing wireless resource management problems such as power control and beamformer design. However, these algorithms often require a considerable number of iterations to converge, which poses challenges for real-time processing. In this work, we propose a new learning-based approach for wireless resource management. The key idea is to treat the input and output of a resource allocation algorithm as an unknown non-linear mapping and to use a deep neural network (DNN) to approximate it. If the non-linear mapping can be learned accurately and effectively by a DNN of moderate size, then such a DNN can be used for resource allocation in almost real time, since passing the input through a DNN to get the output requires only a small number of simple operations. In this work, we first characterize a class of 'learnable algorithms' and then design DNNs to approximate some algorithms of interest in wireless communications. We use extensive numerical simulations to demonstrate the superior ability of DNNs to approximate two considerably complex algorithms designed for power allocation in wireless transmit signal design, while giving orders-of-magnitude speedup in computational time.
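A minimal sketch of the "learn the mapping" idea: train a tiny one-hidden-layer network to imitate an allocation rule's input-output behavior. The target function below is a hypothetical stand-in for an iterative algorithm's mapping, not one of the paper's power-allocation algorithms, and all hyperparameters are invented:

```python
import math, random

def target_alloc(g):
    """Hypothetical stand-in for an iterative allocation algorithm's output."""
    return g / (1.0 + g)

def train_mlp(samples, hidden=8, lr=0.05, epochs=2000, seed=0):
    """Fit a one-hidden-layer tanh network to (input, output) pairs by SGD."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    for _ in range(epochs):
        for x, y in samples:
            h, yhat = forward(x)
            err = yhat - y                       # dL/dyhat for L = 0.5*(yhat-y)^2
            for j in range(hidden):
                dh = err * w2[j] * (1 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * dh * x
                b1[j] -= lr * dh
            b2 -= lr * err
    return lambda x: forward(x)[1]

samples = [(i / 10.0, target_alloc(i / 10.0)) for i in range(1, 41)]
approx = train_mlp(samples)
```

Once trained, evaluating `approx` is a handful of multiply-adds per input, which is the source of the claimed real-time speedup over running the iterative algorithm itself.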
• K-Nearest Neighbours (k-NN) is a popular classification and regression algorithm, yet one of its main limitations is the difficulty of choosing the number of neighbours. We present a Bayesian algorithm to compute the posterior probability distribution over k given a target point within a data-set, efficiently and without the use of Markov Chain Monte Carlo (MCMC) methods or simulation, alongside an exact solution for distributions within the exponential family. The central idea is that the data points around the target are generated by the same probability distribution, extending outwards over an appropriate, though unknown, number of neighbours. Once the data is projected onto a distance metric of choice, the choice of k can be transformed into a change-point detection problem, for which there is an efficient solution: we recursively compute the probability of the last change-point as we move towards the target, and thus de facto compute the posterior probability distribution over k. Applying this approach to both classification and regression UCI data-sets, we compare favourably and, most importantly, by removing the need for simulation, we are able to compute the posterior probability of k exactly and rapidly. As an example, the computational time for the Ripley data-set is a few milliseconds, compared to a few hours with an MCMC approach.
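A simplified sketch of the change-point view: sort the neighbours of the target by distance, assume the first k labels share one Bernoulli rate and the rest another, and score each k by the product of Beta-Bernoulli marginal likelihoods of the two runs. This is a single-change-point toy with a uniform prior on k, not the paper's exact recursive algorithm, and the label sequence is invented:

```python
import math

def log_marginal(successes, failures, a=1.0, b=1.0):
    """Log Beta-Bernoulli marginal likelihood of a run of binary labels
    under a Beta(a, b) prior on the Bernoulli rate."""
    return (math.lgamma(a + successes) + math.lgamma(b + failures)
            + math.lgamma(a + b)
            - math.lgamma(a) - math.lgamma(b)
            - math.lgamma(a + b + successes + failures))

def posterior_over_k(labels_by_distance):
    """Posterior over k: the first k neighbours share one Bernoulli rate,
    the remaining points another (single change-point, uniform prior on k)."""
    n = len(labels_by_distance)
    logs = []
    for k in range(1, n):
        s1 = sum(labels_by_distance[:k]); f1 = k - s1
        s2 = sum(labels_by_distance[k:]); f2 = (n - k) - s2
        logs.append(log_marginal(s1, f1) + log_marginal(s2, f2))
    mx = max(logs)                      # log-sum-exp for numerical stability
    w = [math.exp(l - mx) for l in logs]
    z = sum(w)
    return {k: w[k - 1] / z for k in range(1, n)}

labels = [1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0]  # neighbours sorted by distance
post = posterior_over_k(labels)
```

The Beta-Bernoulli marginals are available in closed form, which is what makes the posterior exact and simulation-free, mirroring the exponential-family claim in the abstract.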

Eddie Smolansky May 26 2017 05:23 UTC

Updated summary [here](https://github.com/eddiesmo/papers).

# How they made the dataset
- automated filtering with YOLO and landmark-detection projects
- crowd-sourced final filtering (AMT: give 50 face images to Turkers and ask which don't belong)
- quality control through s

...(continued)
Felix Leditzky May 24 2017 20:43 UTC

Yes, that's right, thanks!

For (5), you use the Cauchy-Schwarz inequality $\left| \operatorname{tr}(X^\dagger Y) \right| \leq \sqrt{\operatorname{tr}(X^\dagger X)} \sqrt{\operatorname{tr}(Y^\dagger Y)}$ for the Hilbert-Schmidt inner product $\langle X,Y\rangle := \operatorname{tr}(X^\dagger Y)$ wi

...(continued)
Michael Tolan May 24 2017 20:27 UTC

Just reading over Eq (5) on P5 concerning the diamond norm.

Should the last $\sigma_1$ on the 4th line be replaced with a $\sigma_2$? I think I can see how the proof is working but not entirely certain.

Noon van der Silk May 23 2017 11:15 UTC

I think this thread has reached its end.

I've locked further comments, and I hope that the quantum computing community can thoughtfully find an approach to language that is inclusive to all and recognises the diverse background of all researchers, current and future.

...(continued)
Varun Narasimhachar May 23 2017 02:14 UTC

While I would never want to antagonize my peers or to allow myself to assume they were acting irrationally, I do share your concerns to an extent. I worry about the association of social justice and inclusivity with linguistic engineering, virtual lynching, censorship, etc. (the latter phenomena sta

...(continued)
Aram Harrow May 23 2017 01:30 UTC

I think you are just complaining about issues that arise from living with other people in the same society. If you disagree with their values, well, then some of them might have a negative opinion about you. If you express yourself in an aggressive way, and use words like "lynch" to mean having pe

...(continued)
Steve Flammia May 23 2017 01:04 UTC

I agree with Noon that the discussion is becoming largely off topic for SciRate, but that it might still be of interest to the community to discuss this. I invite people to post thoughtful and respectful comments over at [my earlier Quantum Pontiff post][1]. Further comments here on SciRate will be

...(continued)
Noon van der Silk May 23 2017 00:59 UTC

I've moderated a few comments on this post because I believe it has gone past useful discussion, and I'll continue to remove comments that I believe don't add anything of substantial value.

Thanks.

Aram Harrow May 22 2017 23:13 UTC

The problem with your argument is that no one is forcing anyone to say anything, or banning anything.

If the terms really were offensive or exclusionary or had other bad side effects, then it's reasonable to discuss as a community whether to keep them, and possibly decide to stop using them. Ther

...(continued)
stan May 22 2017 22:53 UTC

Fair enough. At the end of the day I think most of us are concerned with the strength of the result not the particular language used to describe it.