results for au:Li_J in:cs

- Unsupervised domain adaptation of speech signals aims at adapting a well-trained source-domain acoustic model to unlabeled data from a target domain. This can be achieved by adversarial training of deep neural network (DNN) acoustic models to learn an intermediate deep representation that is both senone-discriminative and domain-invariant. Specifically, the DNN is trained to jointly optimize the primary task of senone classification and the secondary task of domain classification with adversarial objective functions. In this work, instead of only focusing on learning a domain-invariant feature (i.e., the shared component between domains), we also characterize the difference between the source and target domain distributions by explicitly modeling the private component of each domain through a private component extractor DNN. The private component is trained to be orthogonal to the shared component and thus implicitly increases the degree of domain-invariance of the shared component. A reconstructor DNN is used to reconstruct the original speech feature from the private and shared components as a regularization. This domain separation framework is applied to the unsupervised environment adaptation task and achieves an 11.08% relative WER reduction over gradient reversal layer training, a representative adversarial training method, for automatic speech recognition on the CHiME-3 dataset.
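The orthogonality constraint between the shared and private components above can be sketched as a simple penalty term; the following is an illustrative scalar version (the function name and toy vectors are invented here, not the paper's DNN implementation):

```python
# Toy illustration (not the paper's DNN): the orthogonality penalty that
# pushes a private component to be orthogonal to the shared component.
# Minimizing ||<s, p>||^2 alongside the main losses implicitly increases
# the domain-invariance of the shared representation.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonality_penalty(shared, private):
    """Squared inner product between shared and private feature vectors."""
    return dot(shared, private) ** 2

shared = [1.0, 0.0, 2.0]
aligned_private = [0.5, 0.0, 1.0]   # correlated with the shared component
ortho_private = [0.0, 3.0, 0.0]     # orthogonal: zero penalty

assert orthogonality_penalty(shared, ortho_private) == 0.0
assert orthogonality_penalty(shared, aligned_private) > 0.0
```

In the paper this penalty is added to the senone-classification, adversarial domain-classification, and reconstruction losses during joint training.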
- Nov 21 2017 cs.DS arXiv:1711.07454v1We use the Sum of Squares method to develop new efficient algorithms for learning well-separated mixtures of Gaussians and robust mean estimation, both in high dimensions, that substantially improve upon the statistical guarantees achieved by previous efficient algorithms. Firstly, we study mixtures of $k$ distributions in $d$ dimensions, where the means of every pair of distributions are separated by at least $k^{\varepsilon}$. In the special case of spherical Gaussian mixtures, we give a $(dk)^{O(1/\varepsilon^2)}$-time algorithm that learns the means assuming separation at least $k^{\varepsilon}$, for any $\varepsilon > 0$. This is the first algorithm to improve on greedy ("single-linkage") and spectral clustering, breaking a long-standing barrier for efficient algorithms at separation $k^{1/4}$. We also study robust estimation. When an unknown $(1-\varepsilon)$-fraction of $X_1,\ldots,X_n$ are chosen from a sub-Gaussian distribution with mean $\mu$ but the remaining points are chosen adversarially, we give an algorithm recovering $\mu$ to error $\varepsilon^{1-1/t}$ in time $d^{O(t^2)}$, so long as sub-Gaussian-ness up to $O(t)$ moments can be certified by a Sum of Squares proof. This is the first polynomial-time algorithm with guarantees approaching the information-theoretic limit for non-Gaussian distributions. Previous algorithms could not achieve error better than $\varepsilon^{1/2}$. Both of these results are based on a unified technique. Inspired by recent algorithms of Diakonikolas et al. in robust statistics, we devise an SDP based on the Sum of Squares method for the following setting: given $X_1,\ldots,X_n \in \mathbb{R}^d$ for large $d$ and $n = poly(d)$ with the promise that a subset of $X_1,\ldots,X_n$ were sampled from a probability distribution with bounded moments, recover some information about that distribution.
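To see why robustness is nontrivial in this contamination model, here is a minimal one-dimensional illustration; the median shown is a weak classical baseline, not the Sum-of-Squares estimator from the abstract, and all numbers are invented:

```python
import random
import statistics

random.seed(0)
n, eps = 1000, 0.1  # a (1 - eps)-fraction of inliers, 10% adversarial points
inliers = [random.gauss(0.0, 1.0) for _ in range(int(n * (1 - eps)))]
outliers = [1000.0] * (n - len(inliers))  # adversary places points far away
sample = inliers + outliers

naive_mean = statistics.fmean(sample)  # shifted by roughly eps * 1000 = 100
robust = statistics.median(sample)     # stays near the true mean 0

assert abs(naive_mean) > 50
assert abs(robust) < 1
```

The median only achieves error on the order of $\varepsilon^{1/2}$ in high dimensions; the point of the SoS-based algorithm is to approach error $\varepsilon^{1-1/t}$ in polynomial time.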
- Nov 17 2017 cs.CV arXiv:1711.06055v1Face analytics benefits many multimedia applications. It consists of a number of tasks, such as facial emotion recognition and face parsing, and most existing approaches generally treat these tasks independently, which limits their deployment in real scenarios. In this paper we propose an integrated Face Analytics Network (iFAN), which is able to perform multiple tasks jointly for face analytics with a novel carefully designed network architecture to fully facilitate the informative interaction among different tasks. The proposed integrated network explicitly models the interactions between tasks so that the correlations between tasks can be fully exploited for performance boost. In addition, to solve the bottleneck of the absence of datasets with comprehensive training data for various tasks, we propose a novel cross-dataset hybrid training strategy. It allows "plug-in and play" of multiple datasets annotated for different tasks without the requirement of a fully labeled common dataset for all the tasks. We experimentally show that the proposed iFAN achieves state-of-the-art performance on multiple face analytics tasks using a single integrated model. Specifically, iFAN achieves an overall F-score of 91.15% on the Helen dataset for face parsing, a normalized mean error of 5.81% on the MTFL dataset for facial landmark localization and an accuracy of 45.73% on the BNU dataset for emotion recognition with a single model.
- We propose a framework, named Aggregated Wasserstein, for computing a dissimilarity measure or distance between two Hidden Markov Models with state conditional distributions being Gaussian. For such HMMs, the marginal distribution at any time position follows a Gaussian mixture distribution, a fact exploited to softly match, aka register, the states in two HMMs. We refer to such HMMs as Gaussian mixture model-HMM (GMM-HMM). The registration of states is inspired by the intrinsic relationship of optimal transport and the Wasserstein metric between distributions. Specifically, the components of the marginal GMMs are matched by solving an optimal transport problem where the cost between components is the Wasserstein metric for Gaussian distributions. The solution of the optimization problem is a fast approximation to the Wasserstein metric between two GMMs. The new Aggregated Wasserstein distance is a semi-metric and can be computed without generating Monte Carlo samples. It is invariant to relabeling or permutation of states. The distance is defined meaningfully even for two HMMs that are estimated from data of different dimensionality, a situation that can arise due to missing variables. This distance quantifies the dissimilarity of GMM-HMMs by measuring both the difference between the two marginal GMMs and that between the two transition matrices. Our new distance is tested on tasks of retrieval, classification, and t-SNE visualization of time series. Experiments on both synthetic and real data have demonstrated its advantages in terms of accuracy as well as efficiency in comparison with existing distances based on the Kullback-Leibler divergence.
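For univariate Gaussians the Wasserstein metric has a closed form, $W_2^2 = (m_1-m_2)^2 + (\sigma_1-\sigma_2)^2$, and with equal component weights the transport problem reduces to an assignment. A minimal sketch of the state-registration idea follows (brute-force matching over invented toy GMMs; the paper solves a general optimal transport problem with arbitrary component weights):

```python
import itertools
import math

def w2_gauss_1d(m1, s1, m2, s2):
    """2-Wasserstein distance between two univariate Gaussians (closed form)."""
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

def match_components(gmm_a, gmm_b):
    """Min-cost matching of equally weighted components, by brute force.
    With equal weights and equal component counts, the optimal transport
    problem between the two marginal GMMs reduces to an assignment."""
    best_cost, best_perm = float("inf"), None
    for perm in itertools.permutations(range(len(gmm_b))):
        cost = sum(w2_gauss_1d(*gmm_a[i], *gmm_b[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost / len(gmm_a), best_perm

gmm_a = [(0.0, 1.0), (5.0, 1.0)]           # (mean, std) pairs
gmm_b = [(5.1, 1.0), (0.2, 1.0)]           # same components, swapped order
cost, perm = match_components(gmm_a, gmm_b)
assert perm == (1, 0)  # registration recovers the swapped state labels
```

This permutation invariance is exactly why the Aggregated Wasserstein distance is insensitive to relabeling of HMM states.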
- Nov 16 2017 cs.CL arXiv:1711.05380v2Neural machine translation systems encode a source sequence into a vector from which a target sequence is generated via a decoder. Different from the traditional statistical machine translation, source and target words are not directly mapped to each other in translation rules. They are at the two ends of a long information channel in the encoder-decoder neural network, separated by source and target hidden states. This may lead to translations with inconceivable word alignments. In this paper, we try to bridge source and target word embeddings so as to shorten their distance. We propose three strategies to bridge them: 1) a source state bridging model that moves source word embeddings one step closer to their counterparts, 2) a target state bridging model that explores relevant source word embeddings for target state prediction, and 3) a direct link bridging model that directly connects source and target word embeddings so as to minimize their discrepancy. Experiments and analysis demonstrate that the proposed bridging models are able to significantly improve quality of both translation and word alignments.
- Nov 15 2017 cs.IR arXiv:1711.04725v1 In e-commerce scenarios where user profiles are invisible, session-based recommendation is used to generate recommendations from short sessions. Previous work considers only the user's sequential behavior in the current session, whereas the user's main purpose in the current session is not emphasized. In this paper, we propose a novel neural network framework, the Neural Attentive Recommendation Machine (NARM), to tackle this problem. Specifically, we explore a hybrid encoder with an attention mechanism to model the user's sequential behavior and capture the user's main purpose in the current session, which are later combined into a unified session representation. We then compute the recommendation score for each candidate item with a bi-linear matching scheme based on this unified session representation. We train NARM by jointly learning the item and session representations as well as their matchings. We carried out extensive experiments on two benchmark datasets. Our experimental results show that NARM outperforms state-of-the-art baselines on both datasets. Furthermore, we find that NARM achieves a significant improvement on long sessions, which demonstrates its advantage in modeling the user's sequential behavior and main purpose simultaneously.
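The attention pooling behind NARM's "main purpose" encoder can be sketched as follows; this toy scores states by a plain dot product against the last hidden state (NARM uses a learned similarity and concatenates the result with a sequential-behavior vector, and the vectors below are invented):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def session_representation(hidden_states):
    """Toy attention pooling over per-click hidden states: score each state
    against the last state, then take the weighted sum (the 'main purpose'
    part; NARM combines this with the sequential-behavior encoding)."""
    last = hidden_states[-1]
    scores = [sum(a * b for a, b in zip(h, last)) for h in hidden_states]
    weights = softmax(scores)
    dim = len(last)
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

session = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1]]  # invented click embeddings
rep = session_representation(session)
assert len(rep) == 2
```

The bi-linear matching score for a candidate item $v$ would then be $v^\top B\, c$ for the unified representation $c$ and a learned matrix $B$.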
- In this work, we propose a covert communication scheme where the transmitter attempts to hide its transmission to a full-duplex receiver from a warden that attempts to detect this covert transmission using a radiometer. Specifically, we first derive the detection error rate at the warden, based on which the optimal detection threshold for its radiometer is analytically determined and its expected detection error rate over wireless fading channels is obtained in a closed-form expression. Our analysis indicates that the artificial noise deliberately produced by the receiver with a random transmit power, although it causes self-interference, offers the capability of achieving a positive effective covert rate for any transmit power (which can be arbitrarily large) subject to any given covertness requirement on the expected detection error rate. This work is the first study on the use of a full-duplex receiver with controlled artificial noise for achieving covert communications and invites further investigation in this regard.
- Caching algorithms are usually described by their eviction method and analyzed using the metric of hit probability. Since contents have different importance (e.g. popularity), the utility of a high hit probability and the cost of transmission can vary across contents. In this paper, we consider timer-based (TTL) policies across a cache network, where contents have differentiated timers over which we optimize. Each content is associated with a utility measured in terms of the corresponding hit probability. We start our analysis from a linear cache network: we propose a utility maximization problem where the objective is to maximize the sum of utilities, and a cost minimization problem where the objective is to minimize the content transmission cost across the network. These frameworks enable us to design online algorithms for cache management, which we prove achieve optimal performance. Informed by the results of our analysis, we formulate a non-convex optimization problem for a general cache network. We show that the duality gap is zero, hence we can develop a distributed iterative primal-dual algorithm for content management in the network. Finally, we consider two applications of our cache network model: (i) direct mapping to content distribution and (ii) generalization to wireless sensor networks by jointly considering content caching and content compression. We characterize the tradeoff among caching, compression and communication via a nonlinear non-convex optimization problem. We show that it can be transformed into an equivalent convex problem. The obtained numerical results provide insights into how to optimize the performance.
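For a concrete special case of the utility-maximization framework, take weighted logarithmic utilities and a single cache-size budget; the optimum then has a closed form (a simplified sketch with invented weights; the paper optimizes per-content TTL timers, which induce the hit probabilities, over a whole network):

```python
def optimal_hit_probs(weights, budget):
    """Maximize sum_i w_i * log(h_i) subject to sum_i h_i = budget,
    0 < h_i <= 1. Stationarity gives w_i / h_i = lambda for all i,
    i.e. h_i proportional to w_i (proportional fairness). In a TTL
    cache, each target h_i is then realized by choosing that
    content's timer value."""
    total = sum(weights)
    return [budget * w / total for w in weights]

# Two contents, the first three times as valuable as the second.
h = optimal_hit_probs([3.0, 1.0], budget=1.0)
assert h == [0.75, 0.25]
```

Other concave utilities change the closed form but the same Lagrangian argument drives the online primal-dual algorithms in the paper.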
- Nov 08 2017 cs.CL arXiv:1711.02212v1Achieving high accuracy with end-to-end speech recognizers requires careful parameter initialization prior to training. Otherwise, the networks may fail to find a good local optimum. This is particularly true for low-latency online networks, such as unidirectional LSTMs. Currently, the best strategy to train such systems is to bootstrap the training from a tied-triphone system. However, this is time consuming, and more importantly, is impossible for languages without a high-quality pronunciation lexicon. In this work, we propose an initialization strategy that uses teacher-student learning to transfer knowledge from a large, well-trained, offline end-to-end speech recognition model to an online end-to-end model, eliminating the need for a lexicon or any other linguistic resources. We also explore curriculum learning and label smoothing and show how they can be combined with the proposed teacher-student learning for further improvements. We evaluate our methods on a Microsoft Cortana personal assistant task and show that the proposed method results in a 19% relative improvement in word error rate compared to a randomly-initialized baseline system.
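The teacher-student transfer above minimizes, per frame, the cross entropy between the offline teacher's soft posteriors and the online student's; a minimal sketch (the toy distributions are invented):

```python
import math

def soft_cross_entropy(teacher_probs, student_probs):
    """Frame-level teacher-student loss: cross entropy between the
    teacher's soft posteriors and the student's, minimized by the
    student so it mimics the large offline model."""
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

teacher = [0.7, 0.2, 0.1]
matched = [0.7, 0.2, 0.1]
mismatched = [0.1, 0.2, 0.7]
# The loss is minimized (equal to the teacher's entropy) when the
# student's posteriors match the teacher's.
assert soft_cross_entropy(teacher, matched) < soft_cross_entropy(teacher, mismatched)
```

Label smoothing, also explored in the paper, amounts to interpolating these soft targets with a uniform distribution.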
- Nov 07 2017 cs.DB arXiv:1711.01960v1 The current data explosion poses great challenges to approximate aggregation in terms of efficiency and accuracy. To address this problem, we propose a novel approach to calculate aggregation answers with high accuracy using only a small share of the data. We introduce leverages to reflect individual differences among samples from the statistical perspective. Two kinds of estimators, the leverage-based estimator and the sketch estimator (a "rough picture" of the aggregation answer), are in a constraint relation and are iteratively improved according to the actual conditions until their difference falls below a threshold. Due to the iteration mechanism and the leverages, our approach achieves high accuracy. Moreover, several features, including no need to record the sampled data and easy extension to various execution modes (such as the online mode), make our approach well suited to dealing with big data. Experiments show that our approach has extraordinary performance: compared with uniform sampling, it can achieve high-quality answers with only 1/3 of the sample size.
- Nov 07 2017 physics.soc-ph cs.SI arXiv:1711.01404v1 Scientific coauthorship, generated by collaborations and competitions among researchers, reflects effective organization of human resources. Researchers, their expected benefits through collaborations, and their cooperative costs constitute the elements of a game. Hence we propose a cooperative game model to explore the evolution mechanisms of scientific coauthorship networks. The model generates geometric hypergraphs, where the costs are modelled by spatial distances, and the benefits are expressed by node reputations, i.e., geometric zones that depend on node position in space and time. Modelled cooperative strategies conditioned on positive benefit-minus-cost reflect the spatial reciprocity principle in collaborations, and generate high clustering and degree assortativity, two typical features of coauthorship networks. Modelled reputations generate the generalized Poisson parts and fat tails that appear in specific distributions of empirical data, e.g., the paper team size distribution. The combined effect of modelled costs and reputations reproduces the transitions that emerge in the degree distribution, in the correlation between degree and local clustering coefficient, etc. The model provides an example of how individual strategies induce network complexity, as well as an application of game theory to social affiliation networks.
- Falsely identifying different authors as a single author is called a merging error in the name disambiguation of coauthorship networks. Research on the measurement and distribution of merging errors helps to build high-quality coauthorship networks. On the measurement side, we provide a Bayesian model to measure the errors through author similarity. We illustratively use the model and coauthor similarity to measure the errors caused by initial-based name disambiguation methods. The empirical result on large-scale coauthorship networks shows that using coauthor similarity cannot increase the accuracy of disambiguation through surname and the initial of the first given name. On the distribution side, expressing coauthorship data as hypergraphs and supposing that the merging error rate scales with hyperdegree according to an exponent, we find that hypergraphs with a range of network properties highly similar to those of low-merging-error hypergraphs can be constructed from high-merging-error hypergraphs. This implies that focusing on error correction for high-hyperdegree nodes is a labor- and time-saving approach to improving data quality for coauthorship network analysis.
- Nov 03 2017 cs.CV arXiv:1711.00648v4 It is a difficult task to classify images with multiple class labels using only a small number of labeled examples, especially when the label (class) distribution is imbalanced. Emotion classification is such an example of imbalanced label distribution, because some classes of emotions, like disgusted, are relatively rare compared to other labels, like happy or sad. In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework with a CNN model as the classifier and a cycle-consistent adversarial network (CycleGAN) as the generator. In order to avoid the vanishing gradient problem, we employ the least-squares loss as the adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate GAN's performance. Empirical results show that we can obtain a 5%~10% increase in classification accuracy after employing the GAN-based data augmentation techniques.
- Nov 01 2017 cs.RO arXiv:1710.11319v1Constructing a smart wheelchair on a commercially available powered wheelchair (PWC) platform avoids a host of seating, mechanical design and reliability issues but requires methods of predicting and controlling the motion of a device never intended for robotics. Analog joystick inputs are subject to black-box transformations which may produce intuitive and adaptable motion control for human operators, but complicate robotic control approaches; furthermore, installation of standard axle mounted odometers on a commercial PWC is difficult. In this work, we present an integrated hardware and software system for predicting the motion of a commercial PWC platform that does not require any physical or electronic modification of the chair beyond plugging into an industry standard auxiliary input port. This system uses an RGB-D camera and an Arduino interface board to capture motion data, including visual odometry and joystick signals, via ROS communication. Future motion is predicted using an autoregressive sparse Gaussian process model. We evaluate the proposed system on real-world short-term path prediction experiments. Experimental results demonstrate the system's efficacy when compared to a baseline neural network model.
- Automatic relation extraction (RE) for types of interest is of great importance for interpreting massive text corpora in an efficient manner. Traditional RE models have relied heavily on human-annotated corpora for training, which can be costly in generating labeled data and becomes an obstacle when dealing with more relation types. Thus, more RE systems have shifted to being built upon training data automatically acquired by linking to knowledge bases (distant supervision). However, due to the incompleteness of knowledge bases and the context-agnostic labeling, the training data collected via distant supervision (DS) can be very noisy. In recent years, as increasing attention has been paid to tackling question-answering (QA) tasks, user feedback and datasets for such tasks have become more accessible. In this paper, we propose a novel framework, ReQuest, to leverage question-answer pairs as an indirect source of supervision for relation extraction, and study how to use such supervision to reduce the noise induced by DS. Our model jointly embeds relation mentions, types, QA entity mention pairs and text features in two low-dimensional spaces (RE and QA), where objects with the same relation types or semantically similar question-answer pairs have similar representations. Shared features connect these two spaces, carrying clearer semantic knowledge from both sources. ReQuest then uses these learned embeddings to estimate the types of test relation mentions. We formulate a global objective function and adopt a novel margin-based QA loss to reduce noise in DS by exploiting semantic evidence from the QA dataset. Our experimental results achieve an average 11% improvement in F1 score on two public RE datasets combined with the TREC QA dataset.
- Oct 31 2017 cs.SI arXiv:1710.10738v1 Unveiling the general structural properties of various complex networks is one of the main tasks in network science. We define the degree of a set of nodes as the number of common neighbors they share in the network, namely the common neighbor based degree (CNBD). We provide a general model, namely the unified ring model, which unifies all ring-based models. We propose a general framework based on the generating function to calculate the CNBD distributions of complex networks. We find that in the ER network, the CNBD distribution obeys a Poisson law for node sets of any size. We also study the CNBD distribution for other types of complex networks, including the regular ring lattice, the small-world model, the scale-free model, and real-world networks.
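The common neighbor based degree of a node set can be computed directly on a small graph; a minimal sketch (the choice to exclude the set's own members from the count is an assumption made here for illustration):

```python
def cnbd(adjacency, node_set):
    """Common neighbor based degree: the number of neighbors shared by
    every node in the set (members of the set itself excluded)."""
    neighbor_sets = [adjacency[v] for v in node_set]
    common = set.intersection(*neighbor_sets)
    return len(common - set(node_set))

# A 4-cycle: 0-1-2-3-0.
adjacency = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert cnbd(adjacency, [0, 2]) == 2  # opposite nodes share neighbors 1 and 3
assert cnbd(adjacency, [0, 1]) == 0  # adjacent nodes on a 4-cycle share none
```

For an ER graph $G(n, p)$, each of the remaining $n - k$ nodes is a common neighbor of a fixed $k$-set independently with probability $p^k$, which is why the CNBD distribution is asymptotically Poisson.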
- Oct 30 2017 cs.DC arXiv:1710.10090v1 We present an Edge-as-a-Service (EaaS) platform for realising distributed cloud architectures and integrating the edge of the network into the computing ecosystem. The EaaS platform is underpinned by (i) a lightweight discovery protocol that identifies edge nodes and makes them publicly accessible in a computing environment, and (ii) a scalable resource provisioning mechanism for offloading workloads from the cloud onto the edge to service multiple user requests. We validate the feasibility of EaaS on an online game use-case to highlight the improvement in the QoS of the application hosted on our cloud-edge platform. On this platform we demonstrate (i) low overheads of less than 6%, (ii) data traffic to the cloud reduced by up to 95%, and (iii) application latency reduced by 40%-60%.
- Millimeter-wave multi-input multi-output (mm-Wave MIMO) systems are one of the candidate schemes for 5G wireless standardization efforts. In this context, the main contributions of this article are three-fold. 1) We describe parallel sets of measurements at identical transmit-receive location pairs with 2.9, 29 and 61 GHz carrier frequencies in indoor office, shopping mall, and outdoor settings. These measurements provide insights on propagation, blockage and material penetration losses, and the key elements necessary in system design to make mm-Wave systems viable in practice. 2) One of these elements is hybrid beamforming necessary for better link margins by reaping the array gain with large antenna dimensions. From the class of fully-flexible hybrid beamformers, we describe a robust class of directional beamformers towards meeting the high data-rate requirements of mm-Wave systems. 3) Leveraging these design insights, we then describe an experimental prototype system at 28 GHz that realizes high data-rates on both the downlink and uplink and robustly maintains these rates in outdoor and indoor mobility scenarios. In addition to maintaining large signal constellation sizes in spite of radio frequency challenges, this prototype leverages the directional nature of the mm-Wave channel to perform seamless beam switching and handover across mm-Wave base-stations thereby overcoming the path losses in non-line-of-sight links and blockages encountered at mm-Wave frequencies.
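The array gain that directional beamforming reaps from large antenna dimensions can be illustrated with a uniform linear array at half-wavelength spacing (a textbook sketch, not the article's prototype; the antenna count and angles below are invented):

```python
import cmath
import math

def steering_vector(n_antennas, angle_rad):
    """Uniform linear array response at half-wavelength spacing."""
    return [cmath.exp(1j * math.pi * k * math.sin(angle_rad))
            for k in range(n_antennas)]

def array_gain(weights, angle_rad):
    """|w^H a(theta)|^2 for a beamformer steered by 'weights'."""
    a = steering_vector(len(weights), angle_rad)
    return abs(sum(w.conjugate() * x for w, x in zip(weights, a))) ** 2

n = 16
target = math.radians(20)
# Directional (matched-filter) beamformer with unit power, aimed at 20 deg.
w = [x / math.sqrt(n) for x in steering_vector(n, target)]
assert abs(array_gain(w, target) - n) < 1e-9       # full gain of N on target
assert array_gain(w, math.radians(-40)) < 1.0      # strongly attenuated off target
```

This on-target gain of $N$ is the link-margin benefit the article's hybrid beamformers are designed to capture with far fewer RF chains than antennas.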
- We present a technique to synthesize and analyze volume-rendered images using generative models. We use the Generative Adversarial Network (GAN) framework to compute a model from a large collection of volume renderings, conditioned on (1) viewpoint and (2) transfer functions for opacity and color. Our approach facilitates tasks for volume analysis that are challenging to achieve using existing rendering techniques such as ray casting or texture-based methods. We show how to guide the user in transfer function editing by quantifying expected change in the output image. Additionally, the generative model transforms transfer functions into a view-invariant latent space specifically designed to synthesize volume-rendered images. We use this space directly for rendering, enabling the user to explore the space of volume-rendered images. As our model is independent of the choice of volume rendering process, we show how to analyze volume-rendered images produced by direct and global illumination lighting, for a variety of volume datasets.
- Wireless communication systems, such as wireless sensor networks and RFIDs, are increasingly adopted to transfer potentially highly sensitive information. Since the wireless medium has a shared nature, adversaries have a chance to eavesdrop on confidential information from the communication systems. Adding artificial noise generated by friendly jammers emerges as a feasible defensive technique against adversaries. This paper studies the scheduling strategies of friendly jammers, which are randomly and redundantly deployed in a circumscribed geographical area and can be non-rechargeable or rechargeable, to maximize the lifetime of the jammer networks and prevent eavesdroppers from defeating the jamming effect, under the constraints of geographical area, energy consumption, transmission power, and threshold level. An approximation algorithm is first proposed as a baseline using an integer linear programming model. To further reduce the computational complexity, a heuristic algorithm based on the greedy strategy that less consumption leads to longer lifetime is also proposed. Finally, extensive simulation results show that the proposed algorithms are effective and efficient.
- Oct 25 2017 cs.DS arXiv:1710.08488v1 In the $k$-Cut problem, we are given an edge-weighted graph $G$ and an integer $k$, and have to remove a set of edges with minimum total weight so that $G$ has at least $k$ connected components. Prior work on this problem gives, for all $h \in [2,k]$, a $(2-h/k)$-approximation algorithm for $k$-Cut that runs in time $n^{O(h)}$. Hence to get a $(2 - \varepsilon)$-approximation algorithm for some absolute constant $\varepsilon$, the best runtime using prior techniques is $n^{O(k\varepsilon)}$. Moreover, it was recently shown that getting a $(2 - \varepsilon)$-approximation for general $k$ is NP-hard, assuming the Small Set Expansion Hypothesis. If we use the size of the cut as the parameter, an FPT algorithm to find the exact $k$-Cut is known, but solving the $k$-Cut problem exactly is $W[1]$-hard if we parameterize only by the natural parameter $k$. An immediate question is: can we approximate $k$-Cut better in FPT time, using $k$ as the parameter? We answer this question positively. We show that for some absolute constant $\varepsilon > 0$, there exists a $(2 - \varepsilon)$-approximation algorithm that runs in time $2^{O(k^6)} \cdot \widetilde{O} (n^4)$. This is the first FPT algorithm that is parameterized only by $k$ and strictly improves on the $2$-approximation.
- The directionality of millimeter-wave (mmWave) communications creates a significant challenge in serving fast-moving mobile terminals on, e.g., high-speed vehicles, trains, and UAVs. This challenge is exacerbated in mmWave systems using analog antenna arrays, because of the inherent non-convexity in the control of the phase shifters. In this paper, we develop a recursive beam tracking algorithm which can simultaneously achieve fast tracking speed, high tracking accuracy, low complexity, and low pilot overhead. In static scenarios, this algorithm converges to the minimum Cramér-Rao lower bound (CRLB) of beam tracking with high probability. In dynamic scenarios, even at SNRs as low as 0dB, our algorithm is capable of tracking a mobile moving at an angular velocity of 10-20 degrees per second, using only 5 pilot symbols per second. If combined with a simple TDMA pilot pattern, this algorithm can track hundreds of high-speed mobiles in 5G configurations. Our simulations show that the tracking performance of this algorithm is much better than that of several state-of-the-art algorithms. The key analytical tools used in our algorithm design are stochastic approximation and recursive estimation with a control parameter.
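The stochastic-approximation recursion at the heart of such a tracker can be sketched in one dimension: a noisy error signal drives the estimate, and the step size trades tracking speed against noise. Everything below, including the step size, noise level, and direct access to the pointing error, is an invented simplification; the paper's algorithm extracts the error from pilot measurements and tunes the gain to approach the CRLB:

```python
import math
import random

random.seed(1)

def track(true_angles, step=0.3, noise=0.05):
    """Toy recursive tracker: est <- est + step * (noisy pointing error).
    A ramp in the true angle is followed with a small steady-state lag
    of roughly (angular velocity) / step."""
    est = true_angles[0]
    for true in true_angles[1:]:
        observed_error = (true - est) + random.gauss(0.0, noise)
        est += step * observed_error
    return est

# Mobile rotating at a constant angular velocity of 0.01 rad per slot.
path = [0.01 * k for k in range(200)]
final = track(path)
assert abs(final - path[-1]) < 0.2
```

The paper's contribution is precisely the choice of error signal and gain schedule for which this kind of recursion provably converges to the CRLB in static scenarios.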
- Oct 11 2017 cs.CV arXiv:1710.03553v1 The timely provision of traffic sign information is essential for drivers to respond in time, ensure safe driving, and avoid traffic accidents. We propose a quantitative method for evaluating the timely visual recognizability of traffic signs in large-scale transportation environments. To achieve this goal, we first introduce the concept of a visibility field to reflect the visible distribution of three-dimensional (3D) space and construct a traffic sign Visibility Evaluation Model (VEM) to measure traffic sign visibility from a given viewpoint. Then, based on the VEM, we propose the concept of the Visual Recognizability Field (VRF) to reflect the visual recognizability distribution in 3D space and establish a Visual Recognizability Evaluation Model (VREM) to measure traffic sign visual recognizability from a given viewpoint. Next, we propose a Traffic Sign Timely Visual Recognizability Evaluation Model (TSTVREM) by combining the VREM, the actual maximum continuous visually recognizable distance, and traffic big data to measure traffic sign visual recognizability in different lanes. Finally, we present an automatic algorithm to implement the TSTVREM model through traffic sign and road marking detection and classification, traffic sign environment point cloud segmentation, viewpoint calculation, and TSTVREM model realization. The performance of our method for evaluating timely traffic sign visual recognizability is tested on three road point clouds acquired by a mobile laser scanning system (RIEGL VMX-450) according to Road Traffic Signs and Markings (GB 5768-1999 in China), showing that our method is feasible and efficient.
- Oct 10 2017 cs.DB arXiv:1710.02817v1 Missing and incorrect values often cause serious consequences. To deal with these data quality problems, a class of commonly employed tools is dependency rules, such as Functional Dependencies (FDs), Conditional Functional Dependencies (CFDs), Editing Rules (ERs), etc. The stronger the expressive power of a dependency, the better the quality of the data that can be obtained. To the best of our knowledge, all previous dependencies treat each attribute value as a non-splittable whole. In many applications, however, part of a value may contain meaningful information, indicating that more powerful dependency rules for handling data quality problems are possible. In this paper, we consider discovering such a type of dependency, in which the left-hand side is part of a regular-expression-like paradigm, named Paradigm Dependencies (PDs). A PD states that if a string matches the paradigm, the element at the specified position determines a certain other attribute's value. We propose a framework in which strings with similar coding rules and different lengths are clustered together and aligned vertically, from which PDs can be discovered directly. The alignment problem is the key component of this framework and is proved NP-complete. A greedy algorithm is introduced in which the clustering and alignment tasks can be accomplished simultaneously. Because of the greedy algorithm's high time complexity, several pruning strategies are proposed to reduce the running time. In the experimental study, three real datasets as well as several synthetic datasets are employed to verify our methods' effectiveness and efficiency.
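The paradigm-dependency idea, a regular-expression-like left-hand side whose marked position determines another attribute, can be sketched with Python's re module (the product-code pattern, region mapping, and attribute names below are invented for illustration, not taken from the paper):

```python
import re

# Hypothetical paradigm dependency: in a product-code column matching this
# pattern, the two letters after the dash determine the 'region' attribute.
PARADIGM = re.compile(r"\d{4}-(?P<region_code>[A-Z]{2})\d{3}")
REGION = {"CN": "China", "US": "United States"}

def apply_pd(code):
    """If the string matches the paradigm, the element at the marked
    position determines the other attribute's value; otherwise the PD
    is silent on this tuple."""
    m = PARADIGM.fullmatch(code)
    if m is None:
        return None
    return REGION.get(m.group("region_code"))

assert apply_pd("2017-CN042") == "China"
assert apply_pd("invalid") is None
```

Discovering such rules automatically is what requires the clustering-and-alignment framework described in the abstract.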
- Oct 10 2017 cs.SI arXiv:1710.02971v2 Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of the normalized Laplacian matrix of a network; (2) LINE, in theory, is a special case of DeepWalk when the size of the vertex context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple Laplacian matrices; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacians. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE (up to 38% relative improvement) in several conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representations.
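The closed-form view above can be sketched numerically: a DeepWalk-style embedding is (roughly) a truncated SVD of a log-transformed matrix built from powers of the random-walk transition matrix. The sketch below is an illustrative simplification with assumed parameter names (`window` for context size, `neg` for the number of negative samples), not the authors' exact NetMF algorithm.

```python
import numpy as np

def netmf_embedding(A, dim=2, window=2, neg=1):
    """Sketch of NetMF: factorize a log-transformed matrix derived from
    window-averaged powers of the normalized adjacency via truncated SVD."""
    vol = A.sum()                      # total edge weight
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / d)
    P = Dinv @ A                       # random-walk transition matrix
    S = sum(np.linalg.matrix_power(P, r) for r in range(1, window + 1))
    M = (vol / (neg * window)) * S @ Dinv
    logM = np.log(np.maximum(M, 1.0))  # truncated logarithm
    U, s, _ = np.linalg.svd(logM)
    return U[:, :dim] * np.sqrt(s[:dim])
```

A usage example on a 4-node ring graph: the result is one `dim`-dimensional vector per vertex.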
- Oct 05 2017 cs.CV arXiv:1710.01462v1 Convolutional neural networks (CNNs) have been widely used across many areas of computer vision, especially classification. Recently, FlowNet and several other works on optical flow estimation using CNNs have shown the potential of CNNs for per-pixel regression. We propose several CNN architectures that can estimate optical flow, and analyze the intrinsic differences between these structures.
- Sep 29 2017 cs.LG arXiv:1709.09820v1 Generative Adversarial Networks (GANs) have shown impressive performance in generating photo-realistic images. They fit generative models by minimizing a distance measure between the real image distribution and the generated data distribution. Several distance measures have been used, such as the Jensen-Shannon divergence, $f$-divergence, and Wasserstein distance, and choosing an appropriate distance measure is very important for training the generative network. In this paper, we choose the maximum mean discrepancy (MMD) as the distance metric, which has several nice theoretical guarantees. In fact, the generative moment matching network (GMMN) (Li, Swersky, and Zemel 2015) is such a generative model, containing only one generator network $G$ trained by directly minimizing the MMD between the real and generated distributions. However, it fails to generate meaningful samples on challenging benchmark datasets, such as CIFAR-10 and LSUN. To improve on GMMN, we propose to add an extra network $F$, called the mapper. $F$ maps both the real and generated data distributions from the original data space to a feature representation space $\mathcal{R}$, and it is trained to maximize the MMD between the two mapped distributions in $\mathcal{R}$, while the generator $G$ tries to minimize the MMD. We call the new model generative adversarial mapping networks (GAMNs). We demonstrate that the adversarial mapper $F$ can help $G$ better capture the underlying data distribution. We also show that GAMN significantly outperforms GMMN, and is superior to or comparable with other state-of-the-art GAN-based methods on the MNIST, CIFAR-10 and LSUN-Bedrooms datasets.
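The MMD criterion at the heart of GMMN and the proposed GAMN can be estimated in closed form from kernel evaluations. A minimal sketch (a biased V-statistic with an RBF kernel; the kernel choice and bandwidth here are assumptions, not taken from the paper):

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y under the
    RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

The estimate is exactly zero when the two samples coincide and grows as the distributions separate, which is what makes it usable as a training signal for $G$ (minimizing) and $F$ (maximizing).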
- Sep 28 2017 cs.DB arXiv:1709.09471v3 Mining dense subgraphs on multi-layer graphs is an interesting problem, which has witnessed many applications in practice. To overcome the limitations of the quasi-clique-based approach, we propose the d-coherent core (d-CC), a new notion of dense subgraph on multi-layer graphs with several elegant properties. We formalize the diversified coherent core search (DCCS) problem, which finds k d-CCs that cover the largest number of vertices. We propose a greedy algorithm with an approximation ratio of 1 - 1/e and two search algorithms with an approximation ratio of 1/4. Experiments verify that the search algorithms are faster than the greedy algorithm and, in practice, produce results comparable to those of the greedy algorithm. As opposed to the quasi-clique-based approach, our DCCS algorithms can quickly detect larger dense subgraphs that cover most of the quasi-clique-based results.
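The 1 - 1/e greedy baseline for DCCS is the classic maximum-coverage greedy: repeatedly add the candidate subgraph covering the most not-yet-covered vertices. A toy sketch, with candidate d-CCs stood in for by plain vertex sets:

```python
def greedy_diversified_cover(candidates, k):
    """Greedy (1 - 1/e)-approximation for picking k sets that maximize
    total vertex coverage (candidate d-CCs modeled as vertex sets)."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(candidates, key=lambda s: len(set(s) - covered))
        if not set(best) - covered:   # no candidate adds new vertices
            break
        chosen.append(best)
        covered |= set(best)
    return chosen, covered
```

For example, with candidates {1,2,3}, {3,4}, {5,6,7,8}, {1,5} and k=2, the greedy picks {5,6,7,8} then {1,2,3}, covering seven vertices.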
- Sep 28 2017 cs.CV arXiv:1709.09297v1 Label estimation is an important component in an unsupervised person re-identification (re-ID) system. This paper focuses on cross-camera label estimation, which can subsequently be used in feature learning to learn robust re-ID models. Specifically, we propose to construct a graph for the samples in each camera, and then introduce a graph matching scheme for cross-camera labeling association. Because labels directly output by existing graph matching methods may be noisy and inaccurate due to significant cross-camera variations, this paper proposes a dynamic graph matching (DGM) method. DGM iteratively updates the image graph and the label estimation process by learning a better feature space with intermediate estimated labels. DGM is advantageous in two aspects: 1) the accuracy of the estimated labels improves significantly with the iterations; 2) DGM is robust to noisy initial training data. Extensive experiments conducted on three benchmarks, including the large-scale MARS dataset, show that DGM yields performance competitive with fully supervised baselines and outperforms competing unsupervised learning methods.
- Sep 26 2017 cs.CV arXiv:1709.08325v1 Feature extraction and matching are two crucial components in person Re-Identification (ReID). The large pose deformations and complex view variations exhibited in captured person images significantly increase the difficulty of learning and matching features from person images. To overcome these difficulties, in this work we propose a Pose-driven Deep Convolutional (PDC) model to learn improved feature extraction and matching models end to end. Our deep architecture explicitly leverages human part cues to alleviate pose variations and learn robust feature representations from both the global image and different local parts. To match the features from the global human body and local body parts, a pose-driven feature weighting sub-network is further designed to learn adaptive feature fusions. Extensive experimental analyses and results on three popular datasets demonstrate significant performance improvements of our model over all published state-of-the-art methods.
- We present a robust multi-robot convoying approach that relies on visual detection of the leading agent, thus enabling target following in unstructured 3-D environments. Our method is based on the idea of tracking-by-detection, which interleaves efficient model-based object detection with temporal filtering of image-based bounding box estimation. This approach has the important advantage of mitigating tracking drift (i.e. drifting away from the target object), which is a common symptom of model-free trackers and is detrimental to sustained convoying in practice. To illustrate our solution, we collected extensive footage of an underwater robot in ocean settings, and hand-annotated its location in each frame. Based on this dataset, we present an empirical comparison of multiple tracker variants, including the use of several convolutional neural networks, both with and without recurrent connections, as well as frequency-based model-free trackers. We also demonstrate the practicality of this tracking-by-detection strategy in real-world scenarios by successfully controlling a legged underwater robot in five degrees of freedom to follow another robot's independent motion.
- Sep 25 2017 cs.LO arXiv:1709.07495v1 Temporal synthesis is the automated design of a system that interacts with an environment, using a declarative specification of the system's behavior. A popular language for providing such a specification is Linear Temporal Logic, or LTL. LTL synthesis in the general case has, however, remained a hard problem to solve in practice. Because of this, many works have focused on developing synthesis procedures for specific fragments of LTL with an easier synthesis problem. In this work, we focus on Safety LTL, defined here to be the Until-free fragment of LTL in Negation Normal Form (NNF), and shown to express a fragment of safe LTL formulas. The intrinsic motivation for this fragment is the observation that in many cases it is not enough to say that something "good" will eventually happen; we need to say by when it will happen. We show here that Safety LTL synthesis is significantly simpler algorithmically than LTL synthesis. We exploit this simplicity in two ways: first by describing an explicit approach based on a reduction to Horn-SAT, which can be solved in linear time in the size of the game graph, and then through an efficient symbolic construction, allowing a BDD-based symbolic approach which significantly outperforms extant LTL-synthesis tools.
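The explicit approach reduces Safety LTL games to Horn-SAT, which is decidable by unit propagation. A minimal Horn-SAT checker as a sketch (a simple fixpoint computation rather than the linear-time counter-based version the reduction would use in practice; positive integers denote positive literals, negative integers negated ones):

```python
def horn_sat(clauses):
    """Decide a Horn formula (each clause has at most one positive
    literal) by forward chaining. Returns the set of variables forced
    true in the minimal model, or None if the formula is unsatisfiable."""
    true = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            neg = [-l for l in clause if l < 0]   # clause body
            pos = [l for l in clause if l > 0]    # clause head (<= 1 literal)
            if all(v in true for v in neg):       # body satisfied
                if pos:
                    if pos[0] not in true:
                        true.add(pos[0])
                        changed = True
                else:
                    return None                   # goal clause violated
    return true
```

For instance, [[1], [-1, 2], [-2, 3]] (facts and implications) is satisfiable with {1, 2, 3} true, while adding the goal clause [-2] makes it unsatisfiable.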
- We consider two basic problems in parallel multi-block Alternating Direction Method of Multipliers (ADMM): constructing convergence conditions, and controlling the variables while solving the family of subproblems. To address these problems, we propose a new NeuralADMM framework, which consists of two fundamental steps. First, ADMM is converted into a discrete-time recurrent neural network (DT-RNN), since ADMM recursively updates the variables as it solves the subproblems. Second, we study the stability of the DT-RNN, and hence the convergence of ADMM, by employing quadratic Lyapunov functions. We also propose convergence conditions and design variable feedbacks that ensure the convergence of ADMM. Semidefinite programming can easily be employed to check these conditions. Numerical experiments further establish the effectiveness of the proposed framework.
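The DT-RNN view is concrete for any specific ADMM instance: one iteration is a fixed state-transition map applied to the state (x, z, u). A sketch for the lasso problem (this example problem, and the step-size and iteration-count values, are illustrative assumptions, not from the paper):

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for lasso: min 0.5||Ax - b||^2 + lam||z||_1 s.t. x = z.
    Each loop body is the recurrent state update (x, z, u) -> (x', z', u')
    that the NeuralADMM view treats as one step of a discrete-time RNN."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached for the x-step
    for _ in range(iters):
        x = Q @ (A.T @ b + rho * (z - u))                              # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft-threshold
        u = u + x - z                                                  # dual ascent
    return z
```

With A = I the lasso solution is a soft-thresholding of b, which makes the recurrent iteration easy to check against a closed form.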
- Predicting the macroscopic influences of drugs on the human body, such as efficacy and toxicity, is a central problem of small-molecule based drug discovery. Molecules can be represented as undirected graphs, and we can utilize graph convolutional networks to predict molecular properties. However, graph convolutional networks and other graph neural networks all focus on learning node-level rather than graph-level representations. Previous works simply sum the feature vectors of all nodes in the graph to obtain the graph feature vector for drug prediction. In this paper, we introduce a dummy super node that is connected to all nodes in the graph by directed edges as the representation of the graph, and modify the graph operation to help the dummy super node learn graph-level features. Thus, we can handle graph-level classification and regression in the same way as node-level classification and regression. In addition, we apply focal loss to address class imbalance in drug datasets. Experiments on MoleculeNet show that our method can effectively improve the performance of molecular property prediction.
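On the graph side, the dummy super node construction is simple: extend the adjacency matrix with one extra node that every original node points to, so that message passing accumulates a graph-level representation at that node. A sketch over a plain adjacency matrix (the directed-edge orientation, original nodes feeding the super node, is an assumption):

```python
import numpy as np

def add_super_node(A):
    """Append a dummy super node to adjacency matrix A, connected to every
    original node by a directed edge (node -> super node). After message
    passing, the super node's feature serves as the graph-level readout."""
    n = A.shape[0]
    B = np.zeros((n + 1, n + 1))
    B[:n, :n] = A        # original graph unchanged
    B[:n, n] = 1.0       # every original node feeds the super node
    return B
```

With this layout the super node receives from all nodes but sends nothing back, so it cannot perturb the node-level computation.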
- Sep 12 2017 cs.CV arXiv:1709.02898v2 In this letter, to break the limit of traditional linear models for SAR image despeckling, we propose a novel deep learning approach that learns a non-linear end-to-end mapping between noisy and clean SAR images with a dilated residual network (SAR-DRN). SAR-DRN is based on dilated convolutions, which can enlarge the receptive field while maintaining the filter size and layer depth in a lightweight structure. In addition, skip connections are added to the despeckling model to reduce the vanishing gradient problem. The proposed method shows superior performance over both traditional despeckling methods and the state-of-the-art methods on quantitative and visual assessments, especially for strong speckle noise.
- In OFDM relay networks with IQ imbalances and a full-duplex relay station (RS), optimizing the pilot pattern and power allocation under the criterion of minimizing the sum of mean square errors (Sum-MSE) of the frequency-domain least-squares channel estimator has a heavy impact on self-interference cancellation. First, the pilot pattern design problem is cast as a convex optimization. From the KKT conditions, the optimal analytical expression is derived for fixed source power and RS power. Subsequently, an optimal power allocation (OPA) strategy is proposed to further reduce the Sum-MSE under a constraint on the total transmit power of the source node and RS. Simulation results show that the proposed OPA performs better than equal power allocation (EPA) in terms of Sum-MSE, and the Sum-MSE performance gain grows as $\rho$ deviates from the value $\rho^o$ minimizing the Sum-MSE, where $\rho$ is defined as the average ratio of the residual SI channel at the RS to the intended channel from source to RS. For example, the OPA achieves about a 5dB SNR gain over EPA when shrinking or stretching $\rho$ by a factor of $4$. More importantly, as $\rho$ decreases or increases further, the performance gain becomes more significant.
- Sep 05 2017 cs.CV arXiv:1709.00516v1 Artificial intelligence is making great changes in academia and industry with the fast development of deep learning, a branch of machine learning and statistical learning. The fully convolutional network (FCN) [1] is the standard model for semantic segmentation. Conditional random fields coded as a CNN [2] or RNN [3] and connected with an FCN have been successfully applied in object detection [4]. In this paper, we introduce a multi-resolution neural network for FCN and apply a Gaussian filter to the extended CRF kernel neighborhood and the label image to reduce the oscillating effect of CRF neural network segmentation, thus achieving higher precision and faster training speed.
- The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20x reduction in model size and a 5x reduction in computing operations.
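The channel-selection step of network slimming can be sketched as follows, assuming (as in common implementations, though the abstract does not spell this out) that each channel carries a learnable scaling factor trained under an L1 sparsity penalty; pruning then keeps only the channels whose factors survive a magnitude ranking:

```python
import numpy as np

def prune_channels(gamma, keep_ratio=0.5):
    """Rank per-channel scaling factors by magnitude and return the sorted
    indices of the channels to keep; the rest are pruned, yielding the
    thinner model described in the abstract."""
    k = max(1, int(len(gamma) * keep_ratio))
    keep = np.argsort(np.abs(gamma))[::-1][:k]   # largest-|gamma| channels
    return np.sort(keep)
```

For example, with factors [0.9, 0.01, 0.5, 0.02] and a 50% keep ratio, channels 0 and 2 survive and the two near-zero channels are removed.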
- In this note we construct a series of small subsets containing a non-d-th power element in a finite field by applying certain bounds on incomplete character sums. Precisely, let $h=\lfloor q^{\delta}\rfloor>1$ and $d\mid q^h-1$. Let $r$ be a prime divisor of $q-1$ such that the largest prime power part of $q-1$ has the form $r^s$. Then there is a constant $0<\epsilon<1$ such that for a ratio at least ${q^{-\epsilon h}}$ of $\alpha\in \mathbb{F}_{q^{h}} \backslash\mathbb{F}_{q}$, the set $S=\{ \alpha-x^t, x\in\mathbb{F}_{q}\}$ of cardinality $1+\frac {q-1} {M(h)}$ contains a non-d-th power in $\mathbb{F}_{q^{\lfloor q^\delta\rfloor}}$, where $t$ is the largest power of $r$ such that $t<\sqrt{q}/h$ and $M(h)$ is defined as $$M(h)=\max_{r\mid (q-1)} r^{\min\{v_r(q-1),\, \lfloor \log_r q/2-\log_r h\rfloor\}}.$$ Here $r$ runs through the prime divisors of $q-1$ and $v_r(x)$ is the $r$-adic order of $x$. For odd $q$, the choice of $\delta=\frac 12-d, d=o(1)>0$ shows that there exists an explicit subset of cardinality $q^{1-d}=O(\log^{2+\epsilon'}(q^h))$ containing a non-quadratic element in the field $\mathbb{F}_{q^h}$. On the other hand, the choice of $h=2$ shows that for any odd prime power $q$, there is an explicit subset of cardinality $1+\frac {q-1}{M(2)}$ containing a non-quadratic element in $\mathbb{F}_{q^2}$. This improves a $q-1$ construction by Coulter and Kosick [CK] since $\lfloor \log_2{(q-1)}\rfloor\leq M(2) < \sqrt{q}$. In addition, we obtain a similar construction for small sets containing a primitive element. The construction works well provided $\phi(q^h-1)$ is very small, where $\phi$ is Euler's totient function.
- Aug 22 2017 cs.DC arXiv:1708.05746v1 Spark is an in-memory analytics platform that targets commodity server environments today. It relies on the Hadoop Distributed File System (HDFS) to persist intermediate checkpoint states and final processing results. In Spark, immutable data are used for storing data updates in each iteration, making it inefficient for long-running, iterative workloads. A non-deterministic garbage collector further worsens this problem. Sparkle is a library that optimizes memory usage in Spark. It exploits large shared memory to achieve better data shuffling and intermediate storage. Sparkle replaces the current TCP/IP-based shuffle with a shared-memory approach and proposes an off-heap memory store for efficient updates. We performed a series of experiments on scale-out clusters and scale-up machines. The optimized shuffle engine leveraging shared memory provides 1.3x to 6x faster performance relative to vanilla Spark. The off-heap memory store along with the shared-memory shuffle engine provides a more than 20x performance increase on a probabilistic graph processing workload that uses a large-scale real-world hyperlink graph. While Sparkle benefits most from running on large-memory machines, it also achieves 1.6x to 5x performance improvements over a scale-out cluster with an equivalent hardware setting.
- Aug 21 2017 cs.CL arXiv:1708.05466v1 High accuracy speech recognition requires a large amount of transcribed data for supervised training. In the absence of such data, domain adaptation of a well-trained acoustic model can be performed, but even here, high accuracy usually requires significant labeled data from the target domain. In this work, we propose an approach to domain adaptation that does not require transcriptions but instead uses a corpus of unlabeled parallel data, consisting of pairs of samples from the source domain of the well-trained model and the desired target domain. To perform adaptation, we employ teacher/student (T/S) learning, in which the posterior probabilities generated by the source-domain model are used in lieu of labels to train the target-domain model. We evaluate the proposed approach in two scenarios: adapting a clean acoustic model to noisy speech, and adapting an adult-speech acoustic model to children's speech. Significant improvements in accuracy are obtained, with reductions in word error rate of up to 44% over the original source model, without the need for transcribed data in the target domain. Moreover, we show that increasing the amount of unlabeled data results in additional model robustness, which is particularly beneficial when using simulated training data in the target domain.
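The T/S objective replaces hard labels with the teacher's posteriors: on each parallel pair, the student (fed target-domain data) is trained to match the teacher's output distribution (computed on the source-domain sample), e.g. via a per-frame KL divergence. A minimal numpy sketch of that loss (the exact divergence used is an assumption; matching posteriors with cross-entropy is equivalent up to a constant):

```python
import numpy as np

def ts_loss(teacher_logits, student_logits):
    """Teacher/student loss: mean per-frame KL divergence between the
    teacher's posteriors (soft labels) and the student's posteriors."""
    def softmax(z):
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    p = softmax(teacher_logits)   # soft labels from the source-domain model
    q = softmax(student_logits)   # target-domain model's predictions
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean())
```

The loss is zero exactly when the student reproduces the teacher's posteriors, and positive otherwise, so no transcriptions are needed.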
- Aug 17 2017 cs.CR arXiv:1708.04749v1 Relationship-based access control (ReBAC) provides a high level of expressiveness and flexibility that promotes security and information sharing. We formulate ReBAC as an object-oriented extension of attribute-based access control (ABAC) in which relationships are expressed using fields that refer to other objects, and path expressions are used to follow chains of relationships between objects. ReBAC policy mining algorithms have the potential to significantly reduce the cost of migration from legacy access control systems to ReBAC, by partially automating the development of a ReBAC policy from an existing access control policy and attribute data. This paper presents an algorithm for mining ReBAC policies from access control lists (ACLs) and attribute data represented as an object model, and an evaluation of the algorithm on four sample policies and two large case studies. Our algorithm can be adapted to mine ReBAC policies from access logs and object models. It is the first algorithm for these problems.
- As an inevitable trend for future 5G networks, the software-defined architecture has many advantages in providing centralized control and flexible resource management. But it is also confronted with various security challenges and potential threats from emerging services and technologies. As the focus of network security, Intrusion Detection Systems (IDS) are usually deployed separately without collaboration. They are also unable to detect novel attacks with their limited intelligence, which makes it hard to meet the needs of software-defined 5G. In this paper, we propose an intelligent intrusion detection system that takes advantage of software-defined technology and artificial intelligence, based on a software-defined 5G architecture. It flexibly combines security function modules, which are adaptively invoked under centralized management and control with a global view. It can also deal with unknown intrusions by using machine learning algorithms. Evaluation results show that the intelligent intrusion detection system achieves better performance.
- We present the Deeply Supervised Object Detector (DSOD), a framework that can learn object detectors from scratch. State-of-the-art object detectors rely heavily on off-the-shelf networks pre-trained on large-scale classification datasets like ImageNet, which incurs learning bias due to the differences in both the loss functions and the category distributions between the classification and detection tasks. Model fine-tuning for the detection task could alleviate this bias to some extent, but not fundamentally. Besides, transferring pre-trained models from classification to detection between discrepant domains is even more difficult (e.g. from RGB to depth images). A better solution to tackle these two critical problems is to train object detectors from scratch, which motivates our proposed DSOD. Previous efforts in this direction mostly failed, due to much more complicated loss functions and limited training data in object detection. In DSOD, we contribute a set of design principles for training object detectors from scratch. One of the key findings is that deep supervision, enabled by dense layer-wise connections, plays a critical role in learning a good detector. Combined with several other principles, we develop DSOD following the single-shot detection (SSD) framework. Experiments on the PASCAL VOC 2007, 2012 and MS COCO datasets demonstrate that DSOD can achieve better results than the state-of-the-art solutions with much more compact models. For instance, DSOD outperforms SSD on all three benchmarks with real-time detection speed, while requiring only 1/2 the parameters of SSD and 1/10 the parameters of Faster R-CNN. Our code and models are available at: https://github.com/szq0214/DSOD .
- Aug 04 2017 cs.CV arXiv:1708.01001v1 Low-bit deep neural networks (DNNs) are critical for embedded applications due to their low storage requirements and computing efficiency. However, they suffer from a non-negligible accuracy drop. This paper proposes the stochastic quantization (SQ) algorithm for learning accurate low-bit DNNs. The motivation comes from the following observation. Existing training algorithms approximate the real-valued elements/filters with a low-bit representation all together in each iteration. The quantization errors may be small for some elements/filters but remarkable for others, which leads to inappropriate gradient directions during training and thus a notable accuracy drop. Instead, SQ quantizes a portion of the elements/filters to low-bit with a stochastic probability inversely proportional to the quantization error, while keeping the other portion unchanged in full precision. The quantized and full-precision portions are updated with their corresponding gradients separately in each iteration. The SQ ratio is gradually increased until the whole network is quantized. This procedure can greatly compensate for the quantization error and thus yield better accuracy for low-bit DNNs. Experiments show that SQ can consistently and significantly improve accuracy for different low-bit DNNs on various datasets and network structures.
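One SQ selection step can be sketched as follows. The per-filter binary code (sign times mean magnitude) and the exact sampling scheme are illustrative assumptions; the essential idea taken from the abstract is choosing which filters to quantize with probability inversely proportional to their quantization error:

```python
import numpy as np

def stochastic_quantize(w, sq_ratio=0.5, rng=None):
    """One SQ step (sketch): quantize a fraction sq_ratio of the filters
    (rows of w) to a binary code, sampling filters with probability
    inversely proportional to quantization error; the rest stay
    full-precision."""
    rng = rng or np.random.default_rng(0)
    q = np.sign(w) * np.abs(w).mean(axis=1, keepdims=True)  # per-filter binary code
    err = np.linalg.norm(w - q, axis=1)                     # quantization error
    inv = 1.0 / (err + 1e-12)
    probs = inv / inv.sum()                                 # inversely proportional
    n_q = int(round(len(w) * sq_ratio))
    picked = rng.choice(len(w), size=n_q, replace=False, p=probs)
    out = w.copy()
    out[picked] = q[picked]
    return out, picked
```

Filters with small quantization error are more likely to be quantized first, so the gradient directions computed on the mixed network stay close to the full-precision ones.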
- Covert communication aims to hide the very existence of wireless transmissions in order to guarantee a strong security in wireless networks. In this work, we examine the possibility and achievable performance of covert communication in one-way relay networks. Specifically, the relay is greedy and opportunistically transmits its own information to the destination covertly on top of forwarding the source's message, while the source tries to detect this covert transmission to discover the illegitimate usage of the resource (e.g., power, spectrum) allocated only for the purpose of forwarding source's information. We propose two strategies for the relay to transmit its covert information, namely fixed-rate and fixed-power transmission schemes, for which the source's detection limits are analysed in terms of the false alarm and miss detection rates and the achievable effective covert rates from the relay to destination are derived. Our examination determines the conditions under which the fixed-rate transmission scheme outperforms the fixed-power transmission scheme, and vice versa, which enables the relay to achieve the maximum effective covert rate. Our analysis indicates that the relay has to forward the source's message to shield its covert transmission and the effective covert rate increases with its forwarding ability (e.g., its maximum transmit power).
- Training deep learning based video classifiers for action recognition requires a large amount of labeled videos. The labeling process is labor-intensive and time-consuming. On the other hand, large amounts of weakly-labeled images are uploaded to the Internet by users every day. To harness this rich and highly diverse set of Web images, a scalable approach is to crawl these images to train deep learning based classifiers, such as Convolutional Neural Networks (CNNs). However, due to the domain shift problem, the performance of deep classifiers trained on Web images tends to degrade when they are directly deployed on videos. One way to address this problem is to fine-tune the trained models on videos, but a sufficient amount of annotated videos is still required. In this work, we propose a novel approach to transfer knowledge from the image domain to the video domain. The proposed method can adapt to the target domain (i.e. video data) with a limited amount of training data. Our method maps the video frames into a low-dimensional feature space using the class-discriminative spatial attention maps of CNNs. We design a novel Siamese EnergyNet structure to learn energy functions on the attention maps by jointly optimizing two loss functions, such that the attention map corresponding to a ground-truth concept has higher energy. We conduct extensive experiments on two challenging video recognition datasets (i.e. TVHI and UCF101), and demonstrate the efficacy of our proposed method.
- Aug 03 2017 cs.CV arXiv:1708.00634v1 Since the beginning of early civilizations, the social relationships derived from each individual have fundamentally formed the basis of the social structure in our daily life. In the computer vision literature, much progress has been made in scene understanding, such as object detection and scene parsing. Recent research has focused on the relationships between objects based on functionality and geometric relations. In this work, we aim to study the problem of social relationship recognition in still images. We propose a dual-glance model for social relationship recognition, where the first glance fixates on the individual pair of interest and the second glance deploys an attention mechanism to explore contextual cues. We have also collected a new large-scale People in Social Context (PISC) dataset, which comprises 22,670 images and 76,568 annotated samples from 9 types of social relationship. We provide benchmark results on the PISC dataset, and qualitatively demonstrate the efficacy of the proposed model.
- Compared to basic fork-join queues, a job in an (n, k) fork-join queue only needs k out of its n sub-tasks to be finished. Since (n, k) fork-join queues are prevalent in modern popular distributed systems such as Cassandra, MapReduce and Spark, estimating the sojourn time of such queues is critical for performance measurement and resource planning in these systems. However, this estimation has remained a well-known open challenge for years, and only rough bounds for a partial range of load factors have been given. In this paper, we develop a closed-form linear transformation technique for identically distributed random variables: an order statistic can be represented by a linear combination of maxima. This new technique is then used to transform the sojourn time of a non-purging (n, k) fork-join queue into a linear combination of the sojourn times of basic (k, k)~(n, n) fork-join queues. Consequently, existing approximations for basic fork-join queues can be extended to approximations for non-purging (n, k) fork-join queues. The resulting approximations are then used to improve the upper bounds for purging (n, k) fork-join queues. Simulation experiments show that this linear transformation approach works well for moderate n and relatively large k.
- Unsupervised single-channel overlapped speech recognition is one of the hardest problems in automatic speech recognition (ASR). Permutation invariant training (PIT) is a state of the art model-based approach, which applies a single neural network to solve this single-input, multiple-output modeling problem. We propose to advance the current state of the art by imposing a modular structure on the neural network, applying a progressive pretraining regimen, and improving the objective function with transfer learning and a discriminative training criterion. The modular structure splits the problem into three sub-tasks: frame-wise interpreting, utterance-level speaker tracing, and speech recognition. The pretraining regimen uses these modules to solve progressively harder tasks. Transfer learning leverages parallel clean speech to improve the training targets for the network. Our discriminative training formulation is a modification of standard formulations, that also penalizes competing outputs of the system. Experiments are conducted on the artificial overlapped Switchboard and hub5e-swb dataset. The proposed framework achieves over 30% relative improvement of WER over both a strong jointly trained system, PIT for ASR, and a separately optimized system, PIT for speech separation with clean speech ASR model. The improvement comes from better model generalization, training efficiency and the sequence level linguistic knowledge integration.
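The core of the PIT baseline being improved here is a loss that is invariant to the speaker ordering of the output streams: evaluate every assignment of outputs to references and train on the best one. A frame-level MSE sketch (the paper's actual criterion operates on senone posteriors with the further modifications described above):

```python
import numpy as np
from itertools import permutations

def pit_loss(outputs, targets):
    """Permutation invariant training loss (sketch): score every
    output-to-reference assignment with MSE and return the minimum,
    so the network need not commit to a fixed speaker order."""
    S = len(outputs)
    best = float("inf")
    for perm in permutations(range(S)):
        loss = np.mean([np.mean((outputs[i] - targets[p]) ** 2)
                        for i, p in enumerate(perm)])
        best = min(best, loss)
    return best
```

If the network emits the two speakers in swapped order, the swapped permutation scores zero, so the model is not penalized for an arbitrary output ordering.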
- With over-deployed network infrastructures, network densification is shown to hinder the improvement of user experience and system performance. In this paper, we adopt multi-antenna techniques to overcome the bottleneck and investigate the performance of single-user beamforming, an effective method to enhance desired signal power, in small cell networks from the perspective of user coverage probability (CP) and network spatial throughput (ST). Pessimistically, it is proved that, even when multi-antenna techniques are applied, both CP and ST would be degraded and even asymptotically diminish to zero with the increasing base station (BS) density. Moreover, the results also reveal that the increase of ST is at the expense of the degradation of CP. Therefore, to balance the tradeoff between user and system performance, we further study the critical density, under which ST could be maximized under the CP constraint. Accordingly, the impact of key system parameters on critical density is quantified via the derived closed-form expression. Especially, the critical density is shown to be inversely proportional to the square of antenna height difference between BSs and users. Meanwhile, single-user beamforming, albeit incapable of improving CP and ST scaling laws, is shown to significantly increase the critical density, compared to the single-antenna regime.
- In this paper, the performance of uplink spectral efficiency in massive multiple-input multiple-output (MIMO) over a spatially correlated Ricean fading channel is presented. The maximum ratio combining (MRC) receiver is employed at the base station (BS) for two different methods of channel estimation. The first method is based on pilot-assisted linear minimum mean square error (LMMSE) estimation; the second is based on the line-of-sight (LOS) part. The respective analytical expressions of the uplink data rate are given for these two methods. Due to the existence of pilot contamination, the uplink data rate of the pilot-assisted LMMSE estimation method approaches a finite value (which we call the asymptotic rate) when the BS antenna number is very large. However, the data rate of the LOS method grows linearly with the number of BS antennas. The expression of the uplink rate of the LOS method also shows that, for the Ricean channel, the spatial correlation between the BS antennas may not only decrease the rate but also increase it, depending on the locations of the users. This conclusion explains why the spatial correlation may increase, rather than decrease, the data rate of pilot-assisted LMMSE estimation. We also discuss the power scaling laws of the two methods; their asymptotic expressions are the same and both independent of the antenna correlation.
- Jul 07 2017 cs.CV arXiv:1707.01629v2 In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of internal connection paths. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Connected Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new feature exploration, both of which are important for learning good representations. To enjoy the benefits of both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through its dual-path architecture. Extensive experiments on three benchmark datasets, ImageNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over the state of the art. In particular, on the ImageNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single-model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model across various applications.
- Jul 07 2017 cs.RO arXiv:1707.01532v1 In this paper, we develop a high-dimensional map building technique that incorporates raw pixelated semantic measurements into the map representation. The proposed technique uses Gaussian Process (GP) multi-class classification for map inference and is the natural extension of GP occupancy maps from binary to multi-class form. The technique exploits the continuous property of GPs and, as a result, the map can be inferred at any resolution. In addition, the proposed GP Semantic Map (GPSM) learns the structural and semantic correlation from measurements rather than resorting to assumptions, and can flexibly learn the spatial correlation as well as any additional non-spatial correlation between map points. We extend the OctoMap to a Semantic OctoMap representation and compare its mapping performance with that of the GPSM using the NYU Depth V2 dataset. Evaluations of the proposed technique on multiple partially labeled RGBD scans and labels from noisy image segmentation show that the GP semantic map can handle sparse measurements, missing labels in the point cloud, and noise-corrupted labels.
- Jul 05 2017 cs.CV arXiv:1707.00811v1 Fine-Grained Visual Categorization (FGVC) has achieved significant progress recently. However, the number of fine-grained species can be huge and dynamically increasing in real scenarios, making it difficult to recognize unseen objects under the current FGVC framework. This raises the open issue of performing large-scale fine-grained identification without a complete training set. To address this issue, we propose a retrieval task named One-Shot Fine-Grained Instance Retrieval (OSFGIR). "One-Shot" denotes the ability to identify unseen objects through a fine-grained retrieval task assisted by an incomplete auxiliary training set. This paper first presents a detailed description of the OSFGIR task and our collected OSFGIR-378K dataset. Next, we propose the Convolutional and Normalization Networks (CN-Nets), learned on the auxiliary dataset, to generate a concise and discriminative representation. Finally, we present a coarse-to-fine retrieval framework consisting of three components: coarse retrieval, fine-grained retrieval, and query expansion. The framework progressively retrieves images with similar semantics and performs fine-grained identification. Experiments show our OSFGIR framework achieves significantly better accuracy and efficiency than existing FGVC and image retrieval methods, and could thus be a better solution for large-scale fine-grained object identification.
- Jul 05 2017 cs.CV arXiv:1707.00798v2 Learning discriminative representations for unseen person images is critical for person Re-Identification (ReID). Most current approaches learn deep representations in classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations commonly focus on several body parts discriminative for the training set, rather than on the entire human body. Inspired by the structural risk minimization principle in SVM, we revise the traditional deep representation learning procedure to minimize both the empirical classification risk and the representation learning risk. The representation learning risk is evaluated by the proposed part loss, which automatically generates several parts for an image and computes the person classification loss on each part separately. Compared with the traditional global classification loss, simultaneously considering multiple part losses enforces the deep network to focus on the entire human body and learn discriminative representations for different parts. Experimental results on three datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.
- Generative Adversarial Networks (GANs) have recently been proposed as a promising avenue towards learning generative models with deep neural networks. While GANs have demonstrated state-of-the-art performance on multiple vision tasks, their learning dynamics are not yet well understood, both in theory and in practice. To address this issue, we take a first step towards a rigorous study of GAN dynamics. We propose a simple model that exhibits several of the common problematic convergence behaviors (e.g., vanishing gradient, mode collapse, diverging or oscillatory behavior) and still allows us to establish the first convergence bounds for parametric GAN dynamics. We find an interesting dichotomy: a GAN with an optimal discriminator provably converges, while a first order approximation of the discriminator leads to unstable GAN dynamics and mode collapse. Our model and analysis point to a specific challenge in practical GAN training that we call discriminator collapse.
- Jun 29 2017 cs.CV arXiv:1706.09274v2 We took part in the YouTube-8M Video Understanding Challenge hosted on Kaggle and achieved 10th place within less than one month. In this paper, we present an extensive analysis of and solution to the underlying machine-learning problem based on frame-level data, identifying major challenges and proposing corresponding preliminary methods. Notably, the proposed strategies together with a uniformly-averaging multi-crop ensemble were sufficient for us to reach our ranking. We also report methods we believe to be promising but did not have enough time to train to convergence. We hope this paper can serve, to some extent, as a review of and guideline for the YouTube-8M multi-label video classification benchmark, inspiring future attempts and research.
- Jun 20 2017 physics.soc-ph cs.DL arXiv:1706.05858v1 The features of collaboration behaviors are often considered to differ from discipline to discipline. Meanwhile, collaboration among disciplines is a salient feature of modern scientific research, and it has incubated several interdisciplinary fields, such as sustainability science. The features of collaborations within and among the disciplines of the biological, physical and social sciences are analyzed based on 52,803 papers published in the multidisciplinary journal PNAS from 1999 to 2013. Regarding similarities, the data exhibit similar transitivity and assortativity of collaboration behaviors, as well as identical distribution types of collaborators per author and of papers per author. Regarding interactions, the data show a considerable proportion of authors engaging in interdisciplinary research, and the more collaborators and papers an author has, the more likely the author is to pursue interdisciplinary research. The analysis of the paper contents illustrates that the development of each science category has a long-run equilibrium relationship with the developments of typical research paradigms and transdisciplinary disciplines. Hence, those unified methodologies can be viewed as grounds for the interactions.
- Jun 19 2017 cs.CV arXiv:1706.05274v2 Detecting small objects is notoriously challenging due to their low resolution and noisy representation. Existing object detection pipelines usually detect small objects by learning representations of all the objects at multiple scales. However, the performance gain of such ad hoc architectures is usually limited and rarely pays off the computational cost. In this work, we address the small-object detection problem by developing a single architecture that internally lifts representations of small objects to "super-resolved" ones, achieving characteristics similar to those of large objects and thus becoming more discriminative for detection. For this purpose, we propose a new Perceptual Generative Adversarial Network (Perceptual GAN) model that improves small-object detection by narrowing the representation difference between small objects and large ones. Specifically, its generator learns to transfer perceived poor representations of small objects to super-resolved ones that are similar enough to real large objects to fool a competing discriminator. Meanwhile, its discriminator competes with the generator to identify the generated representations and imposes an additional perceptual requirement on the generator: generated representations of small objects must be beneficial for detection. Extensive evaluations on the challenging Tsinghua-Tencent 100K and Caltech benchmarks demonstrate the superiority of Perceptual GAN in detecting small objects, including traffic signs and pedestrians, over well-established state-of-the-art methods.
- Jun 15 2017 cs.LG arXiv:1706.04371v3 Dimensionality reduction techniques play important roles in the analysis of big data. Traditional approaches, such as principal component analysis (PCA) and linear discriminant analysis (LDA), have been studied extensively over the past few decades. However, as the dimensionality of data increases, the computational cost of traditional dimensionality reduction methods grows rapidly and can become prohibitively expensive. These drawbacks have triggered the development of random projection (RP) techniques, which map high-dimensional data onto a low-dimensional subspace at greatly reduced time cost. However, the RP transformation matrix is generated without considering the intrinsic structure of the original data and usually leads to relatively high distortion. Therefore, in recent years, methods based on RP have been proposed to address this problem. In this paper, we summarize the methods used in different situations to help practitioners employ the proper techniques for their specific applications. Meanwhile, we enumerate the benefits and limitations of the various methods and provide further references for researchers to develop novel RP-based approaches.
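The basic random-projection idea this survey builds on can be sketched in a few lines: a data-oblivious Gaussian matrix maps d-dimensional points to k dimensions while approximately preserving pairwise distances (the Johnson-Lindenstrauss property). A minimal sketch with made-up sizes:

```python
import numpy as np

# Random projection: the matrix R is drawn without looking at the data,
# which is exactly why it can distort structure the data actually has.
rng = np.random.default_rng(0)
n, d, k = 100, 10_000, 512

X = rng.normal(size=(n, d))                # high-dimensional data
R = rng.normal(size=(d, k)) / np.sqrt(k)   # data-oblivious RP matrix
Y = X @ R                                  # low-dimensional embedding

# Check the distortion of one pairwise distance.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(f"relative distortion: {abs(proj - orig) / orig:.3f}")
```

With k on the order of a few hundred, the relative distortion of each pairwise distance is typically only a few percent, independent of d.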
- Jun 14 2017 cs.CV arXiv:1706.04062v2 Obesity treatment requires obese patients to record all food intake each day. Computer vision has been introduced to estimate calories from food images. To increase detection accuracy and reduce the error of volume estimation in food calorie estimation, we present our calorie estimation method in this paper. To estimate the calories in a food item, a top view and a side view are needed. Faster R-CNN is used to detect the food and a calibration object. The GrabCut algorithm is used to obtain each food's contour. The volume is then estimated from the food contour and the calibration object, and finally each food's calories are estimated. Experimental results show that our estimation method is effective.
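The calibration arithmetic behind such a pipeline can be illustrated in a few lines. This is a hypothetical back-of-envelope sketch, not the paper's method: the detection (Faster R-CNN) and segmentation (GrabCut) steps are omitted, and all numbers (reference-object size, pixel counts, energy density) are made up for illustration.

```python
# A known-size calibration object fixes the pixels-to-cm scale; the food
# volume is then approximated as top-view area times side-view height.
COIN_DIAMETER_CM = 2.5        # assumed reference object
coin_diameter_px = 50         # measured in the top-view image (made up)
scale = COIN_DIAMETER_CM / coin_diameter_px   # cm per pixel

food_area_px = 12_000         # contour area in the top view (made up)
food_height_px = 80           # vertical extent in the side view (made up)
area_cm2 = food_area_px * scale ** 2
height_cm = food_height_px * scale
volume_cm3 = area_cm2 * height_cm             # crude prism approximation

CAL_PER_CM3 = 0.9             # assumed energy density, roughly apple-like
print(f"estimated calories: {volume_cm3 * CAL_PER_CM3:.0f}")
```

The real system refines this with per-food-shape volume models, but the scale-from-calibration-object step is the same.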
- Consider the following random process: we are given $n$ queues, into which elements of increasing labels are inserted uniformly at random. To remove an element, we pick two queues at random, and remove the element of lower label (higher priority) among the two. The cost of a removal is the rank of the label removed, among labels still present in any of the queues, that is, the distance from the optimal choice at each step. Variants of this strategy are prevalent in state-of-the-art concurrent priority queue implementations. Nonetheless, it is not known whether such implementations provide any rank guarantees, even in a sequential model. We answer this question, showing that this strategy provides surprisingly strong guarantees: Although the single-choice process, where we always insert and remove from a single randomly chosen queue, has degrading cost, going to infinity as we increase the number of steps, in the two choice process, the expected rank of a removed element is $O( n )$ while the expected worst-case cost is $O( n \log n )$. These bounds are tight, and hold irrespective of the number of steps for which we run the process. The argument is based on a new technical connection between "heavily loaded" balls-into-bins processes and priority scheduling. Our analytic results inspire a new concurrent priority queue implementation, which improves upon the state of the art in terms of practical performance.
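The two-choice process above is easy to simulate directly. The following sequential sketch (not the paper's concurrent implementation) measures the average rank cost per removal; parameters are arbitrary:

```python
import heapq
import random

def two_choice_avg_rank(n=8, steps=2000, seed=0):
    """Average rank cost of the two-choice removal process over n queues."""
    rng = random.Random(seed)
    queues = [[] for _ in range(n)]
    # Insert elements of increasing label into uniformly random queues.
    for label in range(steps):
        heapq.heappush(queues[rng.randrange(n)], label)
    total, removals = 0, 0
    for _ in range(steps):
        a, b = rng.randrange(n), rng.randrange(n)
        nonempty = [q for q in (queues[a], queues[b]) if q]
        if not nonempty:
            continue
        q = min(nonempty, key=lambda h: h[0])   # queue with smaller front label
        label = q[0]
        # Cost = rank of the removed label among all labels still present.
        total += sum(1 for h in queues for l in h if l < label)
        heapq.heappop(q)
        removals += 1
    return total / removals

print(two_choice_avg_rank())
```

Consistent with the stated bounds, the average rank stays O(n) regardless of how long the process runs, in contrast to the single-choice process whose cost grows with the number of steps.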
- Network embedding leverages node proximity to learn a low-dimensional vector representation for each node. The learned embeddings can advance various learning tasks such as node classification, network clustering, and link prediction. Most, if not all, of the existing work is performed in the context of plain and static networks. Nonetheless, in reality, network structure often evolves over time with the addition/deletion of links and nodes. Also, a vast majority of real-world networks are associated with a rich set of node attributes, whose values also naturally change as new content emerges and old content fades. These changing characteristics motivate us to seek an effective embedding representation that captures network and attribute evolution patterns, which is of fundamental importance for learning in a dynamic environment. To the best of our knowledge, we are the first to tackle this problem, which poses two challenges: (1) the inherently correlated network and node attributes can be noisy and incomplete, necessitating a robust consensus representation that captures their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly. In this paper, we tackle this problem by proposing a novel dynamic attributed network embedding framework, DANE. In particular, DANE first provides an offline method for a consensus embedding and then leverages matrix perturbation theory to maintain the freshness of the embedding results in an online manner. We perform extensive experiments on both synthetic and real attributed networks to corroborate the effectiveness and efficiency of the proposed framework.
- In this paper, we focus on fraud detection on a signed graph with only a small set of labeled training data. We propose a novel framework that combines deep neural networks and spectral graph analysis. In particular, we use the node projection (called the spectral coordinate) in the low-dimensional spectral space of the graph's adjacency matrix as the input to deep neural networks. Spectral coordinates in the spectral space capture the most useful topological information of the network. Due to the small dimension of the spectral coordinates (compared with the dimension of the adjacency matrix derived from the graph), training deep neural networks becomes feasible. We develop and evaluate two neural networks, a deep autoencoder and a convolutional neural network, in our fraud detection framework. Experimental results on a real signed graph show that our spectrum-based deep neural networks are effective in fraud detection.
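The spectral-coordinate construction can be sketched in a few lines: project each node onto the top-k eigenvectors of the (signed) adjacency matrix and feed the resulting k-dimensional rows to a downstream classifier. The tiny signed graph below is made up for illustration; the choice of k and the eigenvalue ordering are assumptions, not the paper's exact recipe.

```python
import numpy as np

# Signed adjacency matrix of a made-up 4-node graph (+1 trust, -1 distrust).
A = np.array([
    [ 0,  1,  1, -1],
    [ 1,  0,  1, -1],
    [ 1,  1,  0,  1],
    [-1, -1,  1,  0],
], dtype=float)

k = 2
eigvals, eigvecs = np.linalg.eigh(A)        # A is symmetric, so eigh applies
order = np.argsort(np.abs(eigvals))[::-1]   # largest-magnitude eigenvalues first
coords = eigvecs[:, order[:k]]              # node i -> coords[i], a k-dim vector
print(coords.shape)                         # one k-dim spectral coordinate per node
```

Each node is now a k-dimensional point instead of an n-dimensional adjacency row, which is what makes training deep networks on large graphs feasible.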
- We study the combinatorial pure exploration problem Best-Set in stochastic multi-armed bandits. In a Best-Set instance, we are given $n$ arms with unknown reward distributions, as well as a family $\mathcal{F}$ of feasible subsets over the arms. Our goal is to identify the feasible subset in $\mathcal{F}$ with the maximum total mean using as few samples as possible. The problem generalizes the classical best arm identification problem and the top-$k$ arm identification problem, both of which have attracted significant attention in recent years. We provide a novel instance-wise lower bound for the sample complexity of the problem, as well as a nontrivial sampling algorithm, matching the lower bound up to a factor of $\ln|\mathcal{F}|$. For an important class of combinatorial families, we also provide a polynomial-time implementation of the sampling algorithm, using the equivalence of separation and optimization for convex programs, and approximate Pareto curves in multi-objective optimization. We also show that the $\ln|\mathcal{F}|$ factor is inevitable in general through a nontrivial lower bound construction. Our results significantly improve upon previous results for several important combinatorial constraints, and provide a tighter understanding of the general Best-Set problem. We further introduce an even more general problem, formulated in geometric terms. We are given $n$ Gaussian arms with unknown means and unit variance. Consider the $n$-dimensional Euclidean space $\mathbb{R}^n$, and a collection $\mathcal{O}$ of disjoint subsets. Our goal is to determine the subset in $\mathcal{O}$ that contains the $n$-dimensional vector of the means. The problem generalizes most pure exploration bandit problems studied in the literature. We provide the first nearly optimal sample complexity upper and lower bounds for the problem.
- This paper considers the slotted ALOHA protocol in a communication channel shared by N users. The channel is assumed to have the multiple-packet reception (MPR) capability, which allows the correct reception of up to M ($1 \leq M < N$) time-overlapping packets. To evaluate reliability in the scenario where a packet must be delivered within a strict deadline of D ($D \geq 1$) slots after its arrival at the head of the queue, we consider the successful delivery probability (SDP) of a packet as the performance metric of interest. We derive the optimal transmission probability that maximizes the SDP for any $1 \leq M < N$ and any $D \geq 1$, and show that it can be computed by a fixed-point iteration. In particular, the case of D = 1 (i.e., throughput maximization) is completely addressed for the first time in this paper. Based on these theoretical results, for real-life scenarios where N may be unknown and changing, we develop a distributed algorithm that enables each user to tune its transmission probability at runtime according to an estimate of N. Simulation results show that the proposed algorithm is effective in dynamic scenarios, with near-optimal performance.
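For intuition, the classical single-packet-reception special case (M = 1, D = 1) of the setup above has a well-known closed form: with N users each transmitting with probability p, a slot succeeds iff exactly one user transmits, so the success probability is S(p) = N p (1 - p)^(N - 1), maximized at p = 1/N. A quick numerical check (a sketch of this textbook case only, not the paper's general fixed-point iteration for M > 1):

```python
import numpy as np

def success_prob(p, N):
    """Per-slot success probability of slotted ALOHA with M = 1."""
    return N * p * (1 - p) ** (N - 1)

N = 20
grid = np.linspace(0.001, 0.999, 9999)
p_star = grid[np.argmax(success_prob(grid, N))]
print(p_star)   # should sit close to 1/N = 0.05
```

The paper's contribution is the generalization of this kind of optimum to M-packet reception and deadline D, where no closed form is available and the fixed-point iteration takes over.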
- The multi-agent path-finding (MAPF) problem has recently received a lot of attention. However, it does not capture important characteristics of many real-world domains, such as automated warehouses, where agents are constantly engaged with new tasks. In this paper, we therefore study a lifelong version of the MAPF problem, called the multi-agent pickup and delivery (MAPD) problem. In the MAPD problem, agents have to attend to a stream of delivery tasks in an online setting. One agent has to be assigned to each delivery task. This agent has to first move to a given pickup location and then to a given delivery location while avoiding collisions with other agents. We present two decoupled MAPD algorithms, Token Passing (TP) and Token Passing with Task Swaps (TPTS). Theoretically, we show that they solve all well-formed MAPD instances, a realistic subclass of MAPD instances. Experimentally, we compare them against a centralized strawman MAPD algorithm without this guarantee in a simulated warehouse system. TP can easily be extended to a fully distributed MAPD algorithm and is the best choice when real-time computation is of primary concern since it remains efficient for MAPD instances with hundreds of agents and tasks. TPTS requires limited communication among agents and balances well between TP and the centralized MAPD algorithm.
- Jun 01 2017 cs.CL arXiv:1705.11160v1 In the past few years, attention mechanisms have become an indispensable component of end-to-end neural machine translation models. However, previous attention models always refer to some source words when predicting a target word, which contradicts the fact that some target words have no corresponding source words. Motivated by this observation, we propose a novel attention model capable of determining when a decoder should attend to source words and when it should not. Experimental results on NIST Chinese-English translation tasks show that the new model achieves an improvement of 0.8 BLEU over a state-of-the-art baseline.
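One simple way to realize "attending to nothing", sketched here as a toy illustration and not the paper's exact architecture, is to add a null candidate to the attention softmax: when the null position wins, the decoder effectively ignores all source words for that target word.

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(size=(5, 8))    # 5 encoder states, hidden dim 8 (made up)
null = np.zeros((1, 8))          # stand-in for a learned "null" state
keys = np.vstack([src, null])    # 6 attention candidates

query = rng.normal(size=(8,))    # decoder state for the current target word
scores = keys @ query
weights = np.exp(scores - scores.max())
weights /= weights.sum()         # standard softmax attention weights

attend = weights[:-1].sum() > weights[-1]   # did the real source words win?
context = weights @ keys                    # context vector, as usual
print(attend, context.shape)
```

In a trained model the null vector and a gating criterion would be learned jointly with the rest of the network; here both are fixed placeholders.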
- We develop an assume-guarantee contract framework for the design of cyber-physical systems, modeled as closed-loop control systems, under probabilistic requirements. We use a variant of signal temporal logic, namely, Stochastic Signal Temporal Logic (StSTL) to specify system behaviors as well as contract assumptions and guarantees, thus enabling automatic reasoning about requirements of stochastic systems. Given a stochastic linear system representation and a set of requirements captured by bounded StSTL contracts, we propose algorithms that can check contract compatibility, consistency, and refinement, and generate a controller to guarantee that a contract is satisfied, following a stochastic model predictive control approach. Our algorithms leverage encodings of the verification and control synthesis tasks into mixed integer optimization problems, and conservative approximations of probabilistic constraints that produce both sound and tractable problem formulations. We illustrate the effectiveness of our approach on a few examples, including the design of embedded controllers for aircraft power distribution networks.