This paper studies index coding with two senders. In this setup, the source messages are distributed among the senders, possibly with overlap. In addition, there are multiple receivers, each knowing some messages a priori (its side-information) and requesting one unique message, such that each message is requested by exactly one receiver. Index coding in this setup is called two-sender unicast index coding (TSUIC). The main goal is to find the shortest aggregate length of the encoded messages that still allows all receivers to decode their requested messages. First, we form three independent sub-problems of a TSUIC problem based on whether the messages requested by the receivers of each sub-problem are available at only one of the senders or at both. Then we express the optimal broadcast rate (the shortest normalized codelength) of the TSUIC problem as a function of the optimal broadcast rates of those independent sub-problems. In this way, we characterize the structure of TSUIC. For the proofs of our results, we apply the notion of confusion graphs and a code-forming technique. To color a confusion graph in TSUIC, we introduce a new graph-coloring approach, different from ordinary graph coloring, called two-sender graph coloring, and propose a way of grouping the vertices to analyze the number of colors used. Finally, we determine a class of TSUIC instances in which a certain type of side-information can be removed without affecting the optimal broadcast rate.
Computer poetry generation is our first step towards computer writing. Writing must have a theme. Current approaches using sequence-to-sequence models with attention often produce non-thematic poems. We present a conditional variational autoencoder with an augmented word2vec architecture that explicitly represents topic or theme information. This approach significantly improves the relevance of the generated poems by representing each line of the poem not only in a context-sensitive manner but also in a holistic way that is highly related to the given keyword and the learned topic. The proposed augmented word2vec model further improves rhythm and symmetry. We also present a straightforward evaluation metric, the RHYTHM score, to automatically measure the rule-consistency of generated poems. Tests show that 45.24% of the poems generated by our model are judged by humans to have been written by real people.
Scientific coauthorship, generated by collaborations and competitions among researchers, reflects effective organization of human resources. Researchers, their expected benefits from collaborations, and their cooperative costs constitute the elements of a game. Hence we propose a cooperative game model to explore the evolution mechanisms of scientific coauthorship networks. The model generates geometric hypergraphs, where the costs are modelled by spatial distances and the benefits are expressed by node reputations, i.e., geometric zones that depend on node position in space and time. Modelled cooperative strategies conditioned on positive benefit-minus-cost reflect the spatial reciprocity principle in collaborations, and generate high clustering and degree assortativity, two typical features of coauthorship networks. Modelled reputations generate the generalized Poisson parts and fat tails that appear in specific distributions of the empirical data, e.g., the paper team-size distribution. The combined effect of modelled costs and reputations reproduces the transitions that emerge in the degree distribution, in the correlation between degree and local clustering coefficient, etc. The model provides an example of how individual strategies induce network complexity, as well as an application of game theory to social affiliation networks.
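The abstract's core decision rule, cooperating only when benefit minus cost is positive, can be sketched in a few lines. The specific benefit and cost forms below (summed reputations, Euclidean distance) are illustrative assumptions, not the paper's exact model:

```python
import math

def cost(pos_a, pos_b):
    """Cooperative cost modelled as the spatial distance between two researchers."""
    return math.dist(pos_a, pos_b)

def benefit(rep_a, rep_b):
    """Expected benefit modelled (here, as an assumption) as summed reputations."""
    return rep_a + rep_b

def collaborate(pos_a, pos_b, rep_a, rep_b):
    """Spatial-reciprocity rule: cooperate iff benefit - cost > 0."""
    return benefit(rep_a, rep_b) - cost(pos_a, pos_b) > 0

# Nearby researchers with modest reputations collaborate...
near = collaborate((0.0, 0.0), (1.0, 0.0), 0.8, 0.7)
# ...while distant ones with the same reputations do not.
far = collaborate((0.0, 0.0), (10.0, 0.0), 0.8, 0.7)
```

Under such a rule, collaboration is concentrated among spatially close nodes, which is what produces the high clustering the abstract mentions.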
Falsely identifying different authors as one person is called a merging error in the name disambiguation of coauthorship networks. Research on the measurement and distribution of merging errors helps in collecting high-quality coauthorship networks. On the measurement side, we provide a Bayesian model to measure the errors through author similarity. We illustratively use the model and coauthor similarity to measure the errors caused by initial-based name disambiguation methods. The empirical result on large-scale coauthorship networks shows that using coauthor similarity cannot increase the accuracy of disambiguation by surname and the initial of the first given name. On the distribution side, expressing coauthorship data as hypergraphs and supposing the merging error rate is proportional to hyperdegree raised to an exponent, we find that hypergraphs with a range of network properties highly similar to those of low-merging-error hypergraphs can be constructed from high-merging-error hypergraphs. This implies that focusing on the error correction of high-hyperdegree nodes is a labor- and time-saving approach to improving the data quality for coauthorship network analysis.
This paper proposes an inductive semi-supervised learning method, called Smooth Neighbors on Teacher Graphs (SNTG). At each iteration during training, a graph is dynamically constructed based on the predictions of the teacher model, i.e., the implicit self-ensemble of models. The graph then serves as a similarity measure with respect to which the representations of "similar" neighboring points are learned to be smooth on the low-dimensional manifold. We achieve state-of-the-art results on semi-supervised learning benchmarks: the error rates are 9.89% for CIFAR-10 with 4,000 labels and 3.99% for SVHN with 500 labels. The improvements are particularly significant when labels are scarce. For non-augmented MNIST with only 20 labels, the error rate is reduced from the previous 4.81% to 1.36%. Our method is also effective under noisy supervision and shows robustness to incorrect labels.
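The two steps described above, building a graph from teacher predictions and smoothing neighbor representations, can be sketched as follows. The hard-label graph construction, the contrastive form of the smoothness term, and the margin value are our reading of the abstract, stated as assumptions rather than the paper's exact formulation:

```python
import numpy as np

def teacher_graph(teacher_probs):
    """W[i, j] = 1 if the teacher assigns samples i and j the same hard label."""
    labels = teacher_probs.argmax(axis=1)
    return (labels[:, None] == labels[None, :]).astype(float)

def sntg_loss(features, W, margin=1.0):
    """Contrastive smoothness over the teacher graph: pull same-label pairs
    together, push different-label pairs at least `margin` apart."""
    n = features.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(features[i] - features[j])
            total += W[i, j] * d**2 + (1 - W[i, j]) * max(0.0, margin - d)**2
            pairs += 1
    return total / pairs

# Toy batch: teacher thinks samples 0 and 1 share a class, sample 2 differs.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8]])
feats = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 0.0]])
W = teacher_graph(probs)
loss = sntg_loss(feats, W)
```

In the full method this term would be added to the usual supervised and consistency losses; here it is isolated for clarity.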
Oct 31 2017 cs.CL
Fine-grained entity typing aims to assign types arranged in a hierarchical structure to entity mentions in free text. Traditional distant-supervision-based methods employ a structured data source as weak supervision and do not need hand-labeled data, but they neglect the label noise in the automatically labeled training corpus. Although recent studies use many features to prune wrong data ahead of training, they suffer from error propagation and introduce much complexity. In this paper, we propose an end-to-end typing model, called the path-based attention neural model (PAN), to achieve noise-robust performance by leveraging the hierarchical structure of types. Experiments demonstrate its effectiveness.
Oct 31 2017 cs.DS
In this paper, we study the classical scheduling problem of minimizing the total weighted completion time on a single machine, with the constraint that one specific job must be scheduled at a specified position. We give dynamic programs with pseudo-polynomial running time and a fully polynomial-time approximation scheme (FPTAS).
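The objective and the positional constraint can be made concrete with a brute-force baseline (not the paper's dynamic program): enumerate all schedules that place the designated job at its fixed position and return the minimum total weighted completion time. The instance below is made up for illustration:

```python
from itertools import permutations

def total_weighted_completion(order, p, w):
    """Sum of w[j] * C[j], where C[j] is job j's completion time under `order`."""
    t, total = 0, 0
    for j in order:
        t += p[j]
        total += w[j] * t
    return total

def best_with_fixed_position(p, w, job, pos):
    """Minimum total weighted completion time over all schedules that place
    `job` at 0-indexed position `pos` (exponential time; for tiny instances)."""
    jobs = range(len(p))
    return min(total_weighted_completion(order, p, w)
               for order in permutations(jobs)
               if order[pos] == job)

p = [2, 1, 3]   # processing times
w = [3, 1, 2]   # weights
opt = best_with_fixed_position(p, w, job=2, pos=0)  # job 2 must run first
```

Without the positional constraint, Smith's rule (sort by w/p) solves the problem exactly; the constraint is what makes pseudo-polynomial dynamic programming and an FPTAS interesting.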
Millimeter wave (mmWave) communications have been considered a key technology for next-generation cellular systems and Wi-Fi networks because they provide orders-of-magnitude wider bandwidth than current wireless networks. Economical and energy-efficient analog/digital hybrid precoding and combining transceivers have often been proposed for mmWave massive multiple-input multiple-output (MIMO) systems to overcome the severe propagation loss of mmWave channels. One major shortcoming of existing solutions lies in the assumption of infinite- or high-resolution phase shifters (PSs) to realize the analog beamformers. However, low-resolution PSs are typically adopted in practice to reduce hardware cost and power consumption. Motivated by this fact, in this paper we investigate the practical design of hybrid precoders and combiners with low-resolution PSs in mmWave MIMO systems. In particular, we propose an iterative algorithm that successively designs the low-resolution analog precoder and combiner pair for each data stream, aiming at conditionally maximizing the spectral efficiency. The digital precoder and combiner are then computed based on the obtained effective baseband channel to further enhance the spectral efficiency. In an effort to achieve an even more hardware-efficient large antenna array, we also investigate the design of hybrid beamformers with one-bit-resolution (binary) PSs, and present a novel binary analog precoder and combiner optimization algorithm with quadratic complexity in the number of antennas. The proposed low-resolution hybrid beamforming design is further extended to multiuser MIMO communication systems. Simulation results demonstrate the performance advantages of the proposed algorithms over existing low-resolution hybrid beamforming designs, particularly in the one-bit-resolution PS scenario.
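The hardware constraint at the heart of this abstract is that each analog beamformer entry must be a unit-modulus phase shift drawn from a finite alphabet of 2^B phases for B-bit PSs. A simple baseline (not the paper's iterative algorithm) is to quantize an unconstrained precoder entry-wise to the nearest allowed phase; the function and example values below are illustrative:

```python
import numpy as np

def quantize_phases(precoder, bits):
    """Project each entry onto the nearest of 2**bits unit-modulus phases.
    With bits=1 the alphabet is {+1, -1}, the binary-PS case."""
    levels = 2 ** bits
    alphabet = np.exp(1j * 2 * np.pi * np.arange(levels) / levels)
    flat = precoder.ravel()
    unit = flat / np.abs(flat)                       # keep only the phase
    idx = np.abs(unit[:, None] - alphabet[None, :]).argmin(axis=1)
    return alphabet[idx].reshape(precoder.shape)

# Toy unconstrained analog precoder column for a 3-antenna array.
f = np.array([1 + 1j, -1 + 0.1j, 0.2 - 1j])
one_bit = quantize_phases(f, bits=1)   # each entry becomes +1 or -1
```

This entry-wise projection shows why one-bit PSs are so cheap (a sign per antenna) and why smarter joint designs, like the paper's, are needed to recover the spectral efficiency the quantization destroys.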
In cognitive radio networks (CRNs), dynamic spectrum access has been proposed to improve spectrum utilization, but it also generates spectrum misuse problems. One common solution to these problems is to deploy monitors to detect misbehaviors on certain channels. However, in multi-channel CRNs, it is very costly to deploy monitors on every channel. With a limited number of monitors, we have to decide which channels to monitor. In addition, we need to determine how long to monitor each channel and in which order to monitor them, because switching channels incurs costs. Moreover, information about the misuse behavior is not available a priori. To answer these questions, we model the spectrum monitoring problem as an adversarial multi-armed bandit problem with switching costs (MAB-SC), propose an effective framework, and design two online algorithms, SpecWatch-II and SpecWatch-III, based on the same framework. To evaluate the algorithms, we use weak regret, i.e., the performance difference between our algorithm's solution and the optimal (fixed) solution in hindsight, as the metric. We prove that the expected weak regret of SpecWatch-II is O(T^2/3), where T is the time horizon, whereas the actual weak regret of SpecWatch-III is O(T^2/3) with probability 1 - \delta, for any \delta in (0, 1). Both algorithms match the lower bound of the general adversarial MAB-SC problem and are therefore asymptotically optimal.
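The weak-regret metric used above is simple to state in code: it is the gap between the cumulative reward of the best *fixed* channel in hindsight and what the online algorithm actually collected. The reward numbers below are made up for demonstration:

```python
def weak_regret(rewards_per_arm, collected):
    """rewards_per_arm[a][t]: reward arm (channel) a would have produced at
    time t; collected[t]: reward the online algorithm actually obtained."""
    best_fixed = max(sum(arm) for arm in rewards_per_arm)
    return best_fixed - sum(collected)

rewards = [
    [1, 0, 1, 1],   # channel 0: misuse detected at t = 0, 2, 3
    [0, 1, 0, 1],   # channel 1: misuse detected at t = 1, 3
]
collected = [1, 0, 0, 1]    # what the online monitor actually earned
regret = weak_regret(rewards, collected)
```

Weak regret compares only against fixed strategies; this is what makes sublinear bounds like O(T^(2/3)) achievable even against an adversary, since competing with the best *switching* strategy is strictly harder.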
Proteins are the main workhorses of biological functions in a cell, a tissue, or an organism. Identification and quantification of proteins in a given sample, e.g. a cell type under normal/disease conditions, are fundamental tasks for the understanding of human health and disease. In this paper, we present DeepNovo, a deep learning-based tool to address the problem of protein identification from tandem mass spectrometry data. The idea was first proposed in the context of de novo peptide sequencing, in which convolutional neural networks and recurrent neural networks were applied to predict the amino acid sequence of a peptide from its spectrum, a task similar to generating a caption from an image. We further develop DeepNovo to perform sequence database search, the main technique for peptide identification, which greatly benefits from numerous existing protein databases. We combine the two modules, de novo sequencing and database search, into a single deep learning framework for peptide identification, and integrate a de Bruijn graph assembly technique to offer a complete solution for reconstructing protein sequences from tandem mass spectrometry data. This paper describes a comprehensive protocol of DeepNovo for protein identification, including training neural network models, dynamic programming search, database querying, estimation of the false discovery rate, and de Bruijn graph assembly. Training and testing data, model implementations, and comprehensive tutorials in the form of IPython notebooks are available in our GitHub repository (https://github.com/nh2tran/DeepNovo).
Appropriate comments on code snippets provide insight into code functionality and are helpful for program comprehension. However, due to the great cost of authoring comments, many code projects do not contain adequate ones. Automatic comment generation techniques have been proposed to generate comments from pieces of code in order to reduce the human effort of annotating code. Most existing approaches attempt to exploit certain correlations (usually manually specified) between code and generated comments, which can easily be violated if the coding patterns change, causing the performance of comment generation to decline. In this paper, we first build C2CGit, a large dataset from open projects on GitHub, which is more than 20$\times$ larger than existing datasets. We then propose a new attention module called Code Attention to translate code into comments; it is able to utilize domain features of code snippets, such as symbols and identifiers. We conduct ablation studies to determine the effects of the different parts of Code Attention. Experimental results demonstrate that the proposed module outperforms existing approaches on both BLEU and METEOR.
Sep 20 2017 cs.LG
Massive amounts of data reside on users' local platforms, which usually cannot support deep neural network (DNN) training due to computation and storage resource constraints. Cloud-based training schemes provide beneficial services but suffer from potential privacy risks due to excessive user data collection. To enable cloud-based DNN training while simultaneously protecting data privacy, we propose to leverage intermediate representations of the data, which is achieved by splitting the DNN and deploying the parts separately onto local platforms and the cloud. The local neural network (NN) is used to generate the feature representations. To avoid local training and protect data privacy, the local NN is derived from pre-trained NNs. The cloud NN is then trained on the extracted intermediate representations for the target learning task. We validate the idea of DNN splitting by characterizing the dependence of privacy loss and classification accuracy on the local NN topology for a convolutional NN (CNN) based image classification task. Based on this characterization, we further propose PrivyNet to determine the local NN topology, which optimizes the accuracy of the target learning task under constraints on privacy loss, local computation, and storage. The efficiency and effectiveness of PrivyNet are demonstrated on the CIFAR-10 dataset.
Sep 11 2017 cs.NE
The quality of solution sets generated by decomposition-based evolutionary multiobjective optimisation (EMO) algorithms depends heavily on the consistency between a given problem's Pareto front shape and the specified weight distribution. A set of weights distributed uniformly in a simplex often leads to a set of well-distributed solutions on a Pareto front with a simplex-like shape, but may fail on other Pareto front shapes. How to specify a set of appropriate weights without prior information about the problem's Pareto front remains an open problem. In this paper, we propose an approach to adapt the weights during the evolutionary process (called AdaW). AdaW progressively seeks a suitable distribution of weights for the given problem by elaborating five parts of the weight adaptation: weight generation, weight addition, weight deletion, archive maintenance, and weight update frequency. Experimental results have shown the effectiveness of the proposed approach. AdaW works well for Pareto fronts with very different shapes: 1) simplex-like, 2) inverted simplex-like, 3) highly nonlinear, 4) disconnected, 5) degenerate, 6) badly scaled, and 7) high-dimensional.
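The "weights distributed uniformly in a simplex" that decomposition-based EMO algorithms start from are typically generated by the simplex-lattice (Das-Dennis) construction: all m-dimensional vectors whose components are multiples of 1/H and sum to one. Assuming AdaW also starts from such a set (the abstract does not say), a minimal generator looks like this:

```python
from itertools import combinations

def simplex_weights(m, H):
    """Das-Dennis style weight vectors: all compositions of H into m
    non-negative parts, scaled by 1/H (stars-and-bars enumeration)."""
    weights = []
    for dividers in combinations(range(1, H + m), m - 1):
        parts, prev = [], 0
        for d in dividers:
            parts.append(d - prev - 1)
            prev = d
        parts.append(H + m - 1 - prev)
        weights.append([p / H for p in parts])
    return weights

W = simplex_weights(m=3, H=2)   # C(H+m-1, m-1) = C(4, 2) = 6 weight vectors
```

A uniform lattice like this matches a simplex-shaped front perfectly, which is exactly why it fails on inverted, disconnected, or degenerate fronts and why adaptive schemes such as AdaW are needed.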
Designing an e-commerce recommender system that serves hundreds of millions of active users is a daunting challenge. From a human vision perspective, there are two key factors that affect users' behaviors: items' attractiveness and their degree of match with users' interests. This paper proposes Telepath, a vision-based bionic recommender system model that understands users from this perspective. Telepath is a combination of a convolutional neural network (CNN), a recurrent neural network (RNN), and deep neural networks (DNNs). Its CNN subnetwork simulates the human vision system to extract key visual signals of items' attractiveness and generate corresponding activations. Its RNN and DNN subnetworks simulate the cerebral cortex to understand users' interests based on the activations generated from browsed items. In practice, the Telepath model has been launched in JD's recommender system and advertising system. For one of the major item recommendation blocks in the JD app, click-through rate (CTR), gross merchandise value (GMV), and orders increased by 1.59%, 8.16%, and 8.71%, respectively. For several major ad publishers on JD's demand-side platform, CTR, GMV, and return on investment increased by 6.58%, 61.72%, and 65.57%, respectively, after the first launch, and by a further 2.95%, 41.75%, and 41.37%, respectively, after the second launch.
Aug 25 2017 cs.CE
The paper presents a topology optimization approach that designs an optimal structure, called a self-supporting structure, which is ready to be fabricated via additive manufacturing without additional support structures. Such supports generally have to be created during the fabrication process so that the primary object can be manufactured layer by layer without collapse, which is very time-consuming and wasteful of material. The proposed approach resolves this problem by formulating the self-supporting requirement as a novel explicit quadratic continuous constraint in the topology optimization problem; specifically, it requires the number of unsupported elements (in terms of the sum of squares of their densities) to be zero. Benefiting from this formulation, computing the sensitivity of the self-supporting constraint with respect to the design density is straightforward, which would otherwise require substantial research effort in general topology optimization studies. The derived sensitivity for each element depends only linearly on its own density, which, unlike previous layer-based sensitivities, allows for a parallel implementation and a possibly higher convergence rate. In addition, a discrete convolution operator is designed to detect the unsupported elements involved in each optimization iteration, speeding up the detection process by a factor of 100 compared with simply enumerating these elements. The approach works for general overhang angles and general domains, and produces optimized structures, with associated optimal compliance, very close to those of the reference structures obtained without the self-supporting constraint, as demonstrated by extensive 2D and 3D benchmark examples.
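The unsupported-element test behind the constraint can be sketched on a density grid printed bottom-up: an element is unsupported when none of the three elements directly beneath it (below-left, below, below-right) is solid. The 45-degree support rule and the 0.5 solidity threshold are illustrative assumptions; the paper realizes this test as a discrete convolution over the continuous density field rather than the boolean check shown here:

```python
import numpy as np

def unsupported(density, threshold=0.5):
    """Boolean mask of solid elements lacking solid support underneath.
    density[0] is the bottom (build-plate) row, which is always supported."""
    solid = density > threshold
    rows, cols = solid.shape
    padded = np.zeros((rows, cols + 2), dtype=bool)
    padded[:, 1:-1] = solid
    # element (r, c) is supported if any of the three cells below it is solid
    below = padded[:-1, :-2] | padded[:-1, 1:-1] | padded[:-1, 2:]
    mask = np.zeros_like(solid)
    mask[1:] = solid[1:] & ~below
    return mask

staircase = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],   # each step rests diagonally on the one below
                      [0.0, 0.0, 1.0]])
overhang = np.array([[1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0],   # this element floats: nothing beneath it
                     [0.0, 0.0, 0.0]])
```

Requiring the sum of squared densities over the masked elements to vanish is what turns this geometric test into the smooth quadratic constraint the abstract describes.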
We present LADDER, the first deep reinforcement learning agent that can successfully learn control policies for large-scale real-world problems directly from raw inputs composed of high-level semantic information. The agent is based on an asynchronous stochastic variant of DQN (Deep Q Network) named DASQN. The inputs of the agent are plain-text descriptions of the states of a game of incomplete information, i.e., large-scale real-time online auctions, and the rewards are auction profits of very large scale. We apply the agent to an essential portion of JD's online RTB (real-time bidding) advertising business and find that it easily beats the former state-of-the-art bidding policy, which had been carefully engineered and calibrated by human experts: during JD.com's June 18th anniversary sale, the agent increased the company's ads revenue from that portion by more than 50%, while the advertisers' ROI (return on investment) also improved significantly.
Aug 21 2017 cs.CV
Automatic generation of facial images has been well studied since the Generative Adversarial Network (GAN) came out. There exist some attempts to apply the GAN model to the problem of generating facial images of anime characters, but none of the existing work gives a promising result. In this work, we explore the training of GAN models specialized for an anime facial image dataset. We address the issue from both the data and the model aspects, by collecting a cleaner, better-suited dataset and leveraging a proper, empirical application of DRAGAN. With quantitative analysis and case studies, we demonstrate that our efforts lead to a stable and high-quality model. Moreover, to assist people with anime character design, we build a website (http://make.girls.moe) with our pre-trained model available online, which makes the model easily accessible to the general public.
Aug 17 2017 cs.CV
Neural segmentation has a great impact on the smooth implementation of local anesthesia surgery. At present, networks used for this segmentation task include U-NET and SegNet. The U-NET network has a short training time and fewer training parameters, but its depth is not sufficient. The SegNet network has a deeper structure, but it requires a longer training time and more training samples. In this paper, we propose an improved U-NET neural network for the segmentation task. This network deepens the original structure by incorporating residual networks. Compared with U-NET and SegNet, the improved U-NET network has fewer training parameters, a shorter training time, and achieves a great improvement in segmentation quality. The improved U-NET network structure is well suited to application in neural segmentation.
Aug 09 2017 cs.RO
Environmental conditions and external effects, such as shocks, have a significant impact on the calibration parameters of visual-inertial sensor systems. Thus, long-term operation of these systems cannot fully rely on factory calibration. Since the observability of certain parameters is highly dependent on the motion of the device, using short data segments at device initialization may yield poor results. When such systems are additionally subject to energy constraints, it is also infeasible to use full-batch approaches on a big dataset, and careful selection of the data is of high importance. In this paper, we present a novel approach for resource-efficient self-calibration of visual-inertial sensor systems. This is achieved by casting the calibration as a segment-based optimization problem that can be run on a small subset of informative segments. Consequently, the computational burden is limited, as only a predefined number of segments is used. We also propose an efficient information-theoretic selection to identify such informative motion segments. In evaluations on a challenging dataset, we show that our approach significantly outperforms the state of the art in terms of computational burden while maintaining comparable accuracy.
Objective: To apply deep learning pose estimation algorithms for vision-based assessment of parkinsonism and levodopa-induced dyskinesia (LID). Methods: Nine participants with Parkinson's disease (PD) and LID completed a levodopa infusion protocol, where symptoms were assessed at regular intervals using the Unified Dyskinesia Rating Scale (UDysRS) and Unified Parkinson's Disease Rating Scale (UPDRS). A state-of-the-art deep learning pose estimation method was used to extract movement trajectories from videos of PD assessments. Features of the movement trajectories were used to detect and estimate the severity of parkinsonism and LID using random forest. Communication and drinking tasks were used to assess LID, while leg agility and toe tapping tasks were used to assess parkinsonism. Feature sets from tasks were also combined to predict total UDysRS and UPDRS Part III scores. Results: For LID, the communication task yielded the best results for dyskinesia (severity estimation: r = 0.661, detection: AUC = 0.930). For parkinsonism, leg agility had better results for severity estimation (r = 0.618), while toe tapping was better for detection (AUC = 0.773). UDysRS and UPDRS Part III scores were predicted with r = 0.741 and 0.530, respectively. Conclusion: This paper presents the first application of deep learning for vision-based assessment of parkinsonism and LID and demonstrates promising performance for the future translation of deep learning to PD clinical practices. Significance: The proposed system provides insight into the potential of computer vision and deep learning for clinical application in PD.
Jul 25 2017 cs.CL
We study the problem of identifying helpful product reviews in this paper. We observe that evidence-conclusion discourse relations, also known as arguments, often appear in product reviews, and we hypothesise that some argument-based features, e.g., the percentage of argumentative sentences and the evidence-conclusion ratio, are good indicators of helpful reviews. To validate this hypothesis, we manually annotate arguments in 110 hotel reviews and investigate the effectiveness of several combinations of argument-based features. Experiments suggest that, when used together with the argument-based features, the state-of-the-art baseline features enjoy a performance boost (in terms of F1) of 11.01\% on average.
Jul 13 2017 cs.CR
Intel Software Guard Extension (SGX) offers software applications an enclave to protect their confidentiality and integrity from malicious operating systems. The SSL/TLS protocol, which is the de facto standard for protecting transport-layer network communications, has been broadly deployed for secure communication channels. However, in this paper, we show that the marriage between SGX and SSL may not be smooth sailing. In particular, we consider a category of side-channel attacks against SSL/TLS implementations in secure enclaves, which we call control-flow inference attacks. In these attacks, the malicious operating system kernel may perform a powerful man-in-the-kernel attack to collect execution traces of the enclave programs at the page, cacheline, or branch level, while positioning itself in the middle of the two communicating parties. At the center of our work is a differential analysis framework, dubbed Stacco, that dynamically analyzes SSL/TLS implementations and detects vulnerabilities that can be exploited as decryption oracles. Surprisingly, we found exploitable vulnerabilities in the latest versions of all the SSL/TLS libraries we examined. To validate the detected vulnerabilities, we developed a man-in-the-kernel adversary to demonstrate Bleichenbacher attacks against the latest OpenSSL library running in an SGX enclave (with the help of Graphene) and completely broke the PreMasterSecret encrypted by a 4096-bit RSA public key with only 57,286 queries. We also conducted CBC padding oracle attacks against the latest GnuTLS running in Graphene-SGX and an open-source SGX implementation of mbedTLS (i.e., mbedTLS-SGX) that runs directly inside the enclave, and showed that only 48,388 and 25,717 queries, respectively, are needed to break one block of AES ciphertext. Empirical evaluation suggests these man-in-the-kernel attacks can be completed within 1 to 2 hours.
Jul 05 2017 cs.NI
In the creation of a smart future information society, the Internet of Things (IoT) and Content Centric Networking (CCN) break two key barriers, for front-end sensing and back-end networking respectively. However, a piece is still missing from the research that dominates current designs: knowledge that penetrates both sensing and networking and glues them together holistically. In this paper, we introduce and discuss a new networking paradigm, called Knowledge Centric Networking (KCN), as a promising solution. The key insight of KCN is to leverage emerging machine learning and deep learning techniques to create knowledge for networking system designs and to extract knowledge from collected data, in order to facilitate enhanced system intelligence and interactivity, improved quality of service, better controllability of communication, and lower cost. This paper presents the KCN design rationale, the benefits of KCN, and potential research opportunities.
Jul 04 2017 cs.CG
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain from a simple boundary, planar domains with high genus and more complex boundary curves are considered. First, some pre-processing operations, including Bézier extraction and subdivision, are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After generating the topology information of the quadrilateral decomposition, the optimal placement of the interior Bézier curves corresponding to the interior edges of the quadrangulation is computed by a global optimization method to achieve a patch partition of high quality. Finally, after imposing C1/G1-continuity constraints on the interfaces of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method that achieves uniform and orthogonal iso-parametric structures while maintaining the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples, which are compared to results obtained by the skeleton-based parameterization approach.
Jun 29 2017 cs.NI
As an important part of the Internet of Things (IoT), machine-to-machine (M2M) communications have attracted great attention. In this paper, we introduce mobile edge computing (MEC) into virtualized cellular networks with M2M communications to decrease energy consumption, optimize the computing resource allocation, and improve computing capability. Moreover, based on different functions and quality of service (QoS) requirements, the physical network can be virtualized into several virtual networks, and each machine-type communication device (MTCD) then selects the corresponding virtual network to access. Meanwhile, the random access process of MTCDs is formulated as a partially observable Markov decision process (POMDP) to minimize the system cost, which consists of both the energy consumption and the execution time of computing tasks. Furthermore, to facilitate network architecture integration, software-defined networking (SDN) is introduced to deal with the diverse protocols and standards in the networks. Extensive simulation results with different system parameters reveal that the proposed scheme significantly improves system performance compared to existing schemes.
Maximum Likelihood Estimation (MLE) suffers from the data sparsity problem in sequence prediction tasks where training resources are scarce. To alleviate this problem, in this paper we propose a novel generative bridging network (GBN) to train sequence prediction models; it contains a generator and a bridge. Unlike MLE, which directly maximizes the likelihood of the ground truth, the bridge extends the point-wise ground truth to a bridge distribution (containing inexhaustible examples), and the generator is trained to minimize their KL-divergence. In order to guide the training of the generator with additional signals, the bridge distribution can be set or trained to possess specific properties by using different constraints. More specifically, to increase output diversity, enhance language smoothness, and relieve the learning burden, three different regularization constraints are introduced to construct bridge distributions. By combining these bridges with a sequence generator, three independent GBNs are proposed, namely the uniform GBN, the language-model GBN, and the coaching GBN. Experiments conducted on two recognized sequence prediction tasks (machine translation and abstractive text summarization) show that our proposed GBNs yield significant improvements over strong baseline systems. Furthermore, by analyzing samples drawn from the bridge distributions, the expected influences on sequence model training are verified.
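The contrast between MLE and the bridging objective can be shown numerically for one prediction step. The toy vocabulary, probabilities, and the particular uniform bridge (mass spread over the gold token and a synonym) are made-up illustrations of the idea, not the paper's construction:

```python
import math

def kl(p, q):
    """KL(p || q) over a discrete vocabulary given as dicts of probabilities."""
    return sum(pi * math.log(pi / q[w]) for w, pi in p.items() if pi > 0)

# Generator's current distribution over next tokens.
generator = {"good": 0.5, "great": 0.3, "bad": 0.2}

# MLE target: all mass on the single ground-truth token "good".
mle_target = {"good": 1.0}
# Uniform bridge: mass spread over the gold token and a plausible alternative,
# supplying "inexhaustible examples" instead of one point target.
bridge = {"good": 0.5, "great": 0.5}

kl_mle = kl(mle_target, generator)      # what MLE effectively minimizes
kl_bridge = kl(bridge, generator)       # what a uniform-bridge GBN minimizes
```

Because the bridge rewards probability mass on acceptable alternatives, the generator incurs a smaller penalty here than under the point-mass MLE target, which is the mechanism behind the diversity and smoothness gains claimed above.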
The features of collaboration behaviors are often considered to differ from discipline to discipline. Meanwhile, collaboration among disciplines is a salient feature of modern scientific research, which has incubated several interdisciplines, such as sustainability science. We analyze the features of collaborations within and among the disciplines of the biological, physical, and social sciences based on 52,803 papers published in the multidisciplinary journal PNAS from 1999 to 2013. Regarding similarities, the data reveal similar transitivity and assortativity of collaboration behaviors, as well as identical distribution types of collaborators per author and of papers per author. Regarding interactions, the data show a considerable proportion of authors engaging in interdisciplinary research, and the more collaborators and papers an author has, the more likely the author is to pursue interdisciplinary research. An analysis of the paper contents illustrates that the development of each science category has a long-run equilibrium relationship with the developments of typical research paradigms and transdisciplinary disciplines. Hence, those unified methodologies can be viewed as grounds for the interactions.
The availability of big data recorded from massively multiplayer online role-playing games (MMORPGs) allows us to gain a deeper understanding of the potential connection between individuals' network positions and their economic outputs. We use a statistical filtering method to construct dependence networks from weighted friendship networks of individuals. We investigate the 30 distinct motif positions in the 13 directed triadic motifs, which represent microscopic dependences among individuals. Based on the structural similarity of motif positions, we further classify individuals into different groups. The node position diversity of individuals is found to be positively correlated with their economic outputs. We also find that the economic outputs of leaf nodes are significantly lower than those of the other nodes in the same motif. Our findings shed light on the influence of network structure on economic activities and outputs in socioeconomic systems.
In this paper, we explore an SPPMI-based text classification method, and our experiments reveal that the SPPMI method is equal or even superior to the SGNS method in the text classification task on three standard international text datasets, namely 20newsgroups, Reuters52, and WebKB. Compared with SGNS, although SPPMI provides a better solution, it is not necessarily better than SGNS in every text classification task. Based on our analysis, SGNS takes weighting into account during the decomposition process, so it performs better than SPPMI on some standard datasets. Inspired by this, we propose a WL-SPPMI semantic model based on the SPPMI model, and experiments show that the WL-SPPMI approach achieves better classification and higher scalability in the text classification task compared with the LDA, SGNS, and SPPMI approaches.
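For readers unfamiliar with SPPMI, a minimal sketch of building a shifted positive PMI matrix from a word-context co-occurrence count table follows; the toy counts and the shift parameter `k` are illustrative, not from the paper.

```python
import math

def sppmi_matrix(cooc, k=1):
    """Shifted positive PMI: max(PMI(w, c) - log k, 0), computed from a
    word-context co-occurrence count matrix given as a list of lists."""
    total = sum(sum(row) for row in cooc)
    row_sums = [sum(row) for row in cooc]
    col_sums = [sum(col) for col in zip(*cooc)]
    out = []
    for i, row in enumerate(cooc):
        out_row = []
        for j, n_ij in enumerate(row):
            if n_ij == 0 or row_sums[i] == 0 or col_sums[j] == 0:
                out_row.append(0.0)  # zero counts and negative PMI clip to 0
                continue
            pmi = math.log(n_ij * total / (row_sums[i] * col_sums[j]))
            out_row.append(max(pmi - math.log(k), 0.0))
        out.append(out_row)
    return out
```

Factorizing this matrix (e.g. by truncated SVD) yields the word vectors that the classification experiments compare against SGNS embeddings.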
In this paper, we present the Role Playing Learning (RPL) scheme for a mobile robot to navigate socially with its human companion in populated environments. Neural networks (NN) are constructed to parameterize a stochastic policy that directly maps sensory data collected by the robot to its velocity outputs, while respecting a set of social norms. An efficient simulative learning environment is built with maps and pedestrian trajectories collected from a number of real-world crowd datasets. In each learning iteration, a robot equipped with the NN policy is created virtually in the learning environment to act as an accompanying pedestrian and navigate towards a goal in a socially concomitant manner. Thus, we call this process Role Playing Learning, which is formulated under a reinforcement learning (RL) framework. The NN policy is optimized end-to-end using Trust Region Policy Optimization (TRPO), with consideration of the imperfection of the robot's sensor measurements. Simulative and experimental results are provided to demonstrate the efficacy and superiority of our method.
May 08 2017 cs.CV
One of the major challenges in object detection is to propose detectors with highly accurate localization of objects. The online sampling of high-loss region proposals (hard examples) uses the multitask loss with equal weight settings across all loss types (e.g., classification and localization, rigid and non-rigid categories) and ignores the influence of different loss distributions throughout the training process, which we find essential to training efficacy. In this paper, we present the Stratified Online Hard Example Mining (S-OHEM) algorithm for training detectors with higher efficiency and accuracy. S-OHEM exploits OHEM with stratified sampling, a widely adopted sampling technique, to choose training examples according to this influence during hard example mining, and thus enhances the performance of object detectors. We show through systematic experiments that S-OHEM yields an average precision (AP) improvement of 0.5% on rigid categories of PASCAL VOC 2007 for both IoU thresholds of 0.6 and 0.7. For KITTI 2012, both results on the same metric are 1.6%. Regarding the mean average precision (mAP), a relative increase of 0.3% and 0.5% (1% and 0.5%) is observed for VOC07 (KITTI12) using the same set of IoU thresholds. Also, S-OHEM is easy to integrate with existing region-based detectors and can work with post-recognition-level regressors.
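The stratified sampling idea can be sketched as follows: partition region proposals by loss type and keep the highest-loss examples within each stratum. The fixed per-stratum quotas here are a simplified stand-in for the loss-distribution-based allocation used in S-OHEM.

```python
def stratified_ohem(proposals, quotas):
    """Sketch of stratified hard-example sampling: proposals are
    (loss, loss_type) pairs; within each loss-type stratum, keep the
    highest-loss examples up to that stratum's quota."""
    selected = []
    for loss_type, quota in quotas.items():
        stratum = [p for p in proposals if p[1] == loss_type]
        stratum.sort(key=lambda p: p[0], reverse=True)  # hardest first
        selected.extend(stratum[:quota])
    return selected
```

Plain OHEM corresponds to a single stratum; stratification prevents one loss type from monopolizing the sampled mini-batch.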
May 08 2017 cs.CL
Argument Component Boundary Detection (ACBD) is an important sub-task in argumentation mining; it aims at identifying the word sequences that constitute argument components, and is usually considered the first sub-task in the argumentation mining pipeline. Existing ACBD methods heavily depend on task-specific knowledge and require considerable human effort on feature engineering. To tackle these problems, in this work, we formulate ACBD as a sequence labeling problem and propose a variety of Recurrent Neural Network (RNN) based methods, which do not use domain-specific or handcrafted features beyond the relative position of the sentence in the document. In particular, we propose a novel joint RNN model that can predict whether sentences are argumentative or not, and use the predicted results to more precisely detect the argument component boundaries. We evaluate our techniques on two corpora from two different genres; results suggest that our joint RNN model obtains state-of-the-art performance on both datasets.
May 08 2017 cs.CL
Argumentation mining aims at automatically extracting the premise-claim discourse structures in natural language texts. There is a great demand for argumentation corpora for customer reviews. However, due to the controversial nature of the argumentation annotation task, there exist very few large-scale argumentation corpora for customer reviews. In this work, we use crowdsourcing to collect argumentation annotations in Chinese hotel reviews. As the first Chinese argumentation dataset, our corpus includes 4814 argument component annotations and 411 argument relation annotations, and its annotation quality is comparable to that of some widely used argumentation corpora in other languages.
May 02 2017 cs.NE
Rapid development of evolutionary algorithms in handling many-objective optimization problems requires viable methods of visualizing a high-dimensional solution set. Parallel coordinates, which scale well to high-dimensional data, are such a method and have been frequently used in evolutionary many-objective optimization. However, the parallel coordinates plot is not as straightforward as the classic scatter plot in presenting the information contained in a solution set. In this paper, we make some observations of the parallel coordinates plot in terms of comparing the quality of solution sets, understanding the shape and distribution of a solution set, and reflecting the relation between objectives. We hope that these observations can provide some guidelines for the proper use of parallel coordinates in evolutionary many-objective optimization.
Millimeter-wave (mmWave) communications have been considered a key technology for future 5G wireless networks because of their orders-of-magnitude wider bandwidth than current cellular bands. In this paper, we consider the problem of codebook-based joint analog-digital hybrid precoder and combiner design for spatial multiplexing transmission in a mmWave multiple-input multiple-output (MIMO) system. We propose to jointly select the analog precoder and combiner pair for each data stream successively, aiming at maximizing the channel gain while suppressing the interference between different data streams. After all analog precoder/combiner pairs have been determined, we obtain the effective baseband channel. Then, the digital precoder and combiner are computed based on the obtained effective baseband channel to further mitigate the interference and maximize the sum-rate. Simulation results demonstrate that our proposed algorithm exhibits prominent advantages in combating interference between different data streams and offers satisfactory performance improvement compared to existing codebook-based hybrid beamforming schemes.
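A minimal sketch of the successive pair-selection step follows, assuming DFT codebooks and a simple rank-1 residual projection as a stand-in for the paper's exact interference-suppression criterion; codebook sizes and the channel model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dft_codebook(n_antennas, n_beams):
    """Unit-norm DFT beam codebook; columns are candidate beamforming vectors."""
    angles = np.arange(n_beams) / n_beams
    grid = np.exp(2j * np.pi * np.outer(np.arange(n_antennas), angles))
    return grid / np.sqrt(n_antennas)

def successive_pair_selection(H, F, W, n_streams):
    """For each data stream in turn, pick the codebook precoder/combiner pair
    (f, w) with the largest effective channel gain |w^H H f|, then project the
    chosen directions out of the residual channel to limit inter-stream
    interference (a simplified stand-in for the paper's criterion)."""
    pairs, H_res = [], H.copy()
    for _ in range(n_streams):
        G = np.abs(W.conj().T @ H_res @ F)        # gain of every (w_i, f_j) pair
        i, j = np.unravel_index(int(np.argmax(G)), G.shape)
        pairs.append((j, i))                      # (precoder index, combiner index)
        f, w = F[:, [j]], W[:, [i]]
        H_res = H_res - (w @ w.conj().T) @ H_res @ (f @ f.conj().T)
    return pairs
```

After the analog pairs are fixed, the effective baseband channel `W_rf^H H F_rf` is what the digital precoder and combiner are computed from.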
Millimeter wave (mmWave) communications have been considered a key technology for future 5G wireless networks. In order to overcome the severe propagation loss of the mmWave channel, mmWave multiple-input multiple-output (MIMO) systems with analog/digital hybrid precoding and combining transceiver architectures have been widely considered. However, physical layer security (PLS) in mmWave MIMO systems and secure hybrid beamformer design have not been well investigated. In this paper, we consider the problem of hybrid precoder and combiner design for secure transmission in mmWave MIMO systems, in order to protect the legitimate transmission from eavesdropping. When the eavesdropper's channel state information (CSI) is known, we first propose a joint analog precoder and combiner design algorithm which can prevent information leakage to the eavesdropper. Then, the digital precoder and combiner are computed based on the obtained effective baseband channel to further maximize the secrecy rate. Next, if prior knowledge of the eavesdropper's CSI is unavailable, we develop an artificial noise (AN)-based hybrid beamforming approach, which can jam the eavesdropper's reception while maintaining the quality-of-service (QoS) of the intended receiver at a pre-specified level. Simulation results demonstrate that our proposed algorithms offer significant secrecy performance improvement compared with other hybrid beamforming algorithms.
Apr 27 2017 cs.CV
This paper addresses the deep face recognition (FR) problem under the open-set protocol, where ideal face features are expected to have a smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of the angular margin can be quantitatively adjusted by a parameter $m$. We further derive specific $m$ to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Faces in the Wild (LFW), YouTube Faces (YTF) and MegaFace Challenge show the superiority of A-Softmax loss in FR tasks. The code has also been made publicly available.
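The angular-margin target logit used by A-Softmax, psi(theta) = (-1)^k cos(m*theta) - 2k with k chosen so that theta lies in [k*pi/m, (k+1)*pi/m], can be sketched directly; this is the standard published form, shown here for a single angle rather than a full CNN loss.

```python
import math

def angular_margin_logit(theta, m):
    """A-Softmax target logit psi(theta) = (-1)^k * cos(m*theta) - 2k, with
    k = floor(m*theta / pi). The piecewise construction keeps psi
    monotonically decreasing on [0, pi] while enforcing an angular margin m."""
    k = int(m * theta / math.pi)
    return (-1) ** k * math.cos(m * theta) - 2 * k
```

With m = 1 the logit reduces to the plain cos(theta) of ordinary softmax; for m > 1 the target class must be closer in angle to earn the same logit, which is what produces the margin.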
Physical layer security has been considered an important security approach in wireless communications to protect legitimate transmission from passive eavesdroppers. This paper investigates the physical layer security of a wireless multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) communication system in the presence of a multiple-antenna eavesdropper. We first propose a transmit-filter-assisted secure MIMO-OFDM system which can destroy the orthogonality of the eavesdropper's signals. Our proposed transmit filter can disturb the eavesdropper's reception while maintaining the quality of the legitimate transmission. Then, we propose an artificial noise (AN)-assisted secure MIMO-OFDM system to further improve the security of the legitimate transmission. The time-domain AN signal is designed to disturb the eavesdropper's reception while leaving the legitimate transmission unaffected. Simulation results are presented to demonstrate the security performance of the proposed transmit filter design and AN-assisted scheme in the MIMO-OFDM system.
Analog/digital hybrid precoders and combiners have been widely used in millimeter wave (mmWave) multiple-input multiple-output (MIMO) systems owing to their energy efficiency and low cost. Analog beamformers with infinite-resolution phase shifters (PSs) can achieve performance very close to that of the fully digital scheme, but result in high complexity and intensive power consumption. Thus, more cost-effective and energy-efficient low-resolution PSs are typically used in practical mmWave MIMO systems. In this paper, we consider the joint hybrid precoder and combiner design with one-bit quantized PSs in mmWave MIMO systems. We propose to first design the analog precoder and combiner pair for each data stream successively, aiming at conditionally maximizing the spectral efficiency. We present a novel binary analog precoder and combiner optimization algorithm, based on a rank-1 approximation of the interference-included equivalent channel, with lower-than-quadratic complexity. Then the digital precoder and combiner are computed based on the obtained baseband effective channel to further enhance the spectral efficiency. Simulation results demonstrate that the proposed algorithm outperforms the existing one-bit-PS-based hybrid beamforming scheme.
This paper investigates the problem of hybrid precoder and combiner design for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems operating in millimeter-wave (mmWave) bands. We propose a novel iterative scheme to design the codebook-based analog precoder and combiner over the forward and reverse channels. During each iteration, we apply compressive sensing (CS) technology to efficiently estimate the equivalent MIMO-OFDM mmWave channel. Then, the analog precoder or combiner is obtained based on the orthogonal matching pursuit (OMP) algorithm to alleviate the interference between different data streams as well as maximize the spectral efficiency. The digital precoder and combiner are finally obtained based on the effective baseband channel to further enhance the spectral efficiency. Simulation results demonstrate that the proposed iterative hybrid precoder and combiner design algorithm has significant performance advantages.
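The OMP-based analog beamformer selection can be sketched as follows; this is a generic OMP hybrid-precoding sketch (greedily matching a target precoder with codebook columns) under an assumed square DFT codebook, rather than the paper's exact iterative forward/reverse procedure.

```python
import numpy as np

def beam_codebook(n_antennas):
    """Square DFT codebook; columns are orthonormal candidate beams."""
    grid = np.exp(2j * np.pi * np.outer(np.arange(n_antennas),
                                        np.arange(n_antennas)) / n_antennas)
    return grid / np.sqrt(n_antennas)

def omp_analog_precoder(F_opt, A_cand, n_rf):
    """Orthogonal-matching-pursuit selection of analog beamforming vectors:
    greedily pick the candidate column most correlated with the residual of
    the target precoder F_opt, least-squares fit the digital part, and
    update the residual."""
    F_rf = np.empty((F_opt.shape[0], 0), dtype=complex)
    F_res = F_opt.copy()
    for _ in range(n_rf):
        corr = np.linalg.norm(A_cand.conj().T @ F_res, axis=1)
        k = int(np.argmax(corr))
        F_rf = np.hstack([F_rf, A_cand[:, [k]]])
        F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)  # digital part
        F_res = F_opt - F_rf @ F_bb
        F_res /= np.linalg.norm(F_res) + 1e-12
    return F_rf, F_bb
```

When the target precoder lies in the span of a few codebook columns, the product `F_rf @ F_bb` recovers it almost exactly.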
Apr 18 2017 cs.RO
Unmanned aerial vehicles (UAVs) have gained much attention in recent years for both commercial and military applications, and research in this field now spans various scientific domains. Cyber-securing UAV communication has been an active research field since the Predator UAV video-stream hijacking attack in 2009. Since UAVs rely heavily on their on-board autopilot to function, it is important to develop an autopilot system that is robust to possible cyber attacks. In this work, we present a biometric system that encrypts UAV communication by generating a key derived from the beta component of a user's EEG signal. We have also developed a safety mechanism that is activated in case the communication between the UAV and the ground control station is attacked. The system has been validated on a commercial UAV under malicious attack conditions, during which the UAV returns safely to a "home" position.
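A minimal sketch of deriving a key from the beta band (13-30 Hz) of an EEG trace: band magnitudes are extracted with a DFT, coarsely quantized, and hashed. The quantization step and the hash construction are illustrative assumptions, not the paper's exact key-generation scheme.

```python
import cmath
import hashlib

def beta_band_key(samples, fs):
    """Illustrative sketch: extract the beta band (13-30 Hz) of an EEG trace
    with a DFT, coarsely quantise the band magnitudes, and hash them into a
    256-bit key (hex digest)."""
    n = len(samples)
    band = []
    for k in range(n // 2):
        freq = k * fs / n
        if 13.0 <= freq <= 30.0:
            coeff = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n))
            band.append(int(abs(coeff) * 100))  # coarse quantisation step
    return hashlib.sha256(repr(band).encode()).hexdigest()
```

Coarse quantization is what makes the key reproducible across repeated recordings of the same user, at the cost of some entropy.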
Apr 17 2017 cs.CL
Chinese discourse coherence modeling remains a challenging task in the Natural Language Processing field. Existing approaches mostly focus on feature engineering, adopting sophisticated features to capture the logical, syntactic, or semantic relationships across sentences within a text. In this paper, we present an entity-driven recursive deep model for Chinese discourse coherence evaluation, based on a current English discourse coherence neural network model. Specifically, to overcome the current model's shortcoming in identifying entity (noun) overlap across sentences, our combined model incorporates entity information into the recursive neural network framework. Evaluation results on both a sentence-ordering task and a machine translation coherence rating task show the effectiveness of the proposed model, which significantly outperforms a strong existing baseline.
Apr 06 2017 cs.CV
This paper focuses on a novel and challenging vision task, dense video captioning, which aims to automatically describe a video clip with multiple informative and diverse caption sentences. The proposed method is trained without explicit annotation of fine-grained sentence to video region-sequence correspondence, but is only based on weak video-level sentence annotations. It differs from existing video captioning systems in three technical aspects. First, we propose lexical fully convolutional neural networks (Lexical-FCN) with weakly supervised multi-instance multi-label learning to weakly link video regions with lexical labels. Second, we introduce a novel submodular maximization scheme to generate multiple informative and diverse region-sequences based on the Lexical-FCN outputs. A winner-takes-all scheme is adopted to weakly associate sentences to region-sequences in the training phase. Third, a sequence-to-sequence learning based language model is trained with the weakly supervised information obtained through the association process. We show that the proposed method can not only produce informative and diverse dense captions, but also outperform state-of-the-art single video captioning methods by a large margin.
Problems in modeling and simulation require significantly different workflow management technologies than standard grid-based workflow management systems. Computational scientists typically interact with simulation software in a feedback-driven way, where solutions and workflows are developed iteratively and simultaneously. This work describes common activities in workflows and how combinations of these activities form unique workflows. It presents the Eclipse Integrated Computational Environment as a workflow management system and development environment for the modeling and simulation community. Examples of the Environment's applicability to problems in energy science, general multiphysics simulations, quantum computing and other areas are presented, as well as its impact on the community.
Mar 31 2017 cs.AI
Knowledge graph embedding aims to embed the entities and relations of knowledge graphs into low-dimensional vector spaces. Translating embedding methods regard relations as translations from head entities to tail entities, and achieve state-of-the-art results among knowledge graph embedding methods. However, a major limitation of these methods is the time-consuming training process, which may take several days or even weeks for large knowledge graphs and results in great difficulty in practical applications. In this paper, we propose an efficient parallel framework for translating embedding methods, called ParTrans-X, which enables the methods to be parallelized without locks by utilizing the distinctive structures of knowledge graphs. Experiments on two datasets with three typical translating embedding methods, i.e., TransE, TransH, and a more efficient variant, TransE-AdaGrad, validate that ParTrans-X can speed up the training process by more than an order of magnitude.
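The translating embedding methods being parallelized all score a triple (h, r, t) by the distance between h + r and t; a minimal TransE-style scoring sketch:

```python
def transe_score(h, r, t):
    """TransE plausibility score: negative L2 distance ||h + r - t||.
    A triple (h, r, t) is plausible when the translated head h + r is
    close to the tail t, i.e. the score is near zero."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5
```

Because each training triple touches only its own three vectors, updates from different triples rarely conflict, which is what ParTrans-X's lock-free parallelization exploits.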
Mar 31 2017 cs.CV
Lossy image compression is generally formulated as a joint rate-distortion optimization to learn the encoder, quantizer, and decoder. However, the quantizer is non-differentiable, and discrete entropy estimation is usually required for rate control, which makes it very challenging to develop a convolutional neural network (CNN)-based image compression system. In this paper, motivated by the observation that local information content is spatially variant within an image, we suggest that the bit rate of different parts of the image should be adapted to the local content, with the content-aware bit rate allocated under the guidance of a content-weighted importance map. The sum of the importance map can thus serve as a continuous alternative to discrete entropy estimation for controlling the compression rate. A binarizer is adopted to quantize the output of the encoder, since the binarization scheme is also directly defined by the importance map. Furthermore, a proxy function is introduced for the binary operation in backward propagation to make it differentiable. Therefore, the encoder, decoder, binarizer, and importance map can be jointly optimized in an end-to-end manner using a subset of the ImageNet database. In low-bit-rate image compression, experiments show that our system significantly outperforms JPEG and JPEG 2000 in terms of the structural similarity (SSIM) index and produces much better visual results with sharp edges, rich textures, and fewer artifacts.
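The binarizer and its differentiable proxy can be sketched as a hard sign in the forward pass with a straight-through-style gradient in the backward pass; the clipping window used here is an illustrative choice, not necessarily the paper's exact proxy function.

```python
def binarize_forward(x):
    """Forward pass: hard sign binarisation of the encoder output."""
    return [1.0 if v >= 0 else -1.0 for v in x]

def binarize_backward(grad_out, x, clip=1.0):
    """Backward pass with a straight-through-style proxy: pass the gradient
    through unchanged where |x| <= clip, and zero it elsewhere, so the
    non-differentiable sign does not block end-to-end training."""
    return [g if abs(v) <= clip else 0.0 for g, v in zip(grad_out, x)]
```

Zeroing the gradient outside the clipping window prevents pre-binarization activations that are already saturated from drifting further.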
This paper develops a randomized approach for incrementally building deep neural networks, where a supervisory mechanism is proposed to constrain the random assignment of the weights and biases, and all the hidden layers have direct links to the output layer. A fundamental result on the universal approximation property is established for such a class of randomized learner models, namely deep stochastic configuration networks (DeepSCNs). A learning algorithm is presented to implement DeepSCNs with either a specific architecture or self-organization. The read-out weights attached to all direct links from each hidden layer to the output layer are evaluated by the least squares method. Given a set of training examples, DeepSCNs can speedily produce a learning representation, that is, a collection of random basis functions with the cascaded inputs together with the read-out weights. An empirical study on function approximation is carried out to demonstrate some properties of the proposed deep learner model.
Allometric scaling can reflect underlying mechanisms, dynamics, and structures in complex systems; examples include typical scaling laws in biology, ecology, and urban development. In this work, we study allometric scaling in scientific fields. By analyzing the outputs/inputs of various scientific fields, including the numbers of publications, citations, and references, with respect to the number of authors, we find that in all fields we have studied thus far, including physics, mathematics, and economics, there are allometric scaling laws relating the outputs/inputs to the sizes of scientific fields. Furthermore, the exponents of the scaling relations have remained quite stable over the years. We also find that the deviations of individual subfields from the overall scaling laws are good indicators for ranking subfields independently of their sizes.
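An allometric scaling law Y = c * N^beta can be fitted by ordinary least squares in log-log space; a minimal sketch of estimating the exponent beta from field sizes and outputs:

```python
import math

def scaling_exponent(sizes, outputs):
    """Fit Y = c * N^beta by least squares on log-log data; the slope of
    log(Y) against log(N) is the allometric exponent beta."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(y) for y in outputs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

An exponent above 1 indicates superlinear growth of output with field size; the residual of an individual subfield from this fit is the size-independent deviation used for ranking.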
Neural networks have been widely used as predictive models to fit data distributions, and they can be trained from a collection of samples. In many applications, however, the given dataset may contain noisy samples or outliers, which may result in a learner model with poor generalization. This paper contributes to the development of robust stochastic configuration networks (RSCNs) for resolving uncertain data regression problems. RSCNs are built on the original stochastic configuration networks with a weighted least squares method for evaluating the output weights, while the input weights and biases are incrementally and randomly generated subject to a set of inequality constraints. The kernel density estimation (KDE) method is employed to set the penalty weight for each training sample, so that negative impacts of noisy data or outliers on the resulting learner model can be reduced. The alternating optimization technique is applied to update an RSCN model with improved penalty weights computed from the kernel density estimation function. Performance evaluation is carried out on a function approximation task, four benchmark datasets, and a case study of an engineering application. Comparisons with other robust randomised neural modelling techniques, including the probabilistic robust learning algorithm for neural networks with random weights and improved RVFL networks, indicate that the proposed RSCNs with KDE perform favourably and demonstrate good potential for real-world applications.
Feb 13 2017 cs.NE
This paper contributes to the development of randomized methods for neural networks. The proposed learner model is generated incrementally by stochastic configuration (SC) algorithms, termed Stochastic Configuration Networks (SCNs). In contrast to existing randomised learning algorithms for single-layer feed-forward neural networks (SLFNNs), we randomly assign the input weights and biases of the hidden nodes in the light of a supervisory mechanism, and the output weights are analytically evaluated in either a constructive or a selective manner. As fundamentals of SCN-based data modelling techniques, we establish some theoretical results on the universal approximation property. Three versions of SC algorithms are presented for regression problems (applicable to classification problems as well) in this work. Simulation results concerning both function approximation and real-world data regression indicate some remarkable merits of our proposed SCNs in terms of less human intervention in network-size setting, the scope adaptation of random parameters, fast learning, and sound generalization.
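A simplified sketch of the incremental construction: candidate hidden nodes are generated randomly, the one most correlated with the current residual is kept (a stand-in for the paper's inequality-based supervisory mechanism), and the output weights are re-evaluated by least squares after every addition.

```python
import numpy as np

rng = np.random.default_rng(1)

def build_scn(X, y, max_nodes=30, candidates=50):
    """Simplified SCN construction sketch: grow hidden nodes one at a time.
    Among randomly generated candidate tanh nodes, keep the one most
    correlated with the current residual, then re-solve the read-out
    weights analytically by least squares."""
    H = np.empty((X.shape[0], 0))
    residual = y.copy()
    beta = np.zeros(0)
    for _ in range(max_nodes):
        Wc = rng.uniform(-1.0, 1.0, (X.shape[1], candidates))
        bc = rng.uniform(-1.0, 1.0, candidates)
        Hc = np.tanh(X @ Wc + bc)                       # candidate node outputs
        scores = np.abs(Hc.T @ residual) / np.linalg.norm(Hc, axis=0)
        H = np.hstack([H, Hc[:, [int(np.argmax(scores))]]])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # read-out weights
        residual = y - H @ beta
    return H, beta
```

Restricting each addition to candidates that measurably reduce the residual is what distinguishes SCNs from purely random-feature models and underlies the universal approximation guarantee.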
Sparse signatures have been proposed for the CDMA uplink to reduce multi-user detection complexity, but they have not yet been fully exploited for the downlink counterpart. In this work, we propose a Multi-Carrier CDMA (MC-CDMA) downlink communication scheme, where regular sparse signatures are deployed in the frequency domain. From the symbol-detection point of view, we formulate a problem appropriate for the downlink with discrete alphabets as inputs. The solution to the problem provides a power-efficient precoding algorithm for the base station, subject to minimum symbol error probability (SEP) requirements at the mobile stations. In the algorithm, signature sparsity is shown to be crucial for reducing precoding complexity. Numerical results confirm a system-load-dependent power reduction gain for the proposed precoding over zero-forcing precoding and regularized zero-forcing precoding with an optimized regularization parameter under the same SEP targets. For a fixed system load, it is also demonstrated that sparse MC-CDMA with a proper choice of sparsity level attains almost the same power efficiency and link throughput as dense MC-CDMA, yet with reduced precoding complexity, thanks to the sparse signatures.
One of the most common approaches for multiobjective optimization is to generate a solution set that well approximates the whole Pareto-optimal frontier to facilitate the later decision-making process. However, how to evaluate and compare the quality of different solution sets remains challenging. Existing measures typically require additional problem knowledge and information, such as a reference point or a substitute set for the Pareto-optimal frontier. In this paper, we propose a quality measure, called dominance move (DoM), to compare solution sets generated by multiobjective optimizers. Given two solution sets, DoM measures the minimum sum of move distances for one set to weakly Pareto dominate the other set. DoM can be seen as a natural reflection of the difference between two solution sets, capturing all aspects of solution set quality, complying with Pareto dominance, and requiring no additional problem knowledge or parameters. We present an exact method to calculate DoM in the biobjective case. We show the necessary condition for constructing the optimal partition of a solution set's minimum move, and accordingly propose an efficient algorithm to recursively calculate DoM. Finally, DoM is evaluated on several groups of artificial and real test cases, as well as by a comparison with two well-established quality measures.
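The dominance relation underlying DoM can be sketched as follows (minimization assumed); DoM itself then measures the minimum total move distance needed for one set to reach this state, which the sketch does not compute.

```python
def weakly_dominates(a, b):
    """Point a weakly dominates point b (minimisation): a is no worse than
    b in every objective."""
    return all(ai <= bi for ai, bi in zip(a, b))

def set_weakly_dominates(A, B):
    """Set A weakly Pareto-dominates set B if every point of B is weakly
    dominated by some point of A; DoM is the cheapest way to move A's
    points so that this predicate becomes true."""
    return all(any(weakly_dominates(a, b) for a in A) for b in B)
```

When the predicate already holds, DoM is zero, which is why the measure is compliant with Pareto dominance.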
In this paper, we establish new capacity bounds for the multi-sender unicast index-coding problem. We first revisit existing outer and inner bounds proposed by Sadeghi et al. and identify the suboptimality of their inner bounds in general. We then present an alternative multi-sender maximal-acyclic-induced-subgraph outer bound that simplifies the existing one. For the inner bound, we identify shortcomings of the state-of-the-art partitioned Distributed Composite Coding (DCC) in the strategy of sender partitioning and in the implementation of multi-sender composite coding. We then modify the existing sender partitioning by a new joint link-and-sender partitioning technique, which allows each sender to split its link capacity so as to contribute to collaborative transmissions in multiple groups if necessary. This leads to a modified DCC (mDCC) scheme that is shown to outperform partitioned DCC and suffice to achieve optimality for some index-coding instances. We also propose cooperative compression of composite messages in composite coding to exploit the potential overlapping of messages at different senders to support larger composite rates than those by point-to-point compression in the existing DCC schemes. Combining joint partitioning and cooperative compression proposed, we develop a new multi-sender Cooperative Composite Coding (CCC) scheme for the problem. The CCC scheme improves upon partitioned DCC and mDCC in general, and is the key to achieve optimality for a number of index-coding instances. The usefulness of each scheme is illuminated via examples, and the capacity region is established for each example.
Jan 13 2017 cs.CV
Structural learning, a method to estimate the parameters for discrete energy minimization, has been proven to be effective in solving computer vision problems, especially in 3D scene parsing. As the complexity of the models increases, structural learning algorithms turn to approximate inference to retain tractability. Unfortunately, such methods often fail because the approximation can be arbitrarily poor. In this work, we propose a method to overcome this limitation through exploiting the properties of the joint problem of training time inference and learning. With the help of the learning framework, we transform the inapproximable inference problem into a polynomial time solvable one, thereby enabling tractable exact inference while still allowing an arbitrary graph structure and full potential interactions. Our learning algorithm is guaranteed to return a solution with a bounded error to the global optimal within the feasible parameter space. We demonstrate the effectiveness of this method on two point cloud scene parsing datasets. Our approach runs much faster and solves a problem that is intractable for previous, well-known approaches.
Jan 10 2017 cs.CL
Identifying different varieties of the same language is more challenging than identifying unrelated languages. In this paper, we propose an approach to discriminate language varieties, or dialects, of Mandarin Chinese for Mainland China, Hong Kong, Taiwan, Macao, Malaysia and Singapore, a.k.a. the Greater China Region (GCR). When applied to dialect identification for the GCR, we find that the commonly used character-level or word-level uni-gram features are not very effective, owing to specific problems such as the ambiguity and context-dependent characteristics of words in GCR dialects. To overcome these challenges, we use not only general features like character-level n-grams, but also many new word-level features, including PMI-based and word alignment-based features. A series of evaluation results on both news and open-domain datasets from Wikipedia shows the effectiveness of the proposed approach.
City traffic is a dynamic system of enormous complexity. Modeling and predicting city traffic flow remains a challenging task; the main difficulties are how to specify supply and demand and how to parameterize the model. In this paper we attempt to solve these problems with the help of a large amount of floating-car data. We propose a coarse-grained cellular automata model that simulates vehicles moving on uniform grids whose cells are much larger than those of microscopic cellular automata models. The car-to-car interaction of the microscopic model is replaced in our model by coupling between vehicles and coarse-grained state variables. To parameterize the model, flux-occupancy relations are fitted from historical data at every grid cell; these serve as coarse-grained fundamental diagrams coupling occupancy and speed. To evaluate the model, we feed it the historical travel demands and trajectories obtained from the floating-car data and use it to predict road speed one hour into the future. Numerical results show that our model can capture the traffic flow pattern of the entire city and make reasonable predictions. The current work can be considered a prototype for a model-based forecasting system for city traffic.
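The coupling between occupancy and speed through a fundamental diagram can be sketched as follows. The linear (Greenshields-type) speed-occupancy relation and the ring of cells are illustrative assumptions; the paper instead fits a separate flux-occupancy relation per grid cell from historical floating-car data:

```python
def speed(occupancy, v_free=1.0):
    """Toy fundamental diagram: speed falls linearly with occupancy
    (Greenshields form). Stand-in for the per-cell relations fitted
    from historical data in the paper's model."""
    return v_free * max(0.0, 1.0 - occupancy)

def step(cells):
    """One coarse-grained update on a ring of grid cells: the flux
    leaving each cell is occupancy * speed(occupancy), capped by the
    free space in the downstream cell. Total occupancy is conserved."""
    n = len(cells)
    flux = []
    for i in range(n):
        out = cells[i] * speed(cells[i])
        free = 1.0 - cells[(i + 1) % n]
        flux.append(min(out, free))
    return [cells[i] - flux[i] + flux[(i - 1) % n] for i in range(n)]

cells = [0.9, 0.0, 0.1, 0.2]  # a jam followed by nearly empty road
cells = step(cells)
```

Because the diagram makes speed drop as occupancy rises, congested cells pass vehicles downstream slowly, reproducing jam persistence at the coarse-grained level.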
In conventional cellular networks, base stations (BSs) deployed far away from each other are generally assumed to be mutually independent. After the long-term evolution of cellular networks across generations, however, this assumption no longer holds. Instead, the BSs, which appear to be gradually deployed by operators in a service-oriented manner, embed many fundamentally distinctive features in their locations, coverage and traffic loads. These features can be leveraged to analyze intrinsic patterns in BSs and even in human communities. In this paper, based on large-scale measurement datasets, we build a correlation model of BSs by utilizing one of the most important features, i.e., spatial traffic. Drawing on the theory of complex networks, we further analyze the structure and characteristics of this traffic-load correlation model. Numerical results show that the degree distribution follows a scale-free property. The datasets also unveil fractal and small-world characteristics. Furthermore, we apply the collective influence (CI) algorithm to locate the influential base stations and demonstrate that some low-degree BSs may outrank BSs with larger degrees.
Nov 29 2016 cs.NI
Machine-to-machine (M2M) communications have attracted great attention from both academia and industry. In this paper, drawing on recent advances in wireless network virtualization and software-defined networking (SDN), we propose a novel framework for M2M communications in software-defined cellular networks with wireless network virtualization. In the proposed framework, according to the different functions and quality-of-service (QoS) requirements of machine-type communication devices (MTCDs), a hypervisor enables the virtualization of the physical M2M network, which is abstracted and sliced into multiple virtual M2M networks. In addition, we develop a decision-theoretic approach to optimize the random access process of M2M communications. Furthermore, we develop a feedback and control loop through which the SDN controller dynamically adjusts the number of resource blocks (RBs) used in the random access phase of a virtual M2M network. Extensive simulation results with different system parameters are presented to show the performance of the proposed scheme.
Nov 15 2016 cs.NI
With the growing interest in the internet of things (IoT), machine-to-machine (M2M) communications have become an important networking paradigm. In this paper, drawing on recent advances in wireless network virtualization (WNV), we propose a novel framework for M2M communications in vehicular ad-hoc networks (VANETs) with WNV. In the proposed framework, according to the different applications and quality-of-service (QoS) requirements of vehicles, a hypervisor enables the virtualization of the physical vehicular network, which is abstracted and sliced into multiple virtual networks. Moreover, the process of resource block (RB) selection and random access in each virtual vehicular network is formulated as a partially observable Markov decision process (POMDP), which maximizes the expected reward in terms of transmission capacity. The optimal policy for RB selection is derived via a dynamic programming approach. Extensive simulation results with different system parameters are presented to show the performance improvement of the proposed scheme.
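The dynamic-programming core behind deriving an optimal RB-selection policy can be illustrated with a toy, fully observed decision process. Note the paper's formulation is a POMDP over partially observed contention; the fully observed simplification below, and its states, actions and rewards, are hypothetical:

```python
def value_iteration(states, actions, T, R, gamma=0.9, iters=200):
    """Value iteration for a small MDP. T[s][a] is a list of
    (probability, next_state) pairs; R[s][a] is the expected reward."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in T[s][a])
                    for a in actions)
             for s in states}
    policy = {s: max(actions,
                     key=lambda a: R[s][a]
                     + gamma * sum(p * V[s2] for p, s2 in T[s][a]))
              for s in states}
    return V, policy

# Hypothetical setup: contention level is the state, the number of RBs
# opened for random access is the action, and the state persists
# regardless of the action chosen.
states = ["low", "high"]
actions = ["few_RBs", "many_RBs"]
T = {s: {a: [(1.0, s)] for a in actions} for s in states}
R = {"low": {"few_RBs": 1.0, "many_RBs": 0.5},
     "high": {"few_RBs": 0.2, "many_RBs": 0.8}}
V, policy = value_iteration(states, actions, T, R)
# Under low contention few RBs suffice; under high contention
# opening many RBs earns the higher reward.
```

Solving the actual POMDP additionally requires maintaining a belief over the unobserved state, but the backup computed above is the same dynamic-programming step applied to belief points.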
In millimeter wave cellular communication, fast and reliable beam alignment via beam training is crucial to harvest sufficient beamforming gain for the subsequent data transmission. In this paper, we establish fundamental limits on beam-alignment performance under both the exhaustive search and the hierarchical search that adopts multi-resolution beamforming codebooks, accounting for time-domain training overhead. Specifically, we derive lower and upper bounds on the probability of misalignment for an arbitrary level in the hierarchical search, based on a single-path channel model. Using the method of large deviations, we characterize the decay-rate functions of both bounds and show that the bounds coincide as the training sequence length grows large. We go on to characterize the asymptotic misalignment probability of both the hierarchical and exhaustive search, and show that the latter asymptotically outperforms the former, subject to the same training overhead and codebook resolution. We show via numerical results that this relative performance behavior holds in the non-asymptotic regime. Moreover, the exhaustive search is shown to achieve significantly higher worst-case spectrum efficiency than the hierarchical search when the pre-beamforming signal-to-noise ratio (SNR) is relatively low. This study hence implies that the exhaustive search is more effective for users situated farther from base stations, as they tend to have low SNR.
Oct 19 2016 cs.CV
This paper presents a deep convolutional network model for identity-aware transfer (DIAT) of facial attributes. Given the source input image and the reference attribute, DIAT aims to generate a facial image (i.e., a target image) that not only owns the reference attribute but also keeps an identity the same as, or similar to, that of the input image. We develop a two-stage scheme to transfer the input image to each reference attribute label. A feed-forward transform network is first trained by combining a perceptual identity-aware loss and a GAN-based attribute loss, and a face enhancement network is then introduced to improve the visual quality. We further define a perceptual identity loss on the convolutional feature maps of the attribute discriminator, resulting in a DIAT-A model. Our DIAT and DIAT-A models provide a unified solution for several representative facial attribute transfer tasks, such as expression transfer, accessory removal, age progression, and gender transfer. The experimental results validate their effectiveness. Even for identity-related attributes (e.g., gender), our DIAT-A can obtain visually impressive results by changing the attribute while retaining most identity features of the source image.
In this work, we extend our previous work on largeness tracing among physicists to other fields, namely mathematics, economics and biomedical science. Overall, the results confirm our previous discovery, indicating that scientists in all these fields trace large topics. Surprisingly, however, researchers in mathematics appear more likely to trace large topics than those in the other fields. We also find that, on average, papers in top journals are less largeness-driven. We compare researchers from the USA, Germany, Japan and China and find that Chinese researchers exhibit consistently larger exponents, indicating that in all these fields Chinese researchers trace large topics more strongly than others. Further correlation analyses between the degree of largeness tracing and the numbers of authors, affiliations and references per paper reveal positive correlations: papers with more authors, affiliations or references are likely to be more largeness-driven. There are, however, several interesting and noteworthy exceptions: in economics, papers with more references are not necessarily more largeness-driven, and the same is true for papers with more authors in biomedical science. We believe that these empirical discoveries may be valuable to science policy-makers.
Sep 02 2016 cs.CR
The pilot spoofing attack is an active eavesdropping activity launched by an adversary during the reverse channel training phase. By transmitting the same pilot signal as the legitimate user, the pilot spoofing attack is able to degrade the quality of the legitimate transmission and, more severely, facilitate eavesdropping. In an effort to detect the pilot spoofing attack and minimize its damage, in this paper we propose a novel random-training-assisted (RTA) pilot spoofing detection algorithm. In particular, we develop a new training mechanism by adding a random training phase after the conventional pilot training phase. By examining the difference between the estimated legitimate channels during these two phases, the pilot spoofing attack can be detected accurately. If no spoofing attack is detected, we present a computationally efficient channel estimation enhancement algorithm to further improve the channel estimation accuracy. If the existence of a pilot spoofing attack is identified, a zero-forcing (ZF)-based secure transmission scheme is proposed to protect the confidential information from the active eavesdropper. Extensive simulation results demonstrate that the proposed RTA scheme can achieve efficient pilot spoofing detection, accurate channel estimation, and secure transmission.
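The detection idea, comparing the channel estimate from the pilot phase with the one from the added random training phase, can be caricatured with noiseless scalars. The threshold and the scalar channel model are illustrative assumptions, not the paper's actual estimator:

```python
def detect_spoofing(h_pilot_est, h_random_est, threshold=0.1):
    """Toy version of the detection rule: under a spoofing attack the
    pilot-phase estimate is contaminated by the eavesdropper's channel,
    while the random-phase estimate (to a first approximation) is not,
    so a large gap between the two estimates flags an attack.
    `threshold` is a hypothetical design parameter."""
    return abs(h_pilot_est - h_random_est) > threshold

h_legit = 1.3  # legitimate user's channel gain (scalar stand-in)
h_eve = 0.7    # eavesdropper's channel gain

# No attack: both phases estimate the legitimate channel, no alarm.
assert not detect_spoofing(h_legit, h_legit)
# Attack: the eavesdropper's pilot superimposes its channel onto the
# pilot-phase estimate, so the two estimates disagree.
assert detect_spoofing(h_legit + h_eve, h_legit)
```

In practice noise makes both estimates random, and the threshold trades off false-alarm and miss probabilities, which is what the paper's analysis quantifies.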
Aug 24 2016 cs.CV
This paper focuses on the problem of generating human face pictures from specified attributes. Existing CNN-based face generation models, however, either ignore the identity of the generated face or fail to preserve the identity of the reference face image. Here we address this problem from the viewpoint of optimization, and propose an optimization model to generate a human face with the given attributes while keeping the identity of the reference image. The attributes can be obtained from an attribute-guided image or by tuning the attribute features of the reference image. With the deep convolutional network "VGG-Face", the loss is defined on the convolutional feature maps. We then apply a gradient descent algorithm to solve this optimization problem. The results validate the effectiveness of our method for attribute-driven and identity-preserving face generation.
Aug 12 2016 cs.HC
Human identification plays an important role in human-computer interaction. Numerous methods have been proposed for human identification (e.g., face recognition, gait recognition, fingerprint identification). While these methods can be very useful under different conditions, they also suffer from certain shortcomings (e.g., user privacy, sensing coverage range). In this paper, we propose a novel approach for human identification that leverages WiFi signals to enable non-intrusive human identification in domestic environments. It is based on the observation that each person has specific influence patterns on the surrounding WiFi signal while moving indoors, owing to their body shape characteristics and motion patterns. This influence can be captured by the Channel State Information (CSI) time series of WiFi. Specifically, a combination of Principal Component Analysis (PCA), Discrete Wavelet Transform (DWT) and Dynamic Time Warping (DTW) techniques is used for CSI-waveform-based human identification. We implemented the system in a 6 m x 5 m smart home environment and recruited 9 users for data collection and evaluation. Experimental results indicate that the identification accuracy ranges from about 88.9% to 94.5% as the candidate user set shrinks from 6 to 2 users, showing that the proposed human identification method is effective in domestic environments.
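Of the three techniques, DTW performs the matching between waveforms. A minimal pure-Python sketch on toy waveforms follows; the waveforms and the absolute-difference cost are illustrative, not the paper's CSI data or preprocessing:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D waveforms, the
    matching step applied to (PCA- and DWT-processed) CSI time series.
    Classic O(n*m) dynamic program with absolute-difference cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

walk_a = [0.0, 1.0, 2.0, 1.0, 0.0]
walk_b = [0.0, 1.0, 1.0, 2.0, 1.0, 0.0]  # same pattern, time-stretched
walk_c = [2.0, 0.0, 2.0, 0.0, 2.0]       # a different pattern
# DTW absorbs the time stretch, so the same-gait pair is far closer.
assert dtw_distance(walk_a, walk_b) < dtw_distance(walk_a, walk_c)
```

This tolerance to local time warping is why DTW suits gait-induced CSI patterns: the same person walking at slightly different speeds still produces a small distance.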
Discrete energy minimization is widely used in computer vision and machine learning for problems such as MAP inference in graphical models. The problem, in general, is notoriously intractable, and finding the globally optimal solution is known to be NP-hard. However, is it possible to approximate this problem with a reasonable ratio bound on the solution quality in polynomial time? We show in this paper that the answer is no. Specifically, we show that general energy minimization, even in the 2-label pairwise case, and planar energy minimization with three or more labels are exp-APX-complete. This finding rules out the existence of any approximation algorithm with a sub-exponential approximation ratio in the input size for these two problems, including constant-factor approximations. Moreover, we collect and review the computational complexity of several subclass problems and arrange them on a complexity scale consisting of three major complexity classes, PO, APX, and exp-APX, corresponding to problems that are solvable, approximable, and inapproximable in polynomial time, respectively. Problems in the first two complexity classes can serve as alternative tractable formulations of the inapproximable ones. This paper can help vision researchers select an appropriate model for an application or guide them in designing new algorithms.
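A tiny 2-label pairwise instance makes the object of study concrete. The exhaustive enumeration below is exact but exponential in the number of nodes, which is precisely the cost that, by the inapproximability result, cannot in general be traded for a polynomial-time approximation; the unary and pairwise values are illustrative:

```python
from itertools import product

def energy(labels, unary, pairwise, edges):
    """Energy of a labeling in a 2-label pairwise model:
    E(x) = sum_i unary[i][x_i] + sum_{(i,j) in edges} pairwise[x_i][x_j]."""
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    e += sum(pairwise[labels[i]][labels[j]] for i, j in edges)
    return e

def minimize_brute_force(unary, pairwise, edges):
    """Exact minimization by searching all 2^n labelings."""
    n = len(unary)
    return min(product((0, 1), repeat=n),
               key=lambda x: energy(x, unary, pairwise, edges))

unary = [[0.0, 2.0], [2.0, 0.0], [1.0, 1.0]]  # per-node label costs
pairwise = [[0.0, 1.0], [1.0, 0.0]]           # Potts-style smoothness
edges = [(0, 1), (1, 2)]                      # a 3-node chain
best = minimize_brute_force(unary, pairwise, edges)
# Node 0 prefers label 0, node 1 prefers label 1, and the smoothness
# term pulls the indifferent node 2 to agree with node 1.
```

MAP inference in a pairwise graphical model is exactly this minimization with energies given by negative log-potentials.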
Neural conversational models tend to produce generic or safe responses in different contexts, e.g., replying "Of course" to narrative statements or "I don't know" to questions. In this paper, we propose an end-to-end approach to avoiding this problem in neural generative models. Additional memory mechanisms are introduced into standard sequence-to-sequence (seq2seq) models so that context can be considered while generating sentences. Three seq2seq models, which respectively memorize a fixed-size contextual vector from the hidden input, from the hidden input/output, and through a gated contextual attention structure, were trained and tested on a dataset of labeled question-answering pairs in Chinese. The model with contextual attention outperforms the others, including state-of-the-art seq2seq models, on a perplexity test. The novel contextual model generates diverse and robust responses and is able to carry out conversations on a wide range of topics appropriately.
Real-world communication networks often couple with each other to save costs, with the result that no network operates as a stand-alone system with its own function and efficiency. To investigate this, we propose a transportation model on two coupled networks with bandwidth sharing. We find that a free-flow state and a congestion state can coexist in the two coupled networks, and that free-flow paths and congestion paths can coexist in each network. Considering three bandwidth-sharing mechanisms, random, assortative and disassortative coupling, we also find that the transportation capacity of the network depends only on the coupling mechanism, and the fraction of coupled links affects only the performance of the system in the congestion state, such as the traveling time. In addition, with assortative coupling, the transportation capacity of the system decreases significantly. Disassortative coupling, however, has little influence on the transportation capacity of the system, which provides a good strategy for saving bandwidth. Furthermore, a theoretical method is developed to obtain the bandwidth usage of each link, based on which we can obtain the congestion transition point exactly.
Collaborations and citations within scientific research grow simultaneously and interact dynamically. Modelling the coevolution between them helps to study many phenomena that can be approached only by combining citation and coauthorship data. We propose a geometric graph for this coevolution, whose mechanism synthetically expresses the interactive impacts of authors and papers in a geometrical way. The model is validated against a dataset of papers published in PNAS during 2007-2015. The validation shows the ability to reproduce a range of features observed in citation and coauthorship data, both combined and separately. In particular, the empirical distribution of citations per author has two limits, in which it appears as a generalized Poisson distribution and as a power law, respectively. Our model successfully reproduces the shape of this distribution and explains how that shape emerges from the decisions of authors. The model also captures the empirically positive correlations between the numbers of authors' papers, citations and collaborators.