results for au:Deng_Y in:cs

- We introduce a conceptual framework and an interventional calculus to steer, manipulate, and reconstruct the dynamics and generating mechanisms of dynamical systems from partial and disordered observations, based on the contributions of each of the system's elements, by exploiting first principles from the theory of computability and algorithmic information. This calculus entails finding and applying controlled interventions to an evolving object to estimate how its algorithmic information content is affected in terms of positive or negative shifts towards and away from randomness in connection to causation. The approach is an alternative to statistical approaches for inferring causal relationships and formulating theoretical expectations from perturbation analysis. We find that the algorithmic information landscape of a system runs parallel to its dynamic attractor landscape, affording an avenue for moving systems on one plane so they can be controlled on the other plane. Based on these methods, we advance tools for reprogramming a system that do not require full knowledge of, or access to, the system's actual kinetic equations or probability distributions. This new approach yields a suite of powerful parameter-free algorithms of wide applicability, including causal discovery, dimension reduction, feature selection, model generation, a maximal algorithmic-randomness principle, and a system's (re)programmability index. We apply these methods to static and to evolving genetic regulatory networks. We highlight their ability to pinpoint key elements (genes) related to cell function and cell development, conforming to biological knowledge from experimentally validated data and the literature, and demonstrate how the method can reshape a system's dynamics in a controlled manner through algorithmic causal mechanisms.
- In this paper, we consider the use of structure learning methods for probabilistic graphical models to identify statistical dependencies in high-dimensional physical processes. Such processes are often synthetically characterized using partial differential equations (PDEs) and are observed in a variety of natural phenomena, including geoscience data capturing atmospheric and hydrological phenomena. Classical structure learning approaches such as the PC algorithm and variants are challenging to apply due to their high computational and sample requirements. Modern approaches, often based on sparse regression and variants, do come with finite sample guarantees, but are usually highly sensitive to the choice of hyper-parameters, e.g., the parameter $\lambda$ for the sparsity-inducing constraint or regularization. In this paper, we present ACLIME-ADMM, an efficient two-step algorithm for adaptive structure learning, which estimates an edge-specific parameter $\lambda_{ij}$ in the first step, and uses these parameters to learn the structure in the second step. Both steps of our algorithm use (inexact) ADMM to solve suitable linear programs, and all iterations can be done in closed form in an efficient block-parallel manner. We compare ACLIME-ADMM with baselines on both synthetic data simulated by PDEs that model advection-diffusion processes, and real data (50 years) of daily global geopotential heights to study information flow in the atmosphere. ACLIME-ADMM is shown to be efficient, stable, and competitive, usually better than the baselines, especially on difficult problems. On real data, ACLIME-ADMM recovers the underlying structure of global atmospheric circulation, including switches in wind directions at the equator and tropics, entirely from the data.
- Sep 13 2017 cs.CL arXiv:1709.03815v1: We introduce an open-source toolkit for neural machine translation (NMT) to support research into model architectures, feature representations, and source modalities, while maintaining competitive performance, modularity, and reasonable training requirements.
- Sep 13 2017 cs.CL arXiv:1709.03814v1: This paper describes SYSTRAN's systems submitted to the WMT 2017 shared news translation task for English-German, in both translation directions. Our systems are built using OpenNMT, an open-source neural machine translation system implementing sequence-to-sequence models with LSTM encoder/decoders and attention. We experimented with automatically back-translated monolingual data. Our resulting models are further hyper-specialised with an adaptation technique that finely tunes models according to the evaluation test sentences.
- Jul 18 2017 cs.CV arXiv:1707.05251v1: We introduce EnhanceGAN, an adversarial-learning-based model that performs automatic image enhancement. Traditional image enhancement frameworks involve training separate models for automatic cropping or color enhancement in a fully-supervised manner, which requires expensive annotations in the form of image pairs. In contrast to these approaches, our proposed EnhanceGAN only requires weak supervision (binary labels on image aesthetic quality) and is able to learn enhancement parameters for tasks including image cropping and color enhancement. The full differentiability of our image enhancement modules enables training the proposed EnhanceGAN in an end-to-end manner. A novel stage-wise learning scheme is further proposed to stabilize the training of each enhancement task and facilitate extensibility to other image enhancement techniques. Our weakly-supervised EnhanceGAN reports competitive quantitative results against supervised models in automatic image cropping using standard benchmarking datasets, and a user study confirms that the image enhancement results are on par with, or even preferred over, professional enhancement.
- Jun 30 2017 cs.DM arXiv:1706.09506v1: A family of graphs optimized as the topologies for supercomputer interconnection networks is proposed. The special needs of such network topologies, minimal diameter and mean path length, are met by special constructions of the weight vectors in a representation of the symplectic algebra. Such theoretical design of topologies can conveniently reconstruct the mesh and hypercubic graphs, widely used as today's network topologies. Our symplectic algebraic approach helps generate many classes of graphs suitable for network topologies.
- Information delivery using chemical molecules is an integral part of biology at multiple distance scales and has attracted recent interest in bioengineering and communication theory. Potential applications include cooperative networks with a large number of simple devices that could be randomly located (e.g., due to mobility). This paper presents the first tractable analytical model for the collective signal strength due to randomly-placed transmitters in a three-dimensional (3D) large-scale molecular communication system, either with or without degradation in the propagation environment. Transmitter locations in an unbounded and homogeneous fluid are modelled as a homogeneous Poisson point process. By applying stochastic geometry, analytical expressions are derived for the expected number of molecules absorbed by a fully-absorbing receiver or observed by a passive receiver. The bit error probability is derived under ON/OFF keying and either a constant or adaptive decision threshold. Results reveal that the combined signal strength increases proportionately with the transmitter density, and the minimum bit error probability can be improved by introducing molecule degradation. Furthermore, the analysis of the system can be generalized to other receiver designs and other performance characteristics in large-scale molecular communication systems.
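The expected-count analysis above can be illustrated numerically. The sketch below (all function and parameter names are illustrative, not from the paper) combines the standard first-passage absorption probability for a fully absorbing sphere in an unbounded 3D medium with Campbell's theorem for a homogeneous Poisson point process, assuming each transmitter releases a single molecule and there is no degradation:

```python
import math

def hit_prob(r, a, D, t):
    """Probability that a molecule released at distance r from the centre of a
    fully absorbing sphere of radius a is absorbed within time t, for diffusion
    coefficient D (classic first-passage result in an unbounded 3D medium)."""
    if r <= a:
        return 1.0
    return (a / r) * math.erfc((r - a) / (2.0 * math.sqrt(D * t)))

def expected_absorbed(lam, a, D, t, n=20000):
    """Expected number of absorbed molecules when each transmitter of a
    homogeneous PPP with density lam releases one molecule at time 0.
    By Campbell's theorem: E[N] = lam * integral_a^inf F(r,t) 4 pi r^2 dr,
    integrated numerically (the integrand is negligible beyond r_max)."""
    r_max = a + 10.0 * math.sqrt(D * t)
    dr = (r_max - a) / n
    total = 0.0
    for i in range(n):
        r = a + (i + 0.5) * dr  # midpoint rule
        total += hit_prob(r, a, D, t) * 4.0 * math.pi * r * r * dr
    return lam * total
```

Because the expectation is linear in the transmitter density, doubling the density doubles the expected number of absorbed molecules, matching the proportionality reported in the abstract.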
- Massive Internet of Things (mIoT) has provided an auspicious opportunity to build powerful and ubiquitous connections, but faces a plethora of new challenges; cellular networks are potential solutions due to their high scalability, reliability, and efficiency. The Random Access CHannel (RACH) procedure is the first step of connection establishment between IoT devices and Base Stations (BSs) in the cellular-based mIoT network, where modelling the interactions between the static properties of the physical layer network and the dynamic properties of the queue evolving in each IoT device is challenging. To tackle this, we provide a novel traffic-aware spatio-temporal model to analyze RACH in cellular-based mIoT networks, where the physical layer network is modelled and analyzed based on stochastic geometry in the spatial domain, and the queue evolution is analyzed based on probability theory in the time domain. For performance evaluation, we derive exact expressions for the preamble transmission success probabilities of a randomly chosen IoT device with different RACH schemes in each time slot, which offer insights into the effectiveness of each RACH scheme. Our derived analytical results are verified by realistic simulations capturing the evolution of packets in each IoT device. This mathematical model and analytical framework can be applied to evaluate the performance of other types of RACH schemes in cellular-based networks by simply integrating their preamble transmission principles.
- We describe an open-source toolkit for neural machine translation (NMT). The toolkit prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, while maintaining competitive performance and reasonable training requirements. The toolkit consists of modeling and translation support, as well as detailed pedagogical documentation about the underlying techniques.
- Developing efficient and guaranteed nonconvex algorithms has been an important challenge in modern machine learning. Algorithms with good empirical performance such as stochastic gradient descent often lack theoretical guarantees. In this paper, we analyze the class of homotopy or continuation methods for global optimization of nonconvex functions. These methods start from an objective function that is efficient to optimize (e.g. convex), and progressively modify it to obtain the required objective, with the solutions passed along the homotopy path. For the challenging problem of tensor PCA, we prove global convergence of the homotopy method in the "high noise" regime. The signal-to-noise requirement for our algorithm is tight in the sense that it matches the recovery guarantee for the best degree-4 sum-of-squares algorithm. In addition, we prove a phase transition along the homotopy path for tensor PCA. This allows us to simplify the homotopy method to a local search algorithm, viz., tensor power iterations, with a specific initialization and a noise injection procedure, while retaining the theoretical guarantees.
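As a minimal illustration of the local-search endpoint mentioned above, here is plain tensor power iteration on a noiseless rank-1 symmetric 3-way tensor. This is a sketch only: the paper's homotopy-derived initialization and noise injection are omitted, and all names are illustrative.

```python
import random, math

def outer3(v):
    """Rank-1 symmetric 3-way tensor v (x) v (x) v as nested lists."""
    n = len(v)
    return [[[v[i] * v[j] * v[k] for k in range(n)] for j in range(n)]
            for i in range(n)]

def t_contract(T, u):
    """Contract the tensor on two modes: w_i = sum_{j,k} T[i][j][k] u_j u_k."""
    n = len(u)
    return [sum(T[i][j][k] * u[j] * u[k]
                for j in range(n) for k in range(n)) for i in range(n)]

def normalize(w):
    s = math.sqrt(sum(x * x for x in w))
    return [x / s for x in w]

def tensor_power_iteration(T, n, iters=100, seed=0):
    """Repeatedly apply u <- normalize(T(I, u, u)) from a random start."""
    rng = random.Random(seed)
    u = normalize([rng.gauss(0, 1) for _ in range(n)])
    for _ in range(iters):
        u = normalize(t_contract(T, u))
    return u
```

On a noiseless rank-1 tensor one step already aligns the iterate with the planted component, since T(I, u, u) = v (v.u)^2 is a positive multiple of v whenever v.u is nonzero.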
- Oct 19 2016 cs.CL arXiv:1610.05540v1: Since the first online demonstration of Neural Machine Translation (NMT) by LISA, NMT development has recently moved from the laboratory to production systems, as demonstrated by several entities announcing the roll-out of NMT engines to replace their existing technologies. NMT systems have a large number of training configurations, and the training process of such systems is usually very long, often a few weeks, so the role of experimentation is critical and important to share. In this work, we present our approach to production-ready systems simultaneously with the release of online demonstrators covering a large variety of languages (12 languages, for 32 language pairs). We explore different practical choices: an efficient and evolutive open-source framework; data preparation; network architecture; additional implemented features; tuning for production; etc. We discuss our evaluation methodology, present our first findings, and finally outline further work. Our ultimate goal is to share our expertise to build competitive production systems for "generic" translation. We aim at contributing to setting up a collaborative framework to speed up adoption of the technology, foster further research efforts, and enable the delivery to, and adoption by, industry of use-case-specific engines integrated into real production workflows. Mastering the technology would allow us to build translation engines suited to particular needs, outperforming current simplest/uniform systems.
- Oct 13 2016 cs.DL arXiv:1610.03706v1: Academic leadership is essential for research innovation and impact. Until now, there has been no dedicated measure of leadership by bibliometrics. Popular bibliometric indices are mainly based on academic output, such as the journal impact factor and the number of citations. Here we develop an academic leadership index based on readily available bibliometric data that is sensitive to not only academic output but also research efficiency. Our leadership index was tested in two studies on peer-reviewed journal papers by extramurally-funded principal investigators in the field of life sciences from China and the USA, respectively. The leadership performance of these principal investigators was quantified and compared relative to university rank and other factors. As a validation measure, we show that the highest average leadership index was achieved by principal investigators at top national universities in both countries. More interestingly, our results also indicate that on an individual basis, strong leadership and high efficiency are not necessarily associated with those at top-tier universities nor with the most funding. This leadership index may become the basis of a comprehensive merit system, facilitating academic evaluation and resource management.
- Oct 05 2016 cs.CV arXiv:1610.00838v2: This survey aims at reviewing recent computer vision techniques used in the assessment of image aesthetic quality. Image aesthetic assessment aims at computationally distinguishing high-quality photos from low-quality ones based on photographic rules, typically in the form of binary classification or quality scoring. A variety of approaches have been proposed in the literature to solve this challenging problem. In this survey, we present a systematic listing of the reviewed approaches based on visual feature types (hand-crafted features and deep features) and evaluation criteria (dataset characteristics and evaluation metrics). Main contributions and novelties of the reviewed approaches are highlighted and discussed. In addition, following the emergence of deep learning techniques, we systematically evaluate recent deep learning settings that are useful for developing a robust deep model for aesthetic scoring. Experiments are conducted using simple yet solid baselines that are competitive with the current state of the art. Moreover, we discuss the possibility of manipulating the aesthetics of images through computational approaches. We hope that our survey can serve as a comprehensive reference source for future research on image aesthetic assessment.
- Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout's training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of a latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove upper bounds on the loss in accuracy due to expectation-linearization and describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve performance consistently.
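The train/inference gap discussed above can be made concrete on a one-unit toy example: compare the exact dropout expectation (enumerating all Bernoulli keep-masks) against the standard deterministic weight-scaling inference. This is an illustrative sketch of the gap itself, not the paper's expectation-linearization procedure; all names are made up for the example.

```python
from itertools import product

def relu(x):
    return max(0.0, x)

def dropout_expectation(w, x, p):
    """Exact E_m[ relu(w . (m * x)) ] over i.i.d. Bernoulli(p) keep-masks m."""
    total = 0.0
    for mask in product([0, 1], repeat=len(x)):
        prob = 1.0
        for m in mask:
            prob *= p if m == 1 else (1.0 - p)
        total += prob * relu(sum(wi * mi * xi for wi, mi, xi in zip(w, mask, x)))
    return total

def standard_inference(w, x, p):
    """Deterministic 'weight scaling' inference used in practice."""
    return relu(sum(wi * p * xi for wi, xi in zip(w, x)))

# The ReLU nonlinearity makes the two quantities differ: here the exact
# dropout expectation is 0.25 while scaled inference gives 0.
w, x, p = [1.0, -1.0], [1.0, 1.0], 0.5
gap = abs(dropout_expectation(w, x, p) - standard_inference(w, x, p))
```

For a purely linear unit the two coincide (the model is exactly expectation-linear); the nonlinearity is what opens the gap that the paper's regularizer controls.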
- We present a neural encoder-decoder model to convert images into presentational markup based on a scalable coarse-to-fine attention mechanism. Our method is evaluated in the context of image-to-LaTeX generation, and we introduce a new dataset of real-world rendered mathematical expressions paired with LaTeX markup. We show that unlike neural OCR techniques using CTC-based models, attention-based approaches can tackle this non-standard OCR task. Our approach outperforms classical mathematical OCR systems by a large margin on in-domain rendered data, and, with pretraining, also performs well on out-of-domain handwritten data. To reduce the inference complexity associated with the attention-based approaches, we introduce a new coarse-to-fine attention layer that selects a support region before applying attention.
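A minimal sketch of the coarse-to-fine idea (illustrative only, not the paper's layer): pool the keys into groups, score the pooled groups against the query, keep the best-scoring group as the support region, then run ordinary softmax attention restricted to that group.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def coarse_to_fine_attention(query, keys, values, group_size):
    """Two-stage attention: a cheap coarse pass selects a support region,
    then standard attention runs only inside that region."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    groups = [list(range(i, min(i + group_size, len(keys))))
              for i in range(0, len(keys), group_size)]
    # coarse stage: mean-pool each group's keys and score against the query
    coarse_scores = []
    for g in groups:
        pooled = [sum(keys[i][d] for i in g) / len(g) for d in range(len(query))]
        coarse_scores.append(dot(query, pooled))
    best = groups[max(range(len(groups)), key=lambda i: coarse_scores[i])]
    # fine stage: ordinary softmax attention restricted to the chosen group
    weights = softmax([dot(query, keys[i]) for i in best])
    dim = len(values[0])
    return [sum(w * values[i][d] for w, i in zip(weights, best))
            for d in range(dim)]
```

The fine pass costs O(group_size) per query instead of O(len(keys)), which is the inference saving the coarse-to-fine design is after.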
- The pairwise comparison matrix, a crucial component of the Analytic Hierarchy Process (AHP), presents the preference relations among alternatives. However, in many cases, the pairwise comparison matrix is difficult to complete, which obstructs the subsequent operations of the classical AHP. In this paper, based on DEMATEL, which can derive the total relation matrix from a direct relation matrix, a new completion method for incomplete pairwise comparison matrices is proposed. The proposed method provides a new perspective for estimating the missing values, with explicit physical meaning. Besides, the proposed method has low computational cost. This promising method has wide application in multi-criteria decision-making.
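The DEMATEL step the method builds on can be sketched directly: normalize the direct relation matrix D and form the total relation matrix T = D(I - D)^{-1}. This is a generic sketch of DEMATEL (normalizing by the maximum row sum), not the paper's completion algorithm; all names are illustrative.

```python
def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_inv(A):
    """Gauss-Jordan inverse with partial pivoting, for small matrices."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        pv = M[col][col]
        M[col] = [x / pv for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

def dematel_total(direct):
    """Normalize the direct-relation matrix and compute T = D (I - D)^{-1}."""
    n = len(direct)
    s = max(sum(row) for row in direct)  # normalisation constant
    D = [[x / s for x in row] for row in direct]
    I_minus_D = [[(1.0 if i == j else 0.0) - D[i][j] for j in range(n)]
                 for i in range(n)]
    return D, mat_mul(D, mat_inv(I_minus_D))
```

T satisfies T = D + DT: total influence equals direct influence plus influence propagated through intermediaries, which is the "explicit physical meaning" the completion method exploits.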
- We develop an off-policy actor-critic algorithm for learning an optimal policy from a training set composed of data from multiple individuals. This algorithm is developed with a view towards its use in mobile health.
- Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relative distortion. In experiments, we show our parameterization of attention improves translation quality.
- We study the problem of automatically building hypernym taxonomies from textual and visual data. Previous works in taxonomy induction generally ignore the increasingly prominent visual data, which encode important perceptual semantics. Instead, we propose a probabilistic model for taxonomy induction by jointly leveraging text and images. To avoid hand-crafted feature engineering, we design end-to-end features based on distributed representations of images and words. The model is discriminatively trained given a small set of existing ontologies and is capable of building full taxonomies from scratch for a collection of unseen conceptual label items with associated images. We evaluate our model and features on the WordNet hierarchies, where our system outperforms previous approaches by a large gap.
- The performance of communication systems is fundamentally limited by the loss of energy through propagation and circuit inefficiencies. In this article, we show that it is possible to achieve ultra-low energy communications at the nano-scale, if diffusive molecules are used for carrying data. Whilst the energy of electromagnetic waves will inevitably decay as a function of transmission distance and time, the energy in individual molecules does not. Over time, the receiver has an opportunity to recover some, if not all, of the molecular energy transmitted. The article demonstrates the potential of ultra-low energy simultaneous molecular information and energy transfer (SMIET) through the design of two different nano-relay systems, and discusses how molecular communications can benefit more from crowd energy harvesting than traditional wave-based systems.
- Information delivery using chemical molecules is an integral part of biology at multiple distance scales and has attracted recent interest in bioengineering and communication. The collective signal strength at the receiver (i.e., the expected number of observed molecules inside the receiver), resulting from a large number of transmitters at random distances (e.g., due to mobility), can have a major impact on the reliability and efficiency of the molecular communication system. Modeling the collective signal from multiple diffusion sources can be computationally and analytically challenging. In this paper, we present the first tractable analytical model for the collective signal strength due to randomly-placed transmitters, whose positions are modelled as a homogeneous Poisson point process in three-dimensional (3D) space. By applying stochastic geometry, we derive analytical expressions for the expected number of observed molecules at a fully absorbing receiver and a passive receiver. Our results reveal that the collective signal strength at both types of receivers increases proportionally with increasing transmitter density. The proposed framework dramatically simplifies the analysis of large-scale molecular systems in both communication and biological applications.
- May 09 2016 cs.DS arXiv:1605.02045v1: Semi-labeled trees are phylogenies whose internal nodes may be labeled by higher-order taxa. Thus, a leaf labeled Mus musculus could nest within a subtree whose root node is labeled Rodentia, which itself could nest within a subtree whose root is labeled Mammalia. Suppose we are given a collection $\mathcal P$ of semi-labeled trees over various subsets of a set of taxa. The ancestral compatibility problem asks whether there is a semi-labeled tree $\mathcal T$ that respects the clusterings and the ancestor/descendant relationships implied by the trees in $\mathcal P$. We give a $\tilde{O}(M_{\mathcal{P}})$ algorithm for the ancestral compatibility problem, where $M_{\mathcal{P}}$ is the total number of nodes and edges in the trees in $\mathcal P$. Unlike the best previous algorithm, the running time of our method does not depend on the degrees of the nodes in the input trees.
- This paper presents an analytical comparison of active and passive receiver models in diffusive molecular communication. In the active model, molecules are absorbed when they collide with the receiver surface. In the passive model, the receiver is a virtual boundary that does not affect molecule behavior. Two approaches are presented to derive transforms between the receiver signals. As an example, two models for an unbounded diffusion-only molecular communication system with a spherical receiver are unified. As time increases in the three-dimensional system, the transform functions have constant scaling factors, such that the receiver models are effectively equivalent. Methods are presented to enable the transformation of stochastic simulations, which are used to verify the transforms and demonstrate that transforming the simulation of a passive receiver can be more efficient and more accurate than the direct simulation of an absorbing receiver.
- We investigate beamforming and artificial noise generation at the secondary transmitters to establish secure transmission in large scale spectrum sharing networks, where multiple non-colluding eavesdroppers attempt to intercept the secondary transmission. We develop a comprehensive analytical framework to accurately assess the secrecy performance under the primary users' quality of service constraint. Our aim is to characterize the impact of beamforming and artificial noise generation on this complex large scale network. We first derive exact expressions for the average secrecy rate and the secrecy outage probability. We then derive an easy-to-evaluate asymptotic average secrecy rate and asymptotic secrecy outage probability when the number of antennas at the secondary transmitter goes to infinity. Our results show that the equal power allocation between the useful signal and artificial noise is not always the best strategy to achieve maximum average secrecy rate in large scale spectrum sharing networks. Another interesting observation is that the advantage of beamforming and artificial noise generation over beamforming alone on the average secrecy rate is lost when the aggregate interference from the primary and secondary transmitters is strong, such that it overtakes the effect of the generated artificial noise.
- This paper develops a tractable framework for exploiting the potential benefits of physical layer security in three-tier wireless sensor networks using stochastic geometry. In such networks, the sensing data from the remote sensors are collected by sinks with the help of access points, and the external eavesdroppers intercept the data transmissions. We focus on the secure transmission in two scenarios: i) the active sensors transmit their sensing data to the access points, and ii) the active access points forward the data to the sinks. We derive new compact expressions for the average secrecy rate in these two scenarios. We also derive a new compact expression for the overall average secrecy rate. Numerical results corroborate our analysis and show that multiple antennas at the access points can enhance the security of three-tier wireless sensor networks. Our results show that increasing the number of access points decreases the average secrecy rate between the access point and its associated sink. However, we find that increasing the number of access points first increases the overall average secrecy rate, with a critical value beyond which the overall average secrecy rate then decreases. When increasing the number of active sensors, both the average secrecy rate between the sensor and its associated access point and the overall average secrecy rate decrease. In contrast, increasing the number of sinks improves both the average secrecy rate between the access point and its associated sink, as well as the overall average secrecy rate.
- Wireless energy harvesting is regarded as a promising energy supply alternative for energy-constrained wireless networks. In this paper, a new wireless energy harvesting protocol is proposed for an underlay cognitive relay network with multiple primary user (PU) transceivers. In this protocol, the secondary nodes can harvest energy from the primary network (PN) while sharing the licensed spectrum of the PN. In order to assess the impact of different system parameters on the proposed network, we first derive an exact expression for the outage probability for the secondary network (SN) subject to three important power constraints: 1) the maximum transmit power at the secondary source (SS) and at the secondary relay (SR), 2) the peak interference power permitted at each PU receiver, and 3) the interference power from each PU transmitter to the SR and to the secondary destination (SD). To obtain practical design insights into the impact of different parameters on successful data transmission of the SN, we derive throughput expressions for both the delay-sensitive and the delay-tolerant transmission modes. We also derive asymptotic closed-form expressions for the outage probability and the delay-sensitive throughput and an asymptotic analytical expression for the delay-tolerant throughput as the number of PU transceivers goes to infinity. The results show that the outage probability improves when PU transmitters are located near SS and sufficiently far from SR and SD. Our results also show that when the number of PU transmitters is large, the detrimental effect of interference from PU transmitters outweighs the benefits of energy harvested from the PU transmitters.
- In this paper, we present an analytical model for the diffusive molecular communication (MC) system with a reversible adsorption receiver in a fluid environment. The widely used concentration shift keying (CSK) is considered for modulation. The time-varying spatial distribution of the information molecules under the reversible adsorption and desorption reaction at the surface of a receiver is analytically characterized. Based on the spatial distribution, we derive the net number of newly-adsorbed information molecules expected in any time duration. We further derive the number of newly-adsorbed molecules expected at the steady state to demonstrate the equilibrium concentration. Given the number of newly-adsorbed information molecules, the bit error probability of the proposed MC system is analytically approximated. Importantly, we present a simulation framework for the proposed model that accounts for the diffusion and reversible reaction. Simulation results show the accuracy of our derived expressions, and demonstrate the positive effect of the adsorption rate and the negative effect of the desorption rate on the error probability of the reversible adsorption receiver when the last transmitted bit is 1. Moreover, our analytical results simplify to the special cases of a full adsorption receiver and a partial adsorption receiver, both of which do not include desorption.
- Dec 29 2015 cs.LG arXiv:1512.08279v1: Causal discovery algorithms based on probabilistic graphical models have emerged in geoscience applications for the identification and visualization of dynamical processes. The key idea is to learn the structure of a graphical model from observed spatio-temporal data, which indicates information flow, thus pathways of interactions, in the observed physical system. Studying those pathways allows geoscientists to learn subtle details about the underlying dynamical mechanisms governing our planet. Initial studies using this approach on real-world atmospheric data have shown great potential for scientific discovery. However, in these initial studies no ground truth was available, so that the resulting graphs have been evaluated only by whether a domain expert thinks they seemed physically plausible. This paper seeks to fill this gap. We develop a testbed that emulates two dynamical processes dominant in many geoscience applications, namely advection and diffusion, in a 2D grid. Then we apply the causal discovery based information tracking algorithms to the simulation data to study how well the algorithms work for different scenarios and to gain a better understanding of the physical meaning of the graph results, in particular of instantaneous connections. We make all data sets used in this study available to the community as a benchmark. Keywords: Information flow, graphical model, structure learning, causal discovery, geoscience.
- Latent Variable Models (LVMs) are a large family of machine learning models providing a principled and effective way to extract underlying patterns, structure and knowledge from observed data. Due to the dramatic growth of volume and complexity of data, several new challenges have emerged and cannot be effectively addressed by existing LVMs: (1) How to capture long-tail patterns that carry crucial information when the popularity of patterns is distributed in a power-law fashion? (2) How to reduce model complexity and computational cost without compromising the modeling power of LVMs? (3) How to improve the interpretability and reduce the redundancy of discovered patterns? To address these three challenges, we develop a novel regularization technique for LVMs, which controls the geometry of the latent space during learning to enable the learned latent components of LVMs to be diverse, in the sense that they are favored to be mutually different from each other, to accomplish long-tail coverage, low redundancy, and better interpretability. We propose a mutual angular regularizer (MAR) to encourage the components in LVMs to have larger mutual angles. The MAR is non-convex and non-smooth, entailing great challenges for optimization. To cope with this issue, we derive a smooth lower bound of the MAR and optimize the lower bound instead. We show that the monotonicity of the lower bound is closely aligned with the MAR, qualifying the lower bound as a desirable surrogate of the MAR. Using a neural network (NN) as an instance, we analyze how the MAR affects the generalization performance of the NN. On two popular latent variable models, the restricted Boltzmann machine and distance metric learning, we demonstrate that MAR can effectively capture long-tail patterns, reduce model complexity without sacrificing expressivity, and improve interpretability.
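A simplified version of the mutual angular idea can be written down directly: measure diversity as the mean pairwise angle between component vectors. The paper's full MAR also accounts for the variance of the angles; this mean-only variant and all names below are illustrative.

```python
import math

def mutual_angles(components):
    """Pairwise angles (radians) between component vectors via cosine
    similarity; |cos| is used so angles lie in [0, pi/2] (non-obtuse)."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return math.acos(min(1.0, abs(dot) / (na * nb)))
    n = len(components)
    return [angle(components[i], components[j])
            for i in range(n) for j in range(i + 1, n)]

def mar_score(components):
    """Mean pairwise angle: larger means more mutually diverse components.
    A diversity regularizer rewards component sets with a larger score."""
    angles = mutual_angles(components)
    return sum(angles) / len(angles)
```

Orthogonal components score pi/2 (maximally diverse), while parallel components score 0, so adding this score to a training objective pushes learned components apart.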
- In this paper, we present an analytical model for a diffusive molecular communication (MC) system with a reversible adsorption receiver in a fluid environment. The time-varying spatial distribution of the information molecules under the reversible adsorption and desorption reaction at the surface of a bio-receiver is analytically characterized. Based on the spatial distribution, we derive the number of newly-adsorbed information molecules expected in any time duration. Importantly, we present a simulation framework for the proposed model that accounts for the diffusion and reversible reaction. Simulation results show the accuracy of our derived expressions, and demonstrate the positive effect of the adsorption rate and the negative effect of the desorption rate on the net number of newly-adsorbed information molecules expected. Moreover, our analytical results reduce to the special case of an absorbing receiver.
- Nov 24 2015 cs.LG arXiv:1511.07110v1 Recently, diversity-inducing regularization methods for latent variable models (LVMs), which encourage the components in LVMs to be diverse, have been studied to address several issues involved in latent variable modeling: (1) how to capture long-tail patterns underlying data; (2) how to reduce model complexity without sacrificing expressivity; (3) how to improve the interpretability of learned patterns. While the effectiveness of diversity-inducing regularizers such as the mutual angular regularizer has been demonstrated empirically, a rigorous theoretical analysis of them is still missing. In this paper, we aim to bridge this gap and analyze how the mutual angular regularizer (MAR) affects the generalization performance of supervised LVMs. We use a neural network (NN) as a model instance to carry out the study, and the analysis shows that increasing the diversity of hidden units in an NN reduces estimation error and increases approximation error. In addition to the theoretical analysis, we also present an empirical study which demonstrates that the MAR can greatly improve the performance of NN, and the empirical observations are in accordance with the theoretical analysis.
- Oct 28 2015 cs.DS arXiv:1510.07758v1 We consider the following basic problem in phylogenetic tree construction. Let $\mathcal{P} = \{T_1, \ldots, T_k\}$ be a collection of rooted phylogenetic trees over various subsets of a set of species. The tree compatibility problem asks whether there is a tree $T$ with the following property: for each $i \in \{1, \dots, k\}$, $T_i$ can be obtained from the restriction of $T$ to the species set of $T_i$ by contracting zero or more edges. If such a tree $T$ exists, we say that $\mathcal{P}$ is compatible. We give a $\tilde{O}(M_\mathcal{P})$ algorithm for the tree compatibility problem, where $M_\mathcal{P}$ is the total number of nodes and edges in $\mathcal{P}$. Unlike previous algorithms for this problem, the running time of our method does not depend on the degrees of the nodes in the input trees. Thus, it is equally fast on highly resolved and highly unresolved trees.
- Oct 22 2015 cs.AI arXiv:1510.06153v1 In this project we outline a modularized, scalable system for comparing Amazon products in an interactive and informative way using efficient latent variable models and dynamic visualization. We demonstrate how our system can build on the structure and rich review information of Amazon products in order to provide a fast, multifaceted, and intuitive comparison. By providing a condensed per-topic comparison visualization to the user, we are able to display aggregate information from the entire set of reviews while providing an interface that is at least as compact as the "most helpful reviews" currently displayed by Amazon, yet far more informative.
- Sep 17 2015 cs.CV arXiv:1509.04874v3 How well can a single fully convolutional neural network (FCN) perform at object detection? We introduce DenseBox, a unified end-to-end FCN framework that directly predicts bounding boxes and object class confidences through all locations and scales of an image. Our contribution is two-fold. First, we show that a single FCN, if designed and optimized carefully, can detect multiple different objects extremely accurately and efficiently. Second, we show that when incorporating landmark localization during multi-task learning, DenseBox further improves object detection accuracy. We present experimental results on public benchmark datasets, including MALF face detection and KITTI car detection, which indicate that DenseBox is the state-of-the-art system for detecting challenging objects such as faces and cars.
- Sep 14 2015 cs.LO arXiv:1509.03391v1 For the model of probabilistic labelled transition systems that allow for the co-existence of nondeterminism and probabilities, we present two notions of bisimulation metrics: one is state-based and the other is distribution-based. We provide a sound and complete modal characterisation for each of them, using real-valued modal logics based on the Hennessy-Milner logic. The logic for characterising the state-based metric is much simpler than an earlier logic by Desharnais et al. as it uses only two non-expansive operators rather than the general class of non-expansive operators.
- This paper exploits the potential of physical layer security in massive multiple-input multiple-output (MIMO) aided two-tier heterogeneous networks (HetNets). We focus on the downlink secure transmission in the presence of multiple eavesdroppers. We first address the impact of massive MIMO on the maximum-receive-power-based user association. We then derive tractable upper bound expressions for the secrecy outage probability of a HetNets user. We show that the implementation of massive MIMO significantly improves the secrecy performance, which indicates that physical layer security could be a promising solution for safeguarding massive MIMO HetNets. Furthermore, we show that the secrecy outage probability of a HetNets user first degrades and then improves as the density of PBSs increases.
- We present an adaptive multi-GPU Exchange Monte Carlo method designed for the simulation of the 3D Random Field Model. The algorithm design is based on a two-level parallelization scheme that allows the method to scale its performance in the presence of faster GPUs as well as multiple GPUs. The set of temperatures is adapted according to the exchange rate observed from short trial runs, leading to an increased exchange rate in zones where the exchange process is sporadic. Performance results show that parallel tempering is an ideal strategy for GPU implementation, running one to two orders of magnitude faster than a single-core CPU version, with multi-GPU scaling being approximately $99\%$ efficient. The results obtained extend the possibilities of simulation to sizes of $L = 32, 64$ for a workstation with two GPUs.
- As a generalization of classical percolation, clique percolation focuses on the connection of cliques in a graph, where two $k$-cliques are connected if they share at least $l<k$ vertices. In this paper, we develop a theoretical approach to study clique percolation in Erdős-Rényi graphs, which gives not only the exact solution of the critical point, but also the corresponding order parameter. Based on this, we prove theoretically that the fraction $\psi$ of cliques in the giant clique cluster always undergoes a continuous phase transition, as in classical percolation. However, the fraction $\phi$ of vertices in the giant clique cluster undergoes a step-function-like discontinuous phase transition in the thermodynamic limit for $l>1$ and a continuous phase transition for $l=1$. More interestingly, our analysis shows that at the critical point, the order parameter $\phi_c$ for $l>1$ is neither $0$ nor $1$, but a constant depending on $k$ and $l$. All these theoretical findings are in agreement with simulation results, which gives theoretical support and clarification to previous simulation studies of clique percolation.
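The connection rule stated above (two $k$-cliques connected when they share at least $l$ vertices) can be illustrated with a brute-force sketch; this is only a small enumeration demo, not the paper's analytical approach:

```python
from itertools import combinations

def k_cliques(adj, k):
    """Enumerate all k-cliques of a graph given as an adjacency dict of sets."""
    nodes = sorted(adj)
    return [c for c in combinations(nodes, k)
            if all(v in adj[u] for u, v in combinations(c, 2))]

def clique_clusters(adj, k, l):
    """Union-find over k-cliques, merging two cliques when they share >= l vertices."""
    cliques = k_cliques(adj, k)
    parent = list(range(len(cliques)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in combinations(range(len(cliques)), 2):
        if len(set(cliques[i]) & set(cliques[j])) >= l:
            parent[find(i)] = find(j)
    groups = {}
    for i, c in enumerate(cliques):
        groups.setdefault(find(i), []).append(c)
    return list(groups.values())

# Two triangles sharing an edge (l = 2) form a single cluster of 3-cliques.
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(clique_clusters(adj, k=3, l=2))  # one cluster containing both triangles
```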
- Jul 30 2015 cs.GT arXiv:1507.07966v1 Quantization has become a new way to study classical game theory since quantum strategies and quantum games were proposed. In previous studies, many typical game models, such as the prisoner's dilemma, the battle of the sexes, and the Hawk-Dove game, have been investigated using quantization approaches. In this paper, several game models of opinion formation are quantized based on the Marinatto-Weber quantum game scheme, a frequently used scheme for converting classical games to quantum versions. Our results show that quantization can fascinatingly change the properties of some classical opinion formation game models, generating win-win outcomes.
- Opinion dynamics, which aims to understand the evolution of collective behavior through various interaction mechanisms of opinions, represents one of the greatest challenges in natural and social science. To elucidate this issue, the binary opinion model provides a useful framework, in which each agent independently takes one of two opinions. Inspired by realistic observations, here we propose two basic interaction mechanisms for the binary opinion model: one is the so-called BSO model, in which players benefit from holding the same opinion; the other is the BDO model, in which players benefit from taking different opinions. In terms of these two basic models, the combined effect of opinion preference and equivocators on the evolution of binary opinions is studied under the framework of evolutionary game theory (EGT), where the replicator equation (RE) is employed to mimic the evolution of opinions. By means of numerous simulations, we show the theoretical equilibrium states of binary opinion dynamics, and mathematically analyze the stability of each equilibrium state as well.
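The replicator equation mentioned above has a standard two-strategy form, $\dot{x} = x(f_A - \bar{f})$ with $\bar{f}$ the population-average payoff. A minimal Euler-integration sketch; the BSO-style coordination payoffs used below are an illustrative assumption, not the paper's exact parameterization:

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the two-strategy replicator equation
    x' = x * (f_A - f_bar), with fitnesses from a 2x2 payoff matrix."""
    f_a = payoff[0][0] * x + payoff[0][1] * (1 - x)
    f_b = payoff[1][0] * x + payoff[1][1] * (1 - x)
    f_bar = x * f_a + (1 - x) * f_b
    return x + dt * x * (f_a - f_bar)

# BSO-like coordination payoffs: players benefit only from matching opinions.
payoff = [[1, 0], [0, 1]]
x = 0.6                      # initial fraction holding opinion A
for _ in range(2000):
    x = replicator_step(x, payoff)
print(round(x, 3))           # the majority opinion A takes over (x near 1)
```

Starting above the unstable mixed equilibrium $x = 0.5$, the dynamics converge to the all-A state, matching the intuition that coordination rewards amplify an initial majority.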
- Jul 30 2015 cs.GT arXiv:1507.07951v1 Quantum game theory is a new interdisciplinary field between game theory and physical research. In this paper, we extend the classical inspection game into a quantum game version by quantizing the strategy space and introducing entanglement between players. Our results show that the quantum inspection game has various Nash equilibria depending on the initial quantum state of the game. It is also shown that quantization can help each player increase his own payoff, yet fails to bring a Pareto improvement for the collective payoff in the quantum inspection game.
- We obtain new nonexistence results for generalized bent functions from $\Z^n_q$ to $\Z_q$ (called type $[n,q]$) in the case that there exist cyclotomic integers in $\Z[\zeta_{q}]$ with absolute value $q^{\frac{n}{2}}$. This result generalizes the two previously known scattered nonexistence results, $[n,q]=[1,2\times7]$ of Pei \cite{Pei} and $[3,2\times 23^e]$ of Jiang-Deng \cite{J-D}, to a broader class. In the last section, we remark that this method also applies to GBFs from $\Z^n_2$ to $\Z_m$.
- Jun 25 2015 cs.CV arXiv:1506.07310v4 Face recognition has been studied for many decades. As opposed to traditional hand-crafted features such as LBP and HOG, much more sophisticated features can be learned automatically by deep learning methods in a data-driven way. In this paper, we propose a two-stage approach that combines a multi-patch deep CNN and deep metric learning, which extracts low-dimensional but very discriminative features for face verification and recognition. Experiments show that this method outperforms other state-of-the-art methods on the LFW dataset, achieving 99.77% pair-wise verification accuracy and significantly better accuracy under two other, more practical protocols. This paper also discusses the importance of data size and the number of patches, showing a clear path to practical high-performance face recognition systems in the real world.
- Jun 09 2015 cs.CV arXiv:1506.02211v1 Text image super-resolution is a challenging yet open research problem in the computer vision community. In particular, low-resolution images hamper the performance of typical optical character recognition (OCR) systems. In this article, we summarize our entry to the ICDAR2015 Competition on Text Image Super-Resolution. Experiments are based on the provided ICDAR2015 TextSR dataset and the released Tesseract-OCR 3.02 system. We report that our winning entry of text image super-resolution framework has largely improved the OCR performance with low-resolution images used as input, reaching an OCR accuracy score of 77.19%, which is comparable with that of using the original high-resolution images 78.80%.
- Apr 28 2015 cs.CV arXiv:1504.06993v1 Lossy compression introduces complex compression artifacts, particularly blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restore sharpened images that are accompanied by ringing effects. Inspired by deep convolutional networks (DCN) for super-resolution, we formulate a compact and efficient network for seamless attenuation of different compression artifacts. We also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network. Following a similar "easy to hard" idea, we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low-level vision problems. Our method outperforms the state of the art both on benchmark datasets and in a real-world use case (i.e. Twitter). In addition, we show that our method can be applied as pre-processing to facilitate other low-level vision routines when they take compressed images as input.
- Mar 02 2015 cs.GT arXiv:1502.07823v3 In this paper, we consider permutation manipulations by any subset of women in the Gale-Shapley algorithm. This paper is motivated by the college admissions process in China. Our results also answer Gusfield and Irving's open problem on what can be achieved by permutation manipulations. We present an efficient algorithm to find a strategy profile such that the induced matching is stable and Pareto-optimal while the strategy profile itself is inconspicuous. Surprisingly, we show that such a strategy profile actually forms a Nash equilibrium of the manipulation game. We also show that a strong Nash equilibrium or a super-strong Nash equilibrium does not always exist in general and that it is NP-hard to check the existence of these equilibria. We consider alternative notions of strong and super-strong Nash equilibria, under which we characterize the super-strong Nash equilibria by Pareto-optimal strategy profiles. In the end, we show that it is NP-complete to find a manipulation that is strictly better for all members of the coalition. This result demonstrates a sharp contrast between weakly better-off outcomes and strictly better-off outcomes.
- Feb 26 2015 cs.AI arXiv:1502.06956v1 Dempster-Shafer evidence theory is an efficient mathematical tool to deal with uncertain information. In that theory, the basic probability assignment (BPA) is the basic element for the expression and inference of uncertainty. Decision-making based on BPAs is still an open issue in Dempster-Shafer evidence theory. In this paper, a novel approach for transforming basic probability assignments to probabilities is proposed based on Deng entropy, a new measure of the uncertainty of a BPA. The principle of the proposed method is to minimize the difference between the uncertainties of the given BPA and the obtained probability distribution. Numerical examples are given to illustrate the proposed approach.
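Deng entropy, the uncertainty measure named above, has the closed form $E_d(m) = -\sum_A m(A)\log_2\big(m(A)/(2^{|A|}-1)\big)$, which reduces to Shannon entropy when all focal elements are singletons. A minimal sketch of that measure (the transformation minimizing the uncertainty difference is not specified in the abstract and is omitted):

```python
from math import log2

def deng_entropy(bpa):
    """Deng entropy of a BPA given as {frozenset focal element: mass}:
    E_d = -sum m(A) * log2(m(A) / (2^|A| - 1))."""
    return -sum(m * log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

# With singleton focal elements, Deng entropy equals Shannon entropy:
print(deng_entropy({frozenset('a'): 0.5, frozenset('b'): 0.5}))  # 1.0 bit
# A composite focal element carries extra uncertainty: log2(3) for m({a,b}) = 1.
print(deng_entropy({frozenset('ab'): 1.0}))
```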
- Feb 04 2015 cs.SI physics.soc-ph arXiv:1502.00780v1 Measuring the similarity of nodes in complex networks has attracted the interest of many researchers. In this paper, a new method based on degree centrality and relative entropy is proposed to measure the similarity of nodes in complex networks. The results show that nodes sharing a common structural property always have high similarity to other nodes, whereas highly influential nodes and marginal nodes both have low similarity to other nodes. These results indicate that the proposed method is a useful and reasonable way to measure node similarity in complex networks.
- Feb 03 2015 cs.SI arXiv:1502.00111v1 The local structure entropy is a recently proposed method for identifying influential nodes in complex networks. In this paper, a new form of the local structure entropy is proposed based on the Tsallis entropy. The value of the entropic index $q$ influences the properties of the local structure entropy: when $q = 0$, the nonextensive local structure entropy degenerates to a new form of degree centrality, and when $q = 1$, it degenerates to the existing form of the local structure entropy. We also find a nonextensive threshold value: when $q$ exceeds this threshold, changing $q$ no longer influences the properties of the local structure entropy, and different complex networks have different threshold values. The results show that the new nonextensive local structure entropy is a generalisation of the local structure entropy, and is more reasonable and useful than the existing one.
- Jan 27 2015 cs.SI arXiv:1501.06042v1 How complex a complex network is has attracted the attention of many researchers, and entropy is a useful way to quantify this complexity. In this paper, a new method based on the Tsallis entropy is proposed to describe the complexity of complex networks. The results show that this complexity is decided not only by the structural properties of the network but also by the relationships between nodes; in other words, which kinds of nodes are chosen as the main part of the network influences the value of the entropy. The value of $q$ in the Tsallis entropy is used to decide which kinds of nodes are treated as the main part of the network. The proposed Tsallis entropy of complex networks is thus a generalised method for describing network properties.
- Jan 26 2015 cs.SI physics.soc-ph arXiv:1501.05714v1 Research on networks of networks (NONs) focuses on the properties of $n$ interdependent networks, which are ubiquitous in the real world. Identifying the influential nodes in a network of networks is of theoretical and practical significance, but it is hard to describe the structural properties of a NON with traditional methods. In this paper, a new method based on evidence theory is proposed to identify the influential nodes in a network of networks. The proposed method fuses different kinds of relationships between the network components to construct a comprehensive similarity network; the nodes with large similarity values are the influential nodes in the NON. The experimental results illustrate that the proposed method is reasonable and significant.
- Jan 26 2015 cs.SY arXiv:1501.05705v1 For the design and implementation of engineering systems, performing model-based analysis can disclose potential safety issues at an early stage. The analysis of hybrid system models is in general difficult due to the intrinsic complexity of hybrid dynamics. In this paper, a simulation-based approach to formal verification of hybrid systems is presented.
- Jan 06 2015 cs.CV arXiv:1501.00901v2 Learning to recognize pedestrian attributes at far distance is a challenging problem in visual surveillance, since face and body close-shots are hardly available; instead, only far-view image frames of pedestrians are given. In this study, we present an alternative approach that exploits the context of neighboring pedestrian images for improved attribute inference, compared to the conventional SVM-based method. In addition, we conduct extensive experiments to evaluate the informativeness of background and foreground features for attribute recognition. Experiments are based on our newly released pedestrian attribute dataset, which is by far the largest and most diverse of its kind.
- Dec 15 2014 cs.SI physics.soc-ph arXiv:1412.3910v1 Identifying influential nodes in complex networks is of theoretical and practical significance, and many methods have been proposed for this task. In this paper, a local structure entropy, based on degree centrality and statistical mechanics, is proposed to identify the influential nodes in a complex network. In this definition, each node has a local network, and the local structure entropy of a node equals the structure entropy of its local network. The main idea is to use the influence of the local network as a proxy for the node's influence on the whole network. The influential nodes identified by the local structure entropy are the intermediate nodes of the network, i.e., those connecting nodes of high degree. We use the Susceptible-Infective (SI) model, with each node in turn serving as the source of infection, to evaluate the identified influential nodes: the larger the percentage of infective nodes in the network, the more important the source node is to the whole network. Simulations on four real networks show that the proposed method is effective and reasonable for identifying influential nodes in complex networks.
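The abstract does not give the exact formula for the local structure entropy, so the sketch below is a hypothetical stand-in: it scores a node by the Shannon entropy of the degree shares inside its ego network (the node plus its neighbours), which illustrates the stated idea of letting the local network proxy for the node's global influence:

```python
from math import log

def local_structure_entropy(adj, v):
    """Hypothetical sketch (the paper's exact formula may differ):
    Shannon entropy of the normalized degrees inside the ego network of v."""
    ego = {v} | adj[v]                              # v plus its neighbours
    degs = {u: len(adj[u] & ego) for u in ego}      # degrees within the ego net
    total = sum(degs.values())
    return -sum(d / total * log(d / total) for d in degs.values() if d)

# On a 4-node star, the hub's ego network is richer than a leaf's.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(local_structure_entropy(adj, 0) > local_structure_entropy(adj, 1))  # True
```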
- Nov 25 2014 cs.SI arXiv:1411.6082v1 The structure entropy is one of the most important parameters for describing the structural properties of complex networks. Most of the existing structure entropies are based on degree or betweenness centrality. In order to describe the structural properties of complex networks more reasonably, a new structure entropy of complex networks based on Tsallis nonextensive statistical mechanics is proposed in this paper. The influence of both the degree and the betweenness centrality on the structural properties is combined in the proposed structure entropy. Compared with the existing structure entropies, the proposed structure entropy is more reasonable for describing the structural properties of complex networks in some situations.
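The Tsallis functional underlying this family of entropies is standard: $S_q = (1 - \sum_i p_i^q)/(q-1)$, recovering the Shannon entropy as $q \to 1$. A minimal sketch (how the distribution $p_i$ is built from degree and betweenness is the paper's contribution and is not reproduced here):

```python
from math import log

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1); the q -> 1 limit
    recovers the Shannon entropy -sum p_i ln p_i."""
    if abs(q - 1) < 1e-12:
        return -sum(x * log(x) for x in p if x > 0)
    return (1 - sum(x ** q for x in p)) / (q - 1)

p = [0.5, 0.25, 0.25]
print(tsallis_entropy(p, 2))   # 1 - (0.25 + 0.0625 + 0.0625) = 0.625
print(abs(tsallis_entropy(p, 1) - tsallis_entropy(p, 1.001)) < 1e-2)  # True
```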
- Molecular communication is set to play an important role in the design of complex biological and chemical systems. An important class of molecular communication systems is based on the timing channel, where information is encoded in the delay of the transmitted molecule---a synchronous approach. At present, a widely used modeling assumption is the perfect synchronization between the transmitter and the receiver. Unfortunately, this assumption is unlikely to hold in most practical molecular systems. To remedy this, we introduce a clock into the model---leading to the molecular timing channel with synchronization error. To quantify the behavior of this new system, we derive upper and lower bounds on the variance-constrained capacity, which we view as the step between the mean-delay and the peak-delay constrained capacity. By numerically evaluating our bounds, we obtain a key practical insight: the drift velocity of the clock links does not need to be significantly larger than the drift velocity of the information link, in order to achieve the variance-constrained capacity with perfect synchronization.
- Arterial blood pressure is a key vital sign for the health of the human body. As such, accurate and reproducible measurement techniques are necessary for successful diagnosis. Blood pressure measurement is an example of molecular communication in regulated biological systems. In general, communication in regulated biological systems is difficult because the act of encoding information about the state of the system can corrupt the message itself. In this paper, we propose three strategies to cope with this problem to facilitate reliable molecular communication links: communicate from the outskirts; build it in; and leave a small footprint. Our strategies---inspired by communication in natural biological systems---provide a classification to guide the design of molecular communication mechanisms in synthetic biological systems. We illustrate our classification using examples of the first two strategies in natural systems. We then consider a molecular link within a model based on the Michaelis-Menten kinetics. In particular, we compute the capacity of the link, which reveals the potential of communicating using our leave a small footprint strategy. This provides a way of identifying whether the molecular link can be improved without affecting the function, and a guide to the design of synthetic biological systems.
- Jul 02 2014 cs.SI physics.soc-ph arXiv:1407.0097v1 The structure entropy is an important index for illuminating the structural properties of a complex network. Most of the existing structure entropies are based on the degree distribution of the network, but a degree-based structure entropy cannot illustrate the structural properties of weighted networks. In order to study the structural properties of weighted networks, a new structure entropy of complex networks based on betweenness is proposed in this paper. Compared with the existing structure entropies, the proposed method is more reasonable for describing the structural properties of complex weighted networks.
- We give an algorithm for computing the factor ring of a given ideal in a Dedekind domain of finite rank, which runs in deterministic polynomial time. We provide two applications of the algorithm: deciding whether a given ideal is prime or a prime power. The main algorithm is based on the basis representation of finite rings, which is computed via Hermite and Smith normal forms.
- Jun 10 2014 cs.AI arXiv:1406.2128v1 The user equilibrium in the traffic assignment problem is based on the fact that travelers choose the minimum-cost path between every origin-destination pair, and on the assumption that such behavior leads to an equilibrium of the traffic network. In this paper, we consider this problem when the traffic network links have fuzzy costs. A Physarum-type algorithm is developed to unify the Physarum network and the traffic network, taking full advantage of Physarum polycephalum's adaptivity in network design to solve the user equilibrium problem. Finally, some experiments are used to test the performance of this method. The results demonstrate that our approach is competitive when compared with other existing algorithms.
- Jun 09 2014 cs.SI physics.soc-ph arXiv:1406.1695v1 Fractal and self-similarity properties are revealed in many complex networks. In order to show the influence of different parts of a complex network on the information dimension, we propose a new information dimension based on the Tsallis entropy, namely the Tsallis information dimension. The Tsallis information dimension can reveal the fractal property from different perspectives by setting different values of $q$.
- Jun 09 2014 cs.AI arXiv:1406.1697v1 Decision making is still an open issue in the application of Dempster-Shafer evidence theory, and many works have addressed it. In the transferable belief model (TBM), pignistic probabilities based on the basic probability assignments are used for decision making. In this paper, a multiscale probability transformation of basic probability assignments based on the belief function and the plausibility function is proposed, which is a generalization of the pignistic probability transformation. In the multiscale probability function, a factor $q$ based on the Tsallis entropy is used to diversify the multiscale probabilities. An example shows that the multiscale probability transformation is more reasonable for decision making.
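The pignistic transformation that the multiscale version generalizes has the standard form $\mathrm{BetP}(x) = \sum_{A \ni x} m(A)/|A|$ (assuming $m(\emptyset) = 0$). A minimal sketch of that baseline (the multiscale factor $q$ is the paper's contribution and is not reproduced here):

```python
def pignistic(bpa):
    """Pignistic transformation: BetP(x) = sum over focal sets A containing x
    of m(A) / |A|, for a BPA given as {frozenset focal element: mass}."""
    betp = {}
    for A, m in bpa.items():
        for x in A:
            betp[x] = betp.get(x, 0.0) + m / len(A)
    return betp

# Mass on the composite set {a, b} is split evenly between a and b.
bpa = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
print(pignistic(bpa))  # BetP: a -> 0.8, b -> 0.2 (up to float rounding)
```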
- As an equilibrium refinement of the Nash equilibrium, the evolutionarily stable strategy (ESS) is a key concept in evolutionary game theory and has attracted growing interest. An ESS can be either a pure strategy or a mixed strategy. Even though randomness is allowed in a mixed strategy, the selection probabilities of the pure strategies in a mixed strategy may fluctuate due to the impact of many factors, and this fluctuation leads to more uncertainty. In this paper, such uncertainty involved in mixed strategies is further taken into consideration: a belief strategy is proposed in terms of Dempster-Shafer evidence theory. Furthermore, based on the proposed belief strategy, a belief-based ESS is developed. The belief strategy and belief-based ESS can reduce to the mixed strategy and mixed ESS, respectively, providing more realistic and powerful tools to describe interactions among agents.
- Jun 03 2014 cs.SI physics.soc-ph arXiv:1406.0379v1 With an increasing emphasis on network security, much more attention has been drawn to the vulnerability of complex networks. Multi-scale evaluation of vulnerability is widely used, since it makes use of the combined powers of the links' betweenness and evaluates vulnerability effectively. However, how to determine the coefficient in the existing multi-scale evaluation model so as to measure the vulnerability of different networks is still an open issue. In this paper, an improved model based on the fractal dimension of complex networks is proposed to obtain a more reasonable evaluation of vulnerability with more physical significance. Our proposed method characterizes not only the structure and basic physical properties of networks but also their covering ability, which is related to network vulnerability. Numerical examples and real applications are used to illustrate the efficiency of our proposed method.
- May 14 2014 cs.AI arXiv:1405.3175v1 Efficient modeling of uncertain information in the real world is still an open issue, and Dempster-Shafer evidence theory is one of the most commonly used methods. However, Dempster-Shafer evidence theory assumes that the hypotheses in the frame of discernment are exclusive of each other. This condition can be violated in real applications, especially in linguistic decision making, since linguistic variables are essentially not exclusive of each other. In this paper, a new theory, called D numbers theory (DNT), is systematically developed to address this issue. The combination rule of two D numbers is presented, and a coefficient is defined to measure the degree of exclusiveness among the hypotheses in the frame of discernment. If the exclusive coefficient is one, meaning that the hypotheses in the frame of discernment are totally exclusive of each other, the D-number combination degenerates to the classical Dempster combination rule. Finally, a linguistic-variable transformation of D numbers is presented for decision making. A numerical example on linguistic evidential decision making is used to illustrate the efficiency of the proposed D numbers theory.
- Apr 21 2014 cs.AI arXiv:1404.4789v1 Dempster-Shafer evidence theory is a powerful tool for information fusion, but when the evidence is highly conflicting, counter-intuitive results can be produced. To address this open issue, a new method based on Jousselme's evidence distance and the Hausdorff distance is proposed: the weight of each piece of evidence is computed and used to preprocess the original evidence, generating new evidence, and Dempster's combination rule is then used to combine the new evidence. Compared with existing methods, the proposed method is efficient.
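Dempster's combination rule, the final step above, is standard: masses of intersecting focal sets are multiplied and renormalized by $1 - K$, where $K$ is the total mass of conflicting (empty) intersections. A minimal sketch of that rule (the distance-based weighting step is not reproduced here):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: m(A) = sum over B∩C=A of m1(B)*m2(C) / (1 - K),
    where K is the total mass landing on empty intersections (conflict)."""
    combined, conflict = {}, 0.0
    for B, b in m1.items():
        for C, c in m2.items():
            A = B & C
            if A:
                combined[A] = combined.get(A, 0.0) + b * c
            else:
                conflict += b * c
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence cannot be combined")
    return {A: v / (1 - conflict) for A, v in combined.items()}

m1 = {frozenset('a'): 0.9, frozenset('b'): 0.1}
m2 = {frozenset('a'): 0.8, frozenset('b'): 0.2}
print(dempster_combine(m1, m2))  # mass concentrates on 'a'
```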
- Apr 21 2014 cs.AI arXiv:1404.4801v1 Conflict management is still an open issue in the application of Dempster-Shafer evidence theory, and many works have been presented to address it. In this paper, a new theory, called generalized evidence theory (GET), is proposed. Compared with existing methods, GET assumes that the general situation is an open world, owing to uncertainty and incomplete knowledge, and conflicting evidence is handled within this framework. It is shown that the new theory can explain and deal with conflicting evidence in a more reasonable way.
- Apr 15 2014 cs.AI arXiv:1404.3370v1 Dempster-Shafer theory is widely applied in uncertainty modelling and knowledge reasoning because of its ability to express uncertain information. A distance between two basic probability assignments (BPAs) provides a measure of performance for identification algorithms based on Dempster-Shafer evidential theory. However, some conditions limit the practical application of Dempster-Shafer theory, such as the exclusiveness hypothesis and the completeness constraint. To overcome these shortcomings, a novel theory called D numbers theory is proposed, together with a distance function that measures the distance between two D numbers. This distance function is a generalization of the distance between two BPAs: it inherits the advantages of Dempster-Shafer theory and strengthens the capability of uncertainty modeling. An illustrative case demonstrates the effectiveness of the proposed function.
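The BPA distance that the D-number distance generalizes is commonly the Jousselme distance, d(m1, m2) = sqrt(0.5 (m1 − m2)ᵀ D (m1 − m2)) with D(A, B) = |A ∩ B| / |A ∪ B|. A minimal sketch of that BPA case (the D-number generalization itself is paper-specific and not reproduced here):

```python
import math

def jousselme_distance(m1, m2):
    """Jousselme distance between two BPAs.

    m1, m2: dicts mapping frozenset focal elements to masses.
    Uses the Jaccard index |A & B| / |A | B| as the similarity matrix D.
    """
    focal = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    diff = [m1.get(a, 0.0) - m2.get(a, 0.0) for a in focal]
    quad = 0.0
    for i, a in enumerate(focal):
        for j, b in enumerate(focal):
            jac = len(a & b) / len(a | b)  # D(A, B)
            quad += diff[i] * jac * diff[j]
    return math.sqrt(0.5 * quad)
```

For two BPAs concentrated on disjoint singletons the distance is 1, the maximum; for identical BPAs it is 0.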
- Efficient modeling of uncertain information plays an important role in estimating the risk of contaminant intrusion in water distribution networks. Dempster-Shafer evidence theory is one of the most commonly used methods. However, it rests on several hypotheses, including the mutual exclusivity of the elements in the frame of discernment, which may not be consistent with the real world. In this paper, based on a more effective representation of uncertainty called D numbers, a new method that allows the elements in the frame of discernment to be non-exclusive is proposed. To demonstrate its efficiency, we apply the proposed method to water distribution networks to estimate the risk of contaminant intrusion.
- Apr 03 2014 cs.SI physics.soc-ph arXiv:1404.0530v1 Recently, the self-similarity of complex networks has attracted much attention, yet the fractal dimension of complex networks remains an open issue. Hub repulsion plays an important role in fractal topologies. This paper models the repulsion among the nodes of a complex network when calculating its fractal dimension, adopting Coulomb's law to represent the repulsion between two nodes quantitatively, and proposes a new method to calculate the fractal dimension of complex networks. The Sierpinski triangle network and several real complex networks are investigated, and the results show that the new self-similarity model of complex networks is reasonable and efficient.
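The repulsion-based covering model is specific to the paper, but in box-covering approaches the fractal dimension is ultimately read off from the scaling N_B(l_B) ∝ l_B^(−d_B), i.e. the slope of log N_B versus log l_B. A minimal slope-fit sketch, assuming the box counts have already been obtained by whatever covering procedure is used:

```python
import math

def fractal_dimension(box_sizes, box_counts):
    """Estimate d_B from the scaling N_B(l_B) ~ l_B ** (-d_B).

    box_sizes: box diameters l_B used in the covering.
    box_counts: minimum number of boxes N_B needed at each size.
    Returns -slope of the least-squares fit in log-log coordinates.
    """
    xs = [math.log(l) for l in box_sizes]
    ys = [math.log(n) for n in box_counts]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

On data following an exact inverse-square law (e.g. N_B = 64 / l_B²) the estimate recovers d_B = 2.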