results for au:Wu_G in:cs

- The solution of large-scale Sylvester matrix equations plays an important role in control and in large-scale scientific computation. A popular approach is the global GMRES algorithm. In this work, we first consider the global GMRES algorithm with a weighting strategy, and propose some new residual-based schemes for updating the weighting matrix. Because of the growth of memory requirements and computational cost, it is necessary to restart the algorithm efficiently. Although deflation strategies are popular for the solution of large linear systems and large eigenvalue problems, to the best of our knowledge little work has been done on applying deflation to the global GMRES algorithm for large Sylvester matrix equations. We then consider how to combine the weighting strategy with deflated restarting, and propose a weighted global GMRES algorithm with deflation for solving large Sylvester matrix equations. Theoretical analysis is given to show why the new algorithm works effectively. Further, unlike the weighted GMRES-DR presented in [M. Embree, R. B. Morgan and H. V. Nguyen, Weighted inner products for GMRES and GMRES-DR, (2017), arXiv:1607.00255v2], we show that in our new algorithm there is no need to change the inner product with respect to a diagonal matrix to one with respect to a non-diagonal matrix, and our scheme is much cheaper. Numerical examples illustrate the numerical behavior of the proposed algorithms.
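The weighting strategy above rests on a W-weighted analogue of the Frobenius (global) inner product. The sketch below is a minimal illustration of that inner product, not the paper's algorithm; in particular, the residual-based choice of diagonal weights in `residual_weights` is a hypothetical variant for demonstration only.

```python
import numpy as np

def weighted_global_inner(X, Y, w):
    """Weighted global (Frobenius-type) inner product <X, Y>_W = tr(X^T diag(w) Y)."""
    return np.trace(X.T @ (w[:, None] * Y))

def weighted_global_norm(X, w):
    return np.sqrt(weighted_global_inner(X, X, w))

def residual_weights(R):
    """Hypothetical residual-based diagonal weighting: weights from the row
    norms of the current residual block, normalized so that sum(w) = n."""
    r = np.linalg.norm(R, axis=1)
    r = np.maximum(r, 1e-15)          # guard against zero rows
    return len(r) * r / r.sum()

rng = np.random.default_rng(0)
R = rng.standard_normal((6, 4))       # a mock residual block
w = residual_weights(R)
X, Y = rng.standard_normal((6, 4)), rng.standard_normal((6, 4))
```

With `w` equal to the all-ones vector, the weighted inner product reduces to the standard trace inner product, which is the unweighted global GMRES setting.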
- We propose a new generalization bound for domain adaptation when there are multiple source domains with labeled instances and one target domain with unlabeled instances. The new bound has an interesting interpretation and reduces to an existing bound when there is only one source domain. Compared with existing bounds, the new bound does not require expert knowledge about the target distribution or an optimal combination rule for the multiple source domains. Interestingly, our theory also leads to an efficient implementation using adversarial neural networks: we show how to interpret it as learning feature representations that are invariant to the multiple domain shifts while still being discriminative for the learning task. To this end, we propose two models, both of which we call multisource domain adversarial networks (MDANs): the first model directly optimizes our bound, while the second model is a smoothed approximation of the first, leading to a more data-efficient and task-adaptive model. The optimization tasks of both models are minimax saddle point problems that can be optimized by adversarial training. To demonstrate the effectiveness of MDANs, we conduct extensive experiments showing superior adaptation performance on three real-world datasets: sentiment analysis, digit classification, and vehicle counting.
- Apr 26 2017 cs.LG arXiv:1704.07511v2. Given recent deep learning results that demonstrate the ability to effectively optimize high-dimensional non-convex functions with gradient descent optimization on GPUs, we ask in this paper whether symbolic gradient optimization tools such as Tensorflow can be effective for planning in hybrid (mixed discrete and continuous) nonlinear domains with high-dimensional state and action spaces. To this end, we demonstrate that hybrid planning with Tensorflow and RMSProp gradient descent is competitive with mixed integer linear program (MILP) based optimization on piecewise linear planning domains (where we can compute optimal solutions) and substantially outperforms state-of-the-art interior point methods for nonlinear planning domains. Furthermore, we remark that Tensorflow is highly scalable, converging to a strong policy on a large-scale concurrent domain with a total of 576,000 continuous actions over a horizon of 96 time steps in only 4 minutes. We provide a number of insights that clarify this strong performance, including the observation that, despite long horizons, RMSProp avoids both the vanishing and exploding gradient problems. Together these results suggest a new frontier for highly scalable planning in nonlinear hybrid domains by leveraging GPUs and the power of recent advances in gradient descent with highly optimized toolkits like Tensorflow.
- Differential privacy is a strong privacy notion based on the indistinguishability of the outputs of two neighboring datasets, which represent the two states in which an individual's information is either present in or absent from a dataset. When records are dependent, however, this representation loses its foundation. Motivated by this observation, we introduce a variant of the differential privacy notion based on the influence of an individual's inputs on the outputs. The new notion accurately captures how dependent records weaken the privacy guarantee of differential privacy. Our new privacy notion is compatible with differential privacy. When the individuals are independent, we prove that a mechanism satisfying $\epsilon$-differential privacy is guaranteed to satisfy the new privacy model. When the individuals are dependent, we prove that the group privacy method for achieving differential privacy in the dependent case can also be used to achieve the new privacy model. When the dependence among the sources is weak, there exists a differentially private mechanism that satisfies the new privacy model with a noise magnitude far smaller than that of the group privacy based method. These results imply that the new model inherits (almost) all of the mechanisms of differential privacy. Finally, our new privacy model fits in well with information theory. We prove that if a mechanism satisfies the new privacy notion, the mutual information between one individual and the mechanism's outputs is upper bounded by a small value. This shows that the rationale of our new model is grounded in information theory.
- Mar 20 2017 cs.CV arXiv:1703.05868v2. Understanding traffic density from large-scale web camera (webcam) videos is a challenging problem because such videos have low spatial and temporal resolution, high occlusion, and large perspective variation. To understand traffic density deeply, we explore both deep learning based and optimization based methods. To avoid individual vehicle detection and tracking, both methods map the image into a vehicle density map, one based on rank-constrained regression and the other based on fully convolutional networks (FCN). The regression based method learns different weights for different blocks in the image to increase the degrees of freedom of the weights and embed perspective information. The FCN based method jointly estimates the vehicle density map and vehicle count with a residual learning framework to perform end-to-end dense prediction, allowing arbitrary image resolution and adapting to different vehicle scales and perspectives. We analyze and compare both methods, and use insights from the optimization based method to improve the deep model. Since existing datasets do not cover all the challenges in our work, we collected and labelled a large-scale traffic video dataset containing 60 million frames from 212 webcams. Both methods are extensively evaluated and compared on different counting tasks and datasets. The FCN based method significantly reduces the mean absolute error from 10.99 to 5.31 on the public TRANCOS dataset compared with the state-of-the-art baseline.
- Feb 23 2017 cs.NI arXiv:1702.06872v1. Full-duplex (FD) allows the exchange of data between nodes on the same temporal and spectral resources; however, it introduces self-interference (SI) and additional network interference compared to half-duplex (HD). Power control in FD networks, which is seldom studied in the literature, is a promising way to mitigate the interference and improve the performance of the overall network. In this work, we investigate random and deterministic power control strategies in FD networks, namely constant power control, uniform power control, fractional power control, and an ALOHA-like random on-off power control scheme. Based on the obtained coverage probabilities and their robust approximations, we show that power control provides a remarkable gain in area spectral efficiency (ASE) and energy efficiency (EE), and improves the fairness between the uplink (UL) and downlink (DL) transmissions in FD networks. Moreover, we evaluate the minimum SI cancellation capability required to guarantee the performance of cell-edge users in FD networks. Generally, power control helps to improve the performance of long-distance transmissions in FD networks and to reduce the required SI cancellation capability.
- The construction of permutation trinomials over finite fields has attracted considerable interest recently due to their simple form and additional useful properties. Motivated by some results on the construction of permutation trinomials with Niho exponents, by constructing some new fractional polynomials that permute the set of the $(q+1)$-th roots of unity in $\mathbb F_{q^2}$, we present several classes of permutation trinomials with Niho exponents over $\mathbb F_{q^2}$, where $q=5^k$.
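Whether a given polynomial permutes a small finite field can be verified by brute force, which is a common sanity check in this line of work. The sketch below is a generic checker, not the paper's construction: the field modulus $u^2-2$ over $\mathbb F_5$ and the test exponents are my own choices. The monomial $x^7$ permutes $\mathbb F_{25}$ since $\gcd(7, 24)=1$, while $x^3$ does not since $\gcd(3, 24)=3$.

```python
# Arithmetic in GF(25) = F_5[u]/(u^2 - 2); elements are pairs (a, b) = a + b*u.
# (x^2 - 2 is irreducible over F_5 because 2 is a quadratic non-residue mod 5.)
P = 5

def mul(x, y):
    a, b = x
    c, d = y
    # (a + b u)(c + d u) = ac + 2 b d + (ad + bc) u, using u^2 = 2
    return ((a * c + 2 * b * d) % P, (a * d + b * c) % P)

def power(x, e):
    """Square-and-multiply exponentiation in GF(25)."""
    r = (1, 0)
    while e:
        if e & 1:
            r = mul(r, x)
        x = mul(x, x)
        e >>= 1
    return r

FIELD = [(a, b) for a in range(P) for b in range(P)]

def is_permutation(f):
    """f permutes GF(25) iff its image has full size."""
    return len({f(x) for x in FIELD}) == len(FIELD)
```

The same checker extends directly to trinomials such as `lambda x: add(power(x, s), add(power(x, t), x))` once an `add` for the pair representation is defined; only field size limits its use.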
- Feb 10 2017 cs.CR arXiv:1702.02721v2. Sensitivity-based methods are extensively used to construct differentially private mechanisms. In this paper, we observe that designing a differentially private mechanism can be viewed as finding a randomized mapping between two metric spaces: the metric on the first space can be regarded as a metric about privacy, and the metric on the second space as a metric about utility. We find that sensitivity-based methods are precisely those that use only the utility metric to construct mechanisms. Based on this observation, we design mechanisms based on the privacy metric. Furthermore, we design mechanisms based on the composition of the two metrics. Moreover, we find that most mechanisms, such as the global sensitivity mechanism [1], the Staircase mechanism [2,3], the Ladder mechanism [4] and the K-norm mechanism [5,6], can be viewed as special cases of our mechanisms. Finally, we analyze these mechanisms and apply them to the subgraph counting problem and the linear query problem. The experiments show that our mechanisms (in most cases) give more accurate results than the state-of-the-art mechanisms.
- We study the coexistence problem in a two-tier heterogeneous network (HetNet) with cognitive small cells. In particular, we consider an underlay HetNet, where the cognitive small base station (C-SBS) is allowed to use the frequency bands of the macro cell with an access probability (AP) as long as the C-SBS satisfies a preset interference probability (IP) constraint at the macro users (MUs). To enhance the AP (or transmission opportunity) of the C-SBS, we propose a learning-based algorithm for the C-SBS that exploits the distance information between the macro base station (MBS) and the MUs. Generally, the signal from the MBS to a specific MU contains the distance information between the MBS and that MU. We enable the C-SBS to analyze the MBS signal on a target frequency band and learn the distance between the MBS and the corresponding MU. With the learnt distance information, we calculate an upper bound on the probability that the C-SBS interferes with the MU, and design an AP with a closed-form expression under the IP constraint. Numerical results indicate that the proposed algorithm outperforms existing methods by up to $60\%$ in AP (or transmission opportunity).
- In underlay heterogeneous networks (HetNets), the distance between a macro base station (MBS) and a macro user (MU) is crucial for a small-cell base station (SBS) to control the interference to the MU and achieve coexistence. To obtain this distance, the SBS needs a backhaul link from the macro system, through which the macro system can transmit the distance information to the SBS. However, such a backhaul link may not exist in practical situations, making it challenging for the SBS to obtain the distance. To deal with this issue, we propose a median based (MB) estimator that allows the SBS to obtain the distance between the MBS and the MU without any backhaul link. Numerical results show that the estimation error of the MB estimator can be as small as $4\%$.
- The wireless industry nowadays faces two major challenges: 1) how to support vertical industry applications so as to expand the wireless market and 2) how to further enhance device capability and user experience. In this paper, we propose a technology framework to address these challenges. The proposed framework is based on end-to-end vertical and horizontal slicing, where vertical slicing enables vertical industries and services, and horizontal slicing improves system capacity and user experience. Technology development on vertical slicing already started in late 4G and early 5G and is mostly focused on slicing the core network. We envision this trend continuing with the development of vertical slicing in the radio access network and the air interface. Moving beyond vertical slicing, we propose to horizontally slice the computation and communication resources to form virtual computation platforms for solving the network capacity scaling problem and enhancing device capability and user experience. In this paper, we explain the concepts of vertical and horizontal slicing and illustrate the slicing techniques in the air interface, the radio access network, the core network and the computation platform. This paper aims to initiate discussion on the long-range technology roadmap and spur development of solutions for E2E network slicing in 5G and beyond.
- Negabent functions, as a class of generalized bent functions, have attracted considerable attention recently due to their applications in cryptography and coding theory. In this paper, we consider the construction of negabent functions over finite fields. First, by using the compositional inverses of certain binomial and trinomial permutations, we present several classes of negabent functions of the form $f(x)=\mathrm{Tr}_1^n(\lambda x^{2^k+1})+\mathrm{Tr}_1^n(ux)\mathrm{Tr}_1^n(vx)$, where $\lambda\in \mathbb{F}_{2^n}$, $2\leq k\leq n-1$, $(u,v)\in \mathbb{F}^*_{2^n}\times \mathbb{F}^*_{2^n}$, and $\mathrm{Tr}_1^n(\cdot)$ is the trace function from $\mathbb{F}_{2^n}$ to $\mathbb{F}_{2}$. Second, by using Kloosterman sums, we prove that the condition for the cubic monomials given by Zhou and Qu (Cryptogr. Commun., to appear, DOI 10.1007/s12095-015-0167-0) to be negabent is also necessary. In addition, a conjecture on negabent monomials whose exponents are of Niho type is given.
- Decentralized caching is studied in two-layer networks, where users request contents through intermediate nodes (helpers) from a file server. By placing contents randomly and independently in each node and carefully designing the data delivery, the correlations of the pre-stored contents across layers can be utilized to reduce the transmission rates in the network. A hybrid caching scheme is developed by exploiting the cross-layer storage correlations, the single-layer multicast opportunities from the server (each helper) to the helpers (the attached users), and the cross-layer multicast opportunities from the server to the users. It is observed that, with the hybrid caching scheme, the achievable rate in the first layer is reduced without compromising the achievable rate in the second layer, compared with the state of the art. Furthermore, the achievable rate region is shown to be order-optimal and lies within constant margins of the information-theoretic optimum. In particular, the multiplicative and additive factors are carefully sharpened to $\frac{1}{48}$ and $4$, respectively.
- In cognitive radio networks, the channel gain between primary transceivers, namely the primary channel gain, is crucial for a cognitive transmitter (CT) to control its transmit power and achieve spectrum sharing. Conventionally, the primary channel gain is estimated within the primary system and is thus unavailable at the CT. To deal with this issue, two estimators are proposed that enable the CT to sense primary signals. In particular, by adopting the maximum likelihood (ML) criterion to analyze the received primary signals, an ML estimator is first developed. After demonstrating the high computational complexity of the ML estimator, a median based (MB) estimator with provably low complexity is then proposed. Furthermore, the estimation accuracy of the MB estimator is theoretically characterized. By comparing the ML estimator and the MB estimator in terms of computational complexity as well as estimation accuracy, the advantages and disadvantages of both estimators are revealed. Numerical results show that the estimation errors of the ML estimator and the MB estimator can be as small as $0.6$ dB and $0.7$ dB, respectively.
- Apr 12 2016 cs.CR arXiv:1604.03001v1. How to achieve differential privacy in the distributed setting, where the dataset is distributed among mutually distrustful parties, is an important problem. We consider under what conditions a protocol inherits the differential privacy property of the function it computes. The heart of the problem is the secure multiparty computation of randomized functions. A notion of obliviousness is introduced, which captures the key security problems when computing a randomized function from a deterministic one in the distributed setting. Based on this observation, a sufficient and necessary condition for computing a randomized function from a deterministic one is given. This result can be used not only to determine whether a protocol computing a differentially private function is secure, but also to construct secure ones. We then prove that the differential privacy property of a function is inherited by a protocol computing it if the protocol privately computes the function. A composition theorem for differentially private protocols is also presented. We also construct protocols to generate random variates in the distributed setting, such as uniform random variates and the inversion method. Using these fundamental protocols, we construct protocols for the Gaussian mechanism, the Laplace mechanism and the Exponential mechanism. Importantly, all these protocols satisfy obliviousness and so can be proved secure in a simulation-based manner. We also provide a complexity bound for computing randomized functions in the distributed setting. Finally, to show that our results are fundamental and powerful for multiparty differential privacy, we construct a differentially private empirical risk minimization protocol.
- Matrix exponential discriminant analysis (EDA) is a generalized discriminant analysis method based on the matrix exponential. It can essentially overcome the intrinsic difficulty of the small sample size problem that exists in classical linear discriminant analysis (LDA). However, for high-dimensional data one has to solve a large matrix exponential eigenproblem in this method, and the time complexity is dominated by the computation of exponentials of large matrices. In this paper, we propose two inexact Krylov subspace algorithms for solving the large matrix exponential eigenproblem effectively. The contribution of this work is threefold. First, we consider how to compute matrix exponential-vector products efficiently, which is the key step in the Krylov subspace method. Second, we compare the discriminant analysis criteria of EDA and LDA from a theoretical point of view. Third, we establish a relationship between the accuracy of the approximate eigenvectors and the distances used in the nearest neighbour classifier, and show why the matrix exponential eigenproblem can be solved approximately in practice. Numerical experiments on some real-world databases show the superiority of our new algorithms over many state-of-the-art algorithms for face recognition.
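The key kernel in such Krylov subspace methods is forming $\exp(A)v$ without ever computing $\exp(A)$: project $A$ onto a small Krylov subspace and exponentiate only the projected matrix. The sketch below is a minimal Arnoldi-based illustration of this standard idea for a symmetric $A$, not the inexact algorithms proposed in the paper.

```python
import numpy as np

def expm_multiply_krylov(A, v, m=20):
    """Approximate exp(A) @ v via an m-step Arnoldi (Krylov) projection:
    exp(A) v ~ beta * V_m exp(H_m) e_1, never forming exp(A) itself."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    k = m
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: exact subspace found
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hk = H[:k, :k]
    # Exponentiate the small k x k projected matrix via its eigendecomposition.
    evals, evecs = np.linalg.eig(Hk)
    expH_e1 = (evecs * np.exp(evals)) @ np.linalg.solve(evecs, np.eye(k)[:, 0])
    return beta * (V[:, :k] @ expH_e1.real)

rng = np.random.default_rng(1)
n = 60
B = rng.standard_normal((n, n))
A = (B + B.T) / (2 * np.sqrt(n))          # symmetric, modest spectral radius
v = rng.standard_normal(n)
```

For symmetric `A` the Arnoldi recurrence reduces to the Lanczos three-term recurrence, so `Hk` is tridiagonal and the small eigenproblem is cheap; this is why the method scales to the large scatter-matrix exponentials arising in EDA.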
- Control-data separation architecture (CDSA) is a more efficient architecture for overcoming the signaling overhead issue than conventional cellular networks, especially for highly bursty traffic such as the Internet of Things and over-the-top (OTT) content services. In this paper, we study the optimization of network energy efficiency in CDSA-based heterogeneous cloud radio access networks (H-CRAN), which have heterogeneous fronthaul between the control base station (CBS) and the data base stations (DBSs). We first present a modified power consumption model for the CDSA-based H-CRAN, and then formulate the optimization problem under a constraint on the overall capacity of the wireless fronthaul. We solve the resource assignment and power allocation by a convex relaxation approach. Using the fractional programming method and the Lagrangian dual decomposition method, we derive the closed-form optimal solution and verify it by comprehensive system-level simulation. The simulation results show that our proposed algorithm achieves an 8% EE gain over the static algorithm, and that the CDSA-based H-CRAN can achieve up to a 16% EE gain over the conventional network even under a strict fronthaul capacity limit.
- In this paper, we investigate the termination problem for a family of polynomial programs, in which all assignments to program variables are polynomials and the test conditions of loops and conditional statements are polynomial equations. Our main result is that the set of non-terminating inputs of such a polynomial program is algorithmically computable via a strictly descending chain of algebraic sets, which implies that the termination problem for these programs is decidable. The complexity of the algorithm follows immediately from the length of the chain, which can be computed using Hilbert's function and Macaulay's theorem. To the best of our knowledge, the considered family of polynomial programs is the largest with a decidable termination problem so far. The experimental results indicate the efficiency of our approach.
- We study a one-hop device-to-device (D2D) assisted wireless caching network, where popular files are randomly and independently cached in the memory of end users. Each user may obtain a requested file from its own memory without any transmission, from a helper through a one-hop D2D transmission, or from the base station (BS). We formulate a joint D2D link scheduling and power allocation problem to maximize the system throughput. However, the problem is non-convex and obtaining an optimal solution is computationally hard. Alternatively, we decompose the problem into a D2D link scheduling problem and an optimal power allocation problem. To solve the two subproblems, we first develop a D2D link scheduling algorithm to select the largest number of D2D links satisfying both the signal to interference plus noise ratio (SINR) and transmit power constraints. Then, we develop an optimal power allocation algorithm to maximize the minimum transmission rate of the scheduled D2D links. Numerical results indicate that both the number of scheduled D2D links and the system throughput can be improved simultaneously with the Zipf-distribution caching scheme, the proposed D2D link scheduling algorithm, and the proposed optimal power allocation algorithm, compared with the state of the art.
- Objectives: We develop a framework for the analysis of synergy and redundancy in the pattern of information flow between subsystems of a complex network. Methods: The presence of redundancy and/or synergy in multivariate time series data makes it difficult to estimate the net flow of information from each driver variable to a given target. We show that, adopting an unnormalized definition of Granger causality, one may reveal redundant multiplets of variables influencing the target by maximizing the total Granger causality to a given target over all possible partitions of the set of driving variables. Consequently, we introduce a pairwise index of synergy which is zero when two independent sources additively influence the future state of the system, unlike previous definitions of synergy. Results: We report the application of the proposed approach to resting-state fMRI data from the Human Connectome Project, showing that redundant pairs of regions arise mainly due to spatial contiguity and interhemispheric symmetry, whilst synergy occurs mainly between non-homologous pairs of regions in opposite hemispheres. Conclusions: Redundancy and synergy, in healthy resting brains, display characteristic patterns revealed by the proposed approach. Significance: The pairwise synergy index introduced here maps the informational character of the system at hand into a weighted complex network; the same approach can be applied to other complex systems whose normal state corresponds to a balance between redundant and synergetic circuits.
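For linear-Gaussian data, bivariate Granger causality reduces to comparing residual variances of a restricted autoregressive fit (the target's own past) and a full fit (adding the driver's past). The sketch below uses the standard log-ratio form; the unnormalized definition exploited in the paper differs from this, so treat the code as an illustrative baseline rather than the authors' measure.

```python
import numpy as np

def granger_causality(x, y, p=1):
    """Bivariate Granger causality x -> y with AR order p: the log of the
    ratio of residual variances of the restricted model (y's own past only)
    and the full model (y's and x's past)."""
    T = len(y)
    Y = y[p:]
    past_y = np.column_stack([y[p - k - 1:T - k - 1] for k in range(p)])
    past_x = np.column_stack([x[p - k - 1:T - k - 1] for k in range(p)])
    ones = np.ones((T - p, 1))                 # intercept column

    def resid_var(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r / len(r)

    v_restricted = resid_var(np.hstack([ones, past_y]))
    v_full = resid_var(np.hstack([ones, past_y, past_x]))
    return np.log(v_restricted / v_full)

# Toy system: x drives y, but not vice versa.
rng = np.random.default_rng(2)
T = 4000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()
```

On this toy system the estimated causality in the driving direction is large while the reverse direction is near zero, which is the asymmetry the redundancy/synergy analysis above builds on.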
- The use of low precision (e.g., 1-3 bits) analog-to-digital converters (ADCs) in very large multiple-input multiple-output (MIMO) systems is a technique to reduce cost and power consumption. In this context, nevertheless, it has been shown that a very large training duration is required just to obtain acceptable channel state information (CSI) at the receiver. A possible solution for quantized MIMO systems is joint channel-and-data (JCD) estimation. This paper first develops an analytical framework for studying quantized MIMO systems using JCD estimation. In particular, we use Bayes-optimal inference for the JCD estimation and realize this estimator using a recent technique based on approximate message passing. Large-system analysis based on the replica method is then adopted to derive the asymptotic performance of the JCD estimator. Results from simulations confirm our theoretical findings and reveal that the JCD estimator can provide a significant gain over conventional pilot-only schemes in quantized MIMO systems.
- The null linear discriminant analysis (null LDA) method is a competitive approach for dimensionality reduction. The implementation of this method, however, is computationally expensive. Recently, a fast implementation of the null LDA method using random matrix multiplication with scatter matrices was proposed. However, if the random matrix is chosen arbitrarily, the orientation matrix may be rank deficient, and some useful discriminant information will be lost. In this paper, we investigate how to choose the random matrix properly, such that the two criteria of the null LDA method are satisfied theoretically. We give a necessary and sufficient condition guaranteeing full column rank of the orientation matrix. Moreover, a geometric characterization of the condition is also described.
- Efficient scheduling is of great significance for making rational use of scarce satellite resources. Task clustering has been demonstrated to be an effective strategy for improving the efficiency of satellite scheduling. However, the previous task clustering strategy is static: it is integrated into the scheduling in a two-phase manner rather than in a dynamic fashion, without expressing its full potential for improving satellite scheduling performance. In this study, we present an adaptive Simulated Annealing based scheduling algorithm aggregated with a dynamic task clustering strategy (ASA-DTC for short) for satellite observation scheduling problems (SOSPs). First, we develop a formal model for the scheduling of Earth observing satellites. Second, we analyze the constraints involved in the observation task clustering process. Third, we detail an implementation of the dynamic task clustering strategy and the adaptive Simulated Annealing algorithm. The adaptive Simulated Annealing algorithm is efficient, endowed with several sophisticated mechanisms: adaptive temperature control, a tabu-list based revisiting avoidance mechanism, and an intelligent combination of neighborhood structures. Finally, we report on experimental simulation studies that demonstrate the competitive performance of ASA-DTC. Moreover, we show that ASA-DTC is especially effective when SOSPs contain a large number of targets or when the targets are densely distributed in a certain area.
- Jan 16 2014 cs.NE arXiv:1401.3376v3. Population-based search algorithms (PBSAs), including swarm intelligence algorithms (SIAs) and evolutionary algorithms (EAs), are competitive alternatives for solving complex optimization problems, and they have been widely applied to real-world optimization problems in different fields. In this study, a novel population-based across-neighbourhood search (ANS) is proposed for numerical optimization. ANS is motivated by two straightforward assumptions and three important issues raised in improving and designing efficient PBSAs. In ANS, a group of individuals collaboratively search the solution space for an optimal solution of the optimization problem considered. A collection of the superior solutions found so far is maintained and updated dynamically. At each generation, an individual directly searches across the neighbourhoods of multiple superior solutions guided by a Gaussian distribution; this search manner is referred to as across-neighbourhood search. The characteristics of ANS are discussed and conceptual comparisons with other PBSAs are given. The principle behind ANS is simple. Moreover, ANS is easy to implement and apply, with only three parameters to tune. Extensive experiments on 18 benchmark optimization functions of different types show that ANS has well balanced exploration and exploitation capabilities and performs competitively compared with many efficient PBSAs (the Matlab code used in the experiments is available from http://guohuawunudt.gotoip2.com/publications.html).
- In this paper, by using a powerful criterion for permutation polynomials given by Zieve, we give several classes of complete permutation monomials over $\mathbb{F}_{q^r}$. In addition, we present a class of complete permutation multinomials, which generalizes recent work.
- We present a construction for complementary pairs of arrays that exploits a set of mutually unbiased bases, and we enumerate these arrays as well as the corresponding set of complementary sequences obtained from the arrays by projection. We also sketch an algorithm to uniquely generate these sequences. The pairwise squared inner product of members of the sequence set is shown to be $\frac{1}{2}$. Moreover, a subset of the set can be viewed as a codebook that asymptotically achieves $\sqrt{\frac{3}{2}}$ times the Welch bound.
- Device-to-device (D2D) underlaying communication brings great benefits to cellular networks through improved coverage and spectral efficiency, at the expense of more complicated transceiver design. In the frequency spectrum sharing mode, a D2D user generates interference to the existing cellular network in either the downlink or the uplink. Thus the resource allocation for D2D pairs should be designed properly in order to reduce the possible interference, in particular for the uplink. In this paper, we introduce a novel bandwidth allocation scheme to maximize the utilities of both D2D users and cellular users. Since the allocation problem is strongly NP-hard, we apply a relaxation to the association indicators. We propose a low-complexity distributed algorithm and prove its convergence in a static environment. The numerical results show that the proposed scheme can significantly improve the performance in terms of utilities. The performance of D2D communications depends on the D2D user locations, the number of D2D users, and the QoS (Quality of Service) parameters.
- We propose a formal expansion of the transfer entropy that reveals irreducible sets of variables providing information about the future state of each assigned target. Multiplets characterized by a large contribution to the expansion are associated with informational circuits present in the system, with an informational character given by the sign of the contribution. To keep the computational complexity manageable, we adopt the assumption of Gaussianity and use the corresponding exact formula for the conditional mutual information. We report the application of the proposed methodology to two EEG data sets.
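The exact Gaussian formula mentioned above expresses conditional mutual information purely through covariance determinants: $I(X;Y\mid Z) = \tfrac{1}{2}\ln\frac{\det\Sigma_{XZ}\,\det\Sigma_{YZ}}{\det\Sigma_{Z}\,\det\Sigma_{XYZ}}$. The sketch below is a minimal implementation of that identity (function and index names are my own, for illustration), not the authors' code.

```python
import numpy as np

def gaussian_cmi(Sigma, ix, iy, iz):
    """Conditional mutual information I(X; Y | Z), in nats, for jointly
    Gaussian variables with covariance matrix Sigma; ix, iy, iz are lists
    of variable indices for the blocks X, Y and Z."""
    def logdet(idx):
        if not idx:                       # empty conditioning set: det = 1
            return 0.0
        sub = Sigma[np.ix_(idx, idx)]
        return np.linalg.slogdet(sub)[1]  # log|det|, numerically stable
    return 0.5 * (logdet(ix + iz) + logdet(iy + iz)
                  - logdet(iz) - logdet(ix + iy + iz))

# Example: X and Y correlated (rho = 0.8), Z independent of both.
rho = 0.8
Sigma = np.array([[1.0, rho, 0.0],
                  [rho, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
```

Since $Z$ is independent of $(X, Y)$ here, $I(X;Y\mid Z)$ equals the unconditional $I(X;Y) = -\tfrac{1}{2}\ln(1-\rho^2)$, which gives a quick correctness check on the determinant formula.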
- We analyze a simple dynamical network model which describes the limited capacity of nodes to process the input information. For a suitable choice of the parameters, the information flow pattern is characterized by exponential distribution of the incoming information and a fat-tailed distribution of the outgoing information, as a signature of the law of diminishing marginal returns. The analysis of a real EEG data-set shows that similar phenomena may be relevant for brain signals.
- Jan 03 2012 cs.NI arXiv:1201.0207v1: Congestion in wireless sensor networks (WSNs) can potentially cause packet loss, throughput impairment and energy waste. To address this issue, a hop-by-hop cross-layer congestion control scheme (HCCC) built on a contention-based MAC protocol is proposed in this paper. According to MAC-layer channel information, including the buffer occupancy ratio and congestion degree of the local node, HCCC dynamically adjusts the node's MAC-layer channel access priority and data transmission rate to tackle congestion. Simulations have been conducted to compare HCCC against closely related existing schemes. The results show that HCCC exhibits considerable superiority in terms of packet loss ratio, throughput and energy efficiency.
- Access control is an issue of paramount importance in cyber-physical systems (CPS). In this paper, an access control scheme, namely FEAC, is presented for CPS. FEAC can not only control access to data in normal situations, but also adaptively assign emergency roles and permissions to specific subjects, and proactively inform subjects without explicit access requests so that they can handle emergency situations. In FEAC, emergency-group and emergency-dependency are introduced: emergencies are processed in sequence within a group and in parallel among groups. A priority and dependency model called PD-AGM selects the optimal response-action execution path, aiming to eliminate all emergencies that occur within the system. Fault-tolerant access control policies are used to address failures in emergency management. A case study of a hospital medical care application shows the effectiveness of FEAC.
- Jan 03 2012 cs.NI arXiv:1201.0206v1: In energy-constrained wireless sensor networks, it is important to make full use of the limited energy and maximize the network lifetime, even in the face of unexpected situations. In this paper, all sensor nodes are grouped into clusters, and each cluster has a mobile cluster head that manages the whole cluster. We consider an emergency in which one of the mobile cluster heads breaks down, so that its whole cluster stops working. An efficient approach is proposed for recovering the failed cluster by selecting multiple static sensor nodes as cluster heads to collect packets and transmit them to the sink node. An improved simulated annealing algorithm is used to achieve a uniform deployment of the cluster heads, and the new cluster heads are changed dynamically to keep energy consumption balanced. Among the new cluster heads, packets are transmitted over the lowest-cost multi-hop forwarding path found by Dijkstra's algorithm. A balanced energy consumption model helps find this path and prolong the lifetime of the network; the forwarding path is updated dynamically according to the cost of the path and the residual energy of the nodes along it. The experimental results show that the failed cluster is recovered and the lifetime of the cluster is prolonged.
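The lowest-cost forwarding path mentioned above can be found with a textbook Dijkstra implementation. The edge costs, which in the paper's scheme would combine transmission cost and residual energy, are left abstract here, so this is only a sketch:

```python
import heapq

def cheapest_path(adj, src, dst):
    """Dijkstra's algorithm over a weighted adjacency dict.
    adj[u] is a list of (v, cost) pairs; a cost could, e.g., combine
    transmission energy and the residual energy of the next hop."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None, float('inf')
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]
```

Updating the path dynamically, as in the paper, amounts to re-running this search whenever link costs or residual energies change.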
- Jan 02 2012 cs.NI arXiv:1201.0119v1: In this paper, a family of ant colony algorithms for data aggregation, called DAACA, is presented. DAACA contains three phases: initialization, packet transmission, and operations on pheromones. After initialization, each node estimates its remaining energy and amount of pheromones to compute the probabilities used for dynamically selecting the next hop. After a certain number of rounds of transmission, the pheromones are adjusted periodically, combining the advantages of both global and local pheromone adjustment for evaporating or depositing pheromones. Four different pheromone adjustment strategies are designed to approach the globally optimal network lifetime, namely Basic-DAACA, ES-DAACA, MM-DAACA and ACS-DAACA. Compared with some other data aggregation algorithms, DAACA performs better in terms of average node degree, energy efficiency, network lifetime, computational complexity and the success ratio of one-hop transmission. Finally, we analyze DAACA with respect to robustness, fault tolerance and scalability.
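The probabilistic next-hop selection can be sketched as below. The exponents `alpha` and `beta`, weighting pheromone level against residual energy, are illustrative assumptions, not parameters taken from the paper:

```python
def next_hop_probs(neighbors, alpha=1.0, beta=1.0):
    """Selection probability for each candidate next hop, built from its
    pheromone level tau and residual energy e (ant-colony style).
    neighbors: dict node -> (tau, energy). alpha/beta are assumed
    weighting exponents."""
    weights = {n: (tau ** alpha) * (e ** beta)
               for n, (tau, e) in neighbors.items()}
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}
```

A node would then sample its next hop from this distribution, so hops with more pheromone and more remaining energy are favored without being chosen deterministically.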
- Multicast in Wireless Sensor Networks (WSNs) is an attractive mechanism for delivering data to multiple receivers, as it saves bandwidth. To guarantee the security of multicast, a group key is used to encrypt and decrypt the packets. However, providing key management services in WSNs is complicated because sensor nodes possess limited computing, storage and communication resources. To balance security against these limited resources, a multicast group key management protocol based on the weight-balanced 2-3 tree is proposed to generate, distribute, and update the group key securely and efficiently. A decentralized group key management method is employed: a weight-balanced 2-3 key tree is formed in every subgroup. Instead of conventional symmetric and asymmetric encryption algorithms, the Maximum Distance Separable (MDS) code technique is used to distribute the multicast key dynamically. During key updates, a set of adjustment rules keeps the tree weight-balanced, and pseudo-nodes are added as leaves to reduce the computation and communication complexity. Compared with other group key management protocols, our scheme offers better security and performance.
- Given $n$ points in a circular region $C$ in the plane, we study the problems of moving the $n$ points to its boundary to form a regular $n$-gon such that the maximum (min-max) or the sum (min-sum) of the Euclidean distances traveled by the points is minimized. The problems have applications, e.g., in mobile sensor barrier coverage of wireless sensor networks. The min-max problem further has two versions: the decision version and optimization version. For the min-max problem, we present an $O(n\log^2 n)$ time algorithm for the decision version and an $O(n\log^3 n)$ time algorithm for the optimization version. The previously best algorithms for the two problem versions take $O(n^{3.5})$ time and $O(n^{3.5}\log n)$ time, respectively. For the min-sum problem, we show that a special case with all points initially lying on the boundary of the circular region can be solved in $O(n^2)$ time, improving a previous $O(n^4)$ time solution. For the general min-sum problem, we present a 3-approximation $O(n^2)$ time algorithm, improving the previous $(1+\pi)$-approximation $O(n^2)$ time algorithm. A by-product of our techniques is an algorithm for dynamically maintaining the maximum matching of a circular convex bipartite graph; our algorithm can handle each vertex insertion or deletion on the graph in $O(\log^2 n)$ time. This result is interesting in its own right.
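For intuition, the min-max version restricted to points already on the unit circle can be brute-forced by sampling rotations of the target $n$-gon and trying the $n$ cyclic (order-preserving) assignments. This $O(\text{samples} \cdot n^2)$ sketch is purely illustrative and is not the paper's near-linear-time algorithm:

```python
import math

def min_max_travel(points, samples=360):
    """Approximate min-max travel distance for points on the unit circle
    moved to the vertices of a regular n-gon on the same circle.
    Tries `samples` rotations and all n cyclic assignments."""
    n = len(points)
    angles = sorted(math.atan2(y, x) for x, y in points)
    best = float('inf')
    for s in range(samples):
        theta = 2 * math.pi * s / samples
        verts = [theta + 2 * math.pi * k / n for k in range(n)]
        for shift in range(n):  # cyclic assignment: point i -> vertex i+shift
            worst = 0.0
            for i, a in enumerate(angles):
                b = verts[(i + shift) % n]
                # chord length between two points on the unit circle
                worst = max(worst, 2 * abs(math.sin((a - b) / 2)))
            best = min(best, worst)
    return best
```

When the input already forms a regular $n$-gon, the best rotation and assignment give zero travel, which is a quick sanity check on the sketch.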
- Nov 16 2010 cs.CR arXiv:1011.3098v1: In pervasive computing environments, Location-Based Services (LBSs) are becoming increasingly important due to continuous advances in mobile networks and positioning technologies. Nevertheless, the wide deployment of LBSs can jeopardize the location privacy of mobile users, so safeguarding their location privacy against attacks is an important research issue. In this paper, a new scheme for safeguarding location privacy is proposed. Our approach supports location K-anonymity for a wide range of mobile users, each with their own desired anonymity level, by clustering: the area containing all users is divided recursively into clusters to obtain Minimum Bounding Rectangles (MBRs), and the exact location of a user is replaced by their MBR. Privacy analysis shows that our approach achieves high resilience to location privacy threats and provides more privacy than users expect. Complexity analysis shows that clusters can be adjusted in real time as mobile users join or leave. Moreover, the clustering algorithms are robust.
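Replacing a user's exact position by the Minimum Bounding Rectangle of their cluster is straightforward to sketch. This toy version shows only the cloaking step; the recursive clustering itself is omitted:

```python
def mbr(points):
    """Axis-aligned Minimum Bounding Rectangle of (x, y) locations,
    returned as ((min_x, min_y), (max_x, max_y))."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def cloak(cluster, k):
    """Report a user's location as the cluster's MBR, but only when the
    cluster holds at least k users (location K-anonymity)."""
    return mbr(cluster) if len(cluster) >= k else None
```

An adversary observing the MBR cannot distinguish which of the (at least) k users inside it issued the query, which is the K-anonymity guarantee the scheme builds on.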
- Nov 16 2010 cs.OH arXiv:1011.3852v1: This paper describes iCare, a mobile health monitoring system for the elderly. We use wireless body sensors and smart phones to monitor their wellbeing, offering remote monitoring anytime, anywhere, and providing services tailored to each person's health condition. When an emergency is detected, the smart phone automatically alerts pre-assigned contacts, such as the user's family and friends, and calls the emergency centre for an ambulance. iCare also acts as a personal health information system and medical guide, offering a communication platform and a medical knowledge database so that family, friends and doctors can cooperate in the user's care. The system further features functions that cater to the daily needs of the elderly, including regular reminders, a quick alarm, and medical guidance. iCare is thus not only a real-time health monitoring system for the elderly, but also a living assistant that makes their lives more convenient and comfortable.
- Body Sensor Networks (BSNs) provide continuous health monitoring and analysis of physiological parameters, and therefore require a high degree of Quality-of-Service (QoS). Inter-user interference arises when BSNs congregating in the same area communicate simultaneously. In this paper, a decentralized inter-user interference suppression algorithm for BSNs, namely DISG, is proposed. Each BSN measures the SINR from other BSNs and then adaptively selects a suitable channel and transmission power. Using non-cooperative game theory and a no-regret learning algorithm, DISG provides an adaptive inter-user interference suppression strategy. The correctness and effectiveness of DISG are proved theoretically, and experimental results show that DISG can effectively reduce the impact of inter-user interference.
- Nov 16 2010 cs.NI arXiv:1011.3116v1: A high degree of reliability for critical data transmission is required in body sensor networks (BSNs). However, BSNs are usually vulnerable to channel impairments due to the body fading effect and RF interference, which may potentially cause data transmission to be unreliable. In this paper, an adaptive and flexible fault-tolerant communication scheme for BSNs, namely AFTCS, is proposed. AFTCS adopts a channel bandwidth reservation strategy to provide reliable data transmission when channel impairments occur. In order to fulfill the reliability requirements of critical sensors, fault-tolerant priorities and queues are employed to adaptively adjust the channel bandwidth allocation. Simulation results show that AFTCS can alleviate the effect of channel impairments, while yielding lower packet loss rate and latency for critical sensors at runtime.
- Reducing the energy consumption of computing systems is an increasingly important issue. In this paper, we consider partition-based energy-aware scheduling of periodic real-time tasks on multicore processors. The scheduler exploits dynamic voltage scaling (DVS) and core sleep scheduling to reduce both dynamic and leakage energy consumption. If the overhead of core state switching is non-negligible, however, the energy efficiency of this scheduling strategy may degrade. To achieve further energy savings, we extend the static task schedule with run-time task reallocation. The basic idea is to aggregate idle time among cores so that as many cores as possible can be put to sleep in a way that reduces the overall energy consumption. Simulation results show that the proposed approach yields up to 20% energy savings over traditional leakage-aware DVS.
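As a concrete baseline for the partitioning step, a Worst-Fit Decreasing heuristic spreads utilization evenly across cores, which leaves aggregated idle time that sleep scheduling can exploit. The paper's own partitioning heuristic is not detailed in the abstract, so this is an assumed stand-in:

```python
def partition_wfd(utilizations, num_cores):
    """Worst-Fit Decreasing partitioning of periodic tasks by utilization,
    a common baseline for partitioned multicore scheduling. Returns a list
    of task-index lists per core, or None if some task cannot be placed
    under the uniprocessor EDF bound (per-core utilization <= 1)."""
    cores = [[] for _ in range(num_cores)]
    load = [0.0] * num_cores
    for i, u in sorted(enumerate(utilizations), key=lambda t: -t[1]):
        k = min(range(num_cores), key=lambda c: load[c])  # least-loaded core
        if load[k] + u > 1.0:
            return None  # infeasible under this heuristic
        cores[k].append(i)
        load[k] += u
    return cores
```

Run-time reallocation, as proposed in the paper, would then migrate tasks between such partitions to empty out cores before putting them to sleep.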
- We consider relay selection in a cooperative cellular network where user terminals act as mobile relays to assist communication between the base station (BS) and a mobile station (MS). A novel relay selection scheme, called Joint Uplink and Downlink Relay Selection (JUDRS), is proposed in this paper. JUDRS has two key features: (i) the relay is selected jointly for uplink and downlink, so that the relay selection overhead is reduced, and (ii) it minimizes the weighted total energy consumption of the MS, relay and BS, taking into account the channel quality and traffic load of the uplink and downlink. Information-theoretic analysis of the diversity-multiplexing tradeoff demonstrates that the proposed scheme achieves full spatial diversity in the number of cooperating terminals. Numerical results further confirm a significant energy efficiency gain of the proposed algorithm compared to the previous best-worse-channel selection and best-harmonic-mean selection algorithms.
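The weighted energy criterion in (ii) amounts to an argmin over candidate relays. A toy sketch, where the per-link energy terms and weights are assumed for illustration rather than taken from the paper:

```python
def select_relay(relays, w_ms=1.0, w_relay=1.0, w_bs=1.0):
    """Pick the relay minimizing the weighted total energy consumption over
    both uplink and downlink, in the spirit of joint selection.
    relays: dict name -> (e_ms, e_relay, e_bs), the assumed energy spent by
    the MS, the relay, and the BS when that relay serves both directions."""
    def cost(r):
        e_ms, e_rel, e_bs = relays[r]
        return w_ms * e_ms + w_relay * e_rel + w_bs * e_bs
    return min(relays, key=cost)
```

Selecting one relay for both directions at once is what removes the duplicated signaling overhead of separate uplink and downlink selection.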
- Relay selection enhances the performance of cooperative networks by selecting the links with higher capacity, while link adaptation improves the spectral efficiency of wireless data-centric networks by adapting the modulation and coding scheme (MCS) to the current link condition. In this paper, relay selection is combined with link adaptation for distributed beamforming in a two-hop regenerative cooperative system. A novel signaling mechanism and related optimal algorithms are proposed for joint relay selection and link adaptation. In the proposed scheme, there is no need to feed back the relay selection results to each relay; instead, by broadcasting the link adaptation results from the destination, each relay automatically learns whether it has been selected. Lower and upper bounds on the throughput of the proposed scheme are derived. The analysis and simulation results indicate that the proposed scheme provides synergistic gains compared to pure relay selection and pure link adaptation.
- In time-critical wireless sensor network (WSN) applications, a high degree of reliability is commonly required. In this paper, a dynamical jumping real-time fault-tolerant routing protocol (DMRF) is proposed. Each node utilizes the remaining transmission time of the data packets and the state of the forwarding candidate node set to dynamically choose the next hop. When node failure, network congestion or a void region occurs, transmission switches to jumping mode, which reduces the transmission delay and guarantees that data packets reach the destination node within the specified time limit. Using a feedback mechanism, each node dynamically adjusts its jumping probability to increase the ratio of successful transmissions. Simulation results show that DMRF not only efficiently reduces the effects of failed nodes, congestion and void regions, but also yields a higher ratio of successful transmissions, smaller transmission delay and a reduced number of control packets.