results for au:Huang_C in:cs

- Evaluating expression of the Human epidermal growth factor receptor 2 (Her2) by visual examination of immunohistochemistry (IHC) on invasive breast cancer (BCa) is a key part of the diagnostic assessment of BCa due to its recognised importance as a predictive and prognostic marker in clinical practice. However, visual scoring of Her2 is subjective and consequently prone to inter-observer variability. Given the prognostic and therapeutic implications of Her2 scoring, a more objective method is required. In this paper, we report on a recent automated Her2 scoring contest, held in conjunction with the annual PathSoc meeting in Nottingham in June 2016, aimed at systematically comparing and advancing the state-of-the-art Artificial Intelligence (AI) based automated methods for Her2 scoring. The contest dataset comprised digitised whole slide images (WSI) of sections from 86 cases of invasive breast carcinoma stained with both Haematoxylin & Eosin (H&E) and IHC for Her2. The contesting algorithms automatically predicted scores of the IHC slides for an unseen subset of the dataset, and the predicted scores were compared with the 'ground truth' (a consensus score from at least two experts). We also report on a simple Man vs Machine contest for the scoring of Her2 and show that the automated methods could beat the pathology experts on this contest dataset. This paper presents a benchmark for comparing the performance of automated algorithms for scoring of Her2. It also demonstrates the enormous potential of automated algorithms in assisting the pathologist with objective IHC scoring.
- May 22 2017 cs.CV arXiv:1705.06839v1 In this paper we present a new approach for efficient regression-based object tracking, which we refer to as Deep-LK. Our approach is closely related to the Generic Object Tracking Using Regression Networks (GOTURN) framework of Held et al. We make the following contributions. First, we demonstrate that there is a theoretical relationship between siamese regression networks like GOTURN and the classical Inverse-Compositional Lucas & Kanade (IC-LK) algorithm. Further, we demonstrate that unlike GOTURN, IC-LK adapts its regressor to the appearance of the currently tracked frame; we argue that GOTURN's poor performance on unseen objects and/or viewpoints can be attributed to this missing property. Second, we propose a novel framework for object tracking, which we refer to as Deep-LK, that is inspired by the IC-LK framework. Finally, we show impressive results demonstrating that Deep-LK substantially outperforms GOTURN. Additionally, we demonstrate tracking performance comparable to current state-of-the-art deep trackers whilst being an order of magnitude more computationally efficient (running at 100 FPS).
- Class imbalance is a challenging issue in practical classification problems for deep learning models as well as traditional models. Traditionally successful countermeasures such as synthetic over-sampling have had limited success with the complex, structured data handled by deep learning models. In this paper, we propose Deep Over-sampling (DOS), a framework for extending the synthetic over-sampling method to exploit the deep feature space acquired by a convolutional neural network (CNN). Its key feature is an explicit, supervised representation learning, in which the training data presents each raw input sample with a synthetic embedding target in the deep feature space, sampled from the linear subspace of its in-class neighbors. We implement an iterative process of training the CNN and updating the targets, which induces smaller in-class variance among the embeddings and increases the discriminative power of the deep representation. We present an empirical study using public benchmarks, which shows that the DOS framework not only counteracts class imbalance better than existing methods, but also improves the performance of the CNN in the standard, balanced settings.
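The target-sampling step described above can be sketched as follows: a minimal NumPy illustration of drawing each sample's synthetic embedding target from the span of its in-class neighbours, not the authors' implementation. Function and parameter names are hypothetical, and the full DOS framework additionally retrains the CNN against these targets.

```python
import numpy as np

def deep_oversample_targets(feats, k=3, rng=None):
    """For each deep-feature vector, sample a synthetic embedding target
    as a random convex combination of its k nearest in-class neighbours --
    a SMOTE-like step performed in the CNN's feature space."""
    rng = np.random.default_rng(rng)
    n = feats.shape[0]
    # pairwise squared Euclidean distances between feature vectors
    d = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # exclude each point itself
    targets = np.empty_like(feats)
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]      # k nearest in-class neighbours
        w = rng.random(k)
        w /= w.sum()                     # random convex weights
        targets[i] = w @ feats[nbrs]     # point in the neighbour subspace
    return targets
```

Iterating CNN training against such targets, as the abstract describes, pulls same-class embeddings together and shrinks in-class variance.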
- Apr 26 2017 cs.CV arXiv:1704.07754v1 Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them often adopt a single modality or stack multiple modalities as different input channels. To better leverage the multiple modalities, we propose a deep encoder-decoder structure with cross-modality convolution layers to incorporate different modalities of MRI data. In addition, we exploit convolutional LSTM to model a sequence of 2D slices, and jointly learn the multi-modalities and convolutional LSTM in an end-to-end manner. To avoid converging to certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 show that our method outperforms state-of-the-art biomedical segmentation approaches.
- Apr 24 2017 cs.NI arXiv:1704.06569v1 The Domain Name System (DNS), one of the most important pieces of Internet infrastructure, is vulnerable to attacks, because its designers did not take security issues into consideration at the beginning. The defects of DNS may cause users to fail to access websites; worse, users might suffer huge economic losses. In order to correct wrong DNS resource records, we propose a Self-Feedback Correction System for DNS (SFCSD), which can find and track correct domain name/IP address correspondences for a large number of common websites, providing users with a real-time, automatically updated list of correct (IP, Domain) binary tuples. By passively matching specific strings in SSL, DNS and HTTP traffic, filtering with CDN CNAME and non-homepage URL feature strings, and verifying with a webpage fingerprint algorithm, SFCSD obtains a large number of probably correct IP addresses for a final active manual correction. Its self-feedback mechanism expands the search range and improves performance. Experiments show that SFCSD achieves 94.3% precision and 93.07% recall with optimal threshold selection on the test dataset. Running stand-alone at 8 Gbps, it finds almost 1000 probably correct (IP, Domain) tuples per day for each specific string and corrects almost 200.
- Apr 11 2017 cs.SY arXiv:1704.02641v1 The paper provides simple formulas of Bayesian filtering for the exact recursive computation of state conditional probability density functions given quantized innovations signal measurements of a linear stochastic system. This is a topic of current interest because the innovations signal should be white and therefore efficient in its use of channel capacity and in the design and optimization of the quantizer. Earlier approaches, which we reexamine and characterize here, have relied on assumptions concerning densities or approximations to yield recursive solutions, which include the sign-of-innovations Kalman filter and a Particle filtering technique. Our approach uses the Kalman filter innovations at the transmitter side and provides a point of comparison for the other methods, since it is based on the Bayesian filter. Computational examples are provided.
- Mar 21 2017 cs.CV arXiv:1703.06256v1 The histogram of oriented gradients (HOG) is a widely used feature descriptor in computer vision for the purpose of object detection. In this paper, a modified HOG descriptor is described; it uses a lookup table and the integral image method to speed up detection performance by a factor of 5-10. By exploiting the special hardware features of a given platform (e.g. a digital signal processor), further improvement can be made to the HOG descriptor in order to achieve real-time object detection and tracking.
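The speed-up rests on two standard tricks the abstract names: a lookup table assigning each pixel's gradient to an orientation bin, and per-bin integral images so that the histogram of any block costs four lookups per bin regardless of block size. A rough NumPy sketch of the integral-image part (an illustration of the general technique, not the paper's DSP implementation):

```python
import numpy as np

def integral_orientation_hist(mag, ori_bin, n_bins=9):
    """Per-bin integral images: ii[y, x, b] holds the sum of gradient
    magnitude assigned to orientation bin b over the rectangle [0..y, 0..x]."""
    h, w = mag.shape
    hist = np.zeros((h, w, n_bins))
    # scatter each pixel's magnitude into its orientation bin
    hist[np.arange(h)[:, None], np.arange(w)[None, :], ori_bin] = mag
    return hist.cumsum(axis=0).cumsum(axis=1)

def block_hist(ii, y0, x0, y1, x1):
    """O(1) orientation histogram of rows y0..y1-1, cols x0..x1-1,
    via the usual four-corner integral-image lookups per bin."""
    s = ii[y1 - 1, x1 - 1].copy()
    if y0 > 0:
        s -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        s -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s
```

With the integral volume precomputed once per frame, every detection-window cell costs the same constant work, which is where the reported 5-10x factor plausibly comes from.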
- Mar 20 2017 cs.CV arXiv:1703.05884v2 In this paper, we propose the first higher frame rate video dataset (called Need for Speed - NfS) and benchmark for visual object tracking. The dataset consists of 100 videos (380K frames) captured with now commonly available higher frame rate (240 FPS) cameras from real world scenarios. All frames are annotated with axis-aligned bounding boxes and all sequences are manually labelled with nine visual attributes, such as occlusion, fast motion, background clutter, etc. Our benchmark provides an extensive evaluation of many recent and state-of-the-art trackers on higher frame rate sequences. We ranked each of these trackers according to their tracking accuracy and real-time performance. One of our surprising conclusions is that at higher frame rates, simple trackers such as correlation filters outperform complex methods based on deep networks. This suggests that for practical applications (such as in robotics or embedded vision), one needs to carefully trade off bandwidth constraints associated with higher frame rate acquisition, computational costs of real-time analysis, and the required application accuracy. Our dataset and benchmark allow, for the first time (to our knowledge), systematic exploration of such issues, and will be made available to allow for further research in this space.
- Feb 23 2017 cs.CY arXiv:1702.06830v2 While smart living based on the control of voices, gestures, mobile phones or the Web has gained momentum in both academia and industry, most existing methods are not effective in helping the elderly or people with muscle disorders or motor disabilities. Recently, Electroencephalography (EEG) signal based mind control has attracted much attention, due to the fact that it enables users to control devices and to communicate with the outer world with little participation of their muscle systems. However, the use of EEG signals faces challenges such as low accuracy and arduous, time-consuming feature extraction. This paper proposes a 7-layer deep learning model that includes two layers of Long Short-Term Memory (LSTM) cells to directly classify raw EEG signals, avoiding the time-consuming pre-processing and feature extraction. The hyper-parameters are selected by an Orthogonal Array experiment method to improve efficiency. Our model is applied to an open EEG dataset released by PhysioNet and achieves 95.05% accuracy over 5-category raw EEG data. The applicability of our proposed model is further demonstrated by two smart-living use cases: assisted living with robotics and home automation.
- Feb 13 2017 cs.ET arXiv:1702.03216v1 Optical interconnect is a potential solution to attain the large bandwidth on-chip communications needed in high performance computers in a low power and low cost manner. Mode-division multiplexing (MDM) is an emerging technology that scales the capacity of a single wavelength carrier by the number of modes in a multimode waveguide, and is attractive as a cost-effective means for high bandwidth density on-chip communications. Advanced modulation formats with high spectral efficiency in MDM networks can further improve the data rates of the optical link. Here, we demonstrate an intra-chip MDM communications link employing advanced modulation formats with two waveguide modes. We demonstrate a compact single wavelength carrier link that is expected to support 2x100 Gb/s mode multiplexed capacity. The network comprised integrated microring modulators at the transmitter, mode multiplexers, multimode waveguide interconnect, mode demultiplexers and integrated germanium on silicon photodetectors. Each of the mode channels achieves 100 Gb/s line rate with 84 Gb/s net payload data rate at 7% overhead for hard-decision forward error correction (HD-FEC) in the OFDM/16-QAM signal transmission.
- Instant messaging is one of the major channels of computer-mediated communication. However, humans are known to be very limited in understanding others' emotions via text-based communication. Aiming at introducing emotion-sensing technologies to instant messaging, we developed EmotionPush, a system that automatically detects the emotions of the messages end-users receive on Facebook Messenger and provides colored cues on their smartphones accordingly. We conducted a deployment study with 20 participants over a time span of two weeks. In this paper, we reveal five challenges, along with examples, that we observed in our study based on both users' feedback and chat logs: (i) the continuum of emotions, (ii) multi-user conversations, (iii) different dynamics between different users, (iv) misclassification of emotions and (v) unconventional content. We believe this discussion will benefit future exploration of affective computing for instant messaging, and also shed light on research into conversational emotion sensing.
- In this paper, an efficient divide-and-conquer (DC) algorithm is proposed for symmetric tridiagonal matrices, based on ScaLAPACK and hierarchically semiseparable (HSS) matrices, an important type of rank-structured matrices. Most of the DC algorithm's time is spent computing the eigenvectors via matrix-matrix multiplications (MMM). In our parallel hybrid DC (PHDC) algorithm, MMM is accelerated using HSS matrix techniques when the intermediate matrix is large. All the HSS algorithms are implemented via the STRUMPACK package. PHDC has been tested with many different matrices. Compared with the DC implementation in MKL, PHDC can be faster for some matrices with few deflations when using hundreds of processes. However, the gains decrease as the number of processes increases. Comparisons of PHDC with ELPA (the Eigenvalue soLvers for Petascale Applications library) are similar. PHDC is usually slower than MKL and ELPA when using 300 or more processes on the Tianhe-2 supercomputer.
- We propose an iterative channel estimation algorithm based on Least Square Estimation (LSE) and the Sparse Message Passing (SMP) algorithm for Millimeter Wave (mmWave) MIMO systems. The channel coefficients of mmWave MIMO are approximately modeled as a Bernoulli-Gaussian distribution, since there are relatively few paths in the mmWave channel, i.e., the channel matrix is sparse and has only a few non-zero entries. By leveraging this sparseness, we propose an algorithm that iteratively detects the exact location and value of the non-zero entries of the sparse channel matrix. The SMP detects the exact location of the non-zero entries of the channel matrix, while the LSE estimates their values at each iteration. We also analyze the Cramer-Rao Lower Bound (CRLB), and show that the proposed algorithm is a minimum variance unbiased estimator. Furthermore, we employ the Gaussian approximation for message densities under density evolution to simplify the analysis of the algorithm, which provides a simple method to predict its performance. Numerical experiments show that the proposed algorithm performs much better than existing sparse estimators, especially when the channel is sparse. In addition, our proposed algorithm converges to the CRLB of the genie-aided estimation of sparse channels in just 5 turbo iterations.
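The detect-the-support / least-squares-on-the-support alternation described above can be made concrete with a much simpler greedy sketch (essentially orthogonal matching pursuit, shown only to illustrate the two roles; it is not the authors' message-passing SMP detector):

```python
import numpy as np

def detect_then_lse(A, y, n_iter=5):
    """Toy split of the two roles in the abstract: greedily 'detect' the
    location of a non-zero entry (column most correlated with the residual),
    then re-estimate the values of all active entries by least squares."""
    m, n = A.shape
    support = []
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(A.T @ r)))      # support detection step
        if j not in support:
            support.append(j)
        As = A[:, support]
        vals, *_ = np.linalg.lstsq(As, y, rcond=None)  # LSE on the support
        x = np.zeros(n)
        x[support] = vals
        r = y - A @ x                            # residual for next round
    return x
```

In the actual algorithm, the detection step is performed by sparse message passing over the Bernoulli-Gaussian prior rather than by a single correlation maximum, but the alternation structure is the same.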
- The emerging marketplace for online free services, in which service providers earn revenue from using consumer data in direct and indirect ways, has led to significant privacy concerns. This leads to the following question: can the online marketplace sustain multiple service providers (SPs) that offer privacy-differentiated free services? This paper studies the problem of market segmentation for the free online services market by augmenting the classical Hotelling model for market segmentation analysis to include the fact that, in the free services market, a consumer values service not in monetized terms but by its quality of service (QoS), and that the differentiator of services is not product price but the privacy risk advertised by a SP. Building upon the Hotelling model, this paper presents a parametrized model of SP profit and consumer valuation of service for both the two- and multi-SP problems to show that: (i) when consumers place a high value on privacy, it leads to a lower use of private data by SPs (i.e., their advertised privacy risk reduces), and thus, SPs compete on QoS; (ii) SPs that are capable of differentiating on services that do not directly target consumers gain larger market share; and (iii) a higher valuation of privacy by consumers forces SPs with smaller untargeted revenue to offer lower privacy risk to attract more consumers. The work also illustrates the market segmentation problem for more than two SPs and highlights the instability of such markets.
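In the classical Hotelling setup that this work augments, the two-SP market split follows from locating the indifferent consumer. A toy sketch with advertised privacy risk playing the role of price (the paper's parametrized valuation model is richer; all names and the linear "taste mismatch" cost here are illustrative assumptions):

```python
def hotelling_shares(q1, q2, r1, r2, t):
    """Two SPs at the ends of a unit Hotelling line. A consumer at position
    x values SP1 at q1 - r1 - t*x and SP2 at q2 - r2 - t*(1-x), where q is
    QoS, r is advertised privacy risk (standing in for price) and t scales
    the taste-mismatch cost. The indifferent consumer splits the market."""
    x_star = 0.5 + (q1 - q2 + r2 - r1) / (2 * t)
    x_star = min(1.0, max(0.0, x_star))  # market shares must lie in [0, 1]
    return x_star, 1.0 - x_star
```

Even in this stripped-down version, symmetric SPs split the market evenly, and lowering one SP's advertised privacy risk shifts share toward it, which is the competitive force behind findings (i) and (iii) above.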
- Existing deep embedding methods in vision tasks are capable of learning a compact Euclidean space from images, where Euclidean distances correspond to a similarity metric. To make learning more effective and efficient, hard sample mining is usually employed, with samples identified through computing the Euclidean feature distance. However, the global Euclidean distance cannot faithfully characterize the true feature similarity in a complex visual feature space, where the intraclass distance in a high-density region may be larger than the interclass distance in low-density regions. In this paper, we introduce a Position-Dependent Deep Metric (PDDM) unit, which is capable of learning a similarity metric adaptive to local feature structure. The metric can be used to select genuinely hard samples in a local neighborhood to guide the deep embedding learning in an online and robust manner. The new layer is appealing in that it is pluggable into any convolutional network and is trained end-to-end. Our local similarity-aware feature embedding not only demonstrates faster convergence and boosted performance on two complex image retrieval datasets; its large-margin nature also leads to superior generalization results under the large and open set scenarios of transfer learning and zero-shot learning on the ImageNet 2010 and ImageNet-10K datasets.
- In this paper, we propose a novel channel estimation algorithm based on Least Square Estimation (LSE) and the Sparse Message Passing (SMP) algorithm, which is of special interest for Millimeter Wave (mmWave) systems, since it can leverage the inherent sparseness of the mmWave channel. Our proposed algorithm iteratively detects the exact location and value of the non-zero entries of the sparse channel vector without prior knowledge of its distribution. The SMP detects the exact location of the non-zero entries of the channel vector, while the LSE estimates their values at each iteration. We then give an analysis of the Cramer-Rao Lower Bound (CRLB) of our proposed algorithm. Numerical experiments show that our proposed algorithm performs much better than existing sparse estimators (e.g. LASSO), especially when mmWave systems have massive antennas at both the transmitters and receivers. In addition, we also find that our proposed algorithm converges to the CRLB of the genie-aided estimation of sparse channels in just a few turbo iterations.
- Aug 30 2016 cs.CL arXiv:1608.07738v2 In Distributional Semantic Models (DSMs), Vector Cosine is widely used to estimate similarity between word vectors, although this measure has been noted to suffer from several shortcomings. The recent literature has proposed other methods which attempt to mitigate such biases. In this paper, we investigate APSyn, a measure that computes the extent of the intersection between the most associated contexts of two target words, weighting it by context relevance. We evaluated this metric in a similarity estimation task on several popular test sets, and our results show that APSyn is in fact highly competitive, even with respect to the results reported in the literature for word embeddings. Moreover, APSyn addresses some of the weaknesses of Vector Cosine, performing well also on genuine similarity estimation.
- Aug 17 2016 cs.NI arXiv:1608.04411v1 VPN service providers (VSP) and IP-VPN customers have traditionally maintained service demarcation boundaries between their routing and signaling entities. This has resulted in the VPNs viewing the VSP network as an opaque entity and therefore limiting any meaningful interaction between the VSP and the VPNs. A key challenge is to expose each VPN to information about available network resources through an abstraction (TA) [1] which is both accurate and fair. In [2] we proposed three decentralized schemes assuming that all the border nodes performing the abstraction have access to the entire core network topology. This assumption likely leads to over- or under-subscription. In this paper we develop centralized schemes to partition the core network capacities, and assign each partition to a specific VPN for applying the decentralized abstraction schemes presented in [2]. First, we present two schemes based on the maximum concurrent flow and the maximum multicommodity flow (MMCF) formulations. We then propose approaches to address the fairness concerns that arise when MMCF formulation is used. We present results based on extensive simulations on several topologies, and provide a comparative evaluation of the different schemes in terms of abstraction efficiency, fairness to VPNs and call performance characteristics achieved.
- Jul 21 2016 cs.CV arXiv:1607.05781v1 In this paper, we develop a new approach of spatially supervised recurrent convolutional neural networks for visual object tracking. Our recurrent convolutional network exploits the history of locations as well as the distinctive visual features learned by the deep neural networks. Inspired by recent bounding box regression methods for object detection, we study the regression capability of Long Short-Term Memory (LSTM) in the temporal domain, and propose to concatenate high-level visual features produced by convolutional networks with region information. In contrast to existing deep learning based trackers that use binary classification for region candidates, we use regression for direct prediction of the tracking locations both at the convolutional layer and at the recurrent unit. Our extensive experimental results and performance comparison with state-of-the-art tracking methods on challenging benchmark video tracking datasets show that our tracker is more accurate and robust while maintaining low computational cost. For most test video sequences, our method achieves the best tracking performance, often outperforming the second best by a large margin.
- Energy harvesting communication has attracted great research interest due to its wide applications and feasibility of commercialization. In this paper, we investigate multiuser energy diversity. Specifically, we reveal the throughput gain coming from the increase of total available energy harvested over time/space and from the combined dynamics of batteries. Considering both centralized and distributed access schemes, we study the scaling of the average throughput with the number of transmitters, along with the scaling of the corresponding available energy in the batteries.
- Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words. Current DSMs, however, represent context words as separate features, thereby losing important information for word expectations, such as word interrelations. In this paper, we present a DSM that addresses this issue by defining verb contexts as joint syntactic dependencies. We test our representation in a verb similarity task on two datasets, showing that joint contexts achieve performance comparable to single dependencies or even better. Moreover, they are able to overcome the data sparsity problem of joint feature spaces, in spite of the limited size of our training corpus.
- This paper considers a low-complexity Gaussian Message Passing Iterative Detection (GMPID) algorithm for Multiple-Input Multiple-Output systems with Non-Orthogonal Multiple Access (MIMO-NOMA), in which a base station with $N_r$ antennas serves $N_u$ sources simultaneously. Both $N_u$ and $N_r$ are very large, and we consider the case $N_u>N_r$. The GMPID is based on a fully connected loopy graph, which is well understood not to converge in some cases. The large-scale property of MIMO-NOMA is used to simplify the convergence analysis. Firstly, we prove that the variances of the GMPID definitely converge to those of Minimum Mean Square Error (MMSE) detection. Secondly, two sufficient conditions under which the means of the GMPID converge to a higher MSE than that of the MMSE detection are proposed. However, the means of the GMPID may still not converge when $ N_u/N_r< (\sqrt{2}-1)^{-2}$. Therefore, a new convergent SA-GMPID is proposed, which converges to the MMSE detection for any $N_u> N_r$ with a faster convergence speed. Finally, numerical results are provided to verify the validity of the proposed theoretical results.
- This paper studies the throughput maximization problem for a three-node relay channel with non-ideal circuit power. In particular, the relay operates in a half-duplex manner, and the decode-and-forward (DF) relaying scheme is adopted. Considering the extra power consumed by the circuits, the optimal power allocation to maximize the throughput of the considered system over an infinite time horizon is investigated. First, two special scenarios, i.e., direct link transmission (only the direct link is used to transmit) and relay assisted transmission (the source and relay transmit with equal probability), are studied, and the corresponding optimal power allocations are obtained. By solving two non-convex optimization problems, the closed-form solutions show that the source and the relay transmit with a certain probability, determined by the average power budgets, circuit power consumptions, and channel gains. Next, based on these results, the optimal power allocation for both the cases with and without a direct link is derived, which is shown to be a mixed transmission scheme between direct link transmission and relay assisted transmission. Finally, numerical results validate the analysis.
- Apr 26 2016 cs.CV arXiv:1604.06877v1 The prevalent scene text detection approach follows four sequential steps: character candidate detection, false character candidate removal, text line extraction, and text line verification. However, errors occur and accumulate throughout each of these sequential steps, which often leads to low detection performance. To address these issues, we propose a unified scene text detection system, namely Text Flow, utilizing the minimum cost (min-cost) flow network model. With character candidates detected by cascade boosting, the min-cost flow network model integrates the last three sequential steps into a single process, which solves the error accumulation problem at both the character level and the text line level effectively. The proposed technique has been tested on three public datasets, i.e., the ICDAR2011 dataset, the ICDAR2013 dataset and a multilingual dataset, and it outperforms the state-of-the-art methods on all three datasets with much higher recall and F-score. The good performance on the multilingual dataset shows that the proposed technique can be used for the detection of texts in different languages.
- In virtualized data centers, consolidation of Virtual Machines (VMs) to minimize the total number of physical machines (PMs) has been recognized as a very efficient approach. This paper considers the energy-efficient consolidation of VMs in a Cloud data center. Concentrating on CPU-intensive applications, the objective is to schedule all requests non-preemptively, subject to constraints on PM capacities and running time interval spans, such that the total energy consumption of all PMs is minimized (called MinTE for short). The MinTE problem is NP-complete in general. We propose a self-adaptive approach called SAVE. The approach makes VM assignment and migration decisions by probabilistic processes and is based exclusively on local information; it is therefore very simple to implement. Both simulation and real environment tests show that our proposed method SAVE can reduce energy consumption by about 30% against VMware DRS and 10-20% against EcoCloud on average.
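The flavour of probabilistic, local-information consolidation can be sketched as follows. This is a hypothetical illustration of consolidation-biased random placement, not the decision probabilities actually defined for SAVE; all names and the weighting rule are assumptions.

```python
import random

def probabilistic_assign(vm_cpu, pm_loads, pm_cap, bias=2.0, rng=random):
    """Pick a PM for a VM using only local utilisation information: among
    PMs that can fit the VM, choose one with probability growing with its
    current utilisation, so load concentrates and idle PMs can be powered off."""
    feasible = [i for i, u in enumerate(pm_loads) if u + vm_cpu <= pm_cap]
    if not feasible:
        return None  # no PM can host this VM
    # weight ~ (utilisation + epsilon)^bias : busier PMs attract new VMs
    weights = [(pm_loads[i] / pm_cap + 0.01) ** bias for i in feasible]
    pick = rng.random() * sum(weights)
    for i, w in zip(feasible, weights):
        pick -= w
        if pick <= 0:
            return i
    return feasible[-1]
```

Because each decision needs only the candidate PMs' own load figures, such a rule can run fully decentralized, which is what makes this style of scheme simple to implement at scale.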
- While deep convolutional neural networks (CNNs) have shown great success in single-label image classification, it is important to note that real-world images generally contain multiple labels, which could correspond to different objects, scenes, actions and attributes in an image. Traditional approaches to multi-label image classification learn independent classifiers for each category and employ ranking or thresholding on the classification results. These techniques, although working well, fail to explicitly exploit the label dependencies in an image. In this paper, we utilize recurrent neural networks (RNNs) to address this problem. Combined with CNNs, the proposed CNN-RNN framework learns a joint image-label embedding to characterize the semantic label dependency as well as the image-label relevance, and it can be trained end-to-end from scratch to integrate both sources of information in a unified framework. Experimental results on public benchmark datasets demonstrate that the proposed architecture achieves better performance than state-of-the-art multi-label classification models.
- Mar 31 2016 cs.CL arXiv:1603.09054v1 In this paper, we claim that vector cosine, which is generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words. To prove it, we describe and evaluate APSyn, a variant of the Average Precision that, without any optimization, outperforms the vector cosine and the co-occurrence on the standard ESL test set, with an improvement ranging between +9.00% and +17.98%, depending on the number of chosen top contexts.
- Mar 30 2016 cs.CL arXiv:1603.08701v1 In this paper, we claim that Vector Cosine, which is generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting such intersection according to the rank of the shared contexts in the dependency ranked lists. This claim comes from the hypothesis that similar words do not simply occur in similar contexts, but share a larger portion of their most relevant contexts compared to other related words. To prove it, we describe and evaluate APSyn, a variant of Average Precision that, independently of the adopted parameters, outperforms the Vector Cosine and the co-occurrence on the ESL and TOEFL test sets. In the best setting, APSyn reaches 0.73 accuracy on the ESL dataset and 0.70 accuracy on the TOEFL dataset, thereby beating the non-English US college applicants (whose average, as reported in the literature, is 64.50%) and several state-of-the-art approaches.
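As described across these abstracts, APSyn scores two words by the rank-weighted overlap of their top-N most associated context lists. A minimal sketch consistent with that description (the exact weighting in Santus et al.'s APSyn may differ; names are illustrative):

```python
def apsyn(ranked_ctx1, ranked_ctx2, n=1000):
    """Rank-weighted context overlap: sum, over the contexts shared by the
    two words' top-n most associated context lists, of the inverse of the
    shared context's average rank (highly ranked shared contexts count more)."""
    top1 = {c: r for r, c in enumerate(ranked_ctx1[:n], start=1)}
    top2 = {c: r for r, c in enumerate(ranked_ctx2[:n], start=1)}
    shared = top1.keys() & top2.keys()
    return sum(1.0 / ((top1[c] + top2[c]) / 2.0) for c in shared)
```

Unlike vector cosine, which compares full (sparse) context vectors, this measure depends only on the ordering of each word's most relevant contexts, which is the property the hypothesis above appeals to.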
- Mar 30 2016 cs.CL arXiv:1603.08705v1 In this paper, we describe ROOT13, a supervised system for the classification of hypernyms, co-hyponyms and random words. The system relies on a Random Forest algorithm and 13 unsupervised corpus-based features. We evaluate it with a 10-fold cross validation on 9,600 pairs, equally distributed among the three classes and involving several Parts-Of-Speech (i.e. adjectives, nouns and verbs). When all the classes are present, ROOT13 achieves an F1 score of 88.3%, against a baseline of 57.6% (vector cosine). When the classification is binary, ROOT13 achieves the following results: hypernyms-co-hyponyms (93.4% vs. 60.2%), hypernyms-random (92.3% vs. 65.5%) and co-hyponyms-random (97.3% vs. 81.5%). Our results are competitive with state-of-the-art models.
- Mar 30 2016 cs.CL arXiv:1603.08702v1 ROOT9 is a supervised system for the classification of hypernyms, co-hyponyms and random words that is derived from the already introduced ROOT13 (Santus et al., 2016). It relies on a Random Forest algorithm and nine unsupervised corpus-based features. We evaluate it with a 10-fold cross validation on 9,600 pairs, equally distributed among the three classes and involving several Parts-Of-Speech (i.e. adjectives, nouns and verbs). When all the classes are present, ROOT9 achieves an F1 score of 90.7%, against a baseline of 57.2% (vector cosine). When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1% and co-hyponyms-random 97.8% vs. 79.4%. In order to compare its performance with the state-of-the-art, we have also evaluated ROOT9 on subsets of the Weeds et al. (2014) datasets, proving that it is in fact competitive. Finally, we investigated whether the system learns the semantic relation or whether it simply learns the prototypical hypernyms, as claimed by Levy et al. (2015). The second possibility seems the most likely, even though ROOT9 can be trained on negative examples (i.e., switched hypernyms) to drastically reduce this bias.
- Mar 24 2016 cs.DM arXiv:1603.07168v1We are given a bipartite graph $G = (A \cup B, E)$ where each vertex has a preference list ranking its neighbors: in particular, every $a \in A$ ranks its neighbors in a strict order of preference, whereas the preference lists of $b \in B$ may contain ties. A matching $M$ is popular if there is no matching $M'$ such that the number of vertices that prefer $M'$ to $M$ exceeds the number of vertices that prefer $M$ to~$M'$. We show that the problem of deciding whether $G$ admits a popular matching or not is NP-hard. This is the case even when every $b \in B$ either has a strict preference list or puts all its neighbors into a single tie. In contrast, we show that the problem becomes polynomially solvable in the case when each $b \in B$ puts all its neighbors into a single tie. That is, all neighbors of $b$ are tied in $b$'s list and $b$ desires to be matched to any of them. Our main result is an $O(n^2)$ algorithm (where $n = |A \cup B|$) for the popular matching problem in this model. Note that this model is quite different from the model where vertices in $B$ have no preferences and do not care whether they are matched or not.
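The popularity comparison at the heart of the definition above can be sketched directly (strict preference lists only; the ties allowed for $b \in B$ in the paper would require an extended comparison):

```python
def prefers(pref, candidate, current):
    """True if a vertex with strict preference list `pref` (best first)
    prefers `candidate` over `current` (None = unmatched, ranked last)."""
    if candidate is None:
        return False
    if current is None:
        return True
    return pref.index(candidate) < pref.index(current)

def more_popular(M1, M2, prefs):
    """Compare matchings M1, M2 (dicts: vertex -> partner, missing = unmatched).
    Returns the net number of vertices preferring M1 over M2; M1 is
    more popular than M2 iff the result is positive."""
    votes = 0
    for v, pref in prefs.items():
        if prefers(pref, M1.get(v), M2.get(v)):
            votes += 1
        elif prefers(pref, M2.get(v), M1.get(v)):
            votes -= 1
    return votes
```

A matching $M$ is popular exactly when no $M'$ makes this vote count negative for $M$; the hardness result says that checking this over all $M'$ cannot be done efficiently in general.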
- The evolution of communication technology and the proliferation of electronic devices have given adversaries powerful means for targeted attacks via all sorts of accessible resources. In particular, owing to the intrinsic interdependency and ubiquitous connectivity of modern communication systems, adversaries can devise malware that propagates through intermediate hosts to approach the target, which we refer to as transmissive attacks. Inspired by biology, the transmission pattern of such an attack in the digital space closely resembles the spread of an epidemic in real life. This paper elaborates on transmissive attacks, summarizes the utility of epidemic models in communication systems, and draws connections between transmissive attacks and epidemic models. Simulations, experiments, and ongoing research challenges on transmissive attacks are also addressed.
- Feb 04 2016 cs.CV arXiv:1602.01197v1Data imbalance is common in many vision tasks where one or more classes are rare. Without addressing this issue, conventional methods tend to be biased toward the majority class with poor predictive accuracy for the minority class. These methods further deteriorate on small, imbalanced data with a large degree of class overlap. In this study, we propose a novel discriminative sparse neighbor approximation (DSNA) method to ameliorate the effect of class imbalance during prediction. Specifically, given a test sample, we first traverse it through a cost-sensitive decision forest to collect a good subset of training examples in its local neighborhood. Then we generate from this subset several class-discriminating but overlapping clusters and model each as an affine subspace. From these subspaces, the proposed DSNA iteratively seeks an optimal approximation of the test sample and outputs an unbiased prediction. We show that our method not only effectively mitigates the imbalance issue, but also allows the prediction to extrapolate to unseen data. The latter capability is crucial for achieving accurate prediction on small datasets with limited samples. The proposed imbalanced learning method can be applied to both classification and regression tasks at a wide range of imbalance levels. It significantly outperforms state-of-the-art methods that do not possess an imbalance handling mechanism, and is found to perform comparably or even better than recent deep learning methods while using hand-crafted features only.
- Machine learning algorithms, in conjunction with user data, hold the promise of revolutionizing the way we interact with our phones, and indeed their widespread adoption in the design of apps bears testimony to this promise. However, currently, the computationally expensive segments of the learning pipeline, such as feature extraction and model training, are offloaded to the cloud, resulting in an over-reliance on the network and under-utilization of the computing resources available on mobile platforms. In this paper, we show that by combining the computing power distributed over a number of phones, judicious optimization choices, and contextual information, it is possible to efficiently execute the end-to-end pipeline entirely on the phones at the edge of the network. We also show that by harnessing the power of this combination, it is possible to execute a computationally expensive pipeline in near real time. To demonstrate our approach, we implement an end-to-end image-processing pipeline -- one that includes feature extraction, vocabulary learning, vectorization, and image clustering -- on a set of mobile phones. Our results show a 75% improvement over the standard, full pipeline implementation running on the phones without modification -- reducing the time to one minute under certain conditions. We believe that this result is a promising indication that fully distributed, infrastructure-less computing is possible on networks of mobile phones, enabling a new class of mobile applications that are less reliant on the cloud.
- Person names and location names are essential building blocks for identifying events and social networks in historical documents that were written in literary Chinese. We take the lead in exploring algorithmic recognition of named entities in literary Chinese for historical studies, using language-model based and conditional-random-field based methods, and extend our work to mining the document structures of historical documents. Practical evaluations were conducted with texts that were extracted from more than 220 volumes of local gazetteers (Difangzhi). Difangzhi is a huge collection and the single most important source of information about officers who served in local governments in Chinese history. Our methods performed very well on these realistic tests. Thousands of names and addresses were identified from the texts. A good portion of the extracted names match the biographical information currently recorded in the China Biographical Database (CBDB) of Harvard University, and many others can be verified by historians and will become new additions to CBDB.
- Distributed statistical inference has recently attracted enormous attention. Much existing work focuses on the averaging estimator. We propose a one-step approach to enhance a simple-averaging based distributed estimator, and derive the corresponding asymptotic properties of the newly proposed estimator. We find that the proposed one-step estimator enjoys the same asymptotic properties as the centralized estimator. The one-step approach merely requires one additional round of communication relative to the averaging estimator, so the extra communication burden is insignificant. In finite-sample cases, numerical examples show that the proposed estimator outperforms the simple averaging estimator by a large margin in terms of mean squared error. A potential application of the one-step approach is to use multiple machines to speed up large-scale statistical inference with little compromise in estimator quality. The proposed method becomes even more valuable when data are only available at distributed machines with limited communication bandwidth.
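As a toy illustration of the one-step idea (the exponential-rate model below is an assumption for the sketch, not the paper's setting): each machine computes a local MLE, the averages are then refined by a single distributed Newton step, which corresponds to the one extra communication round.

```python
def local_mle(xs):
    """Local MLE of an exponential rate on one machine: 1 / sample mean."""
    return len(xs) / sum(xs)

def one_step(shards):
    """Simple-averaging estimator refined by one Newton step.
    Each machine sends (score, Hessian) evaluated at the averaged
    estimate -- the 'one additional round of communication'."""
    theta = sum(local_mle(s) for s in shards) / len(shards)  # averaging step
    score = sum(len(s) / theta - sum(s) for s in shards)     # pooled score
    hess = sum(-len(s) / theta ** 2 for s in shards)         # pooled Hessian
    return theta - score / hess                              # Newton refinement
```

On unbalanced shards the plain average is pulled away from the pooled MLE; the Newton step moves it back toward the centralized solution, which is the intuition behind the matching asymptotics.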
- Sep 29 2015 cs.NI arXiv:1509.08420v1The Pacific Rim Application and Grid Middleware Assembly (PRAGMA) is an international community of researchers that actively collaborate to address problems and challenges of common interest in eScience. The PRAGMA Experimental Network Testbed (PRAGMA-ENT) was established with the goal of constructing an international software-defined network (SDN) testbed to offer the necessary networking support to the PRAGMA cyberinfrastructure. PRAGMA-ENT is isolated, and PRAGMA researchers have complete freedom to access network resources to develop, experiment, and evaluate new ideas without concerns of interfering with production networks. In the first phase, PRAGMA-ENT focused on establishing an international L2 backbone. With support from the Florida Lambda Rail (FLR), Internet2, PacificWave, JGN-X, and TWAREN, the PRAGMA-ENT backbone connects OpenFlow-enabled switches at the University of Florida (UF), the University of California San Diego (UCSD), the Nara Institute of Science and Technology (NAIST, Japan), Osaka University (Japan), the National Institute of Advanced Industrial Science and Technology (AIST, Japan), and the National Center for High-Performance Computing (Taiwan). The second phase of PRAGMA-ENT consisted of evaluating technologies for a control plane that enables multiple experiments (i.e., OpenFlow controllers) to co-exist. Preliminary experiments with FlowVisor revealed some limitations, leading to the development of a new approach called AutoVFlow. This paper shares our experience in establishing the PRAGMA-ENT backbone (with international L2 links), its current status, and control plane plans. We also discuss preliminary application ideas, including optimization of routing control, multipath routing control, and remote visualization.
- We propose a virtualization architecture for NoC-based reconfigurable systems. The motivation of this work is to develop a service-oriented architecture that includes Partial Reconfigurable Region as a Service (PRRaaS) and Processing Element as a Service (PEaaS) for software applications. According to the requirements of software applications, new PEs can be created on demand by (re)configuring the logic resources of the PRRs in the FPGA, while the configured PEs can also be virtualized to support multiple application tasks at the same time. As a result, such a two-level virtualization mechanism, comprising gate-level virtualization and PE-level virtualization, enables an SoC to be dynamically adapted to changing application requirements. Therefore, more software applications can be supported, and system performance can be further enhanced.
- Businesses (retailers) often wish to offer personalized advertisements (coupons) to individuals (consumers), but run the risk of strong reactions from consumers who want a customized shopping experience but feel their privacy has been violated. Existing models for privacy such as differential privacy or information theory try to quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy-sensitivity. We propose a Markov decision process (MDP) model to capture (i) different consumer privacy sensitivities via a time-varying state; (ii) different coupon types (action set) for the retailer; and (iii) the action-and-state-dependent cost for perceived privacy violations. For the simple model with two states ("Normal" and "Alerted"), two coupons (targeted and untargeted), and consumer behavior statistics known to the retailer, we show that a stationary threshold-based policy is the optimal coupon-offering strategy for a retailer that wishes to minimize its expected discounted cost. The threshold is a function of all model parameters; the retailer offers a targeted coupon if its belief that the consumer is in the "Alerted" state is below the threshold. We extend this two-state model to consumers with multiple privacy-sensitivity states as well as coupon-dependent state transition probabilities. Furthermore, we study the case with imperfect (noisy) cost feedback from consumers and uncertain initial belief state.
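The threshold structure of the optimal policy can be sketched in a few lines (the belief-update rule and its transition parameters below are hypothetical illustrations; the paper derives the threshold itself from the full set of model parameters):

```python
def offer_coupon(belief_alerted, threshold):
    """Threshold policy from the two-state model: offer the targeted
    coupon only while the belief that the consumer is Alerted stays
    below the threshold (the threshold value is taken as given here)."""
    return "targeted" if belief_alerted < threshold else "untargeted"

def next_belief(belief, action, p_alert, p_calm):
    """Illustrative belief update: a targeted coupon alerts a Normal
    consumer w.p. p_alert; an Alerted consumer calms down w.p. p_calm.
    (Hypothetical transition parameters, for the sketch only.)"""
    if action == "targeted":
        return belief * (1 - p_calm) + (1 - belief) * p_alert
    return belief * (1 - p_calm)
```

Iterating `next_belief` and `offer_coupon` together simulates the retailer's stationary strategy: targeted coupons are offered until the Alerted belief crosses the threshold, then the retailer backs off.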
- Aug 06 2015 cs.SI physics.soc-ph arXiv:1508.00987v1Identifying the most influential individuals can provide invaluable help in developing and deploying effective viral marketing strategies. Previous studies mainly focus on designing efficient algorithms or heuristics to find the top-K influential nodes on a given static social network. In reality, however, social networks keep evolving over time, and recalculating upon each network change inevitably leads to long running times, significantly affecting efficiency. In this paper, we observe from real-world traces that the evolution of social networks follows the preferential attachment rule and that influential nodes are mainly selected from high-degree nodes. These observations shed light on the design of IncInf, an incremental approach that can efficiently locate the top-K influential individuals in evolving social networks based on previous information instead of calculating from scratch. In particular, IncInf quantitatively analyzes the influence spread changes of nodes by localizing the impact of topology evolution to only local regions, and a pruning strategy is further proposed to effectively narrow the search space to nodes experiencing major increases or with high degrees. We carried out extensive experiments on real-world dynamic social networks including Facebook, NetHEPT, and Flickr. Experimental results demonstrate that, compared with the state-of-the-art static heuristic, IncInf achieves as much as 21X speedup in execution time while maintaining matching performance in terms of influence spread.
- Jul 28 2015 cs.DS arXiv:1507.07396v3Makespan minimization in restricted assignment $(R|p_{ij}\in \{p_j, \infty\}|C_{\max})$ is a classical problem in the field of machine scheduling. In a landmark paper in 1990 [8], Lenstra, Shmoys, and Tardos gave a 2-approximation algorithm and proved that the problem cannot be approximated within 1.5 unless P=NP. The upper and lower bounds of the problem have remained essentially unimproved in the intervening 25 years, despite several remarkable recent successes in special cases of the problem [2,4,12]. In this paper, we consider a special case called graph-balancing with light hyperedges, where heavy jobs can be assigned to at most two machines while light jobs can be assigned to any number of machines. For this case, we present algorithms with approximation ratios strictly better than 2. Specifically: Two job sizes: Suppose that light jobs have weight $w$ and heavy jobs have weight $W$, with $w < W$. We give a $1.5$-approximation algorithm (note that the current 1.5 lower bound is established in an even more restrictive setting [1,3]). Indeed, depending on the specific values of $w$ and $W$, our algorithm sometimes guarantees sub-1.5 approximation ratios. Arbitrary job sizes: Suppose that $W$ is the largest given weight, heavy jobs have weights in the range $(\beta W, W]$, where $4/7\leq \beta < 1$, and light jobs have weights in the range $(0,\beta W]$. We present a $(5/3+\beta/3)$-approximation algorithm. Our algorithms are purely combinatorial, without the need of solving a linear program as required in most other known approaches.
- Energy harvesting (EH) based communication has attracted great research interest due to its wide applicability and the feasibility of commercialization. In this paper, we consider wireless communications with EH constraints at the transmitter. First, for delay-tolerant traffic, we investigate the long-term average throughput maximization problem and analytically compare the throughput performance against that of a system supported by conventional power supplies. Second, for delay-sensitive traffic, we analyze the outage probability by studying its asymptotic behavior in the high energy arrival rate regime, where the new concept of energy diversity is formally introduced. Moreover, we show that the speed at which the outage probability approaches zero, termed the energy diversity gain, varies under different power supply models.
- Jun 25 2015 cs.CV arXiv:1506.07310v4Face recognition has been studied for many decades. As opposed to traditional hand-crafted features such as LBP and HOG, much more sophisticated features can be learned automatically by deep learning methods in a data-driven way. In this paper, we propose a two-stage approach that combines a multi-patch deep CNN and deep metric learning, which extracts low-dimensional but very discriminative features for face verification and recognition. Experiments show that this method outperforms other state-of-the-art methods on the LFW dataset, achieving 99.77% pair-wise verification accuracy and significantly better accuracy under two other, more practical protocols. This paper also discusses the importance of data size and the number of patches, showing a clear path to practical high-performance face recognition systems in the real world.
- This paper considers a heterogeneous ad hoc network with multiple transmitter-receiver pairs, in which all transmitters are capable of harvesting renewable energy from the environment and compete for one shared channel by random access. In particular, we focus on two different scenarios: the constant energy harvesting (EH) rate model, where the EH rate remains constant within the time of interest, and the i.i.d. EH rate model, where the EH rates are independent and identically distributed across different contention slots. To quantify the roles of both the energy state information (ESI) and the channel state information (CSI), a distributed opportunistic scheduling (DOS) framework with two-stage probing and save-then-transmit energy utilization is proposed. Then, the optimal throughput and the optimal scheduling strategy are obtained via a one-dimensional search, i.e., an iterative algorithm consisting of the following two steps in each iteration: First, assuming that the stored energy level at each transmitter is stationary with a given distribution, the expected throughput maximization problem is formulated as an optimal stopping problem, whose solution is proved to exist and then derived for both models; second, for a fixed stopping rule, the energy level at each transmitter is shown to be stationary and an efficient iterative algorithm is proposed to compute its steady-state distribution. Finally, we validate our analysis by numerical results and quantify the throughput gain compared with the best-effort delivery scheme.
- Feb 12 2015 cs.CV arXiv:1502.03240v3Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate mean-field approximate inference for the Conditional Random Fields with Gaussian pairwise potentials as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.
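The mean-field updates that CRF-RNN unrolls as recurrent-network layers look, in miniature, like the following (a chain CRF with a scalar Potts penalty stands in for the paper's dense Gaussian pairwise potentials over pixels; all constants are illustrative):

```python
import math

def mean_field(unary, w=1.0, iters=5):
    """Mean-field inference for a chain CRF with Potts pairwise
    potentials. unary[i][l] is the negative-log unary score of label l
    at node i; w penalizes label disagreement between neighbours.
    Returns approximate marginals Q[i][l]."""
    n, L = len(unary), len(unary[0])
    Q = [[1.0 / L] * L for _ in range(n)]
    for _ in range(iters):               # the fixed number of parallel
        newQ = []                        # updates is what CRF-RNN unrolls
        for i in range(n):
            logits = []
            for l in range(L):
                # Potts message: penalty w per unit of neighbour mass
                # on labels other than l
                msg = sum(w * (1.0 - Q[j][l])
                          for j in (i - 1, i + 1) if 0 <= j < n)
                logits.append(-unary[i][l] - msg)
            m = max(logits)
            exps = [math.exp(x - m) for x in logits]
            Z = sum(exps)
            newQ.append([e / Z for e in exps])
        Q = newQ
    return Q
```

Because every step here is differentiable, the same computation can be expressed as RNN layers and trained end-to-end with back-propagation, which is the core observation of the paper.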
- Feb 03 2015 cs.DS arXiv:1502.00447v2The traveling salesman problem (TSP) is one of the most challenging NP-hard problems. It has wide applications in various disciplines such as physics, biology, and computer science. The best known approximation algorithm for the Symmetric TSP (STSP) whose cost matrix satisfies the triangle inequality (called $\triangle$STSP) is the Christofides algorithm, which was proposed in 1976 and is a $\frac{3}{2}$-approximation. Since then no proven improvement has been made, and improving upon this bound is a fundamental open question in combinatorial optimization. In this paper, for the first time, we propose the Truncated Generalized Beta distribution (TGB) as the probability distribution of optimal tour lengths in a TSP. We then introduce an iterative TGB approach to obtain provably near-optimal approximations, i.e., a (1+$\frac{1}{2}(\frac{\alpha+1}{\alpha+2})^{K-1}$)-approximation, where $K$ is the number of iterations in TGB and $\alpha (\gg 1)$ is the shape parameter of TGB. The result approaches the true optimum as $K$ increases.
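The guaranteed ratio above is easy to evaluate numerically: at $K=1$ it recovers the Christofides bound of 3/2, and it decays toward 1 as the number of iterations grows.

```python
def tgb_ratio(alpha, K):
    """Approximation ratio 1 + (1/2) * ((alpha+1)/(alpha+2))**(K-1)
    guaranteed by the iterative TGB approach after K iterations."""
    return 1.0 + 0.5 * ((alpha + 1.0) / (alpha + 2.0)) ** (K - 1)
```

Since the base $(\alpha+1)/(\alpha+2)$ is below 1, the ratio is strictly decreasing in $K$, although a larger $\alpha$ slows the decay.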
- A function computation problem in directed acyclic networks has been considered in the literature, where a sink node wants to compute a target function with the inputs generated at multiple source nodes. The network links are error-free but capacity-limited, and the intermediate network nodes perform network coding. The target function is required to be computed with zero error. The computing rate of a network code is measured by the average number of times that the target function can be computed for one use of the network, i.e., each link in the network is used at most once. In the papers [1], [2], two cut-set bounds were proposed on the computing rate. However, we show in this paper that these bounds are not valid for general network function computation problems. We analyze the arguments that lead to the invalidity of these bounds and fix the issue with a new cut-set bound, where a new equivalence relation associated with the inputs of the target function is used. Our bound applies to general target functions and network topologies. We also show that our bound is tight for some special cases where the computing capacity is known. Moreover, some results in [11], [12] were proved using the invalid upper bound in [1], and hence their correctness needs further justification; we also justify their validity in this paper.
- The study of degrees of freedom (DoF) of multiuser channels has led to the development of important interference managing schemes, such as interference alignment (IA) and interference neutralization. However, while integer DoF have been widely studied in the literature, non-integer DoF are much less addressed, especially for channels with limited variability. In this paper, we study the non-integer DoF of the time-invariant multiple-input multiple-output (MIMO) interfering multiple access channel (IMAC) in the simple setting of two cells, $K$ users per cell, and $M$ antennas at all nodes. We provide the exact characterization of the maximum achievable sum DoF under the constraint of using a linear interference alignment (IA) scheme with symbol extension. Our results indicate that the integer sum DoF characterization $2MK/(K+1)$ achieved by the Suh-Ho-Tse scheme can be extended to the non-integer case only when $K \leq M^2$ for circularly-symmetric-signaling systems and $K \leq 2M^2$ for asymmetric-complex-signaling systems. These results are further extended to the time-invariant parallel MIMO IMAC with independent subchannels.
- A heterogeneous system, where small networks (e.g., small cell or WiFi) boost the system throughput under the umbrella of a large network (e.g., large cell), is a promising architecture for 5G wireless communication networks, where green and sustainable communication is also a key aspect. Renewable energy based communication via energy harvesting (EH) devices is one such green technology candidate. In this paper, we study an uplink transmission scenario under a heterogeneous network hierarchy, where each mobile user (MU) is powered by a sustainable energy supply, capable of both deterministic access to the large network via one private channel, and dynamic access to a small network with certain probability via one common channel shared by multiple MUs. Considering a general EH model, i.e., energy arrivals are time-correlated, we study an opportunistic transmission scheme and aim to maximize the average throughput for each MU, which jointly exploits the statistics and current states of the private channel, common channel, battery level, and EH rate. Applying a simple yet efficient "save-then-transmit" scheme, the throughput maximization problem is cast as a "rate-of-return" optimal stopping problem. The optimal stopping rule is proved to have a time-dependent threshold-based structure for the case with general Markovian system dynamics, and reduces to a pure threshold policy for the case with independent and identically distributed system dynamics. As performance benchmarks, the optimal power allocation scheme with conventional power supplies is also examined. Finally, numerical results are presented, and a new concept of "EH diversity" is discussed.
- Oct 15 2014 cs.SI arXiv:1410.3512v1The problem of cascading failures in cyber-physical systems is drawing much attention under different network models for a diverse range of applications. While many analytic results have been reported for the case of large networks, very few of them are readily applicable to finite-size networks. This paper studies cascading failures in finite-size geometric networks where the number of nodes is on the order of tens or hundreds, as in many real-life networks. First, the impact of the tolerance parameter on network resiliency is investigated. We quantify the network's reaction to initial disturbances of different sizes by measuring the damage imposed on the network. Lower and upper bounds on the number of failures are derived to characterize such damage. Such finite-size analysis reveals the decisiveness and criticality of taking action within the first few stages of failure propagation in preventing a cascade. By studying the trend of the bounds as the number of nodes increases, we observe a phase transition phenomenon in terms of the tolerance parameter. The critical value of the tolerance parameter, known as the threshold, is further derived. The findings of this paper, in particular, shed light on how to choose the tolerance parameter appropriately so that a cascade of failures can be avoided.
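A toy load-capacity cascade makes the role of the tolerance parameter concrete (degree-based initial loads and equal load shedding are simplifying assumptions for this sketch, not the paper's exact model):

```python
def cascade(adj, tolerance, seed_failures):
    """Toy cascade: each node's initial load is its degree, its capacity
    is (1 + tolerance) * load, and a failed node's load is shed equally
    onto its surviving neighbours. Returns the set of failed nodes."""
    load = {v: float(len(nbrs)) for v, nbrs in adj.items()}
    cap = {v: (1.0 + tolerance) * load[v] for v in adj}
    failed = set()
    frontier = list(seed_failures)
    while frontier:
        v = frontier.pop()
        if v in failed:
            continue
        failed.add(v)
        alive = [u for u in adj[v] if u not in failed]
        for u in alive:
            load[u] += load[v] / len(alive)   # shed load onto survivors
            if load[u] > cap[u]:
                frontier.append(u)            # overload -> new failure
    return failed
```

On a small star graph, a single leaf failure either stays contained or takes down the whole network depending on the tolerance, mirroring the phase transition described above.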
- Jul 04 2014 cs.DS arXiv:1407.0892v1We study classical deadline-based preemptive scheduling of tasks in a computing environment equipped with both dynamic speed scaling and sleep state capabilities: Each task is specified by a release time, a deadline and a processing volume, and has to be scheduled on a single, speed-scalable processor that is supplied with a sleep state. In the sleep state, the processor consumes no energy, but a constant wake-up cost is required to transition back to the active state. In contrast to speed scaling alone, the addition of a sleep state makes it sometimes beneficial to accelerate the processing of tasks in order to transition the processor to the sleep state for longer amounts of time and incur further energy savings. The goal is to output a feasible schedule that minimizes the energy consumption. Since the introduction of the problem by Irani et al. [16], its exact computational complexity has been repeatedly posed as an open question (see e.g. [2,8,15]). The currently best known upper and lower bounds are a 4/3-approximation algorithm and NP-hardness due to [2] and [2,17], respectively. We close the aforementioned gap between the upper and lower bound on the computational complexity of speed scaling with sleep state by presenting a fully polynomial-time approximation scheme for the problem. The scheme is based on a transformation to a non-preemptive variant of the problem, and a discretization that exploits a carefully defined lexicographical ordering among schedules.
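The core trade-off that the sleep state introduces shows up already in the per-gap decision of whether to idle or to sleep and pay the wake-up cost (illustrative constants; the actual algorithm also chooses processing speeds and is a full approximation scheme):

```python
def gap_cost(gap, idle_power, wakeup_cost):
    """Energy cost of an idle gap of length `gap`: either idle at the
    processor's base power, or sleep (zero power) and pay the wake-up
    cost on return -- whichever is cheaper."""
    return min(gap * idle_power, wakeup_cost)

def should_sleep(gap, idle_power, wakeup_cost):
    """Sleep exactly when idling would cost more than waking up."""
    return gap * idle_power > wakeup_cost
```

This is why accelerating tasks can pay off: running faster (at higher instantaneous power) lengthens the idle gaps, and long gaps are exactly the ones where sleeping beats idling.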
- May 13 2014 cs.LO arXiv:1405.2409v2G4LTL-ST automatically synthesizes control code for industrial Programmable Logic Controllers (PLCs) from timed behavioral specifications of input-output signals. These specifications are expressed in a linear temporal logic (LTL) extended with non-linear arithmetic constraints and timing constraints on signals. G4LTL-ST generates code in IEC 61131-3-compatible Structured Text, which is compiled into executable code for a large number of industrial field-level devices. The synthesis algorithm of G4LTL-ST implements pseudo-Boolean abstraction of data constraints and the compilation of timing constraints into LTL, together with a counterstrategy-guided abstraction refinement synthesis loop. Since temporal logic specifications are notoriously difficult to use in practice, G4LTL-ST supports engineers in specifying realizable control problems by suggesting suitable restrictions on the behavior of the control environment from failed synthesis attempts.
- The Message Passing Interface (MPI) plays an important role in parallel computing. Many parallel applications are implemented as MPI programs. Existing bug detection methods for MPI programs fall short of providing both input coverage and non-determinism coverage, leading to missed bugs. In this paper, we employ symbolic execution to ensure input coverage, and propose an on-the-fly scheduling algorithm to reduce the interleaving explorations needed for non-determinism coverage, while ensuring soundness and completeness. We have implemented our approach as a tool, called MPISE, which can automatically detect deadlocks and runtime bugs in MPI programs. The results of experiments on benchmark programs and real-world MPI programs indicate that MPISE finds bugs effectively and efficiently. In addition, our tool provides diagnostic information and a replay mechanism to help users understand bugs.
- Feb 14 2014 cs.CL arXiv:1402.3040v1Module-Attribute Representation of Verbal Semantics (MARVS) is a theory of the representation of verbal semantics that is based on Mandarin Chinese data (Huang et al. 2000). In the MARVS theory, there are two different types of modules: Event Structure Modules and Role Modules. There are also two sets of attributes: Event-Internal Attributes and Role-Internal Attributes, which are linked to the Event Structure Module and the Role Module, respectively. In this study, we focus on four transitive verbs, namely chi1 (eat), wan2 (play), huan4 (change) and shao1 (burn), and explore their event structures using the MARVS theory.
- Modern software systems may exhibit nondeterministic behavior due to many unpredictable factors. In this work, we propose the node coverage game, a two-player turn-based game played on a finite game graph, as a formalization of the problem of testing such systems. Each node in the graph represents a functional equivalence class of the software under test (SUT). One player, the tester, wants to maximize the node coverage, measured by the number of nodes visited when exploring the game graph, while his opponent, the SUT, wants to minimize it. An optimal test would maximize the coverage, and it is an interesting problem to find the maximal number of nodes that the tester can guarantee to visit, irrespective of the responses of the SUT. We show that the decision problem of whether this guarantee is less than a given number is NP-complete. We then present techniques for testing nondeterministic SUTs with existing test suites for deterministic models. Finally, we report on our implementation and experiments.
- Consider a systematic linear code where some (local) parity symbols depend on few prescribed symbols, while other (heavy) parity symbols may depend on all data symbols. Local parities allow any single erased symbol to be recovered quickly, while heavy parities provide tolerance to a large number of simultaneous erasures. A code as above is maximally recoverable if it corrects all erasure patterns which are information-theoretically recoverable given the code topology. In this paper we present explicit families of maximally recoverable codes with locality. We also initiate the study of the trade-off between maximal recoverability and alphabet size.
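Single-symbol local repair from an XOR parity, the first property above, takes only a few lines (a simplified sketch over GF(2); the explicit constructions in the paper use larger fields, and the heavy parities that handle multi-erasure patterns are not shown):

```python
def xor(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, x in enumerate(b):
            out[i] ^= x
    return bytes(out)

def recover_local(group, erased_idx):
    """Recover one erased symbol in a local group [d0, ..., dk, parity],
    where parity = XOR of the data symbols: XOR of the survivors
    reproduces the missing one. Single-erasure local repair only."""
    survivors = [b for i, b in enumerate(group) if i != erased_idx]
    return xor(survivors)
```

This is exactly why local parities make single-symbol repair cheap: only the small group, not the whole codeword, has to be read.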
- This work examines the use of two-way training to efficiently discriminate the channel estimation performances at a legitimate receiver (LR) and an unauthorized receiver (UR) in a multiple-input multiple-output (MIMO) wireless system. This work improves upon the original discriminatory channel estimation (DCE) scheme proposed by Chang et al., where multiple stages of feedback and retraining were used. While most studies on physical layer secrecy are under the information-theoretic framework and focus directly on the data transmission phase, studies on DCE focus on the training phase and aim to provide a practical signal processing technique to discriminate between the channel estimation performances at the LR and the UR. A key feature of DCE designs is the insertion of artificial noise (AN) in the training signal to degrade the channel estimation performance at the UR. To do so, AN must be placed in a carefully chosen subspace based on the transmitter's knowledge of the LR's channel in order to minimize its effect on the LR. In this paper, we adopt the idea of two-way training that allows both the transmitter and the LR to send training signals to facilitate channel estimation at both ends. Both reciprocal and non-reciprocal channels are considered and a two-way DCE scheme is proposed for each scenario. For mathematical tractability, we assume that all terminals employ the linear minimum mean square error criterion for channel estimation. Based on the mean square error (MSE) of the channel estimates at all terminals, we formulate and solve an optimization problem where the optimal power allocation between the training signal and AN is found by minimizing the MSE of the LR's channel estimate subject to a constraint on the MSE achievable at the UR. Numerical results show that the proposed DCE schemes can effectively discriminate between the channel estimation, and hence the data detection, performances at the LR and the UR.
- Dec 27 2012 cs.CV arXiv:1212.6094v1 Learning Mahalanobis distance metrics in a high-dimensional feature space is very difficult, especially when structural sparsity and low rank are enforced to improve computational efficiency in the testing phase. This paper addresses both aspects by an ensemble metric learning approach that consists of sparse block diagonal metric ensembling and joint metric learning as two consecutive steps. The former step pursues a highly sparse block diagonal metric by selecting effective feature groups, while the latter further exploits correlations between the selected feature groups to obtain an accurate and low-rank metric. Our algorithm considers all pairwise or triplet constraints generated from training samples with explicit class labels, and possesses good scalability with respect to increasing feature dimensionality and growing data volumes. Its applications to face verification and retrieval outperform existing state-of-the-art methods in accuracy while retaining high efficiency.
- This paper studies the optimal power allocation for outage minimization in point-to-point fading channels with the energy-harvesting constraints and channel distribution information (CDI) at the transmitter. Both the cases with non-causal and causal energy state information (ESI) are considered, which correspond to the energy harvesting rates being known and unknown prior to the transmissions, respectively. For the non-causal ESI case, the average outage probability minimization problem over a finite horizon is shown to be non-convex for a large class of practical fading channels. However, the globally optimal "offline" power allocation is obtained by a forward search algorithm with at most $N$ one-dimensional searches, and the optimal power profile is shown to be non-decreasing over time and have an interesting "save-then-transmit" structure. In particular, for the special case of N=1, our result revisits the classic outage capacity for fading channels with uniform power allocation. Moreover, for the case with causal ESI, we propose both the optimal and suboptimal "online" power allocation algorithms, by applying the technique of dynamic programming and exploring the structure of optimal offline solutions, respectively.
- Oct 10 2012 cs.SY arXiv:1210.2449v1 Our goal is to achieve a high degree of fault tolerance through the control of safety-critical systems. This reduces to solving a game between a malicious environment that injects failures and a controller who tries to establish correct behavior. We suggest a new control objective for such systems that offers a better balance between complexity and precision: we seek systems that are k-resilient. In order to be k-resilient, a system needs to be able to rapidly recover from a small number, up to k, of local faults infinitely many times, provided that blocks of up to k faults are separated by short recovery periods in which no fault occurs. k-resilience is a simple but powerful abstraction from the precise distribution of local faults, yet much more refined than the traditional objective of maximizing the number of tolerated local faults. We argue why we believe this to be the right level of abstraction for safety-critical systems when local faults are few and far between. We show that the computational complexity of constructing optimal control with respect to resilience is low, and demonstrate feasibility through an implementation and experimental results.
- Oct 05 2012 cs.NI arXiv:1210.1505v2 Recent collapses of SIP servers in carrier networks indicate two potential problems of SIP: (1) the current SIP design does not easily scale up to large network sizes, and (2) the built-in SIP overload control mechanism cannot handle overload conditions effectively. In order to help carriers prevent widespread SIP network failure effectively, this chapter presents a systematic investigation of current state-of-the-art overload control algorithms. To achieve this goal, the chapter first reviews two basic mechanisms of SIP, and summarizes numerous experimental results reported in the literature that demonstrate the impact of overload on SIP networks. After surveying the approaches for modeling the dynamic behaviour of SIP networks experiencing overload, the chapter presents a comparison and assessment of different types of SIP overload control solutions. Finally, it outlines some research opportunities for managing SIP overload control.
- In this paper, the diamond relay channel is considered, which consists of one source-destination pair and two relay nodes connected with rate-limited out-of-band conferencing links. In particular, we focus on the half-duplex alternative relaying strategy, in which the two relays operate alternatively over time. With different amounts of delay, two conferencing strategies are proposed, each of which can be implemented by either a general two-side conferencing scheme (for which both of the two conferencing links are used) or a special-case one-side conferencing scheme (for which only one of the two conferencing links is used). Based on the most general two-side conferencing scheme, we derive the achievable rates by using the decode-and-forward (DF) and amplify-and-forward (AF) relaying schemes, and show that these rate maximization problems are convex. By further exploiting the properties of the optimal solutions, the simpler one-side conferencing is shown to be as good as the two-side conferencing in terms of the achievable rates under arbitrary channel conditions. Based on this, the DF rate in closed-form is obtained, and the principle for choosing which of the two conferencing links to use for one-side conferencing is also established. Moreover, the DF scheme is shown to be capacity-achieving under certain conditions with even one-side conferencing. For the AF relaying scheme, one-side conferencing is shown to be sub-optimal in general. Finally, numerical results are provided to validate our analysis.
- This work examines the use of two-way training in multiple-input multiple-output (MIMO) wireless systems to discriminate the channel estimation performances between a legitimate receiver (LR) and an unauthorized receiver (UR). This work extends the previously proposed discriminatory channel estimation (DCE) scheme that allows only the transmitter to send training signals. The goal of DCE is to minimize the channel estimation error at LR while requiring the channel estimation error at UR to remain beyond a certain level. If the training signal is sent only by the transmitter, the performance discrimination between LR and UR will be limited since the training signals help both receivers estimate their downlink channels. In this work, we consider instead the two-way training methodology that allows both the transmitter and LR to send training signals. In this case, the training signal sent by LR helps the transmitter obtain knowledge of the transmitter-to-LR channel, but does not help UR estimate its downlink channel (i.e., the transmitter-to-UR channel). With transmitter knowledge of the estimated transmitter-to-LR channel, artificial noise (AN) can then be embedded in the null space of the transmitter-to-LR channel to disrupt UR's channel estimation without severely degrading the channel estimation at LR. Based on these ideas, two-way DCE training schemes are developed for both reciprocal and non-reciprocal channels. The optimal power allocation between training and AN signals is devised under both average and individual power constraints. Numerical results are provided to demonstrate the efficacy of the proposed two-way DCE training schemes.
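The null-space AN idea can be sketched in a deliberately simplified toy (real-valued channels, two transmit antennas; the channel values are assumptions for illustration, not from the paper): AN sent along the null space of the transmitter-to-LR channel vanishes at LR but generally leaks into any other receiver's channel.

```python
import random

h = [0.8, -0.5]                 # assumed (estimated) transmitter-to-LR channel
an = [-h[1], h[0]]              # AN direction: the null space of h in 2 dimensions

# LR sees no AN at all: h is orthogonal to the AN direction by construction.
assert abs(h[0] * an[0] + h[1] * an[1]) < 1e-12

g = [random.gauss(0, 1), random.gauss(0, 1)]  # UR's (unknown) channel
leak = g[0] * an[0] + g[1] * an[1]            # UR generally receives AN power
```

In the complex MIMO setting of the paper the same principle applies with a projection onto the null space of the estimated LR channel matrix; the toy only shows why the AN degrades UR's training without affecting LR.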
- This paper considers the use of energy harvesters, instead of conventional time-invariant energy sources, in wireless cooperative communication. For the purpose of exposition, we study the classic three-node Gaussian relay channel with decode-and-forward (DF) relaying, in which the source and relay nodes transmit with power drawn from energy-harvesting (EH) sources. Assuming a deterministic EH model under which the energy arrival time and the harvested amount are known prior to transmission, the throughput maximization problem over a finite horizon of $N$ transmission blocks is investigated. In particular, two types of data traffic with different delay constraints are considered: delay-constrained (DC) traffic (for which only one-block decoding delay is allowed at the destination) and no-delay-constrained (NDC) traffic (for which arbitrary decoding delay up to $N$ blocks is allowed). For the DC case, we show that the joint source and relay power allocation over time is necessary to achieve the maximum throughput, and propose an efficient algorithm to compute the optimal power profiles. For the NDC case, although the throughput maximization problem is non-convex, we prove the optimality of a separation principle for the source and relay power allocation problems, based upon which a two-stage power allocation algorithm is developed to obtain the optimal source and relay power profiles separately. Furthermore, we compare the DC and NDC cases, and obtain the sufficient and necessary conditions under which the NDC case performs strictly better than the DC case. It is shown that NDC transmission is able to exploit a new form of diversity arising from the independent source and relay energy availability over time in cooperative communication, termed "energy diversity", even with time-invariant channels.
- Network codes designed specifically for distributed storage systems have the potential to provide dramatically higher storage efficiency for the same availability. One main challenge in the design of such codes is the exact repair problem: if a node storing encoded information fails, in order to maintain the same level of reliability we need to create encoded information at a new node. One of the main open problems in this emerging area has been the design of simple coding schemes that allow exact and low cost repair of failed nodes and have high data rates. In particular, all prior known explicit constructions have data rates bounded by 1/2. In this paper we introduce the first family of distributed storage codes that have simple look-up repair and can achieve arbitrarily high rates. Our constructions are very simple to implement and perform exact repair by simple XORing of packets. We experimentally evaluate the proposed codes in a realistic cloud storage simulator and show significant benefits in both performance and reliability compared to replication and standard Reed-Solomon codes.
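The "exact repair by simple XORing of packets" can be illustrated with a minimal sketch (a toy, not the paper's actual high-rate construction): three nodes store packets a, b, and their XOR parity, and any single failed node is rebuilt by XORing the two survivors.

```python
import os

a = os.urandom(16)                                  # packet on node 1
b = os.urandom(16)                                  # packet on node 2
parity = bytes(x ^ y for x, y in zip(a, b))         # packet on node 3 (XOR parity)

# Node 1 fails: recreate its packet by XORing the surviving packets.
repaired = bytes(x ^ y for x, y in zip(b, parity))
assert repaired == a
```

This toy has rate 2/3 already above the 1/2 barrier mentioned for prior explicit constructions only in the trivial single-failure sense; the paper's contribution is making look-up XOR repair work at arbitrarily high rates.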
- Consider a linear [n,k,d]_q code C. We say that the i-th coordinate of C has locality r, if the value at this coordinate can be recovered from accessing some other r coordinates of C. Data storage applications require codes with small redundancy, low locality for information coordinates, large distance, and low locality for parity coordinates. In this paper we carry out an in-depth study of the relations between these parameters. We establish a tight bound for the redundancy n-k in terms of the message length, the distance, and the locality of information coordinates. We refer to codes attaining the bound as optimal. We prove some structure theorems about optimal codes, which are particularly strong for small distances. This gives a fairly complete picture of the tradeoffs between codeword length, worst-case distance and locality of information symbols. We then consider the locality of parity check symbols and erasure correction beyond worst case distance for optimal codes. Using our structure theorem, we obtain a tight bound for the locality of parity symbols possible in such codes for a broad class of parameter settings. We prove that there is a tradeoff between having good locality for parity checks and the ability to correct erasures beyond the minimum distance.
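A hedged toy over GF(2) makes the locality definition concrete (an illustrative example, not a code from the paper): four data bits split into two groups, each group with a local XOR parity, so every information coordinate has locality r = 2.

```python
def encode(d):                        # d = [d0, d1, d2, d3]
    return d + [d[0] ^ d[1],          # local parity for group {d0, d1}
                d[2] ^ d[3]]          # local parity for group {d2, d3}

c = encode([1, 0, 1, 1])
# d0 erased: read only r = 2 other coordinates (its group mate d1 and
# the group's local parity) to recover it.
assert c[1] ^ c[4] == c[0]
```

The paper's bounds quantify how small such an r can be made without sacrificing redundancy n-k and distance d.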
- It is well known that an (n,k) code can be used to store 'k' units of information in 'n' unit-capacity disks of a distributed data storage system. If the code used is maximum distance separable (MDS), then the system can tolerate any (n-k) disk failures, since the original information can be recovered from any k surviving disks. The focus of this paper is the design of a systematic MDS code with the additional property that a single disk failure can be repaired with minimum repair bandwidth, i.e., with the minimum possible amount of data to be downloaded for recovery of the failed disk. Previously, a lower bound of (n-1)/(n-k) units on the repair bandwidth for a single disk failure in an (n,k) MDS code was established by Dimakis et al. Recently, the existence of asymptotic codes achieving this lower bound for arbitrary (n,k) has been established by drawing connections to interference alignment. While the existence of asymptotic constructions achieving this lower bound has been shown, finite code constructions achieving this lower bound existed in previous literature only for the special (high-redundancy) scenario where $k \leq \max(n/2,3)$. The question of existence of finite codes for arbitrary values of (n,k) achieving the lower bound on the repair bandwidth remained open. In this paper, by using permutation coding sub-matrices, we provide the first known finite MDS code which achieves the optimal repair bandwidth of (n-1)/(n-k) for arbitrary (n,k), for recovery of a failed systematic disk. We also generalize our permutation matrix based constructions by developing a novel framework for repair-bandwidth-optimal MDS codes based on the idea of subspace interference alignment - a concept previously introduced by Suh and Tse in the context of wireless cellular networks.
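To put the (n-1)/(n-k) bound in perspective, a small helper (illustrative, with example parameters chosen here) compares it against the k units a naive MDS repair would download:

```python
from fractions import Fraction

def repair_bandwidth_bound(n, k):
    """Lower bound on single-failure repair bandwidth for an (n,k) MDS
    code, in units of one node's storage: (n-1)/(n-k)."""
    return Fraction(n - 1, n - k)

# e.g. a (14,10) code: download 13/4 = 3.25 units instead of the
# k = 10 units required when reconstructing the whole file.
assert repair_bandwidth_bound(14, 10) == Fraction(13, 4)
assert repair_bandwidth_bound(4, 2) == Fraction(3, 2)
```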
- In this correspondence, we consider a half-duplex large relay network, which consists of one source-destination pair and $N$ relay nodes, each of which is connected with a subset of the other relays via signal-to-noise ratio (SNR)-limited out-of-band conferencing links. The asymptotic achievable rates of two basic relaying schemes with the "$p$-portion" conferencing strategy are studied: For the decode-and-forward (DF) scheme, we prove that the DF rate scales as $\mathcal{O} (\log (N))$; for the amplify-and-forward (AF) scheme, we prove that it asymptotically achieves the capacity upper bound in some interesting scenarios as $N$ goes to infinity.
- In this work, we investigate the optimal dynamic packet scheduling policy in a wireless relay network (WRN). We model this network by two sets of parallel queues, representing the subscriber stations (SS) and the relay stations (RS), with random link connectivity. An optimal policy minimizes, in the stochastic ordering sense, a cost function of the SS and RS queue sizes. We prove that, in a system with symmetrical connectivity and arrival distributions, a policy that balances the lengths of all queues in the system at every time slot is optimal. We use stochastic dominance and coupling arguments in our proof. We also provide a low-overhead algorithm for implementing the optimal policy.
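The balancing idea can be sketched as a simple per-slot rule (an assumed simplification of the paper's policy, for illustration only): among the queues that are connected and non-empty in the current slot, serve one of maximal length, which nudges all queue lengths toward equality.

```python
def balanced_schedule(queues, connected):
    """Return the index of a connected, non-empty queue of maximal
    length, or None if no queue can be served this slot."""
    eligible = [i for i in connected if queues[i] > 0]
    if not eligible:
        return None
    return max(eligible, key=lambda i: queues[i])

# Queue 1 is the longest but disconnected this slot, so queue 0 is served.
assert balanced_schedule([3, 5, 2], connected=[0, 2]) == 0
assert balanced_schedule([3, 5, 2], connected=[0, 1, 2]) == 1
```

The paper's result is stronger than this greedy sketch suggests: under symmetric connectivity and arrivals, such balancing is optimal in the stochastic ordering sense, not merely as a heuristic.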
- We consider a half-duplex diamond relay channel, which consists of one source-destination pair and two relay nodes connected with two-way rate-limited out-of-band conferencing links. Three basic schemes and their achievable rates are studied: For the decode-and-forward (DF) scheme, we obtain the achievable rate by letting the source send a common message and two private messages; for the compress-and-forward (CF) scheme, we exploit the conferencing links to help with the compression of the received signals, or to exchange messages intended for the second hop to introduce certain cooperation; for the amplify-and-forward (AF) scheme, we study the optimal combining strategy between the received signals from the source and the conferencing link. Moreover, we show that these schemes could achieve the capacity upper bound under certain conditions. Finally, we evaluate the various rates for the Gaussian case with numerical results.