results for au:He_J in:cs

- Dec 15 2017 cs.CV arXiv:1712.05114v1 This paper proposes a novel and efficient method for building a Computer-Aided Diagnosis (CAD) system for lung nodule detection based on Computed Tomography (CT). The task was treated as an Object Detection on Video (VID) problem, imitating how a radiologist reads CT scans. A lung nodule detector was trained to automatically learn nodule features from still images and detect lung nodule candidates with both high recall and accuracy. Unlike previous work, which used 3-dimensional information around the nodule to reduce false positives, we propose two simple but efficient methods, Multi-slice propagation (MSP) and Motionless-guide suppression (MLGS), which analyze the sequence information of CT scans to reduce false negatives and suppress false positives. We evaluated our method on the open-source LUNA16 dataset, which contains 888 CT scans, and obtained a state-of-the-art result (Free-Response Receiver Operating Characteristic score of 0.892) at a detection speed (end to end within 20 seconds per patient on a single NVIDIA GTX 1080) much faster than that of existing methods.
- Recent research has demonstrated the ability to estimate gaze on mobile devices by performing inference on images from the phone's front-facing camera, without requiring specialized hardware. While this offers wide potential applications such as human-computer interaction, medical diagnosis and accessibility (e.g., hands-free gaze as input for patients with motor disorders), current methods are limited because they rely on collecting data from real users, a tedious and expensive process that is hard to scale across devices. There have been some attempts to synthesize eye-region data using 3D models that can simulate various head poses and camera settings; however, these lack realism. In this paper, we improve upon a recently suggested method and propose a generative adversarial framework to generate a large dataset of high-resolution, colorful images with high diversity (e.g., in subjects, head pose, camera settings) and realism, while simultaneously preserving the accuracy of the gaze labels. The proposed approach operates on extended regions of the eye and even completes missing parts of the image. Using this rich synthesized dataset, and without using any additional training data from real users, we demonstrate improvements over the state of the art for estimating 2D gaze position on mobile devices. We further demonstrate cross-device generalization of model performance, as well as improved robustness to diverse head poses, blur and distance.
- Nov 28 2017 cs.IR arXiv:1711.09559v1 What are the intents or goals behind human interactions with image search engines? Knowing why people search for images is of major concern to Web image search engines because user satisfaction may vary as intent varies. Previous analyses of image search behavior have mostly been query-based, focusing on what images people search for, rather than intent-based, that is, why people search for images. To date, there has been no thorough investigation of how different image search intents affect users' search behavior. In this paper, we address the following questions: (1) Why do people search for images in text-based Web image search systems? (2) How does image search behavior change with user intent? (3) Can we predict user intent effectively from interactions during the early stages of a search session? To this end, we conduct both a lab-based user study and a commercial search log analysis. We show that user intents in image search can be grouped into three classes: Explore/Learn, Entertain, and Locate/Acquire. Our lab-based user study reveals different user behavior patterns under these three intents, such as first click time, query reformulation, dwell time and mouse movement on the result page. Based on user interaction features during the early stages of an image search session, that is, before mouse scroll, we develop an intent classifier that achieves promising results for classifying intents into our three intent classes. Given that all features can be obtained online and unobtrusively, the predicted intents can provide guidance for choosing ranking methods immediately after scrolling.
- Researchers may describe different aspects of past scientific publications in their own publications, and these descriptions keep changing as science evolves. The diverse and changing descriptions of a publication (i.e., its citation context) characterize the impact and contributions of that publication. In this article, we aim to provide an approach to understanding the changing and complex roles of a publication as characterized by its citation context. We describe a method to represent a publication's dynamic role in the science community across different periods as a sequence of vectors, obtained by training temporal embedding models. The temporal representations can be used to quantify how much the roles of publications have changed and to interpret how they changed. Our study in the biomedical domain shows that our metric of the change in publications' roles is stable over time at the population level but significantly distinguishes individuals. We also illustrate the interpretability of our method with a concrete example.
- Nov 06 2017 cs.CR arXiv:1711.01030v2 At present, the cloud storage used in searchable symmetric encryption (SSE) schemes is provided in a private way, and so cannot be seen as a true cloud. Moreover, the cloud server is assumed to be trustworthy: it always returns a search result to the user, even when that result is not correct. To genuinely resist such a malicious adversary and accelerate the use of the data, it is necessary to store the data on a public chain, which can be seen as a decentralized system. As the amount of data increases, the search problem becomes more and more intractable, because no effective solution currently exists. In this paper, we begin by pointing out the importance of storing the data on a public chain. We then construct a novel model of SSE using blockchain (SSE-using-BC) and give its security definition, ensuring the privacy of the data and improving the search efficiency. Depending on the size of the data, we consider two different cases and propose two corresponding schemes. Lastly, security and performance analyses show that our scheme is feasible and secure.
- Oct 17 2017 cs.CE arXiv:1710.05781v1 The simulation of electrical discharges has been attracting a great deal of attention. In such simulations, the electric field computation dominates the computational time. In this paper, we propose a fast tree algorithm that reduces the time complexity from $O(N^2)$ (direct summation) to $O(N\log N)$. The implementation details are discussed and the time complexity is analyzed. A rigorous error estimation shows that the error of the tree algorithm decays exponentially with the number of truncation terms and can be controlled adaptively. Numerical examples are presented to validate the accuracy and efficiency of the algorithm.
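The core treecode idea can be illustrated with a minimal sketch (our own hypothetical 1D example with a $1/r$ kernel and a monopole approximation, not the paper's implementation): charges in a cell far from the evaluation point are replaced by their total charge at the charge-weighted centre, which is what cuts the cost from $O(N^2)$ toward $O(N\log N)$.

```python
import numpy as np

def build_tree(idx, pos, q, lo, hi, leaf=4):
    """Binary tree over points in [lo, hi]; each node stores total charge and centre of charge."""
    node = {"lo": lo, "hi": hi, "idx": idx,
            "q": q[idx].sum(), "com": (pos[idx] * q[idx]).sum() / q[idx].sum()}
    if len(idx) > leaf:
        mid = 0.5 * (lo + hi)
        node["kids"] = [build_tree(part, pos, q, a, b)
                        for part, a, b in ((idx[pos[idx] < mid], lo, mid),
                                           (idx[pos[idx] >= mid], mid, hi))
                        if len(part)]
    return node

def potential(node, x, pos, q, theta=0.4):
    """Evaluate sum_j q_j / |x - pos_j|, opening a cell only when it is nearby."""
    d = abs(x - node["com"])
    if "kids" not in node:  # leaf: direct sum, skipping the target point itself
        return sum(q[j] / abs(x - pos[j]) for j in node["idx"] if pos[j] != x)
    if node["hi"] - node["lo"] < theta * d:
        return node["q"] / d  # far cell: monopole approximation
    return sum(potential(k, x, pos, q, theta) for k in node["kids"])

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, 200)
q = rng.uniform(0.5, 1.0, 200)  # positive charges
root = build_tree(np.arange(200), pos, q, 0.0, 1.0)
tree_phi = np.array([potential(root, x, pos, q) for x in pos])
direct_phi = np.array([sum(q[j] / abs(x - pos[j]) for j in range(200) if pos[j] != x)
                       for x in pos])
rel_err = np.abs(tree_phi - direct_phi) / np.abs(direct_phi)
```

Tightening `theta` (or keeping more multipole terms, as the paper's adaptive truncation does) trades speed for accuracy.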
- Oct 12 2017 cs.HC arXiv:1710.03755v1 Pedestrian safety continues to be a significant concern in urban communities, and pedestrian distraction is emerging as one of the main causes of grave and fatal accidents involving pedestrians. The advent of sophisticated mobile and wearable devices, equipped with high-precision on-board sensors capable of measuring fine-grained user movements and context, provides a tremendous opportunity for designing effective pedestrian safety systems and applications. Accurate and efficient recognition of pedestrian distractions in real time, given the memory, computation and communication limitations of these devices, however, remains the key technical challenge in the design of such systems. Earlier research efforts in pedestrian distraction detection using data available from mobile and wearable devices have primarily focused on achieving high detection accuracy, resulting in designs that are either resource-intensive and unsuitable for implementation on mainstream mobile devices, computationally slow and not useful for real-time pedestrian safety applications, or reliant on specialized hardware and thus less likely to be adopted by most users. In the quest for a pedestrian safety system that achieves a favorable balance between computational efficiency, detection accuracy, and energy consumption, this paper makes the following main contributions: (i) the design of a novel complex activity recognition framework which employs motion data available from users' mobile and wearable devices and a lightweight frequency matching approach to accurately and efficiently recognize complex distraction-related activities, and (ii) a comprehensive comparative evaluation of the proposed framework against well-known complex activity recognition techniques in the literature, using data collected from human subject pedestrians and prototype implementations on commercially available mobile and wearable devices.
- Oct 11 2017 cs.CR arXiv:1710.03656v1 Smartwatches enable many novel applications and are fast gaining popularity. However, the presence of a diverse set of on-board sensors provides an additional attack surface for malicious software and services on these devices. In this paper, we investigate the feasibility of key press inference attacks on handheld numeric touchpads that use smartwatch motion sensors as a side channel. We consider different typing scenarios, and propose multiple attack approaches that exploit the characteristics of the observed wrist movements to infer individual key presses. Experimental evaluation using commercial off-the-shelf smartwatches and smartphones shows that key press inference using smartwatch motion sensors is not only fairly accurate, but also comparable with similar attacks using smartphone motion sensors. Additionally, hand movements captured by a combination of both smartwatch and smartphone motion sensors yield better inference accuracy than either device considered individually.
- Aug 07 2017 cs.CV arXiv:1708.01580v1 With the rapid progress of China's urbanization, research on the automatic detection of land-use patterns in Chinese cities is of substantial importance. Deep learning is an effective method for extracting image features. To take advantage of deep learning in detecting urban land-use patterns, we applied a transfer-learning-based remote-sensing image approach to extract and classify features. Using the Google TensorFlow framework, a powerful convolutional neural network (CNN) library was created. First, the transferred model was pre-trained on ImageNet, one of the largest object-image data sets, to fully develop its ability to generate feature vectors for standard remote-sensing land-cover data sets (UC Merced and WHU-SIRI). Then, a random-forest-based classifier was constructed and trained on these generated vectors to classify the actual urban land-use pattern at the scale of traffic analysis zones (TAZs). To avoid the multi-scale effect of remote-sensing imagery, a large random patch (LRP) method was used. The proposed method efficiently obtained acceptable accuracy (OA = 0.794, Kappa = 0.737) for the study area. In addition, the results show that the proposed method can effectively overcome the multi-scale effect that occurs in urban land-use classification at the irregular land-parcel level. The proposed method can help planners monitor dynamic urban land use and evaluate the impact of urban-planning schemes.
- Correlated topic modeling has been limited to small model and problem sizes due to its high computational cost and poor scaling. In this paper, we propose a new model which learns compact topic embeddings and captures topic correlations through the closeness between the topic vectors. Our method enables efficient inference in the low-dimensional embedding space, reducing previous cubic or quadratic time complexity to linear w.r.t. the number of topics. We further speed up variational inference with a fast sampler that exploits the sparsity of topic occurrence. Extensive experiments show that our approach is capable of handling model and data scales several orders of magnitude larger than in existing work on correlated topics, without sacrificing modeling quality, providing competitive or superior performance in document classification and retrieval.
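The linear-time claim can be pictured with a toy sketch (our own illustration under assumed notation, not the authors' code): once topics live as vectors in a low-dimensional space, scoring all topics for a document costs one matrix-vector product, $O(Kd)$ in the number of topics $K$, rather than manipulating a $K \times K$ covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 1000, 20                    # many topics, low-dimensional embeddings
U = rng.normal(size=(K, d))        # topic embedding vectors (one row per topic)
v = rng.normal(size=d)             # a document's embedding

scores = U @ v                     # O(K*d): linear in the number of topics
p = np.exp(scores - scores.max())  # numerically stable softmax over topics
p /= p.sum()

# topic-topic correlation is read off from closeness of the topic vectors,
# computed on demand instead of storing a dense K x K matrix
sim_01 = float(U[0] @ U[1])
```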
- Jun 01 2017 cs.CV arXiv:1705.10861v1 We develop a novel framework for action localization in videos. We propose the Tube Proposal Network (TPN), which can generate generic, class-independent, video-level tubelet proposals in videos. The generated tubelet proposals can be utilized in various video analysis tasks, including recognizing and localizing actions in videos. In particular, we integrate these generic tubelet proposals into a unified temporal deep network for action classification. Compared with other methods, our generic tubelet proposal method is accurate, general, and fully differentiable under a smooth L1 loss function. We demonstrate the performance of our algorithm on the standard UCF-Sports, J-HMDB21, and UCF-101 datasets. Our class-independent TPN outperforms other tubelet generation methods, and our unified temporal deep network achieves state-of-the-art localization results on all three datasets.
- Impervious surface area is a direct consequence of urbanization, and it also plays an important role in urban planning and environmental management. With the rapid technical development of remote sensing, monitoring urban impervious surfaces via high-spatial-resolution (HSR) images has recently attracted unprecedented attention. Traditional multi-class models are inefficient for impervious surface extraction because they require exhaustively labeling all needed and unneeded classes that occur in the image. Therefore, we need a reliable one-class model that can classify one specific land-cover type without labeling other classes. In this study, we investigate several one-class classifiers, namely Presence and Background Learning (PBL), Positive Unlabeled Learning (PUL), OCSVM, BSVM and MAXENT, to extract urban impervious surface area using high-spatial-resolution imagery from GF-1, China's new generation of high-spatial-resolution remote-sensing satellite, and evaluate the classification accuracy against manual interpretation results. Compared to traditional multi-class classifiers (ANN and SVM), the experimental results indicate that PBL and PUL provide higher classification accuracy, similar to that of the ANN model. Meanwhile, PBL and PUL outperform the OCSVM, BSVM, MAXENT and SVM models. Hence, these one-class classifiers need only a small set of specific samples to train models without losing predictive accuracy, and deserve more attention for urban impervious surface extraction and other single-land-cover-type mapping tasks.
- Apr 21 2017 cs.CL arXiv:1704.06217v1 This paper addresses the problem of predicting the popularity of comments in an online discussion forum using reinforcement learning, particularly addressing two challenges that arise from having natural language state and action spaces. First, the state representation, which characterizes the history of comments tracked in a discussion at a particular point, is augmented to incorporate the global context represented by discussions of world events available in an external knowledge source. Second, a two-stage Q-learning framework is introduced, making it feasible to search the combinatorial action space while also accounting for redundancy among sub-actions. We experiment with five Reddit communities, showing that the two methods improve over previously reported results on this task.
- Networked systems often rely on distributed algorithms to achieve a global computation goal through iterative local information exchanges between neighboring nodes. To preserve data privacy, a node may add random noise to its original data before each exchange. Nevertheless, a neighboring node can estimate the other's original data from the information it receives. The estimation accuracy and data privacy can be measured in terms of $(\epsilon, \delta)$-data-privacy, defined by requiring that the probability of an $\epsilon$-accurate estimation (one whose difference from the original data is within $\epsilon$) be no larger than $\delta$ (the disclosure probability). How to optimize the estimation and analyze data privacy is a critical and open issue. In this paper, a theoretical framework is developed to investigate how to optimize the estimation of a neighbor's original data using the local information received, named optimal distributed estimation. We then study the disclosure probability under the optimal estimation for data privacy analysis. We further apply the developed framework to analyze the data privacy of the privacy-preserving average consensus algorithm and identify the optimal noises for the algorithm.
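The $(\epsilon, \delta)$ definition can be checked numerically in a toy setting (our own sketch, assuming uniform noise, not the paper's optimal-estimation machinery): if a node releases $x + u$ with $u \sim \mathrm{Uniform}[-a, a]$ and the neighbor's estimate is simply the received value, the disclosure probability is $P(|u| \le \epsilon) = \epsilon/a$ for $\epsilon \le a$.

```python
import random

random.seed(0)
a, eps, N = 1.0, 0.2, 100_000
x = 3.7                                    # the node's private value
hits = 0
for _ in range(N):
    released = x + random.uniform(-a, a)   # noisy value sent to neighbors
    estimate = released                    # naive neighbor estimate
    hits += abs(estimate - x) <= eps       # was the estimate eps-accurate?
delta_mc = hits / N                        # empirical disclosure probability
```

The Monte Carlo estimate should sit close to the analytic value $\epsilon/a = 0.2$; a larger noise range $a$ lowers $\delta$ (more privacy) at the cost of estimation accuracy.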
- Let $G$ be a matching-covered graph, i.e., every edge is contained in a perfect matching. An edge subset $X$ of $G$ is feasible if there exist two perfect matchings $M_1$ and $M_2$ such that $|M_1\cap X|\not\equiv |M_2\cap X| \pmod 2$. Lukot'ka and Rollová proved that an edge subset $X$ of a regular bipartite graph is not feasible if and only if $X$ is switching-equivalent to $\emptyset$, and they further asked whether a non-feasible set of a regular graph of class 1 is always switching-equivalent to either $\emptyset$ or $E(G)$. Two edges of $G$ are equivalent to each other if every perfect matching $M$ of $G$ either contains both of them or contains neither of them. An equivalence class of $G$ is an edge subset $K$ with at least two edges such that the edges of $K$ are mutually equivalent. An equivalence class is never a feasible set. Lovász proved that an equivalence class of a brick has size 2. In this paper, we show that, for every integer $k\ge 3$, there exist infinitely many $k$-regular graphs of class 1 with an arbitrarily large equivalence class $K$ such that $K$ is not switching-equivalent to either $\emptyset$ or $E(G)$, which provides a negative answer to the problem proposed by Lukot'ka and Rollová. Further, we characterize bipartite graphs with equivalence classes, and characterize matching-covered bipartite graphs in which every edge is removable.
- Collaborative filtering (CF) aims to build a model from users' past behaviors and/or similar decisions made by other users, and to use the model to recommend items to users. Despite the success of previous collaborative filtering approaches, they are all based on the assumption that sufficient rating scores are available for building high-quality recommendation models. In real-world applications, however, it is often difficult to collect sufficient rating scores, especially when new items are introduced into the system, which makes the recommendation task challenging. We find that there are often "short" texts describing features of items, from which we can approximate the similarity of items and make recommendations together with rating scores. In this paper we "borrow" the idea of vector representations of words to capture the information in short texts and embed it into a matrix factorization framework. We empirically show that our approach is effective by comparing it with state-of-the-art approaches.
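The underlying intuition can be sketched in a few lines (hypothetical toy vectors of our own, not the paper's model): averaging the word vectors of an item's short description gives an item vector whose cosine similarity reflects textual overlap, and such similarities can then inform a matrix factorization model for sparsely rated items.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(50, 32))                # toy word-embedding table (vocab x dim)
# three items, each described by a short text (word ids); items 0 and 1 overlap
items = [[1, 4, 7], [1, 4, 9], [30, 31, 32]]
item_vecs = np.array([W[ids].mean(axis=0) for ids in items])

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_overlap = cos(item_vecs[0], item_vecs[1])   # share two of three words
sim_disjoint = cos(item_vecs[0], item_vecs[2])  # no words in common
```

An item with no ratings at all still gets a meaningful vector from its description, which is exactly the cold-start situation the abstract targets.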
- Mar 07 2017 cs.CV arXiv:1703.01702v1 This paper studies the problem of how to choose good viewpoints for taking photographs of architecture. We achieve this by learning from professional photographs of world-famous landmarks that are available on the Internet. Unlike previous efforts in photo quality assessment, which mainly rely on 2D image features, we show that combining 2D image features extracted from images with 3D geometric features computed on the 3D models results in more reliable evaluation of viewpoint quality. Specifically, we collect a set of photographs for each of 15 world-famous architectural landmarks, as well as their 3D models, from the Internet. Viewpoint recovery for images is carried out through an image-model registration process, after which a newly proposed viewpoint clustering strategy is exploited to validate users' viewpoint preferences when photographing landmarks. Finally, we extract a number of 2D and 3D features for each image based on multiple visual and geometric cues, and perform viewpoint recommendation by learning from both 2D and 3D features using a specifically designed SVM-2K multi-view learner, achieving superior performance over using 2D or 3D features alone. We show the effectiveness of the proposed approach through extensive experiments. The experiments also demonstrate that, in addition to evaluating viewpoints of photographs and recommending viewpoints for photographing architecture in practice, our system can be used to recommend viewpoints for rendering textured 3D models of buildings for architectural design.
- Feb 24 2017 cs.CL arXiv:1702.07117v1 Topic models have been widely used in text mining to discover latent topics shared across documents. Vector representations, word embeddings and topic embeddings, map words and topics into a low-dimensional and dense real-valued vector space, and have achieved high performance in NLP tasks. However, most existing models assume that the result trained by one of them is perfectly correct and use it as prior knowledge for improving the other model. Some other models use information trained on a large external corpus to help improve a smaller one. In this paper, we aim to build an algorithm framework in which topic models and vector representations mutually improve each other within the same corpus. An EM-style algorithm framework is employed to iteratively optimize both the topic model and the vector representations. Experimental results show that our model outperforms state-of-the-art methods on various NLP tasks.
- Differential privacy is a formal mathematical standard for quantifying the degree to which individual privacy in a statistical database is preserved. To guarantee differential privacy, a typical method is to add random noise to the original data for data release. In this paper, we investigate the conditions for differential privacy under a general random noise adding mechanism, and then apply the obtained results to privacy analysis of the privacy-preserving consensus algorithm. Specifically, we obtain a necessary and sufficient condition for $\epsilon$-differential privacy, and sufficient conditions for $(\epsilon, \delta)$-differential privacy. We apply them to analyze various random noises. For the special cases with known results, our theory matches the literature; for other, previously unknown cases, our approach provides a simple and effective tool for differential privacy analysis. Applying the obtained theory to privacy-preserving consensus algorithms, we prove that average consensus and $\epsilon$-differential privacy cannot be guaranteed simultaneously by any privacy-preserving consensus algorithm.
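For the classical special case, the $\epsilon$-differential-privacy condition can be checked directly (a standard Laplace-mechanism sketch of our own, not code from the paper): for a sensitivity-1 query with Laplace noise of scale $b = 1/\epsilon$, the output densities on two adjacent datasets differ by at most a factor $e^{\epsilon}$ at every point.

```python
import math

def laplace_pdf(x, mu, b):
    """Density of the Laplace distribution with location mu and scale b."""
    return math.exp(-abs(x - mu) / b) / (2 * b)

eps = 0.5
b = 1.0 / eps                         # noise scale for sensitivity-1 queries
# adjacent datasets yield query answers 0 and 1; compare densities pointwise
xs = [i / 10 for i in range(-50, 51)]
ratios = [laplace_pdf(x, 0.0, b) / laplace_pdf(x, 1.0, b) for x in xs]
lo, hi = min(ratios), max(ratios)     # should stay within [e^-eps, e^eps]
```

The ratio equals $e^{\epsilon(|x-1|-|x|)}$, and since $\big||x-1|-|x|\big| \le 1$ it is confined to $[e^{-\epsilon}, e^{\epsilon}]$, which is exactly the $\epsilon$-differential-privacy bound.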
- A text network is a data type in which each vertex is associated with a text document and the relationships between documents are represented by edges. The proliferation of text networks such as hyperlinked webpages and academic citation networks has led to an increasing demand for quickly developing a general sense of a new text network, namely text network exploration. In this paper, we address the problem of text network exploration by constructing a heterogeneous web of topics, which allows people to investigate a text network while associating the word level with the document level. To achieve this, a probabilistic generative model for text and links is proposed, in which three different relationships in the heterogeneous topic web are quantified. We also develop a prototype demo system named TopicAtlas to exhibit such a heterogeneous topic web, and demonstrate how this system can facilitate the task of text network exploration. Extensive qualitative analyses are included to verify the effectiveness of this heterogeneous topic web. In addition, we validate our model on real-life text networks, showing that it achieves good performance on objective evaluation metrics.
- Sep 22 2016 cs.SY arXiv:1609.06381v2 Privacy-preserving data aggregation in ad hoc networks is a challenging problem, given the distributed communication and control requirements, dynamic network topology, unreliable communication links, etc. The difficulty is exacerbated when there exist dishonest nodes, and how to ensure privacy, accuracy, and robustness against dishonest nodes remains an open issue. Different from the widely used cryptographic approaches, in this paper, we address this challenging problem by exploiting the distributed consensus technique. We first propose a secure consensus-based data aggregation (SCDA) algorithm that guarantees an accurate sum aggregation while preserving the privacy of sensitive data. Then, to mitigate pollution from dishonest nodes, we propose an Enhanced SCDA (E-SCDA) algorithm that allows neighbors to detect dishonest nodes, and we derive the error bound when there are undetectable dishonest nodes. We prove the convergence of both SCDA and E-SCDA. We also prove that the proposed algorithms are $(\epsilon, \sigma)$-data-private, and obtain the mathematical relationship between $\epsilon$ and $\sigma$. Extensive simulations have shown that the proposed algorithms have high accuracy and low complexity, and that they are robust against network dynamics and dishonest nodes.
- Sep 22 2016 cs.SY arXiv:1609.06368v2 Privacy-preserving average consensus aims to guarantee the privacy of initial states and asymptotic consensus on the exact average of the initial values. In existing work, this is achieved by adding and subtracting variance-decaying, zero-sum random noises in the consensus process. However, there is a lack of theoretical analysis quantifying the degree of privacy protection. In this paper, we introduce the maximum disclosure probability that other nodes can infer one node's initial state within a given small interval to quantify the privacy. We develop a novel privacy definition, named $(\epsilon, \delta)$-data-privacy, to depict the relationship between the maximum disclosure probability and the estimation accuracy. We then prove that the general privacy-preserving average consensus (GPAC) algorithm provides $(\epsilon, \delta)$-data-privacy, and provide a closed-form expression for the relationship between $\epsilon$ and $\delta$. Meanwhile, it is shown that added noise with a uniform distribution is optimal in terms of achieving the highest $(\epsilon, \delta)$-data-privacy. We also prove that when all information used in the consensus process is available, the privacy is compromised. Finally, an optimal privacy-preserving average consensus (OPAC) algorithm is proposed to achieve the highest $(\epsilon, \delta)$-data-privacy and avoid this privacy compromise. Simulations are conducted to verify the results.
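The exact-average property under zero-sum noise can be seen in a toy simulation (our own simplified sketch of the general idea, not the authors' GPAC/OPAC algorithms): masking the initial states with noise that sums to zero across the network leaves the network average, and hence the consensus value, unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
x0 = rng.uniform(0.0, 10.0, n)   # private initial states
s = rng.normal(size=n)
s -= s.mean()                    # zero-sum masking noise across the network
x = x0 + s                       # perturbed states actually exchanged

# doubly stochastic weights on a ring: iterates converge to the average of x
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    W[i, i] = 1 / 3
for _ in range(300):
    x = W @ x

consensus = float(x[0])          # equals mean(x0): the privacy noise cancels
```

Because `W` is doubly stochastic, the iteration preserves the sum of the exchanged states, so the nodes agree on `mean(x0 + s) = mean(x0)` while never revealing `x0` directly; quantifying what a neighbor can still infer is exactly what the paper's $(\epsilon, \delta)$ analysis addresses.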
- This paper explores the suitability of using automatically discovered topics from MOOC discussion forums for modelling students' academic abilities. The Rasch model from psychometrics is a popular generative probabilistic model that relates latent student skill, latent item difficulty, and observed student-item responses within a principled, unified framework. According to scholarly educational theory, discovered topics can be regarded as appropriate measurement items if (1) students' participation across the discovered topics is well fit by the Rasch model, and (2) the topics are interpretable to subject-matter experts as being educationally meaningful. Such Rasch-scaled topics, with associated difficulty levels, could be of potential benefit to curriculum refinement, student assessment and personalised feedback. The technical challenge that remains is to discover meaningful topics that simultaneously achieve good statistical fit with the Rasch model. To address this challenge, we combine the Rasch model with non-negative matrix factorisation based topic modelling, jointly fitting both models. We demonstrate the suitability of our approach with quantitative experiments on data from three Coursera MOOCs, and with qualitative survey results on topic interpretability on a Discrete Optimisation MOOC.
- Jul 04 2016 cs.CL arXiv:1607.00070v1 User simulation is essential for generating enough data to train a statistical spoken dialogue system. Previous models for user simulation suffer from several drawbacks, such as the inability to take dialogue history into account, the need for a rigid structure to ensure coherent user behaviour, heavy dependence on a specific domain, the inability to output several user intentions during one dialogue turn, or the requirement of a summarized action space for tractability. This paper introduces a data-driven user simulator based on an encoder-decoder recurrent neural network. The model takes as input a sequence of dialogue contexts and outputs a sequence of dialogue acts corresponding to user intentions. The dialogue contexts include information about the machine acts and the status of the user goal. We show on the Dialogue State Tracking Challenge 2 (DSTC2) dataset that the sequence-to-sequence model outperforms an agenda-based simulator and an n-gram simulator in terms of F-score. Furthermore, we show how this model can be used on the original action space, thereby modelling user behaviour with finer granularity.
- Jun 14 2016 cs.CL arXiv:1606.03632v3 Natural language generation plays a critical role in spoken dialogue systems. We present a new approach to natural language generation for task-oriented dialogue using recurrent neural networks in an encoder-decoder framework. In contrast to previous work, our model uses both lexicalized and delexicalized components, i.e., slot-value pairs for dialogue acts, with slots and corresponding values aligned together. This allows our model to learn from all available data, including the slot-value pairings, rather than being restricted to delexicalized slots. We show that this helps our model generate more natural sentences with better grammar. We further improve our model's performance by transferring weights learnt from a pretrained sentence auto-encoder. Human evaluation of our best-performing model indicates that it generates sentences which users find more appealing.
- We introduce an online popularity prediction and tracking task as a benchmark task for reinforcement learning with a combinatorial, natural language action space. A specified number of discussion threads predicted to be popular are recommended, chosen from a fixed window of recent comments to track. Novel deep reinforcement learning architectures are studied for effective modeling of the value function associated with actions comprised of interdependent sub-actions. The proposed model, which represents dependence between sub-actions through a bi-directional LSTM, gives the best performance across different experimental configurations and domains, and it also generalizes well with varying numbers of recommendation requests.
- In this paper, we propose to use deep policy networks trained with an advantage actor-critic method for statistically optimised dialogue systems. First, we show that, on summary state and action spaces, deep Reinforcement Learning (RL) outperforms Gaussian Process methods. Summary state and action spaces lead to good performance but require pre-engineering effort, RL knowledge, and domain expertise. To remove the need to define such summary spaces, we show that deep RL can also be trained efficiently on the original state and action spaces. Dialogue systems based on partially observable Markov decision processes are known to require many dialogues to train, which makes them unappealing for practical deployment. We show that a deep RL method based on an actor-critic architecture can exploit a small amount of data very efficiently. Indeed, with only a few hundred dialogues collected with a handcrafted policy, the actor-critic deep learner is considerably bootstrapped by a combination of supervised and batch RL. In addition, convergence to an optimal policy is significantly sped up compared to other deep RL methods initialized on the data with batch RL. All experiments are performed on a restaurant domain derived from the Dialogue State Tracking Challenge 2 (DSTC2) dataset.
- Mar 30 2016 cs.CL arXiv:1603.08884v1 Understanding unstructured text is a major goal within natural language processing. Comprehension tests pose questions based on short text passages to evaluate such understanding. In this work, we investigate machine comprehension on the challenging MCTest benchmark. Partly because of its limited size, prior work on MCTest has focused mainly on engineering better features. We tackle the dataset with a neural approach, harnessing simple neural networks arranged in a parallel hierarchy. The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set. Perspectives range from the word level to sentence fragments to sequences of sentences; the networks operate only on word-embedding representations of text. When trained with a methodology designed to help cope with limited training data, our Parallel-Hierarchical model sets a new state of the art for MCTest, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15% absolute).
- This paper adapts topic models to the psychometric testing of MOOC students based on their online forum postings. Measurement theory from education and psychology provides statistical models for quantifying a person's attainment of intangible attributes such as attitudes, abilities or intelligence. Such models infer latent skill levels by relating them to individuals' observed responses on a series of items, such as quiz questions. A set of items can be used to measure a latent skill if individuals' responses on them conform to a Guttman scale. Such well-scaled items differentiate between individuals, and the inferred levels span the entire range from the most basic to the most advanced. In practice, education researchers manually devise items (quiz questions) while optimising for well-scaled conformance. Due to the costly nature and expert requirements of this process, psychometric testing has found limited use in everyday teaching. We aim to develop usable measurement models for highly-instrumented MOOC delivery platforms by using participation in automatically-extracted online forum topics as items. The challenge is to formalise the Guttman scale educational constraint and incorporate it into topic models. To favour topics that automatically conform to a Guttman scale, we introduce a novel regularisation into non-negative matrix factorisation-based topic modelling. We demonstrate the suitability of our approach with both quantitative experiments on three Coursera MOOCs and a qualitative survey of topic interpretability on two MOOCs via domain expert interviews.
- This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Q-learning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.
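The DRRN's separate-embedding interaction can be sketched in a few lines. In this toy, hypothetical random bag-of-words projections stand in for the paper's trained embedding networks, and the interaction function approximating the Q-function is the inner product:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB = 100, 16  # hypothetical vocabulary and embedding sizes

# Separate "embedding networks" for state text and action text,
# reduced here to fixed random projections for illustration.
W_state = rng.normal(scale=0.1, size=(VOCAB, EMB))
W_action = rng.normal(scale=0.1, size=(VOCAB, EMB))

def embed(bow, W):
    """Project a bag-of-words vector into the shared embedding space."""
    return np.tanh(bow @ W)

def q_values(state_bow, action_bows):
    """DRRN-style interaction: Q(s, a) is the inner product of the state
    embedding with each candidate action embedding."""
    s = embed(state_bow, W_state)
    return np.array([embed(a, W_action) @ s for a in action_bows])

# Toy usage: one state description, three candidate action texts.
state = rng.integers(0, 2, VOCAB).astype(float)
actions = [rng.integers(0, 2, VOCAB).astype(float) for _ in range(3)]
q = q_values(state, actions)
best = int(np.argmax(q))  # greedy action w.r.t. the estimated Q-values
```

Keeping the state and action towers separate is what lets the model score an arbitrary, varying set of natural-language actions at each step, rather than a fixed discrete action set.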
- Nov 12 2015 cs.NE arXiv:1511.03483v3 An important question in evolutionary computation is how good the solutions produced by evolutionary algorithms are. This paper aims to provide an analytical study of solution quality in terms of the relative approximation error, defined as the error between 1 and the approximation ratio of the solution found by an evolutionary algorithm. Since evolutionary algorithms are iterative methods, the relative approximation error is a function of generations. With the help of matrix analysis, it is possible to obtain an exact expression of such a function. In this paper, an analytic expression for calculating the relative approximation error is presented for a class of evolutionary algorithms, namely (1+1) strictly elitist evolutionary algorithms. Furthermore, analytic expressions for the fitness value and the average convergence rate in each generation are also derived for this class of evolutionary algorithms. The approach is promising and can be extended to non-elitist or population-based algorithms as well.
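The relative approximation error is easy to trace empirically. A minimal sketch with a (1+1) strictly elitist EA on OneMax (a toy benchmark of our choosing, not the paper's exact setting); under elitism the error is non-increasing over generations:

```python
import random

random.seed(1)
N = 20  # bit-string length; the OneMax optimum is f* = N

def fitness(x):
    return sum(x)

# (1+1) strictly elitist EA: flip each bit with probability 1/N and
# accept the offspring only if it is strictly better than the parent.
x = [0] * N
errors = []  # relative approximation error per generation
for t in range(300):
    y = [b ^ (random.random() < 1.0 / N) for b in x]
    if fitness(y) > fitness(x):
        x = y
    # relative approximation error = 1 - approximation ratio f(x)/f*
    errors.append(1.0 - fitness(x) / float(N))
```

The recorded sequence `errors` is exactly the function of generations the abstract refers to; the paper derives its expectation in closed form via matrix analysis rather than simulation.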
- Oct 01 2015 cs.NE arXiv:1509.09060v2 Solving constrained optimization problems by multi-objective evolutionary algorithms has achieved tremendous success in the last decade. Standard multi-objective schemes usually aim at minimizing the objective function and the degree of constraint violation simultaneously. This paper proposes a new multi-objective method for solving constrained optimization problems. The new method keeps two standard objectives, the original objective function and the sum of degrees of constraint violation, but adds four more: one based on the feasibility rule and three derived from penalty functions. This paper conducts an initial experimental study on thirteen benchmark functions. A simplified version of CMODE is applied to solving the resulting multi-objective optimization problems. Our initial experimental results confirm our expectation that adding more helper functions can be useful: the performance of SMODE with more helper functions (four or six) is better than with only two.
- Successful applications of reinforcement learning in real-world problems often require dealing with partially observable states. It is in general very challenging to construct and infer hidden states, as they often depend on the agent's entire interaction history and may require substantial domain knowledge. In this work, we investigate a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain. In particular, we propose a new family of hybrid models that combines the strengths of both supervised learning (SL) and reinforcement learning (RL), trained in a joint fashion: the SL component can be a recurrent neural network (RNN) or its long short-term memory (LSTM) variant, which has the desired property of capturing long-term dependencies on history, thus providing an effective way of learning the representation of hidden states. The RL component is a deep Q-network (DQN) that learns to optimize the control for maximizing long-term rewards. Extensive experiments on a direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach, which performs best among a set of previous state-of-the-art methods.
- The explosion of cloud services on the Internet brings new challenges in service discovery and selection. In particular, the demand for efficient quality-of-service (QoS) evaluation is becoming urgent. To address this issue, this paper proposes a neighborhood-based approach for QoS prediction of cloud services that takes advantage of collaborative intelligence. Different from heuristic collaborative filtering and matrix factorization, we define a formal neighborhood-based prediction framework that allows an efficient global optimization scheme, and then exploit different baseline estimate components to improve predictive performance. To validate the proposed methods, a large-scale QoS-specific dataset is used, consisting of invocation records from 339 service users on 5,825 web services over a world-scale distributed network. Experimental results demonstrate that the learned neighborhood-based models overcome existing difficulties of heuristic collaborative filtering methods and achieve superior performance over state-of-the-art prediction methods.
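A neighborhood prediction built on a baseline-estimate component can be sketched as follows, on toy data with hypothetical precomputed user similarities; the paper's framework additionally learns these quantities through global optimization:

```python
import numpy as np

# Toy response-time matrix: rows = users, cols = services; np.nan = unobserved.
R = np.array([
    [0.3, 1.2, np.nan, 0.8],
    [0.4, 1.0, 2.1,    0.9],
    [0.2, np.nan, 1.9, 0.7],
])
mu = np.nanmean(R)                      # global mean QoS
bu = np.nanmean(R, axis=1) - mu         # per-user biases
bi = np.nanmean(R, axis=0) - mu         # per-service biases

def baseline(u, i):
    """Baseline estimate: global mean plus user and service biases."""
    return mu + bu[u] + bi[i]

def predict(u, i, sims):
    """Neighborhood prediction: the baseline estimate plus a
    similarity-weighted average of the neighbours' residuals on service i."""
    num = den = 0.0
    for v, s in sims.get(u, {}).items():
        if not np.isnan(R[v, i]):
            num += s * (R[v, i] - baseline(v, i))
            den += abs(s)
    return baseline(u, i) + (num / den if den else 0.0)

# Hypothetical precomputed similarities between user 0 and its neighbours.
sims = {0: {1: 0.9, 2: 0.6}}
qos = predict(0, 2, sims)  # predict user 0's unobserved QoS on service 2
```

The baseline term absorbs systematic user and service effects (e.g. a user behind a slow link), so the neighborhood term only has to explain the residual variation.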
- Aug 17 2015 cs.LG arXiv:1508.03398v2 We develop a fully discriminative learning approach for the supervised Latent Dirichlet Allocation (LDA) model using Back Propagation (i.e., BP-sLDA), which maximizes the posterior probability of the prediction variable given the input document. Different from traditional variational learning or Gibbs sampling approaches, the proposed learning method applies (i) the mirror descent algorithm for maximum a posteriori inference and (ii) back propagation over a deep architecture together with stochastic gradient/mirror descent for model parameter estimation, leading to scalable and end-to-end discriminative learning of the model. As a byproduct, we also apply this technique to develop a new learning method for the traditional unsupervised LDA model (i.e., BP-LDA). Experimental results on three real-world regression and classification tasks show that the proposed methods significantly outperform previous supervised topic models and neural networks, and are on par with deep neural networks.
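The mirror descent step used for MAP inference has the convenient property that iterates stay on the probability simplex. A generic sketch of entropic mirror descent (exponentiated gradient) on a toy linear objective, not the paper's actual sLDA posterior:

```python
import numpy as np

def mirror_descent_simplex(grad, theta0, eta=0.5, steps=100):
    """Entropic mirror descent: each update multiplies the iterate by
    exp(-eta * gradient) and renormalises, so theta remains a valid
    probability vector at every step."""
    theta = theta0.copy()
    for _ in range(steps):
        theta = theta * np.exp(-eta * grad(theta))
        theta /= theta.sum()
    return theta

# Toy objective: linear cost <c, theta>; the minimiser over the simplex
# concentrates all mass on the cheapest coordinate (index 1 here).
c = np.array([3.0, 1.0, 2.0])
theta = mirror_descent_simplex(lambda th: c, np.ones(3) / 3)
```

In BP-sLDA, an unrolled sequence of such updates forms the layers of the deep architecture through which back propagation flows.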
- SOARAN: A Service-oriented Architecture for Radio Access Network Sharing in Evolving Mobile Networks. Aug 04 2015 cs.NI arXiv:1508.00306v1 Mobile networks are undergoing fast evolution to software-defined networking (SDN) infrastructure in order to accommodate ever-growing mobile traffic and overcome the network management nightmares caused by unremitting acceleration in technology innovation and evolution of the service market. Enabled by virtualized network functionalities, evolving carrier wireless networks tend to share the radio access network (RAN) among multiple (virtual) network operators so as to increase network capacity and reduce expenses. However, existing RAN sharing models are operator-oriented: they expose extensive resource details, e.g. infrastructure and spectrum, to participating network operators for resource-sharing purposes. These old-fashioned models violate the design principles of SDN abstraction and are infeasible for managing the thriving traffic of on-demand customized services. This paper presents SOARAN, a service-oriented framework for RAN sharing in mobile networks evolving from LTE/LTE-Advanced to software-defined carrier wireless networks (SD-CWNs), which decouples network operators from radio resources by providing application-level differentiated services. SOARAN defines a series of abstract applications with distinct Quality of Experience (QoE) requirements. The central controller periodically computes application-level resource allocations for each radio element with respect to runtime traffic demands and channel conditions, and disseminates these allocation decisions as service-oriented policies to the respective elements. The radio elements then independently determine flow-level resource allocation within each application to implement these policies. We formulate the application-level resource allocation as an optimization problem and develop a fast algorithm to solve it with a provable approximation guarantee.
- May 01 2015 cs.NE arXiv:1504.08117v3 In evolutionary optimization, it is important to understand how fast evolutionary algorithms converge to the optimum per generation, i.e., their convergence rate. This paper proposes a new measure of the convergence rate, called the average convergence rate. It is a normalised geometric mean of the reduction ratio of the fitness difference per generation. The calculation of the average convergence rate is very simple, and it is applicable to most evolutionary algorithms on both continuous and discrete optimization. A theoretical study of the average convergence rate is conducted for discrete optimization. Lower bounds on the average convergence rate are derived. The limit of the average convergence rate is analysed, and the asymptotic average convergence rate is then proposed.
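The average convergence rate can be computed directly from a fitness trace. A sketch of the definition (up to notational details): with fitness difference $e_k = |f^* - f_k|$ at generation $k$, the measure after $t$ generations is $R(t) = 1 - (e_t/e_0)^{1/t}$, the normalised geometric mean of the per-generation reduction ratios.

```python
def average_convergence_rate(f_opt, fitness_trace):
    """Average convergence rate after t generations:
    R(t) = 1 - (e_t / e_0)**(1/t), where e_k = |f_opt - f_k| is the
    fitness difference at generation k."""
    e0 = abs(f_opt - fitness_trace[0])
    t = len(fitness_trace) - 1
    if e0 == 0 or t == 0:
        return 0.0
    et = abs(f_opt - fitness_trace[-1])
    return 1.0 - (et / e0) ** (1.0 / t)

# Toy trace approaching f_opt = 10 with the fitness difference halving
# every generation, so the reduction ratio is 1/2 and R(t) = 0.5.
trace = [0.0, 5.0, 7.5, 8.75]
r = average_convergence_rate(10.0, trace)
```

Because the geometric mean normalises by the number of generations, runs of different lengths become directly comparable, which is the point of the measure.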
- Feb 13 2015 cs.NE arXiv:1502.03699v1 Multi-objective optimisation is regarded as one of the most promising ways of dealing with constrained optimisation problems in evolutionary optimisation. This paper presents a theoretical investigation of a multi-objective optimisation evolutionary algorithm for solving the 0-1 knapsack problem. Two initialisation methods are considered in the algorithm: local search initialisation and greedy search initialisation. The solution quality of the algorithm is then analysed in terms of the approximation ratio.
- Jan 08 2015 cs.NI arXiv:1501.01436v2 This paper investigates ARQ (Automatic Repeat reQuest) designs for PNC (Physical-layer Network Coding) systems. We have previously found that, besides TWRC (Two-Way Relay Channel) operated on the principle of PNC, there are many other PNC building blocks, i.e., simple small network structures that can be used to construct a large network. In some of these PNC building blocks, receivers can obtain side information through overhearing. Although such overheard information is not the target information that the receivers desire, the receivers can exploit it together with a received network-coded packet to obtain a desired native packet. This leads to throughput gain. Our previous study, however, assumed that what is sent is always received. In practice, that is not the case, and error control is needed to ensure reliable communication. This paper focuses on the use of ARQ to ensure reliable PNC communication. The availability of overheard information and its potential exploitation make the ARQ design of a network-coded system different from that of a non-network-coded system. In this paper, we lay out the fundamental considerations for such ARQ design: 1) we address how to track the stored coded packets and overheard packets to increase the chance of packet extraction, and derive the throughput gain achieved by tracking; 2) we investigate two variations of PNC ARQ, coupled and non-coupled ARQ, and prove that non-coupled ARQ is more efficient; 3) we show how to optimize parameters in PNC ARQ (specifically, the window size and ACK frequency) to minimize the throughput degradation caused by ACK feedback overhead and wasteful retransmissions due to lost ACKs.
- We consider channel/subspace tracking systems for temporally correlated millimeter wave (e.g., E-band) multiple-input multiple-output (MIMO) channels. Our focus is on the tracking algorithm in the non-line-of-sight (NLoS) environment, where the transmitter and the receiver are equipped with a hybrid analog/digital precoder and combiner, respectively. In the absence of a straightforward time-correlated channel model in the millimeter wave MIMO literature, we present a temporal MIMO channel evolution model for NLoS millimeter wave scenarios. Considering that conventional MIMO channel tracking algorithms in microwave bands are not directly applicable, we propose a new channel tracking technique based on sequentially updating the precoder and combiner. Numerical results demonstrate the superior channel tracking ability of the proposed technique over the independent sounding approach in both the presented channel model and the spatial channel model (SCM) adopted in the 3GPP specification.
- In this paper, we present GASG21 (Grassmannian Adaptive Stochastic Gradient for $L_{2,1}$ norm minimization), an adaptive stochastic gradient algorithm for robustly recovering a low-rank subspace from a large matrix. In the presence of column outliers, we reformulate the batch-mode matrix $L_{2,1}$ norm minimization problem with rank constraint as a stochastic optimization problem constrained on the Grassmann manifold. For each observed data vector, the low-rank subspace $\mathcal{S}$ is updated by taking a gradient step along the geodesic of the Grassmannian. In order to accelerate the convergence of the stochastic gradient method, we adaptively tune the constant step size by leveraging consecutive gradients. Furthermore, we demonstrate that with proper initialization, the K-subspaces extension, K-GASG21, can robustly cluster a large number of corrupted data vectors into a union of subspaces. Numerical experiments on synthetic and real data demonstrate the efficiency and accuracy of the proposed algorithms even with heavy column outlier corruption.
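The per-vector stochastic subspace update can be sketched as below. Note this is a heavy simplification: it uses a plain least-squares fit and a QR retraction, whereas GASG21 follows the exact Grassmannian geodesic, adapts the step size from consecutive gradients, and minimizes the robust $L_{2,1}$ cost:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3  # ambient dimension and subspace rank (toy sizes)

def subspace_step(U, v, eta=0.1):
    """One stochastic step toward observed vector v: fit the coefficients
    w, take a gradient step along the residual direction, and retract
    back onto the manifold with a QR factorisation."""
    w, *_ = np.linalg.lstsq(U, v, rcond=None)
    r = v - U @ w                       # residual, orthogonal to span(U)
    # Gradient of ||v - U w||^2 w.r.t. U is -2 r w^T, so descend along +r w^T.
    U_new, _ = np.linalg.qr(U + eta * np.outer(r, w))
    return U_new

# Toy stream: vectors drawn from a fixed 3-dimensional true subspace.
U_true, _ = np.linalg.qr(rng.normal(size=(n, d)))
U, _ = np.linalg.qr(rng.normal(size=(n, d)))  # random initial subspace
for _ in range(300):
    v = U_true @ rng.normal(size=d)
    U = subspace_step(U, v)
```

The retraction keeps the basis orthonormal after every update, which is what makes the iterate a well-defined point on the Grassmannian.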
- Some experimental investigations have shown that evolutionary algorithms (EAs) are efficient for the minimum label spanning tree (MLST) problem. However, little is known about this in theory. As one step towards this issue, we theoretically analyze the performance of the (1+1) EA, a simple version of EAs, and a multi-objective evolutionary algorithm called GSEMO on the MLST problem. We reveal that for the MLST$_{b}$ problem the (1+1) EA and GSEMO achieve a $\frac{b+1}{2}$-approximation ratio in expected polynomial time in $n$, the number of nodes, and $k$, the number of labels. We also show that GSEMO achieves a $(2\ln n)$-approximation ratio for the MLST problem in expected polynomial time in $n$ and $k$. At the same time, we show that the (1+1) EA and GSEMO outperform local search algorithms on three instances of the MLST problem. We also construct an instance on which GSEMO outperforms the (1+1) EA.
- Modern cloud computing platforms based on virtual machine monitors carry a variety of complex business workloads that present many network security vulnerabilities. At present, the traditional architecture employs a number of security devices at the front end of the cloud to protect its network security. In the new environment, however, this approach cannot meet the needs of cloud security. New cloud security vendors and academia have also made great efforts to secure cloud computing networks; unfortunately, they have yet to provide an effective solution to this problem. We introduce a novel network security architecture for cloud computing (NetSecCC) that addresses this problem. NetSecCC not only provides an effective solution for the network security issues of cloud computing, but also brings great improvements in scalability, fault tolerance, resource utilization, etc. We have implemented a proof-of-concept prototype of NetSecCC and shown by experiments that NetSecCC is an effective architecture with minimal performance overhead that is suitable for extensive practical deployment in cloud computing.
- Apr 15 2014 cs.NE arXiv:1404.3520v1 Evolutionary algorithms are well suited for solving the knapsack problem. Some empirical studies claim that evolutionary algorithms can produce good solutions to the 0-1 knapsack problem. Nonetheless, few rigorous investigations address the quality of the solutions that evolutionary algorithms may produce for the knapsack problem. The current paper focuses on a theoretical investigation of three types of (N+1) evolutionary algorithms that exploit bitwise mutation and truncation selection, plus different repair methods, for the 0-1 knapsack problem. It assesses solution quality in terms of the approximation ratio. Our work indicates that the solutions produced by pure strategy and mixed strategy evolutionary algorithms can be arbitrarily bad, whereas the evolutionary algorithm using helper objectives may produce 1/2-approximation solutions to the 0-1 knapsack problem.
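One common repair method for the 0-1 knapsack problem is greedy repair: drop selected items in increasing order of value/weight ratio until the capacity constraint holds. A sketch on hypothetical toy data (the paper analyses several specific repair variants, which may differ in detail):

```python
def greedy_repair(x, weights, values, capacity):
    """Repair an infeasible bit-string for the 0-1 knapsack problem by
    removing selected items, least valuable per unit weight first, until
    the total weight fits within the capacity."""
    x = list(x)
    # Selected items, sorted by increasing value/weight ratio.
    order = sorted((i for i in range(len(x)) if x[i]),
                   key=lambda i: values[i] / weights[i])
    while order and sum(w for i, w in enumerate(weights) if x[i]) > capacity:
        x[order.pop(0)] = 0  # drop the worst remaining item
    return x

# Toy instance: all four items selected (weight 14) exceeds capacity 6.
weights = [4, 3, 2, 5]
values = [10, 7, 3, 4]
capacity = 6
x = greedy_repair([1, 1, 1, 1], weights, values, capacity)
```

Applied after mutation, such a repair guarantees that every individual evaluated by the EA is feasible, so the fitness function never has to penalise constraint violations.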
- Apr 07 2014 cs.NI arXiv:1404.1108v1 Due to the explosive growth of online video content in mobile wireless networks, in-network caching is becoming increasingly important for improving the end-user experience and reducing the Internet access cost for mobile network operators. However, caching is a difficult problem due to the very large number of online videos and video requests, the limited capacity of caching nodes, and the limited bandwidth of in-network links. Existing solutions that rely on static configurations and average request arrival rates are insufficient to handle dynamic request patterns effectively. In this paper, we propose a dynamic collaborative video caching framework to be deployed in mobile networks. We decompose the caching problem into a content placement subproblem and a source-selection subproblem. We then develop SRS (System capacity Reservation Strategy) to solve the content placement subproblem, and LinkShare, an adaptive traffic-aware algorithm, to solve the source-selection subproblem. Our framework supports congestion avoidance and allows merging multiple requests for the same video into one request. We carry out extensive simulations to validate the proposed schemes. Simulation results show that our SRS algorithm achieves performance within 1-3% of the optimal values and that LinkShare significantly outperforms existing solutions.
- Apr 04 2014 cs.NE arXiv:1404.0868v1 The 0-1 knapsack problem is a well-known combinatorial optimisation problem. Approximation algorithms have been designed for solving it, and they return provably good solutions within polynomial time. On the other hand, genetic algorithms are well suited for solving the knapsack problem and find reasonably good solutions quickly. A naturally arising question is whether genetic algorithms are able to find solutions as good as those of approximation algorithms. This paper presents a novel multi-objective optimisation genetic algorithm for solving the 0-1 knapsack problem. Experimental results show that the new algorithm outperforms its rivals: the greedy algorithm, the mixed strategy genetic algorithm, and the combination of the greedy algorithm with the mixed strategy genetic algorithm.
- Mar 10 2014 cs.LO arXiv:1403.1666v1 We consider here Linear Temporal Logic (LTL) formulas interpreted over finite traces, a logic we denote by LTLf. The existing approach for LTLf satisfiability checking is based on a reduction to standard LTL satisfiability checking. We describe here a novel direct approach to LTLf satisfiability checking, in which we take advantage of the difference in semantics between LTL and LTLf. While LTL satisfiability checking requires finding a fair cycle in an appropriate transition system, here we need only search for a finite trace. This enables us to introduce specialized heuristics, where we also exploit recent progress in Boolean SAT solving. We have implemented our approach in a prototype tool, and experiments show that it outperforms existing approaches.
- Satisfiability checking for Linear Temporal Logic (LTL) is a fundamental step in checking for possible errors in LTL assertions. Extant LTL satisfiability checkers use a variety of different search procedures. With the sole exception of LTL satisfiability checking based on bounded model checking, which does not provide a complete decision procedure, LTL satisfiability checkers have not taken advantage of the remarkable progress over the past 20 years in Boolean satisfiability solving. In this paper, we propose a new LTL satisfiability-checking framework that is accelerated using a Boolean SAT solver. Our approach is based on a variant of the obligation-set method, which we proposed in earlier work. We describe heuristics that allow the use of a Boolean SAT solver to analyze the obligations for a given LTL formula. The experimental evaluation indicates that the new approach provides a significant performance advantage.
- The convergence, convergence rate and expected hitting time play fundamental roles in the analysis of randomised search heuristics. This paper presents a unified Markov chain approach to studying them. Using this approach, sufficient and necessary conditions for convergence in distribution are established. The average convergence rate is then introduced to randomised search heuristics, and its lower and upper bounds are derived. Finally, novel average drift analysis and backward drift analysis are proposed for bounding the expected hitting time. A computational study is also conducted to investigate the convergence, convergence rate and expected hitting time. The theoretical study is a priori and general, while the computational study is a posteriori and case-specific.
- Dec 04 2013 cs.DS arXiv:1312.1273v1 In the current work we introduce a novel estimation of distribution algorithm to tackle a hard combinatorial optimization problem, namely the single-machine scheduling problem with uncertain delivery times. The majority of existing research coping with optimization problems in uncertain environments aims at finding a single sufficiently robust solution, so that random noise and unpredictable circumstances have the least possible detrimental effect on its quality. The measures of robustness are usually based on various kinds of empirically designed averaging techniques. In contrast to previous work, our algorithm aims at finding a collection of robust schedules that allow for more informative decision making. The notion of robustness is measured quantitatively in terms of the classical mathematical notion of a norm on a vector space. We provide theoretical insight into the relationship between the properties of the probability distribution over the uncertain delivery times and the robustness quality of the schedules produced by the algorithm after a polynomial runtime, in terms of approximation ratios.
- Nov 08 2013 cs.LO arXiv:1311.1602v1 In this paper we present a portfolio LTL-satisfiability solver, called Polsat. To achieve fast satisfiability checking for LTL formulas, the tool integrates four representative LTL solvers: pltl, TRP++, NuSMV, and Aalta. The idea of Polsat is to run the component solvers in parallel to get the best overall performance; once one of the solvers terminates, it stops all the others. Remarkably, the Polsat solver utilizes the power of modern multi-core compute clusters, and empirical experiments show that it takes good advantage of this. Further, Polsat also serves as a testing platform for all LTL solvers.
- Aug 15 2013 cs.NE arXiv:1308.3080v4 This paper aims to study, in a rigorous way, how the population size affects the computation time of evolutionary algorithms. The computation time of an evolutionary algorithm can be measured by either the expected number of generations (hitting time) or the expected number of fitness evaluations (running time) to find an optimal solution. Population scalability is the ratio of the expected hitting time between a benchmark algorithm and an algorithm using a larger population size. Average drift analysis is presented for comparing the expected hitting times of two algorithms and estimating lower and upper bounds on population scalability. Several intuitive beliefs are rigorously analysed. It is proven that (1) using a population sometimes increases rather than decreases the expected hitting time; (2) using a population cannot shorten the expected running time of any elitist evolutionary algorithm on unimodal functions in terms of the time-fitness landscape, but this is not true in terms of the distance-based fitness landscape; and (3) using a population cannot always reduce the expected running time on fully-deceptive functions, which depends on whether the benchmark algorithm uses elitist selection or random selection.
- Jul 09 2013 cs.DC arXiv:1307.1955v1 Query co-processing on graphics processors (GPUs) has become an effective means to improve the performance of main memory databases. However, the relatively low bandwidth and high latency of the PCI-e bus are usually bottlenecks for co-processing. Recently, coupled CPU-GPU architectures have received a lot of attention, e.g. AMD APUs with the CPU and the GPU integrated into a single chip. That opens up new opportunities for optimizing query co-processing. In this paper, we experimentally revisit hash joins, one of the most important join algorithms for main memory databases, on a coupled CPU-GPU architecture. In particular, we study fine-grained co-processing mechanisms for hash joins with and without partitioning. The co-processing outlines an interesting design space. We extend existing cost models to automatically guide decisions in the design space. Our experimental results on a recent AMD APU show that (1) the coupled architecture enables fine-grained co-processing and cache reuse, which are inefficient on discrete CPU-GPU architectures; (2) the cost model can automatically guide the design and tuning knobs in the design space; (3) fine-grained co-processing achieves up to 53%, 35% and 28% performance improvement over CPU-only, GPU-only and conventional CPU-GPU co-processing, respectively. We believe that the insights and implications from this study are initial yet important for further research on query co-processing on coupled CPU-GPU architectures.
- Robust high-dimensional data processing has witnessed exciting development in recent years, as theoretical results have shown that it is possible, using convex programming, to optimize data fit to a low-rank component plus a sparse outlier component. This problem is also known as Robust PCA, and it has found application in many areas of computer vision. In image and video processing and face recognition, the opportunity to process massive image databases is emerging as people upload photo and video data online in unprecedented volumes. However, data quality and consistency are not controlled in any way, and the massiveness of the data poses a serious computational challenge. In this paper we present t-GRASTA, or "Transformed GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm)". t-GRASTA iteratively performs incremental gradient descent constrained to the Grassmann manifold of subspaces in order to simultaneously estimate a decomposition of a collection of images into a low-rank subspace, a sparse part of occlusions and foreground objects, and a transformation such as rotation or translation of each image. We show that t-GRASTA is 4 $\times$ faster than state-of-the-art algorithms, has half the memory requirement, and can achieve alignment for face images as well as jittered camera surveillance images.
- May 14 2013 cs.NE arXiv:1305.2490v2 Hybrid and mixed strategy EAs have become rather popular for tackling various complex and NP-hard optimization problems. While empirical evidence suggests that such algorithms are successful in practice, rather little theoretical support for their success is available, not to mention a solid mathematical foundation that would provide guidance towards the efficient design of this type of EA. In the current paper we develop a rigorous mathematical framework that suggests such designs based on generalized schema theory, fitness levels and drift analysis. An example application to tackling one of the classical NP-hard problems, the single-machine scheduling problem, is presented.
- May 14 2013 cs.NE arXiv:1305.2504v1 The classical Geiringer theorem addresses the limiting frequency of occurrence of various alleles after repeated application of crossover. It has been adapted to the setting of evolutionary algorithms and, much more recently, to reinforcement learning and Monte-Carlo tree search methodology to cope with the rather challenging question of action evaluation at chance nodes. The theorem motivates novel dynamic parallel algorithms that are explicitly described in the current paper for the first time. The algorithms involve independent agents traversing a dynamically constructed directed graph that may have loops. A rather elegant and profound category-theoretic model of cognition in biological neural networks, developed over the last thirty years by the well-known French mathematician professor Andree Ehresmann jointly with the neurosurgeon Jan Paul Vanbremeersch, provides a hint at the connection between such algorithms and Hebbian learning.
- May 14 2013 cs.AI arXiv:1305.2498v1 A popular current research trend deals with expanding Monte-Carlo tree search sampling methodologies to environments with uncertainty and incomplete information. Recently, a finite population version of the Geiringer theorem with nonhomologous recombination has been adapted to the setting of Monte-Carlo tree search to cope with randomness and incomplete information by exploiting the intrinsic similarities within the state space of the problem. The only limitation of the new theorem is that the similarity relation was assumed to be an equivalence relation on the set of states. In the current paper we lift this limitation by allowing the similarity relation to be modeled in terms of an arbitrary set cover of the set of state-action pairs.
- In pure strategy meta-heuristics, a single search strategy is applied throughout. In mixed strategy meta-heuristics, at each step one search strategy is chosen from a strategy pool with some probability and then applied. A classical example is genetic algorithms, where at each step either a mutation or a crossover operator is chosen with some probability. The aim of this paper is to compare the performance of mixed strategy and pure strategy meta-heuristic algorithms. First, an experimental study demonstrates that mixed strategy evolutionary algorithms may outperform pure strategy evolutionary algorithms on the 0-1 knapsack problem in up to 77.8% of instances. Then a Complementary Strategy Theorem is rigorously proven for applying a mixed strategy at the population level. The theorem asserts that, given two meta-heuristic algorithms using pure strategy 1 and pure strategy 2 respectively, pure strategy 2 being complementary to pure strategy 1 is both sufficient and necessary for the existence of a mixed strategy meta-heuristic, derived from these two pure strategies, whose expected number of generations to find an optimal solution is no more than that of pure strategy 1 for any initial population, and strictly less for some initial population.
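The population-level scheme described above can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the operator pool (single-bit flip vs. uniform bitwise flip), the OneMax objective, and the 50/50 operator probabilities are all our own choices.

```python
import random

def mixed_strategy_ea(fitness, init, operators, probs, max_gens=1000):
    """Mixed-strategy (1+1)-style meta-heuristic: each generation one
    operator is drawn from the pool with its probability and applied;
    the offspring replaces the parent only if at least as fit (elitism)."""
    x, fx = init, fitness(init)
    for _ in range(max_gens):
        op = random.choices(operators, weights=probs, k=1)[0]
        y = op(x)
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx

# Two pure strategies on bit strings: single-bit flip and uniform bitwise flip.
def flip_one(x):
    i = random.randrange(len(x))
    return x[:i] + (1 - x[i],) + x[i + 1:]

def flip_each(x):
    p = 1.0 / len(x)
    return tuple(1 - b if random.random() < p else b for b in x)

def onemax(x):
    return sum(x)

random.seed(0)
best, fit = mixed_strategy_ea(onemax, (0,) * 20, [flip_one, flip_each], [0.5, 0.5])
```

A pure-strategy run corresponds to `probs` placing all mass on one operator; the mixed run simply interpolates between the two operator pools.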
- Jan 07 2013 cs.SI physics.soc-ph arXiv:1301.0803v2 Missing-link prediction in undirected and unweighted networks is an open and challenging problem that has been studied intensively in recent years. In this paper, we study the relationship between community structure and link formation and propose a Fast Block probabilistic Model (FBM). Experiments on four real-world networks yield very good accuracy of missing-link prediction and a large improvement in computational efficiency compared to conventional methods. By analyzing the mechanism of link formation, we also discover that clique structure plays a significant role in understanding how links grow within communities. We therefore summarize three principles that are shown to explain well the mechanism of link formation and network evolution from the perspective of graph topology.
- Periodic control systems used in spacecraft and automobiles are usually period-driven and can be decomposed into different modes, with each mode representing a system state observed from outside. Such systems may also involve intensive computing in their modes. Although such control systems are widely used in these safety-critical embedded domains, the relevant industry lacks domain-specific formal modelling languages for them. To address this problem, we propose a formal visual modeling framework called mode diagram as a concise and precise way to specify and analyze such systems. To capture the temporal properties of periodic control systems, we provide, along with mode diagram, a property specification language based on interval logic for describing the concrete temporal requirements the engineers are concerned with. The statistical model checking technique can then be used to verify the mode diagram models against desired properties. To demonstrate the viability of our approach, we have applied our modelling framework to real-life case studies from industry and helped detect two design defects in spacecraft control systems.
- Network intrusion detection is the problem of detecting unauthorised use of, or access to, computer systems over a network. Two broad approaches exist to tackle this problem: anomaly detection and misuse detection. An anomaly detection system is trained only on examples of normal connections, and thus has the potential to detect novel attacks. However, many anomaly detection systems simply report the anomalous activity, rather than analysing it further in order to report higher-level information that is of more use to a security officer. On the other hand, misuse detection systems recognise known attack patterns, thereby allowing them to provide more detailed information about an intrusion. However, such systems cannot detect novel attacks. A hybrid system is presented in this paper with the aim of combining the advantages of both approaches. Specifically, anomalous network connections are initially detected using an artificial immune system. Connections that are flagged as anomalous are then categorised using a Kohonen Self Organising Map, allowing higher-level information, in the form of cluster membership, to be extracted. Experimental results on the KDD 1999 Cup dataset show a low false positive rate and detection and classification rates for Denial-of-Service and User-to-Root attacks that are higher than those reported in a sample of other works.
- Jul 24 2012 cs.NI arXiv:1207.5298v4 This paper investigates the fundamental building blocks of physical-layer network coding (PNC). Most prior work on PNC focused on its application in a simple two-way-relay channel (TWRC) consisting of three nodes only. Studies of the application of PNC in general networks are relatively few. This paper is an attempt to fill this gap. We put forth two ideas: 1) A general network can be decomposed into small building blocks of PNC, referred to as the PNC atoms, for scheduling of PNC transmissions. 2) We identify nine PNC atoms, with TWRC being one of them. Three major results are as follows. First, using the decomposition framework, the throughput performance of PNC is shown to be significantly better than those of the traditional multi-hop scheme and the conventional network coding scheme. For example, under heavy traffic volume, PNC can achieve 100% throughput gain relative to the traditional multi-hop scheme. Second, PNC decomposition based on a variety of different PNC atoms can yield much better performance than PNC decomposition based on the TWRC atom alone. Third, three out of the nine atoms are most important to good performance. Specifically, the decomposition based on these three atoms is good enough most of the time, and it is not necessary to use the other six atoms.
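The TWRC atom mentioned above is the best-known PNC building block: it completes a bidirectional exchange in two time slots instead of the four that plain store-and-forward needs. A minimal packet-level sketch (our simplification to bitwise XOR over integers; the paper works at the physical layer, where the relay maps superimposed signals directly to the XOR packet):

```python
def pnc_twrc(msg_a, msg_b):
    """Packet-level sketch of PNC on the two-way-relay channel (TWRC):
    the relay maps the two simultaneously received messages to their XOR
    and broadcasts it; each end node recovers the other's message by
    XOR-ing the broadcast with its own copy."""
    relayed = msg_a ^ msg_b   # slot 1: A and B transmit together; relay forms the XOR
    at_a = relayed ^ msg_a    # slot 2: relay broadcasts; A recovers B's message
    at_b = relayed ^ msg_b    #         ... and B recovers A's message
    return at_a, at_b
```

The scheduling question studied in the paper is how to tile a general network with this and the eight other atoms so that such two-slot exchanges can run concurrently.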
- Jul 18 2012 cs.FL arXiv:1207.3866v1 In this paper, we consider the problem of translating LTL formulas to Buechi automata. We first translate the given LTL formula into a special disjunctive normal form (DNF). The formula will be part of the state, and its DNF specifies the atomic properties that should hold immediately (labels of the transitions) and the formula that should hold afterwards (the corresponding successor state). Surprisingly, if the given formula is Until-free or Release-free, the Buechi automaton can be obtained directly in this manner. For a general formula, the construction is slightly more involved: an additional component is needed for each formula to help identify the set of accepting states. Notably, ours is an on-the-fly construction, and the resulting Buechi automaton has in the worst case 2^(2n+1) states, where n denotes the number of subformulas. Moreover, the bound improves to 2^(n+1) when the formula is Until-free (or Release-free).
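The DNF described above rests on the standard LTL unfolding law a U b == b or (a and X(a U b)). A minimal sketch of one expansion step for a positive fragment (the tuple encoding and function names are ours, not the paper's construction; negation and Release are omitted):

```python
def dnf(f):
    """Return a list of (now, next) pairs: `now` is a frozenset of atomic
    propositions that must hold immediately (the transition label), `next`
    the obligation deferred to the successor state."""
    op = f[0]
    if op == 'ap':                       # atomic proposition holds now
        return [(frozenset([f[1]]), ('tt',))]
    if op == 'tt':                       # true: no obligation
        return [(frozenset(), ('tt',))]
    if op == 'or':                       # union of the disjuncts
        return dnf(f[1]) + dnf(f[2])
    if op == 'and':                      # pairwise combination of disjuncts
        return [(n1 | n2, conj(x1, x2))
                for (n1, x1) in dnf(f[1]) for (n2, x2) in dnf(f[2])]
    if op == 'X':                        # next: defer the whole subformula
        return [(frozenset(), f[1])]
    if op == 'U':                        # f1 U f2 == f2 or (f1 and X(f1 U f2))
        return dnf(f[2]) + [(n, conj(x, f)) for (n, x) in dnf(f[1])]
    raise ValueError(op)

def conj(a, b):
    """Conjunction with trivial simplification."""
    if a == ('tt',):
        return b
    if b == ('tt',):
        return a
    return ('and', a, b)

# a U b expands to: {b holds now} or {a holds now, defer a U b}
pairs = dnf(('U', ('ap', 'a'), ('ap', 'b')))
```

Each pair yields one transition of the automaton under construction. Note how the Until case re-introduces the formula itself as a deferred obligation; this self-loop is why general formulas need the extra accepting-state component the abstract mentions, while Until-free formulas do not.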
- Periodic control systems used in spacecraft and automobiles are usually period-driven and can be decomposed into different modes, with each mode representing a system state observed from outside. Such systems may also involve intensive computing in their modes. Although such control systems are widely used in these safety-critical embedded domains, the relevant industry lacks domain-specific formal modelling languages for them. To address this problem, we propose a formal visual modeling framework called MDM as a concise and precise way to specify and analyze such systems. To capture the temporal properties of periodic control systems, we provide, along with MDM, a property specification language based on interval logic for describing the concrete temporal requirements the engineers are concerned with. The statistical model checking technique can then be used to verify the MDM models against desired properties. To demonstrate the viability of our approach, we have applied our modelling framework to real-life case studies from industry and helped detect two design defects in spacecraft control systems.
- Fast approximate nearest neighbor (NN) search in large databases is becoming popular, and several powerful learning-based formulations have been proposed recently. However, not much attention has been paid to a more fundamental question: how difficult is (approximate) nearest neighbor search in a given data set? Which data properties affect the difficulty of nearest neighbor search, and how? This paper introduces the first concrete measure, called Relative Contrast, that can be used to evaluate the influence of several crucial data characteristics such as dimensionality, sparsity, and database size simultaneously in arbitrary normed metric spaces. Moreover, we present a theoretical analysis of how the difficulty measure (relative contrast) determines the complexity of Locality-Sensitive Hashing, a popular approximate NN search method. Relative contrast also provides an explanation for a family of heuristic hashing algorithms with good practical performance based on PCA. Finally, we show that most previous measures of NN search meaningfulness/difficulty can be derived as special asymptotic cases of the proposed measure for dense vectors.
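Relative contrast can be computed directly from the informal description above: the ratio of the mean query-to-database distance to the nearest-neighbor distance. This is our reading of the abstract (the paper's exact definition may differ, e.g. in how the average over queries is taken); a ratio near 1 means the nearest neighbor is barely closer than a random point, i.e. NN search is hard.

```python
import math
import random

def relative_contrast(queries, database, dist):
    """Sketch of the relative-contrast measure: mean distance to the
    database divided by the nearest-neighbor distance, averaged over
    queries. Larger values indicate an easier NN search problem."""
    ratios = []
    for q in queries:
        ds = [dist(q, x) for x in database]
        d_min, d_mean = min(ds), sum(ds) / len(ds)
        if d_min > 0:
            ratios.append(d_mean / d_min)
    return sum(ratios) / len(ratios)

def l2(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def cloud(n, d):
    return [tuple(random.random() for _ in range(d)) for _ in range(n)]

random.seed(0)
cr_low = relative_contrast(cloud(20, 2), cloud(500, 2), l2)      # 2-D data
cr_high = relative_contrast(cloud(20, 200), cloud(500, 200), l2) # 200-D data
```

On i.i.d. uniform data, distances concentrate as the dimension grows, so the ratio shrinks toward 1: the high-dimensional cloud is the harder search problem, which is exactly the curse-of-dimensionality effect the measure is meant to quantify.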
- Mar 29 2012 cs.NE arXiv:1203.6286v5 The hardness of fitness functions is an important research topic in the field of evolutionary computation. In theory, the study can help in understanding the capability of evolutionary algorithms; in practice, it may provide guidelines for the design of benchmarks. The aim of this paper is to answer the following research questions: given a fitness function class, which functions are the easiest with respect to an evolutionary algorithm? Which are the hardest? How are these functions constructed? The paper provides theoretical answers to these questions. The easiest and hardest fitness functions are constructed for an elitist (1+1) evolutionary algorithm maximising a class of fitness functions with the same optima. It is demonstrated that unimodal functions are the easiest and deceptive functions the hardest in terms of the time-fitness landscape. The paper also reveals that the easiest fitness function for one algorithm may become the hardest for another, and vice versa.
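The unimodal-easiest vs. deceptive-hardest dichotomy above can be illustrated with two toy fitness functions sharing the unique optimum 1^n (these are standard textbook examples chosen by us; the paper's exact constructions may differ):

```python
def onemax(x):
    """Unimodal: every additional 1-bit increases fitness, so local
    information leads a (1+1) EA straight to the optimum 1^n
    (the 'easiest' shape)."""
    return sum(x)

def trap(x):
    """Deceptive: among non-optimal strings, fitness rewards zeros, so
    local information pulls the search away from the global optimum 1^n
    (the 'hardest' shape)."""
    n, ones = len(x), sum(x)
    return n + 1 if ones == n else n - 1 - ones

# Both functions have the same unique optimum 1^n, matching the class of
# fitness functions with identical optima studied in the abstract.
optimum = (1, 1, 1, 1, 1)
```

Under `trap`, the elitist (1+1) EA is driven toward the all-zeros string, the point farthest from the optimum in Hamming distance, which is what makes deceptive functions the hard extreme of the time-fitness landscape.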
- Feb 09 2012 cs.NE arXiv:1202.1708v2 Hybrid evolutionary algorithms, i.e., heuristic search algorithms combining several mutation operators, some of which stochastically implement a well-known technique designed for the specific problem in question while others play the role of random search, have nowadays become rather popular for tackling various NP-hard optimization problems. While empirical studies demonstrate that hybrid evolutionary algorithms are frequently successful at finding solutions with fitness sufficiently close to the optimum, far fewer articles address their computational complexity in a mathematically rigorous fashion. This paper is devoted to a mathematically motivated design and analysis of a parameterized family of evolutionary algorithms that provides a polynomial time approximation scheme for one of the well-known NP-hard combinatorial optimization problems, namely the "single machine scheduling problem without precedence constraints". The authors hope that the techniques and ideas developed in this article may be applied in many other situations.
- Dec 08 2011 cs.NE arXiv:1112.1517v4 Mixed strategy EAs aim to integrate several mutation operators into a single algorithm. However, little theoretical analysis has been done to answer the question of whether and when the performance of mixed strategy EAs is better than that of pure strategy EAs. In theory, the performance of EAs can be measured by the asymptotic convergence rate and the asymptotic hitting time. In this paper, it is proven that given a mixed strategy (1+1) EA consisting of several mutation operators, its performance (asymptotic convergence rate and asymptotic hitting time) is no worse than that of the worst pure strategy (1+1) EA using one of these mutation operators; furthermore, if the mutation operators are mutually complementary, then it is possible to design a mixed strategy (1+1) EA whose performance is better than that of any pure strategy (1+1) EA using a single mutation operator.
- This paper presents GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm), an efficient and robust online algorithm for tracking subspaces from highly incomplete information. The algorithm uses a robust $l^1$-norm cost function in order to estimate and track non-stationary subspaces when the streaming data vectors are corrupted with outliers. We apply GRASTA to the problems of robust matrix completion and real-time separation of background from foreground in video. In this second application, we show that GRASTA performs high-quality separation of moving objects from background at exceptional speeds: In one popular benchmark video example, GRASTA achieves a rate of 57 frames per second, even when run in MATLAB on a personal laptop.
- Aug 24 2011 cs.NE arXiv:1108.4531v4 Population-based evolutionary algorithms (EAs) have been widely applied to solve various optimization problems. The question of how the performance of a population-based EA depends on the population size arises naturally. The performance of an EA may be evaluated by different measures, such as the average convergence rate to the optimal set per generation or the expected number of generations to encounter an optimal solution for the first time. Population scalability is the performance ratio between a benchmark EA and another EA using identical genetic operators but a larger population size. Although intuitively the performance of an EA may improve as its population size increases, currently there exist only a few case studies for simple fitness functions. This paper aims at providing a general study for discrete optimisation. A novel approach is introduced to analyse population scalability using the fundamental matrix. The following two contributions summarize the major results of the current article. (1) We demonstrate rigorously that for elitist EAs with identical global mutation, using a larger population size always increases the average rate of convergence to the optimal set; and yet, sometimes, the expected number of generations needed to find an optimal solution (measured by either the maximal value or the average value) may increase, rather than decrease. (2) We establish sufficient and/or necessary conditions for superlinear scalability, that is, when the average convergence rate of a $(\mu+\mu)$ EA (where $\mu\ge2$) is more than $\mu$ times that of a $(1+1)$ EA.