results for au:Guo_H in:cs

- Oct 17 2017 cs.LG arXiv:1710.05128v1 Parametric embedding methods such as parametric t-SNE (pt-SNE) have been widely adopted for data visualization and out-of-sample data embedding without further computationally expensive optimization or approximation. However, the performance of pt-SNE is highly sensitive to the batch-size hyper-parameter due to conflicting optimization goals, and pt-SNE often produces dramatically different embeddings for different choices of user-defined perplexity. To effectively solve these issues, we present parametric t-distributed stochastic exemplar-centered embedding methods. Our strategy learns embedding parameters by comparing given data only with precomputed exemplars, resulting in a cost function with linear computational and memory complexity, which is further reduced by noise contrastive samples. Moreover, we propose a shallow embedding network with high-order feature interactions for data visualization, which is much easier to tune yet produces performance comparable to that of the deep neural network employed by pt-SNE. We empirically demonstrate, using several benchmark datasets, that our proposed methods significantly outperform pt-SNE in terms of robustness, visual effects, and quantitative evaluations.
- Oct 12 2017 cs.NE arXiv:1710.04036v1 In this paper, we extend a bio-inspired algorithm called the porcellio scaber algorithm (PSA) to solve constrained optimization problems, including a constrained mixed discrete-continuous nonlinear optimization problem. Our extensive experimental results on benchmark optimization problems show that the PSA outperforms many existing methods and algorithms. The results indicate that the PSA is a promising algorithm for constrained optimization.
- Oct 09 2017 cs.CV arXiv:1710.02213v1 Video denoising refers to the problem of removing "noise" from a video sequence. Here the term "noise" is used in a broad sense to refer to any corruption, outlier, or interference that is not the quantity of interest. In this work, we develop a novel approach to video denoising based on the idea that many noisy or corrupted videos can be split into three parts: the "low-rank layer", the "sparse layer", and a small, bounded residual. We show, using extensive experiments, that our denoising approach outperforms state-of-the-art denoising algorithms.
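The three-way split described in this abstract can be sketched numerically. The following is a hypothetical illustration (not the authors' algorithm): the low-rank layer is taken from a truncated SVD, the sparse layer collects large entries of the remainder, and whatever is left is the small, bounded residual. The rank and threshold values are made up for the example.

```python
import numpy as np

def three_layer_split(M, rank, sparse_thresh):
    # low-rank layer: best rank-k approximation via truncated SVD
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    R = M - L
    # sparse layer: entries of the remainder above the threshold
    S = np.where(np.abs(R) > sparse_thresh, R, 0.0)
    # residual: bounded by the threshold by construction
    E = R - S
    return L, S, E

rng = np.random.default_rng(0)
base = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 60))  # rank-5 "clean" video matrix
noisy = base.copy()
noisy[3, 7] += 50.0                                                 # a sparse outlier
L, S, E = three_layer_split(noisy, rank=5, sparse_thresh=1.0)
print(np.abs(E).max() <= 1.0)  # → True
```

The decomposition reconstructs the input exactly (L + S + E equals the noisy matrix), and the residual is bounded by the chosen threshold.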
- Imitation learning holds the promise of addressing challenging robotic tasks such as autonomous navigation. However, it requires a human supervisor to oversee the training process and send correct control commands to the robot without feedback, which is error-prone and expensive. To minimize human involvement and avoid manually labeling data in robotic autonomous navigation with imitation learning, this paper proposes a novel semi-supervised imitation learning solution based on a multi-sensory design. The solution includes a suboptimal sensor policy based on sensor fusion that automatically labels states encountered by the robot, avoiding human supervision during training. In addition, a recording policy is developed to mitigate the adverse effect of learning too much from the suboptimal sensor policy. This solution allows the robot to learn a navigation policy in a self-supervised manner. In extensive experiments in indoor environments, the solution achieves near-human performance in most tasks and even surpasses human performance in cases of unexpected events such as hardware failures or human operation errors. To the best of our knowledge, this is the first work that synthesizes sensor fusion and imitation learning to enable robotic autonomous navigation in the real world without human supervision.
- Sep 26 2017 cs.DS arXiv:1709.08561v1 Gorodezky and Pak (Random Struct. Algorithms, 2014) introduced a "cluster-popping" algorithm for sampling root-connected subgraphs in a directed graph, and conjectured that it runs in expected polynomial time on bi-directed graphs. We confirm their conjecture. It follows that there is a fully polynomial-time randomized approximation scheme (FPRAS) for reachability in bi-directed graphs. Reachability is the probability that, assuming each arc fails independently, all vertices can reach a special root vertex in the remaining graph. A bi-directed graph is one in which each directed arc has a parallel twin oriented in the opposite sense.
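The reachability quantity defined above can be computed exactly on a toy bi-directed graph by enumerating all arc-failure patterns. This brute-force sketch is only an illustration of the definition; the paper's contribution is an efficient sampler, and the graph and failure probability here are made up.

```python
import itertools

def reachability(n, arcs, root, p_fail):
    # sum, over all arc survival patterns, the probability of patterns
    # in which every vertex can reach the root along surviving arcs
    total = 0.0
    for pattern in itertools.product([0, 1], repeat=len(arcs)):  # 1 = arc survives
        prob = 1.0
        alive = []
        for bit, arc in zip(pattern, arcs):
            prob *= (1 - p_fail) if bit else p_fail
            if bit:
                alive.append(arc)
        reach = {root}
        changed = True
        while changed:  # fixpoint: u reaches root if some alive arc (u, v) has v reaching root
            changed = False
            for u, v in alive:
                if v in reach and u not in reach:
                    reach.add(u)
                    changed = True
        if len(reach) == n:
            total += prob
    return total

# bi-directed triangle: each undirected edge becomes a pair of opposite arcs
edges = [(0, 1), (1, 2), (0, 2)]
arcs = edges + [(v, u) for u, v in edges]
print(round(reachability(3, arcs, root=0, p_fail=0.5), 4))
```

With no failures the reachability is 1, and with certain failures it is 0; intermediate failure probabilities give a value strictly between.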
- Sep 18 2017 cs.SE arXiv:1709.04986v1 Automated synthesis of reactive systems from specifications has been a topic of research for decades. Recently, a variety of approaches have been proposed to extend synthesis of reactive systems from propositional specifications towards specifications over rich theories. Such approaches include inductive synthesis, template-based synthesis, counterexample-guided synthesis, and predicate abstraction techniques. In this paper, we propose a novel approach to program synthesis based on the validity of forall-exists formulas. The approach is inspired by verification techniques that construct inductive invariants, like IC3 / Property Directed Reachability, and is completely automated. The original problem space is recursively refined by blocking out regions of unsafe states, with the goal being the discovery of a fixpoint that describes safe reactions. If such a fixpoint is found, we construct a witness that is directly translated into an implementation. We have implemented the algorithm in the JKIND model checker, and exercised it against contracts written using the Lustre specification language. Experimental results show how the new algorithm yields better performance as well as soundness for "unrealizable" results when compared to JKIND's existing synthesis procedure, an approach based on the construction of k-inductive proofs of realizability.
- Aug 14 2017 cs.CV arXiv:1708.03416v1 Hand pose estimation from a single depth image is an essential topic in computer vision and human-computer interaction. Despite recent advancements in this area driven by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. The proposed method extracts regions from the feature maps of a convolutional neural network under the guidance of an initially estimated pose, generating more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of the hand joints by employing tree-structured fully connected layers. A refined estimate of the hand pose is directly regressed by the proposed network, and the final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.
- Aug 11 2017 cs.CV arXiv:1708.03278v1 Dynamic hand gesture recognition has attracted increasing interest because of its importance for human-computer interaction. In this paper, we propose a new motion-feature-augmented recurrent neural network for skeleton-based dynamic hand gesture recognition. Finger motion features are extracted to describe finger movements, and global motion features are utilized to represent the global movement of the hand skeleton. These motion features are then fed into a bidirectional recurrent neural network (RNN) along with the skeleton sequence, which can augment the motion features for the RNN and improve the classification performance. Experiments demonstrate that our proposed method is effective and outperforms state-of-the-art methods.
- Because of the vast volume of data being produced by today's scientific simulations and experiments, lossy data compressors that allow user-controlled loss of accuracy during compression are a relevant solution for significantly reducing data size. However, lossy compressor developers and users lack a tool to explore the features of scientific datasets and understand data alteration after compression in a systematic and reliable way. To address this gap, we have designed and implemented a generic framework called Z-checker. On the one hand, Z-checker combines a battery of data analysis components for data compression. On the other hand, Z-checker is implemented as an open-source community tool to which users and developers can contribute and add new analysis components based on their additional analysis demands. In this paper, we present a survey of existing lossy compressors. Then we describe the design framework of Z-checker, in which we integrated evaluation metrics proposed in prior work as well as other analysis tools. Specifically, for lossy compressor developers, Z-checker can be used to characterize critical properties of any dataset to improve compression strategies. For lossy compression users, Z-checker can assess compression quality, providing various global distortion analyses comparing the original data with the decompressed data, as well as statistical analysis of the compression error. Z-checker can perform the analysis with either coarse or fine granularity, such that users and developers can select the best-fit, adaptive compressors for different parts of the dataset. Z-checker features a visualization interface displaying all analysis results in addition to basic views of the datasets, such as time series. To the best of our knowledge, Z-checker is the first tool designed to comprehensively assess lossy compression for scientific datasets.
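A minimal sketch of the kind of global distortion statistics described above: maximum pointwise error, RMSE, and range-based PSNR between original and decompressed data. These metric choices are illustrative of common lossy-compression checks, not Z-checker's actual API, and the toy "compressor" here is simply rounding.

```python
import numpy as np

def distortion_stats(original, decompressed):
    # global distortion statistics comparing original and decompressed data
    diff = original - decompressed
    max_err = np.abs(diff).max()
    rmse = np.sqrt(np.mean(diff ** 2))
    value_range = original.max() - original.min()
    psnr = 20 * np.log10(value_range / rmse) if rmse > 0 else float("inf")
    return max_err, rmse, psnr

orig = np.linspace(0.0, 1.0, 101)
lossy = np.round(orig, 1)          # crude stand-in "compressor": keep one decimal digit
max_err, rmse, psnr = distortion_stats(orig, lossy)
print(max_err <= 0.05 + 1e-9, psnr > 20)  # → True True
```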
- Jul 25 2017 cs.CV arXiv:1707.07248v1 3D hand pose estimation from a single depth image is an important and challenging problem for human-computer interaction. Recently, deep convolutional networks (ConvNets) with sophisticated designs have been employed to address it, but the improvement over traditional random-forest-based methods is not so apparent. To exploit good practice and promote the performance of hand pose estimation, we propose a tree-structured Region Ensemble Network (REN) for direct 3D coordinate regression. It first partitions the last convolution outputs of the ConvNet into several grid regions. The results from separate fully-connected (FC) regressors on each region are then integrated by another FC layer to perform the estimation. By exploiting several training strategies, including data augmentation and the smooth $L_1$ loss, the proposed REN significantly improves the ability of the ConvNet to localize hand joints. The experimental results demonstrate that our approach achieves the best performance among state-of-the-art algorithms on three public hand pose datasets. We also apply our method to fingertip detection and human pose datasets and obtain state-of-the-art accuracy.
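The region-ensemble idea described above can be sketched schematically: the final convolutional feature map is partitioned into grid regions, each region feeds its own regressor, and the per-region outputs are integrated by a final layer. All shapes and weights below are random placeholders for illustration, not a trained network.

```python
import numpy as np

rng = np.random.default_rng(4)
feat = rng.standard_normal((8, 8, 16))                 # toy conv feature map (H, W, C)

# partition the feature map into a 2x2 grid of 4x4 regions
regions = [feat[i:i + 4, j:j + 4].ravel()
           for i in (0, 4) for j in (0, 4)]

# separate FC regressor (here a random linear map + ReLU) per region
region_W = [rng.standard_normal((regions[0].size, 32)) for _ in regions]
region_out = [np.maximum(r @ W, 0.0) for r, W in zip(regions, region_W)]

# integrate all region outputs with a final FC layer: 21 joints x 3D coordinates
combine_W = rng.standard_normal((32 * len(regions), 3 * 21))
joints = np.concatenate(region_out) @ combine_W
print(joints.shape)  # → (63,)
```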
- While natural languages are compositional, how state-of-the-art neural models achieve compositionality is still unclear. We propose a deep network, which not only achieves competitive accuracy for text classification, but also exhibits compositional behavior. That is, while creating hierarchical representations of a piece of text, such as a sentence, the lower layers of the network distribute their layer-specific attention weights to individual words. In contrast, the higher layers compose meaningful phrases and clauses, whose lengths increase as the networks get deeper until fully composing the sentence.
- In this paper, we introduce a generalized value iteration network (GVIN), which is an end-to-end neural network planning module. GVIN emulates the value iteration algorithm by using a novel graph convolution operator, which enables GVIN to learn and plan on irregular spatial graphs. We propose three novel differentiable kernels as graph convolution operators and show that the embedding based kernel achieves the best performance. We further propose episodic Q-learning, an improvement upon traditional n-step Q-learning that stabilizes training for networks that contain a planning module. Lastly, we evaluate GVIN on planning problems in 2D mazes, irregular graphs, and real-world street networks, showing that GVIN generalizes well for both arbitrary graphs and unseen graphs of larger scale and outperforms a naive generalization of VIN (discretizing a spatial graph into a 2D image).
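GVIN emulates value iteration on irregular graphs with a learned propagation operator. As a point of reference, here is the hand-coded value iteration it generalizes, run on a small irregular graph with unit step costs and a goal vertex; the graph is made up for the example.

```python
def value_iteration(adj, goal, iters=50):
    # synchronous value iteration: each vertex takes the best neighbor value + 1
    INF = float("inf")
    V = {v: (0.0 if v == goal else INF) for v in adj}
    for _ in range(iters):
        V = {v: (0.0 if v == goal else
                 min((V[u] + 1 for u in adj[v] if V[u] < INF), default=INF))
             for v in adj}
    return V

# irregular graph: edges 0-1, 1-2, 2-3, 0-3, 3-4
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0, 4], 4: [3]}
V = value_iteration(adj, goal=4)
print(V[0])  # → 2.0 (shortest path 0-3-4)
```

GVIN's graph-convolution kernels, in effect, learn the `min` / `+1` propagation above from data rather than hard-coding it.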
- For Markov chain Monte Carlo methods, one of the greatest discrepancies between theory and practice is the scan order: while most theoretical development on mixing time analysis deals with random updates, real-world systems are implemented with systematic scans. We bridge this gap for models that exhibit a bipartite structure, including, most notably, the Restricted/Deep Boltzmann Machine. The de facto implementation for these models scans variables in a layerwise fashion. We show that the Gibbs sampler with a layerwise alternating scan order has a relaxation time (in terms of epochs) no larger than that of a random-update Gibbs sampler (in terms of variable updates). We also construct examples to show that this bound is asymptotically tight. Through standard inequalities, our result also implies a comparison of the mixing times.
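The layerwise alternating scan discussed above exploits the bipartite structure: in an RBM, all hidden units are conditionally independent given the visibles and vice versa, so each layer can be resampled as one block per half-epoch. A minimal sketch on a toy RBM (random weights, no training; the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3)) * 0.1   # visible-hidden weights of a toy RBM (no biases)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layerwise_gibbs(v, steps):
    # one epoch = one block update of the hidden layer, then of the visible layer
    for _ in range(steps):
        h = (rng.random(3) < sigmoid(v @ W)).astype(float)    # sample all hiddens jointly
        v = (rng.random(4) < sigmoid(h @ W.T)).astype(float)  # sample all visibles jointly
    return v

v = layerwise_gibbs(np.ones(4), steps=100)
print(set(v.tolist()) <= {0.0, 1.0})  # → True (states stay binary)
```

A random-update sampler would instead pick one of the 7 units uniformly per step; the paper's result compares the relaxation times of these two schemes.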
- We show that the Clifford gates and stabilizer circuits in the quantum computing literature, which admit efficient classical simulation, are equivalent to affine signatures under a unitary condition. The latter is a known class of tractable functions under the Holant framework.
- We propose a multi-view network for text classification. Our method automatically creates various views of its input text, each taking the form of soft attention weights that distribute the classifier's focus among a set of base features. For a bag-of-words representation, each view focuses on a different subset of the text's words. Aggregating many such views results in a more discriminative and robust representation. Through a novel architecture that both stacks and concatenates views, we produce a network that emphasizes both depth and width, allowing training to converge quickly. Using our multi-view architecture, we establish new state-of-the-art accuracies on two benchmark tasks.
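A toy sketch of one attention "view" over a bag-of-words input as described above: soft attention weights (a softmax over per-word scores) distribute the classifier's focus among base word features, and the view is the attention-weighted average. The scores and embeddings below are random placeholders, not learned parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
word_vecs = rng.standard_normal((6, 8))        # 6 words, 8-dim base features

def attention_view(word_vecs, score_w):
    scores = word_vecs @ score_w               # one scalar score per word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention over the words
    return weights, weights @ word_vecs        # view = attention-weighted average

weights, view = attention_view(word_vecs, rng.standard_normal(8))
print(round(float(weights.sum()), 6), view.shape)  # → 1.0 (8,)
```

The proposed architecture aggregates many such views (different score vectors), both stacked in depth and concatenated in width.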
- Mar 28 2017 cs.SY arXiv:1703.08906v1In the last few decades, the development of miniature biological sensors that can detect and measure different phenomena at the nanoscale has led to transformative disease diagnosis and treatment techniques. Among others, biofunctional Raman nanoparticles have been utilized in vitro and in vivo for multiplexed diagnosis and detection of different biological agents. However, existing solutions require the use of bulky lasers to excite the nanoparticles and similarly bulky and expensive spectrometers to measure the scattered Raman signals, which limit the practicality and applications of this nano-biosensing technique. In addition, due to the high path loss of the intra-body environment, the received signals are usually very weak, which hampers the accuracy of the measurements. In this paper, the concept of cooperative Raman spectrum reconstruction for real-time in vivo nano-biosensing is presented for the first time. The fundamental idea is to replace the single excitation and measurement points (i.e., the laser and the spectrometer, respectively) by a network of interconnected nano-devices that can simultaneously excite and measure nano-biosensing particles. More specifically, in the proposed system a large number of nanosensors jointly and distributively collect the Raman response of nano-biofunctional nanoparticles (NBPs) traveling through the blood vessels. This paper presents a detailed description of the sensing system and, more importantly, proves its feasibility, by utilizing accurate models of optical signal propagation in intra-body environment and low-complexity estimation algorithms. The numerical results show that with a certain density of NBPs, the reconstructed Raman spectrum can be recovered and utilized to accurately extract the targeting intra-body information.
- Learning sophisticated feature interactions behind user behaviors is critical for maximizing click-through rate (CTR) in recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expert feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need for feature engineering besides raw features. Comprehensive experiments demonstrate the effectiveness and efficiency of DeepFM over existing models for CTR prediction, on both benchmark data and commercial data.
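The factorization-machine part that DeepFM builds on scores pairwise (low-order) interactions via low-rank factor vectors, using the identity $\sum_{i<j} \langle v_i, v_j\rangle x_i x_j = \tfrac12\big(\lVert\sum_i v_i x_i\rVert^2 - \sum_i \lVert v_i x_i\rVert^2\big)$, which avoids the explicit double loop. A sketch of just this component, with made-up factor vectors:

```python
import numpy as np

def fm_pairwise(V, x):
    # FM second-order term in O(n*k) via the sum-of-squares identity
    xv = V * x[:, None]                       # each feature's scaled factor vector
    return 0.5 * float(np.sum(xv.sum(axis=0) ** 2) - np.sum(xv ** 2))

V = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])  # 3 features, 2 latent factors
x = np.array([1.0, 1.0, 1.0])
# brute-force check of the identity over all pairs
brute = sum(float(V[i] @ V[j]) for i in range(3) for j in range(i + 1, 3))
print(fm_pairwise(V, x), brute)  # → 3.0 3.0
```

In DeepFM this term shares the same embedding input `V` with the deep component, which models the high-order interactions.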
- In this paper, we study the performance of initial access beamforming schemes in cases with a large but finite number of transmit antennas and users. In particular, we develop an efficient beamforming scheme using genetic algorithms. Moreover, taking millimeter-wave communication characteristics and different metrics into account, we investigate the effect of various parameters, such as the number of antennas/receivers, beamforming resolution, and hardware impairments, on system performance. As shown, our proposed algorithm is generic in the sense that it can be effectively applied with different channel models, metrics, and beamforming methods. Our results also indicate that the proposed scheme can reach (almost) the same end-to-end throughput as the exhaustive-search-based optimal approach with considerably less implementation complexity.
- Emojis, as a new way of conveying nonverbal cues, are widely adopted in computer-mediated communication. In this paper, first from a message sender's perspective, we focus on people's motives for using four types of emojis: positive, neutral, negative, and non-facial. We compare the willingness levels of using these emoji types for seven typical intentions that people usually apply nonverbal cues for in communication. The results of extensive statistical hypothesis tests not only report the popularity of the intentions, but also uncover subtle differences between emoji types in terms of intended uses. Second, from the perspective of message recipients, we further study the sentiment effects of emojis, as well as their duplications, on verbal messages. Unlike previous studies of emoji sentiment, we study the sentiments of emojis and their contexts as a whole. The experimental results indicate that the power to convey sentiment differs across the four emoji types, and that the sentiment effects of emojis vary with the valence of the context.
- Metric learning methods for dimensionality reduction in combination with k-Nearest Neighbors (kNN) have been extensively deployed in many classification, data embedding, and information retrieval applications. However, most of these approaches involve pairwise training data comparisons, and thus have quadratic computational complexity with respect to the size of the training set, preventing them from scaling to fairly big datasets. Moreover, during testing, comparing test data against all the training data points is also expensive in terms of both computational cost and resources required. Furthermore, previous metrics are either too constrained or too expressive to be well learned. To effectively solve these issues, we present an exemplar-centered supervised shallow parametric data embedding model, using a Maximally Collapsing Metric Learning (MCML) objective. Our strategy learns a shallow high-order parametric embedding function and compares training/test data only with learned or precomputed exemplars, resulting in a cost function with linear computational complexity for both training and testing. We also empirically demonstrate, using several benchmark datasets, that for classification in two-dimensional embedding space, our approach not only speeds up kNN by hundreds of times, but also outperforms state-of-the-art supervised embedding approaches.
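The exemplar-centered idea above reduces comparison cost from O(n) per query (all training points) to O(m) (a small set of exemplars). A minimal sketch, with per-class means standing in for the learned exemplars and Euclidean distance standing in for the learned metric; the data are made up:

```python
import numpy as np

def nearest_exemplar_label(query, exemplars, labels):
    # classify by nearest exemplar instead of nearest training point
    d = np.linalg.norm(exemplars - query, axis=1)
    return labels[int(np.argmin(d))]

# two classes; exemplars are the per-class means of the training data
train = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
exemplars = np.array([train[y == c].mean(axis=0) for c in (0, 1)])
print(nearest_exemplar_label(np.array([0.3, 0.0]), exemplars, (0, 1)))  # → 0
```

In the paper the embedding function and exemplars are learned jointly under the MCML objective; the point of the sketch is only the linear-versus-quadratic comparison structure.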
- Feb 16 2017 cs.CV arXiv:1702.04517v2 Convective storm nowcasting has attracted substantial attention in various fields. Existing methods under a deep learning framework rely primarily on radar data. Although they nowcast storm advection well, it is still challenging to nowcast storm initiation and growth, due to the limitations of radar observations. This paper describes the first attempt to nowcast storm initiation, growth, and advection simultaneously under a deep learning framework using multi-source meteorological data. To this end, we present a multi-channel 3D-cube successive convolution network (3D-SCN). As real-time re-analysis meteorological data can now provide valuable atmospheric boundary layer thermodynamic information, which is essential to predict storm initiation and growth, both raw 3D radar and re-analysis data are used directly, without any handcrafted feature engineering. These data are formulated as multi-channel 3D cubes, to be fed into our network, where they are convolved by cross-channel 3D convolutions. By stacking successive convolutional layers without pooling, we build an end-to-end trainable model for nowcasting. Experimental results show that deep learning methods achieve better performance than traditional extrapolation methods. Qualitative analyses of the 3D-SCN show encouraging nowcasting results for storm initiation, growth, and advection.
- Feb 09 2017 cs.CV arXiv:1702.02447v2 Hand pose estimation from monocular depth images is an important and challenging problem for human-computer interaction. Recently, deep convolutional networks (ConvNets) with sophisticated designs have been employed to address it, but the improvement over traditional methods is not so apparent. To promote the performance of direct 3D coordinate regression, we propose a tree-structured Region Ensemble Network (REN), which partitions the convolution outputs into regions and integrates the results from multiple regressors on each region. Compared with multi-model ensembles, our model is trained completely end-to-end. The experimental results demonstrate that our approach achieves the best performance among state-of-the-art methods on two public datasets.
- Jan 24 2017 cs.SI arXiv:1701.06250v2 The 2016 U.S. presidential election has witnessed the major role of Twitter in the year's most important political event. Candidates used this social media platform extensively for online campaigns. Meanwhile, social media has been filled with rumors, which might have had huge impacts on voters' decisions. In this paper, we present a thorough analysis of rumor tweets from the followers of two presidential candidates: Hillary Clinton and Donald Trump. To overcome the difficulty of labeling a large amount of tweets as training data, we detect rumor tweets by matching them with verified rumor articles. We analyze over 8 million tweets collected from the followers of the two candidates. Our results provide answers to several primary concerns about rumors in this election, including: which side of the followers posted the most rumors, who posted these rumors, what rumors they posted, and when they posted these rumors. The insights of this paper can help us understand the online rumor behaviors in American politics.
- In this paper, we examine physical layer security for cooperative wireless networks with multiple intermediate nodes, where the decode-and-forward (DF) protocol is considered. We propose a new joint relay and jammer selection (JRJS) scheme for protecting wireless communications against eavesdropping, in which one intermediate node is selected as the relay to forward the source signal to the destination, while the remaining intermediate nodes act as friendly jammers that broadcast artificial noise to disturb the eavesdropper. We further investigate power allocation among the source, relay, and friendly jammers for maximizing the secrecy rate of the proposed JRJS scheme and derive a closed-form sub-optimal solution. Specifically, all intermediate nodes that successfully decode the source signal are considered relay candidates. For each candidate, we derive the sub-optimal closed-form power allocation solution and obtain the secrecy rate of the corresponding JRJS scheme. Then, the candidate capable of achieving the highest secrecy rate is selected as the relay. Two assumptions about the channel state information (CSI), namely full CSI (FCSI) and partial CSI (PCSI), are considered. Simulation results show that the proposed JRJS scheme outperforms conventional pure relay selection, pure jamming, and GSVD-based beamforming schemes in terms of secrecy rate. Additionally, the proposed FCSI-based power allocation (FCSI-PA) and PCSI-based power allocation (PCSI-PA) schemes both achieve higher secrecy rates than the equal power allocation (EPA) scheme.
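The selection criterion described above can be illustrated with the standard secrecy-rate expression $R_s = [\log_2(1+\mathrm{SNR}_d) - \log_2(1+\mathrm{SNR}_e)]^+$: among candidates, pick the one maximizing $R_s$. The SNR numbers below are made up for the example and do not model the paper's power allocation.

```python
import math

def secrecy_rate(snr_d, snr_e):
    # secrecy rate: destination capacity minus eavesdropper capacity, floored at 0
    return max(0.0, math.log2(1 + snr_d) - math.log2(1 + snr_e))

# hypothetical candidates: (SNR at destination, SNR at eavesdropper)
candidates = {"relay1": (15.0, 3.0), "relay2": (7.0, 0.5), "relay3": (31.0, 31.0)}
best = max(candidates, key=lambda r: secrecy_rate(*candidates[r]))
print(best, round(secrecy_rate(*candidates[best]), 3))  # → relay2 2.415
```

Note that the raw destination SNR alone would pick `relay3`, whose secrecy rate is zero; the criterion trades off the two links.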
- Dec 26 2016 cs.CV arXiv:1612.07978v1 Accurate detection of fingertips in depth images is critical for human-computer interaction. In this paper, we present a novel two-stream convolutional neural network (CNN) for RGB-D fingertip detection. First, an edge image is extracted from the raw depth image using a random forest. The edge information is then combined with depth information in our CNN structure. We study several fusion approaches and identify a slow fusion strategy as a promising way of detecting fingertips. As shown in our experiments, our real-time algorithm outperforms state-of-the-art fingertip detection methods on the public HandNet dataset, with an average 3D error of 9.9 mm, and shows comparable fingertip estimation accuracy on the NYU hand dataset.
- Nov 30 2016 cs.IR arXiv:1611.09496v1 It is well known that learning customers' preferences and making recommendations to them in today's information-exploded environment is critical and non-trivial in an online system. There are two different modes of recommendation systems, namely pull mode and push mode. The majority of recommendation systems are pull-mode, recommending items to users only when and after users enter the Application Market, while push mode works more actively to enhance or re-build the connection between the Application Market and users. As one of the most successful phone manufacturers, Huawei has seen both the number of users and the number of apps increase dramatically in its Application Store (also named Hispace Store), which had approximately 0.3 billion registered users and 1.2 million apps as of 2016, with the number of users still growing rapidly. For the needs of this real scenario, we establish a Push Service Platform (PSP for short) to discover the target user group automatically from web-scale user operation log data with an additional small set of labelled apps (usually around 10 apps) in Hispace Store. As presented in this work, PSP includes a distributed storage layer, an application layer, and an evaluation layer. In the application layer, we design a practical graph-based algorithm (named A-PARW) for user group discovery, which is an approximate version of partially absorbing random walk. Based on mode I of A-PARW, the effectiveness of our system is significantly improved compared to the predecessor of the presented system, which uses Personalized PageRank in its application layer.
- We propose a new algorithmic framework, called "partial rejection sampling", to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds (perhaps surprising) new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms such as the "cycle-popping" algorithm for rooted spanning trees by Wilson. Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences.
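A hypothetical sketch of the partial-rejection mechanism: instead of restarting the whole sample when a bad event occurs, only the variables involved in occurring bad events are resampled. (For general instances extra care is needed; this only illustrates the mechanism on a toy constraint system with made-up bad events.)

```python
import random

random.seed(0)
bad_events = [(0, 1), (1, 2), (2, 3)]  # bad event: both variables in the pair are 1

def partial_rejection_sample(n):
    # draw from the product distribution, conditioned on no bad event occurring
    x = [random.randint(0, 1) for _ in range(n)]
    while True:
        occurring = [e for e in bad_events if all(x[i] == 1 for i in e)]
        if not occurring:
            return x
        # resample only the variables involved in occurring bad events
        for i in set(v for e in occurring for v in e):
            x[i] = random.randint(0, 1)

sample = partial_rejection_sample(4)
print(all(not (sample[i] == 1 and sample[j] == 1) for i, j in bad_events))  # → True
```

The "cycle-popping" algorithm for spanning trees mentioned above has the same shape: resample only the variables on detected cycles.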
- Given a matrix of observed data, Principal Components Analysis (PCA) computes a small number of orthogonal directions that contain most of its variability. Provably accurate solutions for PCA have been in use for a long time. However, to the best of our knowledge, all existing theoretical guarantees for it assume that the data and the corrupting noise are mutually independent, or at least uncorrelated. This is valid in practice often, but not always. In this paper, we study the PCA problem in the setting where the data and noise can be correlated. Such noise is often also referred to as "data-dependent noise". We obtain a correctness result for the standard eigenvalue decomposition (EVD) based solution to PCA under simple assumptions on the data-noise correlation. We also develop and analyze a generalization of EVD, cluster-EVD, that improves upon EVD in certain regimes.
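The standard EVD-based PCA solution the abstract analyzes can be sketched directly: form the empirical covariance and take its top eigenvectors as the principal directions. The toy data below (dominant variance along one axis plus small noise, zero-mean by construction of the generator) are made up for illustration.

```python
import numpy as np

def evd_pca(X, k):
    # empirical covariance (data assumed zero-mean), then top-k eigenvectors
    C = (X.T @ X) / X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(C)     # eigh returns ascending eigenvalues
    return eigvecs[:, ::-1][:, :k]           # reverse to get the top-k directions

rng = np.random.default_rng(2)
# data concentrated along the first coordinate axis, plus small noise
X = np.outer(rng.standard_normal(500), [3.0, 0.0]) + 0.01 * rng.standard_normal((500, 2))
P = evd_pca(X, k=1)
print(abs(P[0, 0]) > 0.99)  # → True: top direction aligns with the dominant axis
```

The paper's question is when this estimate remains accurate once the noise term is correlated with the data, which the independent-noise guarantees do not cover.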
- Explicit high-order feature interactions efficiently capture essential structural knowledge about the data of interest and have been used for constructing generative models. We present a supervised discriminative High-Order Parametric Embedding (HOPE) approach to data visualization and compression. Compared to deep embedding models with complicated deep architectures, HOPE generates more effective high-order feature mapping through an embarrassingly simple shallow model. Furthermore, two approaches to generating a small number of exemplars conveying high-order interactions to represent large-scale data sets are proposed. These exemplars, in combination with the feature mapping learned by HOPE, effectively capture essential data variations. Moreover, through HOPE, these exemplars are employed to increase the computational efficiency of kNN classification for fast information retrieval by thousands of times. For classification in two-dimensional embedding space on the MNIST and USPS datasets, our shallow method HOPE with simple Sigmoid transformations significantly outperforms state-of-the-art supervised deep embedding models based on deep neural networks, and even achieves a historically low test error rate of 0.65% in two-dimensional space on MNIST, which demonstrates the representational efficiency and power of supervised shallow models with high-order feature interactions.
- We show that the mixing time of Glauber (single edge update) dynamics for the random cluster model at $q=2$ is bounded by a polynomial in the size of the underlying graph. As a consequence, the Swendsen-Wang algorithm for the ferromagnetic Ising model at any temperature has the same polynomial mixing time bound.
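For reference, one step of the Swendsen-Wang dynamics whose mixing time the result above bounds: open a bond with probability $1 - e^{-2\beta}$ on each edge whose endpoints agree, then flip each resulting cluster uniformly at random. The graph, temperature, and seed below are made up; this is a mechanics sketch, not an efficient implementation.

```python
import math
import random

def swendsen_wang_step(spins, edges, beta, rng):
    p_bond = 1 - math.exp(-2 * beta)
    # union-find over vertices joined by open bonds
    parent = list(range(len(spins)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        if spins[u] == spins[v] and rng.random() < p_bond:
            parent[find(u)] = find(v)         # open a bond between agreeing endpoints
    # flip each cluster with an independent uniform sign
    flip = {}
    for i in range(len(spins)):
        r = find(i)
        if r not in flip:
            flip[r] = rng.choice([1, -1])
        spins[i] *= flip[r]
    return spins

rng = random.Random(3)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # 4-cycle
spins = swendsen_wang_step([1, 1, -1, -1], edges, beta=0.8, rng=rng)
print(all(s in (1, -1) for s in spins))  # → True
```

The stated result connects this global-cluster dynamics to the single-edge Glauber dynamics of the random cluster model at $q = 2$.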
- For anti-ferromagnetic 2-spin systems, a beautiful connection has been established, namely that the following three notions align perfectly: the uniqueness in infinite regular trees, the decay of correlations (also known as spatial mixing), and the approximability of the partition function. The uniqueness condition implies spatial mixing, and an FPTAS for the partition function exists based on spatial mixing. On the other hand, non-uniqueness implies some long range correlation, based on which NP-hardness reductions are built. These connections for ferromagnetic 2-spin systems are much less clear, despite their similarities to anti-ferromagnetic systems. The celebrated Jerrum-Sinclair Markov chain [JS93] works even if spatial mixing or uniqueness fails. We provide some partial answers. We use $(\beta,\gamma)$ to denote the $(+,+)$ and $(-,-)$ edge interactions and $\lambda$ the external field, where $\beta\gamma>1$. If all fields satisfy $\lambda<\lambda_c$ (assuming $\beta\le\gamma$), where $\lambda_c=\left(\gamma/\beta\right)^\frac{\Delta_c+1}{2}$ and $\Delta_c=\frac{\sqrt{\beta\gamma}+1}{\sqrt{\beta\gamma}-1}$, then a weaker version of spatial mixing holds in all trees. Moreover, if $\beta\le 1$, then $\lambda<\lambda_c$ is sufficient to guarantee strong spatial mixing and FPTAS. This improves the previous best algorithm, which is an FPRAS based on Markov chains and works for $\lambda<\gamma/\beta$ [LLZ14a]. The bound $\lambda_c$ is almost optimal. When $\beta\le 1$, uniqueness holds in all infinite regular trees, if and only if $\lambda\le\lambda_c^{int}$, where $\lambda_c^{int}=\left(\gamma/\beta\right)^\frac{\lceil\Delta_c\rceil+1}{2}$. If we allow fields $\lambda>\lambda_c^{int'}$, where $\lambda_c^{int'}=\left(\gamma/\beta\right)^\frac{\lfloor\Delta_c\rfloor+2}{2}$, then approximating the partition function is #BIS-hard.
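The threshold defined above is easy to evaluate for concrete interaction pairs. A one-liner computing $\lambda_c = (\gamma/\beta)^{(\Delta_c+1)/2}$ with $\Delta_c = \frac{\sqrt{\beta\gamma}+1}{\sqrt{\beta\gamma}-1}$; the pair $(\beta,\gamma) = (1,4)$ (so $\beta\gamma > 1$) is purely illustrative.

```python
import math

def lambda_c(beta, gamma):
    # critical field threshold from the abstract above (requires beta*gamma > 1)
    delta_c = (math.sqrt(beta * gamma) + 1) / (math.sqrt(beta * gamma) - 1)
    return (gamma / beta) ** ((delta_c + 1) / 2)

print(lambda_c(1.0, 4.0))  # → 16.0  (delta_c = 3, so (4/1)^2)
```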
- Approximate counting via correlation decay is the core algorithmic technique used in the sharp delineation of the computational phase transition that arises in the approximation of the partition function of anti-ferromagnetic two-spin models. Previous analyses of correlation-decay algorithms implicitly depended on the occurrence of strong spatial mixing (SSM). This means that one uses worst-case analysis of the recursive procedure that creates the sub-instances. We develop a new analysis method that is more refined than the worst-case analysis. We take the shape of instances in the computation tree into consideration and amortise against certain "bad" instances that are created as the recursion proceeds. This enables us to show correlation decay and to obtain an FPTAS even when SSM fails. We apply our technique to the problem of approximately counting independent sets in hypergraphs with degree upper-bound Delta and with a lower bound k on the arity of hyperedges. Liu and Lin gave an FPTAS for k>=2 and Delta<=5 (lack of SSM was the obstacle preventing this algorithm from being generalised to Delta=6). Our technique gives a tight result for Delta=6, showing that there is an FPTAS for k>=3 and Delta<=6. The best previously-known approximation scheme for Delta=6 is the Markov-chain simulation based FPRAS of Bordewich, Dyer and Karpinski, which only works for k>=8. Our technique also applies for larger values of k, giving an FPTAS for k>=Delta. This bound is not substantially stronger than existing randomised results in the literature. Nevertheless, it gives the first deterministic approximation scheme in this regime. Moreover, unlike existing results, it leads to an FPTAS for counting dominating sets in regular graphs with sufficiently large degree. We further demonstrate that approximately counting independent sets in hypergraphs is NP-hard even within the uniqueness regime.
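The counting problem treated above is easy to state exactly, even though approximating it at scale is the hard part. A brute-force counter (exponential time, for small instances only) makes the object of study concrete:

```python
from itertools import product

def count_hypergraph_independent_sets(n, hyperedges):
    """Brute-force count: a vertex subset of {0,...,n-1} is independent
    iff no hyperedge is entirely contained in it."""
    count = 0
    for chosen in product((False, True), repeat=n):
        if all(not all(chosen[v] for v in e) for e in hyperedges):
            count += 1
    return count
```

The FPTAS in the paper approximates this count in polynomial time for arity $k \ge 3$ and maximum degree $\Delta \le 6$; the brute force above is only a specification of the quantity being approximated.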
- We introduce a novel schema for sequence-to-sequence learning with a Deep Q-Network (DQN), which decodes the output sequence iteratively. The aim is to let the decoder first tackle the easier portions of a sequence and then turn to the difficult parts. Specifically, in each iteration, an encoder-decoder Long Short-Term Memory (LSTM) network is employed to automatically create, from the input sequence, features that represent the internal states of the DQN and to formulate a list of potential actions for it. Take rephrasing a natural sentence as an example: this list can contain ranked candidate words. The DQN then learns to decide which action (e.g., word) to select from the list to modify the current decoded sequence. The newly modified output sequence is subsequently used as the input to the DQN for the next decoding iteration. In each iteration, we also bias the reinforcement learning's exploration toward sequence portions that were difficult to decode previously. For evaluation, the proposed strategy was trained to decode ten thousand natural sentences. Our experiments indicate that, compared to a left-to-right greedy beam search LSTM decoder, the proposed method performed competitively when decoding sentences from the training set, but significantly outperformed the baseline on unseen sentences in terms of BLEU score.
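The iterate-and-edit decoding loop can be sketched schematically. In the toy below, a stand-in Q-function replaces the learned DQN and a fixed vocabulary replaces the LSTM's ranked proposals (`TARGET`, `candidate_actions`, and `q_value` are all illustrative assumptions); it shows only the control flow in which easy edits happen first and harder positions are revisited later.

```python
# Toy sketch of iterative sequence refinement with a Q-style action selector.
TARGET = ["the", "cat", "sat", "on", "the", "mat"]

def candidate_actions(seq, pos):
    """Stand-in for the encoder-decoder LSTM's ranked word proposals."""
    return ["the", "cat", "sat", "on", "mat", "dog"]

def q_value(seq, pos, word):
    """Stand-in Q-function: rewards edits that increase token overlap."""
    trial = seq[:pos] + [word] + seq[pos + 1:]
    return sum(a == b for a, b in zip(trial, TARGET))

def decode_iteratively(seq, max_iters=20):
    """Repeatedly apply the highest-Q edit until no edit improves."""
    seq = list(seq)
    for _ in range(max_iters):
        pos, word = max(
            ((p, w) for p in range(len(seq)) for w in candidate_actions(seq, p)),
            key=lambda pw: q_value(seq, pw[0], pw[1]),
        )
        if seq[pos] == word:  # best action is a no-op: stop
            break
        seq[pos] = word       # commit the edit, feed back for next iteration
    return seq
```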
- We present TAO, a software testing tool that performs automated test and oracle generation based on a semantic approach. TAO entangles grammar-based test generation with automated semantics evaluation using a denotational semantics framework. We show how TAO can be integrated with the Selenium automation tool for automated web testing, and how TAO can be further extended to support automated delta debugging, where a failing web test script can be systematically reduced based on grammar-directed strategies. A real-life parking website is used throughout the paper to demonstrate the effectiveness of our semantics-based web testing approach.
- We prove a complexity dichotomy for complex-weighted Holant problems with an arbitrary set of symmetric constraint functions on Boolean variables. This dichotomy specifically answers the question: Is the FKT algorithm under a holographic transformation a universal strategy to obtain polynomial-time algorithms for problems over planar graphs that are intractable in general? This dichotomy is a culmination of previous ones, including those for Spin Systems, Holant, and #CSP. A recurring theme has been that a holographic reduction to FKT is a universal strategy. Surprisingly, for planar Holant, we discover new planar tractable problems that are not expressible by a holographic reduction to FKT. In previous work, an important tool was a dichotomy for #CSP^d, which denotes #CSP where every variable appears a multiple of d times. However, its proof violates planarity. We prove a dichotomy for planar #CSP^2 and apply it in the proof of the planar Holant dichotomy. As a special case of our new planar tractable problems, counting perfect matchings (#PM) over k-uniform hypergraphs is polynomial-time computable when the incidence graph is planar and k >= 5. The same problem is #P-hard when k=3 or k=4, which is also a consequence of our dichotomy. When k=2, it becomes #PM over planar graphs and is tractable again. More generally, over hypergraphs with specified hyperedge sizes and the same planarity assumption, #PM is polynomial-time computable if the greatest common divisor of all hyperedge sizes is at least 5.
- Many real-world applications involve structured data, where not only the input but also the output exhibits interplay. However, typical classification and regression models often lack the ability to simultaneously explore high-order interactions within the input and within the output. In this paper, we present a deep learning model aiming to generate a powerful nonlinear functional mapping from structured input to structured output. More specifically, we propose to integrate high-order hidden units, guided discriminative pretraining, and high-order auto-encoders for this purpose. We evaluate the model on three datasets and obtain state-of-the-art performance among competitive methods. Our current work focuses on structured output regression, a less explored area, although the model can be extended to handle structured label classification.
- The chain-structured long short-term memory (LSTM) has been shown to be effective in a wide range of problems such as speech recognition and machine translation. In this paper, we propose to extend it to tree structures, in which a memory cell can reflect the history memories of multiple child cells or multiple descendant cells in a recursive process. We call the model S-LSTM; it provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. We leverage the model for semantic composition to understand the meaning of text, a fundamental problem in natural language understanding, and show that it outperforms a state-of-the-art recursive model when its composition layers are replaced with S-LSTM memory blocks. We also show that utilizing the given structures helps achieve better performance than ignoring them.
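The core recursive step — a parent memory cell gating the histories of its children — can be sketched for the binary-tree case. The parameter shapes and gate layout below are a generic binary tree-LSTM sketch under our own assumptions, not the paper's exact S-LSTM parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy hidden size

# Randomly initialized parameters for a binary tree-LSTM-style cell.
W = {g: rng.normal(scale=0.1, size=(D, 2 * D)) for g in ("i", "fl", "fr", "o", "u")}
b = {g: np.zeros(D) for g in ("i", "fl", "fr", "o", "u")}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def compose(left, right):
    """Combine two child (h, c) states into a parent state. Each child
    gets its own forget gate, so the parent memory cell can keep or
    discard each subtree's history independently."""
    (hl, cl), (hr, cr) = left, right
    h = np.concatenate([hl, hr])
    i = sigmoid(W["i"] @ h + b["i"])    # input gate
    fl = sigmoid(W["fl"] @ h + b["fl"])  # forget gate, left child
    fr = sigmoid(W["fr"] @ h + b["fr"])  # forget gate, right child
    o = sigmoid(W["o"] @ h + b["o"])    # output gate
    u = np.tanh(W["u"] @ h + b["u"])    # candidate update
    c = fl * cl + fr * cr + i * u
    return np.tanh(c) * o, c

# Compose a tiny parse tree ((w1 w2) w3) bottom-up.
leaf = lambda: (rng.normal(size=D), rng.normal(size=D))
h, c = compose(compose(leaf(), leaf()), leaf())
```

Applied recursively over a parse tree, this is how a single memory block can summarize arbitrarily deep subtree histories.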
- We study the complexity of approximately evaluating the Ising and Tutte partition functions with complex parameters. Our results are partly motivated by the study of the quantum complexity classes BQP and IQP. Recent results show how to encode quantum computations as evaluations of classical partition functions. These results rely on interesting and deep results about quantum computation in order to obtain hardness results about the difficulty of (classically) evaluating the partition functions for certain fixed parameters. The motivation for this paper is to study more comprehensively the complexity of (classically) approximating the Ising and Tutte partition functions with complex parameters. Partition functions are combinatorial in nature and quantifying their approximation complexity does not require a detailed understanding of quantum computation. Using combinatorial arguments, we give the first full classification of the complexity of multiplicatively approximating the norm and additively approximating the argument of the Ising partition function for complex edge interactions (as well as of approximating the partition function according to a natural complex metric). We also study the norm approximation problem in the presence of external fields, for which we give a complete dichotomy when the parameters are roots of unity. Previous results were known just for a few such points, and we strengthen these results from BQP-hardness to #P-hardness. Moreover, we show that computing the sign of the Tutte polynomial is #P-hard at certain points related to the simulation of BQP. Using our classifications, we then revisit the connections to quantum computation, drawing conclusions that are a little different from (and incomparable to) ones in the quantum literature, but along similar lines.
- Jul 21 2014 cs.OH arXiv:1407.5040v2 The Magnetic Induction (MI) communication technique has shown great potential in complex and RF-challenging environments, such as underground and underwater, due to its advantage over EM-wave-based techniques in penetrating lossy media. However, the transmission distance of MI techniques is limited since the magnetic field attenuates very fast in the near field. To this end, this paper proposes the Metamaterial-enhanced Magnetic Induction (M$^2$I) communication mechanism, where an MI coil antenna is enclosed by a metamaterial shell that can enhance the magnetic fields around the MI transceivers. As a result, the M$^2$I communication system can achieve a communication range of tens of meters using pocket-sized antennas. In this paper, an analytical channel model is developed to explore the fundamentals of the M$^2$I mechanism in terms of communication range, channel capacity, and susceptibility to various hostile and complex environments. The theoretical model is validated through the finite-element simulation software COMSOL Multiphysics. Proof-of-concept experiments are also conducted to validate the feasibility of M$^2$I.
- Apr 16 2014 cs.CC arXiv:1404.4020v1 We show that an effective version of Siegel's Theorem on finiteness of integer solutions and an application of elementary Galois theory are key ingredients in a complexity classification of some Holant problems. These Holant problems, denoted by Holant(f), are defined by a symmetric ternary function f that is invariant under any permutation of the k >= 3 domain elements. We prove that Holant(f) exhibits a complexity dichotomy. This dichotomy holds even when restricted to planar graphs. A special case of this result is that counting edge k-colorings is #P-hard over planar 3-regular graphs for k >= 3. In fact, we prove that counting edge k-colorings is #P-hard over planar r-regular graphs for all k >= r >= 3. The problem is polynomial-time computable in all other parameter settings. The proof of the dichotomy theorem for Holant(f) depends on the fact that a specific polynomial p(x,y) has an explicitly listed finite set of integer solutions, and the determination of the Galois groups of some specific polynomials. In the process, we also encounter the Tutte polynomial, medial graphs, Eulerian partitions, Puiseux series, and a certain lattice condition on the (logarithms of the) roots of polynomials.
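The edge-coloring count whose hardness is classified above has a simple exact definition; a brute-force counter (our own illustration, feasible only for tiny graphs) pins it down:

```python
from itertools import product

def count_edge_colorings(edges, k):
    """Brute-force count of proper edge k-colorings: two edges sharing
    an endpoint must receive distinct colors."""
    m = len(edges)
    adjacent = [(i, j) for i in range(m) for j in range(i + 1, m)
                if set(edges[i]) & set(edges[j])]
    return sum(
        all(col[i] != col[j] for i, j in adjacent)
        for col in product(range(k), repeat=m)
    )
```

For a triangle with k=3, every pair of edges is adjacent, so the three edges must receive three distinct colors, giving 3! = 6 colorings. The dichotomy says no polynomial-time algorithm computes this count over planar r-regular graphs for k >= r >= 3 unless #P collapses.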
- #BIS-Hardness for 2-Spin Systems on Bipartite Bounded Degree Graphs in the Tree Nonuniqueness Region. Nov 19 2013 cs.CC arXiv:1311.4451v4 Counting independent sets on bipartite graphs (#BIS) is considered a canonical counting problem of intermediate approximation complexity. It is conjectured that #BIS neither has an FPRAS nor is as hard as #SAT to approximate. We study #BIS in the general framework of two-state spin systems on bipartite graphs. We define two notions, nearly-independent phase-correlated spins and unary symmetry breaking. We prove that it is #BIS-hard to approximate the partition function of any 2-spin system on bipartite graphs supporting these two notions. As a consequence, we classify the complexity of approximating the partition function of antiferromagnetic 2-spin systems on bounded-degree bipartite graphs.
- This paper designs and evaluates a practical algorithm, called practical recursive projected compressive sensing (Prac-ReProCS), for recovering a time sequence of sparse vectors $S_t$ and a time sequence of dense vectors $L_t$ from their sum, $M_t:= S_t + L_t$, when any subsequence of the $L_t$'s lies in a slowly changing low-dimensional subspace. A key application where this problem occurs is in video layering where the goal is to separate a video sequence into a slowly changing background sequence and a sparse foreground sequence that consists of one or more moving regions/objects. Prac-ReProCS is a practical modification of its theoretical counterpart which was analyzed in our recent work. Experimental comparisons demonstrating the advantage of the approach for both simulated and real videos are shown. Extension to the undersampled case is also developed.
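The low-rank-plus-sparse split that underlies the video-layering application can be illustrated with a crude alternating scheme — truncated SVD for the slowly changing "background" layer, hard thresholding for the sparse "foreground". This toy stand-in (our own sketch; Prac-ReProCS itself is a recursive, online algorithm, which this is not) only conveys the decomposition $M_t = L_t + S_t$:

```python
import numpy as np

def split_low_rank_sparse(M, rank=1, thresh=0.5, iters=30):
    """Alternate between a rank-r fit of M - S and hard-thresholding
    the residual to recover a low-rank layer L and a sparse layer S."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # low-rank "background"
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)    # sparse "foreground"
    return L, S
```

In the video analogy, each column of `M` would be a vectorized frame, `L` the background sequence, and `S` the moving objects.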
- Holographic algorithms were first introduced by Valiant as a new methodology to derive polynomial time algorithms. The algorithms introduced by Valiant are based on matchgates, which are intrinsically for problems over planar structures. In this paper we introduce two new families of holographic algorithms. These algorithms work over general, i.e., not necessarily planar, graphs. Instead of matchgates, the two underlying families of constraint functions are of the affine type and of the product type. These play the role of Kasteleyn's algorithm for counting planar perfect matchings. The new algorithms are obtained by transforming a problem to one of these two families by holographic reductions. The tractability of affine and product type constraint functions is known. The real challenge is to determine when some concrete problem, expressed by its constraint functions, has such a holographic reduction. We present a polynomial time algorithm to decide if a given counting problem has a holographic algorithm using the affine or product type constraint functions. Our algorithm also finds a holographic transformation when one exists. We exhibit concrete problems that can be solved by the new holographic algorithms. When the constraint functions are symmetric, we further present a polynomial time algorithm for the same decision and search problems, where the complexity is measured in terms of the (exponentially more) succinct presentation of symmetric constraint functions. The algorithm for the symmetric case also shows that the recent dichotomy theorem for Holant problems with symmetric constraints is efficiently decidable. Our proof techniques are mainly algebraic, e.g., stabilizers and orbits of group actions.
- We prove a complexity dichotomy theorem for symmetric complex-weighted Boolean #CSP when the constraint graph of the input must be planar. The problems that are #P-hard over general graphs but tractable over planar graphs are precisely those with a holographic reduction to matchgates. This generalizes a theorem of Cai, Lu, and Xia for the case of real weights. We also obtain a dichotomy theorem for a symmetric arity 4 signature with complex weights in the planar Holant framework, which we use in the proof of our #CSP dichotomy. In particular, we reduce the problem of evaluating the Tutte polynomial of a planar graph at the point (3,3) to counting the number of Eulerian orientations over planar 4-regular graphs to show the latter is #P-hard. This strengthens a theorem by Huang and Lu to the planar setting. Our proof techniques combine new ideas with refinements and extensions of existing techniques. These include planar pairings, the recursive unary construction, the anti-gadget technique, and pinning in the Hadamard basis.
- May 15 2012 cs.CC arXiv:1205.2934v1 A two-state spin system is specified by a $2 \times 2$ matrix $A = \begin{pmatrix} A_{0,0} & A_{0,1} \\ A_{1,0} & A_{1,1} \end{pmatrix} = \begin{pmatrix} \beta & 1 \\ 1 & \gamma \end{pmatrix}$ where $\beta, \gamma \ge 0$. Given an input graph $G=(V,E)$, the partition function $Z_A(G)$ of the system is defined as $Z_A(G) = \sum_{\sigma: V \to \{0,1\}} \prod_{(u,v) \in E} A_{\sigma(u), \sigma(v)}$. We prove inapproximability results for the partition function in the region specified by the non-uniqueness condition from the phase transition for the Gibbs measure. More specifically, assuming NP $\ne$ RP, for any fixed $\beta, \gamma$ in the unit square, there is no randomized polynomial-time algorithm that approximates $Z_A(G)$ for $d$-regular graphs $G$ with relative error $\epsilon = 10^{-4}$, if $d = \Omega(\Delta(\beta,\gamma))$, where $\Delta(\beta,\gamma) > 1/(1-\beta\gamma)$ is the uniqueness threshold. Up to a constant factor, this hardness result confirms the conjecture that the uniqueness phase transition coincides with the transition from computational tractability to intractability for $Z_A(G)$. We also show a matching inapproximability result for a region of parameters $\beta, \gamma$ outside the unit square, and all our results generalize to partition functions with an external field.
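The partition function defined above translates directly into a brute-force computation (exponential in $|V|$, so for intuition on small graphs only):

```python
from itertools import product

def partition_function(vertices, edges, beta, gamma):
    """Brute-force Z_A(G) for A = [[beta, 1], [1, gamma]]: sum over all
    {0,1}-spin assignments of the product of edge interactions."""
    A = [[beta, 1.0], [1.0, gamma]]
    total = 0.0
    for sigma in product((0, 1), repeat=len(vertices)):
        spin = dict(zip(vertices, sigma))
        weight = 1.0
        for u, v in edges:
            weight *= A[spin[u]][spin[v]]
        total += weight
    return total
```

For a single edge, the four assignments contribute $\beta + 1 + 1 + \gamma$; the hardness result says no polynomial-time algorithm approximates this sum on $d$-regular graphs in the non-uniqueness region.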
- May 01 2012 cs.CC arXiv:1204.6445v2 We prove a complexity dichotomy theorem for Holant problems over an arbitrary set of complex-valued symmetric constraint functions F on Boolean variables. This extends and unifies all previous dichotomies for Holant problems on symmetric constraint functions (taking values without a finite modulus). We define and characterize all symmetric vanishing signatures. They turn out to be essential to the complete classification of Holant problems. The dichotomy theorem has an explicit tractability criterion expressible in terms of holographic transformations. A Holant problem defined by a set of constraint functions F is solvable in polynomial time if it satisfies this tractability criterion, and is #P-hard otherwise. The tractability criterion can be intuitively stated as follows: A set F is tractable if (1) every function in F has arity at most two, or (2) F is transformable to an affine type, or (3) F is transformable to a product type, or (4) F is vanishing, combined with the right type of binary functions, or (5) F belongs to a special category of vanishing type Fibonacci gates. The proof of this theorem utilizes many previous dichotomy theorems on Holant problems and Boolean #CSP. Holographic transformations play an indispensable role as both a proof technique and in the statement of the tractability criterion.
- In this paper, we consider a general bi-directional relay network with two sources and N relays when neither the source nodes nor the relays know the channel state information (CSI). A joint relay selection and analog network coding scheme using differential modulation (RS-ANC-DM) is proposed. In the proposed scheme, the two sources employ differential modulation and transmit the differentially modulated symbols to all relays at the same time. The signal received at each relay is a superposition of the two transmitted symbols, which we call the analog network coded symbol. Then a single relay with the minimum sum SER is selected out of the N relays to forward the ANC signals to both sources. To facilitate the selection process, we also propose a simple sub-optimal Min-Max criterion for relay selection, where the single relay that minimizes the maximum SER of the two source nodes is selected. Simulation results show that the proposed Min-Max selection has almost the same performance as the optimal selection, but is much simpler. The performance of the proposed RS-ANC-DM scheme is analyzed, and a simple asymptotic SER expression is derived. The analytical results are verified through simulations.
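Both selection rules are one-liners once per-relay SER pairs are available (how those SERs are estimated without CSI is the substance of the paper; the sketch below just contrasts the two criteria on given numbers):

```python
def min_max_relay(ser_pairs):
    """Sub-optimal Min-Max criterion: pick the relay minimizing the
    worse (maximum) of the two sources' symbol error rates."""
    return min(range(len(ser_pairs)), key=lambda i: max(ser_pairs[i]))

def min_sum_relay(ser_pairs):
    """Optimal criterion: pick the relay minimizing the sum SER."""
    return min(range(len(ser_pairs)), key=lambda i: sum(ser_pairs[i]))
```

The two criteria can disagree — e.g., for SER pairs `[(0.01, 0.4), (0.2, 0.25)]` Min-Max picks relay 1 while min-sum picks relay 0 — yet the paper's simulations show their end-to-end performance is nearly identical.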
- Feb 17 2010 cs.LO arXiv:1002.3083v1 Live sequence charts (LSCs) have been proposed as an inter-object scenario-based specification and visual programming language for reactive systems. In this paper, we introduce a logic-based framework to check the consistency of an LSC specification. An LSC simulator has been implemented in logic programming, utilizing a memoized depth-first search strategy, to show how a reactive system in LSCs would respond to a set of external event sequences. A formal notation is defined to specify external event sequences, extending regular expressions with a parallel operator and a testing control. The parallel operator allows interleaved parallel external events to be tested in LSCs simultaneously, while the testing control offers users a new approach to specifying and testing certain temporal properties (e.g., CTL formulas) in the form of an LSC. Our framework further provides either a state transition graph or a failure trace to justify the consistency checking results.
- Rotation symmetric Boolean functions have important applications in the design of cryptographic algorithms. In this paper, the conjecture about rotation symmetric Boolean functions (RSBFs) of degree 3 proposed by Cusick and Stănică is proved. As a result, the nonlinearity of such functions is determined.
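The two notions in this abstract — invariance under cyclic shifts and nonlinearity via the Walsh transform — are easy to check exhaustively for small n. The brute-force helpers below are our own illustration of the definitions, not the paper's proof technique:

```python
from itertools import product

def rotations(bits):
    """All cyclic shifts of a bit tuple."""
    n = len(bits)
    return {bits[i:] + bits[:i] for i in range(n)}

def is_rotation_symmetric(f, n):
    """f maps an n-bit tuple to 0/1; rotation symmetry means f is
    invariant under every cyclic shift of its input."""
    return all(
        len({f(r) for r in rotations(x)}) == 1
        for x in product((0, 1), repeat=n)
    )

def nonlinearity(f, n):
    """Nonlinearity = (2^n - max |Walsh coefficient|) / 2, i.e. the
    Hamming distance from f to the nearest affine function."""
    best = 0
    for w in product((0, 1), repeat=n):
        walsh = sum(
            (-1) ** (f(x) ^ (sum(a & b for a, b in zip(w, x)) % 2))
            for x in product((0, 1), repeat=n)
        )
        best = max(best, abs(walsh))
    return (2 ** n - best) // 2
```

For instance, the 2-variable AND is rotation symmetric and has nonlinearity 1, while any XOR of variables is affine and has nonlinearity 0.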
- With its multi-master bus access, nondestructive contention-based arbitration, and flexible configuration, the Controller Area Network (CAN) bus is applied to the control system of a Wire Harness Assembly Machine (WHAM). To accomplish the desired goal, the specific features of the CAN bus are analyzed by comparison with other field buses, and the functional performance of the CAN bus system in the WHAM is discussed. The application-layer planning of the CAN bus for dynamic priority is then presented. The critical issue for the use of a CAN bus system in the WHAM is the data transfer rate between different nodes, so a processing-efficiency model is introduced to assist in analyzing the data transfer procedure. Through the model, it is convenient to verify the real-time behavior of the CAN bus system in the WHAM.
- This paper describes the development of the PALS system, an implementation of Prolog capable of efficiently exploiting or-parallelism on distributed-memory platforms--specifically Beowulf clusters. PALS makes use of a novel technique, called incremental stack-splitting. The technique proposed builds on the stack-splitting approach, previously described by the authors and experimentally validated on shared-memory systems, which in turn is an evolution of the stack-copying method used in a variety of parallel logic and constraint systems--e.g., MUSE, YAP, and Penny. The PALS system is the first distributed or-parallel implementation of Prolog based on the stack-splitting method ever realized. The results presented confirm the superiority of this method as a simple yet effective technique to transition from shared-memory to distributed-memory systems. PALS extends stack-splitting by combining it with incremental copying; the paper provides a description of the implementation of PALS, including details of how distributed scheduling is handled. We also investigate methodologies to effectively support order-sensitive predicates (e.g., side-effects) in the context of the stack-splitting scheme. Experimental results obtained from running PALS on both shared-memory and Beowulf systems are presented and analyzed.
- An efficient and flexible engine for computing fixed points is critical for many practical applications. In this paper, we firstly present a goal-directed fixed-point computation strategy in the logic programming paradigm. The strategy adopts tabled resolution (or memoized resolution) to mimic efficient semi-naive bottom-up computation. Its main idea is to dynamically identify and record those clauses that will lead to recursive variant calls, and then repetitively apply those alternatives incrementally until the fixed point is reached. Secondly, there are many situations in which a fixed point contains a large or even infinite number of solutions. In these cases, a fixed-point computation engine may not be efficient enough, or feasible at all. We present a mode-declaration scheme which provides the capability to reduce a fixed point from a big solution set to a preferred small one, or from an infeasible infinite set to a finite one. The mode-declaration scheme can be characterized as a meta-level operation over the original fixed point. We show the correctness of the mode-declaration scheme. Thirdly, the mode-declaration scheme provides a new declarative method for dynamic programming, which is typically used for solving optimization problems. There is no need to define the value of an optimal solution recursively; instead, defining a general solution suffices. The optimal value, as well as its corresponding concrete solution, can be derived implicitly and automatically using a mode-directed fixed-point computation engine. Finally, this fixed-point computation engine has been successfully implemented in a commercial Prolog system. Experimental results indicate that the mode declaration improves both time and space performance in solving dynamic programming problems.
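The dynamic-programming reading of mode declarations can be mimicked outside Prolog: memoization plays the role of tabling, and an aggregation such as `min` plays the role of a `min` mode that keeps only the preferred answer per tabled call. The shortest-distance example below is our own Python analogue of the idea, not the paper's Prolog machinery:

```python
from functools import lru_cache

# Edge-weighted DAG standing in for a logic program's clauses.
EDGES = {"a": [("b", 1), ("c", 4)],
         "b": [("c", 2), ("d", 5)],
         "c": [("d", 1)],
         "d": []}

@lru_cache(maxsize=None)
def dist(node, goal="d"):
    """Memoized ("tabled") distance relation. The general relation
    derives every path cost; min() collapses the answer table to the
    single preferred answer, like a `min` mode declaration would."""
    if node == goal:
        return 0
    return min((w + dist(nxt, goal) for nxt, w in EDGES[node]),
               default=float("inf"))
```

As in the paper's declarative formulation, only the general solution (a path's cost) is defined; the optimal value emerges automatically from the mode-like aggregation.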