results for au:Lin_M in:cs

- Mar 08 2018 cs.CV arXiv:1803.02612v2 Previous monocular depth estimation methods take a single view and directly regress the expected results. Though recent advances have been made by applying geometrically inspired loss functions during training, the inference procedure does not explicitly impose any geometric constraint, so these models rely purely on the quality of the data and the effectiveness of learning to generalize. This leads either to suboptimal results or to a demand for huge amounts of expensive ground-truth labelled data. In this paper, we show for the first time that the monocular depth estimation problem can be reformulated as two sub-problems, a view synthesis procedure followed by stereo matching, with two intriguing properties: i) geometric constraints can be explicitly imposed during inference; ii) the demand for labelled depth data can be greatly alleviated. We show that the whole pipeline can still be trained end-to-end and that this new formulation plays a critical role in advancing performance. The resulting model outperforms all previous monocular depth estimation methods, as well as the stereo block matching method, on the challenging KITTI dataset while using only a small amount of real training data. The model also generalizes well to other monocular depth estimation benchmarks. We also discuss the implications and advantages of solving monocular depth estimation with stereo methods.
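Since the second stage imposes an explicit stereo constraint, recovered disparity maps to depth through standard pinhole stereo geometry. A minimal sketch of that relation (the focal length and baseline below are illustrative values, not taken from the paper):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo geometry: depth = f * B / d. This is the kind of
    explicit geometric constraint a stereo-matching stage can impose."""
    return focal_px * baseline_m / disparity_px

# KITTI-like numbers (hypothetical): focal length 721 px, baseline 0.54 m.
print(depth_from_disparity(30.0, 721.0, 0.54))  # ~13 m
```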
- Dec 19 2017 cs.LG arXiv:1712.05929v1 Conventionally, resource allocation is formulated as an optimization problem and solved online with instantaneous scenario information. Since most resource allocation problems are not convex, optimal solutions are very difficult to obtain in real time, so Lagrangian relaxation or greedy methods are often employed, at the cost of performance. Conventional resource allocation methods therefore face great challenges in meeting the ever-increasing QoS requirements of users with scarce radio resources. Assisted by cloud computing, a huge amount of historical scenario data can be collected, and machine learning can extract similarities among scenarios. Moreover, optimal or near-optimal solutions for historical scenarios can be searched offline and stored in advance. When the measured data of the current scenario arrives, it is compared with the historical scenarios to find the most similar one, and the optimal or near-optimal solution of that scenario is adopted to allocate radio resources for the current one. To facilitate the application of this new design philosophy, a machine learning framework for resource allocation assisted by cloud computing is proposed. An example of beam allocation in multi-user massive multiple-input-multiple-output (MIMO) systems shows that the proposed machine-learning-based resource allocation outperforms conventional methods.
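The lookup step described here can be sketched as a nearest-neighbour search over stored scenarios; the scenario features, allocation labels, and distance measure below are all hypothetical stand-ins:

```python
import numpy as np

# Hypothetical historical database: each row is a measured scenario feature
# vector, paired with a (near-)optimal allocation precomputed offline.
history = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
allocations = ["beam_A", "beam_B", "beam_C"]

def allocate(current):
    """Adopt the stored allocation of the most similar historical scenario."""
    dists = np.linalg.norm(history - np.asarray(current), axis=1)
    return allocations[int(np.argmin(dists))]

print(allocate([0.75, 0.25]))  # -> beam_B (closest to the second scenario)
```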
- One of the key requirements for the Lattice QCD Application Development as part of the US Exascale Computing Project is performance portability across multiple architectures. Using the Grid C++ expression template as a starting point, we report on the progress made with regard to the Grid GPU offloading strategies. We present both the successes and issues encountered in using CUDA, OpenACC and Just-In-Time compilation. Experimentation and performance on GPUs with an SU(3)$\times$SU(3) streaming test will be reported. We will also report on the challenges of using current OpenMP 4.x for GPU offloading in the same code.
- Aug 01 2017 cs.CV arXiv:1707.09695v1 3D human articulated pose recovery from monocular image sequences is very challenging due to diverse appearances, viewpoints, and occlusions, and because the human 3D pose is inherently ambiguous from monocular imagery. It is thus critical to exploit rich spatial and temporal long-range dependencies among body joints for accurate 3D pose sequence prediction. Existing approaches usually manually design elaborate prior terms and human body kinematic constraints for capturing structures, which are often insufficient to exploit all intrinsic structures and do not scale to all scenarios. In contrast, this paper presents a Recurrent 3D Pose Sequence Machine (RPSM) to automatically learn the image-dependent structural constraint and sequence-dependent temporal context through multi-stage sequential refinement. At each stage, our RPSM is composed of three modules that predict the 3D pose sequences based on the previously learned 2D pose representations and 3D poses: (i) a 2D pose module extracting the image-dependent pose representations, (ii) a 3D pose recurrent module regressing 3D poses, and (iii) a feature adaption module serving as a bridge between modules (i) and (ii) to enable the representation transformation from the 2D to the 3D domain. These three modules are then assembled into a sequential prediction framework that refines the predicted poses over multiple recurrent stages. Extensive evaluations on the Human3.6M and HumanEva-I datasets show that our RPSM outperforms all state-of-the-art approaches for 3D pose estimation.
- Tissue characterization has long been an important component of Computer Aided Diagnosis (CAD) systems for automatic lesion detection and further clinical planning. Motivated by the superior performance of deep learning methods on various computer vision problems, there has been increasing work applying deep learning to medical image analysis. However, developing a robust and reliable deep learning model for computer-aided diagnosis remains highly challenging due to the combination of high heterogeneity in medical images and the relative scarcity of training samples. Specifically, annotation and labeling of medical images is much more expensive and time-consuming than in other applications and often involves manual labor from multiple domain experts. In this work, we propose a multi-stage, self-paced learning framework utilizing a convolutional neural network (CNN) to classify Computed Tomography (CT) image patches. The key contribution of this approach is that we augment the size of the training set by refining the unlabeled instances with a self-paced learning CNN. Implementing the framework on high performance computing servers, including the NVIDIA DGX1 machine, our experiments show that the self-paced boosted network consistently outperforms the original network even with very scarce manual labels. This performance gain indicates that applications with limited training samples, such as medical image analysis, can benefit from the proposed framework.
- Jul 05 2017 cs.CV arXiv:1707.01083v2 We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ~13x actual speedup over AlexNet while maintaining comparable accuracy.
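The channel shuffle operation mentioned here is, at its core, a reshape-transpose-reshape over the channel axis so that information mixes across convolution groups. A minimal NumPy sketch (not the authors' implementation):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle channels across groups: (N, C, H, W) -> same shape.
    Reshape C into (groups, C // groups), swap the two axes, flatten back."""
    n, c, h, w = x.shape
    assert c % groups == 0
    y = x.reshape(n, groups, c // groups, h, w)
    y = y.transpose(0, 2, 1, 3, 4)   # interleave channels from each group
    return y.reshape(n, c, h, w)

x = np.arange(6).reshape(1, 6, 1, 1)   # channels 0..5 in 3 groups of 2
print(channel_shuffle(x, 3).ravel())   # -> [0 2 4 1 3 5]
```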
- Jun 29 2017 cs.CY arXiv:1706.09372v1 Improving the health of the nation's population and increasing the capabilities of the US healthcare system to support diagnosis, treatment, and prevention of disease is a critical national and societal priority. In the past decade, tremendous advances in expanding computing capabilities--sensors, data analytics, networks, advanced imaging, and cyber-physical systems--have enhanced, and will continue to enhance, healthcare and health research, with resulting improvements in health and wellness. However, the cost and complexity of healthcare continue to rise, alongside the impact of poor health on productivity and quality of life. What is lacking are transformative capabilities that address significant health and healthcare trends: the growing demands and costs of chronic disease, the greater responsibility placed on patients and informal caregivers, and the increasing complexity of health challenges in the US, including mental health, that are deeply rooted in a person's social and environmental context.
- May 24 2017 cs.DM arXiv:1705.08379v1 Let $G$ be an undirected graph. An edge of $G$ dominates itself and all edges adjacent to it. A subset $E'$ of edges of $G$ is an edge dominating set of $G$ if every edge of the graph is dominated by some edge of $E'$. We say that $E'$ is a perfect edge dominating set of $G$ if every edge not in $E'$ is dominated by exactly one edge of $E'$. The perfect edge domination problem is to determine a least cardinality perfect edge dominating set of $G$. For this problem, we describe two NP-completeness proofs: for claw-free graphs of degree at most 3, and for bounded-degree graphs of maximum degree $d \geq 3$ and large girth. In contrast, we prove that the problem admits an $O(n)$ time solution for cubic claw-free graphs. In addition, we prove a complexity dichotomy theorem for the perfect edge domination problem, based on the results described in the paper. Finally, we describe a linear time algorithm for finding a minimum weight perfect edge dominating set of a $P_5$-free graph. The algorithm is robust, in the sense that, given an arbitrary graph $G$, it either computes a minimum weight perfect edge dominating set of $G$ or exhibits an induced subgraph of $G$ isomorphic to a $P_5$.
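For small graphs, the definition of a perfect edge dominating set can be checked by exhaustive search. A brute-force sketch of the definition only (nothing like the paper's $O(n)$ or linear-time algorithms):

```python
from itertools import combinations

def is_perfect_eds(edges, subset):
    """Every edge NOT in the subset must be dominated by exactly one edge of
    the subset; two edges dominate each other when they share an endpoint."""
    sub = set(subset)
    for e in edges:
        if e in sub:
            continue
        dominators = sum(1 for f in sub if set(e) & set(f))
        if dominators != 1:
            return False
    return True

def min_perfect_eds(edges):
    """Least-cardinality perfect edge dominating set, by exhaustive search."""
    for size in range(len(edges) + 1):
        for subset in combinations(edges, size):
            if is_perfect_eds(edges, subset):
                return list(subset)
    return None

# Path a-b-c-d: the middle edge dominates each outer edge exactly once.
path = [("a", "b"), ("b", "c"), ("c", "d")]
print(min_perfect_eds(path))  # -> [('b', 'c')]
```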
- May 16 2017 cs.ET arXiv:1705.04991v1 With advancing process technologies and booming IoT markets, millimeter-wave CMOS RFICs have been widely developed in recent years. Since the performance of CMOS RFICs is very sensitive to the precision of the layout, precise placement of devices and precisely matching microstrip lengths to given values have been a labor-intensive and time-consuming task, and thus become a major bottleneck for time to market. This paper introduces a progressive integer-linear-programming-based method to generate high-quality RFIC layouts satisfying very stringent routing requirements of microstrip lines, including spacing/non-crossing rules, precise length, and bend number minimization, within a given layout area. The resulting RFIC layouts excel in both performance and area with far fewer bends compared with simulation-tuning based manual layouts, while layout generation time is significantly reduced from weeks to half an hour.
- Softmax GAN is a novel variant of the Generative Adversarial Network (GAN). The key idea of Softmax GAN is to replace the classification loss in the original GAN with a softmax cross-entropy loss in the sample space of a single batch. In the adversarial learning of $N$ real training samples and $M$ generated samples, the target of discriminator training is to distribute all the probability mass to the real samples, each with probability $\frac{1}{N}$, and zero probability to the generated data. In the generator training phase, the target is to assign equal probability to all data points in the batch, each with probability $\frac{1}{M+N}$. While the original GAN is closely related to Noise Contrastive Estimation (NCE), we show that Softmax GAN is the Importance Sampling version of GAN. We further demonstrate with experiments that this simple change stabilizes GAN training.
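The two target distributions can be written down directly. A toy sketch with hypothetical discriminator scores, assuming mass $1/N$ on each of the $N$ real samples for the discriminator target and a uniform $1/(M+N)$ target for the generator:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

# One batch: N real samples followed by M generated samples.
N, M = 2, 3
logits = np.array([1.0, 2.0, -1.0, 0.0, 0.5])  # hypothetical discriminator scores

# Discriminator target: all mass on real samples; generator target: uniform.
t_disc = np.array([1.0 / N] * N + [0.0] * M)
t_gen = np.full(N + M, 1.0 / (N + M))

p = softmax(logits)                      # batch-level softmax distribution
ce_disc = -(t_disc * np.log(p)).sum()    # discriminator cross-entropy loss
ce_gen = -(t_gen * np.log(p)).sum()      # generator cross-entropy loss
print(round(ce_disc, 4), round(ce_gen, 4))
```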
- Apr 21 2017 cs.SD arXiv:1704.06008v1 Sound propagation encompasses various acoustic phenomena, including reverberation. Current virtual acoustic methods, ranging from parametric filters to physically accurate solvers, can simulate reverberation with varying degrees of fidelity. We investigate the effects of reverberant sounds generated using different propagation algorithms on acoustic distance perception, i.e., how far away humans perceive a sound source to be. In particular, we evaluate two classes of methods for real-time sound propagation in dynamic scenes, based on parametric filters and ray tracing. Our study shows that the more accurate method exhibits less distance compression than the approximate, filter-based method. This suggests that accurate reverberation in VR results in a better reproduction of acoustic distances. We also quantify the levels of distance compression introduced by different propagation methods in a virtual environment.
- We study an extreme scenario in multi-label learning where each training instance is endowed with a single one-bit label out of multiple labels. We formulate this problem as a non-trivial special case of one-bit rank-one matrix sensing and develop an efficient non-convex algorithm based on alternating power iteration. The proposed algorithm is able to recover the underlying low-rank matrix model with linear convergence. For a rank-$k$ model with $d_1$ features and $d_2$ classes, the proposed algorithm achieves $O(\epsilon)$ recovery error after retrieving $O(k^{1.5}d_1 d_2/\epsilon)$ one-bit labels within $O(kd)$ memory. Our bound is nearly optimal in the order of $O(1/\epsilon)$. This significantly improves the state-of-the-art sampling complexity of one-bit multi-label learning. We perform experiments to verify our theory and evaluate the performance of the proposed algorithm.
- Feb 08 2017 cs.GT arXiv:1702.01803v1 Motivated by online display ad exchanges, we study a setting in which an exchange repeatedly interacts with bidders who have quotas, making decisions about which subsets of bidders are called to participate in ad-slot-specific auctions. A bidder with a quota cannot respond to more than a certain number of calls per second. In practice, random throttling is the principal solution by which these constraints are enforced. Given the repeated nature of the interaction with its bidders, the exchange has access to data containing information about each bidder's segments of interest. This information can be utilized to design smarter callout mechanisms, with the potential of improving the exchange's long-term revenue. In this work, we present a general framework for evaluating and comparing the performance of various callout mechanisms using historical auction data only. To measure the impact of a callout mechanism on long-term revenue, we propose a strategic model that captures the repeated interaction between the exchange and bidders. Our model leads us to two metrics for performance: immediate revenue impact and social welfare. Next, we present an empirical framework for estimating these two metrics from historical data. As baselines to compare against, we consider random throttling as well as a greedy algorithm with certain theoretical guarantees. We propose several natural callout mechanisms and investigate them through our framework on both synthetic and real auction data. We characterize the conditions under which each heuristic performs well and show that, in addition to being computationally faster, our heuristics consistently and significantly outperform the baselines in practice.
- Oct 17 2016 cs.DM arXiv:1610.04544v1 Let G denote a graph and let K be a subset of vertices designated as the target vertices of G. The K-terminal reliability of G is defined as the probability that all target vertices in K are connected, considering the possible failures of non-target vertices of G. The problem of computing K-terminal reliability is known to be #P-complete for polygon-circle graphs, and can be solved in polynomial time for t-polygon graphs, which are a subclass of polygon-circle graphs. The class of circle graphs is a subclass of polygon-circle graphs and a superclass of t-polygon graphs; the complexity of computing K-terminal reliability for circle graphs is therefore of particular interest. This paper proves that the problem remains #P-complete even for circle graphs. Additionally, it proposes a linear-time algorithm for solving the problem on proper circular-arc graphs, which are a subclass of circle graphs and a superclass of proper interval graphs.
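For tiny graphs, K-terminal reliability can be computed directly by enumerating failure states of the non-target vertices. A brute-force sketch of the definition (exponential time, unlike the paper's linear-time algorithm for proper circular-arc graphs):

```python
from itertools import product

def connected(K, alive, edges):
    """Are all targets in K in one component of the surviving subgraph?"""
    K = list(K)
    seen, stack = {K[0]}, [K[0]]
    while stack:
        u = stack.pop()
        for a, b in edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y in alive and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return all(t in seen for t in K)

def k_terminal_reliability(n, edges, K, p_fail):
    """Probability that all targets stay connected when each non-target
    vertex fails independently with probability p_fail (brute force)."""
    non_targets = [v for v in range(n) if v not in K]
    total = 0.0
    for states in product([True, False], repeat=len(non_targets)):
        alive = set(K) | {v for v, up in zip(non_targets, states) if up}
        prob = 1.0
        for up in states:
            prob *= (1 - p_fail) if up else p_fail
        if connected(K, alive, edges):
            total += prob
    return total

# Targets 0 and 2 joined only through vertex 1, which fails with prob 0.1.
print(k_terminal_reliability(3, [(0, 1), (1, 2)], {0, 2}, 0.1))  # -> 0.9
```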
- We develop an efficient alternating framework for learning a generalized version of the Factorization Machine (gFM) on streaming data with provable guarantees. When the instances are sampled from $d$ dimensional random Gaussian vectors and the target second order coefficient matrix in gFM is of rank $k$, our algorithm converges linearly, achieves $O(\epsilon)$ recovery error after retrieving $O(k^{3}d\log(1/\epsilon))$ training instances, consumes $O(kd)$ memory in one pass of the dataset and only requires matrix-vector product operations in each iteration. The key ingredient of our framework is the construction of an estimation sequence endowed with a so-called Conditionally Independent RIP condition (CI-RIP). As special cases of gFM, our framework can be applied to symmetric or asymmetric rank-one matrix sensing problems, such as inductive matrix completion and phase retrieval.
- Aug 04 2016 cs.CV arXiv:1608.01250v4 Most recent garment capturing techniques rely on acquiring multiple views of clothing, which may not always be readily available, especially in the case of pre-existing photographs from the web. As an alternative, we propose a method that is able to compute a rich and realistic 3D model of a human body and its outfits from a single photograph with little human interaction. Our algorithm is not only able to capture the global shape and geometry of the clothing, it can also extract small but important details of cloth, such as occluded wrinkles and folds. Unlike previous methods using full 3D information (i.e. depth, multi-view images, or sampled 3D geometry), our approach achieves detailed garment recovery from a single-view image by using statistical, geometric, and physical priors and a combination of parameter estimation, semantic parsing, shape recovery, and physics-based cloth simulation. We demonstrate the effectiveness of our algorithm by re-purposing the reconstructed garments for virtual try-on and garment transfer applications, as well as cloth animation for digital characters.
- The large number of user-generated videos uploaded to the Internet every day has led to many commercial video search engines, which mainly rely on text metadata for search. However, metadata is often lacking for user-generated videos, leaving these videos unsearchable by current search engines. Content-based video retrieval (CBVR) tackles this metadata-scarcity problem by directly analyzing the visual and audio streams of each video. CBVR encompasses multiple research topics, including low-level feature design, feature fusion, semantic detector training and video search/reranking. We present novel strategies in these topics to enhance CBVR in both accuracy and speed under different query inputs, including pure textual queries and query by video example. Our proposed strategies were incorporated into our submission for the TRECVID 2014 Multimedia Event Detection evaluation, where our system outperformed other submissions in both text queries and video example queries, demonstrating the effectiveness of our proposed approaches.
- Feb 16 2016 cs.CV arXiv:1602.04348v1 Maximally stable extremal regions (MSER), a popular method for generating character proposals/candidates, has shown superior performance in scene text detection. However, its pixel-level operation limits its capability for handling some challenging cases (e.g., multiple connected characters, separated parts of one character and non-uniform illumination). To better tackle these cases, we design a character proposal network (CPN) by taking advantage of the high capacity and fast computation of fully convolutional networks (FCN). Specifically, the network simultaneously predicts characterness scores and refines the corresponding locations. The characterness scores can be used for proposal ranking to reject non-character proposals, and the refining process aims to obtain more accurate locations. Furthermore, considering that different characters have different aspect ratios, we propose a multi-template strategy, designing a refiner for each aspect ratio. Extensive experiments indicate our method achieves recall rates of 93.88%, 93.60% and 96.46% on the ICDAR 2013, SVT and Chinese2k datasets respectively using fewer than 1000 proposals, demonstrating the promising performance of our character proposal network.
- Jan 26 2016 cs.CV arXiv:1601.06719v4 R-CNN style methods are among the state-of-the-art object detection methods; they consist of region proposal generation and deep CNN classification. However, the proposal generation phase in this paradigm is usually time-consuming, which slows down the whole detection pipeline at test time. This paper suggests that the value discrepancies among features in deep convolutional feature maps contain plenty of useful spatial information, and proposes a simple approach to extract this information for fast region proposal generation at test time. The proposed method, namely Relief R-CNN (R2-CNN), adopts a novel region proposal generator in a trained R-CNN style model. The new generator directly generates proposals from convolutional features by some simple rules, resulting in much faster proposal generation and a lower demand for computational resources. Empirical studies show that R2-CNN achieves the fastest detection speed with comparable accuracy among all compared algorithms.
- MXNet is a multi-language machine learning (ML) library that eases the development of ML algorithms, especially for deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation, and offers automatic differentiation to derive gradients. MXNet is computation- and memory-efficient and runs on various heterogeneous systems, ranging from mobile devices to distributed GPU clusters. This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion. Our preliminary experiments reveal promising results on large scale deep neural network applications using multiple GPU machines.
- Nov 17 2015 cs.CV arXiv:1511.05045v2 Image and video classification research has made great progress through the development of handcrafted local features and learning based features. These two architectures were proposed roughly at the same time and have flourished at overlapping stages of history. However, they are typically viewed as distinct approaches. In this paper, we emphasize their structural similarities and show how such a unified view helps us in designing features that balance efficiency and effectiveness. As an example, we study the problem of designing efficient video feature learning algorithms for action recognition. We approach this problem by first showing that local handcrafted features and Convolutional Neural Networks (CNNs) share the same convolution-pooling network structure. We then propose a two-stream Convolutional ISA (ConvISA) that adopts the convolution-pooling structure of the state-of-the-art handcrafted video feature with greater modeling capacities and a cost-effective training algorithm. Through custom designed network structures for pixels and optical flow, our method also reflects distinctive characteristics of these two data sources. Our experimental results on standard action recognition benchmarks show that by focusing on the structure of CNNs, rather than end-to-end training methods, we are able to design an efficient and powerful video feature learning algorithm.
- Aug 18 2015 cs.DS arXiv:1508.03769v1 We consider algorithms for "smoothed online convex optimization" problems, a variant of the class of online convex optimization problems that is strongly related to metrical task systems. Prior literature on these problems has focused on two performance metrics: regret and the competitive ratio. There exist known algorithms with sublinear regret and known algorithms with constant competitive ratios; however, no known algorithm achieves both simultaneously. We show that this is due to a fundamental incompatibility between these two metrics - no algorithm (deterministic or randomized) can achieve sublinear regret and a constant competitive ratio, even in the case when the objective functions are linear. However, we also exhibit an algorithm that, for the important special case of one-dimensional decision spaces, provides sublinear regret while maintaining a competitive ratio that grows arbitrarily slowly.
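The cost structure of smoothed online convex optimization, a per-step hitting cost plus a movement penalty, can be illustrated in a few lines; the one-dimensional costs and decision sequences below are made up purely for illustration:

```python
def smoothed_oco_cost(costs, decisions, x0=0.0):
    """Total cost in smoothed online convex optimization: at each step t the
    learner pays f_t(x_t) plus a movement penalty |x_t - x_{t-1}|."""
    total, prev = 0.0, x0
    for f, x in zip(costs, decisions):
        total += f(x) + abs(x - prev)
        prev = x
    return total

# Linear hitting costs on a one-dimensional decision space.
costs = [lambda x: 2 * x, lambda x: -x, lambda x: x]
static = smoothed_oco_cost(costs, [0.5, 0.5, 0.5])   # holds one point
chasing = smoothed_oco_cost(costs, [0.0, 1.0, 0.0])  # chases each minimizer
print(static, chasing)  # -> 1.5 1.0
```

The two sequences trade off hitting cost against switching cost, which is exactly the tension between regret (comparison to the best static point) and the competitive ratio (comparison to the best moving sequence).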
- May 19 2015 cs.CV arXiv:1505.04427v1 Motivated by the success of data-driven convolutional neural networks (CNNs) in object recognition on static images, researchers are working hard towards developing CNN equivalents for learning video features. However, learning video features globally has proven to be quite a challenge due to high dimensionality, the lack of labelled data and the difficulty of processing large-scale video data. Therefore, we propose to leverage effective techniques from both data-driven and data-independent approaches to improve action recognition systems. Our contribution is three-fold. First, we propose a two-stream Stacked Convolutional Independent Subspace Analysis (ConvISA) architecture to show that unsupervised learning methods can significantly boost the performance of traditional local features extracted from data-independent models. Second, we demonstrate that by learning on video volumes detected by Improved Dense Trajectory (IDT), we can seamlessly combine our novel local descriptors with hand-crafted descriptors, and thus utilize available feature-enhancing techniques developed for hand-crafted descriptors. Finally, similar to the multi-class classification framework in CNNs, we propose a training-free re-ranking technique that exploits the relationship among action classes to improve overall performance. Our experimental results on four benchmark action recognition datasets show significantly improved performance.
- Feb 17 2015 cs.CV arXiv:1502.04132v1 We propose a method for representing motion information for video classification and retrieval. We improve upon local descriptor based methods, which have been among the most popular and successful models for representing videos. The desired local descriptors need to satisfy two requirements: 1) to be representative, 2) to be discriminative. That is, they need to occur frequently enough in the videos and to be able to tell the difference among different types of motions. To generate such local descriptors, the video blocks they are based on must contain just the right amount of motion information. However, current state-of-the-art local descriptor methods use video blocks with a single fixed size, which is insufficient for covering actions with varying speeds. In this paper, we introduce a long-short term motion feature that generates descriptors from video blocks with multiple lengths, thus covering motions with large speed variance. Experimental results show that, albeit simple, our model achieves state-of-the-art results on several benchmark datasets.
- Feb 06 2015 cs.DM arXiv:1502.01523v2 Given a graph $G = (V,E)$, a perfect dominating set is a subset of vertices $V' \subseteq V(G)$ such that each vertex $v \in V(G)\setminus V'$ is dominated by exactly one vertex $v' \in V'$. An efficient dominating set is a perfect dominating set $V'$ that is also an independent set. These problems are also posed in terms of edges instead of vertices. Both problems, in either the vertex or the edge variant, remain NP-hard even when restricted to certain graph families. We study both variants of the problems for circular-arc graphs, and show efficient algorithms for all of them.
- Jan 20 2015 cs.CV arXiv:1501.04277v1 In this paper, we study the robust subspace clustering problem, which aims to cluster possibly noisy data points into their underlying subspaces. A large pool of previous subspace clustering methods focuses on graph construction through different regularizations of the representation coefficient. We instead focus on the robustness of the model to non-Gaussian noise. We propose a new robust clustering method using the correntropy induced metric, which is robust to non-Gaussian and impulsive noise, and further extend the method to handle data with outlier rows/features. The multiplicative form of half-quadratic optimization is used to optimize the non-convex correntropy objective functions of the proposed models. Extensive experiments on face datasets demonstrate that the proposed methods are more robust to corruptions and occlusions.
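The correntropy induced metric replaces the squared residual with a Gaussian kernel, which saturates for large errors instead of letting them dominate. A small sketch of one common formulation (the kernel width $\sigma$ below is chosen arbitrarily, not taken from the paper):

```python
import numpy as np

def cim(x, y, sigma=1.0):
    """Correntropy induced metric between two vectors: each residual is
    passed through a Gaussian kernel, so large (impulsive) errors saturate
    rather than exploding the way the squared L2 norm does."""
    r = np.asarray(x, float) - np.asarray(y, float)
    k = np.exp(-r ** 2 / (2 * sigma ** 2))   # Gaussian kernel per coordinate
    return np.sqrt(np.mean(1 - k))

clean = cim([0, 0, 0], [0.1, 0.1, 0.1])
outlier = cim([0, 0, 0], [0.1, 0.1, 100.0])  # one impulsive-noise entry
print(clean, outlier)  # the outlier barely pushes the metric past its ceiling
```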
- In this paper, we introduce a novel deep learning framework, termed Purine. In Purine, a deep network is expressed as a bipartite graph (bi-graph) composed of interconnected operators and data tensors. With the bi-graph abstraction, networks are easily solvable with an event-driven task dispatcher. We then demonstrate that different parallelism schemes over GPUs and/or CPUs, on single or multiple PCs, can be universally implemented by graph composition. This frees researchers from coding for various parallelization schemes, and the same dispatcher can be used for solving different graphs. Scheduled by the task dispatcher, memory transfers are fully overlapped with other computations, which greatly reduces communication overhead and helps us achieve approximately linear acceleration.
- Nov 26 2014 cs.CV arXiv:1411.6660v4 Most state-of-the-art action feature extractors involve differential operators, which act as highpass filters and tend to attenuate low-frequency action information. This attenuation introduces bias into the resulting features and generates ill-conditioned feature matrices. The Gaussian Pyramid has been used as a feature-enhancing technique that encodes scale-invariant characteristics into the feature space in an attempt to deal with this attenuation. However, at the core of the Gaussian Pyramid is a convolutional smoothing operation, which makes it incapable of generating new features at coarse scales. To address this problem, we propose a novel feature-enhancing technique called Multi-skIp Feature Stacking (MIFS), which stacks features extracted using a family of differential filters parameterized with multiple time skips and encodes shift-invariance into the frequency space. MIFS compensates for information lost from using differential operators by recapturing information at coarse scales. This recaptured information allows us to match actions at different speeds and ranges of motion. We prove that MIFS enhances the learnability of differential-based features exponentially. The resulting feature matrices from MIFS have much smaller condition numbers and variances than those from conventional methods. Experimental results show significantly improved performance on challenging action recognition and event detection tasks. Specifically, our method exceeds the state of the art on the Hollywood2, UCF101 and UCF50 datasets and is comparable to the state of the art on the HMDB51 and Olympic Sports datasets. MIFS can also be used as a speedup strategy for feature extraction with minimal or no accuracy cost.
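The multi-skip idea can be illustrated with plain temporal differences: compute a differential feature at several time skips and stack the results, so both slow and fast motions produce strong responses. This toy sketch shows only the stacking pattern, not the paper's actual feature extractor:

```python
import numpy as np

def multi_skip_features(frames, skips=(1, 2, 4)):
    """Stack differential features computed at several time skips
    (MIFS-style stacking over a family of differential filters)."""
    feats = []
    for s in skips:
        feats.append(frames[s:] - frames[:-s])  # temporal difference at skip s
    return np.concatenate([f.ravel() for f in feats])

frames = np.arange(8, dtype=float).reshape(8, 1)  # toy 1-pixel "video"
f = multi_skip_features(frames)
print(f.shape)  # 7 + 6 + 4 = 17 stacked difference values
```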
- We propose a novel deep network structure called "Network In Network" (NIN) to enhance model discriminability for local patches within the receptive field. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to abstract the data within the receptive field. We instantiate the micro neural network with a multilayer perceptron, which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner to CNNs; they are then fed into the next layer. Deep NIN can be implemented by stacking multiple of the above described structures. With enhanced local modeling via the micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers. We demonstrate state-of-the-art classification performance with NIN on CIFAR-10 and CIFAR-100, and reasonable performance on the SVHN and MNIST datasets.
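The micro-network (mlpconv) layer is equivalent to 1x1 convolutions, i.e. a shared MLP applied at every spatial location, and global average pooling reduces each final feature map to a single confidence value. A NumPy sketch with made-up sizes (not the authors' implementation):

```python
import numpy as np

def mlpconv(x, w1, w2):
    """Shared per-pixel MLP (two 1x1 'conv' layers): (C, H, W) -> (C2, H, W).
    einsum 'dc,chw->dhw' applies the same weight matrix at each location."""
    relu = lambda z: np.maximum(z, 0)
    h = relu(np.einsum('dc,chw->dhw', w1, x))
    return relu(np.einsum('dc,chw->dhw', w2, h))

def global_average_pool(x):
    """One confidence value per feature map, replacing fully connected layers."""
    return x.mean(axis=(1, 2))

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8, 8))   # 3 input channels, 8x8 feature map
w1 = rng.normal(size=(16, 3))    # hidden width 16 (arbitrary)
w2 = rng.normal(size=(10, 16))   # 10 classes -> 10 output feature maps
scores = global_average_pool(mlpconv(x, w1, w2))
print(scores.shape)  # (10,)
```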
- May 21 2013 cs.SI physics.soc-ph arXiv:1305.4429v1 Social networks provide a new perspective for enterprises to better understand their customers and have attracted substantial attention in industry. However, inferring high-quality customer social networks is a great challenge, since there are no explicit customer relations in many traditional OLTP environments. In this paper, we study this issue in the field of passenger transport and introduce a new member to the family of social networks, named Co-Travel Networks, consisting of passengers connected by their co-travel behaviors. We propose a novel method to infer high-quality co-travel networks of civil aviation passengers from their co-booking behaviors derived from PNRs (Passenger Name Records). In our method, to accurately evaluate the strength of ties, we present a measure of Co-Journey Times that counts the number of complete journeys passengers take together. We infer a high-quality co-travel network from a large encrypted PNR dataset and conduct a series of network analyses on it. The experimental results show the effectiveness of our inference method, as well as some special characteristics of co-travel networks, such as their sparsity and high aggregation, compared with other kinds of social networks. Such co-travel networks can be expected to greatly help the industry better understand its passengers and improve its services. More importantly, we contribute a special kind of social network with high tie strength, generated from very close and high-cost travel behaviors, for further scientific research on human travel behaviors, group travel patterns, high-end travel market evolution, etc., from the perspective of social networks.
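Counting complete co-journeys per passenger pair might look like the following sketch, assuming a simplified input where each journey id maps to the set of passengers booked on every leg of it (hypothetical format; the paper derives journeys from encrypted PNR co-booking records):

```python
from collections import Counter
from itertools import combinations

def co_journey_times(journeys):
    """Count complete co-journeys for each passenger pair.

    `journeys` maps a journey id to the set of passengers who share
    that complete journey; a pair's tie strength is the number of
    complete journeys they share.  (Illustrative sketch only.)
    """
    ties = Counter()
    for passengers in journeys.values():
        for pair in combinations(sorted(passengers), 2):
            ties[pair] += 1
    return ties
```

Counting whole journeys rather than individual shared flight legs is what keeps the tie strengths high-precision: two strangers on the same leg do not accumulate weight unless their full itineraries coincide.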
- Say that an edge of a graph $G$ dominates itself and every other edge adjacent to it. An edge dominating set of a graph $G=(V,E)$ is a subset of edges $E' \subseteq E$ which dominates all edges of $G$. In particular, if every edge of $G$ is dominated by exactly one edge of $E'$ then $E'$ is a dominating induced matching. It is known that not every graph admits a dominating induced matching, while the problem of deciding whether a graph admits one is NP-complete. In this paper we consider the problems of finding a minimum weighted dominating induced matching, if any, and counting the number of dominating induced matchings of a graph with weighted edges. We describe an exact algorithm for general graphs that runs in $O^*(1.1939^n)$ time and polynomial (linear) space. This improves over any existing exact algorithm for the problems in consideration.
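For intuition, the defining condition and a brute-force search can be sketched as follows (exponential in the number of edges; the paper's algorithm achieves the $O^*(1.1939^n)$ bound instead):

```python
from itertools import combinations

def is_dominating_induced_matching(edges, subset):
    """Check the definition: every edge of the graph must be dominated
    by exactly one edge of `subset`, where an edge dominates itself and
    every edge sharing an endpoint with it."""
    for e in edges:
        dominators = sum(1 for f in subset if set(e) & set(f))
        if dominators != 1:
            return False
    return True

def find_dim(edges):
    """Brute-force search over all edge subsets, smallest first.
    Returns a dominating induced matching or None if the graph admits
    none (which is possible, as the abstract notes)."""
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            if is_dominating_induced_matching(edges, subset):
                return list(subset)
    return None
```

Note how "exactly one" forces both properties at once: domination (at least one) and induced matching (no two chosen edges adjacent, or some edge would be dominated twice).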
- Say that an edge of a graph G dominates itself and every other edge adjacent to it. An edge dominating set of a graph G = (V,E) is a subset of edges E' ⊆ E which dominates all edges of G. In particular, if every edge of G is dominated by exactly one edge of E' then E' is a dominating induced matching. It is known that not every graph admits a dominating induced matching, while the problem of deciding whether a graph admits one is NP-complete. In this paper we consider the problem of finding a minimum weighted dominating induced matching, if any, of a graph with weighted edges. We describe two exact algorithms for general graphs. The algorithms are efficient when G admits a known vertex dominating set of small size, or when G contains a polynomial number of maximal independent sets.
- Jul 27 2012 cs.PF arXiv:1207.6295v2 Energy consumption imposes a significant cost on data centers, yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on the statistics of the workload process. In particular, both slow time-scale non-stationarities of the workload (e.g., the peak-to-mean ratio) and fast time-scale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow time-scale with stochastic modeling of the fast time-scale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.
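The basic trade-off (energy for active capacity versus the cost of toggling servers on) can be sketched with a toy cost model; all parameters and names here are hypothetical, not the paper's formulation:

```python
def provisioning_cost(load, capacity, energy_cost=1.0, switch_cost=0.5):
    """Energy cost of a capacity schedule plus a switching penalty.

    `load` and `capacity` are per-time-step sequences; capacity must
    cover the load at every step.  Dynamic resizing saves energy during
    low-load periods but pays `switch_cost` for each unit of capacity
    turned on, which is why bursty workloads can erase its benefit.
    """
    cost = 0.0
    prev = capacity[0]
    for demand, cap in zip(load, capacity):
        assert cap >= demand, "capacity must cover the load"
        cost += energy_cost * cap                  # energy for active capacity
        cost += switch_cost * max(0, cap - prev)   # cost of powering servers on
        prev = cap
    return cost
```

With a smooth load a tight schedule wins; as the switching cost or burstiness grows, static over-provisioning catches up, mirroring the paper's conclusion that workload statistics decide the outcome.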
- We give a linear-time algorithm that checks for isomorphism between two 0-1 matrices that obey the circular-ones property. This algorithm leads to linear-time isomorphism algorithms for related graph classes, including Helly circular-arc graphs, \Gamma-circular-arc graphs, proper circular-arc graphs and convex-round graphs.
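At the heart of comparing circular 0-1 rows up to rotation is canonicalization; here is a quadratic-time sketch of the idea (Booth's algorithm computes the least rotation in linear time, as an overall linear-time isomorphism test requires):

```python
def canonical_rotation(row):
    """Lexicographically least rotation of a circular 0-1 row.  Two rows
    represent the same circular sequence iff their canonical rotations
    coincide.  (Quadratic sketch; use Booth's algorithm for linear time.)
    """
    s = "".join(map(str, row))
    return min(s[i:] + s[:i] for i in range(len(s)))

def circularly_equal(row_a, row_b):
    """Equality of two 0-1 rows viewed as circular sequences."""
    return canonical_rotation(row_a) == canonical_rotation(row_b)
```

Canonical forms turn the isomorphism question into plain string comparison, which is the standard route to linear-time recognition once the canonicalization itself is linear.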
- Mar 22 2011 cs.DM arXiv:1103.3732v1 A Helly circular-arc model M = (C,A) is a circle C together with a Helly family A of arcs of C. If no arc is contained in any other, then M is a proper Helly circular-arc model; if every arc has the same length, then M is a unit Helly circular-arc model; and if no two arcs cover the circle, then M is a normal Helly circular-arc model. A Helly (resp. proper Helly, unit Helly, normal Helly) circular-arc graph is the intersection graph of the arcs of a Helly (resp. proper Helly, unit Helly, normal Helly) circular-arc model. In this article we study these subclasses of Helly circular-arc graphs. We show natural generalizations of several properties of (proper) interval graphs that hold for some of these Helly circular-arc subclasses. Next, we describe characterizations of the subclasses of Helly circular-arc graphs, including forbidden induced subgraph characterizations. These characterizations lead to efficient algorithms for recognizing graphs within these classes. Finally, we show how these classes of graphs relate to straight and round digraphs.
- Mar 21 2011 cs.SE arXiv:1103.3569v1 Open source projects often maintain open bug repositories during development and maintenance, and reporters often state directly or implicitly the reasons why bugs occur when they submit them. The comments about a bug are very valuable for developers in locating and fixing it. Meanwhile, it is very common in large software for programmers to override or overload methods following the same logic. If one method causes a bug, other overridden or overloaded methods may well cause related or similar bugs. In this paper, we propose and implement a tool, Rebug-Detector, which detects related bugs using bug information and code features. First, it extracts bug features from bug information in bug repositories; second, it locates bug methods in the source code and extracts their code features; third, it calculates similarities between each overridden or overloaded method and the bug methods; lastly, it determines which methods may cause potential related or similar bugs. We evaluate Rebug-Detector on an open source project, Apache Lucene-Java. Our tool detects 61 related bugs in total, including 21 real bugs and 10 suspected bugs, and takes about 15.5 minutes. The results show that the bug features and code features extracted by our tool are useful for finding real bugs in existing projects.
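The similarity step can be sketched with a simple Jaccard measure over feature sets (a hypothetical simplification; the paper combines several bug and code features, not a single set representation):

```python
def feature_similarity(bug_features, method_features):
    """Jaccard similarity between the feature set of a known buggy
    method and that of a candidate overridden/overloaded method.
    Candidates scoring above a chosen threshold would be flagged as
    potential related bugs.  (Illustrative sketch only.)
    """
    a, b = set(bug_features), set(method_features)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

A set-overlap measure fits the intuition in the abstract: overloads of the same method share most of their logic, so their feature sets overlap heavily with the buggy original.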
- Oct 21 2010 cs.SE arXiv:1010.4092v3 The number of bug reports in complex software is increasing dramatically. Since bugs are currently triaged manually, bug triage or assignment is a labor-intensive and time-consuming task. Without knowledge of the structure of the software, testers often specify the component of a new bug incorrectly, and it is difficult for triagers to determine the component from the description alone. We find that the components of 28,829 bugs in the Eclipse bug project were specified wrongly and modified at least once. As a result, these bugs had to be reassigned, which delays the process of bug fixing: the average time to fix wrongly-specified bugs is longer than that of correctly-specified ones. To solve the problem automatically, we use historical fixed bug reports as a training corpus and build classifiers based on support vector machines and Naïve Bayes to predict the component of a new bug. The best prediction accuracy reaches 81.21% on our validation corpus from the Eclipse project. On average, our predictive model can save about 54.3 days for triagers and developers to repair a bug. Keywords: bug reports; bug triage; text classification; predictive model
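A minimal multinomial Naïve Bayes component predictor along these lines might look as follows (class and field names are hypothetical; the paper also trains an SVM variant and uses real Eclipse bug reports):

```python
import math
from collections import Counter, defaultdict

class ComponentNaiveBayes:
    """Multinomial naive Bayes over bug-report words, predicting the
    software component a new bug report belongs to.  (Toy sketch.)"""

    def fit(self, reports, components):
        self.priors = Counter(components)          # P(component) counts
        self.word_counts = defaultdict(Counter)    # per-component word counts
        self.vocab = set()
        for text, comp in zip(reports, components):
            words = text.lower().split()
            self.word_counts[comp].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        best, best_lp = None, float("-inf")
        total = sum(self.priors.values())
        for comp, prior in self.priors.items():
            lp = math.log(prior / total)
            denom = sum(self.word_counts[comp].values()) + len(self.vocab)
            for w in words:
                # Laplace smoothing so unseen words do not zero out a class
                lp += math.log((self.word_counts[comp][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = comp, lp
        return best
```

Training on historical fixed reports and predicting on the new report's text is exactly the workflow the abstract describes, just at toy scale.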
- May 14 2010 cs.DS arXiv:1005.2211v1In this paper we present a modification of a technique by Chiba and Nishizeki [Chiba and Nishizeki: Arboricity and Subgraph Listing Algorithms, SIAM J. Comput. 14(1), pp. 210--223 (1985)]. Based on it, we design a data structure suitable for dynamic graph algorithms. We employ the data structure to formulate new algorithms for several problems, including counting subgraphs of four vertices, recognition of diamond-free graphs, cop-win graphs and strongly chordal graphs, among others. We improve the time complexity for graphs with low arboricity or h-index.
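The arboricity-sensitive flavor of Chiba and Nishizeki's technique can be illustrated with triangle counting by degree ordering (a static sketch, not the paper's dynamic data structure):

```python
def count_triangles(adj):
    """Count triangles by orienting each edge from its lower-degree to
    its higher-degree endpoint (ties broken by label) and intersecting
    out-neighborhoods -- the degree-ordering trick behind Chiba and
    Nishizeki's arboricity-sensitive subgraph listing.  `adj` maps each
    vertex to its set of neighbors.
    """
    order = {v: (len(adj[v]), v) for v in adj}
    out = {v: {u for u in adj[v] if order[u] > order[v]} for v in adj}
    triangles = 0
    for v in adj:
        for u in out[v]:
            # each triangle is counted exactly once, at its lowest vertex
            triangles += len(out[v] & out[u])
    return triangles
```

The orientation bounds every out-degree by a function of the graph's arboricity, which is why such algorithms speed up on sparse graphs with low arboricity or low h-index, as the abstract notes.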
- May 08 2008 cs.OH arXiv:0805.0914v1 A new technique for studying the mechanical behavior of nano-scale thin metal films on substrates is presented. The test structure is based on novel "paddle" cantilever beam specimens with dimensions from a few hundred nanometers down to less than 10 nanometers. The beam is triangular in shape in order to provide a uniform plane-strain distribution. Standard clean-room processing was used to prepare the paddle samples. The experiment is operated by electrostatically deflecting the uniformly stressed paddle cantilever beam and then measuring the thin metal film deposited on top of it. A capacitance technique applied to the other side of the deflected plate measures its deflection with respect to the applied force, and the measured strain is derived from this capacitance measurement of the cantilever deflection. System performance for residual stress measurement of thin films is evaluated with three different forces on the "paddle" cantilever beam: the force due to the film, the compliance force and the electrostatic force.
- Feb 22 2008 cs.OH arXiv:0802.3083v1 Microelectromechanical systems (MEMS) technologies are developing rapidly, with increasing study of the design, fabrication and commercialization of microscale systems and devices. Accurate knowledge of the mechanical behavior of the thin film materials used for MEMS is important for successful design and development. Here, a novel electroplated spring-bridge micro-tensile specimen that integrates pin-pin alignment holes, a misalignment-compensating spring, a load sensor beam and a freestanding thin film is demonstrated and fabricated. The specimen fits into a specially designed micro-mechanical apparatus to carry out a series of monotonic tensile tests on sub-micron freestanding thin films. Thin films applicable as structures or motion gears in MEMS were tested, including sputtered gold, copper and tantalum nitride films. Metal specimens were fabricated by sputtering; for the tantalum nitride samples, nitrogen gas was introduced into the chamber while sputtering tantalum onto the silicon wafer. The sample fabrication method involves three lithography steps and two copper electroplating steps to hold a dog-bone freestanding thin film. Using standard wet etching or lift-off techniques, a series of micro-tensile specimens were patterned in metal thin films, holes, and a seed layer for the spring and frame structure on the underlying silicon-oxide-coated silicon substrate. The two electroplating steps form the distinct spring and frame portions of the test chip. Finally, the silicon oxide was chemically etched away to separate the electroplated specimen from the silicon substrate.
- Nov 22 2007 cs.OH arXiv:0711.3300v1 Microelectromechanical systems (MEMS) technologies are developing rapidly, with increasing study of the design, fabrication and commercialization of microscale systems and devices. Accurate mechanical properties are important for successful design and development of MEMS. We demonstrate here a novel electroplated spring-frame MEMS structure specimen that integrates pin-pin alignment holes, a misalignment-compensating spring structure frame, a load sensor beam and a freestanding thin film. The specimen fits into a specially designed micro-tensile apparatus capable of carrying out a series of tests on sub-micron-scale freestanding thin films.