results for au:Fu_H in:cs

- We consider the stochastic shortest path (SSP) problem for succinct Markov decision processes (MDPs), where the MDP consists of a set of variables, and a set of nondeterministic rules that update the variables. First, we show that several examples from the AI literature can be modeled as succinct MDPs. Then we present computational approaches for upper and lower bounds for the SSP problem: (a) for computing upper bounds, our method is polynomial-time in the implicit description of the MDP; (b) for lower bounds, we present a polynomial-time (in the size of the implicit description) reduction to quadratic programming. Our approach is applicable even to infinite-state MDPs. Finally, we present experimental results to demonstrate the effectiveness of our approach on several classical examples from the AI literature.
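As a concrete illustration of the SSP objective, the minimal expected cost to reach a goal state, the following sketch runs value iteration on a tiny explicit MDP (the states, rules, and unit costs are hypothetical; the paper's methods operate on succinct, implicitly described MDPs without enumerating states):

```python
# Tiny explicit MDP: states 0 and 1, goal state 2, unit cost per step.
# P[s] lists the available rules (actions); each rule is a list of
# (next_state, probability) pairs.
P = {
    0: [[(1, 1.0)], [(0, 0.5), (2, 0.5)]],
    1: [[(2, 1.0)]],
}

def ssp_value_iteration(P, iters=200):
    """Iterate V(s) = min over rules of 1 + sum p * V(next) to approximate
    the minimal expected cost-to-goal (goal states have cost 0)."""
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        V = {s: min(1.0 + sum(p * V.get(t, 0.0) for t, p in rule)
                    for rule in P[s])
             for s in P}
    return V

V = ssp_value_iteration(P)  # V[0] -> 2.0, V[1] -> 1.0
```

Here state 0 can either move deterministically to state 1 (total cost 2) or retry a coin flip toward the goal (also expected cost 2), so both rules are optimal.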
- Mar 22 2018 cs.CV arXiv:1803.07955v1 Images captured under outdoor scenes usually suffer from low contrast and limited visibility due to suspended atmospheric particles, which directly affect the quality of photos. Although numerous image dehazing methods have been proposed, effective hazy image restoration remains a challenging problem. Existing learning-based methods usually predict the medium transmission with Convolutional Neural Networks (CNNs) but ignore the key global atmospheric light. Unlike previous learning-based methods, we propose a flexible cascaded CNN for single hazy image restoration, which considers the medium transmission and global atmospheric light jointly via two task-driven subnetworks. Specifically, the medium transmission estimation subnetwork is inspired by the densely connected CNN, while the global atmospheric light estimation subnetwork is a light-weight CNN. Moreover, the two subnetworks are cascaded by sharing common features. Finally, with the estimated model parameters, the haze-free image is obtained by inverting the atmospheric scattering model, which achieves more accurate and effective restoration. Qualitative and quantitative experimental results on synthetic and real-world hazy images demonstrate that the proposed method effectively removes haze from such images and outperforms several state-of-the-art dehazing methods.
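The final restoration step, inverting the standard atmospheric scattering model I = J·t + A·(1−t) given estimates of the transmission t and atmospheric light A, can be sketched as follows (a minimal NumPy illustration with synthetic inputs, not the paper's implementation; the lower bound t0 on the transmission is a common stabilization heuristic):

```python
import numpy as np

def dehaze(I, t, A, t0=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)."""
    t = np.clip(t, t0, 1.0)[..., None]  # avoid division by tiny transmission
    return (I - A) / t + A

# Forward-synthesize a hazy image from a known latent image, then invert.
J = np.random.rand(4, 4, 3)          # latent haze-free image
t = np.full((4, 4), 0.6)             # medium transmission map
A = np.array([0.9, 0.9, 0.9])        # global atmospheric light
I = J * t[..., None] + A * (1 - t[..., None])
restored = dehaze(I, t, A)           # recovers J exactly on synthetic data
```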
- Mar 12 2018 cs.CV arXiv:1803.03391v1 A visual saliency detection model simulates the human visual system's perception of a scene and has been widely used in many vision tasks. With the development of acquisition technology, more comprehensive information, such as depth cues, inter-image correspondence, or temporal relationships, has become available to extend image saliency detection to RGBD saliency detection, co-saliency detection, or video saliency detection. An RGBD saliency detection model focuses on extracting salient regions from RGBD images by incorporating depth information. A co-saliency detection model introduces an inter-image correspondence constraint to discover the common salient object in an image group. The goal of a video saliency detection model is to locate the motion-related salient object in video sequences, considering the motion cue and spatiotemporal constraints jointly. In this paper, we review these types of saliency detection algorithms, summarize the important issues of existing methods, and discuss open problems and future work. Moreover, the evaluation datasets and quantitative measurements are briefly introduced, and an experimental analysis and discussion are conducted to provide a holistic overview of the different saliency detection methods.
- Feb 22 2018 cs.GT arXiv:1802.07407v1 This paper studies the revenue of simple mechanisms in settings where a third-party data provider is present. When no data provider is present, it is known that simple mechanisms achieve a constant fraction of the revenue of optimal mechanisms. The results in this paper demonstrate that this is no longer true in the presence of a third-party data provider who can provide the bidder with a signal that is correlated with the item type. Specifically, we show that even with a single seller, a single bidder, and a single item of uncertain type for sale, pricing each item-type separately (the analog of item pricing for multi-item auctions) and bundling all item-types under a single price (the analog of grand bundling) can both simultaneously be a logarithmic factor worse than the optimal revenue. Further, in the presence of a data provider, item-type partitioning mechanisms---a more general class of mechanisms which divide item-types into disjoint groups and offer prices for each group---still cannot achieve within a $\log \log$ factor of the optimal revenue.
- We study the local geometry of a one-hidden-layer fully-connected neural network where the training samples are generated from a multi-neuron logistic regression model. We prove that under Gaussian input, the empirical risk function employing quadratic loss exhibits strong convexity and smoothness uniformly in a local neighborhood of the ground truth, for a class of smooth activation functions satisfying certain properties, including sigmoid and tanh, as soon as the sample complexity is sufficiently large. This implies that if initialized in this neighborhood, gradient descent converges linearly to a critical point that is provably close to the ground truth without requiring a fresh set of samples at each iteration. This significantly improves upon prior results on learning shallow neural networks with multiple neurons. To the best of our knowledge, this is the first global convergence guarantee for one-hidden-layer neural networks trained by gradient descent over the empirical risk function without resampling, at near-optimal sample and computational complexity.
- Jan 04 2018 cs.CV arXiv:1801.00926v3 Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup-to-disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of the optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately and rely on hand-crafted visual features from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of a multi-scale input layer, a U-shape convolutional network, a side-output layer, and a multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple receptive field sizes. The U-shape convolutional network is employed as the main body of the network to learn a rich hierarchical representation, while the side-output layer acts as an early classifier that produces companion local prediction maps for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. To further improve segmentation performance, we also introduce a polar transformation, which provides a representation of the original image in the polar coordinate system. The experiments show that our M-Net system achieves state-of-the-art OD and OC segmentation results on the ORIGA dataset. Simultaneously, the proposed method also obtains satisfactory glaucoma screening performance with the calculated CDR values on both the ORIGA and SCES datasets.
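The polar transformation mentioned above resamples the fundus image around a center point onto a (radius, angle) grid; a minimal nearest-neighbor sketch (the grid sizes and centering here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def to_polar(img, center=None, n_r=64, n_theta=128):
    """Nearest-neighbor resampling of an image onto a polar (r, theta) grid
    centered at `center` (defaults to the image center)."""
    h, w = img.shape[:2]
    cy, cx = center if center is not None else (h / 2.0, w / 2.0)
    r_max = min(cy, cx, h - cy, w - cx) - 1          # stay inside the image
    r = np.linspace(0.0, r_max, n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]                               # shape (n_r, n_theta)

polar = to_polar(np.random.rand(100, 100))
```

One motivation for such a representation is that the roughly circular OD/OC boundaries become roughly horizontal lines in polar coordinates, which balances the cup/disc/background pixel proportions.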
- Dec 05 2017 cs.CV arXiv:1712.00621v1 Despite recent progress in image dehazing, several problems remain largely unsolved, such as robustness across varying scenes, the visual quality of reconstructed images, and effectiveness and flexibility for applications. To tackle these problems, we propose a new deep network architecture for single image dehazing called DR-Net. Our model consists of three main subnetworks: a transmission prediction network that predicts a transmission map for the input image, a haze removal network that reconstructs the latent image steered by the transmission map, and a refinement network that enhances the details and color properties of the dehazed result via weakly supervised learning. Compared to previous methods, our method advances in three aspects: (i) a purely data-driven model; (ii) an end-to-end system; (iii) superior robustness, accuracy, and applicability. Extensive experiments demonstrate that our DR-Net outperforms state-of-the-art methods on both synthetic and real images in qualitative and quantitative metrics. Additionally, the utility of DR-Net is illustrated by its potential usage in several important computer vision tasks.
- This paper presents a sparse denoising autoencoder (SDAE)-based deep neural network (DNN) for the direction finding (DF) of small unmanned aerial vehicles (UAVs). It is motivated by the practical challenges associated with classical DF algorithms such as MUSIC and ESPRIT. The proposed DF scheme is practical and of low complexity in the sense that neither a phase synchronization mechanism, an antenna calibration mechanism, nor detailed knowledge of the antenna radiation pattern is essential. Also, the DF can be accomplished using a single-channel RF receiver implementation. The paper also validates the proposed method experimentally.
- Probabilistic timed automata (PTAs) are timed automata (TAs) extended with discrete probability distributions. They serve as a mathematical model for a wide range of applications that involve both stochastic and timed behaviours. In this work, we consider the problem of model-checking linear dense-time properties over PTAs. In particular, we study linear dense-time properties that can be encoded by TAs with an infinite acceptance criterion. First, we show that the problem of model-checking PTAs against deterministic-TA specifications can be solved through a product construction. Based on the product construction, we prove that the computational complexity of the problem with deterministic-TA specifications is EXPTIME-complete. Then we show that when relaxed to general (nondeterministic) TAs, the model-checking problem becomes undecidable. Our results substantially extend the state of the art with both the dense-time feature and the nondeterminism in TAs.
- Nov 20 2017 cs.CV arXiv:1711.06500v1 An intrinsic challenge of person re-identification (re-ID) is the annotation difficulty. This typically means 1) few training samples per identity, and 2) thus a lack of diversity among the training samples. Consequently, we face a high risk of over-fitting when training the convolutional neural network (CNN), a state-of-the-art method in person re-ID. To reduce the risk of over-fitting, this paper proposes a Pseudo Positive Regularization (PPR) method to enrich the diversity of the training data. Specifically, unlabeled data from an independent pedestrian database is retrieved using the target training data as query. A small proportion of these retrieved samples are randomly selected as Pseudo Positive samples and added to the target training set for supervised CNN training. The addition of Pseudo Positive samples is therefore a data augmentation method that reduces the risk of over-fitting during CNN training. We implement our idea in identification CNN models (i.e., CaffeNet, VGGNet-16 and ResNet-50). On the CUHK03 and Market-1501 datasets, experimental results demonstrate that the proposed method consistently improves the baseline and yields performance competitive with state-of-the-art person re-ID methods.
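The augmentation step can be sketched as follows (a toy illustration with hypothetical file names, labels, and selection ratio; in the paper the samples are retrieved from an independent pedestrian database with the target training data as query):

```python
import random

def add_pseudo_positives(train_set, retrieved, ratio=0.1, seed=0):
    """Randomly select a small fraction of retrieved unlabeled samples
    and append them to the supervised training set as pseudo positives."""
    rng = random.Random(seed)
    k = max(1, int(ratio * len(retrieved)))
    return train_set + rng.sample(retrieved, k)

train = [("img_a.jpg", 0), ("img_b.jpg", 1)]                   # labeled targets
retrieved = [(f"ext_{i}.jpg", i % 2) for i in range(50)]       # query-assigned labels
augmented = add_pseudo_positives(train, retrieved)             # 2 + 5 samples
```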
- Nov 07 2017 cs.CV arXiv:1711.01371v1 As a newly emerging and significant topic in the computer vision community, co-saliency detection aims at discovering the common salient objects in multiple related images. Existing methods often generate the co-saliency map through a direct forward pipeline based on designed cues or initialization, but lack a refinement-cycle scheme. Moreover, they mainly focus on RGB images and ignore the depth information of RGBD images. In this paper, we propose an iterative RGBD co-saliency framework, which utilizes existing single-image saliency maps as the initialization and generates the final RGBD co-saliency map by using a refinement-cycle model. Three schemes are employed in the proposed RGBD co-saliency framework: an addition scheme, a deletion scheme, and an iteration scheme. The addition scheme is used to highlight the salient regions based on intra-image depth propagation and saliency propagation, while the deletion scheme filters the saliency regions and removes the non-common salient regions based on an inter-image constraint. The iteration scheme is proposed to obtain a more homogeneous and consistent co-saliency map. Furthermore, a novel descriptor, named depth shape prior, is proposed in the addition scheme to introduce depth information and enhance the identification of co-salient objects. The proposed method can effectively exploit any existing 2D saliency model to work well in RGBD co-saliency scenarios. Experiments on two RGBD co-saliency datasets demonstrate the effectiveness of our proposed framework.
- Oct 17 2017 cs.CV arXiv:1710.05172v1 Co-saliency detection aims at extracting the common salient regions from an image group containing two or more relevant images. It is a newly emerging topic in the computer vision community. Unlike most existing co-saliency methods, which focus on RGB images, this paper proposes a novel co-saliency detection model for RGBD images, which utilizes depth information to enhance the identification of co-saliency. First, the intra saliency map for each image is generated by a single-image saliency model, while the inter saliency map is calculated based on multi-constraint feature matching, which represents the constraint relationships among multiple images. Then, an optimization scheme, namely Cross Label Propagation (CLP), is used to refine the intra and inter saliency maps in a cross way. Finally, all the original and optimized saliency maps are integrated to generate the final co-saliency result. The proposed method introduces depth information and multi-constraint feature matching to improve the performance of co-saliency detection. Moreover, the proposed method can effectively exploit any existing single-image saliency model to work well in co-saliency scenarios. Experiments on two RGBD co-saliency datasets demonstrate the effectiveness of our proposed model.
- Sep 21 2017 cs.CR arXiv:1709.06654v1 Mobile operating systems adopt permission systems to protect system integrity and user privacy. In this work, we propose INSPIRED, an intention-aware dynamic mediation system for mobile operating systems with privacy-preserving capability. When a security- or privacy-sensitive behavior is triggered, INSPIRED automatically infers the underlying program intention by examining its runtime environment and decides whether to grant the relevant permission by matching against user intention. We stress runtime contextual integrity by answering the following three questions: who initiated the behavior, when was the sensitive action triggered, and under what kind of environment was it triggered? Specifically, observing that mobile applications intensively leverage the user interface (UI) to reflect the underlying application functionality, we propose a machine learning based permission model using foreground information obtained from multiple sources. To precisely capture user intention, our permission model evolves over time and can be user-customized by continuously learning from user decisions. Moreover, by keeping and processing all of a user's behavioral data inside her own device (i.e., without sharing with a third-party cloud for learning), INSPIRED is also privacy-preserving. Our evaluation shows that our model achieves both high precision and recall (95%) based on 6,560 permission requests from both benign apps and malware. Further, it is capable of capturing users' specific privacy preferences with an acceptable median f-measure (84.7%) for 1,272 decisions from users. Finally, we show INSPIRED can be deployed on real Android devices to provide real-time protection with low overhead.
- Aug 29 2017 cs.CV arXiv:1708.08267v1 Monocular depth estimation, which plays a key role in understanding 3D scene geometry, is fundamentally an ill-posed problem. Existing methods based on deep convolutional neural networks (DCNNs) have examined this problem by learning convolutional networks to estimate continuous depth maps from monocular images. However, we find that training a network to predict a high spatial resolution continuous depth map often suffers from poor local solutions. In this paper, we hypothesize that achieving a compromise between spatial and depth resolutions can improve network training. Based on this "compromise principle", we propose a regression-classification cascaded network (RCCN), which consists of a regression branch predicting a low spatial resolution continuous depth map and a classification branch predicting a high spatial resolution discrete depth map. The two branches form a cascaded structure allowing the classification and regression branches to benefit from each other. By leveraging large-scale raw training datasets and some data augmentation strategies, our network achieves top or state-of-the-art results on the NYU Depth V2, KITTI, and Make3D benchmarks.
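A classification branch like the one above needs discrete depth targets; one common way to obtain them is to quantize continuous depth into log-spaced bins (a hedged sketch; the depth range, bin count, and spacing here are illustrative assumptions, not the paper's choices):

```python
import numpy as np

def depth_to_bins(depth, d_min=0.5, d_max=10.0, n_bins=32):
    """Quantize continuous depth (in meters) into log-spaced classes;
    log spacing allocates finer bins to nearby structure."""
    edges = np.geomspace(d_min, d_max, n_bins + 1)
    return np.clip(np.digitize(depth, edges) - 1, 0, n_bins - 1)

labels = depth_to_bins(np.array([0.5, 2.0, 9.9]))  # class indices in [0, 31]
```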
- Aug 15 2017 cs.GR arXiv:1708.03760v1 In recent years, consumer-level depth sensors have been adopted in various applications. However, they often produce depth maps at a relatively low frame rate (around 30 frames per second), preventing them from being used for applications such as digitizing human performance involving fast motion. On the other hand, low-cost video cameras with higher frame rates are available. This motivates us to develop a hybrid camera that consists of a high-rate video camera and a low-rate depth camera, and to allow temporal interpolation of depth maps with the help of auxiliary color images. To achieve this, we develop a novel algorithm that reconstructs intermediate depth frames and estimates scene flow simultaneously. We have tested our algorithm on various examples involving fast, non-rigid motions of single or multiple objects. Our experiments show that our scene flow estimation method is more precise than purely tracking-based methods and the state-of-the-art techniques.
- Jul 20 2017 cs.GT arXiv:1707.05875v1 We consider a revenue-optimizing seller selling a single item to a buyer, on whose private value the seller has a noisy signal. We show that, when the signal is kept private, arbitrarily more revenue could potentially be extracted than if the signal is leaked or revealed. We then show that, if the seller is not allowed to make payments to the buyer, the gap between the two is bounded by a multiplicative factor of 3, if the value distribution conditioning on each signal is regular. We give examples showing that both conditions are necessary for a constant bound to hold. We connect this scenario to multi-bidder single-item auctions where bidders' values are correlated. Similarly to the setting above, we show that the revenue of a Bayesian incentive compatible, ex post individually rational auction can be arbitrarily larger than that of a dominant strategy incentive compatible auction, whereas the two are no more than a factor of 5 apart if the auctioneer never pays the bidders and if each bidder's value conditioning on the others' is drawn according to a regular distribution. The upper bounds in both settings degrade gracefully when the distribution is a mixture of a small number of regular distributions.
- May 08 2017 cs.CV arXiv:1705.02145v1 Person re-identification (re-id) is trending toward large-scale settings, where it is important that real-time search be performed over a large gallery. While previous methods mostly focus on discriminative learning, this paper attempts to integrate deep learning and hashing into one framework to evaluate the efficiency and accuracy of large-scale person re-id. We integrate spatial information into the discriminative visual representation by partitioning the pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing (PDH) is proposed, in which batches of triplet samples are employed as the input of the deep hashing architecture. Each triplet sample contains two pedestrian images (or parts) with the same identity and one pedestrian image (or part) of a different identity. A triplet loss function is employed with the constraint that the Hamming distance between pedestrian images (or parts) with the same identity is smaller than that between images with different identities. In the experiments, we show that the proposed Part-based Deep Hashing method yields very competitive re-id accuracy on the large-scale Market-1501 and Market-1501+500K datasets.
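The triplet constraint can be sketched with relaxed real-valued codes, using squared Euclidean distance as a differentiable stand-in for the Hamming distance of the final binary codes (the 3-dimensional codes and the margin are hypothetical values, not the paper's configuration):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss pushing d(anchor, positive) below d(anchor, negative)
    by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, -1.0, 1.0])   # relaxed (real-valued) hash code, anchor
p = np.array([1.0, -1.0, 1.0])   # same identity: zero distance to anchor
n = np.array([-1.0, 1.0, 1.0])   # different identity: large distance
loss = triplet_loss(a, p, n)     # margin already satisfied, loss is 0
```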
- We consider the problem of developing automated techniques for solving recurrence relations to aid the expected-runtime analysis of programs. Several classical textbook algorithms have quite efficient expected-runtime complexity, whereas the corresponding worst-case bounds are either inefficient (e.g., QUICK-SORT) or completely ineffective (e.g., COUPON-COLLECTOR). Since the main focus of expected-runtime analysis is to obtain efficient bounds, we consider bounds that are either logarithmic, linear, or almost-linear ($\mathcal{O}(\log n)$, $\mathcal{O}(n)$, $\mathcal{O}(n\cdot\log n)$, respectively, where $n$ represents the input size). Our main contribution is an efficient and sound approach (a simple linear-time algorithm) for deriving such expected-runtime bounds from the recurrence relations induced by randomized algorithms. Our approach can infer the asymptotically optimal expected-runtime bounds for recurrences of classical randomized algorithms, including RANDOMIZED-SEARCH, QUICK-SORT, QUICK-SELECT, and COUPON-COLLECTOR, where the worst-case bounds are either inefficient (such as linear versus logarithmic expected runtime, or quadratic versus linear or almost-linear expected runtime) or ineffective. We have implemented our approach, and the experimental results show that we obtain the bounds efficiently for the recurrences of various classical algorithms.
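The flavor of such recurrences can be seen by numerically checking a guessed linear bound against a simple halving recurrence, as arises in RANDOMIZED-SEARCH-style analyses (a toy stand-in, not the paper's inference algorithm):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Toy expected-cost recurrence T(n) = n + T(n // 2), T(1) = 1."""
    return 1.0 if n <= 1 else n + T(n // 2)

# Numerically confirm the guessed linear bound T(n) <= 2n + 1,
# the kind of efficient bound the paper's method certifies symbolically.
linear_bound_holds = all(T(n) <= 2 * n + 1 for n in range(1, 5000))
```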
- May 02 2017 cs.PL arXiv:1705.00317v1 We study the problem of developing efficient approaches for proving worst-case bounds of non-deterministic recursive programs. Ranking functions are sound and complete for proving termination and worst-case bounds of nonrecursive programs. First, we apply ranking functions to recursion, resulting in measure functions. We show that measure functions provide a sound and complete approach to proving worst-case bounds of non-deterministic recursive programs. Our second contribution is the synthesis of measure functions in non-polynomial forms. We show that non-polynomial measure functions with logarithms and exponentiation can be synthesized through abstraction of logarithmic or exponential terms, Farkas' Lemma, and Handelman's Theorem, using linear programming. While previous methods obtain worst-case polynomial bounds, our approach can synthesize bounds of the form $\mathcal{O}(n\log n)$ as well as $\mathcal{O}(n^r)$ where $r$ is not an integer. We present experimental results demonstrating that our approach can efficiently obtain worst-case bounds of classical recursive algorithms such as (i) Merge-Sort and the divide-and-conquer algorithm for the Closest-Pair problem, where we obtain an $\mathcal{O}(n \log n)$ worst-case bound, and (ii) Karatsuba's algorithm for polynomial multiplication and Strassen's algorithm for matrix multiplication, where we obtain $\mathcal{O}(n^r)$ bounds such that $r$ is not an integer and is close to the best-known exponents for the respective algorithms.
- A color theme or color palette can deeply influence the quality and the feeling of a photograph or a graphical design. Although color palettes may come from different sources such as online crowd-sourcing, photographs and graphical designs, in this paper we consider color palettes extracted from fine art collections, which we believe to be an abundant source of stylistic and unique color themes. We aim to capture the color styles embedded in these collections by means of statistical models and to build practical applications upon these models. As artists often use their personal color themes in their paintings, making these palettes appear frequently in the dataset, we employed density estimation to capture the characteristics of palette data. Via density estimation, we carried out various predictions and interpolations on palettes, which led to promising applications such as photo-style exploration, real-time color suggestion, and enriched photo recolorization. It was, however, challenging to apply density estimation to palette data, as palettes often come as unordered sets of colors, which makes it difficult to use conventional metrics on them. To this end, we developed a divide-and-conquer sorting algorithm to rearrange the colors in the palettes into a coherent order, which allows meaningful interpolation between color palettes. To confirm the performance of our model, we also conducted quantitative experiments on datasets of digitized paintings collected from the Internet and received favorable results.
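To illustrate why a canonical color ordering makes palette interpolation meaningful, here is a toy version that sorts colors by hue and lightness before interpolating slot-by-slot (the paper uses a divide-and-conquer sorting algorithm; this simple hue sort is only an assumed stand-in):

```python
import colorsys

def canonical_order(palette):
    """Sort an unordered set of RGB colors by (hue, lightness) so that
    two palettes align slot-by-slot before interpolation."""
    return sorted(palette, key=lambda rgb: colorsys.rgb_to_hls(*rgb)[:2])

def lerp_palettes(p1, p2, t):
    """Linearly interpolate corresponding slots of two ordered palettes."""
    p1, p2 = canonical_order(p1), canonical_order(p2)
    return [tuple((1 - t) * a + t * b for a, b in zip(c1, c2))
            for c1, c2 in zip(p1, p2)]

warm = [(0.9, 0.2, 0.1), (0.95, 0.6, 0.2), (0.8, 0.1, 0.3)]
cool = [(0.1, 0.3, 0.9), (0.2, 0.7, 0.9), (0.3, 0.1, 0.8)]
mid = lerp_palettes(warm, cool, 0.5)   # a palette "between" the two styles
```

Without the ordering step, interpolating an arbitrary pairing of slots can blend unrelated colors and produce muddy intermediates.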
- Feb 07 2017 cs.CR arXiv:1702.01160v2 Mobile applications (apps) often transmit sensitive data over the network with various intentions. Some transmissions are needed to fulfill the app's functionalities. However, transmissions to malicious receivers may lead to privacy leakage and tend to behave stealthily to evade detection. The problem is twofold: how does one unveil sensitive transmissions in mobile apps, and, given a sensitive transmission, how does one determine whether it is legitimate? In this paper, we propose LeakSemantic, a framework that can automatically locate abnormal sensitive network transmissions in mobile apps. LeakSemantic consists of a hybrid program analysis component and a machine learning component. Our program analysis component combines static analysis and dynamic analysis to precisely identify sensitive transmissions. Compared to existing taint analysis approaches, LeakSemantic achieves better accuracy with fewer false positives and is able to collect runtime data such as network traffic for each transmission. Based on features derived from the runtime data, machine learning classifiers are built to further differentiate between legitimate and illegitimate disclosures. Experiments show that LeakSemantic achieves 91% accuracy on 2279 sensitive connections from 1404 apps.
- Jan 12 2017 cs.PL arXiv:1701.02944v1 We study the termination problem for nondeterministic recursive probabilistic programs. First, we show that a ranking-supermartingales-based approach is both sound and complete for bounded termination (i.e., bounded expected termination time over all schedulers). Our result also clarifies previous results which claimed that ranking supermartingales are not a complete approach even for nondeterministic probabilistic programs without recursion. Second, we show that conditionally difference-bounded ranking supermartingales provide a sound approach for lower bounds on expected termination time. Finally, we show that supermartingales with lower bounds on conditional absolute difference provide a sound approach for almost-sure termination, along with explicit bounds on tail probabilities of nontermination within a given number of steps. We also present several illuminating counterexamples that establish the necessity of certain prerequisites (such as the conditionally difference-bounded condition).
- There is a trend toward acquiring high-accuracy land-cover maps using multi-source classification methods, most of which are based on data fusion, especially pixel- or feature-level fusion. A probabilistic graphical model (PGM) approach is proposed in this research for 30 m resolution land-cover mapping with multi-temporal Landsat and MODerate Resolution Imaging Spectroradiometer (MODIS) data. Independent classifiers were applied to two single-date Landsat 8 scenes and the MODIS time-series data, respectively, for probability estimation. A PGM was created for each pixel in the Landsat 8 data. Conditional probability distributions were computed based on data quality and reliability, using information selectively. Using the administrative territory of Beijing City (Area-1) and a coastal region of Shandong province, China (Area-2) as study areas, multiple land-cover maps were generated for comparison. Quantitative results show the effectiveness of the proposed method. Overall accuracies improved from 74.0% (maps acquired from single-temporal Landsat images) to 81.8% (output of the PGM) for Area-1. Improvements can also be seen when using MODIS data and only a single-temporal Landsat image as input (overall accuracy: 78.4% versus 74.0% for Area-1, and 86.8% versus 83.0% for Area-2). Information from MODIS data did not help much when the PGM was applied to cloud-free regions. One of the advantages of the proposed method is that it can be applied where multi-temporal data cannot simply be stacked as a multi-layered image.
- Dec 06 2016 cs.CV arXiv:1612.01227v1 The human visual system excels at detecting local blur in visual images, but the underlying mechanism remains poorly understood. Traditional views of blur, such as reduction in local or global high-frequency energy and loss of local phase coherence, have fundamental limitations. For example, they cannot reliably discriminate flat regions from blurred ones. Here we argue that high-level semantic information is critical for successfully detecting local blur. Therefore, we resort to deep neural networks, which are proficient at learning high-level features, and propose the first end-to-end local blur mapping algorithm based on a fully convolutional network (FCN). We empirically show that high-level features of deeper layers indeed play a more important role than low-level features of shallower layers in resolving challenging ambiguities for this task. We test the proposed method on a standard blur detection benchmark and demonstrate that it significantly advances the state of the art (ODS F-score of 0.853). In addition, we explore the use of the generated blur map in three applications: blur region segmentation, blur degree estimation, and blur magnification.
- May 16 2016 cs.CR arXiv:1605.04025v2 The exponential growth of mobile devices has raised concerns about sensitive data leakage. In this paper, we make the first attempt to identify suspicious location-related HTTP transmission flows from the user's perspective, by answering the question: Is the transmission user-intended? In contrast to previous network-level detection schemes that mainly rely on a given set of suspicious hostnames, our approach can better adapt to the fast growth of the app market and constantly evolving leakage patterns. On the other hand, compared to existing system-level detection schemes built upon program taint analysis, where all sensitive transmissions are treated as illegal, our approach better meets user needs and is easier to deploy. In particular, our proof-of-concept implementation (FlowIntent) captures sensitive transmissions missed by TaintDroid, the state-of-the-art dynamic taint analysis system on Android platforms. Evaluation using 1002 location-sharing instances collected from more than 20,000 apps shows that our approach achieves about 91% accuracy in detecting illegitimate location transmissions.
- The proliferation of Social Network Sites (SNSs) has greatly reshaped the way information is disseminated, but has also provided a new venue for hosts with impure motivations to disseminate malicious information. Social trust is the basis for information dissemination in SNSs. Malicious hosts judiciously and dynamically strike a balance between maintaining their social trust and selfishly maximizing their malicious gain over a long time span. Studying the optimal response strategies of each malicious host can assist in designing the best system maneuver so as to achieve a targeted level of overall malicious activity. In this paper, we propose an interaction-based social trust model, and formulate the maximization of the long-term malicious gains of multiple competing hosts as a non-cooperative differential game. Through rigorous analysis, optimal response strategies are identified and the best system maneuver mechanism is presented. Extensive numerical studies further verify the analytical results.
- Apr 26 2016 cs.CV arXiv:1604.07090v5 Co-saliency detection is a newly emerging and rapidly growing research area in the computer vision community. As a novel branch of visual saliency, co-saliency detection refers to the discovery of common and salient foregrounds in two or more relevant images, and can be widely used in many computer vision tasks. Existing co-saliency detection algorithms mainly consist of three components: extracting effective features to represent image regions, exploring informative cues or factors that characterize co-saliency, and designing effective computational frameworks to formulate co-saliency. Although numerous methods have been developed, the literature still lacks a deep review and evaluation of co-saliency detection techniques. In this paper, we aim to provide a comprehensive review of the fundamentals, challenges, and applications of co-saliency detection. Specifically, we provide an overview of related computer vision work, review the history of co-saliency detection, summarize and categorize the major algorithms in this research area, discuss open issues, present potential applications of co-saliency detection, and finally point out some unsolved challenges and promising future work. We expect this review to be beneficial to both new and senior researchers in this field, and to give researchers in related areas insights into the utility of co-saliency detection algorithms.
- Apr 26 2016 cs.PL arXiv:1604.07169v1 We consider nondeterministic probabilistic programs with the most basic liveness property, termination. We present efficient methods for termination analysis of nondeterministic probabilistic programs with polynomial guards and assignments. Our approach is through the synthesis of polynomial ranking supermartingales, which on one hand significantly generalize linear ranking supermartingales and on the other hand are a counterpart of polynomial ranking functions for proving termination of nonprobabilistic programs. The approach synthesizes polynomial ranking supermartingales through Positivstellensatz certificates, yielding an efficient method which is not only sound but also semi-complete over a large subclass of programs. Experimental results demonstrate that our approach can handle several classical programs with complex polynomial guards and assignments, and can synthesize efficient quadratic ranking supermartingales when a linear one does not exist even for simple affine programs.
- Apr 12 2016 cs.SI physics.soc-ph arXiv:1604.02694v1 Social status refers to one's relative position within a society. It is an important notion in sociology and related research. The problem of measuring social status has been studied for many years. Various indicators have been proposed to assess the social status of individuals, including educational attainment, occupation, and income/wealth. However, these indicators are sometimes difficult to collect or measure. We investigate social networks for alternative measures of social status. Online activities expose certain traits of users in the real world. We are interested in how these activities relate to social status, and how social status can be predicted with social network data. To the best of our knowledge, this is the first study connecting online activities with real-world social status. In particular, we focus on the network structure of microblogs. One user following another implies some kind of status. We cast the predicted social status of users onto the "status" of real-world entities, e.g., universities, occupations, and regions, so that we can compare and validate the predicted results against facts in the real world. We propose an efficient algorithm for this task and evaluate it on a dataset of 3.4 million users from Sina Weibo. The results show that it is possible to predict social status with reasonable accuracy using social network data. We also point out challenges and limitations of this approach, e.g., the inconsistency between online popularity and real-world status for certain users. Our findings provide insights into the analysis of online social status and future designs of ranking schemes for social networks.
- Oct 30 2015 cs.PL arXiv:1510.08517v1 In this paper, we consider termination of probabilistic programs with real-valued variables. The questions concerned are: 1. qualitative ones that ask (i) whether the program terminates with probability 1 (almost-sure termination) and (ii) whether the expected termination time is finite (finite termination); 2. quantitative ones that ask (i) to approximate the expected termination time (expectation problem) and (ii) to compute a bound B such that the probability to terminate after B steps decreases exponentially (concentration problem). To solve these questions, we utilize the notion of ranking supermartingales, a powerful approach for proving termination of probabilistic programs. In detail, we focus on algorithmic synthesis of linear ranking supermartingales over affine probabilistic programs (APP's) with both angelic and demonic non-determinism. An important subclass of APP's is LRAPP, defined as the class of all APP's over which a linear ranking supermartingale exists. Our main contributions are as follows. Firstly, we show that the membership problem of LRAPP (i) can be decided in polynomial time for APP's with at most demonic non-determinism, and (ii) is NP-hard and in PSPACE for APP's with angelic non-determinism; moreover, the NP-hardness result holds already for APP's without probability and demonic non-determinism. Secondly, we show that the concentration problem over LRAPP can be solved with the same complexity as the membership problem of LRAPP. Finally, we show that the expectation problem over LRAPP can be solved in 2EXPTIME and is PSPACE-hard even for APP's without probability and non-determinism (i.e., deterministic programs). Our experimental results demonstrate the effectiveness of our approach in answering the qualitative and quantitative questions over APP's with at most demonic non-determinism.
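As a toy illustration of the ranking-supermartingale idea behind this line of work (a sketch over a hypothetical one-variable program, not the paper's synthesis method): for the program `while x > 0: x := x - 1 with probability 3/4, else x := x + 1`, the linear function f(x) = 2x is a ranking supermartingale, since its expected value decreases by 1 on every iteration while x > 0, which certifies almost-sure termination with finite expected time:

```python
import random

# Toy program: while x > 0, decrement x with probability 3/4, else increment.
# f(x) = 2x is a linear ranking supermartingale: while x > 0, its expected
# value decreases by at least 1 per step.
def expected_next_f(x, p=0.75):
    """E[f(x')] after one loop iteration from x > 0, with f(x) = 2x."""
    return p * 2 * (x - 1) + (1 - p) * 2 * (x + 1)

assert expected_next_f(7) == 2 * 7 - 1  # decrease of exactly 1

def run(x0, rng):
    """Simulate the program; return the number of loop iterations."""
    x, steps = x0, 0
    while x > 0:
        x += -1 if rng.random() < 0.75 else 1
        steps += 1
    return steps

# The drift of x itself is -1/2, so E[steps] = 2 * x0; check by Monte Carlo.
rng = random.Random(0)
n, x0 = 20000, 2
mean = sum(run(x0, rng) for _ in range(n)) / n
print(round(mean))
```

The analytic check confirms the unit expected decrease of f; the Monte Carlo estimate agrees with the closed-form expected termination time 2·x0 for this biased walk.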
- Sep 24 2015 cs.DM arXiv:1509.07029v1 The global packing number problem arises from the investigation of optimal wavelength allocation in an optical network that employs Wavelength Division Multiplexing (WDM). Consider an optical network that is represented by a connected, simple graph $G$. We assume all communication channels are bidirectional, so that all links and paths are undirected. It follows that there are ${|G|\choose 2}$ distinct node pairs associated with $G$, where $|G|$ is the number of nodes in $G$. A path system $\mathcal{P}$ of $G$ consists of ${|G|\choose 2}$ paths, one path to connect each of the node pairs. The global packing number of a path system $\mathcal{P}$, denoted by $\Phi(G,\mathcal{P})$, is the minimum integer $k$ to guarantee the existence of a mapping $\omega:\mathcal{P}\to\{1,2,\ldots,k\}$, such that $\omega(P)\neq\omega(P')$ if $P$ and $P'$ have common edge(s). The global packing number of $G$, denoted by $\Phi(G)$, is defined to be the minimum $\Phi(G,\mathcal{P})$ among all possible path systems $\mathcal{P}$. If there is no wavelength conversion along any optical transmission path for any node pair in the network, the global packing number signifies the minimum number of wavelengths required to support simultaneous communication for all pairs in the network. In this paper, the focus is on ring networks, so that $G$ is a cycle. Explicit formulas for the global packing number of a cycle are derived. The investigation is further extended to chain networks. A path system, $\mathcal{P}$, that enjoys $\Phi(G,\mathcal{P})=\Phi(G)$ is called ideal. A characterization of ideal path systems is also presented. We also describe an efficient heuristic algorithm to assign wavelengths that can be applied to a general network with more complicated traffic load.
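The definitions above can be made concrete with a small brute-force computation of $\Phi(G,\mathcal{P})$ for one particular (hypothetical, all-clockwise) path system on the 4-cycle; this only illustrates the wavelength-assignment constraint and is not the paper's heuristic algorithm:

```python
from itertools import combinations

def cycle_path_edges(n, u, v):
    """Edge set of the clockwise arc from node u to node v on the cycle C_n."""
    edges, cur = set(), u
    while cur != v:
        nxt = (cur + 1) % n
        edges.add(frozenset((cur, nxt)))
        cur = nxt
    return edges

def packing_number(paths):
    """Phi(G, P): minimum k so that edge-sharing paths receive distinct
    wavelengths, found by exact backtracking search."""
    m = len(paths)
    conflict = [[bool(paths[i] & paths[j]) for j in range(m)] for i in range(m)]

    def colorable(k):
        color = [0] * m
        def assign(i):
            if i == m:
                return True
            for c in range(1, k + 1):
                if all(not (conflict[i][j] and color[j] == c) for j in range(i)):
                    color[i] = c
                    if assign(i + 1):
                        return True
            color[i] = 0
            return False
        return assign(0)

    k = 1
    while not colorable(k):
        k += 1
    return k

# A hypothetical path system for C_4: the clockwise arc for each of the
# C(4, 2) = 6 node pairs.
n = 4
system = [cycle_path_edges(n, u, v) for u, v in combinations(range(n), 2)]
print(packing_number(system))
```

For this system, the four paths through edge {1,2} conflict pairwise, so four wavelengths are needed; $\Phi(C_4)$ itself is the minimum over all path systems, which a better-chosen system may attain.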
- Jul 30 2015 cs.GT arXiv:1507.08042v1Designing revenue optimal auctions for selling an item to $n$ symmetric bidders is a fundamental problem in mechanism design. Myerson (1981) shows that the second price auction with an appropriate reserve price is optimal when bidders' values are drawn i.i.d. from a known regular distribution. A cornerstone in the prior-independent revenue maximization literature is a result by Bulow and Klemperer (1996) showing that the second price auction without a reserve achieves $(n-1)/n$ of the optimal revenue in the worst case. We construct a randomized mechanism that strictly outperforms the second price auction in this setting. Our mechanism inflates the second highest bid with a probability that varies with $n$. For two bidders we improve the performance guarantee from $0.5$ to $0.512$ of the optimal revenue. We also resolve a question in the design of revenue optimal mechanisms that have access to a single sample from an unknown distribution. We show that a randomized mechanism strictly outperforms all deterministic mechanisms in terms of worst case guarantee.
- In this work, we maximize the secrecy rate of a wireless-powered untrusted relay network by jointly designing the power splitting (PS) ratio and relay beamforming with the proposed global optimal algorithm (GOA) and local optimal algorithm (LOA). Different from the literature, the artificial noise (AN) sent by the destination not only degrades the channel condition of the eavesdropper to improve the secrecy rate, but also becomes a new source of energy powering the untrusted relay via PS. Hence, exploiting AN in this way offers higher economic benefit and efficiency than approaches in the literature. Simulation results show that LOA achieves secrecy rate performance close to that of GOA, but with less computation time.
- Feb 17 2015 cs.GR arXiv:1502.04232v1 We present a multi-scale approach to sketch-based shape retrieval. It is based on a novel multi-scale shape descriptor called Pyramid-of-Parts, which encodes the features and spatial relationship of the semantic parts of query sketches. The same descriptor can also be used to represent 2D projected views of 3D shapes, allowing effective matching of query sketches with 3D shapes across multiple scales. Experimental results show that the proposed method outperforms the state-of-the-art method, whether the sketch segmentation information is obtained manually or automatically by considering each stroke as a semantic part.
- In this paper we give a partial shift version of user-irrepressible sequence sets and conflict-avoiding codes. By means of disjoint difference sets, we obtain an infinite number of such user-irrepressible sequence sets whose lengths are shorter than known results in general. Subsequently, the newly defined partially conflict-avoiding codes are discussed.
- Let ${\cal C}$ be a $q$-ary code of length $n$ and size $M$, and ${\cal C}(i) = \{{\bf c}(i) \ | \ {\bf c}=({\bf c}(1), {\bf c}(2), \ldots, {\bf c}(n))^{T} \in {\cal C}\}$ be the set of $i$th coordinates of ${\cal C}$. The descendant code of a sub-code ${\cal C}^{'} \subseteq {\cal C}$ is defined to be ${\cal C}^{'}(1) \times {\cal C}^{'}(2) \times \cdots \times {\cal C}^{'}(n)$. In this paper, we introduce a multimedia analogue of codes with the identifiable parent property (IPP), called multimedia IPP codes or $t$-MIPPC$(n, M, q)$, so that given the descendant code of any sub-code ${\cal C}^{'}$ of a multimedia $t$-IPP code ${\cal C}$, one can always identify, as IPP codes do in the generic digital scenario, at least one codeword in ${\cal C}^{'}$. We first derive a general upper bound on the size $M$ of a multimedia $t$-IPP code, and then investigate multimedia $3$-IPP codes in more detail. We characterize a multimedia $3$-IPP code of length $2$ in terms of a bipartite graph and a generalized packing, respectively. By means of these combinatorial characterizations, we further derive a tight upper bound on the size of a multimedia $3$-IPP code of length $2$, and construct several infinite families of (asymptotically) optimal multimedia $3$-IPP codes of length $2$.
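The descendant-code definition above can be illustrated directly (a minimal sketch; the length-3 binary sub-code here is a made-up example, not one of the paper's constructions):

```python
from itertools import product

def descendant_code(subcode):
    """Descendant code of a sub-code C': the Cartesian product
    C'(1) x C'(2) x ... x C'(n), where C'(i) collects the i-th
    coordinates of the codewords in C'."""
    n = len(next(iter(subcode)))
    coords = [{c[i] for c in subcode} for i in range(n)]
    return set(product(*coords))

# A tiny binary sub-code of length 3 with two codewords.
C_prime = {(0, 1, 0), (1, 1, 1)}
print(sorted(descendant_code(C_prime)))
```

Here the coordinate sets are {0,1}, {1}, {0,1}, so the descendant code contains four words, including the two "hybrid" descendants (0,1,1) and (1,1,0) that belong to neither original codeword.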
- We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution, and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions; however, they do require the value functions to satisfy a standard single-crossing property and a concavity-type condition.
- Networked sensing, where the goal is to perform complex inference using a large number of inexpensive and decentralized sensors, has become an increasingly attractive research topic due to its applications in wireless sensor networks and the Internet of Things. To reduce the communication, sensing and storage complexity, this paper proposes a simple sensing and estimation framework to faithfully recover the principal subspace of high-dimensional data streams using a collection of binary measurements from distributed sensors, without transmitting the whole data. The binary measurements are designed to indicate comparison outcomes of aggregated energy projections of the data samples over pairs of randomly selected directions. When the covariance matrix is a low-rank matrix, we propose a spectral estimator that recovers the principal subspace of the covariance matrix as the subspace spanned by the top eigenvectors of a properly designed surrogate matrix, which is provably accurate as soon as the number of binary measurements is sufficiently large. An adaptive rank selection strategy based on soft thresholding is also presented. Furthermore, we propose a tailored spectral estimator for the case where the covariance matrix is additionally Toeplitz, and show that reliable estimation can be obtained from a substantially smaller number of binary measurements. Our results hold even when a constant fraction of the binary measurements is randomly flipped. Finally, we develop a low-complexity online algorithm to track the principal subspace when new measurements arrive sequentially. Numerical examples are provided to validate the proposed approach.
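A minimal sketch of the one-bit sensing step described above, assuming each binary measurement simply compares aggregated energy projections along a randomly selected pair of directions (the aggregation details, surrogate matrix, and estimators are the paper's; everything below is illustrative):

```python
import math
import random

def unit_direction(d, rng):
    """A random unit vector in R^d (one of the randomly selected directions)."""
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    nrm = math.sqrt(sum(x * x for x in v))
    return [x / nrm for x in v]

def binary_measurement(samples, a, b):
    """1 if the aggregated energy along direction a exceeds that along b."""
    def energy(u):
        return sum(sum(ui * xi for ui, xi in zip(u, x)) ** 2 for x in samples)
    return int(energy(a) > energy(b))

rng = random.Random(0)
d = 5
# Rank-1 data stream: all energy lies along the first coordinate axis.
samples = [[rng.gauss(0.0, 1.0)] + [0.0] * (d - 1) for _ in range(200)]

a, b = unit_direction(d, rng), unit_direction(d, rng)
bit = binary_measurement(samples, a, b)   # one transmitted bit, 0 or 1

# Sanity check: the axis carrying all the energy always wins the comparison.
e1 = [1.0, 0.0, 0.0, 0.0, 0.0]
e2 = [0.0, 1.0, 0.0, 0.0, 0.0]
print(binary_measurement(samples, e1, e2))
```

Each sensor needs to transmit only such bits rather than the raw data, which is the communication saving the abstract refers to.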
- Jun 09 2014 cs.GT arXiv:1406.1571v1Crémer and McLean [1985] showed that, when buyers' valuations are drawn from a correlated distribution, an auction with full knowledge on the distribution can extract the full social surplus. We study whether this phenomenon persists when the auctioneer has only incomplete knowledge of the distribution, represented by a finite family of candidate distributions, and has sample access to the real distribution. We show that the naive approach which uses samples to distinguish candidate distributions may fail, whereas an extended version of the Crémer-McLean auction simultaneously extracts full social surplus under each candidate distribution. With an algebraic argument, we give a tight bound on the number of samples needed by this auction, which is the difference between the number of candidate distributions and the dimension of the linear space they span.
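The sample bound stated at the end can be illustrated numerically (a sketch with hypothetical candidate distributions): the number of samples needed equals the number of candidates minus the dimension of the linear space they span.

```python
# Rank via Gaussian elimination over floats, with a small tolerance.
def rank(rows, eps=1e-9):
    rows = [list(r) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][c]) > eps), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][c]) > eps:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# Three hypothetical candidates over a 3-point signal space; the third is
# a mixture of the first two, so the span has dimension 2.
candidates = [
    (0.5, 0.3, 0.2),
    (0.2, 0.5, 0.3),
    (0.35, 0.4, 0.25),  # average of the two above
]
samples_needed = len(candidates) - rank(candidates)
print(samples_needed)
```

With three candidates spanning a 2-dimensional space, one sample suffices under the bound; if the candidates were linearly independent, zero samples would be needed.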
- Apr 09 2014 cs.GT arXiv:1404.2041v2We study combinatorial auctions where each item is sold separately but simultaneously via a second price auction. We ask whether it is possible to efficiently compute in this game a pure Nash equilibrium with social welfare close to the optimal one. We show that when the valuations of the bidders are submodular, in many interesting settings (e.g., constant number of bidders, budget additive bidders) computing an equilibrium with good welfare is essentially as easy as computing, completely ignoring incentives issues, an allocation with good welfare. On the other hand, for subadditive valuations, we show that computing an equilibrium requires exponential communication. Finally, for XOS (a.k.a. fractionally subadditive) valuations, we show that if there exists an efficient algorithm that finds an equilibrium, it must use techniques that are very different from our current ones.
- Oct 10 2013 cs.SY arXiv:1310.2514v4In this paper, we consider multi-dimensional maximal cost-bounded reachability probability over continuous-time Markov decision processes (CTMDPs). Our major contributions are as follows. Firstly, we derive an integral characterization which states that the maximal cost-bounded reachability probability function is the least fixed point of a system of integral equations. Secondly, we prove that the maximal cost-bounded reachability probability can be attained by a measurable deterministic cost-positional scheduler. Thirdly, we provide a numerical approximation algorithm for maximal cost-bounded reachability probability. We present these results under the setting of both early and late schedulers.
- Understanding the query complexity for testing linear-invariant properties has been a central open problem in the study of algebraic property testing. Triangle-freeness in Boolean functions is a simple property whose testing complexity is unknown. Three Boolean functions $f_1$, $f_2$ and $f_3: \mathbb{F}_2^k \to \{0, 1\}$ are said to be triangle free if there is no $x, y \in \mathbb{F}_2^k$ such that $f_1(x) = f_2(y) = f_3(x + y) = 1$. This property is known to be strongly testable (Green 2005), but the number of queries needed is upper-bounded only by a tower of twos whose height is polynomial in $1/\epsilon$, where $\epsilon$ is the distance between the tested function triple and triangle-freeness, i.e., the minimum fraction of function values that need to be modified to make the triple triangle free. A lower bound of $(1 / \epsilon)^{2.423}$ for any one-sided tester was given by Bhattacharyya and Xie (2010). In this work we improve this bound to $(1 / \epsilon)^{6.619}$. Interestingly, we prove this by way of a combinatorial construction called \emph{uniquely solvable puzzles} that was at the heart of Coppersmith and Winograd's renowned matrix multiplication algorithm.
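The triangle-freeness definition is easy to check exhaustively for small $k$ (a brute-force sketch over $\mathbb{F}_2^k$, unrelated to the testers discussed above):

```python
from itertools import product

def xor(x, y):
    """Coordinate-wise addition in F_2^k."""
    return tuple(a ^ b for a, b in zip(x, y))

def is_triangle_free(f1, f2, f3, k):
    """True iff no x, y in F_2^k satisfy f1(x) = f2(y) = f3(x + y) = 1."""
    points = list(product((0, 1), repeat=k))
    return not any(f1(x) and f2(y) and f3(xor(x, y))
                   for x in points for y in points)

def indicator(v):
    return lambda x: int(x == v)

k = 2
e = (1, 0)
zero = (0, 0)
# e + e = 0 != e, so three copies of the indicator of e form no triangle:
print(is_triangle_free(indicator(e), indicator(e), indicator(e), k))
# but pointing f3 at 0 creates the triangle (x, y) = (e, e):
print(is_triangle_free(indicator(e), indicator(e), indicator(zero), k))
```

The exhaustive check costs $4^k$ evaluations, which is exactly why nontrivial query bounds for testers are the interesting question.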
- May 04 2013 cs.GT arXiv:1305.0598v1 We study the design of Bayesian incentive compatible mechanisms in single parameter domains, for the objective of optimizing social efficiency as measured by social cost. In the problems we consider, a group of participants compete to receive service from a mechanism that can provide such services at a cost. The mechanism wishes to choose which agents to serve in order to maximize social efficiency, but is not willing to suffer an expected loss: the agents' payments should cover the cost of service in expectation. We develop a general method for converting arbitrary approximation algorithms for the underlying optimization problem into Bayesian incentive compatible mechanisms that are cost-recovering in expectation. In particular, we give polynomial time black-box reductions from the mechanism design problem to the problem of designing a social cost minimization algorithm without incentive constraints. Our reduction increases the expected social cost of the given algorithm by a factor of O(log(min(n, h))), where n is the number of agents and h is the ratio between the highest and lowest nonzero valuations in the support. We also provide a lower bound illustrating that this inflation of the social cost is essential: no BIC cost-recovering mechanism can achieve an approximation factor better than \Omega(log(n)) or \Omega(log(h)) in general. Our techniques extend to show that a certain class of truthful algorithms can be made cost-recovering in the non-Bayesian setting, in such a way that the approximation factor degrades by at most O(log(min(n, h))). This is an improvement over previously-known constructions with inflation factor O(log n).
- Jan 04 2013 cs.GT arXiv:1301.0401v1We study simple and approximately optimal auctions for agents with a particular form of risk-averse preferences. We show that, for symmetric agents, the optimal revenue (given a prior distribution over the agent preferences) can be approximated by the first-price auction (which is prior independent), and, for asymmetric agents, the optimal revenue can be approximated by an auction with simple form. These results are based on two technical methods. The first is for upper-bounding the revenue from a risk-averse agent. The second gives a payment identity for mechanisms with pay-your-bid semantics.
- We consider the problem of approximating the probability mass of the set of timed paths under a continuous-time Markov chain (CTMC) that are accepted by a deterministic timed automaton (DTA). As opposed to several existing works on this topic, we consider DTA with multiple clocks. Our key contribution is an algorithm to approximate these probabilities using finite difference methods. An error bound is provided which indicates the approximation error. The stepping stones towards this result include rigorous proofs for the measurability of the set of accepted paths and the integral-equation system characterizing the acceptance probability, and a differential characterization for the acceptance probability.
- Sep 24 2012 cs.GT arXiv:1209.4703v1Simultaneous item auctions are simple procedures for allocating items to bidders with potentially complex preferences over different item sets. In a simultaneous auction, every bidder submits bids on all items simultaneously. The allocation and prices are then resolved for each item separately, based solely on the bids submitted on that item. Such procedures occur in practice (e.g. eBay) but are not truthful. We study the efficiency of Bayesian Nash equilibrium (BNE) outcomes of simultaneous first- and second-price auctions when bidders have complement-free (a.k.a. subadditive) valuations. We show that the expected social welfare of any BNE is at least 1/2 of the optimal social welfare in the case of first-price auctions, and at least 1/4 in the case of second-price auctions. These results improve upon the previously-known logarithmic bounds, which were established by [Hassidim, Kaplan, Mansour and Nisan '11] for first-price auctions and by [Bhawalkar and Roughgarden '11] for second-price auctions.
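The per-item resolution rule described above is simple enough to sketch directly (a minimal illustration with made-up bidders and bids; the equilibrium analysis is the paper's contribution, not this snippet's):

```python
# Simultaneous second-price auctions: each item is resolved independently,
# going to its highest bidder at the second-highest bid on that item.
def resolve_item(bids):
    """bids: list of (bidder, amount); returns (winner, price)."""
    ranked = sorted(bids, key=lambda t: t[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

def simultaneous_auction(bid_profile):
    """bid_profile maps item -> list of (bidder, amount) bids on that item."""
    return {item: resolve_item(bids) for item, bids in bid_profile.items()}

profile = {
    "A": [("alice", 5.0), ("bob", 3.0)],
    "B": [("alice", 1.0), ("bob", 4.0), ("carol", 2.0)],
}
print(simultaneous_auction(profile))
```

Note that each item's outcome depends solely on the bids submitted on that item, which is what makes the procedure simple and non-truthful for bidders with preferences over item sets.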
- Jun 18 2012 cs.GT arXiv:1206.3541v3 The intuition that profit is optimized by maximizing marginal revenue is a guiding principle in microeconomics. In the classical auction theory for agents with linear utility and single-dimensional preferences, Bulow and Roberts (1989) show that the optimal auction of Myerson (1981) is in fact optimizing marginal revenue. In particular, Myerson's virtual values are exactly the derivative of an appropriate revenue curve. This paper considers mechanism design in environments where the agents have multi-dimensional and non-linear preferences. Understanding good auctions for these environments is considered to be the main challenge in Bayesian optimal mechanism design. In these environments maximizing marginal revenue may not be optimal, and there is sometimes no direct way to implement marginal revenue maximization. Our contributions are threefold: we characterize the settings for which marginal revenue maximization is optimal (by identifying an important condition that we call revenue linearity), we give simple procedures for implementing marginal revenue maximization in general, and we show that marginal revenue maximization is approximately optimal. Our approximation factor smoothly degrades in a term that quantifies how far the environment is from ideal (where marginal revenue maximization is optimal). Because the marginal revenue mechanism is optimal for single-dimensional agents, our generalization immediately and approximately extends many results for single-dimensional agents. One of the biggest open questions in Bayesian algorithmic mechanism design is developing methodologies that are not brute-force in the size of the agent type space. Our methods identify a subproblem that, e.g., for unit-demand agents with values drawn from product distributions, enables approximation mechanisms that are polynomial in the dimension.
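The Bulow-Roberts identity cited above ("virtual values are exactly the derivative of an appropriate revenue curve") can be checked numerically for a single bidder with v ~ Uniform[0, 1], where F(v) = v, the revenue curve is R(q) = q · F^{-1}(1 - q) = q(1 - q), and Myerson's virtual value is φ(v) = v - (1 - F(v))/f(v) = 2v - 1:

```python
def revenue_curve(q):
    """R(q) = q * F^{-1}(1 - q) for F(v) = v on [0, 1]."""
    return q * (1 - q)

def virtual_value(v):
    """phi(v) = v - (1 - F(v)) / f(v) = 2v - 1 for Uniform[0, 1]."""
    return 2 * v - 1

def derivative(f, x, h=1e-6):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

# R'(q) equals the virtual value at the quantile q, i.e. at v = 1 - q.
for q in (0.1, 0.25, 0.5, 0.9):
    assert abs(derivative(revenue_curve, q) - virtual_value(1 - q)) < 1e-6

# Myerson's reserve solves phi(v) = 0: v = 1/2, the peak of the revenue curve.
print(virtual_value(0.5))
```

The zero of the virtual value (v = 1/2 here) is both the optimal reserve price and the maximizer of the revenue curve, which is the "marginal revenue" view in miniature.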
- Mar 23 2012 cs.GT arXiv:1203.5099v1 We study an abstract optimal auction problem for a single good or service. This problem includes environments where agents have budgets, risk preferences, or multi-dimensional preferences over several possible configurations of the good (furthermore, it allows an agent's budget and risk preference to be known only privately to the agent). These are the main challenge areas for auction theory. A single-agent problem is to optimize a given objective subject to a constraint on the maximum probability with which each type is allocated, a.k.a., an allocation rule. Our approach is a reduction from the multi-agent mechanism design problem to a collection of single-agent problems. We focus on maximizing revenue, but our results can be applied to other objectives (e.g., welfare). An optimal multi-agent mechanism can be computed by a linear/convex program on interim allocation rules by simultaneously optimizing several single-agent mechanisms subject to joint feasibility of the allocation rules. For single-unit auctions, Border (1991) showed that the space of all jointly feasible interim allocation rules for $n$ agents is a $T$-dimensional convex polytope which can be specified by $2^T$ linear constraints, where $T$ is the total number of all agents' types. Consequently, efficiently solving the mechanism design problem requires a separation oracle for the feasibility conditions and also an algorithm for ex-post implementation of the interim allocation rules. We show that the polytope of jointly feasible interim allocation rules is the projection of a higher dimensional polytope which can be specified by only $O(T^2)$ linear constraints. Furthermore, our proof shows that finding a preimage of the interim allocation rules in the higher dimensional polytope immediately gives an ex-post implementation.
- Mar 19 2012 cs.DL arXiv:1203.3611v1 The literature-based knowledge discovery method was introduced by Swanson in 1986, when he hypothesized a connection between Raynaud's phenomenon and dietary fish oil; the field of literature-based discovery (LBD) was born from that work. Over the subsequent two decades, LBD research has attracted scientists from information science, computer science, biomedical science, and other fields, and has become a part of knowledge discovery and text mining. This paper summarizes recent developments in LBD in two parts, methodology research and applied research. Lastly, some open problems are pointed out as future research directions.
- Nov 16 2010 cs.GT arXiv:1011.3232v2This short note exhibits a truthful-in-expectation $O(\frac {\log m} {\log \log m})$-approximation mechanism for combinatorial auctions with subadditive bidders that uses polynomial communication.
- Nov 11 2010 cs.GT arXiv:1011.2413v1We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We observe that there exists an algorithm that finds the optimal randomized mechanism that runs in time polynomial in the size of the support. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS'02], there exists a polynomial time truthful in expectation mechanism that provides a $(\frac 3 2+\epsilon)$-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial time deterministic truthful mechanism that guarantees $\frac 5 3$ approximation to the revenue achievable by an optimal deterministic truthful mechanism. We show that the $\frac 5 3$-approximation mechanism provides the same approximation ratio also with respect to the optimal truthful-in-expectation mechanism. This shows that the performance gap between truthful-in-expectation and deterministic mechanisms is relatively small. En route, we solve an open question of Mehta and Vazirani [EC'04]. Finally, we extend some of our results to the multi-item case, and show how to compute the optimal truthful-in-expectation mechanisms for bidders with more complex valuations.
- This paper is concerned with the error performance analysis of binary differential phase shift keying with differential detection over the nonselective, Rayleigh fading channel with combining diversity reception. Space antenna diversity reception is assumed. The diversity branches are independent, but have nonidentically distributed statistics. The fading process in each branch is assumed to have an arbitrary Doppler spectrum with arbitrary Doppler bandwidth. Both optimum diversity reception and suboptimum diversity reception are considered. Previously available results apply only to the case of first- and second-order diversity. Our results are more general in that the order of diversity is arbitrary. Moreover, the bit error probability (BEP) result is obtained in an exact, closed-form expression which shows the behavior of the BEP as an explicit function of the one-bit-interval fading correlation coefficient at the matched filter output, the mean signal-to-noise ratio per bit per branch, and the order of diversity. A simple, more easily computable Chernoff bound on the BEP of the optimum diversity detector is also derived.
- Jun 29 2010 cs.RO arXiv:1006.5226v1 Due to the flexibility and adaptability of humans, manual handling work is still very important in industry, especially for assembly and maintenance work. A well-designed work operation can improve work efficiency and quality, enhance safety, and lower cost. Most traditional methods for work system analysis need a physical mock-up and are time consuming. Digital mock-up (DMU) and digital human modeling (DHM) techniques have been developed to assist ergonomic design and evaluation for a specific worker population (e.g., the 95th percentile); however, operation adaptability and adjustability for a specific individual are not considered enough. In this study, a new framework based on motion tracking and digital human simulation techniques is proposed for motion-time analysis of manual operations. A motion tracking system tracks a worker's operation while he/she conducts a manual handling task. The motion data are transferred to a simulation computer for real-time digital human simulation. The data are also used for motion type recognition and analysis, either online or offline, for objective work efficiency evaluation and subjective work task evaluation. Methods for automatic motion recognition and analysis are presented. Constraints and limitations of the proposed method are discussed.
- Jul 14 2009 cs.GT arXiv:0907.1948v2 If a two-player social welfare maximization problem does not admit a PTAS, we prove that any maximal-in-range truthful mechanism that runs in polynomial time cannot achieve an approximation factor better than 1/2. Moreover, for the k-player version of the same problem, the hardness of approximation improves to 1/k under the same two-player hardness assumption. (We note that 1/k is achievable by a trivial deterministic maximal-in-range mechanism.) This hardness result encompasses not only deterministic maximal-in-range mechanisms, but also all universally-truthful randomized maximal-in-range mechanisms, as well as a class of strictly more powerful truthful-in-expectation randomized mechanisms recently introduced by Dobzinski and Dughmi. Our result applies to any class of valuation functions that satisfies some minimal closure properties. These properties are satisfied by the valuation functions in all well-studied APX-hard social welfare maximization problems, such as coverage, submodular, and subadditive valuations. We also prove a stronger result for universally-truthful maximal-in-range mechanisms. Namely, even for the class of budgeted additive valuations, which admits an FPTAS, no such mechanism can achieve an approximation factor better than 1/k in polynomial time.
- This paper is concerned with the optimum diversity receiver structure and its performance analysis for differential phase shift keying (DPSK) with differential detection over nonselective, independent, nonidentically distributed, Rayleigh fading channels. The fading process in each branch is assumed to have an arbitrary Doppler spectrum with arbitrary Doppler bandwidth, and each branch has a distinct, asymmetric fading power spectral density characteristic. Using 8-DPSK as an example, the average bit error probability (BEP) of the optimum diversity receiver is obtained by calculating the BEP for each of the three individual bits. The BEP results derived are given in exact, explicit, closed-form expressions which show clearly the behavior of the performance as a function of various system parameters.