results for au:Jin_K in:cs

- We consider the problem of finding the maximum area parallelogram (MAP) inside a given convex polygon. Our main result is an algorithm that computes the MAP in an $n$-sided polygon in $O(n^2)$ time. Achieving this running time requires proving several new structural properties of the MAP. Our algorithm in fact computes all the locally maximal area parallelograms (LMAPs). In addition to the algorithm, we prove that the LMAPs interleave one another, so the number of LMAPs is bounded by $O(n)$. We discuss applications of our result to, among others, the problem of computing the maximum area centrally-symmetric convex body (MAC) inside a convex polygon, and the simplest case of the Heilbronn Triangle Problem.
- Sep 07 2017 cs.CV arXiv:1709.01809v1 We present a new method for image reconstruction that replaces the projector in projected gradient descent (PGD) with a convolutional neural network (CNN). CNNs trained as high-dimensional (image-to-image) regressors have recently been used to efficiently solve inverse problems in imaging. However, these approaches lack a feedback mechanism to enforce that the reconstructed image is consistent with the measurements. This is crucial for inverse problems, and more so in biomedical imaging, where the reconstructions are used for diagnosis. In our scheme, the gradient descent enforces measurement consistency, while the CNN recursively projects the solution closer to the space of desired reconstruction images. We provide a formal framework to ensure that classical PGD converges to a local minimizer of a non-convex constrained least-squares problem. When the projector is replaced with a CNN, we propose a relaxed PGD, which always converges. Finally, we propose a simple scheme to train a CNN to act like a projector. Our experiments on sparse-view Computed Tomography (CT) reconstruction, for both noiseless and noisy measurements, show an improvement over the total-variation (TV) method and a recent CNN-based technique.
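The relaxed PGD idea above can be sketched in a few lines. This is a minimal illustration, not the paper's method: `relaxed_pgd` is a hypothetical name, the identity function stands in for the trained CNN projector, and the relaxation-shrinking rule is only a schematic version of the convergence safeguard the abstract alludes to.

```python
import numpy as np

def relaxed_pgd(H, y, project, step, alpha0=1.0, iters=500):
    """Sketch of relaxed projected gradient descent: a gradient step on
    0.5*||Hx - y||^2 enforces measurement consistency, `project` (a CNN
    in the paper; any map here) pulls the iterate toward the image set,
    and the relaxation weight alpha is halved whenever the iterate
    differences stop shrinking, to force convergence."""
    x = H.T @ y                  # simple backprojection as initialization
    alpha, prev = alpha0, None
    for _ in range(iters):
        grad = H.T @ (H @ x - y)             # gradient of the data term
        z = project(x - step * grad)         # "CNN" projection step
        x_new = alpha * z + (1 - alpha) * x  # relaxed update
        if prev is not None and np.linalg.norm(x_new - x) > np.linalg.norm(x - prev):
            alpha *= 0.5                     # shrink relaxation weight
        prev, x = x, x_new
    return x
```

With the identity projector this reduces to plain gradient descent on the least-squares term, which is a convenient sanity check of the data-consistency half of the scheme.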
- Jul 14 2017 cs.CG arXiv:1707.04071v3 The widely known linear-time algorithm for computing the maximum area triangle in a convex polygon was found to be incorrect by Keikha et al. [arXiv:1705.11035]. We present an alternative algorithm in this paper. Compared with the only previously known correct solution, ours is much simpler and more efficient. Moreover, it easily extends to computing the minimum area triangle enclosing a convex polygon. Our new approach also appears powerful for other related problems.
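A useful baseline when checking any fast algorithm for this problem: a maximum-area triangle inscribed in a convex polygon can be assumed to have all three corners at polygon vertices, so an $O(n^3)$ enumeration of vertex triples gives the exact answer. The sketch below is that brute force, not the paper's algorithm.

```python
from itertools import combinations

def max_area_triangle(poly):
    """Exact O(n^3) brute force over vertex triples of a convex polygon,
    given as a list of (x, y) pairs. Useful as a correctness oracle for
    faster (e.g., linear-time) candidate algorithms."""
    def area(a, b, c):
        # half the absolute cross product of edge vectors ab and ac
        return abs((b[0] - a[0]) * (c[1] - a[1])
                   - (c[0] - a[0]) * (b[1] - a[1])) / 2
    return max(area(a, b, c) for a, b, c in combinations(poly, 3))
```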
- In this paper, we revisit the lexical and modular factorizations of the middle level graph. We provide new and simple definitions, prove new properties and formulas, and design optimal algorithms for computing these factorizations. In addition, we give interesting applications of these factorizations to some hat-guessing-type games.
- For effective treatment of Alzheimer's disease (AD), it is important to identify subjects who are most likely to exhibit rapid cognitive decline. Here, we developed a novel framework based on a deep convolutional neural network that predicts future cognitive decline in mild cognitive impairment (MCI) patients using fluorodeoxyglucose and florbetapir positron emission tomography (PET). The network relies only on baseline PET studies of AD and normal subjects as the training dataset; feature extraction and complicated image preprocessing, including nonlinear warping, are unnecessary for our approach. The prediction accuracy of 84.2% for conversion to AD in MCI patients outperformed conventional feature-based quantification approaches. ROC analyses revealed that the performance of the CNN-based approach was significantly higher than that of the conventional quantification methods (p < 0.05). Output scores of the network were strongly correlated with longitudinal changes in cognitive measurements. These results show the feasibility of deep learning as a tool for predicting disease outcome from brain images.
- Feb 08 2017 cs.CG arXiv:1702.01836v1 We study approximation algorithms for the following geometric version of the maximum coverage problem: let $\mathcal{P}$ be a set of $n$ weighted points in the plane, and let $D$ be a planar object, such as a rectangle or a disk. We want to place $m$ copies of $D$ so that the total weight of the points in $\mathcal{P}$ covered by these copies is maximized. For any fixed $\varepsilon>0$, we present efficient approximation schemes that find a $(1-\varepsilon)$-approximation to the optimal solution. In particular, for $m=1$ and the special case where $D$ is a rectangle, our algorithm runs in $O(n\log (\frac{1}{\varepsilon}))$ time, improving on the previous result. For $m>1$ and the rectangular case, our algorithm runs in $O(\frac{n}{\varepsilon}\log (\frac{1}{\varepsilon})+\frac{m}{\varepsilon}\log m +m(\frac{1}{\varepsilon})^{O(\min(\sqrt{m},\frac{1}{\varepsilon}))})$ time. For a more general class of shapes (including disks and polygons with $O(1)$ edges), our algorithm runs in $O(n(\frac{1}{\varepsilon})^{O(1)}+\frac{m}{\varepsilon}\log m + m(\frac{1}{\varepsilon})^{O(\min(m,\frac{1}{\varepsilon^2}))})$ time.
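For intuition on the $m=1$ rectangle case, an exact cubic-time brute force exists: for an axis-parallel $w \times h$ rectangle placed by translation only, some optimal placement has its left edge and its bottom edge each touching an input point, so trying all candidate $(x, y)$ pairs suffices. The sketch below is that baseline, not the paper's $O(n\log\frac{1}{\varepsilon})$ scheme; the function name and point format are illustrative.

```python
from itertools import product

def max_coverage_rect(points, w, h):
    """Exact brute force for placing one axis-parallel w x h rectangle.
    `points` is a list of (x, y, weight) triples. Candidate placements
    put the rectangle's lower-left corner at (px, py) for every pair of
    input coordinates, and we return the best total covered weight."""
    xs = [x for x, _, _ in points]
    ys = [y for _, y, _ in points]
    best = 0.0
    for x0, y0 in product(xs, ys):
        covered = sum(wt for x, y, wt in points
                      if x0 <= x <= x0 + w and y0 <= y <= y0 + h)
        best = max(best, covered)
    return best
```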
- Compressed sensing provides a data-acquisition paradigm for sparse signals. Remarkably, it has been shown that practical algorithms provide robust recovery from noisy linear measurements acquired at a near-optimal sampling rate. In many real-world applications, a signal of interest is typically sparse not in the canonical basis but in a certain transform domain, such as wavelets or finite differences. The theory of compressed sensing has been extended to this analysis sparsity model, but known extensions are limited to specific choices of sensing matrix and sparsifying transform. In this paper, we propose a unified theory for robust recovery of sparse signals in a general transform domain by convex programming. In particular, our results apply to general acquisition and sparsity models and show how the number of measurements needed for recovery depends on properties of the measurement and sparsifying transforms. Moreover, we extend our results to scenarios where the atoms of the transform have varying incoherence parameters and the unknown signal exhibits a structured sparsity pattern. In particular, for partial Fourier recovery of signals sparse over a circulant transform, our main results suggest uniformly random sampling. Numerical results demonstrate that the variable-density random sampling suggested by our main results provides superior recovery performance over known sampling strategies.
- Nov 14 2016 cs.CV arXiv:1611.03679v1 In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems over the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise nonlinearity) when the normal operator ($H^*H$, the adjoint of $H$ times $H$) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network on sparse-view reconstruction (down to 50 views) of parallel-beam X-ray computed tomography, on synthetic phantoms as well as real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction on the more realistic phantoms and requires less than a second to reconstruct a 512 x 512 image on GPU.
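The key observation above can be verified numerically in a toy setting: when $H$ is a (circular) convolution, $H^*H$ is itself a convolution, i.e., a circulant matrix, so each unrolled iteration $x - H^*(Hx - y)$ is filtering plus pointwise steps. The helper below is illustrative, not from the paper.

```python
import numpy as np

def circ_conv_matrix(h, n):
    """n x n matrix representing circular convolution with filter h:
    column i is the filter circularly shifted down by i samples."""
    H = np.zeros((n, n))
    for i in range(n):
        for j, hj in enumerate(h):
            H[(i + j) % n, i] = hj
    return H

# H.T @ H is circulant (shift-invariant), hence also a convolution --
# with the autocorrelation of the filter h.
```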
- We investigate multi-round team competitions between two teams, where each team simultaneously selects one of its players in each round and each player can play at most once. The competition defines an extensive-form game with perfect recall and can be solved efficiently by standard methods. We are interested in the properties of the subgame perfect equilibria of this game. We first show that the uniformly random strategy is a subgame perfect equilibrium strategy for both teams when there are no redundant players (i.e., the number of players in each team equals the number of rounds of the competition). Second, a team can safely abandon its weak players if it has redundant players and the strength of players is transitive. We then focus on the more interesting case where there are redundant players and the strength of players is not transitive. In this case, we obtain several counterintuitive results. First, a player might help improve the payoff of its team even if it is dominated by the entire other team. We give a necessary condition for a dominated player to be useful. We also study the extent to which dominated players can increase the payoff. These results bring insights into playing and designing general team competitions.
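To see the no-redundant-players setting concretely, the expected number of round wins under uniformly random play by both teams can be computed by brute force over lineup orders; by symmetry it equals the average pairwise win probability times the number of rounds. This toy helper (hypothetical name, pairwise-probability model assumed) only illustrates the setting, not the equilibrium analysis.

```python
from itertools import permutations

def uniform_value(P):
    """Expected number of round wins for team A when both teams play a
    uniformly random lineup order. P[i][j] is the probability that A's
    player i beats B's player j; both teams field n players over n rounds."""
    n = len(P)
    perms = list(permutations(range(n)))
    total = 0.0
    for pa in perms:            # A's lineup order
        for pb in perms:        # B's lineup order
            total += sum(P[pa[r]][pb[r]] for r in range(n))
    return total / len(perms) ** 2
```

By symmetry the result is `sum of all P[i][j] / n`, independent of how rounds are matched up, which is consistent with uniform play being an equilibrium candidate in this case.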
- Dec 15 2015 cs.CG arXiv:1512.03897v7 We propose a novel geometric structure induced by a convex polygon $P$ with $n$ edges, called $Nest(P)$: an arrangement of $\Theta(n^2)$ segments, each parallel to an edge of $P$. This structure admits several interesting and nontrivial properties, which follow from two fundamental properties in geometry, namely convexity and parallelism. Moreover, we give a compelling application of this structure to the following geometric optimization problem: given a convex polygon $P$ with $n$ edges, compute the parallelograms in $P$ with maximal area. We design an $O(n\log^2n)$ time algorithm for computing all these parallelograms, improving over a previously known quadratic-time algorithm. Concretely, we show that $Nest(P)$ captures the essential nature of the maximal-area parallelograms, and the optimization problem reduces to answering $O(n)$ location queries on $Nest(P)$. Using a few nontrivial algorithmic tricks, we answer each of these queries in $O(\log^2n)$ time. This avoids an explicit construction of $Nest(P)$, which would take $\Omega(n^2)$ time.
- While the recent theory of compressed sensing provides an opportunity to overcome the Nyquist limit in recovering sparse signals, the solution approach usually takes the form of an inverse problem for the unknown signal and depends crucially on the specific signal representation. In this paper, we propose a drastically different two-step Fourier compressive sampling framework in the continuous domain that can be implemented as measurement-domain interpolation, after which signal reconstruction can be done using classical analytic reconstruction methods. The main idea originates from the fundamental duality between sparsity in the primary space and the low-rankness of a structured matrix in the spectral domain, which shows that a low-rank interpolator in the spectral domain can enjoy all the benefits of sparse recovery, with performance guarantees. Most notably, the proposed low-rank interpolation approach can be regarded as a generalization of recent spectral compressed sensing that recovers a large class of finite rate of innovation (FRI) signals at a near-optimal sampling rate. Moreover, in the case of cardinal representation, we show that the proposed low-rank interpolation benefits from inherent regularization and an optimal incoherence parameter. Using powerful dual certificates and the golfing scheme, we show that the new framework achieves a near-optimal sampling rate for a general class of FRI signals, and that the sampling rate can be further reduced for the class of cardinal splines. Numerical results using various types of FRI signals confirm that the proposed low-rank interpolation approach has a significantly better phase transition than conventional CS approaches.
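The sparsity/low-rankness duality above is easy to demonstrate in a toy case: a superposition of $k$ complex exponentials (an FRI-type signal) produces a Hankel matrix of rank exactly $k$, so missing samples can in principle be filled in by low-rank matrix completion. The sketch below only illustrates this rank property; the chosen dimensions and frequencies are arbitrary.

```python
import numpy as np

def hankel(x, p):
    """p x (n - p + 1) Hankel matrix with entry (i, j) = x[i + j]."""
    n = len(x)
    return np.array([x[i:i + n - p + 1] for i in range(p)])

n = 32
t = np.arange(n)
# signal with k = 3 spectral components -> Hankel rank 3
x = sum(np.exp(2j * np.pi * f * t) for f in (0.11, 0.23, 0.37))
rank = np.linalg.matrix_rank(hankel(x, 8), tol=1e-8)
```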
- Sparse + Low Rank Decomposition of Annihilating Filter-based Hankel Matrix for Impulse Noise Removal. Oct 20 2015 cs.CV arXiv:1510.05559v1 Recently, the so-called annihilating filter-based low-rank Hankel matrix (ALOHA) approach was proposed as a powerful image inpainting method. Based on the observation that smoothness or texture within an image patch corresponds to sparse spectral components in the frequency domain, ALOHA exploits the existence of annihilating filters and the associated rank-deficient Hankel matrices in the image domain to estimate missing pixels. Extending this idea, we propose a novel impulse noise removal algorithm that uses a sparse + low-rank decomposition of an annihilating filter-based Hankel matrix. The new approach, which we call robust ALOHA, is motivated by the observation that an image corrupted with impulse noise still has intact pixels; the impulse noise can therefore be modeled as a sparse component, whereas the underlying image can still be modeled by a low-rank Hankel structured matrix. To solve the sparse + low-rank decomposition problem, we propose an alternating direction method of multipliers (ADMM) approach whose initial factorized matrices come from the low-rank matrix fitting (LMaFit) algorithm. To adapt to local image statistics with distinct spectral distributions, robust ALOHA is applied patch by patch. Experimental results on two types of impulse noise - random-valued impulse noise and salt-and-pepper noise - for both single-channel and multi-channel color images demonstrate that robust ALOHA outperforms existing algorithms by up to 8 dB in peak signal-to-noise ratio (PSNR).
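The sparse + low-rank split at the heart of this approach can be sketched with a generic RPCA-style ADMM: singular-value thresholding for the low-rank part, soft thresholding for the sparse impulses, and a dual update on the constraint $L + S = M$. This is a stand-in for the paper's LMaFit-initialized factored solver, with illustrative parameter values, applied here to a plain matrix rather than a Hankel-structured one.

```python
import numpy as np

def sparse_plus_lowrank(M, lam=0.3, mu=1.0, iters=500):
    """Toy ADMM for min ||L||_* + lam*||S||_1 subject to L + S = M.
    Full SVDs are used for clarity; a factored solver avoids them."""
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        # L-update: singular-value thresholding of M - S + Y/mu
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1 / mu, 0)) @ Vt
        # S-update: entrywise soft thresholding of M - L + Y/mu
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Y += mu * (M - L - S)     # dual ascent on the constraint
    return L, S
```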
- Parallel MRI (pMRI) and compressed sensing MRI (CS-MRI) have been treated as two distinct reconstruction problems. Inspired by recent k-space interpolation methods, we propose an annihilating filter-based low-rank Hankel matrix approach (ALOHA) as a general framework for sparsity-driven k-space interpolation that unifies pMRI and CS-MRI. Specifically, our framework is based on the fundamental duality between transform-domain sparsity in the primary space and the low-rankness of a weighted Hankel matrix in the reciprocal space, which converts pMRI and CS-MRI into a k-space interpolation problem solved by structured matrix completion. Using theoretical results from the recent compressed sensing literature, we show that the sampling rates required by ALOHA can achieve the optimal rate. Experimental results on in vivo data for single-/multi-coil imaging as well as dynamic imaging confirm that the proposed method outperforms the state-of-the-art pMRI and CS-MRI methods.
- Dec 03 2013 cs.SC arXiv:1312.0462v1 We improve the local generic position method for isolating the real roots of a zero-dimensional bivariate polynomial system with two polynomials, and extend the method to general zero-dimensional polynomial systems. The method mainly involves resultant computation and real root isolation of univariate polynomial equations. The roots of the system have a linear univariate representation. The complexity of the method is $\tilde{O}_B(N^{10})$ in the bivariate case, where $N=\max(d,\tau)$ and $d$ and $\tau$ are upper bounds on the degree and the maximal coefficient bitsize of the input polynomials, respectively. The algorithm is certified with probability 1 in the multivariate case. Our implementation shows that the method is efficient, especially for bivariate polynomial systems.
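The resultant-plus-univariate-isolation pipeline can be shown in miniature: project the bivariate system onto one axis via a resultant, then isolate the real roots of the resulting univariate polynomial. SymPy stands in for the paper's certified routines here; the example system is illustrative.

```python
from sympy import symbols, resultant, real_roots, Poly

x, y = symbols('x y')
f = x**2 + y**2 - 5          # circle of radius sqrt(5)
g = x*y - 2                  # hyperbola
# eliminate y: rx vanishes exactly at x-coordinates of common roots
rx = resultant(f, g, y)      # x**4 - 5*x**2 + 4
roots_x = real_roots(Poly(rx, x))
```

The solutions of the system are $(\pm1, \pm2)$ and $(\pm2, \pm1)$ with matching signs, and the projection recovers their $x$-coordinates.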
- Apr 05 2012 cs.CG arXiv:1204.0905v1 In this paper, we give an algorithm to compute a certified $G^1$ rational parametric approximation of an algebraic space curve by extending the local generic position method for solving zero-dimensional polynomial systems to the case of dimension one. By certified, we mean that the approximation curve and the original curve have the same topology and that their Hausdorff distance is smaller than a given precision. Thus, the method also gives a new algorithm for computing the topology of algebraic space curves. The main advantage of the algorithm, inherited from the local generic position method, is that topology computation and approximation of a space curve are directly reduced to the same tasks for two plane curves. In particular, the error bound of the approximate space curve is obtained explicitly from the error bounds of the approximate plane curves. Nontrivial examples demonstrate the effectiveness of the method.