results for au:Jin_K in:cs

- For effective treatment of Alzheimer's disease (AD), it is important to identify subjects who are most likely to exhibit rapid cognitive decline. Herein, we developed a novel framework based on a deep convolutional neural network (CNN) which can predict future cognitive decline in mild cognitive impairment (MCI) patients using fluorodeoxyglucose and florbetapir positron emission tomography (PET). The architecture of the network relies only on baseline PET studies of AD and normal subjects as the training dataset. Feature extraction and complicated image preprocessing, including nonlinear warping, are unnecessary for our approach. The prediction accuracy of 84.2% for conversion to AD in MCI patients outperformed conventional feature-based quantification approaches. ROC analyses revealed that the performance of the CNN-based approach was significantly higher than that of the conventional quantification methods (p < 0.05). Output scores of the network were strongly correlated with the longitudinal change in cognitive measurements. These results show the feasibility of deep learning as a tool for predicting disease outcome using brain images.
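The ROC analysis mentioned above reduces to computing the area under the curve (AUC) from the network's output scores. A minimal sketch, with purely illustrative scores and labels (label 1 = MCI converter), using the rank-statistic identity AUC = P(positive score > negative score):

```python
# Minimal AUC computation via the Mann-Whitney U statistic.
# Scores and labels below are illustrative, not from the paper.
def roc_auc(scores, labels):
    """AUC = P(score of a positive > score of a negative); ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: converters (label 1) tend to receive higher output scores.
scores = [0.9, 0.8, 0.35, 0.6, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
print(roc_auc(scores, labels))  # 8/9 ≈ 0.889
```

The same statistic underlies significance tests comparing two classifiers' ROC curves.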
- Feb 08 2017 cs.CG arXiv:1702.01836v1: We study approximation algorithms for the following geometric version of the maximum coverage problem: Let $\mathcal{P}$ be a set of $n$ weighted points in the plane. Let $D$ represent a planar object, such as a rectangle or a disk. We want to place $m$ copies of $D$ such that the sum of the weights of the points in $\mathcal{P}$ covered by these copies is maximized. For any fixed $\varepsilon>0$, we present efficient approximation schemes that can find a $(1-\varepsilon)$-approximation to the optimal solution. In particular, for $m=1$ and for the special case where $D$ is a rectangle, our algorithm runs in time $O(n\log (\frac{1}{\varepsilon}))$, improving on the previous result. For $m>1$ and the rectangular case, our algorithm runs in $O(\frac{n}{\varepsilon}\log (\frac{1}{\varepsilon})+\frac{m}{\varepsilon}\log m +m(\frac{1}{\varepsilon})^{O(\min(\sqrt{m},\frac{1}{\varepsilon}))})$ time. For a more general class of shapes (including disks and polygons with $O(1)$ edges), our algorithm runs in $O(n(\frac{1}{\varepsilon})^{O(1)}+\frac{m}{\varepsilon}\log m + m(\frac{1}{\varepsilon})^{O(\min(m,\frac{1}{\varepsilon^2}))})$ time.
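For the $m=1$ rectangle case, a standard exact baseline (far slower than the paper's algorithm, but useful as a reference on small inputs) exploits the fact that some optimal axis-parallel $W \times H$ rectangle has its right and top edges each touching an input point, so only point-coordinate pairs need be tried. A sketch, with an illustrative point set:

```python
# Naive O(n^3) exact baseline for m = 1 and an axis-parallel W x H
# rectangle: some optimal placement has its right edge and top edge each
# passing through a point, so candidate corners come from point coords.
# (Reference only; the paper's scheme is a far faster approximation.)
def max_coverage_rect(points, W, H):
    best = 0.0
    xs = [x for x, y, w in points]
    ys = [y for x, y, w in points]
    for xr in xs:                      # candidate right edge
        for yt in ys:                  # candidate top edge
            total = sum(w for x, y, w in points
                        if xr - W <= x <= xr and yt - H <= y <= yt)
            best = max(best, total)
    return best

# Illustrative weighted points (x, y, weight); a unit square can cover
# the two heavy points near (2, 2) for total weight 6.
pts = [(0.0, 0.0, 2.0), (0.5, 0.5, 1.0), (2.0, 2.0, 5.0), (2.4, 2.3, 1.0)]
print(max_coverage_rect(pts, 1.0, 1.0))  # 6.0
```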
- Compressed sensing provided a new sampling paradigm for sparse signals. Remarkably, it has been shown that practical algorithms provide robust recovery from noisy linear measurements acquired at a near-optimal sample rate. In real-world applications, a signal of interest is typically sparse not in the canonical basis but in a certain transform domain, such as the wavelet or the finite difference domain. The theory of compressed sensing was extended to this analysis sparsity model. However, existing theoretical results for the analysis sparsity model have the following limitations: the derivation relies on particular choices of sensing matrix and sparsifying transform, and the sample complexity is significantly suboptimal compared to the analogous result in compressed sensing with the canonical sparsity model. In this paper, we propose a novel unified theory for robust recovery of sparse signals in a general transform domain using convex relaxation methods that overcomes the aforementioned limitations. Our results do not rely on any specific choice of sensing matrix or sparsifying transform and provide near-optimal sample complexity when the transforms satisfy certain conditions. In particular, our theory extends existing near-optimal sample complexity results to a wider class of sensing matrices and sparsifying transforms. We also provide extensions of our results to scenarios where the atoms in the transform have varying incoherence parameters and the unknown signal exhibits a structured sparsity pattern. Numerical results show that the proposed variable-density sampling provides superior recovery performance over previous works, in line with the presented theory.
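The analysis sparsity model can be made concrete with the finite-difference transform the abstract mentions: a piecewise-constant signal is dense in the canonical basis yet its finite differences are very sparse. A toy illustration (the example signal is mine, not from the paper):

```python
import numpy as np

# Analysis sparsity model: a piecewise-constant signal is dense in the
# canonical basis but sparse under the finite-difference sparsifying
# transform. (Toy illustration; the paper's theory covers general
# transforms and convex recovery guarantees, not implemented here.)
x = np.concatenate([np.full(20, 3.0), np.full(30, -1.0), np.full(14, 2.0)])
Psi_x = np.diff(x)                      # analysis (transform-domain) coefficients

canonical_sparsity = np.count_nonzero(x)       # 64 nonzeros
analysis_sparsity = np.count_nonzero(Psi_x)    # 2 nonzeros (the jumps)
print(canonical_sparsity, analysis_sparsity)
```

Convex relaxation then minimizes the $\ell_1$ norm of these transform coefficients subject to the measurements.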
- Nov 14 2016 cs.CV arXiv:1611.03679v1: In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator ($H^*H$, the adjoint of $H$ times $H$) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel-beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 x 512 image on GPU.
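The key observation above — that $H^*H$ is itself a convolution when $H$ is — can be checked numerically: a convolution operator commutes with shifts. A sketch using a circular convolution via the FFT, with an arbitrary illustrative filter:

```python
import numpy as np

# When the forward model H is a (circular) convolution, the normal
# operator H*H is also a convolution: in the Fourier domain it is just
# multiplication by |H(f)|^2. We verify shift-invariance numerically.
rng = np.random.default_rng(0)
h = rng.standard_normal(64)            # illustrative convolution filter
Hf = np.fft.fft(h)

def normal_op(x):
    # H*H x computed in the Fourier domain: multiply by |H(f)|^2.
    return np.real(np.fft.ifft(np.abs(Hf) ** 2 * np.fft.fft(x)))

x = rng.standard_normal(64)
# A convolution operator commutes with circular shifts:
lhs = normal_op(np.roll(x, 5))
rhs = np.roll(normal_op(x), 5)
print(np.allclose(lhs, rhs))  # True
```

This shift-invariance is exactly what lets unrolled iterations of $x \mapsto x - \eta H^*(Hx - y)$ be read as convolutional layers.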
- We investigate multi-round team competitions between two teams, where each team simultaneously selects one of its players in each round and each player can play at most once. The competition defines an extensive-form game with perfect recall and can be solved efficiently by standard methods. We are interested in the properties of the subgame perfect equilibria of this game. We first show that the uniformly random strategy is a subgame perfect equilibrium strategy for both teams when there are no redundant players (i.e., the number of players in each team equals the number of rounds of the competition). Secondly, a team can safely abandon its weak players if it has redundant players and the strength of players is transitive. We then focus on the more interesting case where there are redundant players and the strength of players is not transitive. In this case, we obtain several counterintuitive results. First of all, a player might help improve the payoff of its team even if it is dominated by the entire other team. We give a necessary condition for a dominated player to be useful. We also study the extent to which dominated players can increase the payoff. These results bring insights into playing and designing general team competitions.
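A quick numeric sanity check of the no-redundant-players claim: if team A plays uniformly at random, its schedule is a uniformly random matching, so its expected number of round wins equals $n \cdot \operatorname{mean}(P)$ no matter which fixed order team B uses. A sketch with an illustrative 3x3 win-probability matrix of my own:

```python
import itertools
import numpy as np

# P[i][j] = probability that A's player i beats B's player j
# (illustrative, intransitive-looking values).
P = np.array([[0.5, 0.9, 0.2],
              [0.1, 0.5, 0.7],
              [0.8, 0.3, 0.5]])
n = 3  # no redundant players: n players per team, n rounds

def expected_wins(order_b):
    # Team A plays a uniformly random order; average over all n! orders.
    perms = itertools.permutations(range(n))
    return np.mean([sum(P[a, b] for a, b in zip(pa, order_b))
                    for pa in perms])

# The expectation is the same for every order of team B:
for order_b in itertools.permutations(range(n)):
    assert np.isclose(expected_wins(order_b), n * P.mean())
print(n * P.mean())  # 1.5 for this matrix
```

This is the intuition behind uniform play being an equilibrium here: the opponent cannot exploit a schedule that is uniformly random.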
- Dec 15 2015 cs.CG arXiv:1512.03897v6: Given a convex polygon $P$ with $n$ edges, we consider the geometric optimization problem of computing the parallelograms in $P$ with maximal area. We design an $O(n\log^2n)$ time algorithm for computing all these parallelograms, which improves over a previously known quadratic time algorithm. To this end, we propose a novel geometric structure, called $\Nest(P)$, which is induced by $P$ and is an arrangement of $\Theta(n^2)$ segments, each of which is parallel to an edge of $P$. This structure admits several interesting properties, which follow from two fundamental properties in geometry, namely convexity and parallelism. The structure $\Nest(P)$ captures the essential nature of the maximal-area parallelograms, and the original optimization problem can be reduced to answering $O(n)$ location queries on $\Nest(P)$. Moreover, avoiding an explicit construction of $\Nest(P)$, which would take $\Omega(n^2)$ time, we answer each of these queries in $O(\log^2n)$ time.
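Any search over inscribed parallelograms rests on two elementary primitives: the area of a parallelogram from three consecutive corners (a cross product) and a point-in-convex-polygon test via half-plane checks. A sketch of just these helpers (illustrative only; the paper's $O(n\log^2 n)$ algorithm is far more involved):

```python
import numpy as np

def parallelogram_area(a, b, c):
    # Parallelogram with consecutive corners a, b, c (fourth is a + c - b);
    # its area is |cross(b - a, c - b)|.
    ab, cb = np.subtract(b, a), np.subtract(c, b)
    return abs(ab[0] * cb[1] - ab[1] * cb[0])

def in_convex_polygon(poly, p):
    # poly: CCW-ordered vertices; p is inside (or on the boundary) iff it
    # lies left of, or on, every directed edge.
    m = len(poly)
    for i in range(m):
        a, b = poly[i], poly[(i + 1) % m]
        if (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) < 0:
            return False
    return True

square = [(0, 0), (1, 0), (1, 1), (0, 1)]           # convex polygon P
corners = [(0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75)]
print(parallelogram_area(*corners[:3]),
      all(in_convex_polygon(square, c) for c in corners))  # 0.25 True
```

By convexity, checking the four corners suffices to certify the whole parallelogram lies in $P$.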
- While the recent theory of compressed sensing provides an opportunity to overcome the Nyquist limit in recovering sparse signals, a solution approach usually takes the form of an inverse problem for the unknown signal, which is crucially dependent on the specific signal representation. In this paper, we propose a drastically different two-step Fourier compressive sampling framework in the continuous domain that can be implemented as a measurement-domain interpolation, after which signal reconstruction can be done using classical analytic reconstruction methods. The main idea originates from the fundamental duality between sparsity in the primary space and the low-rankness of a structured matrix in the spectral domain, which shows that a low-rank interpolator in the spectral domain can enjoy all the benefits of sparse recovery with performance guarantees. Most notably, the proposed low-rank interpolation approach can be regarded as a generalization of recent spectral compressed sensing to recover a large class of finite-rate-of-innovation (FRI) signals at a near-optimal sampling rate. Moreover, for the case of cardinal representation, we show that the proposed low-rank interpolation benefits from inherent regularization and the optimal incoherence parameter. Using the powerful dual certificates and the golfing scheme, we show that the new framework still achieves the near-optimal sampling rate for a general class of FRI signal recovery, and that the sampling rate can be further reduced for the class of cardinal splines. Numerical results using various types of FRI signals confirm that the proposed low-rank interpolation approach has significantly better phase transitions than conventional CS approaches.
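The sparsity/low-rankness duality above can be seen in the simplest FRI example: samples of a sum of $k$ complex exponentials fill a Hankel matrix of rank exactly $k$, because a length-$(k+1)$ annihilating filter exists. A numerical sketch with illustrative frequencies:

```python
import numpy as np

# Samples of a sum of k = 3 complex exponentials (a simple FRI signal).
k = 3
freqs = np.array([0.11, 0.27, 0.41])       # illustrative frequencies
n = np.arange(32)
x = np.exp(2j * np.pi * np.outer(n, freqs)).sum(axis=1)

# Build the Hankel matrix H[i, j] = x[i + j]; its rank equals k because
# a length-(k+1) annihilating filter lies in its null space.
rows = 16
H = np.array([[x[i + j] for j in range(len(x) - rows + 1)]
              for i in range(rows)])
print(np.linalg.matrix_rank(H, tol=1e-8))  # 3
```

Low-rank interpolation exploits exactly this structure to fill in missing spectral samples.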
- Sparse + Low Rank Decomposition of Annihilating Filter-based Hankel Matrix for Impulse Noise Removal. Oct 20 2015 cs.CV arXiv:1510.05559v1: Recently, the so-called annihilating filter-based low-rank Hankel matrix (ALOHA) approach was proposed as a powerful image inpainting method. Based on the observation that smoothness or textures within an image patch correspond to sparse spectral components in the frequency domain, ALOHA exploits the existence of annihilating filters and the associated rank-deficient Hankel matrices in the image domain to estimate the missing pixels. By extending this idea, here we propose a novel impulse noise removal algorithm using a sparse + low-rank decomposition of an annihilating filter-based Hankel matrix. The new approach, which we call the robust ALOHA, is motivated by the observation that an image corrupted with impulse noise has intact pixels, so the impulse noise can be modeled as a sparse component, whereas the underlying image can still be modeled using a low-rank Hankel structured matrix. To solve the sparse + low-rank decomposition problem, we propose an alternating direction method of multipliers (ADMM) approach with initial factorized matrices coming from the low-rank matrix fitting (LMaFit) algorithm. To adapt to local image statistics that have distinct spectral distributions, the robust ALOHA is applied patch by patch. Experimental results for two types of impulse noise - random-valued impulse noise and salt-and-pepper noise - for both single-channel and multi-channel color images demonstrate that the robust ALOHA outperforms existing algorithms by up to 8 dB in terms of peak signal-to-noise ratio (PSNR).
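The two workhorse operators inside such a sparse + low-rank ADMM are the proximal maps: soft-thresholding for the sparse (impulse noise) part and singular value thresholding (SVT) for the low-rank part. A sketch of just these two operators (not the full robust ALOHA pipeline), on an illustrative matrix:

```python
import numpy as np

def soft_threshold(X, tau):
    # Proximal map of the l1 norm: shrink entries toward zero by tau.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Proximal map of the nuclear norm: shrink singular values by tau.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# SVT with a threshold between the singular values (5 and 1) keeps only
# the dominant rank-1 component of this rank-2 matrix.
X = 5.0 * np.outer([1, 0, 0], [1, 0, 0]) + 1.0 * np.outer([0, 1, 0], [0, 1, 0])
print(np.linalg.matrix_rank(svt(X, 2.0)))  # 1
```

ADMM alternates these shrinkage steps with a data-consistency update until the sparse and low-rank parts separate.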
- Parallel MRI (pMRI) and compressed sensing MRI (CS-MRI) have been considered as two distinct reconstruction problems. Inspired by recent k-space interpolation methods, an annihilating filter-based low-rank Hankel matrix approach (ALOHA) is proposed as a general framework for sparsity-driven k-space interpolation which unifies pMRI and CS-MRI. Specifically, our framework is based on the fundamental duality between transform-domain sparsity in the primary space and the low-rankness of a weighted Hankel matrix in the reciprocal space, which converts pMRI and CS-MRI into a k-space interpolation problem using structured matrix completion. Using theoretical results from the latest compressed sensing literature, we show that the required sampling rates for ALOHA may achieve the optimal rate. Experimental results with in vivo data for single-/multi-coil imaging as well as dynamic imaging confirm that the proposed method outperforms the state-of-the-art pMRI and CS-MRI methods.
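Structured matrix completion of this kind alternates a low-rank step with a projection back onto Hankel structure; the latter simply averages each anti-diagonal (a Cadzow-style step). A minimal sketch of that projection alone, on an arbitrary illustrative matrix:

```python
import numpy as np

def hankel_project(H):
    # Project onto the set of Hankel matrices: average each anti-diagonal
    # (entries with constant i + j), then rebuild H[i, j] = sig[i + j].
    rows, cols = H.shape
    n = rows + cols - 1
    sig = np.array([np.mean([H[i, d - i]
                             for i in range(max(0, d - cols + 1),
                                            min(rows, d + 1))])
                    for d in range(n)])
    return np.array([[sig[i + j] for j in range(cols)]
                     for i in range(rows)])

M = np.arange(12.0).reshape(3, 4)   # arbitrary, non-Hankel matrix
Hp = hankel_project(M)
# Every anti-diagonal of the projection is constant, i.e. Hp is Hankel:
is_hankel = all(np.isclose(Hp[i + 1, j - 1], Hp[i, j])
                for i in range(2) for j in range(1, 4))
print(is_hankel)  # True
```

Alternating this projection with rank truncation and re-imposing the acquired k-space samples is the basic loop of such interpolation schemes.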
- Dec 03 2013 cs.SC arXiv:1312.0462v1: We improve the local generic position method for isolating the real roots of a zero-dimensional bivariate polynomial system with two polynomials and extend the method to general zero-dimensional polynomial systems. The method mainly involves resultant computation and real root isolation of univariate polynomial equations. The roots of the system have a linear univariate representation. The complexity of the method is $\tilde{O}_B(N^{10})$ for the bivariate case, where $N=\max(d,\tau)$, with $d$ and $\tau$ upper bounds on the degree and the maximal coefficient bitsize of the input polynomials, respectively. The algorithm is certified with probability 1 in the multivariate case. The implementation shows that the method is efficient, especially for bivariate polynomial systems.
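The resultant computation the method relies on has a compact reference implementation via the Sylvester matrix, whose determinant equals $\mathrm{Res}(f, g)$. A sketch with coefficients ordered highest degree first (illustrative helper, not the paper's implementation):

```python
import numpy as np

def sylvester(f, g):
    # Sylvester matrix of univariate polynomials f, g given as coefficient
    # lists, highest degree first; size (deg f + deg g) square.
    m, n = len(f) - 1, len(g) - 1        # degrees
    S = np.zeros((m + n, m + n))
    for i in range(n):                   # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                   # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S

def resultant(f, g):
    return np.linalg.det(sylvester(f, g))

# Res(x^2 - 1, x - 2) = g(1) * g(-1) = (1 - 2)(-1 - 2) = 3.
print(round(resultant([1, 0, -1], [1, -2])))  # 3
```

The resultant vanishes exactly when $f$ and $g$ share a root, which is what makes it the elimination tool for projecting a bivariate system onto one variable.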
- Apr 05 2012 cs.CG arXiv:1204.0905v1: In this paper, an algorithm to compute a certified $G^1$ rational parametric approximation for algebraic space curves is given by extending the local generic position method for solving zero-dimensional polynomial equation systems to the case of dimension one. By certified, we mean that the approximation curve and the original curve have the same topology and that their Hausdorff distance is smaller than a given precision. Thus, the method also gives a new algorithm to compute the topology of space algebraic curves. The main advantage of the algorithm, inherited from the local generic position method, is that topology computation and approximation for a space curve are directly reduced to the same tasks for two plane curves. In particular, the error bound of the approximation space curve is obtained from the error bounds of the approximation plane curves explicitly. Nontrivial examples are used to show the effectiveness of the method.
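The certification criterion above compares curves under the Hausdorff distance. For point samples of two curves, the discrete version has a direct implementation (a sketch on illustrative sample data, not the paper's certified bound):

```python
import numpy as np

def hausdorff(A, B):
    # Discrete Hausdorff distance between two point sets A and B:
    # max over each set of the distance to the nearest point in the other.
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Samples of a space curve (t, t^2, t^3) and a copy shifted by 1 in z;
# their Hausdorff distance is 1 (attained at the endpoints).
t = np.linspace(0.0, 1.0, 50)
A = np.stack([t, t ** 2, t ** 3], axis=1)
B = A + np.array([0.0, 0.0, 1.0])
print(hausdorff(A, B))
```

The discrete value lower-bounds the true curve distance; certified methods add an analytic bound on the sampling gap.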