results for au:Zou_J in:math

- Feb 15 2018 math.CO arXiv:1802.04930v1 Given a graph $G$ and a positive integer $k$, the \emph{Gallai-Ramsey number} is defined to be the minimum number of vertices $n$ such that any $k$-edge coloring of $K_n$ contains either a rainbow (all different colored) triangle or a monochromatic copy of $G$. In this paper, we obtain general upper and lower bounds on the Gallai-Ramsey numbers for books $B_{m} = K_{2} + \overline{K_{m}}$ and prove sharp results for $m \leq 5$.
- In this paper we are concerned with the analysis of heavy-tailed data when a portion of the extreme values are unavailable. This research was motivated by an analysis of the degree distributions in a large social network. The degree distributions of such networks tend to have power law behavior in the tails. We focus on the Hill estimator, which plays a starring role in heavy-tailed modeling. The Hill estimator for this data exhibited a smooth and increasing "sample path" as a function of the number of upper order statistics used in constructing the estimator. This behavior became more apparent as we artificially removed more of the upper order statistics. Building on this observation, we introduce a new parameterization of the Hill estimator as a function of $\delta$ and $\theta$, which correspond, respectively, to the proportion of extreme values that are unavailable and the proportion of upper order statistics used in the estimation. As a function of $(\delta, \theta)$, we establish functional convergence of the normalized Hill estimator to a Gaussian random field. An estimation procedure is developed based on the limit theory to estimate the number of missing extremes and extreme value parameters, including the tail index and the bias of Hill's estimate. We illustrate how this approach works in both simulations and real data examples.
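  The classical Hill estimator underlying this construction can be sketched in a few lines; the $(\delta,\theta)$-parameterized version of the paper is not reproduced here, and the parameter choices below are purely illustrative.

  ```python
  import numpy as np

  def hill_estimator(data, k):
      """Classical Hill estimate of the tail index gamma = 1/alpha,
      built from the k largest order statistics of the sample."""
      x = np.sort(np.asarray(data, dtype=float))[::-1]  # descending order
      if not (0 < k < len(x)):
          raise ValueError("k must satisfy 0 < k < n")
      return np.mean(np.log(x[:k])) - np.log(x[k])

  # Standard Pareto(alpha=2) samples have true tail index gamma = 1/alpha = 0.5.
  rng = np.random.default_rng(0)
  sample = rng.pareto(2.0, size=100_000) + 1.0  # shift to support [1, inf)
  gamma_hat = hill_estimator(sample, k=2_000)
  ```

  For exact Pareto data the estimate is close to the true tail index; the "sample path" behavior discussed in the abstract emerges when `gamma_hat` is plotted against a range of `k` values.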
- Timing detection is the first, and a critically important, step in wireless communication systems, and its performance is usually degraded by frequency offset. It is therefore a challenge to design the synchronization signal in massive narrowband Internet of Things (NB-IoT) scenarios, where the frequency offset is usually large due to low-cost requirements. In this paper, we first propose a new general synchronization signal structure built from a pair of conjugated sequences, which removes the potential timing error arising from a large frequency offset. We then analyze which sequences suit the proposed structure and discuss a special ZC sequence as an example. Finally, simulation results demonstrate that the proposed synchronization signal works well when the frequency offset is large, making the proposed design well suited to massive NB-IoT scenarios.
- Nov 22 2016 math.PR arXiv:1611.06566v1 This paper is concerned with asymptotic behavior of a variety of functionals of increments of continuous semimartingales. Sampling times are assumed to follow a rather general discretization scheme. If an underlying semimartingale is thought of as a financial asset price process, a general sampling scheme like the one employed in this paper is capable of reflecting what happens whenever the financial trading data are recorded in a tick-by-tick fashion. A law of large numbers and a central limit theorem are proved after an appropriate normalization. One application of our result is an extension of the realized kernel estimator of integrated volatility to the case of random sampling times.
- A full-duplex two-way relay channel with multiple antennas is considered. For this three-node network, the beamforming design needs to suppress self-interference. While a traditional approach is to apply zero-forcing for self-interference mitigation, it may harm the desired signals. In this paper, a design that retains a fraction of the self-interference is proposed by solving a quality-of-service constrained beamforming design problem. Since the problem is challenging due to the loop self-interference, a convergence-guaranteed alternating optimization algorithm is proposed to jointly design the relay-user beamformers. Numerical results show that the proposed scheme outperforms the zero-forcing method and achieves a transmit power close to the ideal case.
- Oct 12 2016 math.NA arXiv:1610.03196v1 We shall propose and analyze some new preconditioners for the saddle-point systems arising from the edge element discretization of the time-harmonic Maxwell equations in three dimensions. We first consider the saddle-point systems with vanishing wave number, for which we present an important relation between the solutions of the singular curl-curl system and the non-singular saddle-point system, and then demonstrate that the saddle-point system can be efficiently solved by the Hiptmair-Xu solver. For the saddle-point systems with non-vanishing wave numbers, we show that PCG with a new preconditioner can be applied to the non-singular system when wave numbers are small, while methods like preconditioned MINRES may be applied with some existing and new preconditioners when wave numbers are large. The spectral behaviors of the resulting preconditioned systems for the existing and new preconditioners are analyzed and compared, and numerical experiments are presented to demonstrate and compare the efficiencies of these preconditioners.
- In this work we develop and analyze an adaptive finite element method for efficiently solving electrical impedance tomography -- a severely ill-posed nonlinear inverse problem of recovering the conductivity from boundary voltage measurements. The reconstruction technique is based on Tikhonov regularization with a Sobolev smoothness penalty and discretizing the forward model using continuous piecewise linear finite elements. We derive an adaptive finite element algorithm with an a posteriori error estimator involving the state and adjoint variables and the recovered conductivity. The convergence of the algorithm is established, in the sense that the sequence of discrete solutions contains a subsequence converging to a solution of the optimality system for the continuous formulation. Numerical results are presented to verify the convergence and efficiency of the algorithm.
- Markov chains and diffusion processes are indispensable tools in machine learning and statistics that are used for inference, sampling, and modeling. With the growth of large-scale datasets, the computational cost associated with simulating these stochastic processes can be considerable, and many algorithms have been proposed to approximate the underlying Markov chain or diffusion. A fundamental question is how the computational savings trade off against the statistical error incurred due to approximations. This paper develops general results that address this question. We bound the Wasserstein distance between the equilibrium distributions of two diffusions as a function of their mixing rates and the deviation in their drifts. We show that this error bound is tight in simple Gaussian settings. Our general result on continuous diffusions can be discretized to provide insights into the computational-statistical trade-off of Markov chains. As an illustration, we apply our framework to derive finite-sample error bounds of approximate unadjusted Langevin dynamics. We characterize computation-constrained settings where, by using fast-to-compute approximate gradients in the Langevin dynamics, we obtain more accurate samples compared to using the exact gradients. Finally, as an additional application of our approach, we quantify the accuracy of approximate zig-zag sampling. Our theoretical analyses are supported by simulation experiments.
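  As a minimal illustration of the unadjusted Langevin dynamics to which the paper's error bounds apply, the following sketch samples a one-dimensional standard Gaussian with the exact gradient; the step size, iteration counts and function names are illustrative assumptions, not the paper's setup.

  ```python
  import numpy as np

  def ula_samples(grad_U, x0, step, n_steps, rng):
      """Unadjusted Langevin algorithm targeting pi(x) ~ exp(-U(x)):
      x_{t+1} = x_t - step * grad_U(x_t) + sqrt(2*step) * N(0, 1)."""
      x = x0
      out = np.empty(n_steps)
      for t in range(n_steps):
          x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal()
          out[t] = x
      return out

  # Target: standard Gaussian, U(x) = x^2/2, so grad_U(x) = x.
  rng = np.random.default_rng(1)
  chain = ula_samples(lambda x: x, x0=0.0, step=0.01, n_steps=400_000, rng=rng)
  burned = chain[40_000:]  # discard burn-in
  ```

  Because ULA never applies a Metropolis correction, its equilibrium law is a biased approximation of the target (here the stationary variance is $1/(1-h/2)$ rather than exactly 1); bounding exactly this kind of discrepancy is what the paper's Wasserstein estimates address.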
- Apr 13 2016 math.FA arXiv:1604.03364v2 We consider the ill-posed operator equation $Ax=y$ with an injective and bounded linear operator $A$ mapping between $\ell^2$ and a Hilbert space $Y$, possessing the unique solution $x^\dag=\{x^\dag_k\}_{k=1}^\infty$. For cases where sparsity $x^\dag \in \ell^0$ is expected but often slightly violated in practice, we investigate, in comparison with $\ell^1$-regularization, the elastic-net regularization, where the penalty is a weighted superposition of the $\ell^1$-norm and the square of the $\ell^2$-norm, under the assumption that $x^\dag \in \ell^1$. This approach involves two positive parameters, the weight parameter $\eta$ and the regularization parameter as the multiplier of the whole penalty in the Tikhonov functional, whereas only one regularization parameter arises in $\ell^1$-regularization. Based on the variational inequality approach for the description of the solution smoothness with respect to the forward operator $A$, and exploiting the method of approximate source conditions, we present some results to estimate the rate of convergence for the elastic-net regularization. The occurring rate function contains the rate of decay $x^\dag_k \to 0$ as $k \to \infty$ and the classical smoothness properties of $x^\dag$ as an element of $\ell^2$.
- Feb 17 2016 math.NA arXiv:1602.04939v1 In the reconstruction of sound waves in a 3D stratified waveguide, a key challenge is to effectively reduce the huge computational demand. In this work, we propose an efficient and simple multilevel reconstruction method to help locate the accurate position of a point source in a stratified ocean. The proposed method can be viewed as a direct sampling method since it involves no solutions of optimization problems or linear systems. The method exhibits several strengths: fast convergence, robustness against noise, advantages in computational complexity, and applicability with a very small number of receivers.
- A traditional cellular system (e.g., LTE) operates only on the licensed spectrum. This tutorial explains the concept of cellular communications on both licensed and license-exempt spectrum under a unified architecture. The purpose of extending a cellular system into the bandwidth-rich license-exempt spectrum is to form a larger cellular network spanning all spectrum types, ultimately resulting in a converged cellular network. This tutorial examines the benefits of this concept and its technical challenges, and provides a conceptual LTE-based design example that shows how a traditional cellular system like LTE can adapt itself to a different spectrum type, conform to the regulatory requirements, and harmoniously coexist with incumbent systems such as Wi-Fi. To cope with the interference and regulation rules on license-exempt spectrum, a special medium access mechanism is introduced into the existing LTE transmission frame structure to exploit the full benefits of a coordinated and managed cellular architecture.
- Machine-type communication (MTC) is the key technology to support data transfer among devices (sensors and actuators). Cellular communication technologies were developed mainly for "human-type" communications; enabling MTC with cellular networks not only improves the connectivity, accessibility, and availability of an MTC network but also has the potential to further drive down operating costs. However, cellular MTC poses some unique challenges due to a huge mismatch between cellular user equipment and MTC devices, caused mainly by the physical limitations, deployment environment and density, and application behavior of MTC devices. One of the most challenging issues is to provide an efficient way for an MTC device to access the network. We address the issues in the existing random access scheme of a cellular system (e.g., LTE), and propose an effective spectrum concept enabling a power- and spectrally-optimized design specifically tailored for MTC.
- In this work we shall review the (phased) inverse scattering problem and then pursue the phaseless reconstruction from far-field data with the help of the concept of scattering coefficients. We perform sensitivity, resolution and stability analysis of both phased and phaseless problems and compare the degree of ill-posedness of the phased and phaseless reconstructions. The phaseless reconstruction is highly nonlinear and much more severely ill-posed. Algorithms are provided to solve both the phased and phaseless reconstructions in the linearized case. Stability is studied by estimating the condition number of the inversion process for both the phased and phaseless cases. An optimal strategy is suggested to attain the infimum of the condition numbers of the phaseless reconstruction, which may provide important guidance for efficient phaseless measurements in practical applications. To the best of our knowledge, the stability analysis in terms of condition numbers is new for the phased and phaseless inverse scattering problems, and is very important for understanding the degree of ill-posedness of these inverse problems. Numerical experiments are provided to illustrate the theoretical asymptotic behavior, as well as the effectiveness and robustness of the phaseless reconstruction algorithm.
- Aug 26 2015 math.OC arXiv:1508.06064v1 In this paper, we propose a parallel space-time domain decomposition method for solving an unsteady source identification problem governed by the linear convection-diffusion equation. Traditional approaches require repeatedly solving a forward parabolic system, an adjoint system and a system with respect to the unknowns, one after another; these sequential steps are undesirable for large-scale parallel computing. A space-time restrictive additive Schwarz method is proposed for a fully implicit space-time coupled discretization scheme to recover the time-dependent pollutant source intensity functions. We show with numerical experiments that the scheme works well with noise in the observation data. More importantly, it is demonstrated that the parallel space-time Schwarz preconditioner is scalable on a supercomputer with over $10^3$ processors, and thus promising for large-scale applications.
- As the number of processor cores on supercomputers grows larger and larger, algorithms with a high degree of parallelism attract more attention. In this work, we propose a novel space-time coupled algorithm for solving an inverse problem associated with the time-dependent convection-diffusion equation in three dimensions. We introduce a mixed finite element/finite difference method and one-level and two-level space-time parallel domain decomposition preconditioners for the Karush-Kuhn-Tucker (KKT) system induced by reformulating the inverse problem as an output least-squares optimization problem in the space-time domain. The new full-space approach eliminates the sequential steps of the optimization outer loop and the inner forward and backward time-marching processes, thus achieving a high degree of parallelism. Numerical experiments validate that this approach is effective and robust for recovering unsteady moving sources. We report strong scalability results obtained on a supercomputer with more than 1,000 processors.
- In this work we perform some mathematical analysis on non-negative matrix factorizations (NMF) and apply NMF to some imaging and inverse problems. We propose a sparse low-rank approximation of big positive data and images in terms of tensor products of positive vectors, and investigate its effectiveness in terms of the number of tensor products used in the approximation. A new multi-level analysis (MLA) framework is also suggested to extract major components of the matrix representing structures at different resolutions, while still preserving the positivity of the basis and the sparsity of the approximation. We also propose a semi-smooth Newton method based on primal-dual active sets for the non-negative factorization. Numerical results are given to demonstrate the effectiveness of the proposed method in capturing features in images and structures of inverse problems under no a priori assumption on the data structure, as well as in providing a sparse low-rank representation of the data.
- Feb 25 2015 math.AP arXiv:1502.06803v2 This work aims at providing a mathematical and numerical framework for analyzing the effects of pulsed electric fields on biological media. Biological tissues and cell suspensions are described as having a heterogeneous permittivity and a heterogeneous conductivity. Well-posedness of the model problem and the regularity of its solution are established. A fully discrete finite element scheme is proposed for the numerical approximation of the potential distribution as a function of time and space simultaneously for an arbitrarily shaped pulse, and it is demonstrated to enjoy the optimal convergence order in both space and time. The proposed numerical scheme has potential applications in medicine, food sciences, and biotechnology.
- The sampling rate required by the Nth Power Nonlinear Transformation (NPT) method is typically much greater than the Nyquist rate, which places a heavy burden on the Analog-to-Digital Converter (ADC). Taking advantage of the sparsity of the spectrum of PSK signals under the NPT, we develop an NPT method for PSK signals from sub-Nyquist-rate samples. In this paper, by combining the NPT method with Compressive Sensing (CS) theory, we present a frequency spectrum reconstruction of the Nth power nonlinear transformation of PSK signals, which can further be used for AMR and rough estimation of unknown carrier frequency and symbol rate.
- Dec 30 2014 math.NA arXiv:1412.8279v1 We investigate randomized algorithms for solving large-scale linear inverse problems with general regularizations. We first present some techniques to transform inverse problems of general form into standard form, then apply randomized algorithms to reduce large-scale systems of standard form to much smaller-scale systems and seek their regularized solutions in combination with some popular choice rules for regularization parameters. We then propose a second approach to solve large-scale ill-posed systems with general regularizations. This involves a new randomized generalized SVD algorithm that can essentially reduce the size of the original large-scale ill-posed systems. The reduced systems can provide approximate regularized solutions of about the same accuracy as those from the classical generalized SVD and, more importantly, the new approach gains clearly in robustness, stability and computational time, as it needs only to work on problems of much smaller size. Numerical results are given to demonstrate the efficiency of the algorithms.
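  The paper's randomized generalized SVD is not reproduced here, but the basic randomized-sketching idea it builds on (a Gaussian sketch of the range followed by an exact SVD of the small projected matrix) can be sketched as follows; the rank and oversampling parameters are illustrative.

  ```python
  import numpy as np

  def randomized_svd(A, rank, n_oversample=10, seed=None):
      """Basic randomized SVD: project A onto a low-dimensional random
      subspace (range finder), then take an exact SVD of the small
      projected matrix and lift the left factor back."""
      rng = np.random.default_rng(seed)
      m, n = A.shape
      Omega = rng.standard_normal((n, rank + n_oversample))  # Gaussian sketch
      Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for range(A @ Omega)
      B = Q.T @ A                          # small (rank + p) x n matrix
      Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
      return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

  # Exactly rank-5 test matrix: the sketch should recover it to machine precision.
  rng = np.random.default_rng(2)
  A = rng.standard_normal((300, 5)) @ rng.standard_normal((5, 200))
  U, s, Vt = randomized_svd(A, rank=5, seed=3)
  err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
  ```

  For matrices that are only approximately low-rank, the same sketch yields a near-optimal rank-`rank` approximation with high probability, which is what makes the reduction of large ill-posed systems cheap.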
- In this work, we are concerned with the diffusive optical tomography (DOT) problem in the case when only one or two pairs of Cauchy data are available. We propose a simple and efficient direct sampling method (DSM) to locate inhomogeneities inside a homogeneous background and solve the DOT problem in both full and limited aperture cases. This new method is easy to implement and computationally inexpensive. Numerical experiments demonstrate its effectiveness and robustness against noise in the data. This provides a new promising numerical strategy for the DOT problem.
- In this paper we consider the inverse scattering problem for high-contrast targets. We mathematically analyze the experimentally-observed phenomenon of super-resolution in imaging the target shape. This is the first time that a mathematical theory of super-resolution has been established in the context of imaging high contrast inclusions. We illustrate our main findings with a variety of numerical examples. Our analysis is based on the novel concept of scattering coefficients. These findings may help in developing resonant structures for resolution enhancement.
- Aug 26 2014 math.NA arXiv:1408.5547v1 We propose an inexact Uzawa algorithm with two variable relaxation parameters for solving the generalized saddle-point system. The saddle-point problems can be found in a wide class of applications, such as the augmented Lagrangian formulation of the constrained minimization, the mixed finite element method, the mortar domain decomposition method and the discretization of elliptic and parabolic interface problems. The two variable parameters can be updated at each iteration, requiring no a priori estimates on the spectrum of two preconditioned subsystems involved. The convergence and convergence rate of the algorithm are analysed. Both symmetric and nonsymmetric saddle-point systems are discussed, and numerical experiments are presented to demonstrate the robustness and effectiveness of the algorithm.
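  A minimal sketch of the classical (exact) Uzawa iteration that the inexact variant generalizes, on a small dense saddle-point system; here a single relaxation parameter is fixed from the Schur complement spectrum, rather than two parameters being updated adaptively each iteration as in the paper.

  ```python
  import numpy as np

  def uzawa(A, B, f, g, omega, n_iter=5000):
      """Classical Uzawa iteration for the saddle-point system
          [A  B^T][u]   [f]
          [B  0  ][p] = [g]:
      solve the primal block exactly, then relax the multiplier p."""
      p = np.zeros(B.shape[0])
      for _ in range(n_iter):
          u = np.linalg.solve(A, f - B.T @ p)
          p = p + omega * (B @ u - g)
      return u, p

  # Small dense example: SPD primal block, full-rank constraint block.
  rng = np.random.default_rng(4)
  M = rng.standard_normal((6, 6))
  A = M @ M.T + 6 * np.eye(6)
  B = rng.standard_normal((2, 6))
  u_true = rng.standard_normal(6)
  p_true = rng.standard_normal(2)
  f = A @ u_true + B.T @ p_true
  g = B @ u_true

  # Fixed relaxation chosen from the Schur complement S = B A^{-1} B^T,
  # which guarantees convergence (0 < omega < 2 / lambda_max(S)).
  S = B @ np.linalg.solve(A, B.T)
  omega = 1.0 / np.linalg.eigvalsh(S).max()
  u, p = uzawa(A, B, f, g, omega)
  ```

  The point of the inexact and variable-parameter variants is precisely to avoid the exact primal solve and the a priori spectral estimate used in this fixed-parameter sketch.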
- In this paper, we are concerned with a shape design problem, in which our target is to design, up to rigid transformations and scaling, the shape of an object given either its polarization tensor at multiple contrasts or the partial eigenvalues of its Neumann-PoincarĂ© operator, which are known as the Fredholm eigenvalues. We begin by proposing to recover the eigenvalues of the Neumann-PoincarĂ© operator from the polarization tensor by means of the holomorphic functional calculus. Then we develop a regularized Gauss-Newton optimization method for the shape reconstruction process. We present numerical results to demonstrate the effectiveness of the proposed methods and to illustrate important properties of the Fredholm eigenvalues and their associated eigenfunctions. Our results are expected to have important applications in the design of plasmon resonances in nanoparticles as well as in the multifrequency or pulsed imaging of small anomalies.
- Oct 24 2013 math.AP arXiv:1310.6096v1 This work investigates the scattering coefficients for inverse medium scattering problems. It shows some fundamental properties of the coefficients, such as symmetry and tensorial properties. The relationship between the scattering coefficients and the far-field pattern is also derived. Furthermore, the sensitivity of the scattering coefficients with respect to changes in the permittivity and permeability distributions is investigated. In the linearized case, explicit formulas for reconstructing the permittivity and permeability distributions from the scattering coefficients are proposed. These formulas relate the exponentially ill-posed character of the inverse medium scattering problem at a fixed frequency to the exponential decay of the scattering coefficients, and they show the stability of the reconstruction from multifrequency measurements. This provides a new direction for solving inverse medium scattering problems.
- Sep 10 2013 math.NA arXiv:1309.2101v1 We shall establish the convergence of an adaptive conforming finite element method for the reconstruction of the distributed flux in a diffusion system. The adaptive method is based on a posteriori error estimators for the distributed flux, state and costate variables. The sequence of discrete solutions produced by the adaptive algorithm is proved to converge to the true triplet satisfying the optimality conditions in the energy norm, and the corresponding error estimator converges to zero asymptotically.
- The regularized near-cloak via the transformation optics approach in the time-harmonic electromagnetic scattering is considered. This work extends the existing studies mainly in two aspects. First, it presents a near-cloak construction by incorporating a much more general conducting layer between the cloaked and cloaking regions. This might be of significant practical importance when production fluctuations occur. Second, it allows the cloaked contents to be both passive and active with an applied current inside. In assessing the near-cloaking performance, comprehensive and sharp estimates are derived for the scattering amplitude in terms of the asymptotic regularization parameter and the material tensors of the conducting layer. The scattering estimates are independent of the passive/active contents being cloaked, which implies that one could nearly cloak arbitrary contents by using the proposed near-cloak construction.
- We present a new splitting method for time-dependent convection-dominated diffusion problems. The original convection-diffusion system is split into two sub-systems: a pure convection system and a diffusion system. At each time step, a convection problem and a diffusion problem are solved successively. The scheme has the following nice features: the convection subproblem is solved explicitly, and a multistep technique is introduced to essentially enlarge the stability region so that the resulting scheme behaves like an unconditionally stable scheme; the diffusion subproblem is always self-adjoint and coercive so that it can be solved efficiently by many existing optimal preconditioned iterative solvers. The scheme is then extended to the Navier-Stokes equations, where the nonlinear convection is resolved by a linear explicit multistep scheme at the convection step, and only a generalized Stokes problem needs to be solved at the diffusion step, with the resulting stiffness matrix remaining invariant in the time-marching process. The new schemes are all free from tuning stabilization parameters for convection-dominated diffusion problems. Numerical simulations are presented to demonstrate the stability, convergence and performance of the single-step and multistep variants of the new scheme.
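  A one-dimensional sketch of the splitting idea, assuming periodic boundary conditions, an explicit upwind step for convection and a backward-Euler step for diffusion; the multistep stabilization technique of the paper is not included, and all grid parameters are illustrative.

  ```python
  import numpy as np

  def split_step(u, a, nu, dx, dt):
      """One operator-splitting step for u_t + a u_x = nu u_xx on a
      periodic grid: explicit upwind convection, then implicit diffusion."""
      n = len(u)
      # Explicit upwind convection (assumes a > 0, CFL a*dt/dx <= 1).
      u = u - a * dt / dx * (u - np.roll(u, 1))
      # Implicit (backward Euler) diffusion: (I - nu*dt*L) u_new = u,
      # with L the periodic second-difference (Laplacian) matrix.
      L = (np.roll(np.eye(n), 1, axis=1) - 2 * np.eye(n)
           + np.roll(np.eye(n), -1, axis=1)) / dx**2
      return np.linalg.solve(np.eye(n) - nu * dt * L, u)

  n, a, nu = 200, 1.0, 1e-3
  dx = 1.0 / n
  dt = 0.5 * dx / a                      # convection-limited step
  x = np.linspace(0, 1, n, endpoint=False)
  u = np.exp(-200 * (x - 0.5) ** 2)      # smooth initial bump in [0, 1]
  mass0 = u.sum() * dx
  for _ in range(100):
      u = split_step(u, a, nu, dx, dt)
  ```

  Both sub-steps conserve the discrete mass on the periodic grid and are monotone, so the solution stays within its initial bounds, reflecting the stability the abstract emphasizes for the diffusion sub-step.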
- Dec 21 2012 math.NA arXiv:1212.5085v1 In this paper, we study the inverse electromagnetic medium scattering problem of estimating the support and shape of medium scatterers from scattered electric or magnetic near-field data. We develop a novel direct sampling method based on an analysis of electromagnetic scattering and the behavior of the fundamental solution. The method is applicable even with one incident field and needs only to compute inner products of the measured scattered field with the fundamental solutions located at sampling points. Hence it is strictly direct, computationally very efficient, and highly tolerant of noise in the data. Two- and three-dimensional numerical experiments indicate that it can provide reliable support estimates of single and multiple scatterers in the case of both exact and highly noisy data.
- Dec 18 2012 math.NA arXiv:1212.4040v1 In the reconstruction of unknown multiple scattering objects in inverse medium scattering problems, the first important step is to effectively locate some approximate domains that contain all inhomogeneous media. Without such a step, one may have to take a much larger computational domain than actually needed in the reconstruction of all scattering objects, resulting in huge additional computational effort. In this work we propose a simple and efficient multilevel reconstruction algorithm to help locate an accurate position and shape of each inhomogeneous medium. Other existing, effective but computationally more demanding reconstruction algorithms may then be applied in these initially located computational domains to achieve more accurate shapes of the scatterers and the contrast values over each medium domain. The new algorithm exhibits several strengths: robustness against noise, fewer required incident fields, fast convergence, flexibility in dealing with scatterers of special shapes, and advantages in computational complexity.
- Aug 28 2012 math.PR arXiv:1208.5225v1 Ergodicity is a fundamental issue for a stochastic process. In this paper, we refine results on ergodicity for a general type of Markov chain to a specific type, the $GI/G/1$-type Markov chain, which has many interesting and important applications in various areas. It is of interest to obtain conditions in terms of system parameters or the given information about the process, under which the chain has various ergodic properties. Specifically, we provide necessary and sufficient conditions for geometric, strong and polynomial ergodicity, respectively.
- This work is concerned with a direct sampling method (DSM) for inverse acoustic scattering problems using far-field data. The method characterizes some unknown obstacles, inhomogeneous media or cracks, directly through an indicator function computed from the measured data. Using one or very few incident waves, the DSM provides quite reasonable profiles of scatterers in time-harmonic inverse acoustic scattering without a priori knowledge of either the physical properties or the number of disconnected components of the scatterer. We shall first derive the DSM using far-field data, then carry out a systematic evaluation of the performances and distinctions of the DSM using both near-field and far-field data. The numerical simulations are shown to demonstrate interesting and promising potentials of the DSM: a) ability to identify not only medium scatterers, but also obstacles, and even cracks, using measurement data from one or few incident directions, b) robustness with respect to large noise, and c) computational efficiency with only inner products involved.
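  The DSM indicator can be sketched for synthetic far-field data of a single point-like scatterer; the kernel choice (a plane-wave test function) and all parameters below are illustrative assumptions, not the paper's exact formulation.

  ```python
  import numpy as np

  k = 20.0                                  # wavenumber (illustrative)
  angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
  xhat = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # observation directions

  # Synthetic far-field data of a point-like scatterer at z0
  # (far-field pattern ~ e^{-i k xhat . z0}, up to a constant factor).
  z0 = np.array([0.3, -0.2])
  u_inf = np.exp(-1j * k * xhat @ z0)

  def indicator(z):
      """DSM-style indicator: normalized inner product of the far-field
      data with the test function e^{-i k xhat . z} at sampling point z."""
      phi = np.exp(-1j * k * xhat @ z)
      return np.abs(np.vdot(phi, u_inf)) / (np.linalg.norm(phi) * np.linalg.norm(u_inf))

  # Evaluate on a sampling grid; the peak should land at the scatterer.
  grid = np.linspace(-1, 1, 81)
  vals = np.array([[indicator(np.array([gx, gy])) for gx in grid] for gy in grid])
  iy, ix = np.unravel_index(vals.argmax(), vals.shape)
  z_est = np.array([grid[ix], grid[iy]])
  ```

  Only inner products of the data with precomputable test functions are needed, which is what makes this family of methods "strictly direct" and cheap compared to optimization-based reconstructions.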
- May 22 2012 math.NA arXiv:1205.4277v1 We present a novel numerical method for the time-harmonic inverse medium scattering problem of recovering the refractive index from near-field scattered data. The approach consists of two stages: a pruning step that detects the scatterer support, and a resolution-enhancing step with mixed regularization. The first step is strictly direct and of sampling type, and faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer sizes as well as intensities. The model is efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the approach is accurate, computationally efficient, and robust with respect to data noise.
- We consider time-harmonic wave scattering from an inhomogeneous isotropic medium supported in a bounded domain $\Omega\subset\mathbb{R}^N$ ($N\geq 2$). In a subregion $D\Subset\Omega$, the medium is supposed to be lossy and have a large mass density. We study the asymptotic development of the wave field as the mass density $\rho\rightarrow +\infty$ and show that the wave field inside $D$ will decay exponentially, while the wave field outside the medium will converge to the one corresponding to a sound-hard obstacle $D\Subset\Omega$ buried in the medium supported in $\Omega\backslash\bar{D}$. Moreover, the normal velocity of the wave field on $\partial D$ from outside $D$ is shown to vanish as $\rho\rightarrow +\infty$. We derive very accurate estimates for the wave field inside and outside $D$ and on $\partial D$ in terms of $\rho$, and show that the asymptotic estimates are sharp. The implication of the obtained results is given for an inverse scattering problem of reconstructing a complex scatterer.
- High-frequency data observed on the prices of financial assets are commonly modeled by diffusion processes with micro-structure noise, and realized volatility-based methods are often used to estimate integrated volatility. For problems involving a large number of assets, the estimation objects we face are volatility matrices of large size. The existing volatility estimators work well for a small number of assets but perform poorly when the number of assets is very large. In fact, they are inconsistent when both the number, $p$, of the assets and the average sample size, $n$, of the price data on the $p$ assets go to infinity. This paper proposes a new type of estimator for the integrated volatility matrix and establishes asymptotic theory for the proposed estimators in a framework that allows both $n$ and $p$ to approach infinity. The theory shows that the proposed estimators achieve high convergence rates under a sparsity assumption on the integrated volatility matrix. The numerical studies demonstrate that the proposed estimators perform well for large $p$ and complex price and volatility models. The proposed method is applied to real high-frequency financial data.
- Sep 15 2006 math.AP arXiv:math/0609396v1This paper is the second part of a threefold article, aimed at solving numerically the Poisson problem in three-dimensional prismatic or axisymmetric domains. In the first part of this series, the Fourier Singular Complement Method was introduced and analysed, in prismatic domains. In this second part, the FSCM is studied in axisymmetric domains with conical vertices, whereas, in the third part, implementation issues, numerical tests and comparisons with other methods are carried out. The method is based on a Fourier expansion in the direction parallel to the reentrant edges of the domain, and on an improved variant of the Singular Complement Method in the 2D section perpendicular to those edges. Neither refinements near the reentrant edges or vertices of the domain, nor cut-off functions are required in the computations to achieve an optimal convergence order in terms of the mesh size and the number of Fourier modes used.
- Sep 15 2006 math.AP arXiv:math/0609394v1This is the first part of a threefold article, aimed at solving numerically the Poisson problem in three-dimensional prismatic or axisymmetric domains. In this first part, the Fourier Singular Complement Method is introduced and analysed, in prismatic domains. In the second part, the FSCM is studied in axisymmetric domains with conical vertices, whereas, in the third part, implementation issues, numerical tests and comparisons with other methods are carried out. The method is based on a Fourier expansion in the direction parallel to the reentrant edges of the domain, and on an improved variant of the Singular Complement Method in the 2D section perpendicular to those edges. Neither refinements near the reentrant edges of the domain nor cut-off functions are required in the computations to achieve an optimal convergence order in terms of the mesh size and the number of Fourier modes used.
- Feb 17 2005 math.NA arXiv:math/0502357v1We present a sublinear randomized algorithm to compute a sparse Fourier transform for nonequispaced data. Suppose a signal S is known to consist of N equispaced samples, of which only L<N are available. If the ratio p=L/N is not close to 1, the available data are typically non-equispaced samples. Our algorithm then reconstructs a near-optimal B-term representation R with high probability 1-delta, in time and space poly(B, log(L), log p, log(1/delta), epsilon^-1), such that ||S-R||^2 < (1+epsilon) ||S-R_opt^B||^2, where R_opt^B is the optimal B-term Fourier representation of signal S. The sublinear poly(log L) time compares with the superlinear O(N log N + L) time requirement of the best currently known Inverse Nonequispaced Fast Fourier Transform (INFFT) algorithms. Numerical experiments support the speed advantage of our algorithm over other methods for sparse signals: it already outperforms the INFFT for large but realistic sizes N, and works well even with a large percentage of missing data and in the presence of noise.
- Nov 06 2004 math.NA arXiv:math/0411102v2We analyze a sublinear RAlSFA (Randomized Algorithm for Sparse Fourier Analysis) that finds a near-optimal B-term sparse representation R for a given discrete signal S of length N, in time and space poly(B, log(N)), following the approach given in [GGIMS]. Its poly(log(N)) time cost should be compared with the superlinear O(N log N) time requirement of the Fast Fourier Transform (FFT). A straightforward implementation of the RAlSFA, as presented in the theoretical paper [GGIMS], turns out to be very slow in practice. Our main result is a greatly improved and practical RAlSFA. We introduce several new ideas and techniques that speed up the algorithm. Both rigorous and heuristic arguments for parameter choices are presented. Our RAlSFA constructs, with probability at least 1-delta, a near-optimal B-term representation R in time poly(B) log(N) log(1/delta) / epsilon^2 log(M) such that ||S-R||^2 <= (1+epsilon) ||S-R_opt||^2. Furthermore, this RAlSFA implementation already beats the FFTW for moderately large N. We extend the algorithm to higher-dimensional cases both theoretically and numerically. The crossover point lies at N=70000 in one dimension, and at N=900 for data on an N*N grid in two dimensions, for small-B signals in the presence of noise.
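For context on the objects in the two abstracts above: the optimal B-term Fourier representation R_opt^B of a signal keeps the B largest-magnitude Fourier coefficients, and ||S - R_opt^B||^2 is the energy of the discarded coefficients. The brute-force sketch below computes R_opt^B via a full FFT in O(N log N) time, i.e. it is not RAlSFA itself (whose point is to find those coefficients in sublinear time); all names are illustrative.

```python
import numpy as np

def best_b_term(signal, b):
    """Optimal B-term Fourier approximation via a full FFT (brute force)."""
    n = len(signal)
    coeffs = np.fft.fft(signal) / n
    keep = np.argsort(np.abs(coeffs))[-b:]   # indices of the B largest coefficients
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]
    return np.fft.ifft(sparse) * n, sparse   # time-domain R_opt^B and its coefficients

# An exactly B-sparse signal: a sum of B pure exponentials on the FFT grid.
rng = np.random.default_rng(1)
n, b = 1024, 4
freqs = rng.choice(n, size=b, replace=False)
s = sum(np.exp(2j * np.pi * f * np.arange(n) / n) for f in freqs)
r, coeffs = best_b_term(s, b)
```

For a signal that really is B-sparse in frequency, the B-term representation recovers it exactly, which is the regime where sparse Fourier algorithms beat the FFT.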