results for au:Fauss_M in:cs

- Detecting the presence of an active random wireless source with minimum latency using array signal processing is considered. The problem is studied under the constraint that the analog-to-digital conversion at each radio sensor is restricted to reading the sign of the analog receive signal. We formulate the digital signal processing task as a sequential hypothesis test in simple form. To circumvent the intractable log-likelihood ratio of the resulting multivariate binary array data, a reduced model representation within the exponential family is employed. This approach allows us to design a sequential test and to analyze its performance analytically along classical arguments. In the context of wireless spectrum monitoring for satellite-based navigation and synchronization systems, we study the achievable processing latency, characterized by the average sample number, as a function of the number of antennas in use. The practical feasibility and potential of the discussed low-complexity sensing and decision-making technology are demonstrated via simulations.
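The core decision mechanism can be illustrated with a scalar stand-in: Wald's sequential probability ratio test on an i.i.d. binary (sign) stream. This is a minimal sketch, not the paper's multivariate exponential-family test; the Bernoulli parameters `p0` and `p1` and the target error rates are hypothetical placeholders.

```python
import numpy as np

def sprt_bernoulli(samples, p0, p1, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test on a binary (sign) stream.

    Declares H1 when the cumulative log-likelihood ratio crosses the upper
    threshold, H0 at the lower one; returns the decision and the sample
    number at which it was reached (the quantity behind the average
    sample number discussed above).
    """
    # Wald's approximate thresholds from the target error probabilities.
    upper = np.log((1.0 - beta) / alpha)
    lower = np.log(beta / (1.0 - alpha))
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Per-sample log-likelihood ratio of a Bernoulli observation.
        llr += np.log(p1 / p0) if x else np.log((1.0 - p1) / (1.0 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)
```

Lower thresholds (smaller `alpha`, `beta`) trade a larger average sample number, i.e. latency, for fewer decision errors.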
- The problem of minimizing convex functionals of probability distributions is solved under the assumption that the density of every distribution is bounded from above and below. First, a system of necessary and sufficient first-order optimality conditions, which characterize global minima as solutions of a fixed-point equation, is derived. Based on these conditions, two algorithms are proposed that iteratively solve the fixed-point equation via a block coordinate descent strategy. While the first algorithm is conceptually simpler and more efficient, it is not guaranteed to converge for objective functions that are not strictly convex. This shortcoming is overcome in the second algorithm, which adds an outer proximal iteration and is proven to converge under very mild assumptions. Two examples are given to demonstrate the theoretical usefulness of the optimality conditions as well as the high efficiency and accuracy of the proposed numerical algorithms.
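The clipped fixed-point structure can be sketched on a simplified, discretized instance: minimizing a separable, symmetric, strictly convex objective over probability vectors bounded by `lo` and `hi`. In that symmetric case the stationarity condition forces every unclipped coordinate to a common value, so the minimizer is a clipped constant found by a one-dimensional search; this is an assumed toy reduction for illustration, not the paper's general block coordinate descent algorithms.

```python
import numpy as np

def minimize_separable(lo, hi, tol=1e-12):
    """Minimize sum_i f(p[i]) subject to lo <= p <= hi and sum(p) == 1,
    for any strictly convex f applied identically to each coordinate.

    The first-order condition gives p[i] = clip(c, lo[i], hi[i]) for a
    single scalar c (independent of the particular f), which is located
    by bisection since sum(clip(c, lo, hi)) is nondecreasing in c.
    Feasibility requires sum(lo) <= 1 <= sum(hi).
    """
    a, b = lo.min(), hi.max()
    while b - a > tol:
        c = 0.5 * (a + b)
        # np.clip broadcasts the scalar c against the bound vectors.
        if np.clip(c, lo, hi).sum() < 1.0:
            a = c
        else:
            b = c
    return np.clip(0.5 * (a + b), lo, hi)
```

The general case (non-identical or coupled objective terms) is exactly where the fixed-point equation stops being a scalar problem and the paper's iterative block coordinate descent becomes necessary.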
- The sequential analysis of the problem of joint signal detection and signal-to-noise ratio (SNR) estimation for a linear Gaussian observation model is considered. The problem is posed as a constrained optimization in which the goal is to minimize the number of samples required to achieve the desired (i) type I and type II error probabilities and (ii) mean squared error performance. This optimization problem is reduced to a more tractable formulation by transforming the observed signal and noise sequences into a single sequence of Bernoulli random variables; joint detection and estimation is then performed on the Bernoulli sequence. The transformation renders the problem easily solvable and results in a computationally simpler sufficient statistic than the one based on the untransformed observation sequences. Experimental results demonstrate the advantages of the proposed method, making it suitable for applications with strict constraints on data storage and computation.
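The flavor of such a Bernoulli reduction can be sketched as follows. The comparison rule `to_bernoulli` and the stopping rule below are illustrative assumptions, not the paper's exact transformation or optimal procedure: each observation is compared against an independent reference noise sample, so the success probability grows with the SNR, and a simple sequential rule then detects and estimates from the running success rate alone.

```python
import numpy as np

def to_bernoulli(obs, noise_ref):
    # Illustrative reduction (an assumption, not the paper's exact map):
    # success when the observation dominates a reference noise sample.
    # Only this 0/1 stream needs to be stored and processed.
    return (np.abs(obs) > np.abs(noise_ref)).astype(np.int64)

def sequential_detect_estimate(bern, p0=0.5, margin=0.05, min_n=30):
    """Stop once the running success-rate estimate separates from the
    noise-only value p0 by the given margin; report the detection
    decision, the estimate, and the sample number at stopping."""
    k = 0
    for n, b in enumerate(bern, start=1):
        k += b
        p_hat = k / n
        if n >= min_n and abs(p_hat - p0) >= margin:
            return True, p_hat, n
    return False, k / len(bern), len(bern)
```

The sufficient statistic here is just the running count `k`, which is the computational simplification the abstract refers to: no Gaussian likelihood terms need to be accumulated.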
- The density band model proposed by Kassam for robust hypothesis testing is revisited in this paper. First, a novel criterion for the general characterization of least favorable distributions is proposed, which unifies existing results. This criterion is then used to derive an implicit definition of the least favorable distributions under band uncertainties. In contrast to the existing solution, it requires only two scalar values to be determined and eliminates the need for case-by-case statements. Based on this definition, a generic fixed-point algorithm is proposed that iteratively calculates the least favorable distributions for arbitrary band specifications. Finally, three types of robust tests that emerge from band models are discussed, and a numerical example illustrates their potential use in practice.
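A toy fixed-point iteration on a discrete grid conveys the structure. This is an illustrative sketch under an assumed update rule, not the paper's algorithm: each candidate least favorable density is a single-constant rescaling of the other, clipped into its own band and the constant chosen so it normalizes, which flattens the likelihood ratio wherever no band constraint is active.

```python
import numpy as np

def project_band(ref, lo, hi, iters=200):
    """Scale ref by one scalar c and clip into [lo, hi] so the result
    sums to one; c is found by bisection, since the clipped sum is
    nondecreasing in c. Requires sum(lo) <= 1 <= sum(hi)."""
    a, b = 0.0, 1e6
    for _ in range(iters):
        c = 0.5 * (a + b)
        if np.clip(c * ref, lo, hi).sum() < 1.0:
            a = c
        else:
            b = c
    return np.clip(0.5 * (a + b) * ref, lo, hi)

def least_favorable_pair(lo0, hi0, lo1, hi1, iters=50):
    """Assumed alternating update (for illustration only): each density
    is repeatedly re-projected as a scaled-and-clipped copy of the
    other, driving the likelihood ratio toward a clipped constant."""
    q0 = 0.5 * (lo0 + hi0)  # start from the middle of band 0
    for _ in range(iters):
        q1 = project_band(q0, lo1, hi1)
        q0 = project_band(q1, lo0, hi0)
    return q0, q1
```

Clipping the likelihood ratio between two constants is exactly what makes the resulting robust test insensitive to outliers, at the price of some power under the nominal model.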