Mar 16 2018 math.OC
In this paper we present a new algorithmic realization of a projection-based scheme for general convex constrained optimization problems. The general idea is to transform the original optimization problem into a sequence of feasibility problems by iteratively constraining the objective function from above until the feasibility problem is inconsistent. Any of the existing projection methods may be applied to solve each of the feasibility problems. In particular, the scheme allows the use of subgradient projections and does not require exact projections onto the constraint sets, as existing similar methods do. We also apply the newly introduced concept of superiorization to the optimization formulation and compare its performance to that of our scheme. We provide numerical results for convex quadratic test problems as well as for real-life optimization problems coming from medical treatment planning.
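A minimal numerical sketch of the level-set idea described above: the objective is capped from above by a level $t$, the resulting feasibility problem is attacked by alternating projections, and $t$ is driven down by bisection until the capped problem becomes inconsistent. All sets, functions, and tolerances here are illustrative choices, not taken from the paper.

```python
import numpy as np

def proj_box(x, lo=1.0, hi=2.0):
    """Exact projection onto the constraint set C = [lo, hi]^2."""
    return np.clip(x, lo, hi)

def proj_ball(x, t):
    """Exact projection onto the level set {x : ||x||^2 <= t} (a ball)."""
    r = np.sqrt(t)
    n = np.linalg.norm(x)
    return x if n <= r else x * (r / n)

def feasible(t, iters=500, tol=1e-6):
    """Alternating projections: does the level set meet the box?"""
    x = np.zeros(2)
    for _ in range(iters):
        x = proj_ball(proj_box(x), t)
    # x lies in the ball; it is a common point iff it is (almost) in the box
    return np.linalg.norm(x - proj_box(x)) < tol

def level_set_minimize(lo=0.0, hi=10.0, bisections=40):
    """Bisect on the level t until the capped problem is barely consistent."""
    for _ in range(bisections):
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid        # still consistent: tighten the cap
        else:
            lo = mid        # inconsistent: relax the cap
    return hi               # approximate minimal value over C

print(round(level_set_minimize(), 4))  # min of ||x||^2 over [1,2]^2 is 2
```

The feasibility subproblems here use exact projections for simplicity; the scheme in the abstract notably also admits subgradient projections.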
Jan 03 2018 math.OC
The Douglas-Rachford (DR) algorithm is an iterative procedure that uses sequential reflections with respect to convex sets and has become popular for convex feasibility problems. In this paper we propose a structural generalization that allows the use of $r$-sets-DR operators in a cyclic fashion. We prove convergence and illustrate the advantage of such operators with $r>2$ over the classical $2$-sets-DR operators in a cyclic algorithm.
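For reference, a minimal sketch of the classical $2$-sets-DR operator (not the paper's $r$-sets cyclic variant): with reflections $R_A = 2P_A - I$, the DR operator is $T = \tfrac{1}{2}(I + R_B R_A)$, and the shadow sequence $P_A(x_n)$ approaches $A \cap B$. The two sets below, a vertical and a horizontal line in $\mathbb{R}^2$, are illustrative.

```python
import numpy as np

def P_A(x):  # projection onto the line {x : x[0] = 1}
    return np.array([1.0, x[1]])

def P_B(x):  # projection onto the line {x : x[1] = 2}
    return np.array([x[0], 2.0])

def reflect(P, x):
    """Reflection with respect to the set whose projector is P."""
    return 2.0 * P(x) - x

x = np.array([5.0, -3.0])
for _ in range(100):
    # one 2-sets-DR step: T = (I + R_B R_A) / 2
    x = 0.5 * (x + reflect(P_B, reflect(P_A, x)))

print(np.round(P_A(x), 6))  # shadow point ≈ the intersection (1, 2)
```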
Jan 03 2018 math.OC
The subgradient extragradient method for solving the variational inequality (VI) problem, introduced by Censor et al. \cite{CGR}, replaces the second projection of the extragradient method, the one onto the feasible set of the VI, with a subgradient projection onto some constructible half-space. Since the method was introduced, many authors have proposed extensions and modifications with applications to various problems. In this paper, we introduce a modified subgradient extragradient method by improving the stepsize of its second step. Convergence of the proposed method is proved under standard and mild conditions, and preliminary numerical experiments illustrate the performance and advantage of this new subgradient extragradient variant.
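A sketch of the basic subgradient extragradient template (without the stepsize modification the abstract proposes): the second projection onto $C$ is replaced by a projection onto the half-space $T_n$ supporting $C$ at $y_n$. The map $F(x) = x$ and the box $C$ are illustrative; for this choice the VI solution is the point of $C$ closest to the origin.

```python
import numpy as np

def F(x):
    return x  # monotone and 1-Lipschitz

def proj_C(x):  # exact projection onto C = [1, 2]^2
    return np.clip(x, 1.0, 2.0)

def proj_halfspace(w, a, y):
    """Project w onto {z : <a, z - y> <= 0}; a = 0 means the whole space."""
    s = a @ (w - y)
    return w if (a @ a == 0 or s <= 0) else w - (s / (a @ a)) * a

tau = 0.5  # stepsize, below 1/L for the L-Lipschitz map F
x = np.array([4.0, -7.0])
for _ in range(200):
    z = x - tau * F(x)
    y = proj_C(z)                      # first (exact) projection onto C
    a = z - y                          # outward normal of the half-space T_n
    x = proj_halfspace(x - tau * F(y), a, y)  # second, cheap projection

print(np.round(x, 4))  # ≈ (1, 1), the solution of VI(F, C)
```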
Dec 12 2017 math.OC
Numerous problems in signal processing and imaging, statistical learning and data mining, or computer vision can be formulated as optimization problems which consist in minimizing a sum of convex functions, not necessarily differentiable, possibly composed with linear operators, and that in turn can be transformed into split feasibility problems (SFP); see for example \cite{ce94}. Each function is typically either a data fidelity term or a regularization term enforcing some properties on the solution; see for example \cite{cpp09} and references therein. In this paper we are interested in split feasibility problems which can be seen as a general form of the $Q$-Lasso introduced in \cite{aasnx13}, which extended the well-known Lasso of Tibshirani \cite{Tibshirani96}. Here $Q$ is a closed convex subset of a Euclidean $m$-space, for some integer $m\geq1$, that can be interpreted as the set of errors within a given tolerance level when linear measurements are taken to recover a signal/image via the Lasso. Inspired by recent works by Lou et al. \cite{ly, xcxz12}, we are interested in a nonconvex regularization of the SFP and propose three split algorithms for solving this general case. The first one is based on the DC (difference of convex functions) algorithm (DCA) introduced by Pham Dinh Tao, the second one is nothing else than the celebrated forward-backward algorithm, and the third one uses a method introduced by Mine and Fukushima. It is worth mentioning that the SFP models a number of applied problems arising from signal/image processing, and especially optimization problems for intensity-modulated radiation therapy (IMRT) treatment planning; see for example \cite{cbmt06}.
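The second algorithm mentioned above, the forward-backward (proximal gradient) scheme, is easy to sketch on the classical convex Lasso $\min \tfrac12\|Ax-b\|^2 + \mu\|x\|_1$, a standard special case rather than the paper's nonconvex regularization. The data below are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([3.0, 2.0])
mu = 0.5
lam = 1.0 / np.linalg.norm(A.T @ A, 2)   # stepsize 1/L with L = ||A^T A||

x = np.zeros(2)
for _ in range(500):
    grad = A.T @ (A @ x - b)                      # forward (gradient) step
    x = soft_threshold(x - lam * grad, lam * mu)  # backward (prox) step

print(np.round(x, 4))  # ≈ [2.5, 0.875] for this separable toy problem
```

Since the problem is separable, the minimizer can be checked by hand: the first coordinate solves $\min \tfrac12(x-3)^2 + \tfrac12|x|$, giving $2.5$, and the second solves $\min \tfrac12(2x-2)^2 + \tfrac12|x|$, giving $0.875$.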
Nov 07 2017 math.OC
In this paper we study the bounded perturbation resilience of projection and contraction algorithms for solving variational inequality (VI) problems in real Hilbert spaces. Under typical and standard assumptions of monotonicity and Lipschitz continuity of the VI's associated mapping, convergence of the perturbed projection and contraction algorithms is proved. Based on the bounded perturbation resilience of projection and contraction algorithms, we present some inertial projection and contraction algorithms. In addition we show that the perturbed algorithms converge at the rate of $O(1/t)$.
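A sketch of one common projection and contraction step for VI$(F, C)$, in the unperturbed form: $y = P_C(x - \tau F(x))$, $d = (x - y) - \tau(F(x) - F(y))$, $x^{+} = x - \gamma \beta d$ with $\beta = \langle x - y, d\rangle / \|d\|^2$. The map $F$, the set $C$, and the parameters $\tau, \gamma$ are illustrative choices.

```python
import numpy as np

def F(x):
    return x + np.array([1.0, -1.0])   # monotone affine map, 1-Lipschitz

def proj_C(x):                          # C = unit box [0, 1]^2
    return np.clip(x, 0.0, 1.0)

tau, gamma = 0.4, 1.8                   # tau < 1/L, relaxation gamma in (0, 2)
x = np.array([5.0, 5.0])
for _ in range(300):
    y = proj_C(x - tau * F(x))          # projection step
    d = (x - y) - tau * (F(x) - F(y))   # contraction direction
    dd = d @ d
    if dd < 1e-16:                      # x is already a solution
        break
    beta = ((x - y) @ d) / dd
    x = x - gamma * beta * d            # contraction step

print(np.round(x, 4))  # ≈ (0, 1), the solution of this VI over the box
```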
Nov 07 2017 math.OC
In this paper we study the bounded perturbation resilience of the extragradient and the subgradient extragradient methods for solving the variational inequality (VI) problem in real Hilbert spaces. This is an important property of algorithms which guarantees the convergence of the scheme under summable errors, meaning that an inexact version of the methods can also be considered. Moreover, once an algorithm is proved to be bounded perturbation resilient, superiorization can be used, and this allows flexibility in choosing the bounded perturbations in order to obtain a superior solution, as is explained in the paper. We also discuss some inertial extragradient methods. Under mild and standard assumptions of monotonicity and Lipschitz continuity of the VI's associated mapping, convergence of the perturbed extragradient and subgradient extragradient methods is proved. In addition we show that the perturbed algorithms converge at the rate of $O(1/t)$. Numerical illustrations are given to demonstrate the performance of the algorithms.
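A sketch of an inertial extragradient step of the kind discussed above: an extrapolation $w = x_n + \alpha(x_n - x_{n-1})$, which can be viewed as a bounded perturbation, precedes the usual two projection steps of the extragradient method. The map $F$, the set $C$, and the parameters $\alpha, \tau$ are illustrative.

```python
import numpy as np

def F(x):
    return x + np.array([2.0, 0.0])    # monotone, 1-Lipschitz

def proj_C(x):                          # C = box [-1, 1]^2
    return np.clip(x, -1.0, 1.0)

tau, alpha = 0.3, 0.2                   # stepsize and inertial factor
x_prev = np.array([5.0, 5.0])
x = np.array([5.0, 5.0])
for _ in range(500):
    w = x + alpha * (x - x_prev)        # inertial (perturbation) step
    y = proj_C(w - tau * F(w))          # first projection
    x_prev, x = x, proj_C(w - tau * F(y))  # second (extragradient) projection

print(np.round(x, 4))  # ≈ (-1, 0), the unique solution of this VI
```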
Feb 06 2017 math.OC
In this paper we study variational inequalities in a real Hilbert space, which are governed by a strongly monotone and Lipschitz continuous operator $F$ over a closed and convex set $C$. We assume that the set $C$ can be approximated from the outside by the fixed point sets of a sequence of certain quasi-nonexpansive operators called cutters. We propose an iterative method whose main idea is to project at each step onto a particular half-space constructed by using the input data. Our approach is based on a method presented by Fukushima in 1986, which has recently been extended by several authors. In the present paper we establish strong convergence in Hilbert space. We emphasize that, to the best of our knowledge, Fukushima's method has so far been considered only in the Euclidean setting with different conditions on $F$. We provide several examples for the case where $C$ is the common fixed point set of a finite number of cutters with numerical illustrations of our theoretical results.
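A sketch of a Fukushima-type step of the kind described above: the set $C$ (here, the unit ball) is handled through a cutter $T = P_C$, and each iterate is projected onto the half-space $H_n = \{z : \langle x_n - Tx_n, z - Tx_n\rangle \le 0\}$, which contains $C$. The strongly monotone map $F$ and the diminishing stepsizes are illustrative choices.

```python
import numpy as np

def F(x):
    return x - np.array([3.0, 3.0])     # strongly monotone, Lipschitz

def T(x):                               # cutter: exact projection onto unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.array([4.0, 0.0])
for n in range(20000):
    lam = 1.0 / (n + 1)                 # lam_n -> 0, sum of lam_n = infinity
    y = T(x)
    a = x - y                           # normal of the half-space H_n containing C
    g = x - lam * F(x)                  # steepest-descent trial point
    s = a @ (g - y)
    # project g onto H_n (a = 0 means x is in C and H_n is the whole space)
    x = g if (a @ a == 0 or s <= 0) else g - (s / (a @ a)) * a

print(np.round(x, 3))  # ≈ (1, 1)/sqrt(2), the solution of this VI over the ball
```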
Oct 11 2016 math.OC
Multicriteria optimization problems occur in many real life applications, for example in cancer radiotherapy treatment and in particular in intensity modulated radiation therapy (IMRT). In this work we focus on optimization problems with multiple objectives that are ranked according to their importance. We solve these problems numerically by combining lexicographic optimization with our recently proposed level set scheme, which yields a sequence of auxiliary convex feasibility problems, solved here via projection methods. The projection enables us to combine the newly introduced superiorization methodology with multicriteria optimization methods to speed up computation while guaranteeing convergence of the optimization. We demonstrate our scheme with a simple 2D academic example (used in the literature) and also present results from calculations on four real head-and-neck cases in IMRT (Radiation Oncology of the Ludwig-Maximilians University, Munich, Germany) for two different choices of superiorization parameter sets, suited to yield either fast convergence for each case individually or robust behavior for all four cases.
Jun 21 2016 math.OC
The implicit convex feasibility problem attempts to find a point in the intersection of a finite family of convex sets, some of which are not explicitly determined but may vary. We develop simultaneous and sequential projection methods capable of handling such problems and demonstrate their applicability to image denoising in a specific medical imaging situation. By allowing the variable sets to undergo scaling, shifting and rotation, this work generalizes previous results in which the implicit convex feasibility problem was used for cooperative wireless sensor network positioning, where the sets are balls and their centers are implicit.
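A simultaneous (Cimmino-type) projection sketch for a feasibility problem whose sets are balls with shifted centers, echoing the sensor-network setting mentioned above; each iterate is the average of the projections onto all sets. The centers and radii are illustrative.

```python
import numpy as np

def proj_ball(x, center, radius):
    """Exact projection onto the ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + d * (radius / n)

# two balls with a nonempty intersection (a lens around (1.5, 0))
balls = [(np.array([0.0, 0.0]), 2.0), (np.array([3.0, 0.0]), 2.0)]

x = np.array([10.0, 7.0])
for _ in range(1000):
    # simultaneous step: average of the projections onto all sets
    x = np.mean([proj_ball(x, c, r) for c, r in balls], axis=0)

# report whether x (approximately) lies in every ball
print([float(np.linalg.norm(x - c)) <= r + 1e-6 for c, r in balls])
```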
Apr 03 2013 math.OC
This paper is concerned with the variational inequality problem (VIP) over the fixed point set of a quasi-nonexpansive operator. We propose, in particular, an algorithm which entails, at each step, projecting onto a suitably chosen half-space, and prove that the sequences it generates converge to the unique solution of the VIP. We also present an application of our result to a hierarchical optimization problem.
Modifying von Neumann's alternating projections algorithm, we obtain an alternating method for solving the recently introduced Common Solutions to Variational Inequalities Problem (CSVIP). For simplicity, we mainly confine our attention to the two-set CSVIP, which entails finding common solutions to two unrelated variational inequalities in Hilbert space.
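The classical von Neumann scheme behind the method above alternates exact projections between two closed subspaces, and the iterates converge to a point of the intersection. The two lines in $\mathbb{R}^2$ below are illustrative; their only common point is the origin.

```python
import numpy as np

def P_A(x):                       # projection onto the x-axis
    return np.array([x[0], 0.0])

def P_B(x):                       # projection onto the line {y = x}
    m = 0.5 * (x[0] + x[1])
    return np.array([m, m])

x = np.array([4.0, -2.0])
for _ in range(100):
    x = P_A(P_B(x))               # one alternating-projections sweep

print(np.round(x, 6))  # → [0. 0.], the only common point of the two lines
```

For subspaces the convergence rate is governed by the angle between them; here each sweep halves the iterate, so the convergence is linear.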
We introduce and study the Split Common Null Point Problem (SCNPP) for set-valued maximal monotone mappings in Hilbert spaces. This problem generalizes our Split Variational Inequality Problem (SVIP) [Y. Censor, A. Gibali and S. Reich, Algorithms for the split variational inequality problem, Numerical Algorithms 59 (2012), 301--323]. The SCNPP with only two set-valued mappings entails finding a zero of a maximal monotone mapping in one space, the image of which under a given bounded linear transformation is a zero of another maximal monotone mapping. We present four iterative algorithms that solve such problems in Hilbert spaces, and establish weak convergence for one and strong convergence for the other three.
We propose a prototypical Split Inverse Problem (SIP) and a new variational problem, called the Split Variational Inequality Problem (SVIP), which is a SIP. It entails finding a solution of one inverse problem (e.g., a Variational Inequality Problem (VIP)), the image of which under a given bounded linear transformation is a solution of another inverse problem such as a VIP. We construct iterative algorithms that solve such problems, under reasonable conditions, in Hilbert space and then discuss special cases, some of which are new even in Euclidean space.
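When both "inverse problems" in the split structure above are plain feasibility problems (find $x \in C$ with $Ax \in Q$), one arrives at the classical CQ iteration $x^{+} = P_C\big(x - \gamma A^T(Ax - P_Q(Ax))\big)$ with $0 < \gamma < 2/\|A\|^2$, sketched below. The sets, matrix and stepsize are illustrative.

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 1.0]])

def P_C(x):  # C = box [0, 1]^2 in the domain space
    return np.clip(x, 0.0, 1.0)

def P_Q(y):  # Q = ball of radius 1 around (2, 0) in the image space
    c = np.array([2.0, 0.0])
    d = y - c
    n = np.linalg.norm(d)
    return y if n <= 1.0 else c + d / n

gamma = 0.4          # below 2 / ||A||^2 = 0.5
x = np.array([-3.0, 5.0])
for _ in range(2000):
    r = A @ x - P_Q(A @ x)          # residual of the image-space constraint
    x = P_C(x - gamma * (A.T @ r))  # gradient-projection step

print(np.round(x, 3), np.round(A @ x, 3))  # x in C with A x in Q
```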