Hindawi Publishing Corporation
Fixed Point Theory and Applications
Volume 2009, Article ID 957407, 47 pages
doi:10.1155/2009/957407

Review Article
Super-Relaxed (η)-Proximal Point Algorithms, Relaxed (η)-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear Variational Inclusions

Ravi P. Agarwal and Ram U. Verma

Department of Mathematical Sciences, Florida Institute of Technology, Melbourne, FL 32901, USA
Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
International Publications (USA), 12085 Lake Cypress Circle, Suite I109, Orlando, FL 32828, USA

Correspondence should be addressed to Ravi P. Agarwal, agarwal@fit.edu

Received 26 June 2009; Accepted 30 August 2009

Recommended by Lai-Jiu Lin

We glance at recent advances in the general theory of maximal (set-valued) monotone mappings and their role in examining convex programming and the closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed (η)-proximal point algorithm to solving a class of nonlinear variational inclusion problems, based on the notion of maximal (η)-monotonicity. The investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while others have played a significant part as well in generalizing the proximal point algorithm considered by Rockafellar (1976) to the relaxed proximal point algorithm of Eckstein and Bertsekas (1992). Even for the linear convergence analysis of the over-relaxed (or super-relaxed) (η)-proximal point algorithm, the fundamental model for Rockafellar's case does the job. Furthermore, we attempt to explore possibilities of generalizing the Yosida regularization/approximation in light of maximal (η)-monotonicity, and then of applying it to first-order evolution equations/inclusions.

Copyright © 2009 R. P. Agarwal and R. U. Verma. This is an open access article distributed under
the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction and Preliminaries

We begin with a real Hilbert space X with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$. We consider the general variational inclusion problem of the following form: find a solution to

$$0 \in M(x), \tag{1.1}$$

where $M : X \to 2^X$ is a set-valued mapping on X.

In the first part of [1], Rockafellar introduced the proximal point algorithm and examined the general convergence and rate-of-convergence analysis for solving (1.1), showing that when M is maximal monotone, the sequence $\{x^k\}$ generated from an initial point $x^0$ by

$$x^{k+1} \approx P_k(x^k) \tag{1.2}$$

converges weakly to a solution of (1.1), provided the approximation is made sufficiently accurate as the iteration proceeds, where $P_k = (I + c_k M)^{-1}$ for a sequence $\{c_k\}$ of positive real numbers bounded away from zero; in the second part, using the first part and further amending the proximal point algorithm, he succeeded in achieving linear convergence. It follows from (1.2) that $x^{k+1}$ is an approximate solution to the inclusion problem

$$0 \in M(x) + c_k^{-1}(x - x^k). \tag{1.3}$$

As a matter of fact, Rockafellar demonstrated the weak convergence and the strong convergence separately in two theorems, but for the strong convergence a further imposition of the Lipschitz continuity of $M^{-1}$ at 0 plays the crucial part. Let us recall these results.

Theorem 1.1 (see [1]). Let X be a real Hilbert space, let $M : X \to 2^X$ be maximal monotone, and let $x^*$ be a zero of M. Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} \approx J_{c_k}^M(x^k) \tag{1.4}$$

such that

$$\|x^{k+1} - J_{c_k}^M(x^k)\| \le \epsilon_k, \tag{1.5}$$

where $J_{c_k}^M = (I + c_k M)^{-1}$, $\sum_{k=0}^{\infty}\epsilon_k < \infty$, and $\{c_k\}$ is bounded away from zero. Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$. Then the sequence $\{x^k\}$ converges weakly to $x^*$ with $0 \in M(x^*)$ and

$$\lim_{k\to\infty} Q_k(x^k) = 0 \quad \text{for } Q_k = I - J_{c_k}^M. \tag{1.6}$$

Remark 1.2. Note that Rockafellar [1] pointed out by a counterexample that the summability condition $\sum_{k=0}^{\infty}\epsilon_k < \infty$ in Theorem 1.1 cannot be relaxed.
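Rockafellar's inexact scheme (1.2)-(1.5) can be sketched numerically. The following is a minimal illustration, not code from the paper: it takes the maximal monotone scalar map $M(x) = x^3$, whose unique zero is $x^* = 0$, and evaluates the resolvent $(I + c_k M)^{-1}$ by bisection; the helper names are hypothetical.

```python
# Illustrative sketch (not from the paper) of the proximal point
# algorithm x^{k+1} ~ (I + c_k M)^{-1}(x^k) for the maximal monotone
# scalar map M(x) = x**3, whose unique zero is x* = 0.

def resolvent(x, c, tol=1e-12):
    """Return y = (I + c M)^{-1}(x), i.e. solve y + c*y**3 = x by bisection."""
    lo, hi = -abs(x), abs(x)          # y + c*y**3 is increasing; the root lies in [-|x|, |x|]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + c * mid ** 3 < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def proximal_point(x0, steps=40):
    x = x0
    for k in range(steps):
        c_k = 2.0 ** k                # {c_k} positive and bounded away from zero
        x = resolvent(x, c_k)         # here eps_k is only the bisection tolerance
    return x

print(proximal_point(2.0))            # tends to the zero x* = 0
```

Theorem 1.1 only guarantees weak convergence in general; in this one-dimensional example the iterates of course converge strongly to $x^* = 0$.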
For the relaxed proximal point algorithm of Eckstein and Bertsekas [2] (Theorem 1.3), if

$$\Delta_1 = \sum_{k=0}^{\infty} e_k < \infty, \qquad \Delta_2 = \sup_{k\ge0}\alpha_k < 2, \qquad c = \inf_{k\ge0}c_k > 0, \tag{1.20}$$

then the sequence $\{x^k\}$ converges weakly to a zero of M. Convergence analysis for this algorithm is achieved using the notion of firm nonexpansiveness of the resolvent operator $(I + c_k M)^{-1}$. Somehow, they have not considered applying the algorithm of Theorem 1.3 to the case of linear convergence. The nonexpansiveness of the resolvent operator $(I + c_k M)^{-1}$ poses the prime difficulty for algorithmic convergence, and this may have been the real steering for Rockafellar toward the Lipschitz continuity of $M^{-1}$ instead. That is why the Yosida approximation turned out to be more effective in this scenario: the Yosida approximation

$$M_{c_k} = c_k^{-1}\big(I - (I + c_k M)^{-1}\big) \tag{1.21}$$

takes care of the Lipschitz continuity issue.

As we look back into the literature, general maximal monotonicity has played a greater role in studying convex programming as well as variational inequalities/inclusions. It later turned out that one of the most fundamental algorithms applied to solve these problems was the proximal point algorithm. In [2], Eckstein and Bertsekas have shown that much of the theory of the relaxed proximal point algorithm and related algorithms can be passed along to the Douglas-Rachford splitting method and its specializations, for instance the alternating direction method of multipliers. Just recently, Verma [3] generalized the relaxed proximal point algorithm and applied it to the approximation solvability of variational inclusion problems of the form (1.1). Recently, a great deal of research on the solvability of inclusion problems has been carried out using resolvent operator techniques, which have applications to other problems such as equilibrium problems in economics, optimization and control theory, operations research, and mathematical programming.

In this survey, we first discuss in detail the history of proximal point algorithms with their applications
to general nonlinear variational inclusion problems, and then we recall some significant developments, especially the relaxation of proximal point algorithms with applications to the Douglas-Rachford splitting method. At the second stage, we turn our attention to over-relaxed proximal point algorithms and their contribution to linear convergence. We start with some introductory material on the over-relaxed (η)-proximal point algorithm based on the notion of maximal (η)-monotonicity, and recall some investigations on the approximation solvability of a general class of nonlinear inclusion problems involving maximal (η)-monotone mappings in a Hilbert space setting. As a matter of fact, we examine the convergence analysis of the over-relaxed (η)-proximal point algorithm for solving a class of nonlinear inclusions. Also, several results on the generalized firm nonexpansiveness and generalized resolvent mappings are given. Furthermore, we explore the real impact of recently obtained results on the celebrated work of Rockafellar, most importantly in the case of over-relaxed (or super-relaxed) proximal point algorithms. For more details, we refer the reader to [1-55].

We note that the solution set of (1.1) turns out to be the same as that of the Yosida inclusion

$$0 \in M_\rho(x), \tag{1.22}$$

where $M_\rho = M(I + \rho M)^{-1}$ is the Yosida regularization of M, while there is an equivalent form $\rho^{-1}\big(I - (I + \rho M)^{-1}\big)$, characterized as the Yosida approximation of M with parameter $\rho > 0$. It seems in certain ways easier to solve the Yosida inclusion than (1.1); in other words, $M_\rho$ provides better solvability conditions under the right choice of ρ than M itself. To prove this assertion, let us recall the following existence theorem.

Theorem 1.6. Let $M : X \to 2^X$ be a set-valued maximal monotone mapping on X. Then the following statements are equivalent:
(i) an element $u \in X$ is a solution to $0 \in M_\rho(u)$;
(ii) $u = (I + \rho M)^{-1}(u)$.

Proof. Assume that u is a solution to $0 \in M_\rho(u)$. Then we have
$$0 \in M\big((I + \rho M)^{-1}(u)\big) \ \Rightarrow\ 0 \in \rho M\big((I + \rho M)^{-1}(u)\big) \ \Rightarrow\ u \in (I + \rho M)^{-1}(u) + \rho M\big((I + \rho M)^{-1}(u)\big) \ \Rightarrow\ u = (I + \rho M)^{-1}(u). \tag{1.23}$$

On the other hand, $M_\rho$ has also been applied to first-order evolution equations/inclusions in Hilbert space as well as in Banach space settings. As in our present situation the resolvent operator $(I + \rho M)^{-1}$ is empowered by (η)-maximal monotonicity, the Yosida approximation can be generalized in the context of solving first-order evolution equations/inclusions. In Zeidler [52, Lemma 31.7], it is shown that the Yosida approximation $M_\rho$ is $(2/\rho)$-Lipschitz continuous, that is,

$$\|M_\rho(x) - M_\rho(y)\| \le \frac{2}{\rho}\,\|x - y\| \quad \forall x, y \in D(M), \tag{1.24}$$

where this inequality is based on the nonexpansiveness of the resolvent operator $R^M_\rho = (I + \rho M)^{-1}$, though the result does not seem to be much application oriented. If instead we apply the firm nonexpansiveness of the resolvent operator $R^M_\rho = (I + \rho M)^{-1}$, as in [2], we can achieve the following more application-oriented results:

$$\langle x - y,\ M_\rho(x) - M_\rho(y)\rangle \ \ge\ \rho\,\|M_\rho(x) - M_\rho(y)\|^2, \qquad \|M_\rho(x) - M_\rho(y)\| \le \frac{1}{\rho}\,\|x - y\| \quad \forall x, y \in D(M), \tag{1.25}$$

where the Lipschitz constant is $1/\rho$.

Proof. For any $x, y \in D(M)$, we have

$$x - y = \rho\big(M_\rho(x) - M_\rho(y)\big) + \big(R^M_\rho(x) - R^M_\rho(y)\big). \tag{1.26}$$

Based on this equality and the firm nonexpansiveness of $R^M_\rho$, we derive

$$\begin{aligned} \langle x - y,\ M_\rho(x) - M_\rho(y)\rangle &= \rho\,\|M_\rho(x) - M_\rho(y)\|^2 + \langle R^M_\rho(x) - R^M_\rho(y),\ M_\rho(x) - M_\rho(y)\rangle \\ &= \rho\,\|M_\rho(x) - M_\rho(y)\|^2 + \rho^{-1}\big\langle R^M_\rho(x) - R^M_\rho(y),\ x - y - \big(R^M_\rho(x) - R^M_\rho(y)\big)\big\rangle \\ &\ge \rho\,\|M_\rho(x) - M_\rho(y)\|^2, \end{aligned} \tag{1.27}$$

since firm nonexpansiveness gives $\langle R^M_\rho(x) - R^M_\rho(y),\ x - y\rangle \ge \|R^M_\rho(x) - R^M_\rho(y)\|^2$. Thus, we have

$$\langle x - y,\ M_\rho(x) - M_\rho(y)\rangle \ \ge\ \rho\,\|M_\rho(x) - M_\rho(y)\|^2. \tag{1.28}$$

This completes the proof; note that (1.28) combined with the Cauchy-Schwarz inequality also yields the $(1/\rho)$-Lipschitz bound in (1.25).

We note that, from the applications' point of view, the result

$$\langle x - y,\ M_\rho(x) - M_\rho(y)\rangle \ \ge\ \rho\,\|M_\rho(x) - M_\rho(y)\|^2, \tag{1.29}$$

that is, $M_\rho$ is (ρ)-cocoercive, is relatively more useful than the nonexpansive form

$$\|M_\rho(x) - M_\rho(y)\| \le \frac{1}{\rho}\,\|x - y\| \quad \forall x, y \in D(M). \tag{1.30}$$

It is well known that when M is maximal monotone, the resolvent operator $R^M_\rho = (I + \rho M)^{-1}$ is single-valued and globally Lipschitz continuous (indeed nonexpansive). Furthermore, the inverse resolvent identity $I - (I + \rho M)^{-1} = \big(I + (\rho M)^{-1}\big)^{-1}$ is satisfied. Indeed, the Yosida approximation $M_\rho$
$= \rho^{-1}\big(I - (I + \rho M)^{-1}\big)$ and this identity are closely related. Let us consider

$$I - (I + \rho M)^{-1} = \big(I + (\rho M)^{-1}\big)^{-1} \tag{1.31}$$

and its equivalent form

$$(I + \rho M)^{-1} + \big(I + (\rho M)^{-1}\big)^{-1} = I. \tag{1.32}$$

Suppose that $u \in \big(I - (I + \rho M)^{-1}\big)(w)$; then we have

$$\begin{aligned} u \in \big(I - (I + \rho M)^{-1}\big)(w) &\iff u \in w - (I + \rho M)^{-1}(w) \\ &\iff w - u \in (I + \rho M)^{-1}(w) \\ &\iff w \in (w - u) + \rho M(w - u) \\ &\iff u \in \rho M(w - u) \\ &\iff w - u \in (\rho M)^{-1}(u) \\ &\iff w \in \big(I + (\rho M)^{-1}\big)(u) \\ &\iff u \in \big(I + (\rho M)^{-1}\big)^{-1}(w). \end{aligned} \tag{1.33}$$

On the other hand, we have the inverse resolvent identity that lays the foundation of the Yosida approximation.

Lemma 1.7 (see [26, Lemma 12.14]). All mappings $M : X \to 2^X$ satisfy, for $\rho > 0$,

$$\big(\rho I + M^{-1}\big)^{-1} = \rho^{-1}\Big(I - (I + \rho M)^{-1}\Big). \tag{1.34}$$

Proof. We include the proof, though it is similar to that of the identity above. Assume that $u \in \rho^{-1}\big(I - (I + \rho M)^{-1}\big)(w)$; then we have

$$\begin{aligned} \rho u \in \big(I - (I + \rho M)^{-1}\big)(w) &\iff \rho u \in w - (I + \rho M)^{-1}(w) \\ &\iff w - \rho u \in (I + \rho M)^{-1}(w) \\ &\iff w \in (w - \rho u) + \rho M(w - \rho u) \\ &\iff u \in M(w - \rho u) \\ &\iff w - \rho u \in M^{-1}(u) \\ &\iff w \in \big(\rho I + M^{-1}\big)(u) \\ &\iff u \in \big(\rho I + M^{-1}\big)^{-1}(w), \end{aligned} \tag{1.35}$$

which is the required assertion.

Note that when $M : X \to 2^X$ is maximal monotone, the mappings

$$I - (I + \rho M)^{-1}, \qquad \big(I + (\rho M)^{-1}\big)^{-1} \tag{1.36}$$

are single-valued, in fact maximal monotone and nonexpansive.

The contents of the paper are organized as follows. Section 1 deals with a general historical development of the relaxed proximal point algorithm and its variants in conjunction with maximal (η)-monotonicity, and with the approximation solvability of a class of nonlinear inclusion problems using the convergence analysis for the proximal point algorithm as well as for the relaxed proximal point algorithm. Section 2 introduces and derives some results on unifying maximal (η)-monotonicity and the generalized firm nonexpansiveness of the generalized resolvent operator. In Section 3, the role of the over-relaxed (η)-proximal point algorithm is examined in detail in terms of its applications to approximating the solution of the inclusion problem (1.1). Finally, Section 4 deals with some important specializations that connect the results on general maximal monotonicity, especially to several aspects of the linear convergence.
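Both the inverse resolvent identity of Lemma 1.7 and the ρ-cocoercivity estimate (1.29) can be checked numerically. The sketch below is not from the paper and its helper names are hypothetical; it uses the maximal monotone scalar map $M(x) = x^3$ and evaluates the resolvent by bisection.

```python
# Numeric sanity check (not from the paper) of Lemma 1.7 and of the
# rho-cocoercivity (1.29) for the maximal monotone scalar map M(x) = x**3.

def resolvent(w, rho, tol=1e-13):
    """Return R(w) = (I + rho*M)^{-1}(w): solve y + rho*y**3 = w by bisection."""
    lo, hi = -abs(w), abs(w)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + rho * mid ** 3 < w:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def yosida(w, rho):
    """M_rho(w) = rho^{-1} * (w - R(w)), the Yosida approximation."""
    return (w - resolvent(w, rho)) / rho

rho = 0.5
pts = [-2.0, -0.7, 0.3, 1.1, 2.4]

# Lemma 1.7: u = M_rho(w) satisfies u = M(w - rho*u), i.e. u is the value
# of (rho*I + M^{-1})^{-1} at w.
for w in pts:
    u = yosida(w, rho)
    assert abs(u - (w - rho * u) ** 3) < 1e-8

# (1.29): <x - y, M_rho(x) - M_rho(y)> >= rho * |M_rho(x) - M_rho(y)|^2.
for x in pts:
    for y in pts:
        d = yosida(x, rho) - yosida(y, rho)
        assert (x - y) * d >= rho * d * d - 1e-9

print("Lemma 1.7 and rho-cocoercivity verified on sample points")
```

A finite sample of points is of course no proof; it merely illustrates the two displayed facts on concrete data.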
2. General Maximal η-Monotonicity

In this section we discuss some results based on basic properties of maximal η-monotonicity, and then we derive some results involving η-monotonicity and the generalized firm nonexpansiveness.

Let X denote a real Hilbert space with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$. Let $M : X \to 2^X$ be a multivalued mapping on X. We will denote both the map M and its graph by M, that is, the set $\{(x, y) : y \in M(x)\}$. This is equivalent to stating that a mapping is any subset M of $X \times X$, with $M(x) = \{y : (x, y) \in M\}$. If M is single-valued, we will still use M(x) to represent the unique y such that $(x, y) \in M$ rather than the singleton set $\{y\}$; this interpretation will depend much on the context. The domain of a map M is defined as its projection onto the first argument:

$$\operatorname{dom}(M) = \{x \in X : \exists y \in X \text{ such that } (x, y) \in M\} = \{x \in X : M(x) \ne \emptyset\}. \tag{2.1}$$

The notation $\operatorname{dom}(M) = X$ will indicate that M has full domain. The range of M is defined by

$$\operatorname{range}(M) = \{y \in X : \exists x \in X \text{ such that } (x, y) \in M\}. \tag{2.2}$$

The inverse $M^{-1}$ of M is $\{(y, x) : (x, y) \in M\}$. For a real number ρ and a mapping M, let $\rho M = \{(x, \rho y) : (x, y) \in M\}$. If L and M are any mappings, we define

$$L + M = \{(x, y + z) : (x, y) \in L,\ (x, z) \in M\}. \tag{2.3}$$

Definition 2.1. Let $M : X \to 2^X$ be a multivalued mapping on X. The map M is said to be:

(i) monotone if

$$\langle u^* - v^*,\ u - v\rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M); \tag{2.4}$$

(ii) (r)-strongly monotone if there exists a positive constant r such that

$$\langle u^* - v^*,\ u - v\rangle \ge r\,\|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M); \tag{2.5}$$

(iii) strongly monotone if

$$\langle u^* - v^*,\ u - v\rangle \ge \|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M); \tag{2.6}$$

(iv) (r)-strongly pseudomonotone if

$$\langle v^*,\ u - v\rangle \ge 0 \tag{2.7}$$

implies

$$\langle u^*,\ u - v\rangle \ge r\,\|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M); \tag{2.8}$$

(v) pseudomonotone if

$$\langle v^*,\ u - v\rangle \ge 0 \tag{2.9}$$

implies

$$\langle u^*,\ u - v\rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M); \tag{2.10}$$

(vi) (m)-relaxed monotone if there exists a positive constant m such that

$$\langle u^* - v^*,\ u - v\rangle \ge -m\,\|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M); \tag{2.11}$$

(vii) cocoercive if

$$\langle u^* - v^*,\ u - v\rangle \ge \|u^* - v^*\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M); \tag{2.12}$$

(viii) (c)-cocoercive if there is a positive constant c such that

$$\langle u^* - v^*,\ u - v\rangle \ge c\,\|u^* - v^*\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \tag{2.13}$$
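As a quick illustration of Definition 2.1 (not from the paper), the linear scalar map $M(x) = a\,x$ with $a > 0$ is (a)-strongly monotone in the sense of (ii) and $(1/a)$-cocoercive in the sense of (viii), with equality in both bounds:

```python
# Illustrative check (not from the paper) of parts (ii) and (viii) of
# Definition 2.1 for the linear scalar map M(x) = a*x with a = 3 > 0.

a = 3.0
M = lambda x: a * x

pairs = [(-1.5, 2.0), (0.0, 0.25), (4.0, -3.0)]
for u, v in pairs:
    gap = (M(u) - M(v)) * (u - v)            # <u* - v*, u - v> in dimension one
    assert gap >= a * (u - v) ** 2 - 1e-12                 # (ii): r-strongly monotone, r = a
    assert gap >= (1.0 / a) * (M(u) - M(v)) ** 2 - 1e-12   # (viii): c-cocoercive, c = 1/a
print("M(x) = 3x is 3-strongly monotone and (1/3)-cocoercive")
```

For a general map the two constants need not coincide; (r)-strong monotonicity of M corresponds to (r)-cocoercivity of $M^{-1}$.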
(equivalently, $M^{-1}(0) = \{z^*\}$), and for constants $a \ge 0$ and $b > 0$ one has

$$\|z - z^*\| \le a\,\|w\| \quad \text{whenever } z \in M^{-1}(w),\ \|w\| \le b. \tag{4.40}$$

Here

$$\{\delta_k\}, \{\alpha_k\}, \{c_k\} \subseteq (0, \infty) \tag{4.41}$$

are scalar sequences such that $\delta_k \to 0$ and $\sum_{k=0}^{\infty}\delta_k < \infty$. Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

$$\sqrt{1 - \alpha^*(2 - \alpha^*)\big(1 - d^2\big)} < 1, \tag{4.42}$$

where $d^2 = a^2/\big((c^*)^2 + a^2\big)$, $\alpha^* = \limsup_{k\to\infty}\alpha_k$, and the sequences $\{\alpha_k\}$ and $\{c_k\}$ satisfy $\alpha_k \ge 1$, $c_k \uparrow c^* \le \infty$, $\inf_{k\ge0}\alpha_k > 0$, and $\sup_{k\ge0}\alpha_k < 2$.

Proof. We include the proof for the sake of completeness. Suppose that $x^*$ is a zero of M. For all $k \ge 0$, set

$$J_k^* = I - J_{c_k}^{M,\eta}. \tag{4.43}$$

Therefore $J_k^*(x^*) = 0$. Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of $J_{c_k}^{M,\eta}$, and hence a zero of $J_k^*$. Next, the proof of (4.38) follows from a regular manipulation and the equality

$$u - v = \big(J_{c_k}^{M,\eta}(u) - J_{c_k}^{M,\eta}(v)\big) + \big(J_k^*(u) - J_k^*(v)\big) \quad \forall u, v \in X. \tag{4.44}$$

Before we start establishing linear convergence of the sequence $\{x^k\}$, we express $\{x^k\}$ in light of Algorithm 3.2 as

$$y^k = (1 - \alpha_k)x^k + \alpha_k J_{c_k}^{M,\eta}(x^k) = \big(I - \alpha_k J_k^*\big)(x^k). \tag{4.45}$$

Now we begin examining the boundedness of the sequence $\{x^k\}$, leading to $\|x^{k+1} - x^k\| \to 0$. Next, we estimate using Proposition 2.13 that

$$\begin{aligned}\|y^k - x^*\|^2 &= \|x^k - x^* - \alpha_k J_k^*(x^k)\|^2 \\ &= \|x^k - x^*\|^2 - 2\alpha_k\big\langle x^k - x^*,\ J_k^*(x^k) - J_k^*(x^*)\big\rangle + \alpha_k^2\|J_k^*(x^k)\|^2 \\ &\le \|x^k - x^*\|^2 - 2\alpha_k\|J_k^*(x^k)\|^2 + \alpha_k^2\|J_k^*(x^k)\|^2 \\ &= \|x^k - x^*\|^2 - \alpha_k(2 - \alpha_k)\|J_k^*(x^k)\|^2. \end{aligned} \tag{4.46}$$

Since, under the assumptions, $\alpha_k(2 - \alpha_k) > 0$, it follows that

$$\|y^k - x^*\| \le \|x^k - x^*\|. \tag{4.47}$$

Moreover,

$$\big\|x^{k+1} - \big[(1 - \alpha_k)x^k + \alpha_k J_{c_k}^{M,\eta}(x^k)\big]\big\| = \alpha_k\|y^k - J_{c_k}^{M,\eta}(x^k)\| \le \alpha_k e_k. \tag{4.48}$$

Now we find the estimate leading to the boundedness of the sequence $\{x^k\}$:

$$\|x^{k+1} - x^*\| \le \|y^k - x^*\| + \alpha_k e_k \le \|x^k - x^*\| + \alpha_k e_k \le \|x^0 - x^*\| + \sum_{j=0}^{k}\alpha_j e_j \le \|x^0 - x^*\| + 2\sum_{k=0}^{\infty}e_k. \tag{4.49}$$

Thus, the sequence $\{x^k\}$ is bounded. We further examine the estimate
$$\begin{aligned}\|x^{k+1} - x^*\|^2 &\le \big(\|y^k - x^*\| + \|x^{k+1} - y^k\|\big)^2 \\ &= \|y^k - x^*\|^2 + 2\|y^k - x^*\|\,\|x^{k+1} - y^k\| + \|x^{k+1} - y^k\|^2 \\ &\le \|x^k - x^*\|^2 - \alpha_k(2 - \alpha_k)\|J_k^*(x^k)\|^2 + 2\|x^k - x^*\|\,\alpha_k e_k + \alpha_k^2 e_k^2 \\ &\le \|x^k - x^*\|^2 - \alpha_k(2 - \alpha_k)\|J_k^*(x^k)\|^2 + \tilde e_k, \end{aligned} \tag{4.50}$$

where $\tilde e_k := \big(4\|x^0 - x^*\| + 8\sum_{j=0}^{\infty}e_j + 4e_k\big)e_k$ and $\alpha_k(2 - \alpha_k) > 0$. Since $\{e_k\}$ is summable, so is $\{\tilde e_k\}$, and hence

$$\sum_{j=0}^{\infty}\alpha_j(2 - \alpha_j)\|J_j^*(x^j)\|^2 < \infty \ \Rightarrow\ \lim_{k\to\infty}J_k^*(x^k) = 0, \tag{4.51}$$

that is, $x^k - J_{c_k}^{M,\eta}(x^k) \to 0$. Now we find the estimate leading to the boundedness of the sequence $\{x^k\}$:

$$\|x^{k+1} - x^*\| \le \|y^k - x^*\| + \alpha_k e_k \le \|x^k - x^*\| + \alpha_k e_k \le \|x^0 - x^*\| + \sum_{j=0}^{k}\alpha_j e_j \le \|x^0 - x^*\| + 2\sum_{k=0}^{\infty}e_k. \tag{4.52}$$

Thus, the sequence $\{x^k\}$ is bounded. We further examine the estimate

$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \alpha_k(2 - \alpha_k)\|J_k^*(x^k)\|^2 + \tilde e_k, \tag{4.53}$$

where $\alpha_k(2 - \alpha_k) > 0$. Since $\{e_k\}$ is summable, so is $\{\tilde e_k\}$, and hence

$$\sum_{j=0}^{\infty}\alpha_j(2 - \alpha_j)\|J_j^*(x^j)\|^2 < \infty \ \Rightarrow\ \lim_{k\to\infty}J_k^*(x^k) = 0, \tag{4.54}$$

that is, $x^k - J_{c_k}^{M,\eta}(x^k) \to 0$.

Now we turn our attention, using the previous argument, to the linear convergence of the sequence $\{x^k\}$. Since $\lim_{k\to\infty}J_k^*(x^k) = 0$, it follows for k large that $c_k^{-1}J_k^*(x^k) \in M\big(J_{c_k}^{M,\eta}(x^k)\big)$. Moreover, $\|c_k^{-1}J_k^*(x^k)\| \le b$ for $k \ge \bar k$ and $b > 0$. Therefore, in light of (4.40), by taking $w = c_k^{-1}J_k^*(x^k)$ and $z = J_{c_k}^{M,\eta}(x^k)$, we have

$$\|J_{c_k}^{M,\eta}(x^k) - x^*\| \le a\,\|c_k^{-1}J_k^*(x^k)\| \quad \forall k \ge \bar k. \tag{4.55}$$

Applying (4.38), we arrive at

$$\|J_{c_k}^{M,\eta}(x^k) - x^*\| \le \frac{a}{\sqrt{c_k^2 + a^2}}\,\|x^k - x^*\| \quad \forall k \ge \bar k, \tag{4.56}$$

where $J_{c_k}^{M,\eta}(x^*) = x^*$. Since $y^k = (1 - \alpha_k)x^k + \alpha_k J_{c_k}^{M,\eta}(x^k)$, we estimate using (4.35) and $\alpha_k \ge 1$ that

$$\begin{aligned}\|y^k - x^*\|^2 &= (1 - \alpha_k)^2\|x^k - x^*\|^2 + 2\alpha_k(1 - \alpha_k)\big\langle x^k - x^*,\ J_{c_k}^{M,\eta}(x^k) - x^*\big\rangle + \alpha_k^2\|J_{c_k}^{M,\eta}(x^k) - x^*\|^2 \\ &\le (1 - \alpha_k)^2\|x^k - x^*\|^2 + \alpha_k(2 - \alpha_k)\|J_{c_k}^{M,\eta}(x^k) - x^*\|^2 \\ &\le \Big[(1 - \alpha_k)^2 + \alpha_k(2 - \alpha_k)\frac{a^2}{c_k^2 + a^2}\Big]\|x^k - x^*\|^2, \end{aligned} \tag{4.57}$$

where the first inequality uses $\alpha_k \ge 1$ together with the generalized firm nonexpansiveness $\langle x^k - x^*, J_{c_k}^{M,\eta}(x^k) - x^*\rangle \ge \|J_{c_k}^{M,\eta}(x^k) - x^*\|^2$. Hence, we have
$$\|y^k - x^*\| \le \theta_k\,\|x^k - x^*\|, \tag{4.58}$$

where

$$\theta_k = \sqrt{(1 - \alpha_k)^2 + \alpha_k(2 - \alpha_k)\frac{a^2}{c_k^2 + a^2}} < 1 \tag{4.59}$$

for $\alpha_k(2 - \alpha_k) > 0$ and $\alpha_k \ge 1$. Since Algorithm 3.2 ensures

$$\|y^k - J_{c_k}^{M,\eta}(x^k)\| \le \delta_k\|y^k - x^k\|, \qquad \alpha_k(y^k - x^k) = x^{k+1} - x^k, \tag{4.60}$$

we have

$$\begin{aligned}\|x^{k+1} - y^k\| &= \alpha_k\|y^k - J_{c_k}^{M,\eta}(x^k)\| \le \alpha_k\delta_k\|y^k - x^k\| = \delta_k\|x^{k+1} - x^k\|, \\ \|x^{k+1} - x^*\| &\le \|y^k - x^*\| + \delta_k\|x^{k+1} - x^k\| \le \theta_k\|x^k - x^*\| + \delta_k\|x^{k+1} - x^*\| + \delta_k\|x^k - x^*\|. \end{aligned} \tag{4.61}$$

It follows that

$$\|x^{k+1} - x^*\| \le \frac{\theta_k + \delta_k}{1 - \delta_k}\,\|x^k - x^*\|, \tag{4.62}$$

where

$$\limsup_{k\to\infty}\frac{\theta_k + \delta_k}{1 - \delta_k} = \limsup_{k\to\infty}\theta_k = \sqrt{1 - \alpha^*(2 - \alpha^*)\big(1 - d^2\big)} < 1 \tag{4.63}$$

for $d^2 = a^2/\big((c^*)^2 + a^2\big)$.

Note that if we set $\eta(x, y) = x - y$ in Theorem 4.3, we get a result connecting to the linear convergence setting, while the algorithm remains over-relaxed (or super-relaxed). In this context, we state the following results before we start examining Theorem 4.7, the main result on linear convergence in the maximal monotone setting. Note that, based on Proposition 4.6, the notions of cocoercivity and firm nonexpansiveness coincide, though it is well known that their usage may differ much depending on the context.

Theorem 4.4. Let X be a real Hilbert space, and let $M : X \to 2^X$ be maximal monotone. Then the following statements are mutually equivalent:
(i) an element $u \in X$ is a solution to (1.1);
(ii) for $u \in X$, one has

$$u = J_c^M(u) \quad \text{for } c > 0, \tag{4.64}$$

where

$$J_c^M(u) = (I + cM)^{-1}(u). \tag{4.65}$$

Proof. It follows from the definition of the generalized resolvent operator corresponding to M.

Next, we present the super-relaxed proximal point algorithm based on maximal monotonicity.

Algorithm 4.5. Let $M : X \to 2^X$ be a set-valued maximal monotone mapping on X with $0 \in \operatorname{range}(M)$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)x^k + \alpha_k y^k \quad \forall k \ge 0, \tag{4.66}$$

where $y^k$ satisfies

$$\|y^k - J_{c_k}^M(x^k)\| \le \delta_k\|y^k - x^k\|, \tag{4.67}$$

with $J_{c_k}^M = (I + c_k M)^{-1}$, $\delta_k \to 0$, and

$$x^{k+1} - x^k = \alpha_k(y^k - x^k) \quad \forall k \ge 0. \tag{4.68}$$

Here

$$\{\delta_k\}, \{\alpha_k\}, \{c_k\} \subseteq (0, \infty) \tag{4.69}$$

are scalar sequences such that $\sum_{k=0}^{\infty}\delta_k < \infty$.
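Algorithm 4.5 admits a one-line specialization for the maximal monotone map $M(x) = x$, whose resolvent is $(I + cM)^{-1}(x) = x/(1+c)$. The sketch below is not from the paper and its parameter choices are illustrative: it uses exact resolvent evaluations ($\delta_k = 0$) and a constant over-relaxation factor $\alpha_k = 1.5 \in [1, 2)$, so each step contracts by $|1 - \alpha + \alpha/(1+c)| = 0.25$, illustrating the linear convergence asserted below.

```python
# Sketch (not from the paper) of the super-relaxed proximal point
# Algorithm 4.5 for M(x) = x: x^{k+1} = (1 - a_k) x^k + a_k y^k with
# y^k = J_c(x^k) = x^k / (1 + c) computed exactly (delta_k = 0).

def super_relaxed_ppa(x0, alpha=1.5, c=1.0, steps=20):
    x = x0
    rates = []
    for _ in range(steps):
        y = x / (1.0 + c)                    # y^k = (I + c M)^{-1}(x^k)
        x_next = (1.0 - alpha) * x + alpha * y
        if x != 0.0:
            rates.append(abs(x_next) / abs(x))
        x = x_next
    return x, rates

x_final, rates = super_relaxed_ppa(2.0)
print(x_final)                               # ~ 2 * 0.25**20, essentially 0
assert all(abs(r - 0.25) < 1e-12 for r in rates)
```

Taking $\alpha_k = 1$ recovers the classical (unrelaxed) proximal point step, with the slower per-step factor $1/(1+c) = 0.5$ for this example.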
Proposition 4.6. Let X be a real Hilbert space, and let $M : X \to 2^X$ be maximal monotone. Then, for $J^* = I - J_\rho^M$, one has

$$\langle u - v,\ J^*(u) - J^*(v)\rangle \ \ge\ \|J^*(u) - J^*(v)\|^2, \tag{4.70}$$

where

$$J_\rho^M(u) = (I + \rho M)^{-1}(u) \quad \forall u \in X. \tag{4.71}$$

Theorem 4.7. Let X be a real Hilbert space. Let $M : X \to 2^X$ be maximal monotone, and let $x^*$ be a zero of M. Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)x^k + \alpha_k y^k \quad \forall k \ge 0, \tag{4.72}$$

where $y^k$ satisfies

$$\|y^k - J_{c_k}^M(x^k)\| \le e_k, \tag{4.73}$$

with $J_{c_k}^M = (I + c_k M)^{-1}$, $\sum_{k=0}^{\infty}e_k < \infty$, $\{\alpha_k\}, \{e_k\}, \{c_k\} \subseteq (0, \infty)$, $c_k \uparrow c^* \le \infty$, $\inf_{k\ge0}\alpha_k > 0$, and $\sup_{k\ge0}\alpha_k < 2$. Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$. Then one has

$$\|J_{c_k}^M(x^k) - x^*\|^2 \le \|x^k - x^*\|^2 - \|J_k^*(x^k)\|^2, \tag{4.74}$$

where

$$J_k^* = I - J_{c_k}^M. \tag{4.75}$$

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 4.5 and that $M^{-1}$ is Lipschitz continuous at 0, that is, there exists a unique solution $z^*$ to $0 \in M(z)$ (equivalently, $M^{-1}(0) = \{z^*\}$), and for constants $a \ge 0$ and $b > 0$ one has

$$\|z - z^*\| \le a\,\|w\| \quad \text{whenever } z \in M^{-1}(w),\ \|w\| \le b. \tag{4.76}$$

Here

$$\{\delta_k\}, \{\alpha_k\}, \{c_k\} \subseteq (0, \infty) \tag{4.77}$$

are scalar sequences such that $\delta_k \to 0$ and $\sum_{k=0}^{\infty}\delta_k < \infty$. Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

$$\sqrt{1 - \alpha^*(2 - \alpha^*)\big(1 - d^2\big)} < 1, \tag{4.78}$$

where $d^2 = a^2/\big((c^*)^2 + a^2\big)$, $\alpha^* = \limsup_{k\to\infty}\alpha_k$, and the sequences $\{\alpha_k\}$ and $\{c_k\}$ satisfy $\alpha_k \ge 1$, $c_k \uparrow c^* \le \infty$, $\inf_{k\ge0}\alpha_k > 0$, and $\sup_{k\ge0}\alpha_k < 2$.

Proof. We include the proof for the sake of completeness. Suppose that $x^*$ is a zero of M. For all $k \ge 0$, set

$$J_k^* = I - J_{c_k}^M. \tag{4.79}$$

Therefore $J_k^*(x^*) = 0$. Then, in light of Theorem 4.4, any solution to (1.1) is a fixed point of $J_{c_k}^M$, and hence a zero of $J_k^*$. Next, the proof of (4.74) follows from a regular manipulation and the equality

$$u - v = \big(J_{c_k}^M(u) - J_{c_k}^M(v)\big) + \big(J_k^*(u) - J_k^*(v)\big) \quad \forall u, v \in X. \tag{4.80}$$

Before we start establishing linear convergence of the sequence $\{x^k\}$, we express $\{x^k\}$ in light of Algorithm 4.5 as

$$y^k = (1 - \alpha_k)x^k + \alpha_k J_{c_k}^M(x^k) = \big(I - \alpha_k J_k^*\big)(x^k). \tag{4.81}$$

Now we begin
examining the boundedness of the sequence $\{x^k\}$, leading to $\|x^{k+1} - x^k\| \to 0$. Next, we estimate using Proposition 4.6 that

$$\begin{aligned}\|y^k - x^*\|^2 &= \|x^k - x^* - \alpha_k J_k^*(x^k)\|^2 \\ &= \|x^k - x^*\|^2 - 2\alpha_k\big\langle x^k - x^*,\ J_k^*(x^k) - J_k^*(x^*)\big\rangle + \alpha_k^2\|J_k^*(x^k)\|^2 \\ &\le \|x^k - x^*\|^2 - \alpha_k(2 - \alpha_k)\|J_k^*(x^k)\|^2. \end{aligned} \tag{4.82}$$

Since, under the assumptions, $\alpha_k(2 - \alpha_k) > 0$, it follows that

$$\|y^k - x^*\| \le \|x^k - x^*\|. \tag{4.83}$$

Moreover,

$$\big\|x^{k+1} - \big[(1 - \alpha_k)x^k + \alpha_k J_{c_k}^M(x^k)\big]\big\| = \alpha_k\|y^k - J_{c_k}^M(x^k)\| \le \alpha_k e_k. \tag{4.84}$$

Now we find the estimate leading to the boundedness of the sequence $\{x^k\}$:

$$\|x^{k+1} - x^*\| \le \|y^k - x^*\| + \alpha_k e_k \le \|x^k - x^*\| + \alpha_k e_k \le \|x^0 - x^*\| + \sum_{j=0}^{k}\alpha_j e_j \le \|x^0 - x^*\| + 2\sum_{k=0}^{\infty}e_k. \tag{4.85}$$

Therefore, the sequence $\{x^k\}$ is bounded. We further examine the estimate

$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \alpha_k(2 - \alpha_k)\|J_k^*(x^k)\|^2 + \tilde e_k, \tag{4.86}$$

where $\tilde e_k := \big(4\|x^0 - x^*\| + 8\sum_{j=0}^{\infty}e_j + 4e_k\big)e_k$ and $\alpha_k(2 - \alpha_k) > 0$. Since $\{e_k\}$ is summable, so is $\{\tilde e_k\}$, and hence

$$\sum_{j=0}^{\infty}\alpha_j(2 - \alpha_j)\|J_j^*(x^j)\|^2 < \infty \ \Rightarrow\ \lim_{k\to\infty}J_k^*(x^k) = 0, \tag{4.87}$$

that is, $x^k - J_{c_k}^M(x^k) \to 0$. Now we find the estimate leading to the boundedness of the sequence $\{x^k\}$:

$$\|x^{k+1} - x^*\| \le \|y^k - x^*\| + \alpha_k e_k \le \|x^k - x^*\| + \alpha_k e_k \le \|x^0 - x^*\| + \sum_{j=0}^{k}\alpha_j e_j \le \|x^0 - x^*\| + 2\sum_{k=0}^{\infty}e_k. \tag{4.88}$$

Thus, the sequence $\{x^k\}$ is bounded. We further examine the estimate

$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \alpha_k(2 - \alpha_k)\|J_k^*(x^k)\|^2 + \tilde e_k, \tag{4.89}$$

where $\alpha_k(2 - \alpha_k) > 0$. Since $\{e_k\}$ is summable, so is $\{\tilde e_k\}$, and hence

$$\sum_{j=0}^{\infty}\alpha_j(2 - \alpha_j)\|J_j^*(x^j)\|^2 < \infty \ \Rightarrow\ \lim_{k\to\infty}J_k^*(x^k) = 0, \tag{4.90}$$

that is, $x^k - J_{c_k}^M(x^k) \to 0$.

Now we turn our attention, using the previous argument, to the linear convergence of the sequence $\{x^k\}$. Since $\lim_{k\to\infty}J_k^*(x^k) = 0$, it follows for k large that $c_k^{-1}J_k^*(x^k) \in M\big(J_{c_k}^M(x^k)\big)$. Moreover,
$\|c_k^{-1}J_k^*(x^k)\| \le b$ for $k \ge \bar k$ and $b > 0$. Therefore, in light of (4.76), by taking $w = c_k^{-1}J_k^*(x^k)$ and $z = J_{c_k}^M(x^k)$, we have

$$\|J_{c_k}^M(x^k) - x^*\| \le a\,\|c_k^{-1}J_k^*(x^k)\| \quad \forall k \ge \bar k. \tag{4.91}$$

Applying (4.74), we arrive at

$$\|J_{c_k}^M(x^k) - x^*\| \le \frac{a}{\sqrt{c_k^2 + a^2}}\,\|x^k - x^*\| \quad \forall k \ge \bar k, \tag{4.92}$$

where $J_{c_k}^M(x^*) = x^*$. Since $y^k = (1 - \alpha_k)x^k + \alpha_k J_{c_k}^M(x^k)$, we estimate for $\alpha_k \ge 1$ that

$$\begin{aligned}\|y^k - x^*\|^2 &= (1 - \alpha_k)^2\|x^k - x^*\|^2 + 2\alpha_k(1 - \alpha_k)\big\langle x^k - x^*,\ J_{c_k}^M(x^k) - x^*\big\rangle + \alpha_k^2\|J_{c_k}^M(x^k) - x^*\|^2 \\ &\le (1 - \alpha_k)^2\|x^k - x^*\|^2 + \alpha_k(2 - \alpha_k)\|J_{c_k}^M(x^k) - x^*\|^2 \\ &\le \Big[(1 - \alpha_k)^2 + \alpha_k(2 - \alpha_k)\frac{a^2}{c_k^2 + a^2}\Big]\|x^k - x^*\|^2. \end{aligned} \tag{4.93}$$

Hence, we have

$$\|y^k - x^*\| \le \theta_k\,\|x^k - x^*\|, \tag{4.94}$$

where

$$\theta_k = \sqrt{(1 - \alpha_k)^2 + \alpha_k(2 - \alpha_k)\frac{a^2}{c_k^2 + a^2}} < 1 \tag{4.95}$$

for $\alpha_k(2 - \alpha_k) > 0$ and $\alpha_k \ge 1$. Since Algorithm 4.5 ensures

$$\|y^k - J_{c_k}^M(x^k)\| \le \delta_k\|y^k - x^k\|, \qquad \alpha_k(y^k - x^k) = x^{k+1} - x^k, \tag{4.96}$$

we have

$$\begin{aligned}\|x^{k+1} - y^k\| &= \alpha_k\|y^k - J_{c_k}^M(x^k)\| \le \alpha_k\delta_k\|y^k - x^k\| = \delta_k\|x^{k+1} - x^k\|, \\ \|x^{k+1} - x^*\| &\le \|y^k - x^*\| + \delta_k\|x^{k+1} - x^k\| \le \theta_k\|x^k - x^*\| + \delta_k\|x^{k+1} - x^*\| + \delta_k\|x^k - x^*\|. \end{aligned} \tag{4.97}$$

It follows that

$$\|x^{k+1} - x^*\| \le \frac{\theta_k + \delta_k}{1 - \delta_k}\,\|x^k - x^*\|, \tag{4.98}$$

where

$$\limsup_{k\to\infty}\frac{\theta_k + \delta_k}{1 - \delta_k} = \limsup_{k\to\infty}\theta_k = \sqrt{1 - \alpha^*(2 - \alpha^*)\big(1 - d^2\big)} < 1 \tag{4.99}$$

for $d^2 = a^2/\big((c^*)^2 + a^2\big)$.

References

[1] R. T. Rockafellar, "Monotone operators and the proximal point algorithm," SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877-898, 1976.
[2] J. Eckstein and D. P. Bertsekas, "On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators," Mathematical Programming, vol. 55, no. 3, pp. 293-318, 1992.
[3] R. U. Verma, "On the generalized proximal point algorithm with applications to inclusion problems," Journal of Industrial and Management Optimization, vol. 5, no. 2, pp. 381-390, 2009.
[4] R. P. Agarwal and R. U. Verma, "The over-relaxed (η)-proximal point algorithm and nonlinear variational inclusion problems," Nonlinear Functional Analysis and Applications, vol.
14, no. 4, 2009.
[5] V. Barbu, Nonlinear Semigroups and Differential Equations in Banach Spaces, Noordhoff, Leyden, The Netherlands, 1976.
[6] O. A. Boikanyo and G. Morosanu, "Modified Rockafellar's algorithms," Mathematical Sciences Research Journal, in press.
[7] D. P. Bertsekas, "Necessary and sufficient condition for a penalty method to be exact," Mathematical Programming, vol. 9, no. 1, pp. 87-99, 1975.
[8] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Computer Science and Applied Mathematics, Academic Press, New York, NY, USA, 1982.
[9] J. Douglas Jr. and H. H. Rachford Jr., "On the numerical solution of heat conduction problems in two and three space variables," Transactions of the American Mathematical Society, vol. 82, pp. 421-439, 1956.
[10] J. Eckstein, Splitting methods for monotone operators with applications to parallel optimization, Doctoral dissertation, Department of Civil Engineering, Massachusetts Institute of Technology, Cambridge, Mass, USA, 1989.
[11] J. Eckstein, "Nonlinear proximal point algorithms using Bregman functions, with applications to convex programming," Mathematics of Operations Research, vol. 18, no. 1, pp. 202-226, 1993.
[12] J. Eckstein, "Approximate iterations in Bregman-function-based proximal algorithms," Mathematical Programming, vol. 83, no. 1, pp. 113-123, 1998.
[13] J. Eckstein and M. C. Ferris, "Smooth methods of multipliers for complementarity problems," Mathematical Programming, vol. 86, no. 1, pp. 65-90, 1999.
[14] M. C. Ferris, "Finite termination of the proximal point algorithm," Mathematical Programming, vol. 50, no. 3, pp. 359-366, 1991.
[15] O. Güler, "On the convergence of the proximal point algorithm for convex minimization," SIAM Journal on Control and Optimization, vol. 29, no. 2, pp. 403-419, 1991.
[16] B. Martinet, "Régularisation d'inéquations variationnelles par approximations successives," Revue Française d'Informatique et de Recherche Opérationnelle, Série Rouge, vol. 4, no. 3, pp. 154-158, 1970.
[17] G. J. Minty,
"Monotone (nonlinear) operators in Hilbert space," Duke Mathematical Journal, vol. 29, pp. 341-346, 1962.
[18] G. Morosanu, Nonlinear Evolution Equations and Applications, vol. 26 of Mathematics and Its Applications (East European Series), D. Reidel, Dordrecht, The Netherlands, 1988.
[19] A. Moudafi, "Mixed equilibrium problems: sensitivity analysis and algorithmic aspect," Computers & Mathematics with Applications, vol. 44, no. 8-9, pp. 1099-1108, 2002.
[20] A. Moudafi and M. Théra, "Finding a zero of the sum of two maximal monotone operators," Journal of Optimization Theory and Applications, vol. 94, no. 2, pp. 425-448, 1997.
[21] J.-S. Pang, "Complementarity problems," in Handbook of Global Optimization, R. Horst and P. Pardalos, Eds., vol. 2 of Nonconvex Optimization and Its Applications, pp. 271-338, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1995.
[22] S. M. Robinson, "Composition duality and maximal monotonicity," Mathematical Programming, vol. 85, no. 1, pp. 1-13, 1999.
[23] S. M. Robinson, "Linear convergence of epsilon-subgradient descent methods for a class of convex functions," Mathematical Programming, vol. 86, pp. 41-50, 1999.
[24] R. T. Rockafellar, "On the maximal monotonicity of subdifferential mappings," Pacific Journal of Mathematics, vol. 33, pp. 209-216, 1970.
[25] R. T. Rockafellar, "Augmented Lagrangians and applications of the proximal point algorithm in convex programming," Mathematics of Operations Research, vol. 1, no. 2, pp. 97-116, 1976.
[26] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, Berlin, Germany, 2004.
[27] M. V. Solodov and B. F. Svaiter, "An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions," Mathematics of Operations Research, vol. 25, no. 2, pp. 214-230, 2000.
[28] M. V. Solodov and B. F. Svaiter, "Forcing strong convergence of proximal point iterations in a Hilbert space," Mathematical Programming, vol. 87, no. 1, pp. 189-202, 2000.
[29] W. Takahashi, "Approximating solutions of accretive operators by viscosity
approximation methods in Banach spaces," in Applied Functional Analysis, pp. 225-243, Yokohama Publishers, Yokohama, Japan, 2007.
[30] P. Tossings, "The perturbed proximal point algorithm and some of its applications," Applied Mathematics and Optimization, vol. 29, no. 2, pp. 125-159, 1994.
[31] P. Tseng, "Applications of a splitting algorithm to decomposition in convex programming and variational inequalities," SIAM Journal on Control and Optimization, vol. 29, no. 1, pp. 119-138, 1991.
[32] P. Tseng, "Alternating projection-proximal methods for convex programming and variational inequalities," SIAM Journal on Optimization, vol. 7, no. 4, pp. 951-965, 1997.
[33] P. Tseng, "A modified forward-backward splitting method for maximal monotone mappings," SIAM Journal on Control and Optimization, vol. 38, no. 2, pp. 431-446, 2000.
[34] R. U. Verma, "A fixed-point theorem involving Lipschitzian generalised pseudo-contractions," Proceedings of the Royal Irish Academy, Section A, vol. 97, no. 1, pp. 83-86, 1997.
[35] R. U. Verma, "New class of nonlinear A-monotone mixed variational inclusion problems and resolvent operator technique," Journal of Computational Analysis and Applications, vol. 8, no. 3, pp. 275-285, 2006.
[36] R. U. Verma, "Nonlinear A-monotone variational inclusion systems and the resolvent operator technique," Journal of Applied Functional Analysis, vol. 1, no. 2, pp. 183-189, 2006.
[37] R. U. Verma, "A-monotonicity and its role in nonlinear variational inclusions," Journal of Optimization Theory and Applications, vol. 129, no. 3, pp. 457-467, 2006.
[38] R. U. Verma, "A-monotone nonlinear relaxed cocoercive variational inclusions," Central European Journal of Mathematics, vol. 5, no. 2, pp. 386-396, 2007.
[39] R. U. Verma, "Approximation solvability of a class of nonlinear set-valued variational inclusions involving (A, η)-monotone mappings," Journal of Mathematical Analysis and Applications, vol. 337, no. 2, pp. 969-975, 2008.
[40] R. U. Verma, Nonlinear Approximation Solvability Involving Regular and
Demiregular Convergence, International Publications (USA), Orlando, Fla, USA, 1994.
[41] R. U. Verma, "General projection systems and relaxed cocoercive nonlinear variational inequalities," The ANZIAM Journal, vol. 49, no. 2, pp. 205-212, 2007.
[42] R. U. Verma, "General proximal point algorithmic models and nonlinear variational inclusions involving RMM mappings," accepted in Journal of Informatics and Mathematical Sciences.
[43] R. U. Verma, "General proximal point algorithm involving η-maximal accretiveness framework in Banach spaces," Positivity, vol. 13, no. 4, pp. 771-782, 2009.
[44] R. U. Verma, "The generalized relaxed proximal point algorithm involving A-maximal-relaxed accretive mappings with applications to Banach spaces," Mathematical and Computer Modelling, vol. 50, no. 7-8, pp. 1026-1032, 2009.
[45] K. Yosida, Functional Analysis, Springer, Berlin, Germany, 1965.
[46] K. Yosida, "On the differentiability and the representation of one-parameter semigroups of linear operators," Journal of the Mathematical Society of Japan, vol. 1, pp. 15-21, 1948.
[47] H.-K. Xu, "Iterative algorithms for nonlinear operators," Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240-256, 2002.
[48] E. Zeidler, "The Ljusternik-Schnirelman theory for indefinite and not necessarily odd nonlinear operators and its applications," Nonlinear Analysis: Theory, Methods & Applications, vol. 4, no. 3, pp. 451-489, 1980.
[49] E. Zeidler, "Ljusternik-Schnirelman theory on general level sets," Mathematische Nachrichten, vol. 129, pp. 235-259, 1986.
[50] E. Zeidler, Nonlinear Functional Analysis and Its Applications—Part 1: Fixed-Point Theorems, Springer, New York, NY, USA, 1986.
[51] E. Zeidler, Nonlinear Functional Analysis and Its Applications—Part II/A: Linear Monotone Operators, Springer, New York, NY, USA, 1990.
[52] E. Zeidler, Nonlinear Functional Analysis and Its Applications—Part II/B: Nonlinear Monotone Operators, Springer, New York, NY, USA, 1990.
[53] E. Zeidler, Nonlinear Functional Analysis and Its Applications—Part 3: Variational Methods and
Optimization, Springer, New York, NY, USA, 1985.
[54] T. Zolezzi, "Continuity of generalized gradients and multipliers under perturbations," Mathematics of Operations Research, vol. 10, no. 4, pp. 664-673, 1985.
[55] L. Zoretti, "Un théorème de la théorie des ensembles," Bulletin de la Société Mathématique de France, vol. 37, pp. 116-119, 1909.
