David G. Luenberger, Yinyu Ye - Linear and Nonlinear Programming International Series Episode 1 Part 7 pps

140 Chapter 5 Interior-Point Methods

of these concepts. This structure is useful for specifying and analyzing various versions of interior-point methods. Most methods employ a step of Newton's method to find a point near the central path when moving from one value of $\mu$ to another. One approach is the predictor-corrector method, which first takes a step in the direction of decreasing $\mu$ and then a corrector step to get closer to the central path. Another method employs a potential function whose value can be decreased at each step, which guarantees convergence and assures that intermediate points simultaneously make progress toward the solution while remaining close to the central path.

Complete algorithms based on these approaches require a number of other features and details. For example, once systematic movement toward the solution is terminated, a final phase may move to a nearby vertex or to a non-vertex point on a face of the constraint set. Also, an initial phase must be employed to obtain a feasible point that is close to the central path, from which the steps of the search algorithm can be started. These features are incorporated into several commercial software packages, and generally they perform well, able to solve very large linear programs in reasonable time.

5.9 EXERCISES

1. Using the simplex method, solve the program (1) and count the number of pivots required.

2. Prove the volume-reduction rate in Theorem 1 for the ellipsoid method.

3. Develop a cutting-plane method, based on the ellipsoid method, to find a point satisfying the convex inequalities

$$f_i(x) \le 0, \quad i = 1, \ldots, m, \qquad \|x\|^2 \le R^2,$$

where the $f_i$'s are convex functions of $x$ in $C^1$.

4. Consider the linear program (5) and assume that $\mathcal{F}_p = \{x : Ax = b,\ x > 0\}$ is nonempty and its optimal solution set is bounded. Show that the dual of the problem has a nonempty interior.

5. (Farkas' lemma) Prove: exactly one of the feasible sets $\{x : Ax = b,\ x \ge 0\}$ and $\{y : y^T A \le 0,\ y^T b = 1\}$ is nonempty.
A vector $y$ in the latter set is called an infeasibility certificate for the former.

6. (Strict complementarity) Consider any linear program in standard form and its dual, and let both of them be feasible. Then there always exists a strictly complementary solution pair $(x^*, y^*, s^*)$ such that

$$x_j^* s_j^* = 0 \quad \text{and} \quad x_j^* + s_j^* > 0 \quad \text{for all } j.$$

Moreover, the supports of $x^*$ and $s^*$, $P^* = \{j : x_j^* > 0\}$ and $Z^* = \{j : s_j^* > 0\}$, are invariant among all strictly complementary solution pairs.

7. (Central path theorem) Let $(x(\mu), y(\mu), s(\mu))$ be the central path of (9). Then prove:

(a) The central path point $(x(\mu), y(\mu), s(\mu))$ is bounded for $0 < \mu \le \mu_0$ and any given $0 < \mu_0 < \infty$.

(b) For $0 < \mu' < \mu$,
$$c^T x(\mu') \le c^T x(\mu) \quad \text{and} \quad b^T y(\mu') \ge b^T y(\mu).$$
Furthermore, if $x(\mu') \ne x(\mu)$ and $y(\mu') \ne y(\mu)$,
$$c^T x(\mu') < c^T x(\mu) \quad \text{and} \quad b^T y(\mu') > b^T y(\mu).$$

(c) $(x(\mu), y(\mu), s(\mu))$ converges to an optimal solution pair for (LP) and (LD). Moreover, the limit point $x(0)_{P^*}$ is the analytic center on the primal optimal face, and the limit point $s(0)_{Z^*}$ is the analytic center on the dual optimal face, where $(P^*, Z^*)$ is the strict complementarity partition of the index set $\{1, 2, \ldots, n\}$.

8. Consider a primal-dual interior point $(x, y, s) \in \mathcal{N}(\eta)$ where $\eta < 1$. Prove that there is a fixed quantity $\zeta > 0$ such that
$$x_j \ge \zeta \ \text{for all } j \in P^* \quad \text{and} \quad s_j \ge \zeta \ \text{for all } j \in Z^*,$$
where $(P^*, Z^*)$ is defined in Exercise 6.

9. (Potential level theorem) Define the potential level set
$$\Psi(\delta) = \{(x, y, s) \in \mathcal{F} : \psi_{n+\rho}(x, s) \le \delta\}.$$
Prove:

(a) $\Psi(\delta_1) \subset \Psi(\delta_2)$ if $\delta_1 \le \delta_2$.

(b) For every $\delta$, $\Psi(\delta)$ is bounded and its closure $\overline{\Psi(\delta)}$ has non-empty intersection with the solution set.

10. Given $0 < x \in E^n$ and $0 < s \in E^n$, show that
$$n \log(x^T s) - \sum_{j=1}^n \log(x_j s_j) \ge n \log n$$
and
$$x^T s \le \exp\left(\frac{\psi_{n+\rho}(x, s) - n \log n}{\rho}\right).$$

11.
(Logarithmic approximation) If $d \in E^n$ such that $\|d\|_\infty < 1$, then
$$\mathbf{1}^T d \ge \sum_{i=1}^n \log(1 + d_i) \ge \mathbf{1}^T d - \frac{\|d\|^2}{2(1 - \|d\|_\infty)}.$$
[Note: if $d = (d_1, d_2, \ldots, d_n)$, then $\|d\|_\infty \equiv \max_i |d_i|$.]

12. Let the direction $(d_x, d_y, d_s)$ be generated by system (14) with $\gamma = n/(n+\rho)$ and $\mu = x^T s / n$, and let the step size be
$$\alpha = \frac{\theta \sqrt{\min(Xs)}}{\left\| (XS)^{-1/2}\left(\dfrac{x^T s}{n+\rho}\,\mathbf{1} - Xs\right) \right\|}, \qquad (21)$$
where $\theta$ is a positive constant less than 1. Let $x^+ = x + \alpha d_x$, $y^+ = y + \alpha d_y$, and $s^+ = s + \alpha d_s$. Then, using Exercise 11 and the concavity of the logarithmic function, show that $(x^+, y^+, s^+) \in \mathcal{F}$ and
$$\psi_{n+\rho}(x^+, s^+) - \psi_{n+\rho}(x, s) \le -\theta \sqrt{\min(Xs)}\left\| (Xs)^{-1/2}\left(\mathbf{1} - \frac{n+\rho}{x^T s}\, Xs\right) \right\| + \frac{\theta^2}{2(1-\theta)}.$$

13. Let $v = Xs$ in Exercise 12. Prove
$$\sqrt{\min(v)}\left\| V^{-1/2}\left(\mathbf{1} - \frac{n+\rho}{\mathbf{1}^T v}\, v\right) \right\| \ge \sqrt{3/4},$$
where $V$ is the diagonal matrix of $v$. Thus, the two exercises imply
$$\psi_{n+\rho}(x^+, s^+) - \psi_{n+\rho}(x, s) \le -\theta\sqrt{3/4} + \frac{\theta^2}{2(1-\theta)} = -\delta$$
for a constant $\delta$. One can verify that $\delta > 0.2$ when $\theta = 0.4$.

14. Prove property (19) for (HDSP).

REFERENCES

5.1 Computation and complexity models were developed by a number of scientists; see, e.g., Cook [C5], Hartmanis and Stearns [H5], and Papadimitriou and Steiglitz [P2] for the bit complexity models, and Blum et al. [B21] for the real number arithmetic model. For a general discussion of complexity see Vavasis [V4]. For a comprehensive treatment which served as the basis for much of this chapter, see Ye [Y3].

5.2 The Klee-Minty example is presented in [K5]. Much of this material is based on a teaching note of Cottle on linear programming taught at Stanford [C6]. Practical performance of the simplex method can be seen in Bixby [B18].

5.3 The ellipsoid method was developed by Khachiyan [K4]; more developments of the ellipsoid method can be found in Bland, Goldfarb and Todd [B20].

5.4 The analytic center for a convex polyhedron given by linear inequalities was introduced by Huard [H12], and later by Sonnevend [S8]. The barrier function was introduced by Frisch [F19].
The central path was analyzed in McLinden [M3], Megiddo [M4], Bayer and Lagarias [B3, B4], and Gill et al. [G5].

5.5 Path-following algorithms were first developed by Renegar [R1]. A primal barrier or path-following algorithm was independently analyzed by Gonzaga [G13]. Both Gonzaga [G13] and Vaidya [V1] extended the rank-one updating technique [K2] for solving the Newton equation of each iteration, and proved that each iteration uses $O(n^{2.5})$ arithmetic operations on average. Kojima, Mizuno and Yoshise [K6] and Monteiro and Adler [M7] developed a symmetric primal-dual path-following algorithm with the same iteration and arithmetic operation bounds.

5.6-5.7 Predictor-corrector algorithms were developed by Mizuno et al. [M6]. A more practical predictor-corrector algorithm was proposed by Mehrotra [M5] (also see Lustig et al. [L19] and Zhang and Zhang [Z3]). Mehrotra's technique has been used in almost all linear programming interior-point implementations. A primal potential reduction algorithm was initially proposed by Karmarkar [K2]. The primal-dual potential function was proposed by Tanabe [T2] and Todd and Ye [T5]. The primal-dual potential reduction algorithm was developed by Ye [Y1], Freund [F18], Kojima, Mizuno and Yoshise [K7], Goldfarb and Xiao [G11], Gonzaga and Todd [G14], Todd [T4], Tunçel [T10], Tütüncü [T11], and others. The homogeneous and self-dual embedding method can be found in Ye et al. [Y2], Luo et al. [L18], Andersen and Ye [A5], and many others. It is also implemented in most linear programming software packages, such as SeDuMi of Sturm [S11].

5.1-5.7 There are several comprehensive textbooks which cover interior-point linear programming algorithms. They include Bazaraa, Jarvis and Sherali [B6], Bertsekas [B12], Bertsimas and Tsitsiklis [B13], Cottle [C6], Cottle, Pang and Stone [C7], Dantzig and Thapa [D9, D10], Fang and Puthenpura [F2], den Hertog [H6], Murty [M12], Nash and Sofer [N1], Nesterov [N2], Roos et al.
[R4], Renegar [R2], Saigal [S1], Vanderbei [V3], and Wright [W8].

Chapter 6 TRANSPORTATION AND NETWORK FLOW PROBLEMS

There are a number of problems of special structure that are important components of the subject of linear programming. A broad class of such special problems is represented by the transportation problem and related problems treated in the first five sections of this chapter, and network flow problems treated in the last three sections. These problems are important because, first, they represent broad areas of application that arise frequently. Indeed, many of these problems were originally formulated prior to the general development of linear programming, and they continue to arise in a variety of applications. Second, these problems are important because of their associated rich theory, which provides important insight and suggests new general developments.

The chapter is roughly divided into two parts. In the first part the transportation problem is examined from the viewpoint of the revised simplex method, which takes an extremely simple form for this problem. The second part of the chapter introduces graphs and network flows. The transportation algorithm is generalized and given new interpretations. Next, a special, highly efficient algorithm, the tree algorithm, is developed for solution of the maximal flow problem.

6.1 THE TRANSPORTATION PROBLEM

The transportation problem was stated briefly in Chapter 2. We restate it here. There are $m$ origins that contain various amounts of a commodity that must be shipped to $n$ destinations to meet demand requirements. Specifically, origin $i$ contains an amount $a_i$, and destination $j$ has a requirement of amount $b_j$. It is assumed that the system is balanced in the sense that total supply equals total demand.
That is,

$$\sum_{i=1}^m a_i = \sum_{j=1}^n b_j. \qquad (1)$$

The numbers $a_i$ and $b_j$, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$, are assumed to be nonnegative, and in many applications they are in fact nonnegative integers. There is a unit cost $c_{ij}$ associated with the shipping of the commodity from origin $i$ to destination $j$. The problem is to find the shipping pattern between origins and destinations that satisfies all the requirements and minimizes the total shipping cost.

In mathematical terms the above problem can be expressed as finding a set of $x_{ij}$'s, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$, to

minimize $\sum_{i=1}^m \sum_{j=1}^n c_{ij} x_{ij}$

subject to $\sum_{j=1}^n x_{ij} = a_i$ for $i = 1, 2, \ldots, m$,   (2)

$\sum_{i=1}^m x_{ij} = b_j$ for $j = 1, 2, \ldots, n$,

$x_{ij} \ge 0$ for all $i$ and $j$.

This mathematical problem, together with the assumption (1), is the general transportation problem. In the shipping context, the variables $x_{ij}$ represent the amounts of the commodity shipped from origin $i$ to destination $j$.

The structure of the problem can be seen more clearly by writing the constraint equations in standard form:

$x_{11} + x_{12} + \cdots + x_{1n} = a_1$
$x_{21} + x_{22} + \cdots + x_{2n} = a_2$
$\cdots$
$x_{m1} + x_{m2} + \cdots + x_{mn} = a_m$   (3)
$x_{11} + x_{21} + \cdots + x_{m1} = b_1$
$x_{12} + x_{22} + \cdots + x_{m2} = b_2$
$\cdots$
$x_{1n} + x_{2n} + \cdots + x_{mn} = b_n$

The structure is perhaps even more evident when the coefficient matrix $A$ of the system of equations above is expressed in vector-matrix notation as

$$A = \begin{bmatrix} \mathbf{1}^T & & & \\ & \mathbf{1}^T & & \\ & & \ddots & \\ & & & \mathbf{1}^T \\ I & I & \cdots & I \end{bmatrix}, \qquad (4)$$

where $\mathbf{1} = (1, 1, \ldots, 1)$ is $n$-dimensional, and where each $I$ is an $n \times n$ identity matrix.

In practice it is usually unnecessary to write out the constraint equations of the transportation problem in the explicit form (3).
A specific transportation problem is generally defined by simply presenting the data in compact form, such as:

$$\mathbf{a} = (a_1, a_2, \ldots, a_m), \qquad \mathbf{b} = (b_1, b_2, \ldots, b_n),$$

$$C = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & & & \vdots \\ c_{m1} & c_{m2} & \cdots & c_{mn} \end{bmatrix}.$$

The solution can also be represented by an $m \times n$ array, and as we shall see, all computations can be made on arrays of a similar dimension.

Example 1. As an example, which will be solved completely in a later section, a specific transportation problem with four origins and five destinations is defined by

$$\mathbf{a} = (30, 80, 10, 60), \qquad \mathbf{b} = (10, 50, 20, 80, 20),$$

$$C = \begin{bmatrix} 3 & 4 & 6 & 8 & 9 \\ 2 & 2 & 4 & 5 & 5 \\ 2 & 2 & 2 & 3 & 2 \\ 3 & 3 & 2 & 4 & 2 \end{bmatrix}.$$

Note that the balance requirement is satisfied, since the total supply and the total demand both equal 180.

Feasibility and Redundancy

A first step in the study of the structure of the transportation problem is to show that there is always a feasible solution, thus establishing that the problem is well defined. A feasible solution can be found by allocating shipments from origins to destinations in proportion to supply and demand requirements. Specifically, let $S$ be the total supply (which is also equal to the total demand). Then let $x_{ij} = a_i b_j / S$ for $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$. The reader can easily verify that this is a feasible solution. We also note that the solutions are bounded, since each $x_{ij}$ is bounded by $a_i$ (and by $b_j$). A bounded program with a feasible solution has an optimal solution. Thus, a transportation problem always has an optimal solution.

A second step in the study of the structure of the transportation problem is based on a simple examination of the constraint equations. Clearly there are $m$ equations corresponding to origin constraints and $n$ equations corresponding to destination constraints, a total of $n + m$.
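The proportional construction $x_{ij} = a_i b_j / S$ is easy to check numerically. The sketch below is our own illustration (not from the text): it uses the data of Example 1 and verifies that every row and column requirement is met.

```python
# Proportional feasible solution x_ij = a_i * b_j / S for a balanced
# transportation problem, using the data of Example 1.
a = [30, 80, 10, 60]          # supplies a_i
b = [10, 50, 20, 80, 20]      # demands b_j
S = sum(a)                    # total supply (= total demand = 180)
assert S == sum(b)

x = [[ai * bj / S for bj in b] for ai in a]

# Row i sums to a_i * (sum of b) / S = a_i, and column j sums to b_j,
# so x is feasible (up to floating-point rounding).
row_sums = [sum(row) for row in x]
col_sums = [sum(x[i][j] for i in range(len(a))) for j in range(len(b))]
assert all(abs(r - ai) < 1e-9 for r, ai in zip(row_sums, a))
assert all(abs(c - bj) < 1e-9 for c, bj in zip(col_sums, b))
```

With $S = 180$, for instance, $x_{11} = 30 \cdot 10 / 180 = 5/3$; as the text notes next, such a solution is feasible but generally not basic.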
However, it is easily noted that the sum of the origin equations is

$$\sum_{i=1}^m \sum_{j=1}^n x_{ij} = \sum_{i=1}^m a_i, \qquad (5)$$

and the sum of the destination equations is

$$\sum_{j=1}^n \sum_{i=1}^m x_{ij} = \sum_{j=1}^n b_j. \qquad (6)$$

The left-hand sides of these equations are equal. Since they were formed by two distinct linear combinations of the original equations, it follows that the equations in the original system are not independent. The right-hand sides of (5) and (6) are equal by the assumption that the system is balanced, and therefore the two equations are, in fact, consistent. However, it is clear that the original system of equations is redundant. This means that one of the constraints can be eliminated without changing the set of feasible solutions. Indeed, any one of the constraints can be chosen as the one to be eliminated, for it can be reconstructed from those remaining. The above observations are summarized and slightly extended in the following theorem.

Theorem. A transportation problem always has a solution, but there is exactly one redundant equality constraint. When any one of the equality constraints is dropped, the remaining system of $n + m - 1$ equality constraints is linearly independent.

Proof. The existence of a solution and the redundancy were established above. The sum of all origin constraints minus the sum of all destination constraints is identically zero. It follows that any constraint can be expressed as a linear combination of the others, and hence any one constraint can be dropped. Suppose that one equation is dropped, say the last one. Suppose that there were a linear combination of the remaining equations that was identically zero. Let the coefficients of such a combination be $\alpha_i$, $i = 1, 2, \ldots, m$, and $\beta_j$, $j = 1, 2, \ldots, n-1$. Referring to (3), it is seen that each $x_{in}$, $i = 1, 2, \ldots, m$, appears only in the $i$th equation (since the last equation has been dropped). Thus $\alpha_i = 0$ for $i = 1, 2, \ldots, m$.
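The redundancy can also be exhibited mechanically. The following sketch (ours, not the book's) builds the coefficient matrix of (4) for a small instance and verifies that the sum of the origin rows minus the sum of the destination rows is the zero vector:

```python
# Build the (m+n) x (m*n) transportation constraint matrix of (4) and
# show that (sum of origin rows) - (sum of destination rows) = 0,
# i.e., the system has exactly one redundant equation.
m, n = 3, 4
A = [[0] * (m * n) for _ in range(m + n)]
for i in range(m):
    for j in range(n):
        col = i * n + j          # column index of variable x_ij
        A[i][col] = 1            # origin constraint i
        A[m + j][col] = 1        # destination constraint j

combo = [sum(A[i][c] for i in range(m)) - sum(A[m + j][c] for j in range(n))
         for c in range(m * n)]
assert combo == [0] * (m * n)    # the rows are linearly dependent
```

Each column of $A$ contains exactly two ones (one origin row, one destination row), which is why this particular combination cancels.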
With the coefficients $\alpha_i$ equal to zero, each $x_{ij}$, $j = 1, 2, \ldots, n-1$, appears in only one of the remaining equations, and hence $\beta_j = 0$, $j = 1, 2, \ldots, n-1$. Hence the only linear combination that yields zero is the zero combination, and therefore the system of equations is linearly independent.

It follows from the above discussion that a basis for the transportation problem consists of $m + n - 1$ vectors, and a nondegenerate basic feasible solution consists of $m + n - 1$ variables. The simple solution found earlier in this section is clearly not a basic solution.

6.2 FINDING A BASIC FEASIBLE SOLUTION

There is a straightforward way to compute an initial basic feasible solution to a transportation problem. The method is worth studying at this stage because it introduces the computational process that is the foundation for the general solution technique based on the simplex method. It also begins to illustrate the fundamental property of the structure of transportation problems that is discussed in the next section.

The Northwest Corner Rule

This procedure is conducted on the solution array shown below:

x_11  x_12  x_13  ...  x_1n | a_1
x_21  x_22  x_23  ...  x_2n | a_2
 ...                        | ...
x_m1  x_m2  x_m3  ...  x_mn | a_m
----------------------------
b_1   b_2   b_3   ...  b_n            (7)

The individual elements of the array appear in cells and represent a solution. An empty cell denotes a value of zero. Beginning with all empty cells, the procedure is given by the following steps:

Step 1. Start with the cell in the upper left-hand corner.

Step 2. Allocate the maximum feasible amount consistent with the row and column sum requirements involving that cell. (At least one of these requirements will then be met.)

Step 3. Move one cell to the right if there is any remaining row requirement (supply). Otherwise move one cell down. If all requirements are met, stop; otherwise go to Step 2.
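The three steps translate directly into code. The sketch below is our implementation (not the book's); run on the data of Example 1 it reproduces the basic feasible solution worked out next.

```python
def northwest_corner(a, b):
    """Basic feasible solution of a balanced transportation problem
    by the Northwest Corner Rule."""
    a, b = list(a), list(b)              # remaining row/column requirements
    m, n = len(a), len(b)
    x = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        amount = min(a[i], b[j])         # Step 2: maximum feasible amount
        x[i][j] = amount
        a[i] -= amount
        b[j] -= amount
        if a[i] > 0:                     # Step 3: move right if supply remains
            j += 1
        else:                            # otherwise move down
            i += 1
    return x

x = northwest_corner([30, 80, 10, 60], [10, 50, 20, 80, 20])
for row in x:
    print(row)
# → [10, 20, 0, 0, 0]
#   [0, 30, 20, 30, 0]
#   [0, 0, 0, 10, 0]
#   [0, 0, 0, 40, 20]
```

In a degenerate case where a cell meets both requirements at once, this sketch always moves down; as the text notes, moving right to enter the zero is an equally valid choice.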
The procedure is called the Northwest Corner Rule because at each step it selects the cell in the upper left-hand corner of the subarray consisting of the current nonzero row and column requirements.

Example 1. A basic feasible solution constructed by the Northwest Corner Rule is shown below for Example 1 of the last section:

10  20   .   .   . | 30
 .  30  20  30   . | 80
 .   .   .  10   . | 10
 .   .   .  40  20 | 60
-------------------
10  50  20  80  20           (8)

In the first step, at the upper left-hand corner, a maximum of 10 units could be allocated, since that is all that was required by column 1. This left 30 - 10 = 20 units required in the first row. Next, moving to the second cell in the top row, the remaining 20 units were allocated. At this point the row 1 requirement is met, and it is necessary to move down to the second row. The reader should be able to follow the remaining steps easily.

There is the possibility that at some point both the row and column requirements corresponding to a cell may be met. The next entry will then be a zero, indicating a degenerate basic solution. In such a case there is a choice as to where to place the zero: one can either move right or move down to enter it. Two examples of degenerate solutions to a problem are shown below.

[...] On the right is the same matrix when its rows and columns are permuted according to the order found.

⎡1 2 0 1 0 2⎤ 4        ⎡4 0 0 0 0 0⎤
⎢4 1 0 5 0 0⎥ 3        ⎢1 2 0 0 0 0⎥
⎢0 0 0 4 0 0⎥ 1        ⎢5 1 4 0 0 0⎥
⎢2 1 7 2 1 3⎥ 6        ⎢1 2 1 2 0 0⎥
⎢2 3 2 0 0 3⎥ 5        ⎢0 3 2 3 2 0⎥
⎣0 2 0 1 0 0⎦ 2        ⎣2 1 2 3 7 1⎦
 3 2 5 1 6 4

(The numbers beside the left matrix give the new positions of its rows and columns under the permutation.)

Triangularization. The importance of triangularity is, of course, the associated [...]
[...] the current basis:

c_11  c_12  ...  c_1n | u_1
c_21  c_22  ...  c_2n | u_2
 ...                  | ...
c_m1  c_m2  ...  c_mn | u_m
----------------------
v_1   v_2   ...  v_n

In this case the main part of the array, with the coefficients $c_{ij}$, remains fixed, and we calculate the extra column and row corresponding to $u$ and $v$. The procedure for calculating the simplex multipliers is this:

Step 1. Assign an arbitrary value to any one of the multipliers.

Step 2. Scan the rows and columns of the array; whenever a basic cell $(i, j)$ is found for which exactly one of $u_i$ and $v_j$ has been determined, determine the other from the equation $c_{ij} = u_i + v_j$. Repeat until all multipliers are determined.

[...] to exactly one job, and each job must have one assigned worker. One wishes to make the assignment in such a way as to maximize (in this example) the total value of the assignment.

The general formulation of the assignment problem is to find $x_{ij}$, $i = 1, \ldots, n$; $j = 1, \ldots, n$, to

minimize $\sum_{i=1}^n \sum_{j=1}^n c_{ij} x_{ij}$

subject to $\sum_{j=1}^n x_{ij} = 1$ for $i = 1, 2, \ldots, n$,   (11)

$\sum_{i=1}^n x_{ij} = 1$ for $j = 1, 2, \ldots, n$,

$x_{ij} \ge 0$ for $i = 1, \ldots, n$; $j = 1, \ldots, n$.

In the motivating [...]

[...] shown in Table 6.1.

Fig. 6.2 A directed graph (nodes 1, 2, 3, 4; arcs (1, 2), (1, 4), (2, 3), (2, 4), (4, 2))

Table 6.1 Node-arc incidence matrix for the example (+1 where the arc leaves the node, -1 where it enters):

        (1,2)  (1,4)  (2,3)  (2,4)  (4,2)
  1      +1     +1
  2      -1             +1     +1     -1
  3                     -1
  4             -1             -1     +1

Clearly, all information about the structure of the graph is contained in the node-arc incidence matrix. This representation is often very useful for computational purposes, since it is easily [...]

[...] the cycle and has a +1 in the row corresponding to $n_i$ and a -1 in the row corresponding to $n_{i+1}$. As a result, the +1's and -1's all cancel in the combination. Thus, the combination is the zero vector, contradicting the linear independence of $a_1, a_2, \ldots, a_m$. We have therefore established that the collection of arcs corresponding to a basis does not contain a cycle. Since there are $n - 1$ arcs and $n$ nodes, it is [...]
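The node-arc incidence matrix of Table 6.1 can be built mechanically. The sketch below is ours, using the sign convention of +1 where an arc leaves a node and -1 where it enters:

```python
# Node-arc incidence matrix: one column per arc, +1 at the arc's origin
# node, -1 at its destination node (nodes are numbered from 1).
def incidence_matrix(n_nodes, arcs):
    A = [[0] * len(arcs) for _ in range(n_nodes)]
    for k, (i, j) in enumerate(arcs):
        A[i - 1][k] = 1     # arc leaves node i
        A[j - 1][k] = -1    # arc enters node j
    return A

arcs = [(1, 2), (1, 4), (2, 3), (2, 4), (4, 2)]  # the arcs of Fig. 6.2
A = incidence_matrix(4, arcs)

# Each column sums to zero: every arc leaves one node and enters another.
assert all(sum(A[r][k] for r in range(4)) == 0 for k in range(len(arcs)))
for row in A:
    print(row)
# → [1, 1, 0, 0, 0]
#   [-1, 0, 1, 1, -1]
#   [0, 0, -1, 0, 0]
#   [0, -1, 0, -1, 1]
```

The zero column sums are exactly the redundancy exploited later: the rows of the incidence matrix sum to the zero vector.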
two Hungarian mathematicians, and this method was later generalized to form the primal-dual method for linear programming.

6.6 BASIC NETWORK CONCEPTS

We now begin a study of an entirely different topic in linear programming: graphs and flows in networks. It will be seen, however, that this topic provides a foundation for a wide assortment of linear programming applications and, in fact, provides a different [...]

[...] order $x_{13}, x_{23}, x_{25}, x_{35}, x_{32}, x_{31}, x_{41}, x_{51}, x_{54}$. The smallest variable with a minus assigned to it is $x_{51} = 10$. Thus we set $\theta = 10$.

The Transportation Algorithm

It is now possible to put together the components developed to this point in the form of a complete revised simplex procedure for the transportation problem. The steps are:

Step 1. Compute [...]

[...] which only one multiplier must be determined. This is the bottom right corner element, and it gives $u_4 = 2$. Then, from the corresponding equation $c_{ij} = u_i + v_j$, $v_4$ is found to be 2. Next, $u_3$ and $u_2$ are determined, then $v_3$ and $v_2$, and finally $u_1$ and $v_1$. The result is shown below:

[Array: the cost array of the current basis bordered by the computed multipliers $u_i$ (right column) and $v_j$ (bottom row); the individual entries are not cleanly recoverable from this copy.]

Cycle of Change

In accordance with the general simplex procedure, if a nonbasic variable [...]

[...] variables. This value is added to all cells that have a plus assigned to them and subtracted from all cells that have a minus assigned. The result will be the new basic feasible solution. The procedure is illustrated by the following example.

Example 3. A completed solution array is shown below:

[Array: a completed solution array with plus and minus labels marking the cycle of change for the entering variable; the individual entries are not cleanly recoverable from this copy.]

In this example $x_{53}$ is the entering variable, so a plus sign [...]
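The multiplier-scanning procedure of Section 6.4 can be sketched in code. The example below is our own: it uses the cost matrix of Example 1 together with the basic cells of the Northwest Corner solution, and fixes $u_1 = 0$ as the arbitrary value of Step 1 (the book's worked example starts from a different basis and multiplier, so its numbers differ).

```python
# Simplex multipliers u_i, v_j for a transportation basis: repeatedly scan
# the basic cells, solving u_i + v_j = c_ij whenever one side is known.
C = [[3, 4, 6, 8, 9],
     [2, 2, 4, 5, 5],
     [2, 2, 2, 3, 2],
     [3, 3, 2, 4, 2]]
basis = [(0, 0), (0, 1), (1, 1), (1, 2), (1, 3), (2, 3), (3, 3), (3, 4)]

u = [None] * 4
v = [None] * 5
u[0] = 0                                  # Step 1: arbitrary starting value
while any(w is None for w in u + v):      # Step 2: scan until all determined
    for i, j in basis:
        if u[i] is not None and v[j] is None:
            v[j] = C[i][j] - u[i]
        elif v[j] is not None and u[i] is None:
            u[i] = C[i][j] - v[j]

assert all(u[i] + v[j] == C[i][j] for i, j in basis)
print(u, v)
# → [0, -2, -4, -3] [3, 4, 6, 7, 5]
```

Because a basis corresponds to a spanning tree of cells, the scan always makes progress and terminates with every multiplier determined.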
sequence must have the form $(i, k_1), (k_1, k_2), (k_2, k_3), \ldots, (k_m, j)$. In Fig. 6.1, (1, 2), (2, 4), (4, 3) is a chain between nodes 1 and 3. If a direction of movement along a chain is specified, say from node $i$ to node $j$, it is then called a path from $i$ to $j$. A cycle is a chain leading from node $i$ back to node $i$. The chain (1, 2), (2, 4), (4, 3), (3, 1) is a cycle for the graph in Fig. 6.1. A graph is connected if there is a chain [...]
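The chain and cycle definitions above can be checked mechanically. A small sketch (ours), treating each arc as traversable in either direction, as the definition of a chain allows:

```python
# A chain between nodes i and j is a sequence of arcs
# (i,k1),(k1,k2),...,(km,j); a cycle is a chain from i back to i.
def is_chain(arcs, start, end):
    """True if the arc sequence forms a chain from start to end."""
    node = start
    for a, b in arcs:
        if node == a:
            node = b
        elif node == b:          # an arc may be traversed against its direction
            node = a
        else:
            return False         # the sequence is not connected end to end
    return node == end

assert is_chain([(1, 2), (2, 4), (4, 3)], 1, 3)          # chain from 1 to 3
assert is_chain([(1, 2), (2, 4), (4, 3), (3, 1)], 1, 1)  # adding (3,1): a cycle
assert not is_chain([(1, 2), (4, 3)], 1, 3)              # gap: not a chain
```

This "either direction" traversal is exactly what distinguishes a chain from a path, for which the direction of movement must agree with each arc.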
