David G. Luenberger, Yinyu Ye - Linear and Nonlinear Programming, International Series (excerpt: Sections 3.9-3.11 and the beginning of Chapter 4)

3.9 Decomposition

The search for the minimal element in (48) is normally made with respect to nonbasic columns only. The search can be formally extended to include basic columns as well, however, since for basic elements

    p_ij − λ^T [q_ij; e_i] = 0.

The extra zero values do not influence the subsequent procedure, since a new column will enter only if the minimal value is less than zero. We therefore define r* as the minimum relative cost coefficient for all possible basis vectors. That is,

    r* = min_{i ∈ [1, N]} { r*_i },   where   r*_i = min_{j ∈ [1, K_i]} { p_ij − λ^T [q_ij; e_i] }.

Using the definitions of p_ij and q_ij, this becomes

    r*_i = min_{j ∈ [1, K_i]} { c_i^T x_ij − λ_0^T L_i x_ij − λ_{m+i} },        (49)

where λ_0 is the vector made up of the first m elements of λ, m being the number of rows of L_i (the number of linking constraints in (43)). The minimization problem in (49) is actually solved by the i-th subproblem:

    minimize   (c_i^T − λ_0^T L_i) x_i
    subject to A_i x_i = b_i,  x_i ≥ 0.        (50)

This follows from the fact that λ_{m+i} is independent of the extreme-point index j (since λ is fixed during the determination of the r_i's), and that the solution of (50) must be that extreme point of S_i, say x_ik, of minimum cost, using the adjusted cost coefficients c_i^T − λ_0^T L_i.

Thus, an algorithm for this special version of the revised simplex method applied to the master problem is the following. Given a basis B:

Step 1. Calculate the current basic solution x_B, and solve λ^T B = c_B^T for λ.

Step 2. For each i = 1, 2, ..., N, determine the optimal solution x*_i of the i-th subproblem (50) and calculate

    r*_i = (c_i^T − λ_0^T L_i) x*_i − λ_{m+i}.        (51)

If all r*_i > 0, stop; the current solution is optimal.

Step 3. Determine which column is to enter the basis by selecting the minimal r*_i.

Step 4. Update the basis of the master problem as usual.

This algorithm has an interesting economic interpretation in the context of a multidivisional firm minimizing its total cost of operations, as described earlier. Division i's activities are internally constrained by A_i x_i = b_i, and the common resources b_0 impose the linking constraints. At Step 1 of the algorithm, the firm's central management formulates its current master plan, which is perhaps suboptimal, and announces a new set of prices that each division must use to revise its recommended strategy at Step 2. In particular, −λ_0 reflects the new prices that higher management has placed on the common resources. The division that reports the greatest rate of potential cost improvement has its recommendations incorporated in the new master plan at Step 3, and the process is repeated. If no cost improvement is possible, central management settles on the current master plan.
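Step 2 of this algorithm is straightforward to prototype with an off-the-shelf LP solver. The sketch below is illustrative only and is not part of the text: it assumes SciPy is available, that each subproblem is bounded (so an optimal extreme point exists), and it uses hypothetical argument names c_i, L_i, A_i, b_i, lam0 and lam_mi for c_i, L_i, A_i, b_i, λ_0 and λ_{m+i}.

```python
# Sketch of the pricing step (Step 2) for one subproblem, using SciPy's LP solver.
# Assumes the subproblem is bounded; extreme rays are not handled here.
import numpy as np
from scipy.optimize import linprog

def price_subproblem(c_i, L_i, A_i, b_i, lam0, lam_mi):
    """Solve subproblem (50) with adjusted costs and return (r_i*, x_i*)."""
    adjusted = c_i - L_i.T @ lam0                 # c_i^T - lam0^T L_i, as a vector
    res = linprog(adjusted, A_eq=A_i, b_eq=b_i,
                  bounds=[(0, None)] * len(c_i), method="highs")
    if not res.success:
        raise RuntimeError("subproblem did not solve: " + res.message)
    r_i = adjusted @ res.x - lam_mi               # relative cost coefficient, formula (51)
    return r_i, res.x
```

If the smallest r_i* returned over i = 1, ..., N is negative, the corresponding subproblem solution generates the column that enters the master basis; otherwise the current master solution is optimal.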
Example 2. Consider the problem

    minimize   −x1 − 2x2 − 4y1 − 3y2
    subject to  x1 +  x2 + 2y1       ≤ 4
                      x2 +  y1 +  y2 ≤ 3
               2x1 +  x2             ≤ 4
                x1 +  x2             ≤ 2
                            y1 +  y2 ≤ 2
                           3y1 + 2y2 ≤ 5
                x1 ≥ 0, x2 ≥ 0, y1 ≥ 0, y2 ≥ 0.

The decomposition algorithm can be applied by introducing slack variables and identifying the first two constraints as linking constraints. Rather than using double subscripts, the primary variables of the subsystems are taken to be x = (x1, x2), y = (y1, y2).

Initialization. Any vector (x, y) of the master problem must be of the form

    x = Σ_{i=1}^{I} α_i x_i,    y = Σ_{j=1}^{J} β_j y_j,

where the x_i and y_j are extreme points of the subsystems, and

    Σ_{i=1}^{I} α_i = 1,   Σ_{j=1}^{J} β_j = 1,   α_i ≥ 0,  β_j ≥ 0.

Therefore the master problem is

    minimize   Σ_{i=1}^{I} p_i α_i + Σ_{j=1}^{J} t_j β_j
    subject to Σ_{i=1}^{I} α_i L_1 x_i + Σ_{j=1}^{J} β_j L_2 y_j + s = b
               Σ_{i=1}^{I} α_i = 1,   α_i ≥ 0,  i = 1, 2, ..., I
               Σ_{j=1}^{J} β_j = 1,   β_j ≥ 0,  j = 1, 2, ..., J,

where p_i is the cost of x_i, t_j is the cost of y_j, and where s = (s1, s2) is a vector of slack variables for the linking constraints. This problem corresponds to (47).

A starting basic feasible solution is s = b, α_1 = 1, β_1 = 1, where x_1 = (0, 0) and y_1 = (0, 0) are extreme points of the subsystems. The corresponding starting basis is B = I and, accordingly, the initial tableau for the revised simplex method for the master problem is

    Variable        B^{-1}        Value
    s1          1   0   0   0       4
    s2          0   1   0   0       3
    α1          0   0   1   0       1
    β1          0   0   0   1       1

Then λ^T = (0, 0, 0, 0) B^{-1} = (0, 0, 0, 0).

Iteration 1. The relative cost coefficients are found by solving the subproblems defined by (50). The first is

    minimize   −x1 − 2x2
    subject to 2x1 + x2 ≤ 4
                x1 + x2 ≤ 2
                x1 ≥ 0, x2 ≥ 0.

This problem can be solved easily (by the simplex method or by inspection). The solution is x = (0, 2), with r_1 = −4. The second subsystem is solved correspondingly. The solution is y = (1, 1) with r_2 = −7. It follows from Step 2 of the general algorithm that r* = −7. We let y_2 = (1, 1) and bring β_2 into the basis of the master problem.

Master Iteration. The new column to enter the basis is

    [L_2 y_2; 0; 1] = (2, 2, 0, 1),

and since the current basis is B = I, the new tableau is

    Variable        B^{-1}        Value   New column
    s1          1   0   0   0       4         2
    s2          0   1   0   0       3         2
    α1          0   0   1   0       1         0
    β1          0   0   0   1       1         1

which after pivoting leads to

    Variable        B^{-1}        Value
    s1          1   0   0  −2       2
    s2          0   1   0  −2       1
    α1          0   0   1   0       1
    β2          0   0   0   1       1

Since t_2 = c_2^T y_2 = −7, we find

    λ^T = (0, 0, 0, −7) B^{-1} = (0, 0, 0, −7).

Iteration 2. Since λ_0, which comprises the first two components of λ, has not changed, the subproblems remain the same, but now, according to (51), r* = −4 and α_2 should be brought into the basis, where x_2 = (0, 2).

Master Iteration. The new column to enter the basis is

    [L_1 x_2; 1; 0] = (2, 2, 1, 0).

This must be multiplied by B^{-1} to obtain its representation in terms of the current basis (but the representation does not change it in this case). The master tableau is then updated as follows:

    Variable        B^{-1}        Value   New column
    s1          1   0   0  −2       2         2
    s2          0   1   0  −2       1         2
    α1          0   0   1   0       1         1
    β2          0   0   0   1       1         0

    Variable        B^{-1}        Value
    s1          1  −1   0   0       1
    α2          0  1/2  0  −1      1/2
    α1          0 −1/2  1   1      1/2
    β2          0   0   0   1       1

Since p_2 = −4, we have

    λ^T = (0, −4, 0, −7) B^{-1} = (0, −2, 0, −3).

Iteration 3. The subsystems' problems are now

    minimize   −x1
    subject to 2x1 + x2 ≤ 4
                x1 + x2 ≤ 2
                x1 ≥ 0, x2 ≥ 0

and

    minimize   −2y1 − y2 + 3
    subject to  y1 +  y2 ≤ 2
               3y1 + 2y2 ≤ 5
                y1 ≥ 0, y2 ≥ 0.

It follows that x_3 = (2, 0) and α_3 should be brought into the basis.

Master Iteration. Proceeding as usual, we obtain the new tableau and new λ as follows:

    Variable        B^{-1}        Value   New column
    s1          1  −1   0   0       1         2
    α2          0  1/2  0  −1      1/2        0
    α1          0 −1/2  1   1      1/2        1
    β2          0   0   0   1       1         0

    Variable        B^{-1}        Value
    s1          1   0  −2  −2       0
    α2          0  1/2  0  −1      1/2
    α3          0 −1/2  1   1      1/2
    β2          0   0   0   1       1

    λ^T = (0, −4, −2, −7) B^{-1} = (0, −1, −2, −5).

The subproblems now have objectives −x1 − x2 + 2 and −3y1 − 2y2 + 5, respectively, which both have minimum values of zero. Thus the current solution is optimal. The solution is (1/2)x_2 + (1/2)x_3 + y_2 or, equivalently, x1 = 1, x2 = 1, y1 = 1, y2 = 1.
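As a sanity check, which is not part of the text, the full problem of Example 2 is small enough to hand directly to a generic LP solver; the sketch below assumes SciPy's HiGHS backend and should reproduce the solution obtained above by decomposition.

```python
# Solve Example 2 directly (all six constraints at once) as a cross-check.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1, -2, -4, -3])          # variables ordered (x1, x2, y1, y2)
A_ub = np.array([
    [1, 1, 2, 0],                       # linking constraint 1
    [0, 1, 1, 1],                       # linking constraint 2
    [2, 1, 0, 0],                       # x-subsystem
    [1, 1, 0, 0],
    [0, 0, 1, 1],                       # y-subsystem
    [0, 0, 3, 2],
])
b_ub = np.array([4, 3, 4, 2, 2, 5])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print(res.x, res.fun)    # expect x1 = x2 = y1 = y2 = 1 with objective value -10
```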
The solution is 1/2x 2 +1/2x 3 +y 2 , or equivalently, x 1 =1, x 2 =1, y 1 =1, y 2 =1. 3.10 SUMMARY The simplex method is founded on the fact that the optimal value of a linear program, if finite, is always attained at a basic feasible solution. Using this foundation there are two ways in which to visualize the simplex process. The first is to view the process as one of continuous change. One starts with a basic feasible solution and imagines that some nonbasic variable is increased slowly from zero. As the value of this variable is increased, the values of the current basic variables are continuously adjusted so that the overall vector continues to satisfy the system of linear equality constraints. The change in the objective function due to a unit change in this nonbasic variable, taking into account the corresponding required changes in the values of the basic variables, is the relative cost coefficient associated with the nonbasic variable. If this coefficient is negative, then the objective value will be continuously improved as the value of this nonbasic variable is increased, and therefore one increases the variable as far as possible, to the point where further increase would violate feasibility. At this point the value of one of the basic variables is zero, and that variable is declared nonbasic, while the nonbasic variable that was increased is declared basic. The other viewpoint is more discrete in nature. Realizing that only basic feasible solutions need be considered, various bases are selected and the corre- sponding basic solutions are calculated by solving the associated set of linear equations. The logic for the systematic selection of new bases again involves the relative cost coefficients and, of course, is derived largely from the first, continuous, viewpoint. 3.11 EXERCISES 1. Using pivoting, solve the simultaneous equations 3x 1 +2x 2 =5 5x 1 + x 2 =9 2. Using pivoting, solve the simultaneous equations x 1 +2x 2 +x 3 =7 2x 1 −x 2 +2x 3 =6 x 1 +x 2 +3x 3 =12 3. Solve the equations in Exercise 2 by Gaussian elimination as described in Appendix C. 3.11 Exercises 71 4. Suppose B is an m ×m square nonsingular matrix, and let the tableau T be constructed, T = I B where I is the m ×m identity matrix. Suppose that pivot operations are performed on this tableau so that it takes the form [C, I]. Show that C =B −1 . 5. Show that if the vectors a 1  a 2 a m are a basis in E m , the vectors a 1  a 2 a p−1  a q  a p+1 a m also are a basis if and only if y pq = 0, where y pq is defined by the tableau (7). 6. If r j > 0 for every j corresponding to a variable x j that is not basic, show that the corresponding basic feasible solution is the unique optimal solution. 7. Show that a degenerate basic feasible solution may be optimal without satisfying r j  0 for all j. 8. a) Using the simplex procedure, solve maximize −x 1 +x 2 subject to x 1 −x 2  2 x 1 +x 2  6 x 1  0x 2  0 b) Draw a graphical representation of the problem in x 1 , x 2 space and indicate the path of the simplex steps. c) Repeat for the problem maximize x 1 +x 2 subject to −2x 1 +x 2  1 x 1 −x 2  1 x 1  0x 2  0 9. Using the simplex procedure, solve the spare-parts manufacturer’s problem (Exercise 4, Chapter 2). 10. Using the simplex procedure, solve minimize 2x 1 +4x 2 +x 3 +x 4 subject to x 1 +3x 2 +x 4  4 2x 1 + x 2  3 x 2 +4x 3 +x 4  3 x 1  0 i =1 2 3 4 11. 
11. For the linear program of Exercise 10,

a) How much can the elements of b = (4, 3, 3) be changed without changing the optimal basis?

b) How much can the elements of c = (2, 4, 1, 1) be changed without changing the optimal basis?

c) What happens to the optimal cost for small changes in b?

d) What happens to the optimal cost for small changes in c?

12. Consider the problem

    minimize    x1 − 3x2 − 0.4x3
    subject to 3x1 −  x2 + 2x3 ≤  7
              −2x1 + 4x2       ≤ 12
              −4x1 + 3x2 + 3x3 ≤ 14
               x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

a) Find an optimal solution.

b) How many optimal basic feasible solutions are there?

c) Show that if c4 + (1/3)a14 + (4/5)a24 ≥ 0, then another activity x4 can be introduced with cost coefficient c4 and activity vector (a14, a24, a34) without changing the optimal solution.

13. Rather than select the variable corresponding to the most negative relative cost coefficient as the variable to enter the basis, it has been suggested that a better criterion would be to select that variable which, when pivoted in, will produce the greatest improvement in the objective function. Show that this criterion leads to selecting the variable x_k corresponding to the index k minimizing max_{i: y_ik > 0} r_k y_i0 / y_ik.

14. In the ordinary simplex method one new vector is brought into the basis and one removed at every step. Consider the possibility of bringing two new vectors into the basis and removing two at each stage. Develop a complete procedure that operates in this fashion.

15. (Degeneracy.) If a basic feasible solution is degenerate, it is then theoretically possible that a sequence of degenerate basic feasible solutions will be generated that endlessly cycles without making progress. It is the purpose of this exercise and the next two to develop a technique that can be applied to the simplex method to avoid this cycling. Corresponding to the linear system Ax = b, where A = [a_1, a_2, ..., a_n], define the perturbed system Ax = b(ε), where b(ε) = b + εa_1 + ε^2 a_2 + ··· + ε^n a_n, ε > 0. Show that if there is a basic feasible solution (possibly degenerate) to the unperturbed system with basis B = [a_1, a_2, ..., a_m], then, corresponding to the same basis, there is a nondegenerate basic feasible solution to the perturbed system for some range of ε > 0.

16. Show that corresponding to any basic feasible solution to the perturbed system of Exercise 15, which is nondegenerate for some range of ε > 0, and to a vector a_k not in the basis, there is a unique vector a_i in the basis which when replaced by a_k leads to a basic feasible solution; and that solution is nondegenerate for a range of ε > 0.

17. Show that the tableau associated with a basic feasible solution of the perturbed system of Exercise 15, and which is nondegenerate for a range of ε > 0, is identical with that of the unperturbed system except in the column under b(ε). Show how the proper pivot in a given column to preserve feasibility of the perturbed system can be determined from the tableau of the unperturbed system. Conclude that the simplex method will avoid cycling if, whenever there is a choice in the pivot element of a column k arising from a tie in the minimum of y_i0/y_ik among the elements i ∈ I_0, the tie is resolved by finding the minimum of y_i1/y_ik, i ∈ I_0. If there still remain ties among elements i ∈ I_1, the process is repeated with y_i2/y_ik, etc., until there is a unique element.
18. Using the two-phase simplex procedure solve

a)  minimize   −3x1 +  x2 + 3x3 −  x4
    subject to   x1 + 2x2 −  x3 +  x4 = 0
                2x1 − 2x2 + 3x3 + 3x4 = 9
                 x1 −  x2 + 2x3 −  x4 = 6
                 x_i ≥ 0,  i = 1, 2, 3, 4;

b)  minimize    x1 + 6x2 −  7x3 +  x4 + 5x5
    subject to 5x1 − 4x2 + 13x3 − 2x4 +  x5 = 20
                x1 −  x2 +  5x3 −  x4 +  x5 =  8
                x_i ≥ 0,  i = 1, 2, 3, 4, 5.

19. Solve the oil refinery problem (Exercise 3, Chapter 2).

20. Show that in the phase I procedure of a problem that has feasible solutions, if an artificial variable becomes nonbasic, it need never again be made basic. Thus, when an artificial variable becomes nonbasic its column can be eliminated from future tableaus.

21. Suppose the phase I procedure is applied to the system Ax = b, x ≥ 0, and that the resulting tableau (ignoring the cost row) has the form

    x_1 ... x_k   x_{k+1} ... x_n   y_1 ... y_k   y_{k+1} ... y_m
        I               R1               S1             0          b̄_1 ... b̄_k
        0               R2               S2             I           0  ...  0

This corresponds to having m − k basic artificial variables at zero level.

a) Show that any nonzero element in R2 can be used as a pivot to eliminate a basic artificial variable, thus yielding a similar tableau but with k increased by one.

b) Suppose that the process in (a) has been repeated to the point where R2 = 0. Show that the original system is redundant, and show how phase II may proceed by eliminating the bottom rows.

c) Use the above method to solve the linear program

    minimize   2x1 + 6x2 + x3 +  x4
    subject to  x1 + 2x2      +  x4 = 6
                x1 + 2x2 + x3 +  x4 = 7
                x1 + 3x2 − x3 + 2x4 = 7
                x1 +  x2 + x3       = 5
                x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0.

22. Find a basic feasible solution to

     x1 + 2x2 −  x3 +  x4 =  3
    2x1 + 4x2 +  x3 + 2x4 = 12
     x1 + 4x2 + 2x3 +  x4 =  9
     x_i ≥ 0,  i = 1, 2, 3, 4.

23. Consider the system of linear inequalities Ax ≥ b, x ≥ 0 with b ≥ 0. This system can be transformed to standard form by the introduction of m surplus variables so that it becomes Ax − y = b, x ≥ 0, y ≥ 0. Let b_k = max_i b_i and consider the new system in standard form obtained by adding the kth row to the negative of every other row. Show that the new system requires the addition of only a single artificial variable to obtain an initial basic feasible solution. Use this technique to find a basic feasible solution to the system

     x1 + 2x2 +  x3 ≥ 4
    2x1 +  x2 +  x3 ≥ 5
    2x1 + 3x2 + 2x3 ≥ 6
    x_i ≥ 0,  i = 1, 2, 3.

24. It is possible to combine the two phases of the two-phase method into a single procedure by the big-M method. Given the linear program in standard form

    minimize   c^T x
    subject to Ax = b,  x ≥ 0,

one forms the approximating problem

    minimize   c^T x + M Σ_{i=1}^{m} y_i
    subject to Ax + y = b,  x ≥ 0,  y ≥ 0.

In this problem y = (y1, y2, ..., ym) is a vector of artificial variables and M is a large constant. The term M Σ_{i=1}^{m} y_i serves as a penalty term for nonzero y_i's. If this problem is solved by the simplex method, show the following:

a) If an optimal solution is found with y = 0, then the corresponding x is an optimal basic feasible solution to the original problem.

b) If for every M > 0 an optimal solution is found with y ≠ 0, then the original problem is infeasible.

c) If for every M > 0 the approximating problem is unbounded, then the original problem is either unbounded or infeasible.

d) Suppose now that the original problem has a finite optimal value V. Let V(M) be the optimal value of the approximating problem. Show that V(M) ≤ V.

e) Show that for M_1 ≤ M_2 we have V(M_1) ≤ V(M_2).
f) Show that there is a value M_0 such that for M ≥ M_0, V(M) = V, and hence conclude that the big-M method will produce the right solution for large enough values of M.

[...]

... appropriate sequence of tableaus is given below without explanation. [sequence of simplex tableaus omitted from this preview] The optimal solution is x1 = 0, x2 = 1, x3 = 2. The corresponding dual program is

    maximize   4λ1 + 6λ2
    subject to 2λ1 +  λ2 ≤ −1
               2λ1 + 2λ2 ≤ −4
                λ1 + 2λ2 ≤ −3
                λ1 ≤ 0, λ2 ≤ 0.

The optimal solution to the dual is obtained directly from the ...

[...]

... [column entries of the tableau of Exercise 27, as shown in this preview: y1: 1, 0, 0, 0; y2: 2/3, −7/3, −2/3, 8/3; y3: 0, 3, −2, 11; y4: 0, 1, 0, 0; y5: 4/3, −2/3, 2/3, 4/3; y6: 0, 0, 1, 0; y0: 4, 2, 2, −8]

a) Determine the next pivot element.

b) Given that the inverse of the current basis is B^{-1} = [a1, a4, a6]^{-1} [3 × 3 matrix garbled in this preview] and the corresponding cost coefficients are c_B^T = (c1, c4, c6) = (1, −3, 1), find the original problem.

28. In many applications of linear programming ...

[...]

... contradicting r_p ≥ 0.

32. Use the Dantzig-Wolfe decomposition method to solve

    minimize   −4x1 −  x2 − 3x3 − 2x4
    subject to  2x1 + 2x2 +  x3 + 2x4 ≤ 6
                       x2 + 2x3 + 3x4 ≤ 4
                2x1 +  x2             ≤ 5
                       x2 −  x3 + 2x4 ≤ 1
                             x3 + 2x4 ≤ 2
                x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0.

REFERENCES

3.1-3.7 All of this is now standard material contained in most courses in linear programming. See the references cited at the end of Chapter 2. For the original work in this area, ...

[...]

... appeared in the first tableau: λ1 = −1, λ2 = −1.

Geometric Interpretation. The duality relations can be viewed in terms of the dual interpretations of linear constraints emphasized in Chapter 3. Consider a linear program in standard form. For sake of concreteness we consider the problem

    minimize   18x1 + 12x2 + 2x3 + 6x4
    subject to  3x1 +   x2 − 2x3 +  x4 = 2
                 x1 +  3x2       −  x4 = 2
                x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0.

The columns of the constraints ...

[...]

... arrows.

a) Formulate the above problem as a linear programming problem with upper bounds. (Hint: Denote by x_ij the number of calls routed from city i to city j.)

b) Find the solution by inspection of the graph.

26. Using the revised simplex method find a basic feasible solution to

     x1 + 2x2 −  x3 +  x4 =  3
    2x1 + 4x2 +  x3 + 2x4 = 12
     x1 + 4x2 + 2x3 +  x4 =  9
     x_i ≥ 0,  i = 1, 2, 3, 4.

27. The following tableau is an intermediate ...

[...]

... in Fig. 4.2. A basic solution represents construction of b with positive weights on two of the a_i's. The dual problem is

    maximize   2λ1 + 2λ2
    subject to 3λ1 +  λ2 ≤ 18
                λ1 + 3λ2 ≤ 12
              −2λ1       ≤  2
                λ1 −  λ2 ≤  6.

[Fig. 4.2: The primal requirements space.] The dual problem is shown geometrically in Fig. 4.3. Each column a_i of the primal defines a constraint of the dual as a half-space ...

[...]

... issues of linear programming, see Murtagh [M9].

3.9 For a more comprehensive description of the Dantzig and Wolfe [D11] decomposition method, see Dantzig [D6].

3.11 The degeneracy technique discussed in Exercises 15-17 is due to Charnes [C2]. The anticycling method of Exercise 35 is due to Bland [B19]. For the state of the art in simplex solvers see Bixby [B18].

[...]

Chapter 4. DUALITY ...

[...]

... Orden and Wolfe [D8], Orchard-Hays [O1], and Dantzig [D4] for the revised simplex method; and Charnes and Lemke [C3] and Dantzig [D5] for upper bounds. The synthetic carrot interpretation is due to Gale [G2].

3.8 The idea of using LU decomposition for the simplex method is due to Bartels and Golub [B2]. See also Bartels [B1]. For a nice simple introduction to Gaussian elimination, see Forsythe and Moler [F15]. ...
[...] c_B^T B^{-1} to the dual is obtained. In particular, if, as is the case with slack variables, c_I = 0, then the elements in the last row under B^{-1} are equal to the negative of the components of the solution to the dual.

Example. Consider the primal program

    minimize   −x1 − 4x2 − 3x3
    subject to 2x1 + 2x2 +  x3 ≤ 4
                x1 + 2x2 + 2x3 ≤ 6
                x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

This can be solved by introducing slack variables and ...
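Although the text reads the dual solution off the simplex tableau, the same primal-dual relationship can be checked numerically with any LP solver that reports constraint multipliers. The sketch below is not part of the text; it assumes SciPy's HiGHS backend, whose ineqlin.marginals field plays the role of the dual vector λ (sign conventions vary between solvers and texts).

```python
# Numerical check of the primal-dual relationship for the example above (illustrative only).
import numpy as np
from scipy.optimize import linprog

c = np.array([-1, -4, -3])
A_ub = np.array([[2, 2, 1],
                 [1, 2, 2]])
b_ub = np.array([4, 6])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
lam = res.ineqlin.marginals            # multipliers on the two inequality constraints
print("primal x*       :", res.x)      # expect (0, 1, 2)
print("primal value    :", res.fun)    # expect -10
print("dual lambda     :", lam)
print("dual value l^T b:", lam @ b_ub) # equals the primal optimal value (strong duality)
```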
