Multiprocessor Scheduling Part 5 ppt

Multiprocessor Scheduling: Theory and Applications
On-line Scheduling on Identical Machines for Jobs with Arbitrary Release Times

For the total idle time U(L), the next lemma provides an upper bound.

Lemma 6. For any job list L = {J_1, J_2, …, J_n}, we have

Proof. By the definition of R, no machine has idle time later than time point R. We prove this lemma in two cases.

Case 1. At most machines in A are idle simultaneously in any interval [a, b] with a < b. Let v_i be the sum of the idle time on machine M_i before time point and be the sum of the idle time on machine M_i after time point , i = 1, 2, …, m. The following facts are obvious: In addition, we have because at most machines in A are idle simultaneously in any interval [a, b] with a < b ≤ R. Thus we have

Case 2. At least machines in A are idle simultaneously in an interval [a, b] with a < b. In this case, we select a and b such that at most machines in A are idle simultaneously in any interval [a', b'] with a < b ≤ a' < b'. Let That means > by our assumption. Let be such a machine that its idle interval [a, b] is created last among all machines. Let Suppose the idle interval [a, b] on machine is created by job . That means that the idle interval [a, b] on machine M_i for any i ∈ A' has been created before job is assigned. Hence we have for any i ∈ A'. In the following, let We have b because b and b, i ∈ A'. What we do in estimating is to find a job index set S such that each job J_j (j ∈ S) satisfies and , and hence by (8) we have

To do so, we first show that (9) holds. Note that job must be assigned in Step 5 because it is an idle job. We can conclude that (9) holds if we can prove that job is assigned in Step 5 because condition (d) of Step 4 is violated. That means we can establish (9) by proving that the following three inequalities hold by the rules of algorithm NMLS: (a) (b) (c)

The reasoning for the three inequalities is as follows. (a).
As we have Next we have because idle interval [a, b] on machine is created by job . Hence we have i.e. the first inequality is proved. (b). This follows because . (c). As we have .

For any i ∈ A', by (9), and noticing that and , we have That means job appears before , i.e. . We set { : is processed in interval on machine M_i}, i ∈ A'. We have because is the last idle job on machine M_i for any i ∈ A'. Hence we have (10).

Now we show that the following (11) holds: (11) It is easy to check that and for any i ∈ A', i.e. (11) holds for any j ∈ S_i (i ∈ A') and j = . For any j ∈ S_i (i ∈ A') and j ≠ , we establish (11) by showing that J_j is assigned in Step 4. It is clear that job J_j is not assigned in Step 5, because it is not an idle job. Also > j because . Thus we have where the first inequality results from j and the last inequality results from > j. That means J_j is not assigned in Step 3, because job J_j is not assigned on the machine with the smallest completion time. In addition, observing that job is the last idle job on machine M_i, and by the definition of S_i, we can conclude that J_j is assigned on machine M_i to start at time . That means j > , so J_j cannot be assigned in Step 2. Hence J_j must be assigned in Step 4. Thus by condition (b) in Step 4, we have where the second inequality results from j > . Summing up the conclusions above, (11) holds for any j ∈ S. By (8), (10) and (11) we have

Now we begin to estimate the total idle time U(L). Let be the sum of the idle time on machine M_i before time point and be the sum of the idle time on machine M_i after time point , i = 1, 2, …, m. The following facts are obvious by our definitions: By our definition of b and k_1, we have b ≤ and hence at most machines in A are idle simultaneously in any interval [a', b'] with a' < b' ≤ R.
Noting that no machine has idle time later than R, we have Thus we have The last inequality follows by observing that the function is a decreasing function of for . The second inequality follows because and is a decreasing function of on . The fact that it is a decreasing function follows because < 0 as .

The next three lemmas prove that is an upper bound for . Without loss of generality, from now on we suppose that the completion time of job J_n is the largest job completion time over all machines, i.e. the makespan . By this assumption, J_n cannot be assigned in Step 2.

Lemma 7. If J_n is placed on M_k with L_k ≤ r_n < L_{k+1}, then

Proof. This results from = r_n + p_n and ≥ r_n + p_n.

Lemma 8. If J_n is placed on M_{k+1} with L_k ≤ r_n < L_{k+1}, then

Proof. Because = L_{k+1} + p_n and ≥ r_n + p_n, this lemma holds if L_{k+1} + p_n ≤ (p_n + r_n). Suppose L_{k+1} + p_n > (p_n + r_n). For any 1 ≤ i ≤ m, let { : is processed in interval on machine M_i}. It is easy to see that hold. Let By the rules of our algorithm, we have because J_n is assigned in Step 4. Hence we have and . In the same way as in the proof of Lemma 6, we can conclude that the following inequalities hold for any i ∈ B:

Thus by (8) and (10) we have The second-to-last inequality results from the facts that and as . The last equality follows because and r_n ≥ r_{11}. Also we have because J_n is assigned in Step 4. Hence we have The second inequality results from the fact that is a decreasing function of for . The last inequality results from , and the last equation results from equation (4).

Lemma 9. If job J_n is placed on machine M_1, then we have

Proof. In this case we have L_1 ≥ r_n and = L_1 + p_n. Thus we have

The next theorem proves that NMLS has a better performance than MLS for m ≥ 2.

Theorem 10. For any job list L and m ≥ 2, we have

Proof.
By Lemma 5 and Lemmas 7-9, Theorem 10 is proved.

A comparison among the upper bounds of the three algorithms' performance ratios for some values of m is made in Table 1, where .

m     α_m       β_m       R(m, LS)   R(m, MLS)   R(m, NMLS)
2     2.943     1.443     2.50000    2.47066     2.3465
3     3.42159   1.56619   2.66667    2.63752     2.54616
9     3.88491   1.68955   2.88889    2.83957     2.7075
12    3.89888   1.69333   2.91668    2.86109     2.71194
∞     4.13746   1.75831   3.00000    2.93920     2.78436

Table 1. A comparison of LS, MLS, and NMLS

6. LS scheduling for jobs with similar lengths

In this section we extend the problem to the semi-online setting and assume that the processing times of all jobs lie within [1, r], where r ≥ 1. We analyze the performance of the LS algorithm. Again let L be the job list with n jobs. In the LS schedule, let L_i be the completion time of machine M_i, and let u_{i1}, …, u_{ik_i} denote all the idle time intervals of machine M_i (i = 1, 2, …, m) just before J_n is assigned. The job assigned to start right after u_{ij} is denoted by J_{ij}, with release time r_{ij} and processing time p_{ij}. By the definitions of u_{ij} and r_{ij}, it is easy to see that r_{ij} is the end point of u_{ij}. To simplify the presentation, we abuse notation and use u_{ij} to denote the length of that interval as well.

The following simple inequalities will be referred to later on: (12) (13) (14) where U is the total idle time in the optimal schedule.

The next theorem establishes an upper bound for LS when m ≥ 2 and a tight bound when m = 1.

Theorem 11. For any m ≥ 2, we have (15) and .

We will prove this theorem by examining a minimal counter-example of (15). A job list L = {J_1, J_2, …, J_n} is called a minimal counter-example of (15) if (15) does not hold for L, but (15) holds for any job list L' with |L'| < |L|. In the following discussion, let L be a minimal counter-example of (15).
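The LS quantities just defined (machine completion times L_i and the idle intervals u_{ij}) can be illustrated with a small simulator. The chapter does not reproduce LS pseudocode here, so the sketch below assumes the natural reading used in the proofs: each job, taken in list order, is started at the earliest feasible time over all machines, reusing an idle interval when the job fits into it. All names are illustrative, not the authors' implementation.

```python
# A minimal sketch of list scheduling (LS) with release times, under the
# assumed rule: each job, in list order, starts at the earliest feasible
# time over all machines (an idle interval may be reused when the job fits).

def ls_schedule(jobs, m):
    """jobs: list of (release_time, processing_time) in list order.
    m: number of identical machines.
    Returns (makespan, per-machine sorted lists of (start, end) busy slots)."""
    machines = [[] for _ in range(m)]  # busy slots per machine, kept sorted

    def earliest_start(slots, r, p):
        # Earliest start >= r on this machine, fitting into a gap if possible.
        t = r
        for s, e in slots:
            if t + p <= s:      # job fits in the idle gap before this slot
                return t
            t = max(t, e)       # otherwise skip past the busy slot
        return t

    for r, p in jobs:
        # Pick the machine where this job can start earliest.
        starts = [earliest_start(sl, r, p) for sl in machines]
        i = min(range(m), key=lambda k: starts[k])
        machines[i].append((starts[i], starts[i] + p))
        machines[i].sort()

    makespan = max((e for sl in machines for _, e in sl), default=0.0)
    return makespan, machines

# Tight instance for m = 1 from the proof of Theorem 11 (with r = 1):
# J1 has r_1 = 1 - eps, p_1 = 1; J2 has r_2 = 0, p_2 = 1.
eps = 1e-6
makespan, _ = ls_schedule([(1 - eps, 1.0), (0.0, 1.0)], m=1)
print(makespan)  # about 3 - eps; the optimum is 2, so the ratio tends to 3/2
```

On this instance LS starts J1 at 1 − ε (occupying [1 − ε, 2 − ε]); J2, of length 1, does not fit into the idle interval [0, 1 − ε] and so starts at 2 − ε, giving makespan 3 − ε against an optimal makespan of 2 (J2 first, then J1), so the ratio approaches 3/2 as ε tends to zero.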
It is obvious that, for a minimal counter-example L, the makespan is the completion time of the last job J_n, i.e. L_1 + p_n. Hence we have We first establish the following Observation and Lemma 12 for such a minimal counter-example.

Observation. In the LS schedule, if one of the machines has an idle interval [0, T] with T > r, then we can assume that at least one of the machines is scheduled to start processing at time zero.

Proof. If no machine starts processing at time zero, let t_0 be the earliest starting time over all machines. It is not difficult to see that every job's release time is at least t_0, because a job with release time less than t_0 would, by the rules of LS, be assigned to the machine with idle interval [0, T] to start at its release time. Now let L' be the job list obtained from L by moving the release time of each job to be t_0 earlier. Then L' has the same schedule as L under LS, but the makespan of L' is t_0 less than the makespan of L, not only for the LS schedule but also for the optimal schedule. Hence we can use L' as a minimal counter-example, and the Observation holds for L'.

Lemma 12. In the LS schedule there is no idle interval of length greater than 2r when m ≥ 2, and none of length greater than r when m = 1.

Proof. For m ≥ 2, if the conclusion is not true, let [T_1, T_2] be such an interval with T_2 − T_1 > 2r. Let L_0 be the job set consisting of all jobs scheduled to start at or before time T_1. By the Observation, L_0 is not empty. Let = L \ L_0. Then is a counter-example too, because has the same makespan as L under LS and the optimal makespan of is not larger than that of L. This contradicts the minimality of L. For m = 1, the same argument gives the conclusion.

Now we are ready to prove Theorem 11.

Proof.
Let be the largest length of all the idle intervals. If , then by (12), (13) and (14) we have Next, using 1 + in place of p_n and observing that p_n ≤ r, we have So if m ≥ 2, r ≥ and , we have because is a decreasing function of . Hence the conclusion for m ≥ 2 and r ≥ is proved. If m ≥ 2 and 1 ≤ r < m/(m − 1), we have because ≤ 2r by Lemma 12 and is an increasing function of . Hence the conclusion for m ≥ 2 is proved. For m = 1 we have because < 1 by Lemma 12.

Consider L = {J_1, J_2} with r_1 = r − ε, p_1 = 1, r_2 = 0, p_2 = r, and let ε tend to zero. This shows that the bound is tight for m = 1.

From Theorem 11, for m ≥ 2 and 1 ≤ r < m/(m − 1) we have R(m, LS) < 2, because is an increasing function of r and . This is significant because no online algorithm can have a performance ratio less than 2, as stated in Theorem 3. An interesting question for future research is how to design an algorithm better than LS for this semi-online scheduling problem.

The next theorem provides a lower bound for any on-line algorithm for jobs with similar lengths when m = 1.

Theorem 13. For m = 1 and any algorithm A for jobs with lengths in [1, r], we have where satisfies the following conditions: a) b)

Proof. Let job J_1 be the first job in the job list, with p_1 = 1 and r_1 = . Assume that if J_1 is assigned by algorithm A to start at any time in [ , r), then the second job J_2 comes with p_2 = r and r_2 = 0. Thus for these two jobs, ≥ 1 + r + and = 1 + r. Hence we get On the other hand, if J_1 is assigned by A to start at some time k ∈ [r, ), then the second job J_2 comes with p_2 = r and r_2 = k − r + . Thus for these two jobs, ≥ 1 + r + k and = 1 + k + . Hence we get Letting tend to zero, we have where the second inequality results from the fact that is a decreasing function of for ≥ 0. Lastly, assume that if J_1 is assigned by A to start at any time after , then no other job comes. In this case, ≥ 1 + and = 1 + .
Hence we get For r = 1, we get = 0.7963 and hence R(1, A) ≥ 1.39815. Recall from Theorem 11 that R(1, LS) = 1.5 when r = 1. Therefore LS provides a schedule that is very close to the lower bound.

7. References

Albers, S. (1999). Better bounds for online scheduling. SIAM J. on Computing, Vol. 29, 459-473.
Bartal, Y., Fiat, A., Karloff, H. & Vohra, R. (1995). New algorithms for an ancient scheduling problem. J. Comput. Syst. Sci., Vol. 51(3), 359-366.
Chen, B., Van Vliet, A. & Woeginger, G. J. (1994). New lower and upper bounds for on-line scheduling. Operations Research Letters, Vol. 16, 221-230.
Chen, B. & Vestjens, A. P. A. (1997). Scheduling on identical machines: how good is LPT in an on-line setting? Operations Research Letters, Vol. 21, 165-169.
Dósa, G. & He, Y. (2004). Semi-online algorithms for parallel machine scheduling problems. Computing, Vol. 72, 355-363.
Faigle, U., Kern, W. & Turán, G. (1989). On the performance of on-line algorithms for partition problems. Acta Cybernetica, Vol. 9, 107-119.
Fleischer, R. & Wahl, M. (2000). On-line scheduling revisited. Journal of Scheduling, Vol. 3, 343-353.
Galambos, G. & Woeginger, G. J. (1993). An on-line scheduling heuristic with better worst-case ratio than Graham's list scheduling. SIAM J. on Computing, Vol. 22, 349-355.
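The quoted numbers for r = 1 can be sanity-checked arithmetically. Reading the adversary's first case as giving a ratio of at least (1 + r + θ)/(1 + r) — our reconstruction from the partially garbled proof, with θ denoting the threshold satisfying conditions a) and b) — the reported θ = 0.7963 yields exactly the stated bound:

```python
# Sanity check of the Theorem 13 lower bound for r = 1. The formula
# (1 + r + theta) / (1 + r) is our reading of the adversary's first case,
# not a verbatim formula from the text; theta = 0.7963 is the value the
# text reports for r = 1.
r = 1.0
theta = 0.7963
bound = (1 + r + theta) / (1 + r)
print(round(bound, 5))  # -> 1.39815, matching R(1, A) >= 1.39815
```

Since Theorem 11 gives R(1, LS) = 1.5 for r = 1, LS is within about 7% of this lower bound, which is the closeness the text remarks on.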
