DESIGN AND ANALYSIS OF DISTRIBUTED ALGORITHMS, Part 7
Our goal is now to design protocols that can communicate any positive integer $I$ transmitting $k+1$ packets and using as little time as possible. Observe that with $k+1$ packets the communication sequence is $b_0 : q_1 : b_1 : q_2 : b_2 : \cdots : q_k : b_k$.

We will first of all make a distinction between protocols that do not care about the content of the transmitted packets (like $C_2$ and $C_3$) and those (like $R_2$ and $R_3$) that use those packets to convey information about $I$. The first class of protocols is able to tolerate the type of transmission failure called corruption. In fact, they use packets only to delimit quanta; as it does not matter what the content of a packet is (but only that it is being transmitted), these protocols will work correctly even if the values of the bits in the packets are changed during transmission. We will call them corruption-tolerant communicators. The second class exploits the content of the packets to convey information about $I$; hence, if the value of just one of the bits is changed during transmission, the entire communication will become corrupted. In other words, these communicators need reliable transmission for their correctness. Clearly, the bounds and the optimal solution protocols are different for the two classes. We will consider the first class in detail; the second type of communicator will be briefly sketched at the end. As before, we will consider for simplicity the case when a packet is composed of a single bit, that is, $c = 1$; the results can be easily generalized to the case $c > 1$.

Corruption-Tolerant Communication   If transmissions are subject to corruptions, the value of the received packets cannot be relied upon, and so they are used only to delimit quanta. Hence, the only meaningful part of the communication sequence is the $k$-tuple of quanta $\langle q_1, q_2, \ldots, q_k \rangle$. Thus, the (infinite) set $Q_k$ of all possible $k$-tuples $\langle q_1, q_2, \ldots, q_k \rangle$, where the $q_i$ are nonnegative integers, describes all the possible communication sequences.

What we are going to do is to associate to each communication sequence $Q[I] \in Q_k$ a different integer $I$. Then, if we want to communicate $I$, we will use the unique sequence of quanta described by $Q[I]$. To achieve this goal we need a bijection between $k$-tuples and nonnegative integers. This is not difficult to do; it is sufficient to establish a total order among tuples as follows. Given two $k$-tuples $Q = \langle q_1, q_2, \ldots, q_k \rangle$ and $Q' = \langle q'_1, q'_2, \ldots, q'_k \rangle$ of nonnegative integers, we say that $Q < Q'$ if

1. $\sum_i q_i < \sum_i q'_i$, or
2. $\sum_i q_i = \sum_i q'_i$ and $q_j = q'_j$ for $1 \le j < l$, and $q_l < q'_l$, for some index $l$, $1 \le l \le k$.

    I:    0      1      2      3      4      5      6      7      8      9      10
    Q[I]: 0,0,0  0,0,1  0,1,0  1,0,0  0,0,2  0,1,1  0,2,0  1,0,1  1,1,0  2,0,0  0,0,3

    I:    11     12     13     14     15     16     17     18     19     20     21     22
    Q[I]: 0,1,2  0,2,1  0,3,0  1,0,2  1,1,1  1,2,0  2,0,1  2,1,0  3,0,0  0,0,4  0,1,3  0,2,2

    I:    23     24     25     26     27     28     29     30     31     32     33     34
    Q[I]: 0,3,1  0,4,0  1,0,3  1,1,2  1,2,1  1,3,0  2,0,2  2,1,1  2,2,0  3,0,1  3,1,0  4,0,0

FIGURE 6.9: The first 35 elements of $Q_3$ according to the total order.

That is, in this total order, all the tuples where the sum of the quanta is $t$ are smaller than those where the sum is $t+1$; so, for example, $\langle 2,0,0 \rangle$ is smaller than $\langle 1,1,1 \rangle$. If the sum of the quanta is the same, the tuples are lexicographically ordered; so, for example, $\langle 1,0,2 \rangle$ is smaller than $\langle 1,1,1 \rangle$. The ordered list of the first few elements of $Q_3$ is shown in Figure 6.9.
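As an illustration, the following small Python sketch (ours, not from the book; the function names are made up) enumerates $Q_k$ in exactly this total order and reproduces the table of Figure 6.9 for $k = 3$:

```python
def tuples_with_sum(k, t):
    """All k-tuples of nonnegative integers summing to exactly t, lexicographically."""
    if k == 1:
        yield (t,)
        return
    for first in range(t + 1):
        for rest in tuples_with_sum(k - 1, t - first):
            yield (first,) + rest

def enumerate_Qk(k, count):
    """First `count` elements of Q_k: ordered by quanta sum, then lexicographically."""
    out, t = [], 0
    while len(out) < count:
        for q in tuples_with_sum(k, t):
            out.append(q)
            if len(out) == count:
                return out
        t += 1

for rank, q in enumerate(enumerate_Qk(3, 35)):
    print(rank, q)        # rank 25 -> (1, 0, 3), as in Figure 6.9
```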
In this way, if we want to communicate integer $I$ we will use the $k$-tuple $Q$ whose rank (starting from 0) in this total order is $I$. So, for example, in $Q_3$, the triple $\langle 1,0,3 \rangle$ has rank 25, and the triple $\langle 0,1,4 \rangle$ corresponds to integer 36. The solution protocol, which we will call Order_k, thus uses the following encoding and decoding schemes.

Protocol Order_k

Encoding Scheme: Given $I$, the sender
(E1) finds $Q_k[I] = \langle a_1, a_2, \ldots, a_k \rangle$;
(E2) sets $encoding(I) := b_0 : a_1 : b_1 : \cdots : a_k : b_k$, where the $b_i$ are bits of arbitrary value.

Decoding Scheme: Given $b_0 : a_1 : b_1 : \cdots : a_k : b_k$, the receiver
(D1) extracts $Q = \langle a_1, a_2, \ldots, a_k \rangle$;
(D2) finds $I$ such that $Q_k[I] = Q$;
(D3) sets $decoding(b_0 : a_1 : b_1 : \cdots : a_k : b_k) := I$.

The correctness of the protocol derives from the fact that the mapping we are using is a bijection. Let us examine the cost of protocol Order_k. The number of bits is clearly $k+1$:

$$B[Order_k](I) = k + 1. \qquad (6.7)$$

What is the time? The communication sequence $b_0 : q_1 : b_1 : q_2 : b_2 : \cdots : q_k : b_k$ costs $k+1$ time units spent to transmit the bits $b_0, \ldots, b_k$, plus $\sum_{i=1}^{k} q_i$ time units of silence. Hence, to determine the time $T[Order_k](I)$ we need to know the sum of the quanta in $Q_k[I]$. Let $f(I,k)$ be the smallest integer $t$ such that $I < \binom{t+k}{k}$. Then (Exercise 6.6.12),

$$T[Order_k](I) = f(I,k) + k + 1. \qquad (6.8)$$

Optimality   We are now going to show that protocol Order_k is optimal in the worst case. We will do so by establishing a lower bound on the amount of time required to solve the two-party communication problem using exactly $k+1$ bit transmissions. Observe that $k+1$ time units will be required by any solution algorithm to transmit the $k+1$ bits; hence, the concern is with the amount of additional time required by the protocol.

We will establish the lower bound assuming that the values $I$ we want to transmit are from a finite set $U$ of integers. This assumption makes the lower bound stronger because for infinite sets, the bounds can only be worse. Without any loss of generality, we can assume that $U = Z_w = \{0, 1, \ldots, w-1\}$, where $|U| = w$.

Let $c(w,k)$ denote the number of additional time units needed in the worst case to solve the two-party communication problem for $Z_w$ with $k+1$ bits that can be corrupted during the communication. To derive a bound on $c(w,k)$, we will consider the dual problem of determining the size $\omega(t,k)$ of the largest set for which the two-party communication problem can always be solved using $k+1$ corruptible transmissions and at most $t$ additional time units. Notice that with $k+1$ bit transmissions, it is only possible to distinguish $k$ quanta; hence, the dual problem can be rephrased as follows: Determine the largest positive integer $w = \omega(t,k)$ such that every $x \in Z_w$ can be communicated using $k$ distinguished quanta whose total sum is at most $t$. This problem has an exact solution (Exercise 6.6.14):

$$\omega(t,k) = \binom{t+k}{k}. \qquad (6.9)$$

This means that if $U$ has size $\omega(t,k)$, then $t$ additional time units are needed (in the worst case) by any communicator that uses $k+1$ unreliable bits to communicate values of $U$. If the size of $U$ is not precisely $\omega(t,k)$, we can still determine a bound. Let $f(w,k)$ be the smallest integer $t$ such that $\omega(t,k) \ge w$. Then

$$c(w,k) = f(w,k). \qquad (6.10)$$

That is,

Theorem 6.2.1   Any corruption-tolerant solution protocol using $k+1$ bits to communicate values from $Z_w$ requires $f(w,k) + k + 1$ time units in the worst case.

In conjunction with Equation 6.8, this means that protocol Order_k is worst-case optimal. We can actually establish a lower bound on the average case as well (Exercise 6.6.15), and prove (Exercise 6.6.16) that protocol Order_k is average-case optimal.
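A minimal sketch of the Order_k schemes (hypothetical code, not from the book; we fix the arbitrary padding bits $b_i$ to 0): `rank` and `unrank` realize the bijection, and `time_cost` computes $T[Order_k](I) = f(I,k) + k + 1$ of Equation 6.8.

```python
from math import comb

def count_exact(s, parts):
    """Number of `parts`-tuples of nonnegative integers with sum exactly s."""
    return comb(s + parts - 1, parts - 1)

def unrank(I, k):
    """Q_k[I]: the k-tuple of quanta whose rank in the total order is I."""
    t = 0
    while comb(t + k, k) <= I:          # ranks 0 .. C(t+k,k)-1 have quanta sum <= t
        t += 1
    r = I - (comb(t - 1 + k, k) if t > 0 else 0)   # rank inside the sum-t block
    q = []
    for parts in range(k, 1, -1):       # peel off one component at a time
        first = 0
        while r >= count_exact(t - first, parts - 1):
            r -= count_exact(t - first, parts - 1)
            first += 1
        q.append(first)
        t -= first
    return tuple(q) + (t,)

def rank(q):
    """Inverse of unrank: the position of tuple q in the total order."""
    k, t = len(q), sum(q)
    r = comb(t - 1 + k, k) if t > 0 else 0
    s = t
    for i, qi in enumerate(q[:-1]):
        r += sum(count_exact(s - v, k - i - 1) for v in range(qi))
        s -= qi
    return r

def time_cost(I, k):
    """T[Order_k](I) = f(I,k) + k + 1: quanta sum plus the k+1 delimiter bits."""
    return sum(unrank(I, k)) + k + 1

assert rank((1, 0, 3)) == 25 and unrank(36, 3) == (0, 1, 4)   # the book's examples
assert time_cost(25, 3) == 4 + 3 + 1                          # f(25,3) = 4
```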
Corruption-Free Communication   If bit transmissions are error free, the value of a received packet can be trusted. Hence it can be used to convey information about the value $I$ the sender wants to communicate to the receiver. In this case, the entire communication sequence, bits and quanta, is meaningful.

What we do is something similar to what we just did in the case of corruptible bits. We establish a total order on the set $W_k$ of the $(2k+1)$-tuples $\langle b_0, q_1, b_1, q_2, b_2, \ldots, q_k, b_k \rangle$ corresponding to all the possible communication sequences. In this way, each $(2k+1)$-tuple $W[i] \in W_k$ has associated a distinct integer: its rank $i$. Then, if we want to communicate $I$, we will use the communication sequence described by $W[I]$. In the total order we choose, all the tuples where the sum of the quanta is $t$ are smaller than those where the sum is $t+1$; so, for example, in $W_2$, $\langle 1, 2, 1, 0, 1 \rangle$ is smaller than $\langle 0, 0, 0, 3, 0 \rangle$. If the sum of the quanta is the same, tuples (bits and quanta) are lexicographically ordered; so, for example, in $W_2$, $\langle 1, 1, 1, 1, 1 \rangle$ is smaller than $\langle 1, 2, 0, 0, 0 \rangle$.

The resulting protocol is called Order+_k. Let us examine its costs. The number of bits is clearly $k+1$. Let $g(I,k)$ be the smallest integer $t$ such that $I < 2^{k+1}\binom{t+k}{k}$. Then (Exercise 6.6.13),

$$B[Order{+}_k](I) = k + 1, \qquad (6.11)$$

$$T[Order{+}_k](I) = g(I,k) + k + 1. \qquad (6.12)$$

Also, protocol Order+_k is worst-case and average-case optimal (see Exercises 6.6.17, 6.6.18, and 6.6.19).
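To make the count concrete, here is a small illustrative computation (our sketch, not the book's) of $g(I,k)$ and of the number of values communicable by Order+_k when the quanta sum to at most $t$: the $k+1$ reliable bits contribute a factor $2^{k+1}$ on top of the $\binom{t+k}{k}$ choices of quanta.

```python
from math import comb

def capacity_plus(t, k):
    """2^(k+1) * C(t+k, k): values communicable by Order+_k with k+1 reliable
    bits and quanta summing to at most t."""
    return 2 ** (k + 1) * comb(t + k, k)

def g(I, k):
    """Smallest t with I < 2^(k+1) * C(t+k, k); T[Order+_k](I) = g(I,k) + k + 1."""
    t = 0
    while I >= capacity_plus(t, k):
        t += 1
    return t

# Example: with k = 2 (three bits), any I < 8 needs no silence at all.
assert g(7, 2) == 0 and g(8, 2) == 1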
Other Communicators   The protocols Order_k and Order+_k belong to the class of $(k+1)$-bit communicators, where the number of transmitted bits is fixed a priori and known to both entities. In this section, we consider arbitrary communicators, where the number of bits used in the transmission might not be predetermined (e.g., it may change depending on the value $I$ being transmitted).

With arbitrary communicators, the basic problem is obviously how the receiver can decide when a communication has ended. This can be achieved in many different ways, and several mechanisms are possible. Following are two classical ones:

Bit Pattern. The sender uses a special pattern of bits to notify the end of communication. For example, the sender sets all bits to 0, except the last, which is set to 1; the drawback of this approach is that the bits cannot be used to convey information about $I$.

Size Communication. As part of the communication, the sender communicates the total number of bits it will use. For example, the sender uses the first quantum to communicate the number of bits it will use in this communication; the drawback of this approach is that the first quantum cannot be used to convey information about $I$.

We now show that, however ingenious the employed mechanism may be, the results are not much better than those obtained just using optimal $(k+1)$-bit communicators. In fact, an arbitrary communicator can only improve the worst-case complexity by an additive constant. This is true even if the receiver has access to an oracle revealing (at no cost), for each transmission, the number of bits the sender will use in that transmission.

Consider first the case of corruptible transmissions. Let $\gamma(t,b)$ denote the size of the largest set for which an oracle-based communicator uses at most $b$ corruptible bits and at most $t+b$ time units.

Theorem 6.2.2   $\gamma(t,b) < \omega(t+1,b)$.

Proof. As up to $b = k+1$ corruptible bits can be transmitted, by Equation 6.9,

$$\gamma(t,b) = \sum_{j=1}^{k} \omega(t,j) = \sum_{j=1}^{k} \binom{t+j}{j} = \binom{t+k+1}{k} - 1 < \binom{t+1+k}{k} = \omega(t+1,b). \quad \blacksquare$$

This implies that, in the worst case, communicator Order_k requires at most one time unit more than any strategy of any type that uses the same maximum number of corruptible bits.

Consider now the case of incorruptible transmissions. Let $\alpha(t,b)$ denote the size of the largest set for which an oracle-based communicator uses at most $b$ reliable bits and at most $t+b$ time units. To determine a bound on $\alpha(t,b)$, we will first consider the size $\beta(t,k)$ of the largest set for which a communicator without an oracle always uses at most $b = k+1$ reliable bits and at most $t+b$ time units. We know (Exercise 6.6.17) that

Lemma 6.2.1   $\beta(t,k) = 2^{k+1}\binom{t+k}{k}$.

From this, we can now derive

Theorem 6.2.3   $\alpha(t,b) < \beta(t+1,b)$.

Proof. As up to $b = k+1$ incorruptible bits can be transmitted, $\alpha(t,b) = \sum_{j=1}^{k} \beta(t,j)$. By Lemma 6.2.1,

$$\sum_{j=1}^{k} \beta(t,j) = \sum_{j=1}^{k} 2^{j+1}\binom{t+j}{j} < 2^{k+1}\binom{t+1+k}{k} = \beta(t+1,k). \quad \blacksquare$$

This implies that, in the worst case, communicator Order+_k requires at most one time unit more than any strategy of any type that uses the same maximum number of incorruptible bits.
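Both proofs rest on the "hockey-stick" identity $\sum_{j=0}^{k}\binom{t+j}{j} = \binom{t+k+1}{k}$. A quick, purely illustrative numeric check of the identity and of the two inequalities:

```python
from math import comb

def check(t, k):
    # Hockey-stick identity: sum_{j=0..k} C(t+j, j) == C(t+k+1, k)
    assert sum(comb(t + j, j) for j in range(k + 1)) == comb(t + k + 1, k)
    # Theorem 6.2.2: sum_{j=1..k} C(t+j, j) = C(t+k+1, k) - 1 < C(t+1+k, k)
    gamma = sum(comb(t + j, j) for j in range(1, k + 1))
    assert gamma == comb(t + k + 1, k) - 1 < comb(t + 1 + k, k)
    # Theorem 6.2.3: sum_{j=1..k} 2^(j+1) C(t+j, j) < 2^(k+1) C(t+1+k, k)
    alpha = sum(2 ** (j + 1) * comb(t + j, j) for j in range(1, k + 1))
    assert alpha < 2 ** (k + 1) * comb(t + 1 + k, k)

for t in range(10):
    for k in range(1, 10):
        check(t, k)
```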
6.2.2 Pipeline

Communicating at a Distance   With communicators we have addressed the problem of communicating information between two neighboring entities. What happens if the two entities involved, the sender and the receiver, are not neighbors? Clearly the information from the sender $x$ can still reach the receiver $y$, but other entities must be involved in this communication. Typically there will be a chain of entities, with the sender and the receiver at each end; this chain is, for example, the shortest path between them. Let $x_1, x_2, \ldots, x_{p-1}, x_p$ be the chain, where $x_1 = x$ and $x_p = y$; see Figure 6.10.

FIGURE 6.10: Communicating information from $x$ to $y$ through a line.

The simplest solution is that first $x_1$ communicates the information $I$ to $x_2$, then $x_2$ to $x_3$, and so on until $x_{p-1}$ has the information and communicates it to $x_p$. Using communicator C between each pair of neighbors, this solution will cost $(p-1)\,Bit(C, I)$ bits and $(p-1)\,Time(C, I)$ time, where $Bit(C, I)$ and $Time(C, I)$ are the bit and time costs, respectively, of communicating information $I$ using C. For example, using protocol TwoBits, $x$ can communicate $I$ to $y$ with $2(p-1)$ bits in $(I+2)(p-1)$ time. There are many variations of this solution; for example, each pair of neighbors could use a different type of communicator.

There exists a way of drastically reducing the time without increasing the number of bits. This can be achieved using a well-known technique called pipeline. The idea behind pipeline is very simple. In the solution we just discussed, each $x_j$ waits until it receives the information from $x_{j-1}$ and then communicates it to $x_{j+1}$. In pipeline, instead of waiting, each $x_j$ will start communicating the information to $x_{j+1}$ without waiting to receive it from $x_{j-1}$; the crucial point is that $x_{j+1}$ starts exactly one time unit after $x_j$.

To understand how an entity $x_j$ can communicate information it does not yet have, consider $x_2$ and assume that the communicator being used is TwoBits. Let $x_1$ start at time $t$; then $x_2$ will receive the "Start-Counting" signal at time $t+1$. Instead of just waiting to receive the "Stop-Counting" message from $x_1$, $x_2$ will also start the communication immediately: It sends a "Start-Counting" signal to $x_3$ and starts waiting the quantum of silence. It is true that $x_2$ does not know $I$, so it does not know how long it has to wait. However, at time $t+I$, entity $x_1$ will send the "Stop-Counting" signal, which will arrive at $x_2$ one time unit later, at time $t+I+1$. This happens exactly $I$ time units after $x_2$ sent the "Start-Counting" signal to $x_3$. Thus, if $x_2$ now forwards the "Stop-Counting" signal to $x_3$, it acts exactly as if it had had the information $I$ from the start!

The reasoning we just used to explain why pipeline works at $x_2$ applies to each of the $x_j$. So, the answer to the question above is that each entity $x_j$ will know the information it must communicate exactly in time. An example is shown in Figure 6.11, where $p = 4$. The sender $x_1$ will start at time 0 and send the "Stop-Counting" signal at time $I$. Entities $x_2, x_3$ will receive and send the "Start-Counting" at times 1 and 2, respectively; they will receive and send the "Stop-Counting" at times $I+1$ and $I+2$, respectively. Summarizing, the entities will start staggered by one time unit and will terminate staggered by one time unit. Each will be communicated the value $I$ communicated by the sender.

FIGURE 6.11: Time–event diagram showing the communication of $I$ in pipeline from $x_1$ to $x_4$.

Regardless of the communicator C employed (the same by all entities), the overall solution protocol CommLine is composed of two simple rules:

PROTOCOL CommLine
1. $x_1$ communicates the information to $x_2$.
2. Whenever $x_j$ receives a signal from $x_{j-1}$, it forwards it to $x_{j+1}$ ($1 < j < p$).

How is local termination detected? As each entity uses the same communicator C, each $x_j$ will know when the communication from $x_{j-1}$ has terminated ($1 < j \le p$).

Let us examine the cost of this protocol. Each communication is done using communicator C; hence the total number of bits is the same as in the nonpipelined case:

$$(p - 1)\,Bits(C, I). \qquad (6.13)$$

However, the time is different, as the $p-1$ communications are done in pipeline and not sequentially. Recall that the entities in the line start a unit of time one after the other. Consider the last entity $x_p$. The communication of $I$ from $x_{p-1}$ requires $Time(C, I)$; however, $x_{p-1}$ starts this communication only $p-2$ time units after $x_1$ starts its communication to $x_2$. This means that the total time used for the communication is only

$$(p - 1) + Time(C, I). \qquad (6.14)$$

That is, the term $p-1$ is added to, and not multiplied by, $Time(C, I)$. In the example of Figure 6.11, where $p = 4$ and the communicator is TwoBits, the total number of bits is $6 = 2(p-1)$. The receiver $x_4$ receives the "Start-Counting" at time 3 and the "Stop-Counting" at time $I+3$; hence the total time is $I+3 = I+p-1$. Let us stress that we use the same number of bits as a nonpipelined (i.e., sequential) communication; the improvement is in the time costs.

Computing in Pipeline   Consider the same chain of entities $x_1, x_2, \ldots, x_{p-1}, x_p$ we have just examined. We have seen how information can be efficiently communicated from one end of the chain of entities to the other by pipelining the output of the communicators used by the entities.
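The timing argument can be checked with a tiny discrete-time calculation (an illustrative sketch, not from the book; the one-unit transmission delay and the signal names are the assumptions stated above):

```python
def commline_twobits(I, p):
    """CommLine over TwoBits: x1 starts at time 0; one-unit transmission delay.

    Each x_j re-sends a signal at the tick it receives it, so send times grow
    by one per hop.  Returns the arrival times of Start/Stop at receiver x_p.
    """
    start_send, stop_send = 0, I          # x1's send times for the two signals
    hops = p - 1
    return start_send + hops, stop_send + hops

start_at, stop_at = commline_twobits(I=5, p=4)
assert (start_at, stop_at) == (3, 8)      # x4: Start at 3, Stop at I + 3
assert stop_at - start_at == 5            # x_p decodes I from the silence
```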
We will now see how we can use pipeline in something slightly more complex than plain communication. Assume that each entity $x_j$ has a value $I_j$, and we want to compute the largest of those values. Once again, we can solve this problem sequentially: First $x_1$ communicates $I_1$ to $x_2$; each $x_j$ ($1 < j < p$) waits until it receives from $x_{j-1}$ the largest value so far, compares it with its own value $I_j$, and forwards the larger of the two to $x_{j+1}$. This approach will cost $(p-1)\,Bit(C, I_{max})$ bits, where C is the communicator used by the entities and $I_{max}$ is the largest value. The time will depend on where $I_{max}$ is located; in the worst case, it is at $x_1$ and the time will be $(p-1)\,Time(C, I_{max})$.

Let us see how pipeline can be used in this case. Again, we will make all entities in the chain start staggered by one unit of time, and each entity will start waiting a quantum of time equal to its own value.

Let $t$ be the time when $x_1$ (and thus the entire process) starts; for simplicity, assume that they use protocol TwoBits. Concentrate on $x_2$. At time $t+1$ it receives the "Start-Counting" signal from $x_1$ and sends it to $x_3$. Its goal is to communicate to $x_3$ the largest of $I_1$ and $I_2$; to do so, it must send the "Stop-Counting" signal to $x_3$ exactly at time $t' = t + 1 + \max\{I_1, I_2\}$. The question is how $x_2$ can know $\max\{I_1, I_2\}$ in time. The answer is fortunately simple. The "Stop-Counting" message from $x_1$ arrives at $x_2$ at time $t + 1 + I_1$ (i.e., $I_1$ time units after the "Start-Counting" signal). There are three possible cases:

1. If $I_1 < I_2$, this message will arrive while $x_2$ is still counting its own value $I_2$; thus, $x_2$ will know that its value is the largest. In this case, it will just keep waiting out its value and send the "Stop-Counting" signal to $x_3$ at the correct time $t + 1 + I_2 = t + 1 + \max\{I_1, I_2\} = t'$.

2. If $I_1 = I_2$, this message will arrive exactly when $x_2$ finishes counting its own value $I_2$; thus, $x_2$ will know that the two values are identical. The "Stop-Counting" signal will be sent to $x_3$ immediately, that is, at the correct time $t + 1 + I_2 = t + 1 + \max\{I_1, I_2\} = t'$.

3. If $I_1 > I_2$, $x_2$ will finish waiting out its value before this message arrives. In this case, $x_2$ will wait until it receives the "Stop-Counting" signal from $x_1$, and then forward it. Thus, the "Stop-Counting" signal will be sent to $x_3$ at the correct time $t + 1 + I_1 = t + 1 + \max\{I_1, I_2\} = t'$.

That is, $x_2$ will always send $\max\{I_1, I_2\}$ in time to $x_3$. The same reasoning we just used to understand how $x_2$ can know $\max\{I_1, I_2\}$ in time can be applied to verify that indeed each $x_j$ can know $\max\{I_1, I_2, \ldots, I_j\}$ in time (Exercise 6.6.23). An example is shown in Figure 6.12.

FIGURE 6.12: Time–event diagram showing the computation of the largest value in pipeline.

We have described the solution using TwoBits as the communicator. Clearly any communicator C can be used, provided that its encoding is monotonically increasing, that is, if $I > J$, then in C the communication sequence for $I$ is lexicographically smaller than that for $J$. Note that protocols Order_k and Order+_k are not monotonically increasing; however, it is not difficult to redefine them so that they have such a property (Exercises 6.6.21 and 6.6.22). The total number of bits will then be

$$(p - 1)\,Bits(C, I_{max}), \qquad (6.15)$$

the same as that without pipeline. The time instead is at most

$$(p - 1) + Time(C, I_{max}). \qquad (6.16)$$
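The three cases collapse into one rule: $x_j$ forwards "Stop-Counting" at the later of its own count-down expiry and the arrival of "Stop-Counting" from $x_{j-1}$. A small illustrative simulation (hypothetical code, with the one-unit delays assumed above):

```python
def pipeline_max(values):
    """values[i] is I_{i+1}; entities x_1 .. x_{p-1} forward, x_p receives.

    Returns the maximum computed at x_p: the difference between the arrival
    times of Stop-Counting and Start-Counting (TwoBits decoding), folded
    with x_p's own value.
    """
    stop_send = values[0]                  # x_1 sends Stop-Counting at time I_1
    for i in range(1, len(values) - 1):    # forwarders x_2 .. x_{p-1}
        own_expiry = i + values[i]         # Start received at time i, wait I_{i+1}
        stop_send = max(own_expiry, stop_send + 1)
    p = len(values)
    start_arrival = p - 1                  # Start-Counting reaches x_p at p - 1
    stop_arrival = stop_send + 1
    decoded = stop_arrival - start_arrival # max of I_1 .. I_{p-1}
    return max(decoded, values[-1])        # x_p folds in its own value locally

assert pipeline_max([3, 7, 2, 5]) == 7
assert pipeline_max([1, 2, 3, 9]) == 9
```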
Once again, the number of bits is the same as that without pipeline; the time costs are instead greatly reduced: The factor $p-1$ is additive and not multiplicative. Similar reductions in time can be obtained for other computations, such as computing the minimum value (Exercise 6.6.24), the sum of the values (Exercise 6.6.25), and so forth. The approach we used for these computations in a chain can be generalized to arbitrary tree networks; see for example Problems 6.6.5 and 6.6.6.

6.2.3 Transformers

Asynchronous-to-Synchronous Transformation   The task of designing a fully synchronous solution for a problem can be easily accomplished if there is already a known asynchronous solution A for that problem. In fact, since A makes no assumptions about time, it will run under every timing condition, including the fully synchronous ones. Its cost in such a setting would be the number of messages M(A) and the "ideal" time T(A). Note that this presupposes that the size m(A) of the messages used by A is not greater than the packet size $c$ (otherwise, a message must be broken into several packets, with a corresponding increase in message and time complexity).

We can actually exploit the availability of an asynchronous solution protocol A in a more clever way, and with more efficient performance, than just running A in the fully synchronous system. In fact, it is possible to transform any asynchronous protocol A into an efficient synchronous one S, and this transformation can be done automatically. This is achieved by an asynchronous-to-synchronous transformer (or just transformer), a "compiler" that, given in input an asynchronous protocol solving a problem P, generates an efficient synchronous protocol solving P.

The essential component of a transformer is the communicator. Let C be a universal communicator (i.e., a communicator that works for all positive integers). An asynchronous-to-synchronous transformer τ[C] is obtained as follows.

Transformer τ[C]   Given any asynchronous protocol A, replace the asynchronous transmission-reception of each message in A by the communication, using C, of the information contained in that message.
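As a rough illustration of the cost accounting behind this idea (a hypothetical sketch, not the book's code: we instantiate C with the TwoBits communicator purely for concreteness, while τ[C] works with any universal communicator), each $m$-bit message of A becomes one communicator exchange, and the total time over a causal chain of messages is bounded as derived in the text that follows (Equation 6.17):

```python
def twobits_time(I):
    """Time(TwoBits, I): two delimiter bits plus a quantum of I silent units."""
    return I + 2

def transformed_costs(causal_trace, m):
    """Costs of S = tau[TwoBits](A) on one causal chain of A's messages.

    causal_trace: contents (integers I <= 2**m) of messages that A must send
    one after the other; m = m(A) is A's message size in bits.
    """
    time = sum(twobits_time(I) for I in causal_trace)
    packets = 2 * len(causal_trace)          # Packets(TwoBits, I) = 2
    # Time(S) <= T_casual(A) * Time(C, 2^m(A))   (Equation 6.17)
    assert time <= len(causal_trace) * twobits_time(2 ** m)
    return packets, time

packets, time = transformed_costs([5, 1, 12], m=4)   # three causally ordered messages
```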
The transmission (and corresponding reception) of $I$ in A is replaced by the communication of $I$ using communicator C; this communication requires $Time(C, I)$ time and $Packets(C, I)$ packets. As at most $T_{casual}(A)$ messages must be sent sequentially (i.e., one after the other) and $I \le 2^{m(A)}$, the total number of clock ticks required by S will be

$$Time(S) \le T_{casual}(A) \times Time(C, 2^{m(A)}). \qquad (6.17)$$

As the information of each of the M(A) messages must be communicated, [...] without increasing the number of bits by using pipeline. For example, during every stage of protocol Stages, and thus of protocol SynchStages, the information from each candidate must reach the neighboring candidate on each side. This operation, as we have already seen, can be efficiently done in pipeline, yielding a reduction in time costs (Exercise 6.6.26).

Design Implications   The transformation lemma gives a basis of comparison [...]

[...] the use of a transformer leads to an election protocol for rings, SynchStages, with reduced bit and time costs. By integrating pipeline, we can obtain further improvements. The cost of minimum-finding and election can be significantly reduced by using other types of "temporal" tools and techniques. In this section, we will describe two basic techniques that make an explicit use of time, waiting and guessing [...]

[...] interesting properties of the waiting function.

Computing AND and OR   Consider the situation where every entity $x$ has a Boolean value $b(x) \in \{0, 1\}$, and we need to compute the AND of all those values. Assume as before that the size $n$ of the ring is known. The AND of all the values will be 1 if and only if $\forall x\ b(x) = 1$, that is, all the values are 1; otherwise the result is 0. Thus, to compute AND it suffices to [...] If the result of AND is 1, all the entities have value 1 and are in state minimum, and thus know the result. If the result of AND is 0, the entities with value 0 are in state minimum (and thus know the result), while the others are in state large (and thus know the result). Notice that if an entity $x$ has value $b(x) = 0$, using the waiting function of expression [...]

[...] regardless of whether the network is synchronous or not. This impossibility result applies to deterministic protocols, that is, protocols where every action is composed only of deterministic operations. A different class of protocols are those where an entity can perform operations whose result is random, for example, tossing a die, and where the nature of the action depends on the outcome of this random event [...]

[...] topologies (Exercises 6.6.37-6.6.39):

1. The algorithm is composed of a sequence of rounds.
2. In each round, every entity randomly selects an integer between 0 and $b$ as its identity, where $b \le n$.
3. If the minimum of the chosen values is unique, that entity will become leader; otherwise, a new round is started.

To make the algorithm work, we need to design a mechanism to find the minimum and detect if it is unique [...] between 0 and $b$, that is, $i \le b$. Thus, each round will cost at most $O(nb)$ time.

We have different options with regard to the value $b$ and how the random choice of the identities is made. For example, we can set $b = n$ and choose each value with the same probability (Exercise 6.6.40); notice, however, that the larger $b$ is, the larger the time costs of each [...]

[...] $3\,c\,n\,g^{-1}(n) + c\,n$, where $c = O(1)$ is the number of bits necessary to distinguish between the "Restart," "Wait1," "Wait2," and "Terminate" messages.

Time   Consider now the time costs of DoubleWait. Obviously, the time complexity of an iteration is directly affected by the values of the waiting functions $f$ and $h$, which are in turn affected by the [...] The time complexity is also affected by the number of iterations $j = g^{-1}(n)$, which depends on the choice of the function $g$. Let us first of all choose the waiting functions $f$ and $h$. The ones we select are

$$f(id(x), j) = 2\,g(j)\,id(x), \qquad (6.40)$$

which is the standard waiting function when the entities do not start at the same time and where $g(j)$ is used instead of $n$; and

$$h(id(x), j) = 2\,g(j)\,id(x) + g(j) - n_x.$$
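Returning to the round-based randomized protocol outlined above, its structure is easy to simulate (an illustrative sketch under the stated assumptions; the message mechanics for finding the minimum and detecting its uniqueness are abstracted away):

```python
import random

def randomized_election(n, b):
    """Repeat rounds until exactly one entity draws the minimum identity.

    Each of the n entities draws an integer in [0, b]; a round succeeds when
    the minimum drawn value is unique, and that entity becomes leader.
    """
    rounds = 0
    while True:
        rounds += 1
        ids = [random.randint(0, b) for _ in range(n)]
        m = min(ids)
        if ids.count(m) == 1:             # unique minimum: leader elected
            return ids.index(m), rounds   # leader's position and rounds used

random.seed(1)
leader, rounds = randomized_election(n=8, b=8)   # e.g., b = n, as in Exercise 6.6.40
```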