Optical Networks: A Practical Perspective - Part 68

PHOTONIC PACKET SWITCHING

Figure 12.17 Example of a 2 x 2 routing node using a feedback delay line architecture.

If multiple packets destined for a common output port arrive simultaneously, one of them is switched to the output port while the others are switched to the recirculating buffers. In the context of optical switches, the buffering is implemented using feedback delay lines. In the feedback architecture of Figure 12.17, the delay lines connect the outputs of the switch to its inputs. With two delay lines and two inputs from outside, the switch is internally a 4 x 4 switch. Again, if two packets contend for a single output, one of them can be stored in a delay line. If the delay line has length equal to one slot, the stored packet has an opportunity to be routed to its desired output in the next slot. If there is contention again, it, or the contending packet, can be stored for another slot in a delay line.

Recirculation buffering is more effective than output buffering at resolving contentions because the buffers in this case are shared among all the outputs, as opposed to having a separate buffer per output. The trade-off is that larger switch sizes are needed in this case due to the additional switch ports needed for connecting the recirculating buffers. For example, in [HK88], it is shown that a 16 x 16 switch requires a total of 112 recirculation buffers, or about 7 buffers per output, to achieve a packet loss probability of 10^-6 at an offered load of 0.8. In contrast, we saw earlier that the output-buffered switch requires about 25 buffers per output, or a total of 400 buffers, to achieve the same packet loss probability.

In the feed-forward architecture considered earlier, a packet has a fixed number of opportunities to reach its desired output. For example, in the routing node shown in Figure 12.14, the packet has at most three opportunities to be routed to its correct destination: in its arriving slot and the next two immediate slots.
On the other hand, in the feedback architecture, it appears that a packet can be stored indefinitely. This is not true in practice since photonic switches have several decibels of loss. The loss can be made up using amplifiers, but then we have to account for the cascaded amplifier noise as packets are routed through the delay line multiple times. The switch crosstalk also accumulates. Therefore, the same packet cannot be routed through the switch more than a few times. In practice, the feed-forward architecture is preferred to the feedback architecture since it attenuates the signals almost equally, regardless of the path taken through the routing node. This is because almost all the loss is in passing through the switches, and in this architecture, every packet passes through the same number of switches independent of the delay it experiences. This low differential loss characteristic is important in a network since it reduces the dynamic range of the signals that must be handled.

12.4.4 Using Wavelengths for Contention Resolution

One way to reduce the amount of buffering needed is to use multiple wavelengths. In the context of PPS, buffers correspond to fiber delay lines. Observe that we can store multiple packets at different wavelengths in the same delay line. We start by looking at a baseline architecture for an output-buffered switch using delay lines that does not make use of multiple wavelengths. Figure 12.18 shows such an implementation, which is equivalent to the output-buffered switch of Figure 12.15 with B buffers per output. Up to B slots of delay are provided per output by using a set of B delay lines per output. T denotes the duration of a time slot. If multiple input packets arriving in a time slot need to go to the same output, one of them is switched out while the others are delayed by different amounts and stored in the different delay lines, so that the output contention is resolved.
Note that the set of delay lines together can store more than B packets simultaneously. For instance, a single K-slot delay line can hold up to K packets simultaneously. Therefore the total number of packets that can be held by the set of delay lines in Figure 12.18 is 1 + 2 + ... + B = B(B + 1)/2. However, since we can have only one packet per slot transmitted out (or a total of B packets in B slots), the effective storage capacity of this set of delay lines is only B packets.

In its simplest form, we can use wavelengths internal to the switch to reduce the number of delay lines required. Figure 12.19 shows an example of such an output-buffered switch [ZT98]. Instead of providing a set of delay lines per output, the delay lines are shared among all the outputs. Packets entering the switch are sent through a tunable wavelength converter device. (Note that tunable wavelength converter devices are still in research laboratories today; see Section 3.8 for some of the approaches being pursued.) At the output of the switch, the packets are sent through an arrayed waveguide grating (AWG). The wavelength selected by the tunable wavelength converter and the output switch fabric port to which the packet is switched together determine the delay line to which the packet is routed by the AWG.

Figure 12.18 An example of an output-buffered optical switch using fiber delay lines for buffers that does not use wavelengths for contention resolution.

Figure 12.19 An example of an output-buffered optical switch using multiple wavelengths internal to the switch and fiber delay lines for buffers. The switch uses tunable wavelength converters and arrayed waveguide gratings.

Figure 3.25 provides a description of how the AWG works in this configuration. For example, consider the first input port on the AWG.
From this port, wavelength λ1 is routed to delay line 0, wavelength λ2 is routed to the single-slot delay line, wavelength λ3 is routed to the two-slot delay line, and so on, up to the B-slot delay line. In order to allow a packet at each input of the AWG to be routed to each possible delay line, we need the number of wavelengths W = max(N, B), where N is the number of inputs. Thus the delay seen by a packet can be controlled by controlling the wavelength at the output of the tunable wavelength converter device. In this case, if we have two input packets on different ports destined to the same output, their wavelengths are chosen such that one of them is delayed while the other is switched through. From a buffering perspective, this configuration is equivalent to the baseline configuration of Figure 12.18. Note that the TWCs must be on the inputs to the switch fabric (not at the outputs) since several packets may leave a switch fabric output in one time slot, on different wavelengths. For instance, in one routing method, a packet bound for output port j is routed to output port j of the switch fabric. Its wavelength is chosen based on the delay required. With the AWG design assumed above, an incoming packet bound for output 1, requiring a single-slot delay, would be converted to wavelength λ2 at the input, and switched to port 1 of the switch fabric.

Assuming the same traffic model as before, with an offered load of 0.8, in order to obtain a packet loss probability of 10^-6 for a 16 x 16 switch, we need a total of 25 delay lines, instead of 25 delay lines per output for the case where only a single wavelength is used inside the switch. In Section 12.6, we will study other examples of switch configurations that use wavelengths internally to perform the switching and/or buffering functions.

We next consider the situation where we have a WDM network. In this case, multiple wavelengths are used on the transmission links themselves.
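The wavelength-selection step in the Figure 12.19 switch can be sketched as follows. This is a minimal sketch assuming an idealized cyclic-routing AWG in which a packet entering AWG input port p on internal wavelength index w exits AWG output port (p + w) mod W, with output d wired to the d-slot delay line; the function name and this particular routing rule are illustrative assumptions, not the book's exact device model.

```python
# Sketch: picking the tunable wavelength converter (TWC) setting that steers a
# packet to the delay line giving the required delay, for an assumed cyclic AWG.

W = 64  # number of internal wavelengths, W = max(N, B)

def converter_wavelength(awg_input_port: int, required_delay_slots: int) -> int:
    """Wavelength index the TWC must tune to so that the packet exits the AWG
    on the output wired to the required_delay_slots delay line."""
    return (required_delay_slots - awg_input_port) % W

# A packet entering AWG input 1 and needing a single-slot delay:
w = converter_wavelength(1, 1)
assert (1 + w) % W == 1  # it indeed exits AWG output 1 (the one-slot line)
print(w)
```

Under this assumed rule, the delay a packet experiences is set purely by retuning the converter, with no change to the switch fabric itself, which is the flexibility the text describes.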
We can gain further reduction in the shared buffering required compared to a single-wavelength system by making use of the statistical nature of bursty traffic across multiple wavelengths. Figure 12.20 shows a possible architecture [Dan97] for such a switch, again using tunable wavelength converters and delay lines. At the inputs to the switch, the wavelengths are demultiplexed and sent through tunable wavelength converters and then into the switch fabric. The delay lines are connected to the output of the switch fabric. The W wavelengths destined for a given output port share a single set of delay lines. In this case, we have additional flexibility in dealing with contention. If two packets need to go out on the same output port, either they can be delayed in time, or they can be converted to different wavelengths and switched to the output port at the same time. The TWCs convert the input packets to the desired output wavelength, and the switch routes the packets to the correct output port and the appropriate delay line for that output. As the number of wavelengths is increased, keeping the load per wavelength constant, the amount of buffering needed will decrease because, within any given time slot, the probability of finding another free wavelength is quite high. Basically we are sharing capacity among several wavelengths and permitting better use of that capacity. [Dan97] shows that the number of delay lines required to achieve a packet loss probability of 10^-6 at an offered load of 0.8 per wavelength for a 16 x 16 switch drops from 25 per output without using multiple wavelengths to 7 per output using four wavelengths, and to 4 per output when eight wavelengths are present.

Figure 12.20 An example of an output-buffered optical switch capable of switching multiple input wavelengths. The switch uses TWCs and wavelength demultiplexers.
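The extra degree of freedom in the Figure 12.20 switch can be sketched as a simple decision rule: a packet contending for an output is converted to any wavelength still free on that output in the current slot, and only falls back to the delay lines when all wavelengths are taken. This is illustrative bookkeeping only; the function and argument names are not from the source.

```python
# Sketch: wavelength conversion versus delay as contention-resolution choices.

def resolve(output_busy_wavelengths: set, num_wavelengths: int):
    """Return ('convert', w) if some wavelength w is still free on the output
    in this slot, else ('delay', None) to send the packet to a delay line."""
    for w in range(num_wavelengths):
        if w not in output_busy_wavelengths:
            return ('convert', w)
    return ('delay', None)

print(resolve({0, 1}, 4))  # a free wavelength exists: convert instead of buffering
print(resolve({0, 1}, 2))  # all wavelengths busy: fall back to the delay lines
```

With more wavelengths per output, the 'delay' branch is taken less often, which is why [Dan97]'s required buffer counts fall as the wavelength count grows.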
Table 12.1 Number of delay lines required for different switch architectures. A uniformly distributed offered load of 0.8 per wavelength per input is assumed, with a packet loss probability of 10^-6. The switch size is 16 x 16.

Buffering Type                 Input λs   Internal λs   Fabric     Delay Lines per Output   Delay Lines Total
Output (Figure 12.18)          1          1             16 x 16    25                       400
Recirculating (Figure 12.17)   1          1             23 x 23    7                        112
Output (Figure 12.19)          1          64            16 x 16    Shared                   26
Output (Figure 12.20)          4          4             64 x 128   7                        112
Output (Figure 12.20)          8          8             128 x 80   4                        64

Table 12.1 compares the number of delay lines required for the different buffering schemes that we considered in this section. Note that the number of delay lines is only one among the many parameters we must consider when designing switch architectures. The others include the switch fabric size, the number of wavelength converters required, and the number of wavelengths used internally (and the associated complexity of the multiplexers and demultiplexers). While we have illustrated a few sample architectures in Figures 12.17 through 12.20, many variants of these architectures have been proposed that trade off these parameters against each other. See [Dan97, ZT98, Hun99, Gam98, Gui98] for more examples.

12.4.5 Deflection Routing

Deflection routing was invented by Baran in 1964 [Bar64]. It was studied and implemented in the context of processor interconnection networks in the 1980s [Hil85, Hil87, Smi81]. In these networks, just as in photonic packet-switching networks, buffers are expensive because of the high transmission speeds involved, and deflection routing is used as an alternative to buffering. Deflection routing is also sometimes called hot-potato routing.
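In deflection routing, a packet that loses a contention is deliberately misrouted out a free port instead of being buffered. A minimal sketch for a bufferless 2 x 2 node follows; this is an illustrative model with the random loser choice described later in this section, not a specific switch design from the source.

```python
# Sketch: deflection at a bufferless 2 x 2 routing node. Each packet has a
# preferred output port (0 or 1); on contention, a random loser is deflected
# out the other port rather than queued.
import random

def route_2x2(pref_a: int, pref_b: int) -> tuple:
    """Return the output ports (out_a, out_b) actually taken by packets a, b."""
    if pref_a != pref_b:
        return pref_a, pref_b      # no contention: both get their preference
    other = 1 - pref_a             # the only alternative port in a 2 x 2 node
    if random.random() < 0.5:
        return pref_a, other       # packet b is deflected
    return other, pref_b           # packet a is deflected

assert route_2x2(0, 1) == (0, 1)             # no contention
assert sorted(route_2x2(1, 1)) == [0, 1]     # exactly one packet is deflected
```

Note that both packets always leave the node in the same slot they arrived; the price of bufferlessness is paid in path length, as discussed next.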
Intuitively, misrouting packets rather than storing them will cause packets to take longer paths on average to get to their destinations, and thus will lead to increased delays and lower throughput in the network. This is the price paid for not having buffers at the switches. These trade-offs have been analyzed in detail for regular network topologies such as the Manhattan Street network [GG93], an example of which is shown in Figure 12.21, or the shufflenet [KH90, AS92], another regular interconnection network, an example of which is shown in Figure 12.22, or both [Max89, FBP95]. Regular topologies are typically used for processor interconnections and may be feasible to implement in LANs. However, they are unlikely to be used in WANs, where the topologies used are usually arbitrary. Nevertheless, these analyses shed considerable light on the issues involved in the implementation of deflection routing even in wide-area photonic packet-switching networks and the resulting performance degradation, compared to buffering in the event of a destination conflict.

Before we can discuss these results, we need to slightly modify the model of the routing node shown in Figure 12.2. While discussing this figure earlier, we said that the routing node has one input link and output link from/to every other routing node and end node to which it is connected. In many cases, the end node is colocated with the routing node so that information regarding packets to be transmitted or received can be almost instantaneously exchanged between these nodes. In particular, this makes it possible for the end node to inject a new packet into its associated routing node only when no other packet is intended for the same output link. Thus this new injected packet neither gets deflected nor causes deflection of other packets. This is a reasonable assumption to make in practice.
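The injection rule just described can be sketched as follows; the function and argument names are illustrative, not part of any standard model.

```python
# Sketch: a colocated end node injects a new packet into its routing node only
# when the desired output link is not claimed by an in-transit packet in this
# slot, so the injected packet neither suffers nor causes a deflection.

def try_inject(transit_output_ports: set, new_packet_port: int) -> bool:
    """True if the end node may inject a packet bound for new_packet_port,
    given the output ports already claimed by in-transit packets this slot."""
    return new_packet_port not in transit_output_ports

print(try_inject({0, 2}, 1))  # port 1 is free this slot: inject
print(try_inject({0, 2}, 2))  # port 2 is taken: hold the new packet
```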
Delay

The first consequence of deflection routing is that the average delay experienced by the packets in the network is larger than in store-and-forward networks.

Figure 12.21 The Manhattan Street network with 4^2 = 16 nodes. In a network with n^2 nodes, these nodes are arranged in a square grid with n rows and columns. Each node transmits to two nodes, one in the same row and another in the same column. Each node also receives from two other nodes, one in the same row and the other in the same column. Assuming n is even, the direction of transmission alternates in successive rows and columns.

Figure 12.22 The shufflenet with eight nodes. More generally, a (Δ, k) shufflenet consists of kΔ^k nodes, arranged in k columns, each with Δ^k nodes. We can think of a (Δ, k) shufflenet in terms of the state transition diagram of a k-digit shift register, with each digit in {0, 1, ..., Δ − 1}. Each node is labeled (c, a0a1...ak−1) by its column index c ∈ {0, 1, ..., k − 1} along with a k-digit string a0a1...ak−1, ai ∈ {0, 1, ..., Δ − 1}, 0 ≤ i ≤ k − 1. There is an edge from a node i to another node j in the following column if node j's string can be obtained from node i's string by one shift. In other words, there is an edge from node (c, a0a1...ak−1) to a node ((c + 1) mod k, a1a2...ak−1x), where x ∈ {0, 1, ..., Δ − 1}.

In this comparison, not only is the network topology fixed, but the statistics of the packet arrivals between each source-destination pair are also fixed. In particular, the rate of injection of new packets into the network, which is called the arrival rate, for each source-destination pair must be fixed. The delay experienced by a packet consists of two components. The first is the queuing delay, the time spent waiting in the buffers at each routing node for transmission.
There is no queuing delay in the case of deflection routing. The second component of the delay experienced by a packet is the propagation delay, the time taken for the packet to traverse all the links from the source node to the destination node. The propagation delay is often larger for deflection routing than for routing with buffers owing to the misdirection of packets away from their destinations. As a result, in most cases, for a given arrival rate, the overall delay in deflection-routed networks is larger than the overall delay in store-and-forward networks.

Throughput

Another consequence of deflection routing is that the throughput of the network is decreased compared to routing with buffers. An informal definition of the throughput of these networks, which will suffice for our purposes here, is that it is the maximum rate at which new packets can be injected into the network from their sources. Clearly, this depends on the interconnection topology of the network and the data rates on the links. In addition, it depends on the traffic pattern, which must remain fixed in defining the throughput. The traffic pattern specifies the fraction of new packets for each source-destination pair. Typically, in all theoretical analyses of such networks, the throughput is evaluated for a uniform traffic pattern, which means that the arrival rates of new packets for all source-destination pairs in the network are equal. If all the links run at the same speed, the throughput can be conveniently expressed as a fraction of the link speed. For Manhattan Street networks with sizes ranging from a few hundred to a few thousand nodes, deflection routing achieves 55-70% of the throughput achieved by routing with buffering [Max89]. For shufflenets in the same range of sizes, the value is only 20-30% of the throughput with buffers.
However, since a shufflenet has a much higher throughput than a Manhattan Street network of the same size (for routing with buffers), the actual throughput of the Manhattan Street network in the case of deflection routing is lower than that of the shufflenet. All these results assume a uniform traffic pattern.

So what do these results imply for irregular networks? To discuss this, let us examine some of the differences in the properties of these two networks. One important property of any network is its diameter, which is the largest number of hops on the shortest path between any two nodes in the network. In other words, the diameter is the maximum number of hops between two nodes in the network. However, in most networks, the larger the diameter, the greater the number of hops that a packet has to travel even on average to get to its destination. The Manhattan Street network has a diameter that is proportional to √n, where n is the number of nodes in the network. On the other hand, the shufflenet has a diameter that is proportional to log2 n. (We consider shufflenets of degree 2.) Thus if we consider a Manhattan Street network and a shufflenet with the same number of nodes and edges, the Manhattan Street network will have a lower throughput for routing with buffers than the shufflenet, since each packet has to traverse more edges, on the average. For arbitrary networks, we can generalize this and say that the smaller the diameter of the network, the larger the throughput for routing with buffers.

For deflection routing, a second property of the network that we must consider is its deflection index. This property was introduced in [Max89], although it was not called by this name. It was formally defined and discussed in greater detail in a later paper [GG93]. The deflection index is the largest number of hops that a single deflection adds to the shortest path between some two nodes in the network.
In the Manhattan Street network, a single deflection adds at most four hops to the path length, so its deflection index is four. On the other hand, the shufflenet has a deflection index of log2 n hops. This accounts for the fact that the Manhattan Street network has a significantly larger relative throughput (the deflection routing throughput expressed as a fraction of the store-and-forward throughput) than the shufflenet (55-70% versus 20-30%). For arbitrary networks, we can then say that the deflection index must be kept small so that the throughput remains high in the face of deflection routing. Combining the two observations, we can conclude that network topologies with small diameters and small deflection indices are best suited for photonic packet-switching networks. A regular topology designed by combining the Manhattan Street and shufflenet topologies and having these properties is discussed in [GG93]. In addition to choosing a good network topology (not necessarily regular), the performance of deflection-routing networks can be further improved by using appropriate deflection rules. A deflection rule specifies the manner in which the packets to be deflected are chosen among the packets contending for the same switch output port. The results we have quoted assume that in the event of a conflict between two packets, both packets are equally likely to be deflected. This deflection rule is termed random. Another possible deflection rule, called closest-to-finish [GG93], states that when two packets are contending for the same output port, the packet that is farther away from its destination is deflected. This has the effect of reducing the average number of deflections suffered by a packet and thus increasing the throughput.

Small Buffers

We can also consider deflection routing with a very limited number of buffers, for example, buffers of one or two packets at each input port.
If this limited buffer is full, the packet is again deflected. Such limited-buffer deflection-routing strategies achieve higher throughputs compared to the purest form of deflection routing without any buffers whatsoever. We refer to [Max89, FBP95] for the quantitative details.

Livelock

When a network employs deflection routing, there is the possibility that a packet will be deflected forever and never reach its destination. This phenomenon has been called both deadlock [GG93] and livelock [LNGP96], but the term livelock seems to be more appropriate. Livelock is somewhat similar to routing loops encountered in store-and-forward networks (see Section 6.3), but routing loops are a transient phenomenon there, whereas livelock is an inherent characteristic of deflection routing. Livelock can be eliminated by suitably designed deflection rules. However, proving that any particular deflection rule is livelock-free seems to be hard. We refer to [GG93, BDG95] for some further discussion of this issue (under the term deadlock). One way to eliminate livelocks is to simply drop packets that have exceeded a certain threshold on the hop count.

12.5 Burst Switching

Burst switching is a variant of PPS. In burst switching, a source node transmits a header followed by a packet burst. Typically the header is transmitted at a lower speed, and most proposals assume it is carried on an out-of-band control channel. An intermediate node reads the packet header and activates its switch to connect the following burst stream to the appropriate output port if a suitable output port is available. If the output port is not available, the burst is either buffered or dropped. The main difference between burst switching and conventional photonic packet switching has to do with the fact that bursts can be fairly long compared to the packet duration in packet switching.
In burst switching, if the bursts are sufficiently long, it is possible to ask for or reserve bandwidth in the network ahead of time before sending the burst. Various protocols have been proposed for this purpose. For example, one such protocol, called Just-Enough-Time (JET), works as follows. A source node wanting to send a burst first sends out a header on the control channel alerting the nodes along the path that a burst will follow. It follows the header by transmitting the burst after a certain time period. The period is large enough to provide the nodes sufficient time to process the header and set up their switches before the burst arrives.
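The offset between header and burst in JET can be sketched as a simple calculation: the source must wait at least as long as the cumulative per-hop header processing time plus the time for the last switch to be configured. This is a minimal sketch; the parameter names and the assumption of uniform per-hop processing time are illustrative, not taken from a specific JET specification.

```python
# Sketch: minimum JET offset between sending the header on the control channel
# and launching the burst on the data channel.

def jet_offset(num_hops: int, header_processing_time: float,
               switch_setup_time: float) -> float:
    """Minimum source wait time: the header is processed at each of num_hops
    nodes, and the switch must finish configuring before the burst arrives."""
    return num_hops * header_processing_time + switch_setup_time

# e.g. 5 hops, 10 us of header processing per hop, 50 us of switch setup:
print(jet_offset(5, 10e-6, 50e-6))  # 100 us in this illustrative case
```

Because the burst trails the header by just enough time, no buffering of the burst is needed at intermediate nodes when the reservation succeeds.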
