Computer Networking: A Top-Down Approach Featuring the Internet - Part 4
Principle of Reliable Data Transfer

Figure 3.4-7: rdt2.2 receiver

Reliable Data Transfer over a Lossy Channel with Bit Errors: rdt3.0

Suppose now that in addition to corrupting bits, the underlying channel can lose packets as well, a not uncommon event in today's computer networks (including the Internet). Two additional concerns must now be addressed by the protocol: how to detect packet loss and what to do when this occurs. The use of checksumming, sequence numbers, ACK packets, and retransmissions - the techniques already developed in rdt2.2 - will allow us to answer the latter concern. Handling the first concern will require adding a new protocol mechanism. There are many possible approaches towards dealing with packet loss (several more of which are explored in the exercises at the end of the chapter). Here, we'll put the burden of detecting and recovering from lost packets on the sender. Suppose that the sender transmits a data packet and either that packet, or the receiver's ACK of that packet, gets lost. In either case, no reply is forthcoming at the sender from the receiver. If the sender is willing to wait long enough so that it is certain that a packet has been lost, it can simply retransmit the data packet. You should convince yourself that this protocol does indeed work. But how long must the sender wait to be certain that something has been lost?
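The wait-then-retransmit idea can be sketched in a few lines of Python. This is only a simulation sketch, not the rdt3.0 FSM itself: the channel function, loss probability, and retry cap below are invented here to make the loop runnable, with a lost packet (or lost ACK) modeled simply as a missing reply.

```python
import random

def send_with_retransmit(data, seq, channel, max_tries=10):
    """Stop-and-wait sender: retransmit until the matching ACK arrives."""
    for attempt in range(1, max_tries + 1):
        ack = channel(seq, data)   # returns the ACK'ed seq number, or None on a "timeout"
        if ack == seq:             # ACK for this packet: done
            return attempt
    raise RuntimeError("no ACK after %d tries" % max_tries)

def lossy_channel(loss_prob=0.3):
    """Channel that loses the packet or its ACK with probability loss_prob."""
    def send(seq, data):
        if random.random() < loss_prob:
            return None            # lost somewhere: the sender will time out
        return seq                 # receiver ACKs the packet's sequence number
    return send

random.seed(1)
tries = send_with_retransmit("hello", 0, lossy_channel())  # retransmits until ACK'ed
```

Note that, exactly as the text warns, this sender cannot tell which copy of a packet a given ACK answers; resolving that ambiguity is what sequence and acknowledgement numbers are for.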
It must clearly wait at least as long as a round trip delay between the sender and receiver (which may include buffering at intermediate routers or gateways) plus whatever amount of time is needed to process a packet at the receiver. In many networks, this worst case maximum delay is very difficult to even estimate, much less know with certainty. Moreover, the protocol should ideally recover from packet loss as soon as possible; waiting for a worst case delay could mean a long wait until error recovery is initiated. The approach thus adopted in practice is for the sender to ``judiciously'' choose a time value such that packet loss is likely, although not guaranteed, to have happened. If an ACK is not received within this time, the packet is retransmitted. Note that if a packet experiences a particularly large delay, the sender may retransmit the packet even though neither the data packet nor its ACK has been lost. This introduces the possibility of duplicate data packets in the sender-to-receiver channel. Happily, protocol rdt2.2 already has enough functionality (i.e., sequence numbers) to handle the case of duplicate packets.

From the sender's viewpoint, retransmission is a panacea. The sender does not know whether a data packet was lost, an ACK was lost, or if the packet or ACK was simply overly delayed. In all cases, the action is the same: retransmit. In order to implement a time-based retransmission mechanism, a countdown timer will be needed that can interrupt the sender after a given amount of time has expired. The sender will thus need to be able to (i) start the timer each time a packet (either a first time packet, or a retransmission) is sent, (ii) respond to a timer interrupt (taking appropriate actions), and (iii) stop the timer. The existence of sender-generated duplicate packets and packet
(data, ACK) loss also complicates the sender's processing of any ACK packet it receives. If an ACK is received, how is the sender to know if it was sent by the receiver in response to its (sender's) own most recently transmitted packet, or is a delayed ACK sent in response to an earlier transmission of a different data packet? The solution to this dilemma is to augment the ACK packet with an acknowledgement field. When the receiver generates an ACK, it will copy the sequence number of the data packet being ACK'ed into this acknowledgement field. By examining the contents of the acknowledgment field, the sender can determine the sequence number of the packet being positively acknowledged.

Figure 3.4-8: rdt3.0 sender FSM

Figure 3.4-9: Operation of rdt3.0, the alternating bit protocol

Figure 3.4-8 shows the sender FSM for rdt3.0, a protocol that reliably transfers data over a channel that can corrupt or lose packets. Figure 3.4-9 shows how the protocol operates with no lost or delayed packets, and how it handles lost data packets. In Figure 3.4-9, time moves forward from the top of the diagram towards the bottom of the diagram; note that a receive time for a packet is necessarily later than the send time for a packet as a result of transmission and propagation delays. In Figures 3.4-9(b)-(d), the send-side brackets indicate the times at which a timer is set and later times out. Several of the more subtle aspects of this protocol are explored in the exercises at the end of this chapter. Because packet sequence numbers alternate between 0 and 1, protocol rdt3.0 is sometimes known as the alternating bit protocol. We have now assembled
the key elements of a data transfer protocol. Checksums, sequence numbers, timers, and positive and negative acknowledgement packets each play a crucial and necessary role in the operation of the protocol. We now have a working reliable data transfer protocol!

3.4.2 Pipelined Reliable Data Transfer Protocols

Protocol rdt3.0 is a functionally correct protocol, but it is unlikely that anyone would be happy with its performance, particularly in today's high speed networks. At the heart of rdt3.0's performance problem is the fact that it is a stop-and-wait protocol. To appreciate the performance impact of this stop-and-wait behavior, consider an idealized case of two end hosts, one located on the west coast of the United States and the other located on the east coast. The speed-of-light propagation delay, Tprop, between these two end systems is approximately 15 milliseconds. Suppose that they are connected by a channel with a capacity, C, of 1 Gigabit (10**9 bits) per second. With a packet size, SP, of 1K bytes per packet including both header fields and data, the time needed to actually transmit the packet into the 1 Gbps link is

Ttrans = SP/C = (8 Kbits/packet)/(10**9 bits/sec) = 8 microseconds

With our stop and wait protocol, if the sender begins sending the packet at t = 0, then at t = 8 microseconds the last bit enters the channel at the sender side. The packet then makes its 15 msec cross country journey, as depicted in Figure 3.4-10(a), with the last bit of the packet emerging at the receiver at t = 15.008 msec. Assuming for simplicity that ACK packets are the same size as data packets and that the receiver can begin sending an ACK packet as soon as the last bit of a data packet is received, the last bit of the ACK packet emerges back at the sender at t = 30.016 msec. Thus, in 30.016 msec, the sender was only busy (sending or receiving) for .016 msec. If we define the utilization of the sender (or the channel) as the fraction of time the sender is actually busy sending bits into the
channel, we have a rather dismal sender utilization, Usender, of

Usender = (.008/30.016) = 0.00027

That is, the sender was busy only 2.7 hundredths of one percent of the time. Viewed another way, the sender was only able to send 1K bytes in 30.016 milliseconds, an effective throughput of only 33 KB/sec - even though a 1 Gigabit per second link was available! Imagine the unhappy network manager who just paid a fortune for a gigabit capacity link but manages to get a throughput of only 33 KB/sec! This is a graphic example of how network protocols can limit the capabilities provided by the underlying network hardware. Also, we have neglected lower layer protocol processing times at the sender and receiver, as well as the processing and queueing delays that would occur at any intermediate routers between the sender and receiver. Including these effects would only serve to further increase the delay and further accentuate the poor performance.

Figure 3.4-10: Stop-and-wait versus pipelined protocols

The solution to this particular performance problem is a simple one: rather than operate in a stop-and-wait manner, the sender is allowed to send multiple packets without waiting for acknowledgements, as shown in Figure 3.4-10(b). Since the many in-transit sender-to-receiver packets can be visualized as filling a pipeline, this technique is known as pipelining. Pipelining has several consequences for reliable data transfer protocols:

- The range of sequence numbers must be increased, since each in-transit packet (not counting retransmissions) must have a unique sequence number and there may be multiple, in-transit, unacknowledged packets.

- The sender and receiver sides of the protocols may have to buffer more than one packet. Minimally, the sender will have to buffer packets that have been transmitted, but not yet
acknowledged. Buffering of correctly received packets may also be needed at the receiver, as discussed below.

The range of sequence numbers needed and the buffering requirements will depend on the manner in which a data transfer protocol responds to lost, corrupted, and overly delayed packets. Two basic approaches towards pipelined error recovery can be identified: Go-Back-N and selective repeat.

3.4.3 Go-Back-N (GBN)

Figure 3.4-11: Sender's view of sequence numbers in Go-Back-N

In a Go-Back-N (GBN) protocol, the sender is allowed to transmit multiple packets (when available) without waiting for an acknowledgment, but is constrained to have no more than some maximum allowable number, N, of unacknowledged packets in the pipeline. Figure 3.4-11 shows the sender's view of the range of sequence numbers in a GBN protocol. If we define base to be the sequence number of the oldest unacknowledged packet and nextseqnum to be the smallest unused sequence number (i.e., the sequence number of the next packet to be sent), then four intervals in the range of sequence numbers can be identified. Sequence numbers in the interval [0,base-1] correspond to packets that have already been transmitted and acknowledged. The interval [base,nextseqnum-1] corresponds to packets that have been sent but not yet acknowledged. Sequence numbers in the interval [nextseqnum,base+N-1] can be used for packets that can be sent immediately, should data arrive from the upper layer. Finally, sequence numbers greater than or equal to base+N cannot be used until an unacknowledged packet currently in the pipeline has been acknowledged. As suggested by Figure 3.4-11, the range of permissible sequence numbers for transmitted but not-yet-acknowledged packets can be viewed as a ``window'' of size N over the range of sequence numbers. As the
protocol operates, this window slides forward over the sequence number space. For this reason, N is often referred to as the window size and the GBN protocol itself as a sliding window protocol. You might be wondering why we would even limit the number of outstanding, unacknowledged packets to a value of N in the first place. Why not allow an unlimited number of such packets? We will see in Section 3.5 that flow control is one reason to impose a limit on the sender. We'll examine another reason to do so in Section 3.7, when we study TCP congestion control. In practice, a packet's sequence number is carried in a fixed length field in the packet header. If k is the number of bits in the packet sequence number field, the range of sequence numbers is thus [0, 2^k - 1]. With a finite range of sequence numbers, all arithmetic involving sequence numbers must then be done using modulo-2^k arithmetic. (That is, the sequence number space can be thought of as a ring of size 2^k, where sequence number 2^k - 1 is immediately followed by sequence number 0.)
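The ring structure of the sequence-number space can be made concrete with a short Python sketch. The field width k = 3 and the helper names here are illustrative choices, not from the text:

```python
K = 3                    # bits in the sequence number field
SPACE = 2 ** K           # sequence numbers run 0 .. 2^k - 1, i.e. 0 .. 7

def next_seq(s):
    """Successor of s in the circular sequence-number space."""
    return (s + 1) % SPACE

def in_window(seq, base, N):
    """True if seq lies in [base, base+N-1], with wraparound modulo 2^k."""
    return (seq - base) % SPACE < N

wraps = next_seq(SPACE - 1)         # sequence number 2^k - 1 is followed by 0
inside = in_window(1, base=6, N=4)  # the window {6, 7, 0, 1} wraps past 0
```

A sliding-window sender would use a test like in_window() to decide whether a sequence number may still be used, and an update like next_seq() when advancing nextseqnum.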
Recall that rdt3.0 had a 1-bit sequence number and a range of sequence numbers of [0,1]. Several of the problems at the end of this chapter explore consequences of a finite range of sequence numbers. We will see in Section 3.5 that TCP has a 32-bit sequence number field, where TCP sequence numbers count bytes in the byte stream rather than packets.

Figure 3.4-12: Extended FSM description of GBN sender

Figure 3.4-13: Extended FSM description of GBN receiver

Figures 3.4-12 and 3.4-13 give an extended-FSM description of the sender and receiver sides of an ACK-based, NAK-free, GBN protocol. We refer to this FSM description as an extended FSM since we have added variables (similar to programming language variables) for base and nextseqnum, and also added operations on these variables and conditional actions involving these variables. Note that the extended-FSM specification is now beginning to look somewhat like a programming language specification. [Bochman 84] provides an excellent survey of additional extensions to FSM techniques as well as other programming language-based techniques for specifying protocols.

The GBN sender must respond to three types of events:

- Invocation from above. When rdt_send() is called from above, the sender first checks to see if the window is full, i.e., whether there are N outstanding, unacknowledged packets. If the window is not full, a packet is created and sent, and variables are appropriately updated. If the window is full, the sender simply returns the data back to the upper layer, an implicit indication that the window is full. The upper layer would presumably then have to try again later. In a real implementation, the sender would more likely have either buffered (but not immediately sent) this data, or would have a synchronization mechanism (e.g., a semaphore or a
flag) that would allow the upper layer to call rdt_send() only when the window is not full.

- Receipt of an ACK. In our GBN protocol, an acknowledgement for a packet with sequence number n will be taken to be a cumulative acknowledgement, indicating that all packets with a sequence number up to and including n have been correctly received at the receiver. We'll come back to this issue shortly when we examine the receiver side of GBN.

- A timeout event. The protocol's name, ``Go-Back-N,'' is derived from the sender's behavior in the presence of lost or overly delayed packets. As in the stop-and-wait protocol, a timer will again be used to recover from lost data or acknowledgement packets. If a timeout occurs, the sender resends all packets that have been previously sent but that have not yet been acknowledged. Our sender in Figure 3.4-12 uses only a single timer, which can be thought of as a timer for the oldest transmitted-but-not-yet-acknowledged packet. If an ACK is received but there are still additional transmitted-but-yet-to-be-acknowledged packets, the timer is restarted. If there are no outstanding unacknowledged packets, the timer is stopped.

The receiver's actions in GBN are also simple. If a packet with sequence number n is received correctly and is in-order (i.e., the data last delivered to the upper layer came from a packet with sequence number n-1), the receiver sends an ACK for packet n and delivers the data portion of the packet to the upper layer. In all other cases, the receiver discards the packet and resends an ACK for the most recently received in-order packet. Note that since packets are delivered one-at-a-time to the upper layer, if packet k has been received and delivered, then all packets with a sequence number lower than k have also been delivered. Thus, the use of cumulative acknowledgements is a natural choice for GBN. In our GBN protocol, the receiver discards out-of-order packets. While it may seem silly and wasteful to discard a correctly received (but
out-of-order) packet, there is some justification for doing so. Recall that the receiver must deliver data, in-order, to the upper layer. Suppose now that packet n is expected, but packet n+1 arrives. Since data must be delivered in order, the receiver could buffer (save) packet n+1 and then deliver this packet to the upper layer after it had later received and delivered packet n. However, if packet n is lost, both it and packet n+1 will eventually be retransmitted as a result of the GBN retransmission rule at the sender. Thus, the receiver can simply discard packet n+1. The advantage of this approach is the simplicity of receiver buffering - the receiver need not buffer any out-of-order packets. Thus, while the sender must maintain the upper and lower bounds of its window and the position of nextseqnum within this window, the only piece of information the receiver need maintain is the sequence number of the next in-order packet. This value is held in the variable expectedseqnum, shown in the receiver FSM in Figure 3.4-13. Of course, the disadvantage of throwing away a correctly received packet is that the subsequent retransmission of that packet might be lost or garbled and thus even more retransmissions would be required.

Figure 3.4-14: Go-Back-N in operation

Figure 3.4-14 shows the operation of the GBN protocol for the case of a window size of four packets. Because of this window size limitation, the sender sends packets 0 through 3 but then must wait for one or more of these packets to be acknowledged before proceeding. As each successive ACK (e.g., ACK0 and ACK1) is received, the window slides forward and the sender can transmit one new packet (pkt4 and pkt5, respectively). On the receiver side, packet 2 is lost and thus packets 3, 4, and 5 are found to be out-of-order and are discarded. Before closing our
discussion of GBN, it is worth noting that an implementation of this protocol in a protocol stack would likely be structured similarly to that of the extended FSM in Figure 3.4-12. The implementation would also likely be in the form of various procedures that implement the actions to be taken in response to the various events that can occur. In such event-based programming, the various procedures are called (invoked) either by other procedures in the protocol stack, or as the result of an interrupt. In the sender, these events would be (i) a call from the upper layer entity to invoke rdt_send(), (ii) a timer interrupt, and (iii) a call from the lower layer to invoke rdt_rcv() when a packet arrives. The programming exercises at the end of this chapter will give you a chance to actually implement these routines in a simulated, but realistic, network setting.

We note here that the GBN protocol incorporates almost all of the techniques that we will encounter when we study the reliable data transfer components of TCP in Section 3.5: the use of sequence numbers, cumulative acknowledgements, checksums, and a timeout/retransmit operation. Indeed, TCP is often referred to as a GBN style of protocol. There are, however, some differences. Many TCP implementations will buffer correctly-received but out-of-order segments [Stevens 1994]. A proposed modification to TCP, the so-called selective acknowledgment [RFC 2018], will also allow a TCP receiver to selectively acknowledge a single out-of-order packet rather than cumulatively acknowledge the last correctly received packet. The notion of a selective acknowledgment is at the heart of the second broad class of pipelined protocols: the so-called selective repeat protocols.

3.4.4 Selective Repeat (SR)

The GBN protocol allows the sender to potentially ``fill the pipeline''
in Figure 3.4-10 with packets, thus avoiding the channel utilization problems we noted with stop-and-wait protocols. There are, however, scenarios in which GBN itself will suffer from performance problems. In particular, when the window size and bandwidth-delay product are both large, many packets can be in the pipeline. A single packet error can thus cause GBN to retransmit a large number of packets, many of which may be unnecessary. As the probability of channel errors increases, the pipeline can become filled with these unnecessary retransmissions. Imagine, in our message dictation scenario, if every time a word was garbled, the surrounding 1000 words (e.g., a window size of 1000 words) had to be repeated. The dictation would be slowed by all of the reiterated words.

As the name suggests, Selective Repeat (SR) protocols avoid unnecessary retransmissions by having the sender retransmit only those packets that it suspects were received in error (i.e., were lost or corrupted) at the receiver. This individual, as-needed, retransmission will require that the receiver individually acknowledge correctly-received packets. A window size of N will again be used to limit the number of outstanding, unacknowledged packets in the pipeline. However, unlike GBN, the sender will have already received ACKs for some of the packets in the window. Figure 3.4-15 shows the SR sender's view of the sequence number space. Figure 3.4-16 details the various actions taken by the SR sender. The SR receiver will acknowledge a correctly received packet whether or not it is in-order. Out-of-order packets are buffered until any missing packets (i.e., packets with lower sequence numbers) are received, at which point a batch of packets can be delivered in-order to the upper layer. Figure 3.4-17 itemizes the various actions taken by the SR receiver. Figure 3.4-18 shows an example of SR operation in the presence of lost packets. Note that in Figure 3.4-18, the receiver initially buffers packets 3 and 4, and
delivers them together with packet 2 to the upper layer when packet 2 is finally received.

Figure 3.4-15: SR sender and receiver views of sequence number space

- Data received from above. When data is received from above, the SR sender checks the next available sequence number for the packet. If the sequence number is within the sender's window, the data is packetized and sent; otherwise it is either buffered or returned to the upper layer for later transmission, as in GBN.

- Timeout. Timers are again used to protect against lost packets. However, each packet must now have its own logical timer, since only a single packet will be transmitted on timeout. A single hardware timer can be used to mimic the operation of multiple logical timers.

- ACK received. If an ACK is received, the SR sender marks that packet as having been received, provided it is in the window. If the packet's sequence number is equal to sendbase, the window base is moved forward to the unacknowledged packet with the smallest sequence number. If the window moves and there are untransmitted packets with sequence numbers that now fall within the window, these packets are transmitted.

Figure 3.4-16: Selective Repeat sender actions

- Packet with sequence number in [rcvbase, rcvbase+N-1] is correctly received. In this case, the received packet falls within the receiver's window and a selective ACK packet is returned to the sender. If the packet was not previously received, it is buffered. If this packet has a sequence number equal to the base of the receive window (rcvbase in Figure 3.4-15), then this packet, and any previously buffered and consecutively numbered (beginning with rcvbase) packets, are delivered to the upper layer.

TCP Congestion Control: References

[RFC 1122] R. Braden, "Requirements for Internet Hosts - Communication Layers," RFC 1122, October 1989.

[RFC 1323] V. Jacobson, R. Braden, D. Borman, "TCP Extensions for High Performance," RFC 1323, May 1992.

[RFC 2581] M. Allman, V. Paxson, W. Stevens, "TCP Congestion Control," RFC 2581, April 1999.

[Shenker 1990] S. Shenker, L. Zhang, and D.D. Clark, "Some Observations on the Dynamics of a Congestion Control Algorithm," ACM Computer Communications Review, 20(4), October 1990, pp. 30-39.

[Stevens 1994] W.R. Stevens, TCP/IP Illustrated, Volume 1: The Protocols, Addison-Wesley, Reading, MA, 1994.

[Zhang 1991] L. Zhang, S. Shenker, and D.D. Clark, "Observations on the Dynamics of a Congestion Control Algorithm: The Effects of Two-Way Traffic," ACM SIGCOMM '91, Zurich, 1991.

Copyright Keith W. Ross and James F. Kurose 1996-2000. All rights reserved.

3.8 Summary

We began this chapter by studying the services that a transport layer protocol can provide to network applications. At one extreme, the transport layer protocol can be very simple and offer a no-frills service to applications, providing only the multiplexing/demultiplexing function for communicating processes. The Internet's UDP protocol is an example of such a no-frills (and no-thrills, from the perspective of someone interested in networking) transport-layer protocol. At the other extreme, a transport layer protocol can provide a variety of guarantees to applications, such as reliable delivery of data, delay guarantees, and bandwidth guarantees. Nevertheless, the services that a transport protocol
can provide are often constrained by the service model of the underlying network-layer protocol. If the network layer protocol cannot provide delay or bandwidth guarantees to transport-layer segments, then the transport layer protocol cannot provide delay or bandwidth guarantees for the messages sent between processes.

We learned in Section 3.4 that a transport layer protocol can provide reliable data transfer even if the underlying network layer is unreliable. We saw that providing reliable data transfer has many subtle points, but that the task can be accomplished by carefully combining acknowledgments, timers, retransmissions and sequence numbers. Although we covered reliable data transfer in this chapter, we should keep in mind that reliable data transfer can be provided by link, network, transport or application layer protocols. Any of the upper four layers of the protocol stack can implement acknowledgments, timers, retransmissions and sequence numbers and provide reliable data transfer to the layer above. In fact, over the years, engineers and computer scientists have independently designed and implemented link, network, transport and application layer protocols that provide reliable data transfer (although many of these protocols have quietly disappeared).

In Section 3.5 we took a close look at TCP, the Internet's connection-oriented and reliable transport-layer protocol. We learned that TCP is complex, involving connection management, flow control, round-trip time estimation, as well as reliable data transfer. In fact, TCP is actually more complex than we made it out to be - we intentionally did not discuss a variety of TCP patches, fixes, and improvements that are widely implemented in various versions of TCP. All of this complexity, however, is hidden from the network application. If a client on one host wants to reliably send data to a server on another host, it simply opens a TCP socket to the server and then pumps data into that socket. The client-server application is
oblivious to all of TCP's complexity. In Section 3.6 we examined congestion control from a broad perspective, and in Section 3.7 we showed how TCP implements congestion control. We learned that congestion control is imperative for the well-being of the network. Without congestion control, a network can easily become gridlocked, with little or no data being transported end-to-end. In Section 3.7 we learned that TCP implements an end-to-end congestion control mechanism that additively increases its transmission rate when the TCP connection's path is judged to be congestion-free, and multiplicatively decreases its transmission rate when loss occurs. This mechanism also strives to give each TCP connection passing through a congested link an equal share of the link bandwidth. We also examined in some depth the impact of TCP connection establishment and slow start on latency. We observed that in many important scenarios, connection establishment and slow start significantly contribute to end-to-end delay. We emphasize once more that TCP congestion control has evolved over the years, remains an area of intensive research, and will likely continue to evolve in the upcoming years.

In Chapter 1 we said that a computer network can be partitioned into the "network edge" and the "network core". The network edge covers everything that happens in the end systems. Having now covered the application layer and the transport layer, our discussion of the network edge is now complete. It is time to explore the network core!
This journey begins in the next chapter, where we'll study the network layer, and continues into Chapter 5, where we'll study the link layer.

Chapter 3 Homework Problems and Discussion Questions

Chapter 3 Review Questions

Sections 3.1-3.3

1) Consider a TCP connection between host A and host B. Suppose that the TCP segments traveling from host A to host B have source port number x and destination port number y. What are the source and destination port numbers for the segments travelling from host B to host A?

2) Describe why an application developer may choose to run its application over UDP rather than TCP.

3) Is it possible for an application to enjoy reliable data transfer even when the application runs over UDP? If so, how?

Section 3.5

4) True or False:

a) Host A is sending host B a large file over a TCP connection. Assume host B has no data to send A. Host B will not send acknowledgements to host A because B cannot piggyback the acknowledgements on data?

b) The size of the TCP RcvWindow never changes throughout the duration of the connection?

c) Suppose host A is sending host B a large file over a TCP connection. The number of unacknowledged bytes that A sends cannot exceed the size of the receive buffer?

d) Suppose host A is sending a large file to host B over a TCP connection. If the sequence number for a segment of this connection is m, then the sequence number for the subsequent segment will necessarily be m+1?

e) The TCP segment has a field in its header for RcvWindow?
f) Suppose that the last SampleRTT in a TCP connection is equal to 1 sec. Then Timeout for the connection will necessarily be set to a value >= 1 sec?

g) Suppose host A sends host B one segment with sequence number 38 and 4 bytes of data. Then in this same segment the acknowledgement number is necessarily 42?

5) Suppose A sends two TCP segments back-to-back to B. The first segment has sequence number 90; the second has sequence number 110.

a) How much data is in the first segment?

b) Suppose that the first segment is lost, but the second segment arrives at B. In the acknowledgement that B sends to A, what will be the acknowledgment number?

6) Consider the Telnet example discussed in Section 3.5. A few seconds after the user types the letter 'C', the user types the letter 'R'. After typing the letter 'R', how many segments are sent, and what is put in the sequence number and acknowledgement fields of the segments?

Section 3.7

7) Suppose two TCP connections are present over some bottleneck link of rate R bps. Both connections have a huge file to send (in the same direction over the bottleneck link). The transmissions of the files start at the same time. What is the transmission rate that TCP would like to give to each of the connections?

8) True or False: Consider congestion control in TCP. When a timer expires at the sender, the threshold is set to one half of its previous value?

Problems

1) Suppose client A initiates an FTP session with server S. At about the same time, client B also initiates an FTP session with server S. Provide possible source and destination port numbers for:

(a) the segments sent from A to S?
(b) the segments sent from B to S?
(c) the segments sent from S to A?
(d) the segments sent from S to B?
(e) If A and B are different hosts, is it possible that the source port numbers in the segments from A to S are the same as those from B to S?
(f) How about if they are the same host?

2) UDP and TCP use 1's complement for their checksums. Suppose you have the following three 8-bit words: 01010101, 01110000, 11001100. What is the 1's complement of the sum of these words? Show all work. Why is it that UDP takes the 1's complement of the sum, i.e., why not just use the sum? With the 1's complement scheme, how does the receiver detect errors? Is it possible that a 1-bit error will go undetected? How about a 2-bit error?

3) Protocol rdt2.1 uses both ACKs and NAKs. Redesign the protocol, adding whatever additional protocol mechanisms are needed, for the case that only ACK messages are used. Assume that packets can be corrupted, but not lost. Give the sender and receiver FSMs, and a trace of your protocol in operation (using traces as in Figure \ref{fig57}). Show also how the protocol works in the case of no errors, and show how your protocol recovers from channel bit errors.

4) Consider the following (incorrect) FSM for the receiver for protocol rdt2.1. Show that this receiver, when operating with the sender shown in Figure 3.4-5, can lead the sender and receiver to enter into a deadlock state, where each is waiting for an event that will never occur.

5) In protocol rdt3.0, the ACK packets flowing from the receiver to the sender do not have sequence numbers (although they do have an ACK field that contains the sequence number of the packet they are acknowledging). Why is it that our ACK packets do not require sequence numbers?
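As background for the 1's-complement arithmetic asked about in Problem 2 above, the following sketch shows how 8-bit words are summed with end-around carry and how the resulting checksum lets a receiver detect errors. This is a minimal illustration over standalone 8-bit words, not the full UDP checksum computed over 16-bit words of a segment; the function names are our own.

```python
def ones_complement_sum_8bit(words):
    """Sum 8-bit words using 1's-complement (end-around carry) arithmetic."""
    total = 0
    for w in words:
        total += w
        if total > 0xFF:                 # a carry out of the high bit...
            total = (total & 0xFF) + 1   # ...wraps back into the low bit
    return total

def checksum_8bit(words):
    """The checksum is the 1's complement (bitwise NOT) of the sum."""
    return ~ones_complement_sum_8bit(words) & 0xFF

words = [0b01010101, 0b01110000, 0b11001100]   # the words from Problem 2
csum = checksum_8bit(words)

# The receiver adds all the words plus the checksum; a result of all 1s
# (0xFF for 8-bit words) indicates no detected error. Any single flipped
# bit changes this result, which is why the complement is sent rather
# than the raw sum.
assert ones_complement_sum_8bit(words + [csum]) == 0xFF
```

Working through the code by hand for the three words above reproduces the "show all work" steps the problem asks for.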
6) Draw the FSM for the receiver side of protocol rdt3.0.

7) Give a trace of the operation of protocol rdt3.0 when data packets and acknowledgement packets are garbled. Your trace should be similar to that used in Figure 3.4-9.

8) Consider a channel that can lose packets but has a maximum delay that is known. Modify protocol rdt2.1 to include sender timeout and retransmit. Informally argue why your protocol can communicate correctly over this channel.

9) The sender side of rdt3.0 simply ignores (i.e., takes no action on) all received packets which are either in error, or have the wrong value in the acknum field of an acknowledgement packet. Suppose that in such circumstances, rdt3.0 were to simply retransmit the current data packet. Would the protocol still work? (Hint: Consider what would happen in the case that there are only bit errors; no packet losses and no premature timeouts occur. Consider how many times the nth packet is sent, in the limit as n approaches infinity.)

10) Consider the cross-country example shown in Figure 3.4-10. How big would the window size have to be for the channel utilization to be greater than 90%?

11) Design a reliable, pipelined, data transfer protocol that uses only negative acknowledgements. How quickly will your protocol respond to lost packets when the arrival rate of data to the sender is low? When it is high?

12) Consider transferring an enormous file of L bytes from host A to host B. Assume an MSS of 1460 bytes.

a) What is the maximum value of L such that TCP sequence numbers are not exhausted?
Recall that the TCP sequence number field has four bytes.

b) For the L you obtain in (a), find how long it takes to transmit the file. Assume that a total of 66 bytes of transport, network and data-link header are added to each segment before the resulting packet is sent out over a 10 Mbps link. Ignore flow control and congestion control, so A can pump out the segments back-to-back and continuously.

13) In Figure 3.5-5, we see that TCP waits until it has received three duplicate ACKs before performing a fast retransmit. Why do you think the TCP designers chose not to perform a fast retransmit after the first duplicate ACK for a segment is received?

14) Consider the TCP procedure for estimating RTT. Suppose that x = 0.1. Let SampleRTT1 be the most recent sample RTT, let SampleRTT2 be the next most recent sample RTT, etc.

(a) For a given TCP connection, suppose four acknowledgements have been returned with corresponding sample RTTs SampleRTT4, SampleRTT3, SampleRTT2, and SampleRTT1. Express EstimatedRTT in terms of the four sample RTTs.

(b) Generalize your formula for n sample round-trip times.

(c) For the formula in part (b), let n approach infinity. Comment on why this averaging procedure is called an exponential moving average.

15) Refer to Figure 3.7-3, which illustrates the convergence of TCP's additive increase, multiplicative decrease algorithm. Suppose that instead of a multiplicative decrease, TCP decreased the window size by a constant amount. Would the resulting additive increase, additive decrease converge to an equal share algorithm?
Justify your answer using a diagram similar to Figure 3.7-3.

16) Recall the idealized model for the steady-state dynamics of TCP. In the period of time during which the connection's window size varies from (W*MSS)/2 to W*MSS, only one packet is lost (at the very end of the period).

(a) Show that the loss rate is equal to

L = loss rate = 1 / [ (3/8)W^2 + (3/4)W ]

(b) Use the above result to show that if a connection has loss rate L, then its average bandwidth is approximately given by:

average bandwidth of connection ~ 1.22 * MSS / (RTT * sqrt(L))

17) Consider sending an object of size O = 100 Kbytes from server to client. Let S = 536 bytes and RTT = 100 msec. Suppose the transport protocol uses static windows with window size W.

a) For a transmission rate of 28 Kbps, determine the minimum possible latency. Determine the minimum window size that achieves this latency.

b) Repeat (a) for 100 Kbps.

c) Repeat (a) for 1 Mbps.

d) Repeat (a) for 10 Mbps.

18) Suppose TCP increased its congestion window by two rather than by one for each received acknowledgement during slow start. Thus the first window consists of one segment, the second of three segments, the third of nine segments, etc. For this slow-start procedure:

a) Express K in terms of O and S.

b) Express Q in terms of RTT, S and R.

c) Express latency in terms of P = min(K-1, Q), O, R and RTT.

19) Consider the case RTT = 1 second and O = 100 kBytes. Prepare a chart (similar to the charts in Section 3.5.2) that compares the minimum latency (O/R + RTT) with the latency with slow start, for R = 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps.

20) True or False:

a) If a Web page consists of exactly one object, then non-persistent and persistent connections have exactly the same response time performance?
b) Consider sending one object of size O from server to browser over TCP. If O > S, where S is the maximum segment size, then the server will stall at least once?

c) Suppose a Web page consists of 10 objects, each of size O bits. For persistent HTTP, the RTT portion of the response time is 20 RTT?

d) Suppose a Web page consists of 10 objects, each of size O bits. For non-persistent HTTP with parallel connections, the RTT portion of the response time is 12 RTT?

21) The analysis for dynamic windows in the text assumes that there is one link between server and client. Redo the analysis for T links between server and client. Assume the network has no congestion, so the packets experience no queueing delays. The packets do experience a store-and-forward delay, however. The definition of RTT is the same as that given in the section on TCP congestion control. (Hint: The time for the server to send out the first segment until it receives the acknowledgement is TS/R + RTT.)

22) Recall the discussion at the end of Section 3.7.3 on the response time for a Web page. For the case of non-persistent connections, determine a general expression for the fraction of the response time that is due to TCP slow start.

23) With persistent HTTP, all objects are sent over the same TCP connection. As we discussed in Chapter 2, one of the motivations behind persistent HTTP (with pipelining) is to diminish the effects of TCP connection establishment and slow start on the response time for a Web page. In this problem we investigate the response time for persistent HTTP. Assume that the client requests all the images at once, but only when it has received the entire HTML base page. Let M+1 denote the number of objects and let O denote the size of each object.

a) Argue that the response time takes the form (M+1)O/R + 3RTT + latency due to slow start. Compare the contribution of the RTTs in this expression with that in non-persistent HTTP.

b) Assume that K = log2(O/S + 1) is an integer; thus, the last window of
the base HTML file transmits an entire window's worth of segments, i.e., window K transmits 2^(K-1) segments. Let P' = min{Q, K'-1}. Note that K' is the number of windows that cover an object of size (M+1)O, and P' is the number of stall periods when sending the large object over a single TCP connection. Suppose (incorrectly) that the server can send the images without waiting for the formal request for the images from the client. Show that the response time is then that of sending one large object of size (M+1)O.

c) The actual response time for persistent HTTP is somewhat larger than this approximation. This is because the server must wait for a request for the images before sending the images. In particular, the stall time between the Kth and (K+1)st window is not [S/R + RTT - 2^(K-1)(S/R)]+ but is instead RTT. Show that

24) Consider the scenario of RTT = 100 msec, O = Kbytes, and M = 10. Construct a chart that compares the response times for non-persistent and persistent connections for 28 Kbps, 100 Kbps, 1 Mbps and 10 Mbps. Note that persistent HTTP has substantially lower response time than non-persistent HTTP for all the transmission rates except 28 Kbps.

25) Repeat the above question for the case of RTT = 1 sec, O = Kbytes, M = 10. Note that for these parameters, persistent HTTP gives a significantly lower response time than non-persistent HTTP for all the transmission rates.

26) Consider now non-persistent HTTP with parallel TCP connections. Recall that browsers typically operate in this mode when using HTTP/1.0. Let X denote the maximum number of parallel connections that the client (browser) is permitted to open. In this mode, the client first uses one TCP connection to obtain the base HTML file. Upon receiving the base HTML file, the client establishes M/X sets of TCP connections, with each set having X parallel connections. Argue
that the total response time takes the form:

response time = (M+1)O/R + 2(M/X + 1)RTT + latency due to slow-start stalling

Compare the contribution of the term involving RTT to that of persistent connections and non-persistent (non-parallel) connections.

Discussion Questions

1) Consider streaming stored audio. Does it make sense to run the application over UDP or TCP? Which one does RealNetworks use? Why? Are there any other streaming stored audio products? Which transport protocol do they use, and why?

Programming Assignment

In this programming assignment, you will be writing the sending and receiving transport-level code for implementing a simple reliable data transfer protocol - for either the alternating bit protocol or a Go-Back-N protocol. This should be FUN, since your implementation will differ very little from what would be required in a real-world situation. Since you presumably do not have standalone machines (with an OS that you can modify), your code will have to execute in a simulated hardware/software environment. However, the programming interface provided to your routines (i.e., the code that would call your entities from above (i.e., from layer 5) and from below (i.e., from layer 3)) is very close to what is done in an actual UNIX environment. (Indeed, the software interfaces described in this programming assignment are much more realistic than the infinite-loop senders and receivers that many textbooks describe.) Stopping/starting of timers are also simulated, and timer interrupts will cause your timer handling routine to be activated. You can find full details of the programming assignment, as well as C code that you will need to create the simulated hardware/software environment, at http://gaia.cs.umass.edu/kurose/transport/programming_assignment.htm

Network Layer: Introduction and Service Models

4.1 Introduction and Network Service Models

We saw in the previous chapter that the transport layer provides communication service between two processes running on two different hosts. In order to provide this service, the transport layer relies on the services of the network layer, which provides a communication service between hosts. In particular, the network layer moves transport-layer segments from one host to another. At the sending host, the transport layer segment is passed to the network layer. The network layer then "somehow" gets the segment to the destination host and passes the segment up the protocol stack to the transport layer. Exactly how the network layer moves a segment from the transport layer of an origin host to the transport layer of the destination host is the subject of this chapter. We will see that unlike the transport layer, the network layer requires the coordination of each and every host and router in the network. Because of this, network layer protocols are among the most challenging (and therefore interesting!)
in the protocol stack.

Figure 4.1-1 shows a simple network with two hosts (H1 and H2) and four routers (R1, R2, R3 and R4). The role of the network layer in a sending host is to begin the packet on its journey to the receiving host. For example, if H1 is sending to H2, the network layer in host H1 transfers these packets to its nearby router, R2. At the receiving host (e.g., H2), the network layer receives the packet from its nearby router (in this case, R3) and delivers the packet up to the transport layer at H2. The primary role of the routers is to "switch" packets from input links to output links. Note that the routers in Figure 4.1-1 are shown with a truncated protocol stack, i.e., with no upper layers above the network layer, since routers do not run transport and application layer protocols such as those we examined in Chapters 2 and 3.

Figure 4.1-1: The network layer

The role of the network layer is thus deceptively simple: to transport packets from a sending host to a receiving host. To do so, three important network layer functions can be identified:

- Path Determination. The network layer must determine the route or path taken by packets as they flow from a sender to a receiver. The algorithms that calculate these paths are referred to as routing algorithms. A routing algorithm would determine, for example, whether packets from H1 to H2 flow along the path R2-R1-R3 or the path R2-R4-R3 (or any other path between H1 and H2). Much of this chapter will focus on routing algorithms. In Section 4.2 we will study the theory of routing algorithms, concentrating on the two most prevalent classes of routing algorithms: link state routing and distance vector routing. We will see that the complexity of a routing algorithm grows considerably as the number of routers in the network increases. This motivates
the use of hierarchical routing, a topic we cover in Section 4.3. In Section 4.8 we cover multicast routing - the routing algorithms, switching function, and call setup mechanisms that allow a packet that is sent just once by a sender to be delivered to multiple destinations.

- Switching. When a packet arrives at the input to a router, the router must move it to the appropriate output link. For example, a packet arriving from host H1 to router R2 must be forwarded towards H2 either along the link from R2 to R1 or along the link from R2 to R4. In Section 4.6, we look inside a router and examine how a packet is actually switched (moved) from an input link to an output link.

- Call Setup. Recall that in our study of TCP, a three-way handshake was required before data actually flowed from sender to receiver. This allowed the sender and receiver to set up the needed state information (e.g., sequence number and initial flow control window size). In an analogous manner, some network layer architectures (e.g., ATM) require that the routers along the chosen path from source to destination handshake with each other in order to set up state before data actually begins to flow. In the network layer, this process is referred to as call setup. The network layer of the Internet architecture does not perform any such call setup.

Before delving into the details of the theory and implementation of the network layer, however, let us first take the broader view and consider what different types of service might be offered by the network layer.

4.1.1 Network Service Model

When the transport layer at a sending host transmits a packet into the network (i.e., passes it down to the network layer at the sending host), can the transport layer count on the network layer to deliver the packet to the destination? When multiple packets are sent, will they be delivered to the transport layer in the receiving host in the order in which they were sent?
Will the amount of time between the sending of two sequential packet transmissions be the same as the amount of time between their reception? Will the network provide any feedback about congestion in the network? What is the abstract view (properties) of the channel connecting the transport layer in the two hosts? The answers to these questions and others are determined by the service model provided by the network layer. The network service model defines the characteristics of end-to-end transport of data between one "edge" of the network and the other, i.e., between sending and receiving end systems.

Datagram or Virtual Circuit?

Perhaps the most important abstraction provided by the network layer to the upper layers is whether or not the network layer uses virtual circuits (VCs). You may recall from Chapter 1 that a virtual-circuit packet network behaves much like a telephone network, which uses "real circuits" as opposed to "virtual circuits". There are three identifiable phases in a virtual circuit:

- VC setup. During the setup phase, the sender contacts the network layer, specifies the receiver address, and waits for the network to set up the VC. The network layer determines the path between sender and receiver, i.e., the series of links and switches through which all packets of the VC will travel. As discussed in Chapter 1, this typically involves updating tables in each of the packet switches in the path. During VC setup, the network layer may also reserve resources (e.g., bandwidth) along the path of the VC.

- Data transfer. Once the VC has been established, data can begin to flow along the VC.

- Virtual circuit teardown. This is initiated when the sender (or receiver) informs the network layer of its desire to terminate the VC. The network layer will then typically inform the end system on the other side of the network of
the call termination, and update the tables in each of the packet switches on the path to indicate that the VC no longer exists.

There is a subtle but important distinction between VC setup at the network layer and connection setup at the transport layer (e.g., the TCP 3-way handshake we studied in Chapter 3). Connection setup at the transport layer involves only the two end systems. The two end systems agree to communicate and together determine the parameters (e.g., initial sequence number, flow control window size) of their transport-level connection before data actually begins to flow on the transport-level connection. Although the two end systems are aware of the transport-layer connection, the switches within the network are completely oblivious to it. On the other hand, with a virtual-circuit network layer, packet switches are involved in virtual-circuit setup, and each packet switch is fully aware of all the VCs passing through it. The messages that the end systems send to the network to indicate the initiation or termination of a VC, and the messages passed between the switches to set up the VC (i.e., to modify switch tables), are known as signaling messages, and the protocols used to exchange these messages are often referred to as signaling protocols. VC setup is shown pictorially in Figure 4.1-2.

Figure 4.1-2: Virtual circuit service model

We mentioned in Chapter 1 that ATM uses virtual circuits, although virtual circuits in ATM jargon are called virtual channels. Thus ATM packet switches receive and process VC setup and teardown messages, and they also maintain VC state tables. Frame relay and X.25, which will be covered in Chapter 5, are two other networking technologies that use virtual circuits.

With a datagram network layer, each time an end system wants to send a packet, it stamps the packet with the address of the destination end system, and then pops the packet into the network. As shown in Figure 4.1-3, this is done without
any VC setup. Packet switches (called "routers" in the Internet) do not maintain any state information about VCs - because there are no VCs! Instead, packet switches route a packet towards its destination by examining the packet's destination address, indexing a routing table with the destination address, and forwarding the packet in the direction of the destination. (As discussed in Chapter 1, datagram routing is similar to routing ordinary postal mail.) Because routing tables can be modified at any time, a series of packets sent from one end system to another may follow different paths through the network and may arrive out of order. The Internet uses a datagram network layer.

Figure 4.1-3: Datagram service model

You may recall from Chapter 1 that a packet-switched network typically offers either a VC service or a datagram service to the transport layer, and not both services. For example, an ATM network offers only a VC service to the ATM transport layer (more precisely, to the ATM adaptation layer), and the Internet offers only a datagram service to the transport layer. The transport layer in turn offers services to communicating processes at the application layer. For example, TCP/IP networks (such as the Internet) offer a connection-oriented service (using TCP) and a connectionless service (UDP) to their communicating processes.

An alternative terminology for VC service and datagram service is network-layer connection-oriented service and network-layer connectionless service, respectively. Indeed, the VC service is a sort of connection-oriented service, as it involves setting up and tearing down a connection-like entity, and maintaining connection state information in the packet switches. The datagram service is a sort of connectionless service in that it doesn't employ connection-like entities.
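The per-packet forwarding step described above - index a table by destination address and send the packet out the matching link - can be sketched as follows. This is a deliberately simplified illustration with a flat, exact-match table and hypothetical addresses and link names; real Internet routers match on destination address prefixes rather than complete addresses.

```python
# A toy datagram forwarder: the router holds a table mapping a
# destination address to an outgoing link, and consults it per packet.
# Addresses and link names here are hypothetical, echoing Figure 4.1-1.

routing_table = {
    "H2": "link-to-R3",
    "H1": "link-to-R1",
}

def forward(packet):
    """Look up the packet's destination and choose an output link.

    No per-connection state is kept: every packet is routed
    independently, so consecutive packets between the same end systems
    can take different paths if the table changes between lookups.
    """
    link = routing_table.get(packet["dest"])
    if link is None:
        return "drop"   # no route: a datagram network simply discards
    return link

assert forward({"dest": "H2"}) == "link-to-R3"
assert forward({"dest": "H9"}) == "drop"
```

The contrast with a virtual-circuit switch is that here the table is keyed by destination, not by connection; nothing in the router records which "conversation" a packet belongs to.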
Both sets of terminology have advantages and disadvantages, and both sets are commonly used in the networking literature. We decided to use in this book the "VC service" and "datagram service" terminology for the network layer, and to reserve the "connection-oriented service" and "connectionless service" terminology for the transport layer. We believe this decision will be useful in helping the reader delineate the services offered by the two layers.

The Internet and ATM Network Service Models

Network Architecture | Service Model | Bandwidth Guarantee | No Loss Guarantee | Ordering | Timing | Congestion Indication
Internet | Best Effort | None | None | Any order possible | Not maintained | None
ATM | CBR | Guaranteed constant rate | Yes | In order | Maintained | Congestion will not occur
ATM | VBR | Guaranteed rate | Yes | In order | Maintained | Congestion will not occur
ATM | ABR | Guaranteed minimum | None | In order | Not maintained | Congestion indication provided
ATM | UBR | None | None | In order | Not maintained | None

Table 4.1-1: Internet and ATM Network Service Models

The key aspects of the service model of the Internet and ATM network architectures are summarized in Table 4.1-1. We do not want to delve deeply into the details of the service models here (it can be quite "dry", and detailed discussions can be found in the standards themselves [ATM Forum 1997]). A comparison between the Internet and ATM service models is, however, quite instructive. The current Internet architecture provides only one service model, the datagram service, which is also known as "best effort service." From Table 4.1-1, it might appear that best effort service is a euphemism for "no service at all."
With best effort service, timing between packets is not guaranteed to be preserved, packets are not guaranteed to be received in the order in which they were sent, nor is the eventual delivery of transmitted packets guaranteed. Given this definition, a network which delivered no packets to the destination would satisfy the definition of best-effort delivery service. (Indeed, today's congested public Internet might sometimes appear to be an example of a network that does so!) As we will discuss shortly, however, there are sound reasons for such a minimalist network service model. The Internet's best-effort-only service model is currently being extended to include so-called "integrated services" and "differentiated services"; we will cover these still-evolving service models later in the book.

Let us next turn to the ATM service models. As noted in our overview of ATM in Chapter 1, there are two ATM standards bodies (the ITU and the ATM Forum). Their network service model definitions contain only minor differences, and we adopt here the terminology used in the ATM Forum standards. The ATM architecture provides for multiple service models (that is, each of the two ATM standards has multiple service models). This means that within the same network, different connections can be provided with different classes of service.

Constant bit rate (CBR) network service was the first ATM service model to be standardized, probably reflecting the fact that telephone companies were the early prime movers behind ATM; CBR network service is ideally suited for carrying real-time, constant-bit-rate streaming audio (e.g., a digitized telephone call) and video traffic. The goal of CBR service is conceptually simple: to make the network connection look like a dedicated copper or fiber connection between the sender and receiver. With CBR service, ATM cells are carried across the network in such a way that the end-end delay experienced by a cell (the so-called cell transfer delay, CDT), the
variability in the end-end delay (often referred to as "jitter" or "cell delay variation", CDV), and the fraction of cells that are lost or delivered late (the so-called cell loss rate, CLR) are guaranteed to be less than some specified values. Also, an allocated transmission rate (the peak cell rate, PCR) is defined for the connection, and the sender is expected to offer data to the network at this rate. The values for the PCR, CDT, CDV, and CLR are agreed upon by the sending host and the ATM network when the CBR connection is first established.

A second conceptually simple ATM service class is Unspecified Bit Rate (UBR) network service. Unlike CBR service, which guarantees rate, delay, delay jitter, and loss, UBR makes no guarantees at all other than in-order delivery of cells (that is, cells that are fortunate enough to make it to the receiver). With the exception of in-order delivery, UBR service is thus equivalent to the Internet best effort service model. As with the Internet best effort service model, UBR also provides no feedback to the sender about whether or not a cell is dropped within the network. For reliable transmission of data over a UBR network, higher layer protocols (such as those we studied in the previous chapter) are needed. UBR service might be well suited for non-interactive data transfer applications such as email and newsgroups.

If UBR can be thought of as a "best effort" service, then Available Bit Rate (ABR) network service might best be characterized as a "better" best effort service model. The two most important additional features of ABR service over UBR service are:

- A minimum cell transmission rate (MCR) is guaranteed to a connection using ABR service. If, however, the network has enough free resources at a given time, a sender may actually be able to
successfully send traffic at a higher rate than the MCR.

- Congestion feedback from the network. An ATM network provides feedback to the sender (in terms of a congestion notification bit, or a lower rate at which to send) that controls how the sender should adjust its rate between the MCR and some peak cell rate (PCR). ABR senders must decrease their transmission rates in accordance with such feedback.

ABR provides a minimum bandwidth guarantee, but on the other hand will attempt to transfer data as fast as possible (up to the limit imposed by the PCR). As such, ABR is well suited for data transfer where it is desirable to keep the transfer delays low (e.g., Web browsing).

The final ATM service model is Variable Bit Rate (VBR) network service. VBR service comes in two flavors (and in the ITU specification, VBR-like service comes in four flavors - perhaps indicating a service class with an identity crisis!). In real-time VBR service, the acceptable cell loss rate, delay, and delay jitter are specified as in CBR service. However, the actual source rate is allowed to vary according to parameters specified by the user to the network. The declared variability in rate may be used by the network (internally) to more efficiently allocate resources to its connections, but in terms of the loss, delay and jitter seen by the sender, the service is essentially the same as CBR service. While early efforts in defining VBR service models were clearly targeted towards real-time services (e.g., as evidenced by the PCR, CDT, CDV and CLR parameters), a second flavor of VBR service is now targeted towards non-real-time services and provides a cell loss rate guarantee. An obvious question with VBR is what advantages it offers over CBR (for real-time applications) and over UBR and ABR (for non-real-time applications). Currently, there is not enough (any?)
experience with VBR service to answer this question. An excellent discussion of the rationale behind various aspects of the ATM Forum's Traffic Management Specification 4.0 [ATM Forum 1996] for CBR, VBR, ABR, and UBR service is [Garrett 1996].

4.1.2 Origins of Datagram and Virtual Circuit Service

The evolution of the Internet and ATM network service models reflects their origins. With the notion of a virtual circuit as a central organizing principle, and an early focus on CBR services, ATM reflects its roots in the telephony world (which uses "real" circuits). The subsequent definition of the UBR and ABR service classes acknowledges the importance of the types of data applications developed in the data networking community. Given the VC architecture and a focus on supporting real-time traffic with guarantees about the level of received performance (even with data-oriented services such as ABR), the ATM network layer is significantly more complex than that of the best-effort Internet. This, too, is in keeping with ATM's telephony heritage. Telephone networks, by necessity, had their "complexity" within the network, since they were connecting "dumb" end-system devices such as rotary telephones. (For those too young to know, a rotary phone is a nondigital telephone with no buttons, only a dial.) The Internet, on the other hand, grew out of the need to connect computers (i.e., more sophisticated end devices) together. With sophisticated end-system devices, the Internet architects chose to make the network service model (best effort) as simple as possible and to implement any additional functionality (e.g., reliable data transfer), as well as any new application-level network services, at a higher layer in the end systems. This inverts the model of the telephone network, with some interesting consequences:
• The resulting network service model, which made minimal (no!) service guarantees (and hence posed minimal requirements on the network layer), also made it easier to interconnect networks that used very different link-layer technologies (e.g., satellite, Ethernet, fiber, or radio) with very different characteristics (transmission rates, loss characteristics). We will address the interconnection of IP networks in detail in Section 4.4.

• As we saw in Chapter 2, applications such as email, the Web, and even a network-layer-centric service such as the DNS are implemented in hosts (servers) at the edge of the network. The ability to add a new service simply by attaching a host to the network and defining a new higher-layer protocol (such as HTTP) has allowed new services such as the WWW to be adopted in a breathtakingly short period of time.

As we will see in Chapter 6, however, there is considerable debate in the Internet community about how the network-layer architecture must evolve in order to support real-time services such as multimedia. An interesting comparison of the ATM and the proposed next-generation Internet architectures is given in [Crowcroft 1995].

References

[ATM Forum 1996] ATM Forum, "Traffic Management 4.0," ATM Forum document af-tm-0056.0000. On-line.

[ATM Forum 1997] ATM Forum, "Technical Specifications: Approved ATM Forum Specifications."
On-line.

[Crowcroft 1995] J. Crowcroft, Z. Wang, A. Smith, J. Adams, "A Comparison of the IETF and ATM Service Models," IEEE Communications Magazine, Nov./Dec. 1995, pp. 12-16. Compares the Internet Engineering Task Force int-serv service model with the ATM service model. On-line.

[Garrett 1996] M. Garrett, "A Service Architecture for ATM: From Applications to Scheduling," IEEE Network Magazine, May/June 1996, pp. -14. A thoughtful discussion of the ATM Forum's recent TM 4.0 specification of CBR, VBR, ABR, and UBR service.

Copyright Keith W. Ross and Jim Kurose, 1996-2000. All rights reserved.
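The ABR behavior described in Section 4.1.1, in which a sender adjusts its rate between the MCR and the PCR in response to network feedback, can be sketched in a few lines of code. This is a simplified illustration under our own assumptions (the class name, the additive-increase step of PCR/20, and the halving on congestion are all hypothetical choices), not the ATM Forum's TM 4.0 source behavior:

```python
# Illustrative sketch of ABR-style rate adaptation. The additive-increase
# step (pcr / 20) and the multiplicative decrease (halving) are arbitrary
# teaching values, not the actual ATM Forum TM 4.0 source behavior.
class AbrSender:
    def __init__(self, mcr, pcr):
        assert 0 < mcr <= pcr
        self.mcr = mcr            # minimum cell rate: the guaranteed floor
        self.pcr = pcr            # peak cell rate: the negotiated ceiling
        self.air = pcr / 20.0     # additive increase step per feedback event
        self.rate = mcr           # current allowed cell rate

    def on_feedback(self, congested, explicit_rate=None):
        """Adjust the allowed cell rate based on feedback from the network."""
        if explicit_rate is not None:
            # The network may tell the sender an explicit (lower) rate to use.
            self.rate = explicit_rate
        elif congested:
            # Congestion-notification bit set: decrease multiplicatively.
            self.rate /= 2.0
        else:
            # No congestion indicated: probe for additional free bandwidth.
            self.rate += self.air
        # The rate must always remain between the MCR and the PCR.
        self.rate = max(self.mcr, min(self.pcr, self.rate))
```

Note how the final clamp captures the two guarantees discussed above: the MCR bound realizes the minimum-bandwidth guarantee, while the PCR bound enforces the negotiated peak rate no matter what the feedback says.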
