Administering Cisco QoS in IP Networks - Chapter 3


Chapter 3
Introduction to Quality of Service

Solutions in this chapter:

■ Defining Quality of Service
■ Understanding Congestion Management
■ Defining General Queuing Concepts
■ Understanding Congestion Avoidance
■ Introducing Policing and Traffic Shaping

Introduction

In this chapter, we will discuss the basic concepts behind Quality of Service (QoS) and the need for it, and we will introduce you to several of the types of QoS mechanisms available. Quality of Service itself is not something that you configure on a Cisco router; rather, it is an overall term that refers to a wide variety of mechanisms used to influence traffic patterns on a network.

Congestion management is a collection of QoS mechanisms that deal with network congestion as it occurs, performing various actions on the traffic that is congesting the network. There are several congestion management mechanisms, and each behaves differently. This chapter will introduce you to the overall concept of congestion management and some of the congestion management mechanisms that are available on Cisco routers.

Congestion avoidance is another classification within the larger umbrella of QoS mechanisms, one that focuses on preventing congestion rather than dealing with it as it happens. This does not mean that congestion avoidance is any better or worse than congestion management; it is simply different. This chapter will discuss the theory behind congestion avoidance and present some possible scenarios where it may be preferable to use congestion avoidance rather than congestion management.

Policing and traffic shaping are other groups of mechanisms that may help with network congestion and provide QoS to your network traffic. This chapter will introduce the concepts and theories surrounding policing and shaping and will discuss where these may be preferable to other QoS mechanisms.
Defining Quality of Service

Quality of Service (QoS) is the term used to define the ability of a network to provide different levels of service assurance to the various forms of traffic. It enables network administrators to assign certain traffic priority over other traffic, or actual levels of quality with respect to network bandwidth or end-to-end delay. A typical network may have one or many of the following data link layer technologies that can be QoS enabled:

■ Frame Relay
■ Ethernet
■ Token Ring
■ Point-to-Point Protocol (PPP)
■ HDLC
■ X.25
■ ATM
■ SONET

www.syngress.com

Each of these underlying technologies has different characteristics that need to be considered when implementing QoS. QoS can be implemented in congestion management or congestion avoidance situations. Congestion management techniques are used to manage and prioritize traffic in a network where applications request more bandwidth than the network is able to provide. By prioritizing certain classes of traffic, congestion management techniques enable business-critical or delay-sensitive applications to operate properly in a congested network environment. Conversely, congestion avoidance techniques make use of the underlying technologies' mechanisms to try to avoid congestive situations.

Implementing QoS in a network can be a complicated undertaking for even the most seasoned network administrator. There are many different components of QoS, which this book will address individually to provide you with a better understanding of each component. Enabling QoS on a network will, when finished, allow you as the network administrator a very high level of flexibility to control the flow and actions of the traffic on the network.

What Is Quality of Service?

Quality of Service is simply a set of tools available to network administrators to enforce certain assurances that a minimum level of service will be provided to certain traffic.
Many protocols and applications are not critically sensitive to network congestion. File Transfer Protocol (FTP), for example, has a rather large tolerance for network delay or bandwidth limitation. To the user, FTP simply takes longer to download a file to the target system. Although annoying to the user, this slowness does not normally impede the operation of the application. On the other hand, newer applications such as voice and video are particularly sensitive to network delay. If voice packets take too long to reach their destination, the resulting speech sounds choppy or distorted. QoS can be used to provide assured service to these applications.

Critical business applications can also make use of QoS. Companies whose main business focus relies on SNA-based network traffic can feel the pressures of network congestion. SNA is very sensitive to its handshake protocol and normally terminates a session when it does not receive an acknowledgement in time. Unlike TCP/IP, which recovers well from a bad handshake, SNA does not operate well in a congested environment. In these cases, prioritizing SNA traffic over all other protocols could be a proper approach to QoS.

Applications for Quality of Service

When would a network engineer consider designing quality of service into a network? Here are a few reasons to deploy QoS in a network topology:

■ To give priority to certain mission-critical applications in the network
■ To maximize the use of the current network investment in infrastructure
■ To provide better performance for delay-sensitive applications such as voice and video
■ To respond to changes in network traffic flows

The last bullet may seem like a trivial one. After all, traffic flow cannot dramatically change overnight, can it? Napster.
PointCast. The World Wide Web. These are all examples of "self-deployed" applications that cause network administrators nightmares. No one ever planned for Web browsing to take off the way it did, yet today most of the traffic flowing through the Internet carries the prefix "http". In order to adapt to these changes in bandwidth demand, QoS can be used to ensure that users listening to radio stations over the Internet do not smother the network traffic vital to the company.

Often we find that the simplest method for achieving better performance on a network is to throw more bandwidth at the problem. In this day and age of Gigabit Ethernet and optical networking, higher capacities are readily available. More bandwidth does not, however, always guarantee a certain level of performance. It may well be that the very protocols that caused the congestion in the first place will simply eat up the additional bandwidth, leading to the same congestion issues experienced before the bandwidth upgrade. A more judicious approach is to analyze the traffic flowing through the bottleneck, determine the importance of each protocol and application, and devise a strategy to prioritize access to the bandwidth.

QoS allows the network administrator to control bandwidth, latency, and jitter, and to minimize packet loss within the network, by prioritizing various protocols. Bandwidth is the measure of capacity of the network or of a specific link. Latency is the delay of a packet traversing the network, and jitter is the change in latency over a given period of time. Deploying certain types of quality of service techniques can control these three parameters. Currently, QoS is not widely deployed within many corporate networks.
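To make these three parameters concrete, latency and jitter can be estimated from a series of one-way delay measurements. The sketch below uses invented sample values, not figures from this chapter:

```python
# Illustrative sketch of latency and jitter for a hypothetical link.
# The delay samples below are invented for demonstration purposes.

latencies_ms = [40.0, 42.0, 41.0, 55.0, 43.0]  # one-way delays of 5 packets

# Latency: the average delay of a packet traversing the network.
avg_latency = sum(latencies_ms) / len(latencies_ms)

# Jitter: the change in latency over time, measured here as the mean
# absolute difference between consecutive delay samples.
diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
jitter = sum(diffs) / len(diffs)

print(round(avg_latency, 1))  # 44.2
print(round(jitter, 2))       # 7.25
```

Note how the single 55 ms outlier barely moves the average latency but dominates the jitter figure, which is why delay-sensitive voice traffic cares about both numbers.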
But with the push for applications such as multicast, streaming multimedia, and Voice over IP (VoIP), the need for certain quality levels is becoming more apparent, especially because these types of applications are susceptible to jitter and delay, and poor performance is immediately noticed by the end user. End users experiencing poor performance typically generate trouble tickets, and the network administrator is left troubleshooting the performance problem. A network administrator can proactively manage new, sensitive applications by applying QoS techniques to the network.

It is important to realize that QoS is not the magic solution to every congestion problem. It may very well be that upgrading the bandwidth of a congested link is the proper solution to the problem. However, by knowing the options available, you will be in a better position to make the proper decision to solve congestion issues.

Three Levels of QoS

QoS can be broken down into three different levels, also referred to as service models. These service models describe a set of end-to-end QoS capabilities. End-to-end QoS is the ability of the network to provide a specific level of service to network traffic from one end of the network to the other. The three service levels are best-effort service, integrated service, and differentiated service. We'll examine each service model in greater detail.

Best-Effort Service

Best-effort service, as its name implies, is when the network makes every possible attempt to deliver a packet to its destination. With best-effort service there are no guarantees that the packet will ever reach its intended destination. An application can send data in any amount, whenever it needs to, without requesting permission or notifying the network. Certain applications can thrive under this model.
FTP and HTTP, for example, can support best-effort service without much hardship. This is, however, not an optimal service model for applications that are sensitive to network delays, bandwidth fluctuations, and other changing network conditions. Network telephony applications, for example, may require a more consistent amount of bandwidth in order to function properly. Best-effort service for these applications could result in failed telephone calls or interrupted speech during a call.

Integrated Service

The integrated service model provides applications with a guaranteed level of service by negotiating network parameters end-to-end. Applications request the level of service necessary for them to operate properly and rely on the QoS mechanism to reserve the necessary network resources prior to the application beginning its transmission. It is important to note that the application will not send the traffic until it receives a signal from the network stating that the network can handle the load and provide the requested QoS end-to-end.

To accomplish this, the network uses a process called admission control. Admission control is the mechanism that prevents the network from being overloaded. The network will not signal the application to start transmitting the data if the requested QoS cannot be delivered. Once the application begins the transmission of data, the network resources reserved for the application are maintained end-to-end until the application is done or until the bandwidth reservation exceeds what is allowable for that application. The network performs its tasks of maintaining the per-flow state, classification, policing, and intelligent queuing per packet to meet the required QoS.
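The admission control decision described above amounts to simple bookkeeping against a link's reservable capacity. The following sketch is our own illustration (the class, the numbers, and the 75-percent reservable figure are assumptions, not Cisco code):

```python
# Illustrative sketch of admission control on a single link: a new flow is
# admitted only if its requested bandwidth still fits within the reservable
# capacity; otherwise the reservation is refused and the application must
# not start sending.

class AdmissionControl:
    def __init__(self, reservable_kbps):
        self.capacity = reservable_kbps   # total bandwidth available to reserve
        self.reserved = 0                 # bandwidth already promised to flows

    def request(self, kbps):
        """Return True (admit) or False (reject) for a new reservation."""
        if self.reserved + kbps <= self.capacity:
            self.reserved += kbps
            return True
        return False

# A 1544 kbps (T1) link where, say, 75% of the bandwidth may be reserved.
link = AdmissionControl(reservable_kbps=1158)
print(link.request(768))   # True  - a video flow fits
print(link.request(256))   # True  - a bundle of voice flows fits
print(link.request(384))   # False - would exceed the reservable capacity
```

The rejected flow never receives the go-ahead signal, which is exactly the behavior the integrated service model relies on to keep admitted flows within their guarantees.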
Cisco IOS has two features to provide integrated service in the form of controlled load services: Resource Reservation Protocol (RSVP) and intelligent queuing. RSVP is being standardized by the Internet Engineering Task Force (IETF) in one of its working groups. Intelligent queuing includes technologies such as Weighted Fair Queuing (WFQ) and Weighted Random Early Detection (WRED).

RSVP is a signaling protocol used to inform the network of the QoS requirements of an application. It is important to note that RSVP is not a routing protocol. RSVP works in conjunction with routing protocols to determine the best path through the network that will provide the required QoS. RSVP-enabled routers create dynamic access lists to provide the requested QoS and ensure that packets are delivered at the prescribed minimum quality parameters. RSVP will be covered in greater detail later in this book.

Differentiated Service

The last model for QoS is the differentiated service model. Differentiated service includes a set of classification tools and queuing mechanisms to provide certain protocols or applications with a certain priority over other network traffic. Differentiated service relies on the edge routers to perform the classification of the different types of packets traversing a network. Network traffic can be classified by network address, protocols and ports, ingress interfaces, or any other classification that can be accomplished through the use of a standard or extended access list.

Understanding Congestion Management

Congestion management is a general term that encompasses different types of queuing strategies used to manage situations where the bandwidth demands of network applications exceed the total bandwidth that can be provided by the network.
Congestion management does not control congestion before it occurs. It controls the injection of traffic into the network so that certain network flows have priority over others. In this section, the most basic of the congestion management queuing techniques will be discussed at a high level; a more detailed explanation will follow in later chapters of the book. We will examine the following congestion management techniques:

■ First In First Out (FIFO) Queuing
■ Priority Queuing
■ Custom Queuing
■ Weighted Fair Queuing (WFQ)

Many of these queuing strategies are applied in situations where the traffic exiting an interface on the router exceeds the bandwidth of the egress port and needs to be prioritized. Priority and Custom Queuing require some basic planning and forethought by the network administrator to implement and configure correctly on the router. The network administrator must have a good understanding of the traffic flows and how the traffic should be prioritized in order to engineer an efficient queuing strategy. Poorly planned prioritization can lead to situations worse than the congestive state itself. FIFO and WFQ, on the other hand, require very little configuration in order to work properly. In the Cisco IOS, WFQ is enabled by default on links of E1 speed (2.048 Mbps) or slower. Conversely, FIFO is enabled by default on links faster than E1 speeds. We will cover these default behaviors in greater detail later in this chapter.

Defining General Queuing Concepts

Before we begin discussing different forms of queuing and QoS strategies, it is important to understand the basics of the queuing process itself. In this section, we will discuss the concept of packet queues and the key concepts of the leaky bucket and tail drops. Queues exist within a router in order to hold packets until there are enough resources to forward the packets out the egress port.
If there is no congestion in the router, the packets are forwarded immediately. A network queue can be compared to a waiting line at a carnival attraction. If no one is waiting for the ride, people just walk through the line without waiting. This represents the state of a queue when the network is not experiencing congestion. When a busload of people arrives to try the new roller coaster, there may not be enough seats to handle everyone on the first ride. People then wait in line, in the order they arrived, until it is their turn to ride the coaster.

Queuing on Interfaces

Router interfaces can be configured with only one type of queuing. If a second queuing technique is applied to the interface, the router will either replace the old queuing process with the newly configured one, or report an error message informing the network administrator that a certain queuing process is in operation and needs to be removed before a new one can be applied. The following shows the error reported when custom queuing is applied over priority queuing:

    Christy#conf t
    Enter configuration commands, one per line.  End with CNTL/Z.
    Christy(config)#interface serial 0/0
    Christy(config-if)#priority-group 1
    Christy(config-if)#custom-queue-list 1
    Must remove priority-group configuration first.
    Christy(config-if)#end
    Christy#

Network queues are used to handle traffic bursts arriving faster than the egress interface can handle. For example, a router connecting a FastEthernet LAN interface to a T1 WAN circuit will often see chunks of traffic arriving on the LAN interface faster than it can send them out to the WAN. In this case, the queue places the traffic in a waiting line so that the T1 circuit can process the packets at its own pace. Speed mismatches and queues filling up do not necessarily indicate an unacceptable congestion situation.
It is a normal network operation, necessary to handle traffic going in and out of an interface.

Leaky Bucket

The leaky bucket is a key concept in understanding queuing theory. A network queue can be compared to a bucket into which network packets are poured. The bucket has a hole at the bottom that lets packets drip out at a constant rate. In a network environment, the drip rate would be the speed of the interface serviced by that queue, or bucket. If packets drop into the bucket faster than the hole can let them drip out, the bucket slowly fills up. If too many packets drop into the bucket, the bucket may eventually overflow. Those packets are lost, since they do not drip out of the bucket. Figure 3.1 depicts the leaky bucket analogy.

Figure 3.1 The Leaky Bucket Analogy: bursty packets drop into the bucket; ordered packets leak out of the bucket at a constant and steady rate.

This mechanism is well suited to handle network traffic that is bursty in nature. If packets drop into the bucket in bunches, the bucket simply fills up and slowly leaks out at its constant rate. This way, it doesn't really matter how fast the packets drop into the bucket, as long as the bucket itself can still contain them. This analogy is used when describing network queues. Packets enter a queue at any given rate but exit the queue at a constant rate, which cannot exceed the speed of the egress interface.

Tail Drop

What happens when the bucket fills up? It spills over, of course. When dealing with network queues, these buckets are allocated a certain amount of the router's memory. This means that these queues are not infinite; they can only hold a predetermined amount of information.
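The leaky bucket, combined with the finite queue depth just mentioned, can be sketched as a short simulation (the arrival pattern, depth, and drain rate are invented for illustration):

```python
# Illustrative sketch of a finite leaky bucket: bursty arrivals fill the
# bucket, which drains at a constant rate; arrivals that find the bucket
# full spill over and are lost. All numbers are invented for illustration.

def leaky_bucket(arrivals, depth, drain_rate):
    """arrivals[i] = packets arriving in interval i. Returns packets spilled."""
    level = 0
    spilled = 0
    for arriving in arrivals:
        accepted = min(arriving, depth - level)  # room left in the bucket
        spilled += arriving - accepted           # overflow is lost
        level += accepted
        level = max(0, level - drain_rate)       # constant drip out the bottom
    return spilled

# A burst of 10 packets, then silence: a bucket of depth 6 draining
# 2 packets per interval loses part of the burst.
print(leaky_bucket([10, 0, 0, 0, 0], depth=6, drain_rate=2))  # 4
```

The same burst spread evenly over several intervals would fit entirely, which is the point of the analogy: the bucket absorbs burstiness as long as its capacity is not exceeded.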
Network administrators can normally configure the queue sizes if necessary, but the Cisco Internetwork Operating System (IOS) provides fairly balanced default queue size values. Packets are placed in the queue in the order in which they were received. When the number of packets entering the queue exceeds the queue's capacity to hold them, the bucket spills over. In queuing terminology, the queue experiences a tail drop. These tail drops represent packets that never entered the queue; they are simply discarded by the router. Upper layer protocols use their acknowledgement and retransmission processes to detect these dropped packets and retransmit them. Tail drops are not a direct indication that there is something wrong with the network. For example, it is normal for a 100 Mbps FastEthernet interface to send too much information too fast to a 1.544 Mbps T1 interface. These dropped packets are often used by upper layer protocols to throttle down the rate at which they send information to the router. Some QoS mechanisms, such as Random Early Detection (RED) and Weighted Random Early Detection (WRED), make use of these principles to control the level of congestion on the network.

Tail drops can obviously impact user response. Dropped packets mean requests for retransmissions. With more and more applications riding on the TCP/IP protocol, tail drops can also introduce another phenomenon known as global synchronization. Global synchronization comes from the interaction of an upper layer mechanism of TCP/IP called the sliding window. Simply put, the transmission window of a single TCP/IP communication represents the number of packets that the sender can transmit in each transmission block. If the block is successfully sent without errors, the window size "slides" upwards, allowing the sender to transmit more packets per interval. If an error occurs in the transmission, the window size slides down to a lower value and starts creeping up again. When many TCP/IP conversations occur simultaneously, each conversation increases its window size as packets are successfully transmitted. [...]
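A toy model of the sliding window behavior just described shows how synchronized tail drops pull every conversation down at once. The grow-by-one/halve-on-drop rule is a deliberate simplification of real TCP, and all numbers are invented:

```python
# Illustrative sketch of global synchronization: several TCP-like senders
# grow their windows each interval; when their combined load overflows the
# link, tail drops hit every conversation at once and all windows shrink
# together, producing a synchronized sawtooth in offered load.

LINK_CAPACITY = 30   # packets per interval the egress link can carry
windows = [4, 4, 4]  # current window size of three conversations

history = []
for _ in range(10):
    total = sum(windows)
    history.append(total)
    if total > LINK_CAPACITY:
        # Queue overflows: tail drops touch all flows, every window halves.
        windows = [max(1, w // 2) for w in windows]
    else:
        # Success: each window "slides" upward by one packet.
        windows = [w + 1 for w in windows]

print(history)  # [12, 15, 18, 21, 24, 27, 30, 33, 15, 18]
```

The offered load climbs past the link capacity, collapses for every flow in the same interval, and starts climbing again, which is the wave pattern RED and WRED are designed to break up by dropping packets from different flows at different times.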
Token Bucket

The token bucket is another mechanism used in QoS. It represents a pool of resources that can be used by a service whenever it needs it. Unlike the leaky bucket, the token bucket does not let anything drip from the bottom; what goes in the bucket must come out from the top. As time passes, tokens are added to [...]

[...] Policing and shaping techniques overcome this limitation. Both use the token bucket principle explained earlier in this chapter to regulate the amount of information that can be sent over the link. The principal difference between the two techniques is as follows: ■ Policing Techniques: Policing [...]

[...] errors, 0 collisions, 13 interface resets
0 output buffer failures, 0 output buffers swapped out
4 carrier transitions
DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
Christy# [...]

Fair Queuing

Fair queuing is another form of congestion management. Fair queuing, generally referred to as Weighted Fair Queuing (WFQ), is [...] priority. They are as follows:

■ Network control precedence (7)
■ Internet control precedence (6)
■ Critical precedence (5)
■ Flash-override precedence (4)
■ Flash precedence (3)
■ Immediate precedence (2)
■ Priority precedence (1)
■ Routine precedence (0)

Weighted Fair Queuing

When designing a network with [...]
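The token bucket fragment above can be illustrated with a short simulation: tokens accumulate at a fixed rate up to the bucket depth, and a packet may be sent only by spending tokens. The rates and sizes here are invented for illustration:

```python
# Illustrative token bucket: tokens are added over time up to a maximum
# depth, and sending a packet consumes tokens. Unlike the leaky bucket,
# this bucket stores *credit* (tokens), not the packets themselves, so a
# full bucket lets a burst go out immediately. Numbers are invented.

class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate      # tokens added per tick
        self.depth = depth    # maximum tokens the bucket can hold
        self.tokens = depth   # start full: an initial burst is allowed

    def tick(self):
        """Time passes: tokens accumulate, capped at the bucket depth."""
        self.tokens = min(self.depth, self.tokens + self.rate)

    def send(self, size):
        """Try to send a packet costing `size` tokens."""
        if self.tokens >= size:
            self.tokens -= size
            return True       # conforms: the packet goes out the top
        return False          # not enough credit accumulated yet

tb = TokenBucket(rate=2, depth=8)
print(tb.send(8))  # True  - a full bucket permits an 8-token burst
print(tb.send(2))  # False - the credit is spent
tb.tick()          # 2 tokens accumulate
print(tb.send(2))  # True
```

Over a long interval, the sender averages no more than `rate` tokens per tick, but it may burst up to `depth` at once, which is the property policers and shapers exploit.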
[...] proportional to the precedence of the packet. Therefore, WFQ conversations with lower weights will be provided with better service than flows with higher weights.

Priority Queuing

Priority Queuing (PQ) is a powerful and strict form of congestion management. PQ allows the network administrator to define [...] strategy. Typically, PQ is used when delay-sensitive applications encounter problems on the network. A good example is IBM mainframe traffic, Systems Network Architecture (SNA). PQ can be an excellent tool to provide protocols such as Serial Tunneling (STUN), Data Link Switching (DLSw), or Remote Source Route [...]

[...] performed by the router's CPU. Some high-end router platforms, such as Cisco's 7100, 7200, and 7500 series routers, offer interface modules that incorporate the smarts required to offload some of the tasks from the main CPU. These Virtual Interface Processor (VIP) cards can be configured to perform many [...]

[...] some of them:

Advantages
■ It prevents congestion from happening in some environments.
■ It maximizes the utilization of a link.
■ It can provide a level of priority through packet precedence.

Disadvantages
■ It only works with TCP-based conversations. Other protocols, such as IPX, do not use the concept of a [...]
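The weight-to-precedence relationship in the WFQ fragment above can be sketched numerically. One commonly cited form of the IOS WFQ weight is 32384 divided by (IP precedence + 1); treat that constant as an assumption used here for illustration:

```python
# Illustrative sketch: WFQ assigns each conversation a weight inversely
# related to its IP precedence, so higher-precedence traffic gets a lower
# weight and therefore better service. The 32384 constant follows one
# commonly cited IOS formula and is used here as an assumption.

def wfq_weight(ip_precedence):
    return 32384 // (ip_precedence + 1)

for name, prec in [("routine", 0), ("flash", 3), ("internet control", 6)]:
    print(name, prec, wfq_weight(prec))

# routine (precedence 0)          -> weight 32384
# flash (precedence 3)            -> weight 8096
# internet control (precedence 6) -> weight 4626
# The lower the weight, the larger the share of the link the flow receives.
```

Whatever the exact constant a given IOS release uses, the shape of the relationship is the point: weight falls as precedence rises, so precedence-6 traffic is scheduled ahead of routine traffic.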
[...] and implementation of traffic shaping will be covered in greater detail in Chapter 8 of this book.

Generic Traffic Shaping

Generic traffic shaping uses the token bucket process to limit the amount of traffic that can leave the egress interface. It can be applied on a per-interface basis and make use [...]
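The distinction between policing and shaping drawn in the truncated fragments above (a policer discards or remarks excess traffic, while a shaper buffers and delays it) can be sketched with a shared token bucket. This is a simplification of the real IOS mechanisms, with invented numbers:

```python
# Illustrative contrast between policing and shaping, both driven by the
# same token bucket idea: while tokens are available a packet is sent; when
# they run out, a policer DROPS the excess while a shaper QUEUES it and
# releases it in later intervals. A simplification of the IOS mechanisms.

def run(arrivals, rate, shaping):
    tokens, queue, sent, dropped = 0, [], [], 0
    for pkts in arrivals:
        tokens += rate                     # tokens accumulate each interval
        backlog = queue + [1] * pkts       # shaper backlog plus new packets
        out = min(tokens, len(backlog))    # what the bucket lets through now
        tokens -= out
        excess = backlog[out:]
        if shaping:
            queue = excess                 # shaper: delay excess until later
        else:
            queue, dropped = [], dropped + len(excess)  # policer: discard
        sent.append(out)
    return sent, dropped

burst = [6, 0, 0]                          # a 6-packet burst, then silence
print(run(burst, rate=2, shaping=False))   # ([2, 0, 0], 4) policer drops excess
print(run(burst, rate=2, shaping=True))    # ([2, 2, 2], 0) shaper smooths burst
```

Both approaches hold the link to the same average rate; the trade-off is that the policer sacrifices packets while the shaper sacrifices latency, which is why shaping suits TCP bulk traffic and policing suits traffic that cannot tolerate queuing delay.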

Posted: 09/08/2014, 14:21

Table of Contents

• Cover
• Table of Contents
• Foreword
• Chapter 1
• Chapter 2
• Chapter 3
• Chapter 4
• Chapter 5
• Chapter 6
• Chapter 7
• Chapter 8
• Chapter 9
• Chapter 10
• Chapter 11
• Chapter 12
• Index
• Related Titles
