CMSC 451 (NTU 520B): Design and Analysis of Computer Algorithms(1)
Fall 1999
Dave Mount

Lecture 1: Course Introduction
(Thursday, Sep 2, 1999)
Reading: Chapter 1 in CLR (Cormen, Leiserson, and Rivest).

Professor Carl Smith reviewed material from Chapter 1 in CLR.

Lecture 2: Asymptotics and Summations
(Tuesday, Sep 7, 1999)
Read: Review Chapters 1, 2, and 3 in CLR (Cormen, Leiserson, and Rivest).

What is an algorithm? Our text defines an algorithm to be any well-defined computational procedure that takes some values as input and produces some values as output. Like a cooking recipe, an algorithm provides a step-by-step method for solving a computational problem. Unlike programs, algorithms are not dependent on a particular programming language, machine, system, or compiler. They are mathematical entities, which can be thought of as running on some sort of idealized computer with an infinite random access memory and an unlimited word size. Algorithm design is all about the mathematical theory behind the design of good programs.

Why study algorithm design? Programming is a very complex task, and there are a number of aspects of programming that make it so complex. The first is that most programming projects are very large, requiring the coordinated efforts of many people. (This is the topic of a course like CMSC 435 in software engineering.) The next is that many programming projects involve storing and accessing large quantities of data efficiently. (This is the topic of courses on data structures and databases, like CMSC 420 and 424.) The last is that many programming projects involve solving complex computational problems, for which simplistic or naive solutions may not be efficient enough. The complex problems may involve numerical data (the subject of courses on numerical analysis, like CMSC 466), but often they involve discrete data. This is where the topic of algorithm design and analysis is important.

Although the algorithms discussed in this course will often represent only a tiny fraction of the code that is generated in a large software system, this small fraction may be very important for the success of the overall project. An unfortunately common approach to this problem is to first design an inefficient algorithm and data structure to solve the problem, and then take this poor design and attempt to fine-tune its performance. The problem is that if the underlying design is bad, then often no amount of fine-tuning is going to make a substantial difference.

The focus of this course is on how to design good algorithms, and how to analyze their efficiency. We will study a number of different techniques for designing algorithms (divide-and-conquer, dynamic programming, depth-first search), and apply them to a number of different problems. An understanding of good design techniques is critical to being able to write good programs. In addition, it is important to be able to quickly analyze the running times of these designs (without expensive prototyping and testing).

(1) Copyright, David M. Mount, 1999, Dept. of Computer Science, University of Maryland, College Park, MD, 20742. These lecture notes were prepared by David Mount for the course CMSC 451 (NTU 520B), Design and Analysis of Computer Algorithms, at the University of Maryland, College Park. Permission to use, copy, modify, and distribute these notes for educational purposes and without fee is hereby granted, provided that this copyright notice appear in all copies.
We will begin with a review of the analysis techniques, which were covered in the prerequisite course, CMSC 251. See Chapters 1-3 of CLR for more information.

Asymptotics: The formulas that are derived for the running times of programs may often be quite complex. When designing algorithms, the main purpose of the analysis is to get a sense for the trend in the algorithm's running time. (An exact analysis is probably best done by implementing the algorithm and measuring CPU seconds.) We would like a simple way of representing complex functions which captures the essential growth rate properties. This is the purpose of asymptotics.

Asymptotic analysis is based on two simplifying assumptions, which hold in most (but not all) cases. But it is important to understand these assumptions and the limitations of asymptotic analysis.

Large input sizes: We are most interested in how the running time grows for large values of n.

Ignore constant factors: The actual running time of the program depends on various constant factors in the implementation (coding tricks, optimizations in compilation, speed of the underlying hardware, etc.). Therefore, we will ignore constant factors.

The justification for considering large n is that if n is small, then almost any algorithm is fast enough; people are most concerned about running times for large inputs. For the most part, these assumptions are reasonable when making comparisons between functions that have significantly different behaviors. For example, suppose we have two programs, one whose running time is T1(n) = n³ and another whose running time is T2(n) = 100n. (The latter algorithm may be asymptotically faster because it uses a more sophisticated and complex algorithm, and the added sophistication results in a larger constant factor.) For small n (e.g., n ≤ 10) the first algorithm is the faster of the two. But as n becomes larger, the relative differences in running time become much greater. Assuming one million operations per second:

    n         T1(n)        T2(n)       T1(n)/T2(n)
    10        0.001 sec    0.001 sec   1
    100       1 sec        0.01 sec    100
    1,000     17 min       0.1 sec     10,000
    10,000    11.6 days    1 sec       1,000,000

The clear lesson is that as input sizes grow, the performance of the asymptotically poorer algorithm degrades much more rapidly.

These assumptions are not always reasonable. For example, in any particular application, n is a fixed value. It may be the case that one function is smaller than another asymptotically, but for your value of n, the asymptotically larger value is fine. Most of the algorithms that we will study this semester will have both low constants and low asymptotic running times, so we will not need to worry about these issues.

To represent the running times of algorithms in a simpler form, we use asymptotic notation, which essentially represents a function by its fastest growing term and ignores constant factors. See Chapter 2 in CLR for the formal "c and n0" definitions. However, for our purposes, the following definitions based on limits are much easier to apply, and hold for virtually all functions that arise as running times. (For strange functions where these limits do not exist, you should use the formal definitions instead.)

Let f(n) and g(n) be two functions, which we will assume to be positive. Suppose we want to assert that f(n) and g(n) grow at roughly the same rates for large n (ignoring constant factors). This would be equivalent to saying

    lim_{n→∞} f(n)/g(n) = c,

where c is some nonzero constant (not 0 and not ∞). In asymptotic notation we write f(n) ∈ Θ(g(n)).
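One way to get a feel for this limit-based definition is simply to tabulate the ratio f(n)/g(n) for increasing n. The short Python sketch below is an illustration added here (it is not part of the original notes, and the sample functions are arbitrary); it shows one ratio that settles on a nonzero constant and one that grows without bound.

    def ratio_table(f, g, ns):
        # Print f(n)/g(n); a ratio approaching a nonzero constant suggests Theta.
        for n in ns:
            print(n, f(n) / g(n))

    # (5n^2 + 3n) versus n^2: the ratio approaches 5, so Theta(n^2).
    ratio_table(lambda n: 5 * n**2 + 3 * n, lambda n: n**2, [10, 1000, 10**6])

    # n^3 versus 100n: the ratio grows without bound, so n^3 is not O(100n).
    ratio_table(lambda n: n**3, lambda n: 100 * n, [10, 1000, 10**6])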
Intuitively, f(n) ∈ Θ(g(n)) means that f(n) and g(n) are asymptotically equivalent. Suppose instead we want to assert that f(n) does not grow significantly faster than g(n). Then the ratio f(n)/g(n) should either approach a constant (they are equivalent) or 0 (if g(n) grows faster than f(n)). In this case we say f(n) ∈ O(g(n)). (Our text uses = rather than ∈, but O(g(n)) is best thought of as a set of functions.) Here are the complete definitions:

    Asymptotic Form    Relationship    Definition
    f(n) ∈ Θ(g(n))     f(n) ≈ g(n)     0 < lim_{n→∞} f(n)/g(n) < ∞
    f(n) ∈ O(g(n))     f(n) ⪯ g(n)     0 ≤ lim_{n→∞} f(n)/g(n) < ∞
    f(n) ∈ Ω(g(n))     f(n) ⪰ g(n)     0 < lim_{n→∞} f(n)/g(n)
    f(n) ∈ o(g(n))     f(n) ≺ g(n)     lim_{n→∞} f(n)/g(n) = 0
    f(n) ∈ ω(g(n))     f(n) ≻ g(n)     lim_{n→∞} f(n)/g(n) = ∞

For example, T(n) = (n³ + 3n² + 2n)/6 ∈ Θ(n³) because

    lim_{n→∞} T(n)/n³ = lim_{n→∞} (n³ + 3n² + 2n)/(6n³) = lim_{n→∞} (1/6 + 1/(2n) + 1/(3n²)) = 1/6,

and 0 < 1/6 < ∞. (Note that it also follows that T(n) ∈ O(n³) and T(n) ∈ Ω(n³).) Indeed, this is consistent with the informal notion of asymptotic notation, of ignoring the constant factor and considering large values of n (since the largest power of n will dominate for large n). When dealing with limits, the following rule is nice to keep in mind.

L'Hôpital's rule: If f(n) and g(n) both approach 0 or both approach ∞ in the limit, then

    lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n),

where f′(n) and g′(n) denote the derivatives of f and g with respect to n.

Some of the trickier asymptotic comparisons to make are those that involve exponentials and logarithms. Here are some simple rules of thumb to keep in mind.

Constants: Multiplicative and additive constants may be ignored. When constants appear in exponents or as the base of an exponential, they are significant. Thus 2n ≈ 3n, but n² ≉ n³ and 2^n ≉ 3^n.

Logarithm base: Logarithms in different bases differ only by a constant factor, thus log₂ n ≈ log₃ n. Thus, we will often not specify logarithm bases inside asymptotic notation, as in O(n log n), since they do not matter.

Logs and powers: Remember that you can pull an exponent out of a logarithm as a multiplicative factor. Thus n log₂(n²) = 2n log₂ n ≈ n log₂ n.

Exponents and logs: Remember that exponentials and logs cancel one another. Thus 2^(lg n) = n.

Logs and polynomials: For any a, b > 0, (log n)^a ≺ n^b. (That is, logs grow more slowly than any polynomial.)

Polynomials and exponentials: For any a ≥ 0 and b > 1, n^a ≺ b^n. (That is, polynomials grow more slowly than any exponential.)

Summations: There are some particularly important summations, which you should probably commit to memory (or at least remember their asymptotic growth rates). These are analogous to the basic formulas of integral calculus, and have a way of cropping up over and over again.

Constant Series: For integers a and b,

    Σ_{i=a}^{b} 1 = b − a + 1   if b ≥ a − 1, and 0 otherwise.

Notice that when b = a − 1, there are no terms in the summation (since the index is assumed to count upwards only), and the result is 0. Be careful to check that b ≥ a − 1 before applying this rule.

Arithmetic Series: For n ≥ 0,

    Σ_{i=0}^{n} i = 1 + 2 + ⋯ + n = n(n + 1)/2.

This is Θ(n²).

Geometric Series: Let x ≠ 1 be any constant (independent of n); then for n ≥ 0,

    Σ_{i=0}^{n} x^i = 1 + x + x² + ⋯ + x^n = (x^(n+1) − 1)/(x − 1).

If 0 < x < 1 then this is Θ(1). If x > 1, then this is Θ(x^n), that is, the entire sum is proportional to the last element of the series.
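These closed forms are easy to sanity-check numerically. The following Python sketch is an illustration added here (not part of the original notes); it compares the arithmetic and geometric series formulas against brute-force sums for small n.

    def arithmetic_sum(n):
        # Closed form n(n + 1)/2 for 0 + 1 + 2 + ... + n.
        return n * (n + 1) // 2

    def geometric_sum(x, n):
        # Closed form (x^(n+1) - 1)/(x - 1) for 1 + x + ... + x^n, valid for x != 1.
        return (x ** (n + 1) - 1) / (x - 1)

    for n in range(8):
        assert arithmetic_sum(n) == sum(range(n + 1))
        assert abs(geometric_sum(2.0, n) - sum(2.0 ** i for i in range(n + 1))) < 1e-9
    print("series formulas check out for small n")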
Here are some more obscure ones, which come up from time to time. Not all of them are listed in CLR. A good source is the appendix in the book "The Analysis of Algorithms" by P. W. Purdom and C. A. Brown.

Quadratic Series: For n ≥ 0,

    Σ_{i=0}^{n} i² = 1² + 2² + ⋯ + n² = (2n³ + 3n² + n)/6.

Linear-geometric Series: This arises in some algorithms based on trees and recursion. Let x ≠ 1 be any constant; then for n ≥ 0,

    Σ_{i=0}^{n−1} i·x^i = x + 2x² + 3x³ + ⋯ + (n − 1)x^(n−1) = ((n − 1)x^(n+1) − n·x^n + x)/(x − 1)².

(What happens in the case where x = 1?) As n becomes large, this is dominated by the term (n − 1)x^(n+1)/(x − 1)². The multiplicative term n − 1 is very nearly equal to n for large n, and, since x is a constant, we may multiply this by the constant (x − 1)²/x without changing the asymptotics. What remains is Θ(n·x^n).

Harmonic Series: This arises often in probabilistic analyses of algorithms. It does not have an exact closed-form solution, but it can be closely approximated. For n ≥ 0,

    H_n = Σ_{i=1}^{n} 1/i = 1 + 1/2 + 1/3 + ⋯ + 1/n ≈ ln n.

There are also a few tips to learn about solving summations.

Summations with general bounds: When a summation does not start at 1 or 0, as most of the above formulas assume, you can just split it up into the difference of two summations. For example, for 1 ≤ a ≤ b,

    Σ_{i=a}^{b} f(i) = Σ_{i=0}^{b} f(i) − Σ_{i=0}^{a−1} f(i).

Approximate using integrals: Integration and summation are closely related. (Integration is in some sense a continuous form of summation.) Here is a handy formula. Let f(x) be any monotonically increasing function (the function increases as x increases). Then

    ∫_0^n f(x) dx ≤ Σ_{i=1}^{n} f(i) ≤ ∫_1^(n+1) f(x) dx.

Let us consider a simple example. Let A[1..4n] be some array.

Sample Program Fragment:

    for i = n to 2n do {
        for j = 1 to i do {
            if (A[j] <= A[2j]) output "hello"
        }
    }

In the worst case, how many times is the "hello" line printed as a function of n? In the worst case, the elements of A are in ascending order, implying that every time through the loop the string is output. Let T(n) denote the number of times that the string is output. We can set up one nested summation for each nested loop, and then use the above rules to solve them:

    T(n) = Σ_{i=n}^{2n} Σ_{j=1}^{i} 1.

The "1" is due to the fact that the string is output once for each time through the inner loop. Solving these from the inside out, we see that the innermost summation is a constant series, and hence

    T(n) = Σ_{i=n}^{2n} (i − 1 + 1) = Σ_{i=n}^{2n} i.

This is just an arithmetic series with general bounds, which we can break into the difference of two arithmetic series starting from 0:

    T(n) = Σ_{i=0}^{2n} i − Σ_{i=0}^{n−1} i
         = 2n(2n + 1)/2 − n(n − 1)/2
         = ((4n² + 2n) − (n² − n))/2
         = (3/2)(n² + n) ∈ Θ(n²).

At this point, it is a good idea to go back and test your solution for a few small values of n (e.g., n = 0, 1, 2) just to double-check that you have not made any math errors.
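Following that advice, here is a brief Python sketch (added for illustration; it is not from the original notes) that runs the loop directly in the worst case and compares the count against the closed form (3/2)(n² + n).

    def hello_count(n):
        # Worst case: the array is ascending, so the test succeeds every time
        # and we simply count iterations of the inner loop body.
        count = 0
        for i in range(n, 2 * n + 1):      # i = n, ..., 2n
            for j in range(1, i + 1):      # j = 1, ..., i
                count += 1
        return count

    for n in range(6):
        assert hello_count(n) == 3 * (n * n + n) // 2
    print("loop count matches (3/2)(n^2 + n) for n = 0..5")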
Lecture 3: Divide-and-Conquer and Recurrences
(Thursday, Sep 9, 1999)
Read: Chapter 4 from CLR. Constructive induction is not discussed in CLR.

Divide-and-Conquer Algorithms: An important approach to algorithm design is based on divide-and-conquer. Such an algorithm consists of three basic steps: divide, conquer, and combine. The most common example (described in Chapter 1 of CLR) is that of Mergesort. To sort a list of numbers you first split the list into two sublists of roughly equal size, sort each sublist, and then merge the sorted lists into a single sorted list.

[Figure 1: Mergesort example — the input list is split into sublists, each sublist is sorted, and the sorted sublists are merged.]

Divide: Split the original problem of size n into a subproblems (typically a = 2, but maybe more) of roughly equal size, say n/b.

Conquer: Solve each subproblem recursively.

Combine: Combine the solutions to the subproblems into a solution to the original problem. The time to combine the solutions is called the overhead. We will assume that the running time of the overhead is some polynomial of n, say c·n^k, for constants c and k.

This recursive subdivision is repeated until the size of the subproblems is small enough that the problem can be solved by brute force.

For example, in Mergesort we subdivide each problem into a = 2 parts, each part of size n/2 (implying that b = 2). Two sorted lists of size n/2 can be merged into a single sorted list of size n in Θ(n) time (thus c = k = 1). This is but one example. There are many more.

Analysis: How long does a divide-and-conquer algorithm take to run? Let T(n) be the function that describes the running time of the algorithm on a subarray of length n ≥ 1. For simplicity, let us assume that n is a power of 2. (The general analysis comes about by considering floors and ceilings, and is quite a bit messier. See CLR for some explanation.)

As a basis, when n is 1 the algorithm runs in constant time, namely Θ(1). Since we are ignoring constant factors, we can just say that T(1) = 1. Otherwise, if n > 1, then it splits the list into two sublists, each of size n/2, and makes two recursive calls on these arrays, each taking T(n/2) time. (Do you see why?) As mentioned above, merging can be done in Θ(n) time, which we will just express as n (since we are ignoring constants). So the overall running time is described by the following recurrence, which is defined for all n ≥ 1, where n is a power of 2:

    T(n) = 1                  if n = 1,
    T(n) = 2T(n/2) + n        otherwise.

This is a very well known recurrence. Let's consider some general methods for solving recurrences. See CLR for more methods.

Solving Recurrences by the Master Theorem: There are a number of methods for solving the sort of recurrences that show up in divide-and-conquer algorithms. The easiest method is to apply the Master Theorem that is given in CLR. Here is a slightly more restrictive version, but adequate for a lot of instances. See CLR for the more complete version of the Master Theorem.

Theorem (Simplified Master Theorem): Let a ≥ 1 and b > 1 be constants, and let T(n) be the recurrence

    T(n) = aT(n/b) + c·n^k,

defined for n ≥ 0.

    Case (1): if a > b^k then T(n) is Θ(n^(log_b a)).
    Case (2): if a = b^k then T(n) is Θ(n^k log n).
    Case (3): if a < b^k then T(n) is Θ(n^k).

Using this version of the Master Theorem we can see that in our recurrence a = 2, b = 2, and k = 1, so a = b^k and case (2) applies. Thus T(n) is Θ(n log n).

There are many recurrences that cannot be put into this form. For example, the following recurrence is quite common: T(n) = 2T(n/2) + n log n. This solves to T(n) = Θ(n log² n), but the Master Theorem (either this form or the one in CLR) will not tell you this. For such recurrences, other methods are needed.
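The simplified Master Theorem above is mechanical enough to encode directly. As an illustration (this helper is added here and is not part of the original notes), a small Python function can classify a recurrence T(n) = aT(n/b) + c·n^k by comparing a with b^k:

    from math import log

    def simplified_master(a, b, k):
        # Classify T(n) = a*T(n/b) + c*n^k (a >= 1, b > 1); the constant c does not matter.
        if a > b ** k:
            return "Theta(n^{:.3f})".format(log(a, b))   # exponent is log_b(a)
        if a == b ** k:
            return "Theta(n^{} log n)".format(k)
        return "Theta(n^{})".format(k)

    print(simplified_master(2, 2, 1))   # Mergesort recurrence: Theta(n^1 log n)
    print(simplified_master(4, 2, 1))   # case (1): Theta(n^2.000)
    print(simplified_master(1, 2, 2))   # case (3): Theta(n^2)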
Expansion: A more basic method for solving recurrences is that of expansion (which CLR calls iteration). This is a rather painstaking process of repeatedly applying the definition of the recurrence until (hopefully) a simple pattern emerges. This pattern usually results in a summation that is easy to solve. If you look at the proof in CLR for the Master Theorem, it is actually based on expansion.

Let us consider applying this to the following recurrence. We assume that n is a power of 3.

    T(1) = 1,
    T(n) = 2T(n/3) + n    if n > 1.

First we expand the recurrence into a summation, until the general pattern emerges:

    T(n) = 2T(n/3) + n
         = 2(2T(n/9) + n/3) + n = 4T(n/9) + n + 2n/3
         = 4(2T(n/27) + n/9) + n + 2n/3 = 8T(n/27) + n + 2n/3 + 4n/9
         ⋮
         = 2^k T(n/3^k) + Σ_{i=0}^{k−1} 2^i n/3^i = 2^k T(n/3^k) + n Σ_{i=0}^{k−1} (2/3)^i.

The parameter k is the number of expansions (not to be confused with the value of k we introduced earlier for the overhead). We want to know how many expansions are needed to arrive at the basis case. To do this we set n/3^k = 1, meaning that k = log₃ n. Substituting this in and using the identity a^(log b) = b^(log a), we have:

    T(n) = 2^(log₃ n) T(1) + n Σ_{i=0}^{log₃ n − 1} (2/3)^i = n^(log₃ 2) + n Σ_{i=0}^{log₃ n − 1} (2/3)^i.

Next, we can apply the formula for the geometric series and simplify to get:

    T(n) = n^(log₃ 2) + n · (1 − (2/3)^(log₃ n))/(1 − 2/3)
         = n^(log₃ 2) + 3n(1 − (2/3)^(log₃ n))
         = n^(log₃ 2) + 3n(1 − n^(log₃(2/3)))
         = n^(log₃ 2) + 3n(1 − n^((log₃ 2) − 1))
         = n^(log₃ 2) + 3n − 3n^(log₃ 2)
         = 3n − 2n^(log₃ 2).

Since log₃ 2 ≈ 0.631 < 1, T(n) is dominated by the 3n term asymptotically, and so it is Θ(n).
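A quick sanity check of this answer (an added illustration, not from the original notes) is to compute the recurrence directly for powers of 3 and compare it with the closed form 3n − 2n^(log₃ 2):

    from math import log

    def T(n):
        # The recurrence T(1) = 1, T(n) = 2T(n/3) + n, for n a power of 3.
        return 1 if n == 1 else 2 * T(n // 3) + n

    for k in range(8):
        n = 3 ** k
        closed_form = 3 * n - 2 * n ** log(2, 3)
        assert abs(T(n) - closed_form) < 1e-6 * T(n)
    print("closed form matches the recurrence for n = 1, 3, 9, ..., 3^7")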
Induction and Constructive Induction: Another technique for solving recurrences (and this works for summations as well) is to guess the solution, or the general form of the solution, and then attempt to verify its correctness through induction. Sometimes there are parameters whose values you do not know. This is fine. In the course of the induction proof, you will usually find out what these values must be. We will consider a famous example, that of the Fibonacci numbers:

    F₀ = 0,
    F₁ = 1,
    Fₙ = Fₙ₋₁ + Fₙ₋₂    for n ≥ 2.

The Fibonacci numbers arise in data structure design. If you study AVL, or height balanced, trees in CMSC 420, you will learn that the minimum-sized AVL trees are produced by the recursive construction given below. Let L(i) denote the number of leaves in the minimum-sized AVL tree of height i. To construct a minimum-sized AVL tree of height i, you create a root node whose children consist of minimum-sized AVL trees of heights i − 1 and i − 2. Thus the number of leaves obeys L(0) = L(1) = 1 and L(i) = L(i − 1) + L(i − 2). It is easy to see that L(i) = F_{i+1}.

[Figure 2: Minimum-sized AVL trees of heights 0 through 4, with L(0) = 1, L(1) = 1, L(2) = 2, L(3) = 3, L(4) = 5.]

If you expand the Fibonacci series for a number of terms, you will observe that Fₙ appears to grow exponentially, but not as fast as 2^n. It is tempting to conjecture that Fₙ ≤ φ^(n−1) for some real parameter φ, where 1 < φ < 2. We can use induction to prove this and derive a bound on φ.

Lemma: For all integers n ≥ 1, Fₙ ≤ φ^(n−1) for some constant φ, 1 < φ < 2.

Proof: We will try to derive the tightest bound we can on the value of φ.

Basis: For the basis case we consider n = 1. Observe that F₁ = 1 ≤ φ^0, as desired.

Induction step: For the induction step, let us assume that Fₘ ≤ φ^(m−1) whenever 1 ≤ m < n. Using this induction hypothesis we will show that the lemma holds for n itself, whenever n ≥ 2. Since n ≥ 2, we have Fₙ = Fₙ₋₁ + Fₙ₋₂. Now, since n − 1 and n − 2 are both strictly less than n, we can apply the induction hypothesis, from which we have

    Fₙ ≤ φ^(n−2) + φ^(n−3) = φ^(n−3)(1 + φ).

We want to show that this is at most φ^(n−1) (for a suitable choice of φ). Clearly this will be true if and only if (1 + φ) ≤ φ². This is not true for all values of φ (for example it is not true when φ = 1, but it is true when φ = 2). At the critical value of φ this inequality will be an equality, implying that we want to find the roots of the equation

    φ² − φ − 1 = 0.

By the quadratic formula we have

    φ = (1 ± √(1 + 4))/2 = (1 ± √5)/2.

Since √5 ≈ 2.24, observe that one of the roots is negative, and hence would not be a possible candidate for φ. The positive root is

    φ = (1 + √5)/2 ≈ 1.618.

There is a very subtle bug in the preceding proof. Can you spot it? The error occurs in the case n = 2. Here we claim that F₂ = F₁ + F₀ and then we apply the induction hypothesis to both F₁ and F₀. But the induction hypothesis only applies for m ≥ 1, and hence cannot be applied to F₀! To fix it we could include F₂ as part of the basis case as well.

Notice that not only did we prove the lemma by induction, but we actually determined the value of φ which makes the lemma true. This is why this method is called constructive induction.

By the way, the value φ = (1 + √5)/2 is a famous constant in mathematics, architecture, and art. It is the golden ratio. Two numbers A and B satisfy the golden ratio if

    A/B = (A + B)/A.

It is easy to verify that A = φ and B = 1 satisfies this condition. This proportion occurs throughout the world of art and architecture.
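To see the constructive-induction bound in action, here is a small Python check (added as an illustration, not part of the original notes) that Fₙ ≤ φ^(n−1) holds for the first several Fibonacci numbers, including the n = 2 case that the basis must cover:

    phi = (1 + 5 ** 0.5) / 2       # the golden ratio, about 1.618

    def fib(n):
        # Iterative Fibonacci: returns F_n with F_0 = 0, F_1 = 1.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    for n in range(1, 30):
        assert fib(n) <= phi ** (n - 1) + 1e-9   # tiny slack for floating point
    print("F_n <= phi^(n-1) holds for n = 1..29")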
Lecture 4: Sorting: Review
(Tuesday, Sep 14, 1999)
Read: Review Chapters 7 and 8, and read Chapter 9 in CLR.

Review of Sorting: Sorting is among the most basic computational problems in algorithm design. We are given a sequence of items, each associated with a given key value. The problem is to permute the items so that they are in increasing order by key. Sorting is important in algorithm design because it is often the first step in some more complex algorithm, as an initial stage in organizing data for faster subsequent retrieval.

Sorting algorithms are usually divided into two classes: internal sorting algorithms, which assume that data is stored in an array, and external sorting algorithms, which assume that data is stored on tape or disk and can only be accessed sequentially. You are probably familiar with the standard simple Θ(n²) sorting algorithms, such as Insertion-sort and Bubblesort. The efficient Θ(n log n) sorting algorithms include Mergesort, Quicksort, and Heapsort. The quadratic-time algorithms are actually faster than the more complex Θ(n log n) algorithms for small inputs, say less than about 20 keys. Among the slow sorting algorithms, insertion-sort has a reputation as being the better choice.

Sorting algorithms often have additional properties that are of interest, depending on the application. Here are two important properties.

In-place: The algorithm uses no additional array storage, and hence it is possible to sort very large lists without the need to allocate additional arrays.

Stable: A sorting algorithm is stable if two elements that are equal remain in the same relative position after sorting is completed. This is of interest, since in some sorting applications you sort first on one key and then on another. It is nice to know that two items that are equal on the second key remain sorted on the first key.

Here is a quick summary of the Θ(n log n) algorithms. If you are not familiar with any of these, check out the descriptions in CLR.

Quicksort: It works recursively, by first selecting a random "pivot value" from the array. Then it partitions the array into elements that are less than and greater than the pivot. Then it recursively sorts each part. Quicksort is widely regarded as the fastest of the fast sorting algorithms (on modern machines). One explanation is that its inner loop compares elements against a single pivot value, which can be stored in a register for fast access; the other algorithms compare two elements in the array. Quicksort is considered an in-place sorting algorithm, since it uses no other array storage. (It does implicitly use the system's recursion stack, but this is usually not counted.) It is not stable. This algorithm is Θ(n log n) in the expected case and Θ(n²) in the worst case. The probability that the algorithm takes asymptotically longer (assuming that the pivot is chosen randomly) is extremely small for large n.

Mergesort: Mergesort also works recursively. It is a classical divide-and-conquer algorithm. The array is split into two subarrays of roughly equal size. They are sorted recursively. Then the two sorted subarrays are merged together in Θ(n) time. Mergesort is the only stable sorting algorithm of these three. The downside is that Mergesort is the only algorithm of the three that requires additional array storage (ignoring the recursion stack), and thus it is not in-place. This is because the merging process merges the two arrays into a third array. Although it is possible to merge arrays in-place, it cannot be done in Θ(n) time.

Heapsort: Heapsort is based on a nice data structure called a heap, which is an efficient implementation of a priority queue data structure. A priority queue supports the operations of inserting a key and deleting the element with the smallest key value. A heap can be built for n keys in Θ(n) time, and the minimum key can be extracted in Θ(log n) time. Heapsort is an in-place sorting algorithm, but it is not stable.
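To make these summaries concrete, here is a minimal Python sketch of Mergesort (an illustration added here, not the notes' own code). The merge step copies into a third list, which is why Mergesort is not in-place, and taking from the left sublist on ties is what makes it stable.

    def mergesort(a):
        # Divide, conquer, and combine, as described above.
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = mergesort(a[:mid]), mergesort(a[mid:])
        merged, i, j = [], 0, 0                  # merge into a third list
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:              # "<=" keeps equal keys in their original order
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(mergesort([24, 8, 38, 12, 41, 3, 11, 23]))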
