Index
Part I: Foundations
Chapter List
Chapter 1: The Role of Algorithms in Computing What are algorithms? Why is the study of algorithms
1.2 Algorithms as a technology Suppose computers were infinitely fast and computer memory was free.
Chapter notes There are many excellent texts on the general topic of algorithms, including those by
Chapter 2: Getting Started This chapter will familiarize you with the framework we shall use throu
2.2 Analyzing algorithms Analyzing an algorithm has come to mean predicting the resources that the
2.3 Designing algorithms There are many ways to design algorithms. Insertion sort uses an incrementa
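As a quick illustration of the incremental approach the 2.3 entry above mentions, here is a minimal Python sketch of insertion sort; the function name and zero-based indexing are illustrative choices rather than the book's pseudocode.

    def insertion_sort(a):
        # Grow a sorted prefix a[0..i-1] one element at a time.
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:   # shift larger elements one slot right
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a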
Chapter notes In 1968, Knuth published the first of three volumes with the general title The Art of
Chapter 3: Growth of Functions Overview The order of growth of the running time of an algorithm,
3.1 Asymptotic notation The notations we use to describe the asymptotic running time of an algorithm
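For reference, the standard definition of O-notation that this section builds on can be written (in LaTeX notation) as:

    O(g(n)) = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ 0 \le f(n) \le c\,g(n)\ \text{for all}\ n \ge n_0 \,\}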
3.2 Standard notations and common functions This section reviews some standard mathematical functio
Chapter notes Knuth [182] traces the origin of the O-notation to a number-theory text by P. Bachman
Chapter 4: Recurrences Overview As noted in Section 2.3.2, when an algorithm contains a recursive
Technicalities In practice, we neglect certain technical details when we state and solve recurrences
4.1 The substitution method The substitution method for solving recurrences entails two steps: Gue
4.2 The recursion-tree method Although the substitution method can provide a succinct proof that a s
4.3 The master method The master method provides a "cookbook" method for solving recurrences of the
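The "cookbook" form referred to in the 4.3 entry is the recurrence T(n) = aT(n/b) + f(n); written out, the three cases of the master theorem are:

    T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\ b > 1.
    \text{Case 1: } f(n) = O\!\left(n^{\log_b a - \varepsilon}\right) \text{ for some } \varepsilon > 0 \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\right).
    \text{Case 2: } f(n) = \Theta\!\left(n^{\log_b a}\right) \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a} \lg n\right).
    \text{Case 3: } f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right) \text{ for some } \varepsilon > 0, \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1 \text{ and all sufficiently large } n \;\Rightarrow\; T(n) = \Theta(f(n)).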
4.4 ★ Proof of the master theorem This section contains a proof of the master theorem (Theorem 4.1
Chapter notes Recurrences were studied as early as 1202 by L. Fibonacci, for whom the Fibonacci numb
Chapter 5: Probabilistic Analysis and Randomized Algorithms This chapter introduces probabilistic
5.2 Indicator random variables In order to analyze many algorithms, including the hiring problem, we
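As a one-line example of the technique this entry names: letting X_i indicate the event that candidate i is hired in the hiring problem, linearity of expectation gives

    \mathrm{E}[X] = \sum_{i=1}^{n} \mathrm{E}[X_i] = \sum_{i=1}^{n} \frac{1}{i} = \ln n + O(1),

so on average only about ln n of the n candidates are ever hired.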
5.3 Randomized algorithms In the previous section, we showed how knowing a distribution on the input
5.4 ★ Probabilistic analysis and further uses of indicator random variables This advanced section f
Chapter notes Bollobás [44], Hofri [151], and Spencer [283] contain a wealth of advanced probabili
Part II: Sorting and Order Statistics
Chapter List
Chapter 6: Heapsort Overview In this chapter, we introduce another sorting algorithm. Like merge
6.1 Heaps The (binary) heap data structure is an array object that can be viewed as a nearly complet
6.2 Maintaining the heap property MAX-HEAPIFY is an important subroutine for manipulating max-heaps.
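A minimal Python sketch of the idea behind MAX-HEAPIFY, using zero-based array indices (the book's pseudocode is one-based); the function and parameter names are illustrative.

    def max_heapify(a, i, heap_size):
        # Assumes the subtrees rooted at i's children are already max-heaps;
        # floats a[i] down until the subtree rooted at i is a max-heap.
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < heap_size and a[left] > a[largest]:
            largest = left
        if right < heap_size and a[right] > a[largest]:
            largest = right
        if largest != i:
            a[i], a[largest] = a[largest], a[i]
            max_heapify(a, largest, heap_size)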
6.3 Building a heap We can use the procedure MAX-HEAPIFY in a bottom-up manner to convert an array A
6.4 The heapsort algorithm The heapsort algorithm starts by using BUILD-MAX-HEAP to build a max-heap
6.5 Priority queues Heapsort is an excellent algorithm, but a good implementation of quicksort, pres
Chapter notes The heapsort algorithm was invented by Williams [316], who also described how to imple
Chapter 7: Quicksort Quicksort is a sorting algorithm whose worst-case running time is Θ(n²) on an
7.2 Performance of quicksort The running time of quicksort depends on whether the partitioning is ba
7.3 A randomized version of quicksort In exploring the average-case behavior of quicksort, we have m
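A compact Python sketch of a randomized quicksort in the spirit of this section: the only change from the deterministic version is that the pivot is chosen uniformly at random. Names and the in-place partitioning details are illustrative.

    import random

    def randomized_quicksort(a, p, r):
        # Sorts a[p..r] in place.
        if p < r:
            q = randomized_partition(a, p, r)
            randomized_quicksort(a, p, q - 1)
            randomized_quicksort(a, q + 1, r)

    def randomized_partition(a, p, r):
        k = random.randint(p, r)          # pick the pivot position at random
        a[k], a[r] = a[r], a[k]
        pivot = a[r]
        i = p - 1
        for j in range(p, r):
            if a[j] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[r] = a[r], a[i + 1]
        return i + 1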
7.4 Analysis of quicksort Section 7.2 gave some intuition for the worst-case behavior of quicksort
Chapter Notes The quicksort procedure was invented by Hoare [147]; Hoare's version appears in Probl
Chapter 8: Sorting in Linear Time Overview We have now introduced several algorithms that can sor
8.1 Lower bounds for sorting In a comparison sort, we use only comparisons between elements to gain
8.2 Counting sort Counting sort assumes that each of the n input elements is an integer in the rang
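A minimal Python sketch of counting sort as described in this entry, for integers in the range 0..k; the names are illustrative.

    def counting_sort(a, k):
        # Stable: equal keys keep their original relative order.
        count = [0] * (k + 1)
        for x in a:
            count[x] += 1
        for i in range(1, k + 1):       # prefix sums: count[i] = #elements <= i
            count[i] += count[i - 1]
        out = [0] * len(a)
        for x in reversed(a):           # scanning backwards preserves stability
            count[x] -= 1
            out[count[x]] = x
        return out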
8.3 Radix sort Radix sort is the algorithm used by the card-sorting machines you now find only in c
Chapter notes The decision-tree model for studying comparison sorts was introduced by Ford and Johns
Chapter 9: Medians and Order Statistics Overview The ith order statistic of a set of n elements i
9.1 Minimum and maximum How many comparisons are necessary to determine the minimum of a set of n e
9.2 Selection in expected linear time The general selection problem appears more difficult than the
9.3 Selection in worst-case linear time We now examine a selection algorithm whose running time is O
Chapter notes The worst-case linear-time median-finding algorithm was devised by Blum, Floyd, Pratt,
Part III: Data Structures
Chapter List
Chapter 10: Elementary Data Structures In this chapter, we examine the representation of dynamic s
10.2 Linked lists A linked list is a data structure in which the objects are arranged in a linear or
10.3 Implementing pointers and objects How do we implement pointers and objects in languages, such a
10.4 Representing rooted trees The methods for representing lists given in the previous section ext
Chapter notes Aho, Hopcroft, and Ullman [6] and Knuth [182] are excellent references for elementar
Chapter 11: Hash Tables Overview Many applications require a dynamic set that supports only the d
11.1 Direct-address tables Direct addressing is a simple technique that works well when the univers
11.2 Hash tables The difficulty with direct addressing is obvious: if the universe U is large, stor
11.3 Hash functions In this section, we discuss some issues regarding the design of good hash functi
11.4 Open addressing In open addressing, all elements are stored in the hash table itself. That is,
11.5 ★ Perfect hashing Although hashing is most often used for its excellent expected performance, h
Chapter notes Knuth [185] and Gonnet [126] are excellent references for the analysis of hashing al
Chapter 12: Binary Search Trees Overview Search trees are data structures that support many dynam
12.1 What is a binary search tree? A binary search tree is organized, as the name suggests, in a bin
12.2 Querying a binary search tree A common operation performed on a binary search tree is searching
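A small Python sketch of searching under the binary-search-tree property described above; the Node class and the iterative search are illustrative, not the book's pseudocode.

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def tree_search(x, key):
        # Walk down from the root x: smaller keys are in the left subtree,
        # larger keys in the right subtree.
        while x is not None and key != x.key:
            x = x.left if key < x.key else x.right
        return x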
12.3 Insertion and deletion The operations of insertion and deletion cause the dynamic set represen
12.4 ★ Randomly built binary search trees We have shown that all the basic operations on a binary s
Chapter notes Knuth [185] contains a good discussion of simple binary search trees as well as many
Chapter 13: Red-Black Trees Chapter 12 showed that a binary search tree of height h can implement
13.2 Rotations The search-tree operations TREE-INSERT and TREE-DELETE, when run on a red-black tree
13.3 Insertion Insertion of a node into an n-node red-black tree can be accomplished in O(lg n) tim
13.4 Deletion Like the other basic operations on an n-node red-black tree, deletion of a node takes
Chapter notes The idea of balancing a search tree is due to Adel'son-Vel'skiĭ and Landis [2], who i
Chapter 14: Augmenting Data Structures There are some engineering situations that require no more t
14.2 How to Augment a Data Structure The process of augmenting a basic data structure to support add
14.3 Interval Trees In this section, we shall augment red-black trees to support operations on dynam
Chapter Notes In their book, Preparata and Shamos [247] describe several of the interval trees that
Part IV: Advanced Design and Analysis Techniques
Chapter List
Chapter 15: Dynamic Programming Overview Dynamic programming, like the divide-and-conquer method,
15.1 Assembly-line scheduling Our first example of dynamic programming solves a manufacturing proble
15.2 Matrix-chain multiplication Our next example of dynamic programming is an algorithm that solves
15.3 Elements of dynamic programming Although we have just worked through two examples of the dynami
15.4 Longest common subsequence In biological applications, we often want to compare the DNA of two
15.5 Optimal binary search trees Suppose that we are designing a program to translate text from Engl
Chapter notes R. Bellman began the systematic study of dynamic programming in 1955. The word "progra
Chapter 16: Greedy Algorithms Overview Algorithms for optimization problems typically go through
16.1 An activity-selection problem Our first example is the problem of scheduling several competing
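As a sketch of the greedy choice used for this problem (always take the compatible activity with the earliest finish time), here is an illustrative Python version; representing activities as (start, finish) pairs is an assumption of the sketch.

    def greedy_activity_selector(activities):
        # activities: iterable of (start, finish) pairs.
        chosen, last_finish = [], float('-inf')
        for start, finish in sorted(activities, key=lambda a: a[1]):
            if start >= last_finish:      # compatible with everything chosen so far
                chosen.append((start, finish))
                last_finish = finish
        return chosen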
16.2 Elements of the greedy strategy A greedy algorithm obtains an optimal solution to a problem by
16.3 Huffman codes Huffman codes are a widely used and very effective technique for compressing dat
16.4 ★ Theoretical foundations for greedy methods There is a beautiful theory about greedy algorith
16.5 ★ A task-scheduling problem An interesting problem that can be solved using matroids is the pro
Chapter notes Much more material on greedy algorithms and matroids can be found in Lawler [196] and
Chapter 17: Amortized Analysis Overview In an amortized analysis, the time required to perform a
17.1 Aggregate analysis In aggregate analysis, we show that for all n, a sequence of n operations ta
17.2 The accounting method In the accounting method of amortized analysis, we assign differing charg
17.3 The potential method Instead of representing prepaid work as credit stored with specific object
17.4 Dynamic tables In some applications, we do not know in advance how many objects will be stored
Chapter notes Aggregate analysis was used by Aho, Hopcroft, and Ullman [5]. Tarjan [293] surveys the
Part V: Advanced Data Structures
Chapter List
Chapter 18: B-Trees Overview B-trees are balanced search trees designed to work well on magnetic
Data structures on secondary storage There are many different technologies available for providing m
18.1 Definition of B-trees To keep things simple, we assume, as we have for binary search trees and
18.2 Basic operations on B-trees In this section, we present the details of the operations B-TREE-SE
18.3 Deleting a key from a B-tree Deletion from a B-tree is analogous to insertion but a little more
Chapter notes Knuth [185], Aho, Hopcroft, and Ullman [5], and Sedgewick [269] give further discussi
Chapter 19: Binomial Heaps Overview This chapter and Chapter 20 present data structures known as
19.1 Binomial trees and binomial heaps A binomial heap is a collection of binomial trees, so this s
19.2 Operations on binomial heaps In this section, we show how to perform operations on binomial hea
Chapter notes Binomial heaps were introduced in 1978 by Vuillemin [307]. Brown [49, 50] studied the
Chapter 20: Fibonacci Heaps Overview In Chapter 19, we saw how binomial heaps support in O(lg n)
20.1 Structure of Fibonacci heaps Like a binomial heap, a Fibonacci heap is a collection of min-heap
20.2 Mergeable-heap operations In this section, we describe and analyze the mergeable-heap operation
20.3 Decreasing a key and deleting a node In this section, we show how to decrease the key of a node
20.4 Bounding the maximum degree To prove that the amortized time of FIB-HEAP-EXTRACT-MIN and FIB-HE
Chapter notes Fibonacci heaps were introduced by Fredman and Tarjan [98]. Their paper also describes
Chapter 21: Data Structures for Disjoint Sets Some applications involve grouping n distinct elemen
21.2 Linked-list representation of disjoint sets A simple way to implement a disjoint-set data struc
21.3 Disjoint-set forests In a faster implementation of disjoint sets, we represent sets by rooted t
21.4 ★ Analysis of union by rank with path compression As noted in Section 21.3, the running time of
Chapter notes Many of the important results for disjoint-set data structures are due at least in par
Part VI: Graph Algorithms
Chapter List
Chapter 22: Elementary Graph Algorithms This chapter presents methods for representing a graph and
22.2 Breadth-first search Breadth-first search is one of the simplest algorithms for searching a gr
22.3 Depth-first search The strategy followed by depth-first search is, as its name implies, to sea
22.4 Topological sort This section shows how depth-first search can be used to perform a topological
22.5 Strongly connected components We now consider a classic application of depth-first search: deco
Chapter notes Even [87] and Tarjan [292] are excellent references for graph algorithms. Breadth-fir
Chapter 23: Minimum Spanning Trees Overview In the design of electronic circuitry, it is often ne
23.1 Growing a minimum spanning tree Assume that we have a connected, undirected graph G = (V, E) wi
23.2 The algorithms of Kruskal and Prim The two minimum-spanning-tree algorithms described in this s
Chapter notes Tarjan [292] surveys the minimum-spanning-tree problem and provides excellent advance
Chapter 24: Single-Source Shortest Paths Overview A motorist wishes to find the shortest possible
Variants In this chapter, we shall focus on the single-source shortest-paths problem: given a graph
Optimal substructure of a shortest path Shortest-paths algorithms typically rely on the property tha
Negative-weight edges In some instances of the single-source shortest-paths problem, there may be ed
Cycles Can a shortest path contain a cycle? As we have just seen, it cannot contain a negative-weigh
Representing shortest paths We often wish to compute not only shortest-path weights, but the vertic
Relaxation The algorithms in this chapter use the technique of relaxation. For each vertex v ∈ V, w
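A minimal Python sketch of the relaxation step this entry refers to; representing the estimates d, the predecessors pi, and the weights w as dictionaries is an assumption of the sketch.

    def relax(u, v, w, d, pi):
        # If the path to v through u is shorter than the current estimate d[v],
        # lower the estimate and record u as v's predecessor.
        if d[v] > d[u] + w[(u, v)]:
            d[v] = d[u] + w[(u, v)]
            pi[v] = u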
Properties of shortest paths and relaxation To prove the algorithms in this chapter correct, we shal
Chapter outline Section 24.1 presents the Bellman-Ford algorithm, which solves the single-source sh
24.1 The Bellman-Ford algorithm The Bellman-Ford algorithm solves the single-source shortest-paths
24.2 Single-source shortest paths in directed acyclic graphs By relaxing the edges of a weighted dag
24.3 Dijkstra's algorithm Dijkstra's algorithm solves the single-source shortest-paths problem on a
24.4 Difference constraints and shortest paths Chapter 29 studies the general linear-programming pr
24.5 Proofs of shortest-paths properties Throughout this chapter, our correctness arguments have rel
Chapter notes Dijkstra's algorithm [75] appeared in 1959, but it contained no mention of a priority
Chapter 25: All-Pairs Shortest Paths Overview In this chapter, we consider the problem of findin
Chapter outline Section 25.1 presents a dynamic-programming algorithm based on matrix multiplicati
25.1 Shortest paths and matrix multiplication This section presents a dynamic-programming algorithm
25.2 The Floyd-Warshall algorithm In this section, we shall use a different dynamic-programming for
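A short Python sketch of the Floyd-Warshall idea (after step k, paths may use intermediate vertices 0..k); the adjacency-matrix input convention is an assumption of the sketch.

    def floyd_warshall(w):
        # w: n x n matrix with w[i][i] == 0 and float('inf') where no edge exists.
        n = len(w)
        d = [row[:] for row in w]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        return d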
25.3 Johnson's algorithm for sparse graphs Johnson's algorithm finds shortest paths between all pai
Chapter notes Lawler [196] has a good discussion of the all-pairs shortest-paths problem, although
Chapter 26: Maximum Flow Overview Just as we can model a road map as a directed graph in order to
26.1 Flow networks In this section, we give a graph-theoretic definition of flow networks, discuss t
26.2 The Ford-Fulkerson method This section presents the Ford-Fulkerson method for solving the maxi
26.3 Maximum bipartite matching Some combinatorial problems can easily be cast as maximum-flow probl
26.4 ★ Push-relabel algorithms In this section, we present the "push-relabel" approach to computing
26.5 ★ The relabel-to-front algorithm The push-relabel method allows us to apply the basic operations
Chapter notes Ahuja, Magnanti, and Orlin [7], Even [87], Lawler [196], Papadimitriou and Steiglitz
Part VII: Selected Topics
Chapter List
Chapter 27: Sorting Networks Overview In Part II, we examined sorting algorithms for serial comp
27.1 Comparison networks Sorting networks are comparison networks that always sort their inputs, so
27.2 The zero-one principle The zero-one principle says that if a sorting network works correctly wh
27.3 A bitonic sorting network The first step in our construction of an efficient sorting network is
27.4 A merging network Our sorting network will be constructed from merging networks, which are netw
27.5 A sorting network We now have all the necessary tools to construct a network that can sort any
Chapter notes Knuth [185] contains a discussion of sorting networks and charts their history. They
Chapter 28: Matrix Operations Overview Operations on matrices are at the heart of scientific com
28.1 Properties of matrices In this section, we review some basic concepts of matrix theory and some
28.2 Strassen's algorithm for matrix multiplication This section presents Strassen's remarkable rec
28.3 Solving systems of linear equations Solving a set of simultaneous linear equations is a fundam
28.4 Inverting matrices Although in practice we do not generally use matrix inverses to solve system
28.5 Symmetric positive-definite matrices and least-squares approximation Symmetric positive-defini
Chapter notes There are many excellent texts available that describe numerical and scientific comput
Chapter 29: Linear Programming Many problems can be formulated as maximizing or minimizing an objec
General linear programs In the general linear-programming problem, we wish to optimize a linear func
An overview of linear programming In order to describe properties of and algorithms for linear progr
Applications of linear programming Linear programming has a large number of applications. Any textbo
Algorithms for linear programming This chapter studies the simplex algorithm. This algorithm, when i
29.1 Standard and slack forms This section describes two formats, standard form and slack form, that
29.2 Formulating problems as linear programs Although we shall focus on the simplex algorithm in thi
29.3 The simplex algorithm The simplex algorithm is the classical method for solving linear programs
29.4 Duality We have proven that, under certain assumptions, SIMPLEX will terminate. We have not yet
29.5 The initial basic feasible solution In this section, we first describe how to test if a linear
Chapter notes This chapter only begins to study the wide field of linear programming. A number of bo
Chapter 30: Polynomials and the FFT The straightforward method of adding two polynomials of degree
Chapter outline Section 30.1 presents two ways to represent polynomials: the coefficient represent
30.1 Representation of polynomials The coefficient and point-value representations of polynomials ar
30.2 The DFT and FFT In Section 30.1, we claimed that if we use complex roots of unity, we can evalu
30.3 Efficient FFT implementations Since the practical applications of the DFT, such as signal proc
Chapter notes Van Loan's book [303] is an outstanding treatment of the fast Fourier transform. Press
Chapter 31: Number-Theoretic Algorithms Number theory was once viewed as a beautiful but largely u
31.1 Elementary number-theoretic notions This section provides a brief review of notions from elemen
31.2 Greatest common divisor In this section, we describe Euclid's algorithm for computing the great
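Euclid's algorithm, mentioned in this entry, is short enough to sketch directly; the iterative Python form below is an illustrative variant of the book's recursive pseudocode.

    def euclid(a, b):
        # Invariant: gcd(a, b) is unchanged, since gcd(a, b) == gcd(b, a % b)
        # and gcd(a, 0) == a.
        while b != 0:
            a, b = b, a % b
        return a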
31.3 Modular arithmetic Informally, we can think of modular arithmetic as arithmetic as usual over t
31.4 Solving modular linear equations We now consider the problem of finding solutions to the equat
31.5 The Chinese remainder theorem Around A.D. 100, the Chinese mathematician Sun-Tsu solved the pr
31.6 Powers of an element Just as it is natural to consider the multiples of a given element a, modu
31.7 The RSA public-key cryptosystem A public-key cryptosystem can be used to encrypt messages sent
31.8 ★ Primality testing In this section, we consider the problem of finding large primes. We begin
31.9 ★ Integer factorization Suppose we have an integer n that we wish to factor, that is, to decomp
Chapter notes Niven and Zuckerman [231] provide an excellent introduction to elementary number theo
Chapter 32: String Matching Overview Finding all occurrences of a pattern in a text is a problem
Notation and terminology We shall let Σ* (read "sigma-star") denote the set of all finite-length str
32.1 The naive string-matching algorithm The naive algorithm finds all valid shifts using a loop th
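A minimal Python sketch of the naive matcher described above: try every shift and compare character by character (done here via slicing); the names are illustrative.

    def naive_string_matcher(text, pattern):
        # Returns every shift s at which pattern occurs in text.
        n, m = len(text), len(pattern)
        return [s for s in range(n - m + 1) if text[s:s + m] == pattern]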
32.2 The Rabin-Karp algorithm Rabin and Karp have proposed a string-matching algorithm that performs
32.3 String matching with finite automata Many string-matching algorithms build a finite automaton t
32.4 ★ The Knuth-Morris-Pratt algorithm We now present a linear-time string-matching algorithm due t
Chapter notes The relation of string matching to the theory of finite automata is discussed by Aho,
Chapter 33: Computational Geometry Overview Computational geometry is the branch of computer scie
33.1 Line-segment properties Several of the computational-geometry algorithms in this chapter will
33.2 Determining whether any pair of segments intersects This section presents an algorithm for dete
33.3 Finding the convex hull The convex hull of a set Q of points is the smallest convex polygon P f
33.4 Finding the closest pair of points We now consider the problem of finding the closest pair of p
Chapter notes This chapter barely scratches the surface of computational-geometry algorithms and tec
Chapter 34: NP-Completeness Overview Almost all the algorithms we have studied thus far have been
NP-completeness and the classes P and NP Throughout this chapter, we shall refer to three classes of
Overview of showing problems to be NP-complete The techniques we use to show that a particular probl
Decision problems vs. optimization problems Many problems of interest are optimization problems, in
Chapter outline This chapter studies the aspects of NP-completeness that bear most directly on the a
34.1 Polynomial time We begin our study of NP-completeness by formalizing our notion of polynomial-t
34.2 Polynomial-time verification We now look at algorithms that "verify" membership in languages. F
34.3 NP-completeness and reducibility Perhaps the most compelling reason why theoretical computer s
34.4 NP-completeness proofs The NP-completeness of the circuit-satisfiability problem relies on a di
34.5 NP-complete problems NP-complete problems arise in diverse domains: boolean logic, graphs, arit
Chapter notes The book by Garey and Johnson [110] provides a wonderful guide to NP-completeness, dis
Chapter 35: Approximation Algorithms Many problems of practical significance are NP-complete but a
Chapter outline The first four sections of this chapter present some examples of polynomial-time ap
35.1 The vertex-cover problem The vertex-cover problem was defined and proven NP-complete in Section
35.2 The traveling-salesman problem In the traveling-salesman problem introduced in Section 34.5.4,
35.3 The set-covering problem The set-covering problem is an optimization problem that models many r
35.4 Randomization and linear programming In this section, we study two techniques that are useful
35.5 The subset-sum problem An instance of the subset-sum problem is a pair (S, t), where S is a set
Chapter notes Although methods that do not necessarily compute exact solutions have been known for t
Part VIII: Appendix: Mathematical Background
Chapter List