Searching and sorting data are fundamental algorithmic tasks. Searching refers to iterating over a data structure's elements to retrieve some value, while sorting refers to putting the elements in order. The searching and sorting algorithms differ for every data structure; this chapter focuses on searching and sorting arrays. By the end of this chapter, you will understand how to use common searching and sorting algorithms for arrays.
Searching
As mentioned, searching is the task of looking for a specific element inside a data structure. When searching in an array, there are two main techniques depending on whether the array is sorted. In this section, you’ll learn about linear and binary searching. Linear searches are especially flexible because they can be used with both sorted and unsorted data. Binary searches are specifically used with sorted data. However, a linear search has a higher time complexity than a binary search.
Linear Search
Time Complexity: O(n)

Linear search
For example, with an array of [1,2,3,4,5] and a search term of 3, it would take three iterations to complete (1, 2, 3). The reason this algorithm has a Big-O of O(n) is that, in the worst-case scenario, the entire array needs to be iterated. For example, if the search term is 5, it takes five iterations (1, 2, 3, 4, 5). If 6 is the search term, the algorithm goes through the entire array (1, 2, 3, 4, 5) and then returns false because the term was not found.
As noted previously, a linear search algorithm like this is great because it works whether or not the array is sorted. In a linear search algorithm, every element of the array is checked. So, you should use a linear search when the array is not sorted. If the array is sorted, you can do the search faster via a binary search.
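A minimal sketch of such a linear search, returning whether the search term is present:

```javascript
// Linear search: check every element in turn until the search term
// is found or the array is exhausted.
function linearSearch(array, searchTerm) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === searchTerm) {
      return true; // found after i + 1 iterations
    }
  }
  return false; // went through the entire array without a match
}

console.log(linearSearch([1, 2, 3, 4, 5], 3)); // true (3 iterations)
console.log(linearSearch([1, 2, 3, 4, 5], 6)); // false (5 iterations)
```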
Binary Search
Binary search is a searching algorithm that works on sorted data. Unlike the linear search algorithm, in which every element of the array is checked, binary search checks the middle element to see whether the desired value is greater or smaller than it. If the desired value is smaller, the algorithm continues searching the smaller half; if the desired value is bigger, it searches the bigger half.

Binary search in the left half of the array

Binary search in the right half of the array
The binary search algorithm is fast but can be used only if the array is sorted. It first checks whether the middle element is the element being searched for. If the search element is bigger than the middle element, the lower bound is set to the middle element's index plus one. If the search element is smaller than the middle element, the upper bound is set to the middle element's index minus one.
This way, the algorithm is continuously dividing the array into two sections: the lower half and the upper half. If the element is smaller than the middle element, it should look for it in the lower half; if the element is bigger than the middle element, it should look for it in the upper half.
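A sketch of this algorithm, returning the index of the match or -1 if the term is not present:

```javascript
// Binary search: repeatedly halve the search range of a sorted array.
function binarySearch(array, searchTerm) {
  let lowIndex = 0;
  let highIndex = array.length - 1;

  while (lowIndex <= highIndex) {
    const midIndex = Math.floor((lowIndex + highIndex) / 2);
    if (array[midIndex] === searchTerm) {
      return midIndex; // found the element
    } else if (searchTerm > array[midIndex]) {
      lowIndex = midIndex + 1; // search the upper half
    } else {
      highIndex = midIndex - 1; // search the lower half
    }
  }
  return -1; // not found
}

console.log(binarySearch([1, 2, 3, 4, 5], 3)); // 2
console.log(binarySearch([1, 2, 3, 4, 5], 6)); // -1
```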
People use binary search in everyday life without even knowing it. An example is a phone directory arranged from A to Z by last name.
If you are given the task of finding someone with the last name Lezer, you would first go to the L section and open it halfway through. Suppose Lizar is on that page; this means the lower section contains L + [a to i] last names and the upper section contains L + [i to z] last names. You would then check the middle of the lower section. Suppose Laar appears; since Lezer comes after Laar, you would now check the upper part of that section. This process repeats until Lezer is found.
Sorting
Sorting is one of the most important topics in computer science; it is faster and easier to locate items in a sorted array than in an unsorted array. You can use sorting algorithms to sort an array in memory for searching later in the program, or to write it to a file for later retrieval. In this section, different sorting techniques are explored, starting with the naive sorting algorithms and then moving on to the efficient ones. The efficient sorting algorithms come with trade-offs that should be considered when choosing among them.
Bubble Sort

First run of the bubble sort

The rest of the bubble sort runs
Time Complexity: O(n²)
Space Complexity: O(1)
Bubble sort is the worst-performing sort covered here because it compares every possible pair, whereas other sorting algorithms take advantage of already-sorted portions of the array. Because bubble sort uses nested loops, it has a time complexity of O(n²).
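A sketch of bubble sort, swapping adjacent out-of-order pairs until the array is sorted:

```javascript
// Bubble sort: on each pass, swap adjacent out-of-order pairs so the
// largest unsorted element "bubbles" to the end of the array.
function bubbleSort(array) {
  for (let i = 0; i < array.length; i++) {
    // the last i elements are already in place after i passes
    for (let j = 0; j < array.length - 1 - i; j++) {
      if (array[j] > array[j + 1]) {
        [array[j], array[j + 1]] = [array[j + 1], array[j]]; // swap
      }
    }
  }
  return array;
}

console.log(bubbleSort([6, 1, 2, 3, 4, 5])); // [1, 2, 3, 4, 5, 6]
```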
Selection Sort

Selection sort
Time Complexity: O(n²)
The time complexity of selection sort is still O(n²) because of the nested for loop.
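A sketch of selection sort, repeatedly selecting the minimum of the unsorted remainder:

```javascript
// Selection sort: find the smallest element of the unsorted portion
// and swap it into position i.
function selectionSort(array) {
  for (let i = 0; i < array.length; i++) {
    let minIndex = i;
    for (let j = i + 1; j < array.length; j++) {
      if (array[j] < array[minIndex]) {
        minIndex = j; // remember the smallest element seen so far
      }
    }
    [array[i], array[minIndex]] = [array[minIndex], array[i]]; // swap
  }
  return array;
}

console.log(selectionSort([6, 1, 23, 4, 2, 3])); // [1, 2, 3, 4, 6, 23]
```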
Insertion Sort

Insertion sort
Time Complexity: O(n²)
Space Complexity: O(1)
Again, this sorting algorithm has a quadratic time complexity of O(n²), like bubble and selection sort, because of the nested loop.
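A sketch of insertion sort, growing a sorted prefix one element at a time:

```javascript
// Insertion sort: take each element and shift it left through the
// already-sorted prefix until it sits in order.
function insertionSort(array) {
  for (let i = 1; i < array.length; i++) {
    const value = array[i];
    let j = i - 1;
    while (j >= 0 && array[j] > value) {
      array[j + 1] = array[j]; // shift larger elements right
      j--;
    }
    array[j + 1] = value; // insert into its sorted position
  }
  return array;
}

console.log(insertionSort([6, 1, 23, 4, 2, 3])); // [1, 2, 3, 4, 6, 23]
```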
Quicksort
Quicksort works by obtaining a pivot and partitioning the array around it (bigger elements on one side and smaller elements on the other) until everything is sorted. The ideal pivot is the median of the array, since it partitions the array evenly, but finding the median of an unsorted array takes linear time to compute. Hence, a pivot is typically obtained by taking the median value of the first, middle, and last elements of the partition. This sort is recursive and uses the divide-and-conquer methodology to break the quadratic complexity barrier, bringing the average time complexity down to O(n log₂(n)). However, with a pivot that puts everything on one side, the worst-case time complexity is O(n²).

Quicksort
Time Complexity: O(n log₂(n)) on average, O(n²) worst case
Space Complexity: O(log₂(n))
One downside of quicksort is that it can be O(n²) if a bad pivot is repeatedly picked. A bad pivot is one that does not partition the array evenly; the ideal pivot is the median element of the array. In addition, quicksort has a bigger space complexity, O(log₂(n)), than other in-place sorting algorithms because of the recursion call stack.
Use quicksort when average performance matters most; quicksort also makes good use of the memory cache.
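The approach above can be sketched as follows. For clarity, this version builds new subarrays at each level rather than partitioning in place; a production quicksort would partition in place to keep the O(log₂(n)) space bound:

```javascript
// Quicksort sketch with a median-of-three pivot: recursively partition
// around the pivot, then concatenate the sorted parts.
function quickSort(array) {
  if (array.length <= 1) return array;

  // median of the first, middle, and last elements as the pivot
  const first = array[0];
  const middle = array[Math.floor(array.length / 2)];
  const last = array[array.length - 1];
  const pivot = [first, middle, last].sort((a, b) => a - b)[1];

  const smaller = [];
  const equal = [];
  const larger = [];
  for (const value of array) {
    if (value < pivot) smaller.push(value);
    else if (value > pivot) larger.push(value);
    else equal.push(value); // pivot duplicates stay in the middle
  }
  return quickSort(smaller).concat(equal, quickSort(larger));
}

console.log(quickSort([6, 1, 23, 4, 2, 3])); // [1, 2, 3, 4, 6, 23]
```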
Quickselect
Quickselect is a selection algorithm for finding the kth smallest element in an unordered list. It uses the same approach as quicksort: a pivot is chosen, and the array is partitioned. Instead of recursing into both sides like quicksort, however, it recurses only into the side that contains the element. This reduces the average complexity from O(n log₂(n)) to O(n).
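A sketch of this idea, using the middle element as the pivot for simplicity and a 1-based k:

```javascript
// Quickselect: partition like quicksort, but recurse only into the
// side that contains the kth smallest element.
function quickSelect(array, k) {
  const pivot = array[Math.floor(array.length / 2)];
  const smaller = array.filter((v) => v < pivot);
  const equal = array.filter((v) => v === pivot);
  const larger = array.filter((v) => v > pivot);

  if (k <= smaller.length) {
    return quickSelect(smaller, k); // answer is among the smaller values
  } else if (k <= smaller.length + equal.length) {
    return pivot; // the kth smallest equals the pivot
  } else {
    // skip the smaller and equal groups, then search the larger side
    return quickSelect(larger, k - smaller.length - equal.length);
  }
}

console.log(quickSelect([6, 1, 23, 4, 2, 3], 2)); // 2 (second smallest)
```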
Time Complexity: O(n)
Mergesort

Mergesort
The merging function works by taking the two arrays (left and right) and merging them into one resultant array. The elements need to be compared as they get merged to preserve order.
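A sketch of the merge function and the recursive sort. Using <= when the heads are equal takes the left element first, which keeps the sort stable:

```javascript
// Merge two sorted arrays into one sorted result, comparing the
// heads of each array to preserve order.
function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      result.push(left[i++]);
    } else {
      result.push(right[j++]);
    }
  }
  // one side is exhausted; append the remainder of the other
  return result.concat(left.slice(i)).concat(right.slice(j));
}

// Mergesort: split in half, sort each half recursively, then merge.
function mergeSort(array) {
  if (array.length <= 1) return array;
  const mid = Math.floor(array.length / 2);
  return merge(mergeSort(array.slice(0, mid)), mergeSort(array.slice(mid)));
}

console.log(mergeSort([6, 1, 23, 4, 2, 3])); // [1, 2, 3, 4, 6, 23]
```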
Time Complexity: O(n log₂(n))
Space Complexity: O(n)
Mergesort has a large space complexity of O(n) because of the need to create and merge auxiliary arrays totaling n elements. Use mergesort when a stable sort is needed. A stable sort is one that is guaranteed not to reorder elements with identical keys. Mergesort is guaranteed to run in O(n log₂(n)); its disadvantage is that it uses O(n) space.
Count Sort

Count sort
Time Complexity: O(k+n)
Space Complexity: O(k)
Use count sort when you’re sorting integers with a limited range. This will be the fastest sort for this case.
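A sketch of count sort for non-negative integers up to a known maximum value (the maxValue parameter name is illustrative):

```javascript
// Count sort: tally how many times each value occurs, then rebuild
// the array in order from the tallies.
function countSort(array, maxValue) {
  const counts = new Array(maxValue + 1).fill(0);
  for (const value of array) {
    counts[value]++; // O(n) counting pass
  }
  const result = [];
  for (let value = 0; value <= maxValue; value++) {
    for (let c = 0; c < counts[value]; c++) {
      result.push(value); // emit each value as many times as counted
    }
  }
  return result;
}

console.log(countSort([3, 1, 2, 1, 0], 3)); // [0, 1, 1, 2, 3]
```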
JavaScript’s Built-in Sort
JavaScript has a built-in sort() method on array objects that sorts elements in ascending order. It accepts an optional comparator function as a parameter.
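For example, sorting an array of numbers with and without a comparator (the values here are illustrative):

```javascript
// Without a comparator, sort() converts the elements to strings and
// sorts them lexicographically, so 12 comes before 2.
console.log([12, 3, 1, 2, 23].sort()); // [1, 12, 2, 23, 3]

// With a numeric comparator, the values are sorted by magnitude.
console.log([12, 3, 1, 2, 23].sort((a, b) => a - b)); // [1, 2, 3, 12, 23]
```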
In the previous example, notice that numbers starting with 1 come first (1, 12), then numbers starting with 2, and so forth. This is because no comparator function was passed, so JavaScript converted the elements to strings and sorted them lexicographically.
The sort() function can be useful when you need a quick way to sort something without implementing it yourself.
Summary
There are two ways to search inside an array: linear search and binary search. Binary search is faster, with O(log₂(n)) time complexity, while linear search has O(n) time complexity. However, binary search can be performed only on a sorted array.
Sorting Summary
Algorithm | Time Complexity | Space Complexity |
---|---|---|
Quicksort | O(n log₂(n)) average, O(n²) worst case | O(log₂(n)) |
Mergesort | O(n log₂(n)) | O(n) |
Bubble sort | O(n²) | O(1) |
Insertion sort | O(n²) | O(1) |
Selection sort | O(n²) | O(1) |
Count sort | O(k + n) | O(k) |
Exercises
IMPLEMENT A SQUARE ROOT FUNCTION FOR AN INTEGER WITHOUT USING ANY MATH LIBRARIES
Time Complexity: O(n)
This is essentially a linear search, since it checks candidate values one by one until it reaches the square root.
Time Complexity: O(log₂(n))
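The O(log₂(n)) approach binary searches over the candidate roots instead. A sketch (sqrtInt is an illustrative name; it returns the floor of the root for non-perfect squares):

```javascript
// Integer square root via binary search on the answer: narrow the
// range until mid * mid brackets the target number.
function sqrtInt(number) {
  if (number < 2) return number; // 0 and 1 are their own roots
  let low = 1;
  let high = Math.floor(number / 2);
  let answer = 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (mid * mid === number) return mid; // exact root found
    if (mid * mid < number) {
      answer = mid; // best floor candidate so far
      low = mid + 1;
    } else {
      high = mid - 1;
    }
  }
  return answer; // floor of the square root
}

console.log(sqrtInt(9)); // 3
console.log(sqrtInt(10)); // 3
```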
Bonus: Find a Square Root of a Float
FIND IF TWO ELEMENTS OF AN ARRAY ADD UP TO A GIVEN NUMBER
Time Complexity: O(n²)
Space Complexity: O(1)
Every pair of elements is checked, and hence it takes quadratic time.
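A brute-force sketch of this pairwise check (the function name is illustrative):

```javascript
// Brute force: try every pair of elements — O(n^2) time, O(1) space.
function findTwoSum(array, target) {
  for (let i = 0; i < array.length; i++) {
    for (let j = i + 1; j < array.length; j++) {
      if (array[i] + array[j] === target) {
        return true; // found a pair summing to target
      }
    }
  }
  return false; // no pair adds up to target
}

console.log(findTwoSum([1, 2, 3, 4, 5], 9)); // true (4 + 5)
console.log(findTwoSum([1, 2, 3, 4, 5], 10)); // false
```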
Time Complexity: O(n)
Space Complexity: O(n)
This algorithm cuts the time complexity to O(n) but also takes O(n) space to store items in the store object.
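A sketch of this single-pass approach, checking each element's complement against a store object:

```javascript
// Single pass with a store object: for each element, check whether
// its complement (target - element) has already been seen.
function findTwoSumFast(array, target) {
  const store = {};
  for (const value of array) {
    if (store[target - value]) {
      return true; // current value completes a previously seen one
    }
    store[value] = true; // remember this value for later complements
  }
  return false; // no pair adds up to target
}

console.log(findTwoSumFast([1, 2, 3, 4, 5], 9)); // true (4 + 5)
console.log(findTwoSumFast([1, 2, 3, 4, 5], 10)); // false
```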
FIND AN ELEMENT WITHIN AN ARRAY THAT APPEARS ONLY ONCE
Time Complexity: O(log₂(n))
Space Complexity: O(1)
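Assuming the input is sorted and every element appears exactly twice except one (the assumption that makes an O(log₂(n)) solution possible), a binary search over pair boundaries might look like this:

```javascript
// Before the single element, each pair starts at an even index;
// after it, pairs start at odd indices. Binary search on that boundary.
function findSingle(array) {
  let low = 0;
  let high = array.length - 1;
  while (low < high) {
    let mid = Math.floor((low + high) / 2);
    if (mid % 2 === 1) mid--; // align mid to a pair's first slot
    if (array[mid] === array[mid + 1]) {
      low = mid + 2; // pairs intact so far; single element is to the right
    } else {
      high = mid; // pair broken; single element is at mid or to the left
    }
  }
  return array[low];
}

console.log(findSingle([1, 1, 2, 2, 3, 4, 4])); // 3
```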
CREATE A JAVASCRIPT SORT COMPARATOR FUNCTION THAT WOULD SORT STRINGS BY LENGTH
Examples
As shown, there’s a lot of flexibility with these comparators, and they can be used for sorting without needing to implement a sort yourself.
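One such comparator, ordering strings from shortest to longest (byLength is an illustrative name):

```javascript
// A comparator returns a negative number when a should come first,
// so subtracting lengths sorts shortest to longest.
const byLength = (a, b) => a.length - b.length;

console.log(['hi', 'hello', 'hey'].sort(byLength)); // ['hi', 'hey', 'hello']
```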
IMPLEMENT A WORD COUNTER LIST
Create a function that generates an object of words (as keys) and the number of times the words occur in a string ordered by highest to lowest occurrences.
Here’s some example input: practice makes perfect. get perfect by practice. just practice.
Time Complexity: O(n log₂(n))
Space Complexity: O(n)
Time complexity is limited by the sorting algorithm the JavaScript engine uses. Most engines use either mergesort or quicksort, which are both O(n log₂(n)).
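One possible sketch, tallying words and then rebuilding an object in order of descending count (the function name and the punctuation handling are illustrative; note that modern engines preserve string-key insertion order, which is what makes the ordered object possible):

```javascript
// Word counter: strip punctuation, tally occurrences, then rebuild an
// object ordered from highest to lowest occurrence count.
function wordCount(sentence) {
  const counts = {};
  const words = sentence.toLowerCase().replace(/[.,!?]/g, '').split(/\s+/);
  for (const word of words) {
    if (!word) continue; // skip empty tokens
    counts[word] = (counts[word] || 0) + 1;
  }
  // sort keys by count (descending) and reinsert in that order
  const ordered = {};
  Object.keys(counts)
    .sort((a, b) => counts[b] - counts[a])
    .forEach((word) => {
      ordered[word] = counts[word];
    });
  return ordered;
}

console.log(wordCount('practice makes perfect. get perfect by practice. just practice.'));
// { practice: 3, perfect: 2, makes: 1, get: 1, by: 1, just: 1 }
```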