## Goldman Sachs Interview Question

Applications Developer · **Country:** India

First, consider the "Partitioning" problem:

Assume "A" is unsorted and choose its p-th location as the "pivot". Assume A[p] = a and "a" is the i-th smallest element of A. Then, a partitioning over "p" yields a reordering of A such that "a" is located at the i-th location, and any element located at j < i is smaller than "i" (an consequently, the ones at j > i are larger).

This is done in O(n) where n is the length of A.
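To make the O(n) claim concrete, here is a minimal sketch of a partition routine in Python (Lomuto scheme; the function and variable names are my own, not from the original post):

```python
def partition(a, pivot_index):
    """Partition list a in place around the value at pivot_index.

    Returns the final index of the pivot value; everything to its left
    is smaller, everything to its right is at least as large.
    """
    n = len(a)
    a[pivot_index], a[n - 1] = a[n - 1], a[pivot_index]  # park the pivot at the end
    pivot = a[n - 1]
    store = 0
    for j in range(n - 1):                 # one O(n) pass over the array
        if a[j] < pivot:
            a[store], a[j] = a[j], a[store]
            store += 1
    a[store], a[n - 1] = a[n - 1], a[store]  # drop the pivot into its final slot
    return store

xs = [7, 2, 9, 4, 3, 8]
i = partition(xs, 0)        # pivot value is 7
print(i, xs)                # 3 [2, 4, 3, 7, 9, 8]
```

Note the single pass: each element is compared to the pivot exactly once, which is where the O(n) bound comes from.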

Quicksort is nothing but a series of recursive calls to partitioning:

```
QuickSort(A)
{
    if (length(A) <= 1) return;      // base case
    p = ChoosePivotIndex(A);         // see below: random works best
    i = Partition(A, p);             // A[i] is now in its final position
    QuickSort(A[1 .. i - 1]);
    QuickSort(A[i + 1 .. n]);
}
```
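A runnable version of the pseudocode above, recursing on index ranges instead of copying subarrays (a sketch under my own naming, with the partition inlined):

```python
import random

def quicksort(a, lo=0, hi=None):
    """Sort list a in place by recursive partitioning with a random pivot."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:                        # base case: 0 or 1 elements
        return
    p = random.randint(lo, hi)          # uniformly random pivot index
    a[p], a[hi] = a[hi], a[p]           # park the pivot at the end
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]           # pivot lands at its sorted position i
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)

xs = [5, 1, 4, 2, 8, 5, 7]
quicksort(xs)
print(xs)  # [1, 2, 4, 5, 5, 7, 8]
```

Recursing on (lo, hi) index pairs avoids the O(n) slicing cost that the pseudocode's `A[1..i-1]` notation would incur if implemented literally.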

The important question is how to choose "p". The answer is: uniformly at random. With a uniformly chosen pivot, QuickSort's expected running time is asymptotically optimal for a comparison sort, i.e., O(n log n).

The worst case, however, is O(n^2). With a uniformly random pivot this is vanishingly unlikely: the chance of, say, hitting the minimum element at every recursive call is only 1/n!.

"What is quicksort?" is best answered by reading that chapter of your algorithms textbook yourself. At the very minimum, you should know these sorting algorithms:

— bigphatkdawg, September 18, 2013

- Insertion sort (the most practical n^2 sort)
- Merge sort (the stable n log n comparison sort, which also works on linked lists)
- Quicksort (the fastest comparison sort on arrays in practice)
- Counting sort (a fast integer sort when the range of integers is small)
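Counting sort is the only non-comparison sort on the list, so it is worth a sketch (my own minimal version for non-negative integers):

```python
def counting_sort(a, max_value):
    """Sort non-negative integers in O(n + k) time, where k = max_value."""
    counts = [0] * (max_value + 1)
    for x in a:                      # tally how often each value occurs
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)      # emit each value as many times as counted
    return out

print(counting_sort([3, 1, 4, 1, 5, 2], 5))  # [1, 1, 2, 3, 4, 5]
```

It beats the O(n log n) comparison bound precisely because it never compares elements; it only works when the value range k is small enough to allocate a count array.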

And when you learn mergesort, study the "merge" subroutine well and realize that it is useful outside the sort. Same goes for the "partition" subroutine of quicksort.
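The "merge" subroutine on its own, as a standalone linear-time operation (a sketch with my own names):

```python
def merge(xs, ys):
    """Merge two already-sorted lists into one sorted list in O(len(xs) + len(ys))."""
    out = []
    i = j = 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:           # <= keeps the merge stable
            out.append(xs[i])
            i += 1
        else:
            out.append(ys[j])
            j += 1
    out.extend(xs[i:])               # at most one of these tails is non-empty
    out.extend(ys[j:])
    return out

# Useful outside sorting, e.g. combining two sorted result lists:
print(merge([1, 4, 9], [2, 3, 10]))  # [1, 2, 3, 4, 9, 10]
```

Outside of mergesort, the same two-pointer merge shows up when combining sorted logs, posting lists, or streams without re-sorting from scratch.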

The pivot is best chosen uniformly at random.