Google Interview Question
Software Engineer / Developers
Country: United States
Bubble/insertion/merge sorts are stable.
Selection/quick/heap sorts are not stable.
Quicksort:
Also, I don't understand your "we have to use quicksort" comment.
"For very unbalanced splits QS produces good results" ... you should explain this better. Also, there's no mention of the fact that quicksort is often avoided in hard real-time systems, where the O(n^2) worst case can't be tolerated for some inputs.
Also, the usual implementation of quicksort (the standard partition subroutine) does too many swaps when there are duplicate keys, so quicksort is often avoided when the data will have a lot of repeated keys.
Quick sort: When you don't need a stable sort and average case performance matters more than worst case performance. A quick sort is O(N log N) on average, O(N^2) in the worst case. A good implementation uses O(log N) auxiliary storage in the form of stack space for recursion.
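To make the O(log N) stack-space point concrete, here is a minimal sketch (my own Python, not from the answer): an in-place quicksort with a random pivot that always recurses into the smaller partition first and loops on the larger one, which bounds the recursion depth at O(log N) even on unlucky inputs.

```python
import random

def quicksort(a, lo=0, hi=None):
    """In-place quicksort with a random pivot (Lomuto partition).

    Recursing into the smaller half and iterating on the larger half
    keeps the stack depth at O(log N) even for unbalanced splits.
    """
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        # A random pivot makes the O(N^2) worst case very unlikely.
        p = random.randint(lo, hi)
        a[p], a[hi] = a[hi], a[p]
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        # Recurse on the smaller side, loop on the larger side.
        if i - lo < hi - i:
            quicksort(a, lo, i - 1)
            lo = i + 1
        else:
            quicksort(a, i + 1, hi)
            hi = i - 1
    return a
```

Note the trade-off the answer mentions: the sort is fast on average but not stable, since the partition step moves equal keys past each other.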
Merge sort: When you need a stable, O(N log N) sort, this is about your only option. The only downsides to it are that it uses O(N) auxiliary space and has a slightly larger constant than a quick sort. There are some in-place merge sorts, but AFAIK they are all either not stable or worse than O(N log N). Even the O(N log N) in place sorts have so much larger a constant than the plain old merge sort that they're more theoretical curiosities than useful algorithms.
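A quick sketch of why merge sort is the stable O(N log N) option (illustrative Python, not from the answer): stability comes down to a single comparison in the merge step.

```python
def merge_sort(a):
    """Stable merge sort returning a new list.

    O(N log N) time; the temporary lists are the O(N) auxiliary
    space the answer refers to.
    """
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # Using '<=' (not '<') is what makes the sort stable:
        # on ties, the element from the left (earlier) half wins.
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```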
Heap sort: When you don't need a stable sort and you care more about worst case performance than average case performance. It's guaranteed to be O(N log N), and uses O(1) auxiliary space, meaning that you won't unexpectedly run out of heap or stack space on very large inputs.
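The O(1) auxiliary-space claim can be seen in a sketch (my own Python): an iterative sift-down means no recursion stack at all, only a few index variables.

```python
def heapsort(a):
    """In-place heapsort: O(N log N) worst case, O(1) auxiliary space."""
    n = len(a)

    def sift_down(start, end):
        # Restore the max-heap property for the subtree rooted at
        # `start`, considering only elements a[start..end].
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1  # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    # Build a max-heap bottom-up in O(N).
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)
    # Repeatedly move the max to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a
```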
Insertion sort: When N is guaranteed to be small, including as the base case of a quick sort or merge sort. While this is O(N^2), it has a very small constant and is a stable sort.
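The base-case use is easy to see in a sketch (illustrative Python): insertion sort written over a subrange `[lo, hi]` so a quicksort or merge sort could hand off small partitions to it.

```python
def insertion_sort(a, lo=0, hi=None):
    """Stable in-place insertion sort over a[lo..hi].

    O(N^2) worst case but with a tiny constant, and O(N) when the
    input is already (nearly) sorted.
    """
    if hi is None:
        hi = len(a) - 1
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        # Shift larger elements right to make room for key.
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```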
Bubble sort, selection sort: When you're doing something quick and dirty and for some reason you can't just use the standard library's sorting algorithm. The only advantage these have over insertion sort is being slightly easier to implement.
Did you not read Hingle's answer? There are "better" reasons for using bubble and selection sort than the weak one in your answer.
My suspicion was correct (based on the fact that you must have spent < 1 min. on this page and skipped reading the better answers)... you simply googled the topic and copy-pasted an answer from: stackoverflow.com/questions/1933759/when-is-each-sorting-algorithm-used
Selection Sort: Best, worst, and average case running times are all O(n^2). However, the number of swaps required is at most n-1. Hence it is useful when the amount of satellite data (auxiliary data carried along with the key) is large and for some reason pointer-based swapping is not feasible.
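The "at most n-1 swaps" property is visible in a short sketch (my own Python, not from the answer): each position gets at most one swap, so records with heavy satellite data move at most once per slot.

```python
def selection_sort(a):
    """In-place selection sort: always O(N^2) comparisons,
    but at most N-1 swaps, which matters when moving records
    (large satellite data) is expensive."""
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j  # remember the smallest remaining element
        if m != i:
            a[i], a[m] = a[m], a[i]  # at most one swap per position
    return a
```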
- Hingle McCringleBerry September 29, 2013
Insertion Sort: Best case running time is O(n). Hence it is useful for adding new data to a presorted list. It is also fast when the number of elements is small, or when you need an online algorithm.
Bubble Sort: Where we know the data is very nearly sorted. Say only two adjacent elements are out of place: in one pass BS will fix them, and in the second pass it will see everything is sorted and exit. It only takes 2 passes of the array.
Merge Sort: For external sorting (data sets too large to fit in memory), it is the sort of choice.
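A toy sketch of the external-sort idea (illustrative Python; each in-memory list stands in for a disk-sized run): sort each run that fits in memory, then do a streaming k-way merge of the sorted runs.

```python
import heapq

def external_sort(chunks):
    """Toy external merge sort: `chunks` stands in for runs read
    from disk, each small enough to sort in memory.

    Pass 1 sorts each run; pass 2 k-way merges the sorted runs.
    heapq.merge streams the merge, holding only one element per
    run at a time, which is what makes the real thing work on
    data far larger than RAM.
    """
    runs = [sorted(chunk) for chunk in chunks]  # pass 1
    return list(heapq.merge(*runs))             # pass 2
```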
Quicksort: Remember that quicksort is dependent on the choice of pivot. So when we have absolutely random data values and we need to sort them, we have to use quicksort. Even for a fairly unbalanced split QS produces good results: any split that keeps at least a constant fraction of the elements on each side still yields O(n log n) overall.
Heap Sort: It is an in-place O(n log n) algorithm, hence we can use it when we cannot afford the extra space required for merge sort. It also has an O(n log n) worst case, as compared to the O(n^2) worst case of quicksort.