## LinkedIn Interview Question

Software Developers

**Team:** Mobile

**Country:** United States

**Interview Type:** Phone Interview

Sort the array so that all equal numbers end up next to each other, then compare each number with the previous one: if they are the same, print it; otherwise move on.

```
public static void findDuplicateNumber() {
    int[] array = { 1, 1, 2, 3, 4, 4, 5, 8, 8, 9, 10, 12, 12, 18 };
    System.out.println("Original Array = " + Arrays.toString(array));
    Arrays.sort(array); // ensure equal numbers are adjacent
    for (int i = 0; i <= array.length - 2; i++) {
        if (array[i] == array[i + 1]) {
            System.out.print(array[i] + ",");
            i = i + 1; // skip past the duplicate just printed
        }
    }
}
```

This should give us O(n log n) time complexity, dominated by the sort. Does it look right?

This could probably be solved with a HashSet, and you wouldn't need to sort the array. Loop through the array and, for each element, check whether it is already in the set (an O(1) operation on average): if it is, add it to a list of duplicates; otherwise add it to the set. Return the list at the end. The time complexity would be O(n).

```
public static List<Integer> findDuplicates(int[] arr) {
    Set<Integer> store = new HashSet<Integer>();
    List<Integer> dups = new ArrayList<Integer>();
    for (int i = 0; i < arr.length; i++) {
        if (store.contains(arr[i])) {
            dups.add(arr[i]);
        } else {
            store.add(arr[i]);
        }
    }
    return dups;
}
```

```
// O(n) average time
std::vector<int> FindDuplicates(const std::vector<int>& input_array)
{
    std::unordered_map<int, int> hashTable;
    std::vector<int> output;
    for (size_t i = 0; i < input_array.size(); i++)
    {
        // find is O(1) on average for std::unordered_map
        if (hashTable.find(input_array[i]) != hashTable.end())
        {
            hashTable[input_array[i]]++;
            // report each duplicate value only once, on its second occurrence
            if (hashTable[input_array[i]] == 2)
            {
                output.push_back(input_array[i]);
            }
        }
        else
        {
            hashTable[input_array[i]] = 1;
        }
    }
    return output;
}
```

1. If the elements in the array are all less than the size of the array, then go for the following algorithm.

2. Otherwise you can use a data structure like a HashMap.
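The comment above doesn't spell out the algorithm, but when every value lies in the range [1, array length) the usual in-place trick is index marking: for each value v, flip the sign of the element at index v; if that slot is already negative, v has been seen before and is a duplicate. A minimal sketch under that assumption (the class name and the restriction to positive values are mine, not the poster's):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InPlaceDuplicates {
    // Assumes every value v satisfies 1 <= v < arr.length.
    // Mutates arr (signs are flipped) but uses O(1) extra space
    // beyond the output list.
    public static List<Integer> findDuplicates(int[] arr) {
        List<Integer> dups = new ArrayList<Integer>();
        for (int i = 0; i < arr.length; i++) {
            int v = Math.abs(arr[i]);   // undo any earlier sign flip
            if (arr[v] < 0) {
                dups.add(v);            // index v was marked before: duplicate
            } else {
                arr[v] = -arr[v];       // mark index v as seen
            }
        }
        return dups;
    }

    public static void main(String[] args) {
        int[] a = { 1, 2, 3, 1, 3, 6, 6 };
        System.out.println(InPlaceDuplicates.findDuplicates(a));
    }
}
```

Note that this reports a value once per repeat occurrence and destroys the input's signs, so it trades cleanliness for the O(1) extra space the HashSet version can't offer.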

- Rushikesh Gaidhani September 14, 2016