## Yahoo Interview Question

Testing / Quality Assurance

How is the space complexity for a hashmap O(n)? It has to be O(k), where k is the width between the minimum and maximum values in the array. The same goes for the time.

First sort the array in O(N log N).

Then use the following function:

```
#include <iostream>
using namespace std;

void print_unique(int arr[], int size)
{
    if (size <= 0) return;
    if (size == 1) { cout << "unique is " << arr[0] << endl; return; }
    if (arr[0] != arr[1])                       // first element
        cout << "unique is " << arr[0] << endl;
    int *a = &arr[0], *b = &arr[1], *c = &arr[2];
    while (c <= &arr[size - 1])                 // stop before c walks past the end
    {
        if (*a != *b && *b != *c)
            cout << "unique is " << *b << endl;
        a++; b++; c++;
    }
    if (arr[size - 1] != arr[size - 2])         // last element
        cout << "unique is " << arr[size - 1] << endl;
}
```

to print the unique elements.

The extra space is only three integer pointers.

Sort the array. Time: O(N log N).

Replace duplicates with -1.

Then print the elements that are != -1:

```
#include <stdio.h>

/* Assumes the array is sorted and contains no -1 values,
   since -1 is used as the "duplicate" sentinel. */
void print_unique(int *a, int n)
{
    int i, prev = a[0];
    for (i = 0; i < n - 1; i++) {
        if (prev == a[i + 1]) {
            prev = a[i + 1];
            a[i] = a[i + 1] = -1;   /* mark both copies of the duplicate */
        } else {
            prev = a[i + 1];
        }
    }
    for (i = 0; i < n; i++)
        if (a[i] != -1)
            printf("%d ", a[i]);
    putchar('\n');
}
```

We need not sort; we haven't been asked for a constant-space algorithm.

So:

Step 1) Walk through the array (say A) and find the largest number => O(N).

Step 2) Create a parallel array B in which the number of occurrences of each element is stored (a frequency array, as in counting sort) => O(N) in time and O(K) in space, where K does not depend on N and is the largest number found in step 1.

Step 3) Now go through array B and print i for all B[i] = 1 => O(K).

Complete algo: O(N + K) in time and O(K) in space.

There might be more efficient ways.

If it's given that you can use extra space, then I think hashing will be the best solution. But if no extra space can be used, then just use this program, which sets duplicate values to -1 and prints each value if it is unique (i.e. not equal to -1):

```
#include <iostream>
using namespace std;

// O(n^2) time, no extra array. Assumes the input contains no -1 values,
// since -1 is used as the "duplicate" sentinel.
void removeduplicate(int a[], int n)
{
    int i, j, temp;
    for (i = 0; i < n; i++)
    {
        temp = a[i];
        if (temp != -1)
        {
            for (j = i + 1; j < n; j++)
            {
                if (a[j] != -1 && a[i] == a[j])
                {
                    temp = -1;   // a[i] has a duplicate
                    a[j] = -1;   // mark the later copy
                }
            }
            if (temp != -1)
                cout << " " << temp;
        }
    }
}
```

If we do not want to use additional storage like a HashTable and just want to print the unique values, provided the array is sorted beforehand, here is the C# code below. The FindUnique function assumes that nums is already sorted:

```
private static void FindUnique(int[] nums)
{
    int i = 0;
    int j = i + 1;
    while (i < nums.Length - 1 && j < nums.Length)
    {
        if (nums[i] == nums[j])
        {
            j++;
            if (j < nums.Length && nums[j - 1] != nums[j])
            {
                i = j;          // skip past the duplicate run
                j = i + 1;
            }
            continue;
        }
        else
        {
            Console.Write("{0} ", nums[i]);
            i++;
            j = i + 1;
        }
    }
    // The last element is unique if the loop left i resting on it.
    if (nums.Length > 0 && i == nums.Length - 1)
        Console.Write("{0} ", nums[i]);
}
```

It can be solved in O(N log N) using heap sort (it does not matter whether you use a min heap or a max heap). Below is an example using a max heap:

a) Heap sort in each iteration gives the max of the elements still to be sorted. Store it in a temp variable, say test_unique.

b) When you get the max element from the next iteration of the heap sort, compare it with test_unique from the previous iteration. If they are not the same, then add test_unique to the unique array.

When the heap sort completes, you have your array sorted as well as a unique array with the unique elements.

Note that heap sort is an in-place sort, so the space complexity is low (in fact, only the loop variables).

This will be better than directly sorting the array and then eliminating the duplicates, since each heapify operation reduces the size of the array still to be sorted. It will be O(N log N) but very effective. Good solution bro, keep it up :)

I'm a little weak with heaps and hashing, so I thought that if we can sort, the best we can do is a worst-case complexity of O(N log N). After that we can simply traverse the array, checking for the unique elements and skipping over the repeated ones until we find a new element. It is just a thought. Suggestions are always welcome :)

I have a question; unfortunately I don't see this solution as the best one. Aren't you heap sorting n times? Which means total running time = n * (n log n) in the worst case?

I still prefer hashing with a counter and running through the hash data structure to find the uniques: runtime O(n), at the cost of O(n) space.

Do a single pass. Keep adding the elements to a HashSet, checking whether each element already exists before adding it. If found, remove the element. Finally the HashSet will only contain the unique elements.

O(n)

* Validate null arrays

* Validate empty arrays

* Create a Hashmap, worst-case size: N

* Iterate through the array O(n) and record in the Hashmap whether the current element is unique or already exists in the map. This could also be achieved with a Hashmap<Integer, Integer> storing the count of each element, but the problem doesn't care how many times a given element is repeated, only whether it is unique or not.

* Iterate over the Hashmap O(n) and print only the unique items.

- Fernando May 22, 2011