Amazon Interview Question
Software Engineer / Developers
Country: United States
Interview Type: Phone Interview
int test2()
{
    int src[] = {1, 2, 3, 4, 1, 2, 3, 3, 6, 9, 0, 8, 8, 6};
    int test[100] = {0}; // size must be bigger than the max element in src
    int result = -1;
    // Scan right to left: result ends up as the leftmost element whose
    // value does not occur again to its right.
    for (int i = sizeof(src) / sizeof(src[0]) - 1; i >= 0; i--)
    {
        if (!test[src[i]])
        {
            result = src[i];
        }
        test[src[i]]++;
    }
    // Guard against arrays with no unique element at all: test[result]
    // holds the total count, so a real answer must have count 1.
    return (result != -1 && test[result] == 1) ? result : -1;
}
int test2a()
{
    int src[] = {-214483648, -214743647, -20043648, -204740648, -150000, 20, 20, 20, 60999090, -214483648, 214743646, 214483645, 1000900, 2661223};
    map<int, int> test;
    int result = -1;
    for (int i = sizeof(src) / sizeof(src[0]) - 1; i >= 0; i--)
    {
        if (test.find(src[i]) == test.end())
        {
            result = src[i];
        }
        test[src[i]]++;
    }
    // -1 can be a legitimate element value, so verify the count
    // instead of relying on the sentinel alone.
    return (test.count(result) && test[result] == 1) ? result : -1;
}
Here I have used std::map from the STL. In the first pass, initialize the map so that unique element values map to zero and the rest map to -1. In the second pass, return the first element whose map value is zero.
#include <iostream>
#include <map>
using namespace std;

int firstNonDup(int arr[], int size)
{
    map<int, int> mymap;
    map<int, int>::iterator itr;
    // Pass 1: unique values map to 0, duplicates to -1.
    for (int i = 0; i < size; i++)
    {
        if (!mymap.count(arr[i]))
            mymap[arr[i]] = 0;
        else
            mymap[arr[i]] = -1;
    }
    for (itr = mymap.begin(); itr != mymap.end(); itr++)
        cout << "Key: " << itr->first << " Value: " << itr->second << endl;
    // Pass 2: the first array element still mapped to 0 is the answer.
    for (int i = 0; i < size; i++)
    {
        if (!mymap.find(arr[i])->second)
            return arr[i];
    }
    return 0; // no unique element found
}
int main()
{
    //int arr[] = {7, 97, 33, 33, 11, 11, 97};
    int arr[] = {1, 2, 3, 1, 2, 3, 4, 5, 6, 7, 7, 6, 5};
    int val = firstNonDup(arr, sizeof(arr) / sizeof(arr[0]));
    cout << "First non duplicate value is: " << val << endl;
    getchar();
    return 0;
}
Result:
Key: 1 Value: -1
Key: 2 Value: -1
Key: 3 Value: -1
Key: 4 Value: 0
Key: 5 Value: -1
Key: 6 Value: -1
Key: 7 Value: -1
First non duplicate value is: 4
Result for case 1, which is commented out in the code:
Key: 7 Value: 0
Key: 11 Value: -1
Key: 33 Value: -1
Key: 97 Value: -1
First non duplicate value is: 7
What about just loading up a hashmap, incrementing the count for each element on the first pass... (element, count) = ({1,2},{2,3},{10,1})... then make a second pass over the original array; the first value whose count in the hash is 1 is the answer. Two passes is O(2n), which is still O(n).
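That count-then-rescan idea might look like this in Java (a minimal sketch; the class and method names are my own):

```java
import java.util.HashMap;
import java.util.Map;

public class FirstUnique {
    // Returns the first element with count 1, or null if every element repeats.
    static Integer firstUnique(int[] a) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int x : a)                       // pass 1: count occurrences
            counts.put(x, counts.getOrDefault(x, 0) + 1);
        for (int x : a)                       // pass 2: rescanning the original
            if (counts.get(x) == 1)           // array preserves its order
                return x;
        return null;
    }

    public static void main(String[] args) {
        int[] src = {1, 2, 3, 4, 1, 2, 3, 3, 6, 9, 0, 8, 8, 6};
        System.out.println(firstUnique(src)); // prints 4
    }
}
```

Because the second pass walks the original array rather than the map, a plain HashMap is enough; no insertion-ordered structure is needed.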
Maintain an extra array and a hashtable.
If you see an input array element A for the first time, append it to the end of the extra array at, say, index i. Insert A into the hashtable with key A and value i, then increment i.
If A has appeared before, look up the index where we put it in the extra array and clear that slot.
Proceed to the next element in the input array.
Once you are done with the input array, walk the extra array and pick the first element that isn't cleared.
To maintain the cleared status, you can use a pair of integer and bool if you want.
The pseudocode will be something like this:
struct ClearedInt {
    public int value;
    public bool cleared;
};

int firstUnique(int[] inp) {
    ClearedInt[] extra = new ClearedInt[inp.Length];
    HashMap<int, int> seen = new HashMap<int, int>();
    int currIndex = 0;
    foreach (int A in inp) {
        if (seen.Contains(A)) {
            int idx = seen[A];
            extra[idx].cleared = true;
            continue;
        }
        extra[currIndex].value = A;
        extra[currIndex].cleared = false;
        seen[A] = currIndex;
        currIndex++;
    }
    foreach (ClearedInt cA in extra) {
        if (!cA.cleared) return cA.value;
    }
    throw NoneUniqueException();
}
There is a bug: you should only loop over the extra array up to currIndex-1, since slots past that were never filled. Of course, if you are using a vector-type array, it should be fine.
But if the Contains check in that code is O(n), doing it for every element makes the whole thing O(n²)...
Instead, what I was thinking: what you are calling a hashmap I would call a key-value pair or a dictionary (I am more of a .NET guy).
For every element, check whether an entry is present in the dictionary by simply trying to retrieve the value by its key; if it is present, increment the count.
At the end, check which entry's count is exactly one and print that. How does that sound?
static void Main(string[] args)
{
    int[] input = { 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 1, 12, 3, 34, 1 };
    var numbers = new Dictionary<int, int>();
    foreach (int t in input)
    {
        try
        {
            if (numbers[t] == 0) // throws if t is not in the dictionary yet
            { }
        }
        catch (Exception)
        {
            numbers.Add(t, 1);
            continue;
        }
        numbers[t]++;
    }
    foreach (KeyValuePair<int, int> abc in numbers)
    {
        if (abc.Value == 1)
        {
            Console.WriteLine(abc.Key);
            break;
        }
    }
    Console.ReadLine();
}
Well, the reason I used a hashtable is that lookup is O(1). I'm not familiar with .NET, but does your solution look up pairs in O(1) and still preserve the order of the original array? Just curious.
Use Java's BitSet API. Traverse the given array, using each element as an index into the bit vector. As you go, check whether the bit is already set. If not, set it; if it is already set, flag it - that is your first repeating number.
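A sketch of that BitSet walk (note it finds the first repeating value, which is the inverse of the question as asked, and it assumes non-negative elements, since bit indices cannot be negative; the class and method names are mine):

```java
import java.util.BitSet;

public class FirstRepeat {
    // First value whose second occurrence is seen while scanning
    // left to right; -1 if nothing repeats.
    static int firstRepeating(int[] a) {
        BitSet seen = new BitSet();
        for (int x : a) {
            if (seen.get(x)) return x;  // bit already set: x repeats
            seen.set(x);
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] src = {1, 2, 3, 4, 1, 2, 3, 3, 6, 9, 0, 8, 8, 6};
        System.out.println(firstRepeating(src)); // prints 1
    }
}
```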
int src[] = {1, 2, 3, 4, 1, 2, 3, 3, 6, 9, 0, 8, 8, 6};
std::vector<int> out;       // candidates, in first-seen order
std::set<int> removed;      // values already proven duplicates
std::vector<int>::iterator it;
for (int i = 0; i < sizeof(src) / sizeof(src[0]); ++i)
{
    if (removed.count(src[i]))   // 3rd or later occurrence: without this
        continue;                // check the value would be re-added
    if ((it = find(out.begin(), out.end(), src[i])) != out.end())
    {
        out.erase(it);
        removed.insert(src[i]);
    }
    else
        out.push_back(src[i]);
}
std::cout << out.front() << std::endl;  // assumes at least one unique element
int printFirstUnique(vector<int> a)
{
    map<int, int> nums;
    for (int i = 0; i < a.size(); i++)
        nums[a[i]]++;
    for (int i = 0; i < a.size(); i++)
        if (nums[a[i]] == 1)
            return a[i];
    return -1; // no unique element; falling off the end would be undefined behavior
}
There is a problem here: what if the array is, say, 1 1 2 2 3 4 4 3 plus one large number like 253242? You end up storing entries keyed all the way up to 253242. I am not sure how feasible that is.
If you just replace the map with a hash you should be fine: std::map is a balanced binary tree, so swapping it for a hash cuts the lookup time from O(log N) to O(1), giving an overall performance of O(N).
Integer findFirstUnique(int[] array) {
    Integer firstUnique = null;
    LinkedHashSet<Integer> set = new LinkedHashSet<Integer>();
    if (array != null)
        for (int i = array.length - 1; i >= 0; i--) {
            if (set.add(new Integer(array[i]))) {
                firstUnique = array[i];
            }
        }
    return firstUnique;
}
The code is wrong. Check its output for the array {1,2,3,1,2,3,4,5,6,7,7,6,5}: it gives 1, but the first unique value is 4.
// Keep 2 sets: one of unique elements, one of duplicates
public static Integer unique(int[] a) {
    Set<Integer> uniques = new LinkedHashSet<Integer>();
    Set<Integer> duplicates = new LinkedHashSet<Integer>();
    for (Integer i : a) {
        if (!uniques.add(i)) {
            duplicates.add(i);
        }
    }
    uniques.removeAll(duplicates);
    return uniques.isEmpty() ? null : (Integer) uniques.toArray()[0];
}
I have found out more now...
This page summarises some of the time complexities for various collection types in Java, and they should be broadly the same for .NET.
I've taken the tables from that page and altered/expanded them for the .NET framework. See also the MSDN pages for SortedDictionary and SortedList, which detail the time complexities for various operations.
--------------------------------------------------------------------------------
Searching
Type of Search | Collection Types               | Complexity | Comments
Linear search  | Array / ArrayList / LinkedList | O(N)       | Unsorted data.
Binary search  | sorted Array / ArrayList       | O(log N)   | Requires sorted data.
Search         | Hashtable / Dictionary<T>      | O(1)       | Uses hash function.
Binary search  | SortedDictionary / SortedList  | O(log N)   | Sorting is automated.
static void Main(string[] args)
{
    int[] input = { 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 1, 12, 3, 34, 1 };
    var numbers = new Dictionary<int, int>();
    foreach (int t in input)
    {
        if (numbers.ContainsKey(t))
        {
            numbers[t]++;
        }
        else
        {
            numbers.Add(t, 1);
        }
    }
    foreach (KeyValuePair<int, int> abc in numbers)
    {
        if (abc.Value == 1)
        {
            Console.WriteLine(abc.Key);
            break;
        }
    }
    Console.ReadLine();
}
downvote? cut & paste from stackoverflow. LOL.
In fact here:
stackoverflow.com/questions/851949/asymptotic-complexity-of-net-collection-classes
Take two vectors/ArrayLists backed by a hashtable: one containing single-occurrence elements and one containing multiply-occurring elements. For each input element, if it is in neither list, put it in Vector1 (the single-element list); if it is already in the single-element list, remove it from there and put it in the multiple-occurrence list. After this, re-traverse the original list: the first number found in the single list is the one.
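That two-list idea might look like this in Java (a sketch; hash sets stand in for the vectors-plus-hashtable, and the names are mine):

```java
import java.util.HashSet;
import java.util.Set;

public class TwoLists {
    static Integer firstUnique(int[] a) {
        Set<Integer> singles = new HashSet<>();   // seen exactly once so far
        Set<Integer> multiples = new HashSet<>(); // seen more than once
        for (int x : a) {
            if (multiples.contains(x)) continue;  // already demoted
            if (!singles.add(x)) {                // add fails: second sighting
                singles.remove(x);
                multiples.add(x);
            }
        }
        for (int x : a)                 // re-traverse the original order:
            if (singles.contains(x))    // first value still single wins
                return x;
        return null;
    }

    public static void main(String[] args) {
        int[] src = {1, 2, 3, 4, 1, 2, 3, 3, 6, 9, 0, 8, 8, 6};
        System.out.println(firstUnique(src)); // prints 4
    }
}
```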
Consider this piece of code:
Map data = new HashMap();
for (int i = 0; i < arr.length; i++) {
    Integer tmp = (Integer) data.get(new Integer(arr[i]));
    Integer nwtmp = (tmp == null) ? new Integer(1) : new Integer(tmp.intValue() + 1);
    data.put(new Integer(arr[i]), nwtmp);
}
Iterator itr = data.entrySet().iterator();
int res = -1; // initialized so it is defined even when nothing is unique
while (itr.hasNext()) {
    Map.Entry entry = (Map.Entry) itr.next();
    Integer key = (Integer) entry.getKey();
    Integer count = (Integer) entry.getValue();
    if (count.intValue() == 1) {
        res = key.intValue();
        break;
    }
}
//res will be the output
Quick Solution
- shaad May 01, 2012
Try using LinkedHashMap (read the Java class) to preserve the order of insertion into the hashtable.
e.g. 1,1,3,4,4,3,5,6,6,7
Step 1 - Traverse the given array and build a LinkedHashMap: [key, value] = [array[i], count].
a. Check the key (array[i]) in the hashtable; if it is there, increase its value (count) by one, else put it in with value 1.
Step 2 - Traverse the map checking counts; if a count is 1, return the key.
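Those two steps in Java (a sketch; the class name is mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedOrder {
    static Integer firstUnique(int[] a) {
        // LinkedHashMap iterates keys in insertion order, so one pass over
        // the map replaces a second pass over the array.
        Map<Integer, Integer> counts = new LinkedHashMap<>();
        for (int x : a)
            counts.merge(x, 1, Integer::sum);          // Step 1: [value, count]
        for (Map.Entry<Integer, Integer> e : counts.entrySet())
            if (e.getValue() == 1)                     // Step 2: first count of 1
                return e.getKey();
        return null;
    }

    public static void main(String[] args) {
        int[] src = {1, 1, 3, 4, 4, 3, 5, 6, 6, 7};
        System.out.println(firstUnique(src)); // prints 5 for the example above
    }
}
```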
I think it can be solved without extra space. Bitwise operators may help. Thinking.....