## HadoopUser

Has 7 years of experience in Java development in the e-commerce and banking domains, with good knowledge of data structures, big data, and databases; currently working as a lead engineer.

- 0 of 0 votes

Answer: How to read a big data file to get the top K values?

- HadoopUser in India

Ebay Software Engineer / Developer

It is a binary indexed tree problem where a frequency count is also maintained.
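The top-K part can also be sketched with a size-K min-heap while streaming the values, which runs in O(n log K) time and O(K) space. This is a common alternative to the binary indexed tree mentioned above; class and method names here are illustrative, not from the original answer.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

public class TopK {
    // Keep only the K largest values seen so far in a min-heap:
    // whenever the heap grows past K, evict its smallest element.
    static List<Integer> topK(Iterable<Integer> values, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap
        for (int v : values) {
            heap.add(v);
            if (heap.size() > k) {
                heap.poll(); // drop the smallest, keeping the K largest
            }
        }
        List<Integer> result = new ArrayList<>(heap);
        result.sort(Collections.reverseOrder());
        return result;
    }

    public static void main(String[] args) {
        System.out.println(topK(Arrays.asList(3, 5, 5, 8, 4, 7, 9, 2), 3)); // [9, 8, 7]
    }
}
```

For a file too big for memory, the same heap works because only K values are ever retained at once.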

- HadoopUser December 28, 2013

@Avinash:

1) HashMap is a map based on hashing of the keys. It supports O(1) get/put operations. Keys must have consistent implementations of hashCode() and equals() for this to work.

2) LinkedHashMap is very similar to HashMap, but it adds awareness to the order at which items are added (or accessed), so the iteration order is the same as insertion order (or access order, depending on construction parameters).

There is no guarantee that a HashMap will preserve the order in which entries are inserted. So although we can find the duplicates using a HashMap, the order will change.
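The iteration-order difference described above can be seen directly; the keys and values here are arbitrary examples.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hash = new HashMap<>();       // no order guarantee
        Map<String, Integer> linked = new LinkedHashMap<>(); // insertion order
        for (String key : new String[] {"banana", "apple", "cherry"}) {
            hash.put(key, key.length());
            linked.put(key, key.length());
        }
        System.out.println(linked.keySet()); // always [banana, apple, cherry]
        System.out.println(hash.keySet());   // order is unspecified
    }
}
```

LinkedHashMap pays for this guarantee with an extra doubly-linked list through its entries, which is the space cost mentioned below.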

This answer is incorrect, as HashSet does not preserve insertion order; for that we can use LinkedHashSet, but only if space is not a constraint.

- HadoopUser December 26, 2013

This answer is incorrect, as HashMap does not preserve insertion order; for that we can use LinkedHashMap, but only if space is not a constraint.

- HadoopUser December 26, 2013

I have solved it using counting sort. I will create one more array of size K, where K is the range of the numbers. With N numbers, the time complexity is

O(N + K)

Say the numbers are 3, 5, 5, 8, 4, 7, 8, 2, 7, 9, 6, 6.

Then the count array will be of size 9, as the numbers vary from 1 to 9.

Now in this array I will put the count of each number at index = number, so:

| index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|-------|---|---|---|---|---|---|---|---|---|
| count | 0 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 1 |

Finally, I will traverse the count array; any index with count > 1 is a duplicate.
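The walkthrough above can be sketched as follows (class and method names are illustrative; values are assumed to lie in the range 1..K):

```java
import java.util.ArrayList;
import java.util.List;

public class CountDuplicates {
    // counts[v] records how many times value v appears;
    // any value with count > 1 is a duplicate. Runs in O(N + K).
    static List<Integer> duplicates(int[] nums, int k) {
        int[] counts = new int[k + 1]; // index 0 unused; values are 1..k
        for (int v : nums) {
            counts[v]++;
        }
        List<Integer> dups = new ArrayList<>();
        for (int v = 1; v <= k; v++) {
            if (counts[v] > 1) {
                dups.add(v);
            }
        }
        return dups;
    }

    public static void main(String[] args) {
        int[] nums = {3, 5, 5, 8, 4, 7, 8, 2, 7, 9, 6, 6};
        System.out.println(duplicates(nums, 9)); // [5, 6, 7, 8]
    }
}
```

Note the trade-off: this is fast, but the O(K) count array only pays off when the value range is small relative to N.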


Using level order traversal and keeping a sorted map, keyed by horizontal distance, to preserve the order:

- HadoopUser October 28, 2018

```java
// Vertical-order print: nodes are grouped by horizontal distance (hd) from
// the root; a left child sits at hd - 1, a right child at hd + 1.
void printTree(TreeNode root) {
    if (root == null) {
        return;
    }
    // TreeMap keeps the horizontal distances sorted from leftmost to rightmost.
    TreeMap<Integer, LinkedList<Integer>> map = new TreeMap<Integer, LinkedList<Integer>>();
    Queue<Pair> q = new LinkedList<Pair>();
    q.add(new Pair(root, 0));
    addValue(map, 0, root.val);
    while (q.size() > 0) {
        Pair pair = q.poll();
        TreeNode temp = pair.node;
        int hd = pair.hd;
        if (temp.left != null) {
            q.add(new Pair(temp.left, hd - 1));
            addValue(map, hd - 1, temp.left.val);
        }
        if (temp.right != null) {
            q.add(new Pair(temp.right, hd + 1));
            addValue(map, hd + 1, temp.right.val);
        }
    }
    // BFS order within each horizontal distance is top-to-bottom,
    // so printing the lists in key order preserves the required order.
    for (int key : map.keySet()) {
        for (int val : map.get(key)) {
            System.out.print(val + " ");
        }
    }
}

void addValue(TreeMap<Integer, LinkedList<Integer>> map, int key, int val) {
    LinkedList<Integer> l = map.get(key);
    if (l == null) {
        l = new LinkedList<Integer>();
        map.put(key, l);
    }
    l.add(val);
}

class Pair {
    TreeNode node;
    int hd;

    public Pair(TreeNode node, int hd) {
        this.node = node;
        this.hd = hd;
    }
}
```
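A self-contained, runnable sketch of the same vertical-order idea, assuming a minimal TreeNode class (the interview code may define TreeNode differently):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.TreeMap;

public class VerticalOrder {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    // Returns one list per vertical column, ordered leftmost to rightmost.
    static List<List<Integer>> verticalOrder(TreeNode root) {
        TreeMap<Integer, List<Integer>> map = new TreeMap<>();
        if (root == null) {
            return new ArrayList<>();
        }
        // Two parallel queues: one for nodes, one for horizontal distances.
        Deque<TreeNode> nodes = new ArrayDeque<>();
        Deque<Integer> hds = new ArrayDeque<>();
        nodes.add(root);
        hds.add(0);
        while (!nodes.isEmpty()) {
            TreeNode n = nodes.poll();
            int hd = hds.poll();
            map.computeIfAbsent(hd, k -> new ArrayList<>()).add(n.val);
            if (n.left != null) { nodes.add(n.left); hds.add(hd - 1); }
            if (n.right != null) { nodes.add(n.right); hds.add(hd + 1); }
        }
        return new ArrayList<>(map.values());
    }

    public static void main(String[] args) {
        //     1
        //    / \
        //   2   3
        TreeNode root = new TreeNode(1);
        root.left = new TreeNode(2);
        root.right = new TreeNode(3);
        System.out.println(verticalOrder(root)); // [[2], [1], [3]]
    }
}
```

The parallel-queue pairing here replaces the Pair class purely for compactness; either form works.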