kg
This problem belongs to a special category of problems known as the LCS (Longest Common Subsequence) problem.
Google to learn more, coz billy's definition of the problem is inaccurate: it doesn't mention maintaining the order of elements in the common sequence.
You might refer to the algorithms book by Cormen as well.
Finding an LCS takes O(nm) time and space. The requirement of the smallest window in A makes the
problem more complicated.
I came up with code that runs in O(max(m*n, n*n)) time and Theta(mn) space, as given below.
The runtime bound is loose and might be less than I calculated. Any comment is most welcome.
//
void findSmallestWindow(int[] A, int[] B){
  int n=sizeof(A);
  int m=sizeof(B);
  int[][] C=new int[n+1][m+1];
  int[][] minArr={{-1,-1,-1},{-1,-1,-1}}; //1st col=start of interval in A, 2nd col=end of interval in A,
                                          //3rd col=length of interval in A (-1 means "not set")
  //initialize first row and first column of C[][] to zero - code is omitted
  //this for loop's run time is O(mn)
  for(int i=1; i<=n; i++){
    for(int j=1; j<=m; j++){
      if(A[i-1]==B[j-1]){
        C[i][j]=C[i-1][j-1]+1;
      }else{
        //assign C[i][j] to max( cell above (i,j), cell left of (i,j) )
        if(C[i][j-1]>C[i-1][j]){
          C[i][j]=C[i][j-1];
        }else{
          C[i][j]=C[i-1][j];
        }
      }
    }
  } //both for loops end here
  //Note that at this point each value C[i][j] is the length of the longest
  //common subsequence of A[0..i-1] and B[0..j-1]
  //Runtime O(n(n+m)) for the loop
  for(int i=n; i>n-m-1; ){
    if(A[i-1]==B[m-1] && A[i-2]!=B[m-1]){
      i=findMinInterval(i,minArr,A,B,C);
      continue;
    }
    i--;
  }
  if(minArr[0][2]!=-1){
    //print min window in A: <minArr[0][0], minArr[0][1]>
  }else{
    //print min window in A: <minArr[1][0], minArr[1][1]>
  }
}
//This method assumes that there is an interval whose end index in A is k-1. It finds the
//minimum start index for that end, giving the interval. It also checks whether this interval is smaller
//than the previous smallest interval.
//Run time O(m+n)
int findMinInterval(int k, int[][] minArr, int[] A, int[] B, int[][] C){
  int n=sizeof(A);
  int m=sizeof(B);
  int index=1;
  if(minArr[0][2]==-1){
    index=0;
  }
  minArr[index][1]=k-1; //initialize the end of the current interval
  int i=k, j=m;
  for(; i>n-m-1 && j>0; ){
    if(A[i-1]==B[j-1]){
      i--;
      j--;
    }else if(C[i-1][j]>C[i][j-1]){
      i--;
    }else{
      j--;
    }
  }
  if(j==0){ //one complete match found
    minArr[index][0]=i; //initialize the start index of the current interval in A
    minArr[index][2]=minArr[index][1]-minArr[index][0]; //initialize the interval length
    //compare with the previous minimum interval and mark the bigger one with -1
    if(minArr[(index+1)%2][2]!=-1){
      if(minArr[0][2]>minArr[1][2]){
        minArr[0][2]=-1;
      }else{
        minArr[1][2]=-1;
      }
    }
  }
  return i; //return the beginning of the next interval check
}
The algorithm pointed out by 'sunny' is by far the best. It runs in O(n) time in the worst case and uses no extra memory.
Pseudocode of the algorithm for Java-type languages which don't have a pointer/pass-by-reference concept:
//This method will be invoked as findLCA(n1, n2, root(Tree), false, 0, sizeOf(Tree), null) where n1, n2 and Tree
//are inputs.
//At any point, currHt maintains the height of the current node = curr, and the
//flag firstOneFound indicates whether we have already encountered either n1 or n2 during the Euler walk.
//When we encounter one of n1 & n2 for the first time, we initialize minHt and lca, and we keep track of subsequent
//nodes in the walk whose currHt is less than minHt (thus a parent/ancestor of the current node) until we
//encounter the other of n1 & n2
Object[] findLCA(Node n1, Node n2, Node curr, boolean firstOneFound, int currHt, int minHt, Node lca){
  if (curr==null) return null;
  Object[] output={lca, firstOneFound, minHt};
  //Include code X here
  if(curr.left!=null){
    output=findLCA(n1, n2, curr.left, (Boolean)output[1], currHt+1, (Integer)output[2], (Node)output[0]);
    //Include code X here
  }
  if(curr.right!=null){
    output=findLCA(n1, n2, curr.right, (Boolean)output[1], currHt+1, (Integer)output[2], (Node)output[0]);
    //Include code X here
  }
  return output;
}
//Code X follows
if(curr==n1 || curr==n2){
  if((Boolean)output[1]){ //both nodes already seen - the lca is final
    return output;
  }else{
    output[0]=curr;   //lca=curr;
    output[1]=true;   //firstOneFound=true;
    output[2]=currHt; //minHt=currHt;
  }
}else if((Boolean)output[1] && currHt<(Integer)output[2]){
  output[2]=currHt;
  output[0]=curr;
}
Addendum to my post.
Reason: this is the logical reasoning I came up with:
If we look at the building block of a binary tree, we see a parent and its two children.
Now we have the freedom of choosing any node out of the 3 given nodes as the parent. Once a parent node is decided, we have the freedom of interchanging the left and right child while keeping the same spatial tree structure with three nodes. Thus, given a 3-node binary tree, we need two constraints to fix the two degrees of freedom (parent-child and ordering of children).
Now we need a way to check for these two constraints, and one way could be using its traversed form.
Breadth-first traversal provides only one piece of information: parent-child.
Preorder DFT provides only one piece of information: parent-child (same as BFT).
Postorder DFT provides only one piece of information: parent-child (reverse of preorder).
Inorder DFT provides a different piece of information.
Since BFT and preorder DFT provide exactly the same info, we might consider just one of them.
Since postorder DFT is just the mirror image of preorder DFT, we don't get any additional information; in fact traversing the preorder form in the reverse direction gives you the postorder form. Thus we choose one of these three forms. Inorder DFT reveals different information (for any node, all nodes before it in the traversal belong to its left subtree), so we should take inorder DFT as the second form. Thus two different forms should be sufficient to capture the two degrees of freedom.
By the same logic, a binary search tree has only one degree of freedom. For example, with a three-node structure we have the freedom to choose the parent; once we choose the parent, the left and right children are determined automatically. Thus we need only one of the four traversal forms.
Now I've noticed that inorder DFT alone is not sufficient, but any one of the other three forms (BFT/preorder/postorder) alone is sufficient for a BST (I couldn't find the logic).
//I'll propose another approach with some assumptions; the idea can be extended as required
//Assumptions:
1> the sets contain non-negative integers (0, 1, 2, ...)
2> the elements of a set are unique
3> the values of the elements range between 0 and 31, both inclusive
With these assumptions each set can be represented by a single integer (32 bits) where
the ith low-order bit is ON if element i exists in the set.
Example: the set S={1,5,7} is represented by the integer s=(0000 0000 0000 0000 0000 0000 1010 0010) in binary.
Now the union of two sets: S1 U S2 = s1 OR s2, in O(1) time.
Now the intersection of two sets: S1 intersection S2 = s1 AND s2, in O(1) time.
In certain situations such a representation can give you amazing runtime and memory savings.
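The bitmask idea above can be sketched in Java as follows (the class and method names are illustrative, not part of the original post):

```java
// Hypothetical helper class illustrating the bitmask-set idea above.
public class BitSet32 {
    // Encode a set of integers in [0, 31] as bits of a single int.
    static int encode(int... elements) {
        int s = 0;
        for (int e : elements) s |= (1 << e);
        return s;
    }
    static int union(int s1, int s2)        { return s1 | s2; }  // S1 U S2 in O(1)
    static int intersection(int s1, int s2) { return s1 & s2; }  // S1 n S2 in O(1)
    static boolean contains(int s, int e)   { return (s & (1 << e)) != 0; }

    public static void main(String[] args) {
        int s = encode(1, 5, 7);                        // the example set S = {1, 5, 7}
        System.out.println(Integer.toBinaryString(s));  // 10100010
        int t = encode(5, 9);
        // intersection keeps only the common element {5}
        System.out.println(Integer.toBinaryString(intersection(s, t))); // 100000
    }
}
```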
//Another approach: add an element to a hash table if it doesn't exist, otherwise delete it from the hash table.
//Modified Ankit's code inside the first 'for' loop; after the loop, the elements left in
//the hash table are exactly the odd-occurrence elements
if (ht[OddArray[i]] == null)
ht.Add(OddArray[i], 1);
else
ht.remove(OddArray[i]);
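The toggle trick above can be written as a small self-contained sketch (class and method names are mine, not from the original post):

```java
import java.util.HashSet;
import java.util.Set;

public class OddOccurrences {
    // Toggle membership: after the scan, the set holds exactly the
    // elements that occur an odd number of times.
    static Set<Integer> oddOnes(int[] arr) {
        Set<Integer> set = new HashSet<>();
        for (int x : arr) {
            // remove() returns false if x was absent, so we add it instead
            if (!set.remove(x)) set.add(x);
        }
        return set;
    }

    public static void main(String[] args) {
        // 1 occurs once, 2 twice, 3 three times -> odd-count elements are {1, 3}
        System.out.println(oddOnes(new int[]{1, 2, 2, 3, 3, 3}));
    }
}
```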
For languages like Java, where pass-by-reference is not possible, the algorithm below will work.
Node BT2DLinkedList(Node curr, Node prev){ //at invocation, prev points to the previous element in the linked list
  if (curr==null) return null;
  Node lastAdded=null; //points to the last element added to the linked list, else to 'prev'
  Node leftChild=curr.left;
  Node rightChild=curr.right;
  //Call on left child
  if(leftChild != null){
    lastAdded=BT2DLinkedList(leftChild, prev);
  }else{
    lastAdded=prev;
  }
  //Inorder action - set this node's right to the previous node and the previous node's left to this node
  curr.right=lastAdded;
  if(lastAdded!=null){ lastAdded.left=curr; }
  lastAdded=curr;
  //Call on right child
  if(rightChild != null){
    lastAdded=BT2DLinkedList(rightChild, lastAdded);
  }
  return lastAdded;
}
//For any binary tree, invoke this method with 1st arg=root, 2nd arg=null;
//when this method returns, it returns the largest node, which can be used as the head of the doubly linked list
Dasineni, why do you say that subtraction is costlier than the shift operation?
A probable machine language implementation for these two operations would be:
Subtraction:
load a into register R1
load constant 1 into register R2
subtract R2 from R1 and save in R1 #This is a single instruction with a unique OP code
#Now R1 = a-1
Shift operation:
load a into register R1
load constant 1 into register R2
right shift R1 by R2 bits and save in R1 #This is a single instruction with a unique OP code
#Now R1 = a>>1
Three pointers pointing to 3 consecutive elements will do the work.
Proof: the problem can be stated as: we have x unique elements and x copies of the same element. I want to prove that there will be at least one case where one unique element lies between two of the same elements - thus the two neighboring same elements and the one unique element can be covered by the three pointers and compared.
Let's assume we have x unique elements, and thereby (x+1) places to insert the x same elements to form the given array. For any such insertion, either two or more of the same elements end up adjacent (and can be detected by the three-pointer comparison), or, if not, there will be at most one place where two of the same elements have 2 unique elements between them, and everywhere else exactly one unique element between them - which can also be detected by the three-pointer comparison.
I think this logic is known to geeks as the Pigeonhole Principle :)
Algorithm:
i=0 //1st index
j=1 //2nd index
k=2 //3rd index
if the elements pointed to by i, j and k do not contain two equal elements
then
  i=k+1
  j=i+1
  k=j+1
else
  the element appearing twice among them is the repeated element
end if
//Note: only i is sufficient; for simplicity j and k are used
I do not understand why one should load the whole content into a map (hashtable or tree, etc.). It gives you time efficiency, but think of the memory requirement. Imagine you want to view a production log file of huge size, say 4 GB. Do you think you can load that into RAM? In fact, Microsoft Notepad suffers from this size problem.
My proposal is to trade off this time efficiency for the ability to handle files of unlimited size. To do that, I'll keep the current cursor position in a variable. As you type, the cursor moves and the variable's value increases.
Moreover, to support other functionality like replace, insert, delete, append (similar to insert) and undo, I would like to have a data structure, say 'action'. An action might have three attributes: positionForAction, stringValue and actionType. So to invoke each method like replace, insert, delete, etc., you pass an action instance as the method argument.
For undo, we need to store the previous state for each action. One way to deal with this is to push the inverse action onto a stack for each action performed. For example, if you delete the char 'P' at location 1243, then create an action instance (positionForAction=1243, stringValue='P', actionType=insert) and push it onto the stack.
Search is another must-have feature, and for that I would implement a good string pattern search algorithm such as the Knuth-Morris-Pratt algorithm.
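The action/undo-stack design above can be sketched as follows. This is a minimal illustration, assuming an in-memory buffer for simplicity; the class and method names (Editor, Action, apply, undo) are mine, not from the original post:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the 'action' idea: every edit records its inverse on an undo stack.
public class Editor {
    enum Type { INSERT, DELETE }

    static class Action {
        final int pos; final String value; final Type type;
        Action(int pos, String value, Type type) { this.pos = pos; this.value = value; this.type = type; }
    }

    private final StringBuilder text = new StringBuilder();
    private final Deque<Action> undoStack = new ArrayDeque<>();

    void apply(Action a) {
        if (a.type == Type.INSERT) {
            text.insert(a.pos, a.value);
            undoStack.push(new Action(a.pos, a.value, Type.DELETE));  // inverse action
        } else {
            String removed = text.substring(a.pos, a.pos + a.value.length());
            text.delete(a.pos, a.pos + a.value.length());
            undoStack.push(new Action(a.pos, removed, Type.INSERT));  // inverse action
        }
    }

    void undo() {
        if (!undoStack.isEmpty()) applyNoRecord(undoStack.pop());
    }

    private void applyNoRecord(Action a) {
        if (a.type == Type.INSERT) text.insert(a.pos, a.value);
        else text.delete(a.pos, a.pos + a.value.length());
    }

    String text() { return text.toString(); }

    public static void main(String[] args) {
        Editor e = new Editor();
        e.apply(new Action(0, "hello", Type.INSERT));
        e.apply(new Action(4, "o", Type.DELETE));  // text is now "hell"
        e.undo();                                  // back to "hello"
        System.out.println(e.text());
    }
}
```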
This question is not about hash tables or B-trees; it's about a set of problems known as "searching based on multiple keys" or "searching based on secondary keys". Hashing on all keys or a k-d tree may be of interest depending on the specific case. The idea of three separate hash tables (or trees), one per key, can be useful for this specific case.
Any knowledgeable person, pls enlighten us!
I still didn't get the question and the solution by 'weicool'. Can anyone explain it to me?
In general, a hash table is implemented with an array of some initial size. When an object is added to the hash table, the hash function internally computes an index into the internal array and puts the object there. The implementation also decides whether to allow only one object per array index. If so, the array must be resized as more objects are added to the hash table (based on the load factor). In an alternative implementation, where more than one object is allowed per index, the internal array need not be resized; instead a linked list per index holds all the objects that hash to the same index. In the first case, a search/get operation is always O(1), but an add/put operation might not be O(1), due to resizing when the number of objects exceeds the internal array size. In the second implementation, a put operation is always O(1) [with a proper add-to-linked-list implementation], but a search/get operation might not be O(1), since in case of collision a linear-time search of the linked list is required.
I think the question is looking for a way to achieve an O(1) search/get operation for the 2nd implementation?
First of all, I didn't assume the Queue is Java's library class - the implementation is left to the user, though Java's library class should work.
Secondly, irrespective of implementation, the above code requires the enqueue(..) method to add only one child (or element) at a time.
The code snippet "children.enqueue(t.child);" in my algorithm assumes t.child holds/points to only one node. The basic idea of implementing an n-ary tree with a node class having only 2 references of "Node" type is: the "child" variable points to the leftmost child and the "sibling" variable points to the next sibling.
Concrete example: a tree has root=R and the root has three children A, B and C. The representation will be:
R.child=>A
R.sibling=>null
A.child=>null
A.sibling=>B
B.child=>null
B.sibling=>C
C.child=>null
C.sibling=>null
Hope it helps!
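The concrete example above can be made runnable with a small sketch (the class names and the breadthFirst helper are mine, assuming the child/sibling representation described):

```java
public class ChildSiblingDemo {
    static class Node {
        final String key;
        Node child, sibling;  // leftmost child and next sibling
        Node(String key) { this.key = key; }
    }

    // Build the concrete example: root R with children A, B, C.
    static Node buildExample() {
        Node r = new Node("R"), a = new Node("A"), b = new Node("B"), c = new Node("C");
        r.child = a;    // R.child => A (leftmost child)
        a.sibling = b;  // A.sibling => B
        b.sibling = c;  // B.sibling => C
        return r;
    }

    // Collect keys level by level, walking sibling links within a level
    // and enqueueing each node's leftmost child.
    static String breadthFirst(Node root) {
        StringBuilder out = new StringBuilder();
        java.util.ArrayDeque<Node> queue = new java.util.ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            // the root has no siblings to traverse, so stop after it
            for (Node t = queue.poll(); t != null; t = (t == root) ? null : t.sibling) {
                out.append(t.key);
                if (t.child != null) queue.add(t.child);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(breadthFirst(buildExample()));  // RABC
    }
}
```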
I feel the algorithm works for a tree with just a root as well. Pls see the comment next to each line of code.
Example: the tree has a root node = r.
The method breadthFirstTraversal(..) will be called as
breadthFirstTraversal(r).
public void breadthFirstTraversal(Node root){ //root=r
  //Declare a queue
  Queue children=new Queue(); //creates an empty queue
  children.enqueue(root); //add r to the queue - only one element
  while(!children.isEmpty()){ //returns true
    Node t=children.dequeue(); //t=r and children is empty now
    while(t!=null){ //returns true
      t.print(); //prints r, since t=r
      if(t.child!=null){ //returns false since root r doesn't have a child
        children.enqueue(t.child);
      }
      //For the root, siblings will not be traversed
      if(t.equals(root)){ //returns true
        t=null; //next time the outer while loop breaks since the queue is empty
      }else{
        t=t.sibling;
      }
    }
  }
}
No, it can't be found unless you restart from the beginning.
Since Kanji uses a variable bit length to encode its characters, it is a case of prefix coding, which is designed to use less space compared to fixed bit-length encodings like ASCII. In such an encoding, the bit patterns of the characters are designed so that any character can be identified from its unique prefix bits (certain bits from the left, reading the most significant bits first), but the characters can't be identified from their suffix bits. This is why, when a set of such characters is encoded in binary form, each character can be identified by reading from the beginning (left to right), but moving in the reverse direction will fail to identify the characters.
One such example is the network IP address class scheme: Class A has prefix 0, Class B prefix 10, Class C 110 and Class D prefix 1110. Looking at any IP address's prefix you can tell its class, but not from its suffix. In this particular example all IP addresses have a fixed length of 32 bits, so a suffix of 32 bits (eventually all the bits) could do the task, but with variable IP address lengths it would not have been possible to identify the class from the suffix.
The above solution works fine when the size of the numbers is small and all the numbers can be loaded into RAM in one go: the data structure lives in RAM and must hold all the numbers.
The phrase "millions of 30-bit binary numbers" suggests that all the numbers can't be accommodated in RAM in one go, and hence we need a better solution. B-tree data structures are used for secondary storage, but I'm not sure whether they would be useful in this case.
Therese,
a binary tree is different from a binary search tree.
A tree in which each node can have at most two children is a binary tree.
A binary search tree is a binary tree whose children must additionally follow an order - normally, for every node, the keys in its left subtree are smaller than the node's key, which in turn is smaller than the keys in its right subtree.
Number each card 1 to 52, so after each shuffle we'll get a number sequence from the cards, say {23,5,52,...} (52 terms in total).
Now we can calculate a hash value of the sequence with radix=52.
Example: {10,...,29,2,37}
h=(52^51*10 + ... + 52^2*29 + 52^1*2 + 52^0*37) mod q, for some large prime q
Now comparing h values will give an idea about the randomness of the sequences.
Different h values guarantee different sequences, whereas equal h values do NOT necessarily mean the same sequence, though the chances are very high.
Got this idea from the Rabin-Karp algorithm for string pattern search.
Note that any value for the radix will work, whereas radix=52 with a large prime modulus will give the best result for fewer collisions (different sequences giving the same h value).
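The Rabin-Karp-style fingerprint above can be computed with Horner's rule in O(n). This is a sketch; the class name and the choice of a large prime modulus (to keep collisions rare) are my assumptions:

```java
public class DeckHash {
    // Treat the card sequence as digits of a radix-52 number and
    // reduce modulo a large prime (Horner's rule keeps values small).
    static long hash(int[] sequence, int radix, long mod) {
        long h = 0;
        for (int card : sequence) {
            h = (h * radix + card) % mod;  // O(1) per card, O(n) overall
        }
        return h;
    }

    public static void main(String[] args) {
        long q = 1_000_000_007L;  // large prime modulus
        long h1 = hash(new int[]{10, 29, 2, 37}, 52, q);
        long h2 = hash(new int[]{10, 29, 37, 2}, 52, q);
        // same cards in a different order give a different fingerprint
        System.out.println(h1 + " " + h2);  // 1484637 1486422
    }
}
```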
I'm assuming a number system with base 36, 'coz I have 36 distinct characters (0-9, a-z) to represent the numbers 0-35.
Also, to accommodate positive, negative and null, I'll use the leftmost characters (I'll explain them later); thus I'm left with 8-3=5 characters in the worst case to represent all 32-bit integers. Before I explain more, let's see a few examples:

Decimal Number = String Representation

0  = 0000 0000
1  = 0000 0001
.. = ...
9  = 0000 0009
10 = 0000 000a
11 = 0000 000b
.. = ...
35 = 0000 000z
36 = 0000 0010   (36^1 * 1 + 36^0 * 0)
...
99 = 0000 002r   (36^1 * 2 + 36^0 * 27)
and so on.
Now, to accommodate negative numbers, I assume the leftmost 2 characters will be 10;
to avoid ambiguity, I also assume the leftmost character is zero if it's a positive integer.
Example: decimal (-99) <=> new system (1000 002r)
I couldn't understand the requirement about null from this question. Still, to
represent a null value for an integer I can assume the leftmost three characters will be 110. This is why I mentioned earlier that I'm left with 5 characters to represent the absolute value of 32-bit integers.
Note: the ASCII/Unicode values for 0-9 and for a-z each occupy contiguous locations in the ASCII table. Thus we can use a transformation to make them
represent the intended integer value. For example:
int_value_in_my_system(char x) = ascii_value(char x) - ascii_value('0') for digits, and ascii_value(char x) - ascii_value('a') + 10 for letters
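The scheme above can be sketched as a small encoder. This is a hedged illustration of my reading of the post: prefix "0" for positive, "10" for negative, "110" for null, with the remaining characters holding the magnitude in base 36 (the class and method names are mine):

```java
public class Base36 {
    // Encode a (possibly null) 32-bit integer into the 8-character scheme above.
    static String encode(Integer value) {
        if (value == null) return "110" + "00000";       // null marker + padding
        String prefix = (value >= 0) ? "0" : "10";       // sign marker
        int digits = 8 - prefix.length();                // characters left for the magnitude
        long magnitude = Math.abs((long) value);         // long avoids MIN_VALUE overflow
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < digits; i++) {
            // forDigit maps 0-9 to '0'-'9' and 10-35 to 'a'-'z'
            sb.append(Character.forDigit((int) (magnitude % 36), 36));
            magnitude /= 36;
        }
        return prefix + sb.reverse();
    }

    public static void main(String[] args) {
        System.out.println(encode(99));    // 0000002r  (matches the post's example)
        System.out.println(encode(-99));   // 1000002r
        System.out.println(encode(null));  // 11000000
    }
}
```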
*********************************************************************
Eila - can u explain in more detail, with some examples maybe :)
A few comments on the discussion so far:
1> Associating prime numbers is not a good idea, 'coz finding a huge set of prime numbers is itself a big task in terms of runtime, and memory of course.
2> If we don't have any extra memory, then quicksort or any good in-place sort will be a good idea, with runtime best case O(n log n) and worst case O(n*n).
3> If extra memory is available, then using a hashtable (or hashset, based on the language-specific implementation) will be a better choice than an array of size 2^16, 'coz the array needs 2^16 memory anyway, whereas the hashtable's memory grows as we process the string. If the 2nd and 3rd characters are duplicates, the hashtable uses only 2 memory slots to discover it, compared to the array's huge fixed size of 2^16. The runtime of both is O(n).
4> Finally, "hitesh1407"'s solution of using bits to indicate whether a char is present in the string is a very good idea, given that the chances of the input string having duplicates are very low.
So if we know the input string is very prone to duplicates, a hashtable is the better choice; otherwise option <4> may turn out to be a good choice.
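Option <4> can be sketched for the common interview variant where the string is restricted to lowercase 'a'-'z', so 26 presence flags fit in a single int (this restriction is my assumption; for a full 2^16 alphabet you'd use a bit array instead):

```java
public class DuplicateCheck {
    // One bit per letter: bit (c - 'a') records whether c has been seen.
    static boolean hasDuplicate(String s) {
        int seen = 0;
        for (char c : s.toCharArray()) {
            int bit = 1 << (c - 'a');
            if ((seen & bit) != 0) return true;  // flag already set => duplicate
            seen |= bit;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicate("abcdef"));  // false
        System.out.println(hasDuplicate("abcb"));    // true
    }
}
```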
Pseudocode (array indices start from 0, 1, ...):

p=given number to find
N=size of array
k=index of smallest element //amount of rotation, found in O(N) time
l=left index=k
r=right index=(k-1+N)%N
l1=transform(l)=(l-k+N)%N
r1=transform(r)=(r-k+N)%N
while(l1<=r1)
{
  mid1=(l1+r1)/2
  mid=inverseTransform(mid1)=(mid1+k)%N
  if(p==[mid]) break
  //p occurs at index mid. [mid] indicates the value at index=mid
  if(p>[mid])
  {
    l1=mid1+1
  }
  else
  {
    r1=mid1-1
  }
}
//code to check if p doesn't exist in the array

Note: variables l, r, mid are indices into the given array,
and variables l1, r1, mid1 are the indices the elements would have had if the array had not
been rotated, i.e. if the lowest index (zero) contained the smallest element.
Assumption: the input is a String instead of a char array.
Code is in Java:
public static void printAllSubsets(String s){
int[] base=new int[s.length()];
String s1="";
while(!s1.equals(s)){
s1="";
nextCombination(base);
for(int i=0;i<base.length;i++){
if(base[i]==1){
s1=s1+s.charAt(i);
}
}
System.out.println(s1);
}
}
public static void nextCombination(int[] arr){
int i=arr.length-1;
while(i>=0){
if(arr[i]==0){
break;
}
i--;
}
if(i<0){
return;
}
arr[i++]=1;
while(i<arr.length){
arr[i++]=0;
}
}

kg
January 15, 2007

Start of Pseudocode
b=string of bits
u=integer
D=suitable data structure (a hashtable will be a good idea)
  where element=(key, value)
  value=(occurrence counter OC of key, pairing counter PC of key)
for each number b in file
{
  u=binary_string_to_int(b)
  if( complement(u) exists in D AND OC(complement(u)) > 0 )
  {
    OC(complement(u))=OC(complement(u))-1
    PC(complement(u))=PC(complement(u))+1
  }
  else
  {
    if (u exists in D)
    {
      OC(u)=OC(u)+1
    }
    else
    {
      D.add(u,1,0) // 1=OC, 0=PC
    }
  }
}
//Scan through the data structure D to generate the pairing information
End of Pseudocode
Runtime = O(nk) where n=number of binary numbers, k=number of bits in each number.
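The OC/PC bookkeeping above can be sketched in Java with a HashMap. This is an illustrative reading of the pseudocode (class and method names are mine), counting how many (u, ~u) pairs exist among k-bit numbers:

```java
import java.util.HashMap;
import java.util.Map;

public class ComplementPairs {
    // Single pass: when a number's k-bit complement is waiting unpaired,
    // consume it and record a pair; otherwise remember this number.
    static int countPairs(int[] numbers, int k) {
        int mask = (1 << k) - 1;
        Map<Integer, Integer> oc = new HashMap<>();  // occurrence counters (OC)
        int pairs = 0;                               // total of all pairing counters (PC)
        for (int u : numbers) {
            int comp = ~u & mask;                    // k-bit complement of u
            Integer avail = oc.get(comp);
            if (avail != null && avail > 0) {
                oc.put(comp, avail - 1);             // consume one unpaired complement
                pairs++;
            } else {
                oc.merge(u, 1, Integer::sum);        // remember u as unpaired
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        // 3-bit numbers: 010 pairs with 101, 111 pairs with 000
        int[] nums = {0b010, 0b111, 0b101, 0b000, 0b010};
        System.out.println(countPairs(nums, 3));  // 2
    }
}
```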
*Garbage Collection - JAVA*
During execution of a program in the JVM, objects are created on the JVM heap, and the variables (local or static) that hold references to these objects reside on the JVM stack. Such variables on the stack are called root objects, as they directly refer to one object on the heap, and other, indirectly referenced objects can be reached through these root objects.
An object residing on the heap cannot be accessed if no variables (residing on the JVM stack) hold a reference to it; such objects are called garbage, as they occupy memory but are not usable.
There are methods to identify and clean up such objects:
1. Reference Counting: objects that are not referenced (directly or indirectly) are identified and removed. It fails to identify cyclic garbage objects.
Finding circularity in a singly linked list is easy, but in reality the objects in a heap form an n-ary structure, where an object (a node) can have references to many objects (many child nodes). In such structures (a tree by definition cannot have circularity, so it becomes a graph, which can have cycles), finding circularity is very difficult (need to find out if there exists any algorithm for this). Thus we need a different approach for garbage collection.
2. Mark and Sweep: objects that are accessible (directly or indirectly) from root objects are marked first. Then all objects in the heap are scanned and the unmarked ones are removed.
An extra variable can be used to store mark/unmark information per object. This method eliminates cyclic garbage but fragments the heap. Normal program execution is suspended while the garbage collection algorithm runs, which makes the program less responsive.
3. Stop and Copy: marked (accessible) objects are copied to contiguous memory locations in a reserved area of the heap, and then the current heap is cleaned completely. The reserved area is then used as the active heap, and the cleaned area is reserved for the next garbage collection.
Even though it overcomes the fragmentation issue, the program still needs to be halted while the garbage collection algorithm runs, as before. It also requires double the heap memory to accommodate the reserve heap.
4. Mark and Compact: marked (accessible) objects are copied within the same heap to contiguous locations, overwriting existing garbage, and the rest of the garbage is cleaned.
Ref: http://www.brpreiss.com/books/opus5/html/page414.html#SECTION0014000000000000000000
Assumptions:
1> It's a complete binary tree. (Binary is not mentioned, and a node with two
   pointers can be used for an n-ary tree, so binary is not obvious from the
   question.)
2> The general convention of filling a complete tree from left to right is assumed.
   This means that when adding a child to a node, we'll add the left child before
   the right.
Algorithm with O(logN) (I've used Java style):
addNode(Tree t, TreeNode newNode) {
  int n=t.size;
  TreeNode p=t.root;
  int nComplete=0;     //stores the size the binary tree rooted at p would have
                       //if it were completely full
  int nPrevComplete=0; //stores the size of the completely full BT whose
                       //height is one less than nComplete's
  int LR=0;            //stores the number of nodes required to grow from
                       //nPrevComplete to nComplete
  int lr=0;            //actual nodes present beyond nPrevComplete
  nPrevComplete=func(n); // func(X) returns an integer obtained by setting all
                         //bits to the right of the most significant bit of
                         //X to 1. Example: func(5)=7
                         //This is to be done. Looking for an O(1) time
                         //solution. Any HELP!!!!
  while(n>2){
    nComplete=nPrevComplete;
    nPrevComplete=nPrevComplete>>>1; // logical right shift
    LR=nComplete-nPrevComplete;
    // As an enhancement: LR=nComplete XOR nPrevComplete
    lr=n-nPrevComplete;
    if( (lr>=(LR/2)) && (lr<LR) ){ //move to the right of p
      p=p.rChild;
      n=(nPrevComplete-1)/2 + lr - (LR/2);
    }else{ //move to the left of p
      p=p.lChild;
      n=(nPrevComplete-1)/2 + Math.min(lr, LR/2);
    }
  }
  //p is the node whose child will be the new node
  //giving preference to the left child as a convention
  if(p.lChild == null){
    p.lChild=newNode;
  }else{
    p.rChild=newNode;
  }
}//End of method
//Note: variables 'LR' and 'lr' are redundant and are used for ease of understanding
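The func(X) asked about above can in fact be done in O(1) word operations with the classic bit-smearing trick: OR the value with progressively larger right-shifts of itself, so every bit below the most significant set bit becomes 1. A sketch (the class name is mine):

```java
public class MsbFill {
    // func(5) = 5 (101) -> 7 (111); branch-free, O(1) word operations for 32-bit ints.
    static int func(int x) {
        x |= x >>> 1;   // smear the MSB one position right
        x |= x >>> 2;   // then 2 positions
        x |= x >>> 4;   // then 4 ...
        x |= x >>> 8;
        x |= x >>> 16;  // after 5 steps all 32 bits below the MSB are set
        return x;
    }

    public static void main(String[] args) {
        System.out.println(func(5));   // 7
        System.out.println(func(12));  // 15
        System.out.println(func(1));   // 1
    }
}
```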
1> If the input trees are provided with their structure (which is the case here, I believe), then Varun's algorithm works FINE. (thnx Ro for pointing that out)
2> If the input is flattened trees (traversed forms of the trees), then 2 traversed forms for each binary tree and one traversed form for each CBT/BST are required. (Ref to my previous post)
Since the given tree is not a binary tree, I assumed it is an n-ary tree. Here is my pseudocode, written in Java style, for breadth-first printing of an n-ary tree; it will also work for a binary tree.
/*
This is for an n-ary tree where a node can have any number of children. Such a tree can be implemented with each node having only two refs - one referring to its leftmost child and the other referring to its immediate sibling. This policy saves us from implementing a node with n references, one per child.
We assume the node implementation class "Node" has two members of Node type - child and sibling.
The Node class also has a print() method and an equals(..) method.
The queue is implemented by a class "Queue" and supports enqueue(..), dequeue() and isEmpty() methods.
isEmpty() returns true if the queue is empty.
*/
public void breadthFirstTraversal(Node root){
//Declare a queue
Queue children=new Queue(); //Creates an empty queue
children.enqueue(root);
while(!children.isEmpty()){
Node t=children.dequeue();
while(t!=null){
t.print();
if(t.child!=null){
children.enqueue(t.child);
}
//For root sibling will not be traversed
if(t.equals(root)){
t=null;
}else{
t=t.sibling;
}
}
}
}
For a complete binary tree or a binary search tree, only one traversal will do.
For a binary tree, a minimum of two traversals - inorder and post/preorder - are required.
Take counterexamples to understand the requirement for a binary tree.
Example 1:
Let's have two binary trees T1 and T2. T1 has 3 nodes with root=a, left=b and right=c.
T2 has root=a, a's right=b and b's right=c. Now the preorder traversals of both T1 and T2
have the same form a,b,c (considering root,left,right order).
Thus T1 and T2, having different structures, can have the same preorder traversal.
But their inorder traversals are different: T1 (b,a,c) and T2 (a,b,c).
Example 2:
Let's have two binary trees T2 and T3. T2 has root=a, a's right=b and b's right=c.
T3 has 3 nodes with root=a, a's right=b and b's left=c. Now the preorder (root,left,right)
traversals of both T2 and T3 have the same form a,b,c. Moreover, the postorder (left,right,root)
traversals of both T2 and T3 also have the same form c,b,a.
Thus T2 and T3, having different structures, can have the same preorder and postorder traversals.
Thus one of the forms must be an inorder traversal.
Would appreciate a logical or mathematical proof of this property.
Shik,
You are right in saying O(N) + O(M) is equivalent to O(max(N,M)).
But O(n+m) is NOT wrong from the perspective of O-notation; the only difference is that O(n+m) gives a looser-looking bound than O(max(N,M)).
Proof:
Let f1=O(g(n)) for n>=n1 and f2=O(h(n)) for n>=n2. Then
f1+f2 <= c1*g(n) + c2*h(n) for n>=max(n1,n2)
      <= c*( g(n)+h(n) ) where c is defined as c=max(c1,c2) .... <I>
      <= 2c*max(g(n),h(n)) ....<II>
Now,
<I> implies f1+f2 = O( g(n)+h(n) )
<II> implies f1+f2 = O( max(g(n),h(n)) )
The first thing we should understand is that Binary Tree (BT), Complete Binary Tree (CBT) and Binary Search Tree (BST) are different, and BT is the most generic of the three.
Each node's key will be provided, and no information about parent/child relationships will be provided unless a solution requires it.
1> Varun's solution of inorder plus one of pre/postorder works fine for BT, CBT and BST. Parent/child relations are not required.
Serialization: O(n). Deserialization: ??
2> Vader's solution (4th post) works for CBT and BST (parent/child relations not required) but fails for BT.
Serialization: O(n). Deserialization: O(n)
3> Jack's solution (3rd point) works for BT, CBT and BST, as it needs the parent/child relation for each node.
Serialization: O(n). Deserialization: O(n)
4> Neeraj's solution is an enhancement of Vader's to accommodate BT. Thus it works for all, and no parent/child info is reqd.
Serialization: O(n). Deserialization: O(n)
5> Chiru's solution, the way I understand it, looks the same as Jack's.
New solution proposed:
Visualize the given BT as part of a CBT, where missing nodes are replaced by null/dummy nodes. Now number each node starting from the root in breadth-first fashion: the root gets #1, the root's left child gets #2, the root's right child gets #3, and so on. Since each node's position in a complete binary tree is unique, a node with a given number has a unique position.
Example: root=a, root's left=b, b's left=c.
To make it a complete binary tree we need a's right (a dummy node), b's right (another dummy node) and two more dummy nodes for a's right's children.
Now if we number all the nodes, a gets #1, b gets #2 and c gets #4.
Knowing the positional numbers of a, b and c, we can reconstruct the BT easily.
Serialization: O(n). Deserialization: O(n)
In terms of memory, I have a notion that the proposed solution will have the best memory optimization among these.
#Thanks to Dushyant for this solution
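The positional-numbering idea above can be sketched as a serialize/deserialize pair: store (position, key) pairs, then rebuild using the fact that position p's parent is p/2 and p's parity tells left vs right (class and method names are mine, not from the post):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PositionalCodec {
    static class Node { char key; Node left, right; Node(char k) { key = k; } }

    // Number nodes as in a complete binary tree: root = 1,
    // left child = 2*pos, right child = 2*pos + 1. Only existing
    // nodes are emitted, so this stays O(n).
    static void serialize(Node n, int pos, List<int[]> out) {
        if (n == null) return;
        out.add(new int[]{pos, n.key});
        serialize(n.left, 2 * pos, out);
        serialize(n.right, 2 * pos + 1, out);
    }

    // Rebuild: a node at position pos hangs off position pos/2,
    // on the left if pos is even, on the right if pos is odd.
    static Node deserialize(List<int[]> pairs) {
        Map<Integer, Node> byPos = new HashMap<>();
        for (int[] p : pairs) byPos.put(p[0], new Node((char) p[1]));
        for (int[] p : pairs) {
            int pos = p[0];
            if (pos == 1) continue;  // the root has no parent
            Node parent = byPos.get(pos / 2);
            if (pos % 2 == 0) parent.left = byPos.get(pos);
            else parent.right = byPos.get(pos);
        }
        return byPos.get(1);
    }

    public static void main(String[] args) {
        // The example above: root a (#1), a's left b (#2), b's left c (#4)
        Node a = new Node('a');
        a.left = new Node('b');
        a.left.left = new Node('c');
        List<int[]> pairs = new ArrayList<>();
        serialize(a, 1, pairs);
        Node r = deserialize(pairs);
        System.out.println("" + r.key + r.left.key + r.left.left.key);  // abc
    }
}
```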
To be precise:
- kg, March 16, 2008
1000*p = 1000*(3p^2 - 2p^3) is a cubic equation, not quadratic.
When you transform it to the quadratic 1 = 3p - 2p^2, you are dividing by p and assuming p != 0, which is incorrect.
It should be calculated as:
p = 3p^2 - 2p^3 //dividing by 1000
=> p(2p^2 - 3p + 1) = 0
=> either p=0 or 2p^2 - 3p + 1 = 0
Thus three solutions: p = 0, 1/2, 1.
The reason I pointed this out:
for p=0, winning chance of game1 = winning chance of game2
for 0 < p < 0.5, winning chance of game1 > winning chance of game2
for p=0.5, winning chance of game1 = winning chance of game2
for 0.5 < p < 1, winning chance of game1 < winning chance of game2
for p=1.0, winning chance of game1 = winning chance of game2