oOZz
I first thought this was the classic two egg puzzle (classic-puzzles.blogspot.com/2006/12/google-interview-puzzle-2-egg-problem.html), but it turns out this one is slightly different, so I am changing the solution:
Drop the first egg at every other floor starting at the 2nd, i.e., 2, 4, 6, 8, ...
If it breaks at the ith floor, drop the second egg at the (i-1)th floor. If the second egg also breaks, the first egg is the tougher one; otherwise the second is.
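A minimal sketch of this strategy, assuming each egg is modeled by a hypothetical breaking threshold (the lowest floor at which it breaks) and the two thresholds differ, as the puzzle implies:

```cpp
#include <cassert>

// Returns true if an egg with the given (hypothetical) breaking
// threshold breaks when dropped from `floor`.
static bool breaks(int threshold, int floor) { return floor >= threshold; }

// The strategy above: drop egg 1 at floors 2, 4, 6, ... until it breaks
// at some floor i, then drop egg 2 at floor i-1.
// Returns 1 if egg 1 is tougher, 2 if egg 2 is tougher.
int tougherEgg(int t1, int t2) {
    int i = 2;
    while (!breaks(t1, i)) i += 2;     // egg 1 first breaks at floor i
    return breaks(t2, i - 1) ? 1 : 2;  // egg 2's drop at i-1 decides
}
```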
Then there are two cases, right:
// if you're the left child, then return the parent's right child
if (node == node->parent->left) return node->parent->right;
// if you're the right child, then return parent->parent->right->left
if (node == node->parent->right) return node->parent->parent->right->left;
Of course, you need to check for null, e.g., check node->parent before accessing node->parent->left, etc.
Newton's method is a good way of finding the square root of integers. I have a couple of questions though:
Why do you pick x0 = 1? Aren't you supposed to choose x0 = n as your initial guess?
What is the significance of choosing 0.00000000000009 as the precision?
What is the complexity of this algorithm?
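For reference, a minimal sketch of the Newton iteration being discussed; the starting guess x0 and the tolerance here are arbitrary illustrative choices, not values from the original post:

```cpp
#include <cassert>
#include <cmath>

// Newton's method for sqrt(n): repeatedly refine x via x = (x + n/x) / 2.
// Any positive starting guess converges; eps controls when to stop.
double newtonSqrt(double n, double x0 = 1.0, double eps = 1e-12) {
    double x = x0;
    while (std::fabs(x * x - n) > eps)
        x = (x + n / x) / 2.0;
    return x;
}
```

Starting from x0 = n works as well; near the root the iteration converges quadratically, which is why very tight tolerances like the one quoted above are still reached in a handful of steps.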
This question has been asked many times before. I've already reported it as a duplicate.
1. Create a DAG from the letters by comparing each word with the next. Since the word list is sorted, comparing the letters position by position tells you which letter comes before the other.
2. A topological sort of the DAG will give you the order of the letters in the alphabet.
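A sketch of these two steps, assuming the input is a word list sorted by the unknown alphabet (the example words in the test are made up for illustration):

```cpp
#include <cassert>
#include <map>
#include <queue>
#include <set>
#include <string>
#include <vector>

// Step 1: build edges from the first differing character of each
// adjacent pair of words. Step 2: Kahn's topological sort.
std::string alphabetOrder(const std::vector<std::string>& words) {
    std::map<char, std::set<char>> adj;
    std::map<char, int> indeg;
    for (const auto& w : words)
        for (char c : w) indeg.emplace(c, 0);  // register every letter
    for (size_t i = 0; i + 1 < words.size(); ++i) {
        const std::string &a = words[i], &b = words[i + 1];
        size_t j = 0;
        while (j < a.size() && j < b.size() && a[j] == b[j]) ++j;
        if (j < a.size() && j < b.size() && adj[a[j]].insert(b[j]).second)
            ++indeg[b[j]];  // edge a[j] -> b[j]: a[j] comes first
    }
    std::queue<char> q;
    for (auto& [c, d] : indeg) if (d == 0) q.push(c);
    std::string order;
    while (!q.empty()) {
        char c = q.front(); q.pop();
        order += c;
        for (char nxt : adj[c]) if (--indeg[nxt] == 0) q.push(nxt);
    }
    return order;  // shorter than #letters => inconsistent input
}
```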
My solution above finds "the" element such that A[i] = i. If you're trying to find all of the elements that satisfy A[i] = i, which I think is what you're trying to do, then you can't do it in O(log n). You might as well just write a single loop and check the elements sequentially; why bother with the middle point?!
By the way, for your bug 1, I meant the left of the array. So for the array [-1, 1, 2, 4], your program returns [-1, 1, 2], but the solution is [1, 2].
@DashDash, you're actually right. Now I've read the question carefully. The question says find the elements (all the elements!) that are equal to their index.
In the worst case, if you're given an array where every element value already equals its index position, {0, 1, 2, 3, 4, 5, 6, ...}, then the solution is O(n).
My solution was for finding "the" element such that A[i] = i.
@Up. First, take a deep breath :)
There are two bugs in your solution:
1. The program doesn't check the left of the array when arr[mid] == mid. Run your program against [-1, 1, 2, 4]. The program finds arr[1] = 1, then checks arr[2] == 2, and then returns [-1, 1, 2].
2. The program will never check the right of the middle (part larger than arr[mid]). Run your program against [-4, -2, 0, 3, 10]. It will return None, but arr[3] = 3.
Then you can also provide the reverse routine :)
static void reverse(int *arr, int start, int end) {
    if (arr == 0) return;
    while (start < end) {
        int tmp = arr[start];
        arr[start++] = arr[end];
        arr[end--] = tmp;
    }
}
Look, the point here is that this is an elegant algorithm, which I learned from Programming Pearls and wanted to share. You can google and find even more solutions. For instance, I could even write another C/C++ solution with memmove and memcpy that rotates any array in O(n) time with some extra storage.
- oOZz June 23, 2013

There is an O(log n) solution using binary search to eliminate half of the candidates at each iteration. Let's say you're looking for A[i] = i. For any i > 0, A[i] >= A[i-1] + 1, because the array is sorted in increasing order. Now imagine you've created an array A' such that each element is A'[i] = A[i] - i; then A' is also sorted in increasing order. Therefore, you can do a binary search for 0 in A' to find an index such that A[i] = i. By the way, we don't need to actually create A'; just assume that A'[i] = A[i] - i.
Here is the code:
int findIndexEqualsValue(const vector<int> &A) {
    int l = 0, r = (int)A.size() - 1;
    while (l <= r) {
        int m = l + (r - l) / 2;  // avoids overflow of l + r
        int val = A[m] - m;
        if (val == 0) {
            return m;
        } else if (val > 0) {
            r = m - 1;
        } else {  // val < 0
            l = m + 1;
        }
    }
    return -1;
}
This code runs in O(log n) time.
- oOZz June 23, 2013

@slbb You're still on the wrong page. This problem has nothing to do with subsets, because sets can only have unique elements and the order of elements in a set is not important, but in this problem the order matters. I think you're getting confused because the subsets are contained in the permutations as well. Secondly, the duplicates are much more than 5. In permutations we're trying to avoid counting a word twice when its duplicate letters are swapped.
@Up AFAIK, you need to solve it case by case. For example,
P(9,1) = 6, this one is obvious, because you only have 6 distinct letters.
P(9,2) = two letter words
Case 1:
All Es only 1 way: EE
Case 2:
All Is only 1 way: II
Case 3:
All Fs is only 1 way: FF
Case 4:
All different two letter words is P(6,2) = 6*5 = 30, because 6 distinct letters
Then P(9, 2) = 1+1+1+30=33
P(9,3) = three letter words
Case 1:
two Es and one of C,F,I,N,T
5*(3!/2!)= 15
Case 2:
Two Is and one of C,E,F,N,T
5*(3!/2!)= 15
Case 3:
Two Fs and one of C,E,I,N,T
5*(3!/2!)= 15
Case 4:
All different three letter words is P(6,3) = 6*5*4 = 120, because 6 distinct letters
Then P(9, 3) = 15+15+15+120=165
...
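The case analysis above can be double-checked by brute force. A sketch that counts distinct k-letter words, assuming the 9 tiles are the letters of "EFFICIENT" (2 Es, 2 Fs, 2 Is, and one each of C, N, T), which matches the cases above:

```cpp
#include <cassert>
#include <map>

// Counts distinct k-letter arrangements of a multiset of letters.
// At each position we try every letter with a remaining copy; identical
// letters are merged in the map, so no word is counted twice.
long long countWords(std::map<char, int>& avail, int k) {
    if (k == 0) return 1;
    long long total = 0;
    for (auto& entry : avail) {
        int& count = entry.second;
        if (count == 0) continue;
        --count;                           // use one copy of this letter
        total += countWords(avail, k - 1);
        ++count;                           // put it back
    }
    return total;
}

long long efficientWords(int k) {
    std::map<char, int> avail = {{'E', 2}, {'F', 2}, {'I', 2},
                                 {'C', 1}, {'N', 1}, {'T', 1}};
    return countWords(avail, k);
}
```

This reproduces the hand counts: 6 one-letter words, 33 two-letter words, and 165 three-letter words.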
0. Let's say your original array is A = {1, 2, 3, 4, 5, 6, 7, 8} and you wanna rotate by k = 2
1. Reverse the whole array => A' = {8, 7, 6, 5, 4, 3, 2, 1}
2. Reverse the first k elements of A' => A' = {7, 8, 6, 5, 4, 3, 2, 1}
3. Reverse the remaining elements of A' starting from the kth element => A' = {7, 8, 1, 2, 3, 4, 5, 6}
4. return A'
The reverse operation is O(1) space, so this program is O(n) time and O(1) space
Here is the code in C++
template <typename T>
void rotate_array(vector<T>& A, int k) {
    if (A.empty()) return;  // guard against modulo by zero
    k %= A.size();
    std::reverse(A.begin(), A.end());
    std::reverse(A.begin(), A.begin() + k);
    std::reverse(A.begin() + k, A.end());
}
This algorithm is from Programming Pearls!
- oOZz June 22, 2013

This question is not clear. There are so many factors that may change the solution.
It's not even interesting to search for a number in the streaming case. For instance, if you can read as fast as the numbers are streaming in, then you don't even need a data structure to search for the number; you can just compare as the numbers come in.
@Up you're right, it's not clear. If words can have any number of characters, then you have to calculate all the 1- to 9-letter word permutations and add them up.
If P(n, k) is the number of k-permutations of a set of n elements, then you need to calculate P(9,9) + P(9,8) + ... + P(9,1). Of course, you also need to take the duplicates into consideration, which makes the math more complicated.
But then again it's not clear, for instance, if single letter words are valid, etc.
Unless there is another bitwise trick, you can use a union:
union uintbyte {
    unsigned int val;
    unsigned char bytes[4];  // unsigned, so the per-byte sums don't sign-extend
};

unsigned int computeByteSum(unsigned int A, unsigned int B) {
    uintbyte bA, bB, bC;
    bA.val = A;
    bB.val = B;
    bC.bytes[0] = (bA.bytes[0] + bB.bytes[0]) >> 1;
    bC.bytes[1] = (bA.bytes[1] + bB.bytes[1]) >> 1;
    bC.bytes[2] = (bA.bytes[2] + bB.bytes[2]) >> 1;
    bC.bytes[3] = (bA.bytes[3] + bB.bytes[3]) >> 1;
    return bC.val;
}
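There is, in fact, a bitwise trick that avoids the union: the SWAR average identity avg(a, b) = (a & b) + ((a ^ b) >> 1), applied per byte by masking the shifted bits so nothing bleeds across a byte boundary. A sketch (the function name is my own):

```cpp
#include <cassert>
#include <cstdint>

// Per-byte floor average of two 32-bit words without a union.
// (a & b) keeps the bits both operands share; ((a ^ b) >> 1) halves the
// differing bits; the 0x7F mask drops bits shifted in from the next byte.
uint32_t byteAverageSWAR(uint32_t a, uint32_t b) {
    return (a & b) + (((a ^ b) >> 1) & 0x7F7F7F7Fu);
}
```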
How would you tell the randomized partition subroutine to partition the set around the (k-1)st element?
You don't tell the randomized partition subroutine anything. It partitions the array around a pivot element A[p] and returns p. Therefore, you need to call it until p = n-k (not p = k-1; I corrected it above.)
static void quick_select(int[] array, int start, int end, int k) {
    int ktop = array.length - k;  // index of the first of the top k elements
    while (true) {
        int p = rnd_partition(array, start, end);
        if (ktop < p) {
            end = p - 1;
        } else if (ktop > p) {
            start = p + 1;
        } else {
            // at this point p == array.length - k is true,
            // so the top k elements are array[p..array.length)
            break;
        }
    }
}

static int rnd_partition(int[] array, int start, int end) {
    int low = start - 1;
    int high = end;
    // randint(start, end) returns a random int between start and end
    swap(array, randint(start, end), end);
    int pivot = array[end];
    while (true) {
        while (array[++low] < pivot);
        while (array[--high] > pivot && high > low);
        if (low >= high) break;
        swap(array, low, high);
    }
    swap(array, low, end);  // move the pivot into its final position
    return low;             // the pivot's index
}
There are only two good solutions for this problem.
1. Using a hashtable, as @zbesst and @coding.arya suggested, which is O(n) time and space
2. Sorting, which is O(n lg n) time and O(1) space.
I saw someone -1'd the posts of @zbesst and @coding.arya. When people do that, they should at least have the courtesy to write the reason. I'll +1 you guys.
This is a variation of the selection problem (see en.wikipedia.org/wiki/Selection_algorithm). Basically, you need to use the partition subroutine from quicksort. Since this is a common problem, there are even utilities in certain languages for this operation, e.g., std::partition in C++.
Partition the array until p = n-k, where p is the pivot index returned by the partition function. This reorders the array in such a way that all the elements before index p (the pivot) are less than A[p] and all elements after p are greater. At this point the reordered array A' will have all the "top" k numbers on the right side of the pivot.
"Expected" time complexity is O(n) for randomized partition subroutine. O(1) space.
@Epic_coder. O(n) is obviously the time complexity of the function. As you can see, I am passing the matrix and the map as parameters to the function.
However, if you really wanna be meticulous about it, you can treat the matrix as 1-D and use one of these tricks:
1. eli.thegreenplace.net/2008/08/23/initializing-an-array-in-constant-time/
2. stackoverflow.com/questions/10797284/how-to-zero-out-array-in-o1
1. Create an n×n matrix and initialize it to -1
2. Normalize the key values, e.g., if the inorder traversal of the nodes is 377, 11, 5, 1055, 3, 8, ..., then map them to integers starting from 0 up to n: (377, 0), (11, 1), (5, 2), (1055, 3), (3, 4), (8, 5), .... You can map the tree nodes instead of the keys to integers too, i.e., instead of HashMap<Integer, Integer> you can use HashMap<TreeNode, Integer>.
3. Traverse the tree and set the matrix value for the parent
Note that, you can use any tree traversals, inorder, preorder, or postorder, for step 2 and 3.
Here is the recursive preorder implementation:
// initial call
ancestorMatrix(root, null, m, map);

public static void ancestorMatrix(TreeNode node, TreeNode parent, int[][] m, Map<Integer, Integer> map) {
    if (node == null) return;
    if (parent != null) m[map.get(node.data)][map.get(parent.data)] = parent.data;
    ancestorMatrix(node.left, node, m, map);
    ancestorMatrix(node.right, node, m, map);
}
The time complexity is O(n).
- oOZz June 17, 2013

@arturo, @Andrei's solution is fine; it just needs a little tweak in the delete operation. The inserts go to the end of the vector, which is O(1). However, for deletes, instead of deleting from the ith position (which is O(n)), he needs to copy the last element over to the ith position and delete the last element, because deleting from the end of a vector is O(1).
Please note that these operations are "amortized O(1)", which means that when you consider the entire sequence of operations of the program, in the worst case growing and shrinking the array cannot occur again and again for a long time, thus "amortizing" its cost. (see en.wikipedia.org/wiki/Amortized_analysis)
@AnonymousMe, you can compare both trees using inorder traversal:
1. Pass both root nodes to the function and traverse them at the same time, comparing the nodes, just like @bunnybare mentioned above. (recommended)
2. Alternatively, you can traverse the trees one by one and save the null children as nodes too. e.g., if this is your tree
3
/ \
1 2
/
4
Your in-order will be: null, 4, null, 1, null, 3, null, 2, null
Then compare these two lists.
@Dumbo You do realize that in the worst case, inserting all the elements into a BST will be O((n+m)^2), which is the case where the arrays are sorted. I think you meant a balanced tree such as an AVL or Red-Black tree. Then this will be O((n+m) log (n+m)), with space complexity O(n+m).
Using one hashtable or two does not change the big-O space complexity, which is still O(n), because asymptotically O(2n) = O(n). Therefore, a balanced tree solution has the same space, but worse time complexity than the hashtable solution that I suggested.
I've also been thinking about the first solution that I suggested. It's possible to do it with one hashtable for both union and intersection. I'll update the original solution above, based on your suggestion to use one hashtable:
1. Insert the elements of both arrays into the hashtable and keep their counts
2. For the union, iterate the hashtable and print the keys
For the intersection, iterate the hashtable and print the keys that have count >= 2
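A sketch of this one-hashtable approach, assuming each input array is duplicate-free (otherwise the counts would need to be capped at one per array):

```cpp
#include <unordered_map>
#include <vector>

// One pass to count occurrences across both arrays (each assumed
// duplicate-free), then one pass over the table for union/intersection.
void unionAndIntersection(const std::vector<int>& a, const std::vector<int>& b,
                          std::vector<int>& uni, std::vector<int>& inter) {
    std::unordered_map<int, int> counts;
    for (int x : a) ++counts[x];
    for (int x : b) ++counts[x];
    for (const auto& kv : counts) {
        uni.push_back(kv.first);                     // every key is in the union
        if (kv.second >= 2) inter.push_back(kv.first);  // seen in both arrays
    }
}
```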
@arturo You have a good point, +1. I think Andrei is using an STL vector, because he mentioned push_back(). Insertion or removal of elements in a std::vector is linear in the distance to the end of the vector, i.e., O(n) if you're not inserting/removing at the end, which is what your remove function does (see en.cppreference.com/w/cpp/container/vector for the complexity of operations).
I think a better choice would be a simple array with dynamic resizing and shrinking. You can use a load factor (e.g., 3/4) to determine when to grow or shrink the array, and the complexity will be _amortized_ O(1).
@Subhajit I was thinking of two different approaches. One of them is external sorting, and it is easy to understand (see the wiki for details). This is a good solution if you read the entire streams into a file/disk first and then merge them. What if, though, you're doing the external sorting and meanwhile you still have numbers coming in from the input streams non-stop?
The other approach is sorting the numbers as they come in from multiple input streams, possibly asynchronously. For this, you need a way to sort the numbers as they arrive at different speeds. A heap is a perfect data structure for sorting the elements of asynchronous streams. A single insert operation is O(log n), and you can build the heap in O(n log n) time from multiple streams for n numbers.
Design a stream data structure (interface) that is "iterable" by numbers for your input streams, with, let's say, hasNext() and next() method implementations. In Java, for instance, this is like implementing the Iterable interface:
public interface MyStreamInterface extends Iterable<Integer> {}
All of your input stream classes will implement this interface. Then you can have two threads sharing a min-heap, one for reading from the streams and inserting them into the min-heap (Producer) and the other one writing from the min-heap to a file (Consumer):
Producer:
for (MyStreamInterface stream : streams) {
    if (stream.hasNext()) {
        minHeap.insert(stream.next());
    }
}
Consumer:
while (!minHeap.isEmpty()) {
    file.writeInt(minHeap.extractMin());
}
Of course, writing to and reading from the shared min-heap has to be synchronized as in Producer-Consumer problem.
- oOZz June 17, 2013

@emalaj23 First of all, the power set of the tiles has 2^7 elements, but that's not what you're describing. You're describing the permutations of the letters. Let's say for the alphabet a, b, c the possible permutations are:
a
b
c
ab
ac
ba
bc
ca
cb
abc
acb
bac
bca
cab
cba
which is 15, not 2^3 = 8. A power set is a set of subsets, and order is not important. In this problem, each different order of the letters makes a different word.
For 7 tiles with all distinct letters, the count will be:
P(7,7) + P(7,6) + ... + P(7,1) = 7!/0! + 7!/1! + 7!/2! + 7!/3! + 7!/4! + 7!/5! + 7!/6!
5040 + 5040 + 2520 + 840 + 210 + 42 + 7 = 13699
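These sums can be sketched in a few lines, using P(n, k) = P(n, k-1) * (n-k+1):

```cpp
#include <cassert>

// Sum of k-permutation counts P(n, k) = n!/(n-k)! for k = 1..n,
// i.e., the number of non-empty ordered arrangements of n distinct tiles.
long long totalArrangements(int n) {
    long long total = 0, perms = 1;
    for (int k = 1; k <= n; ++k) {
        perms *= (n - k + 1);  // P(n, k) = P(n, k-1) * (n-k+1)
        total += perms;
    }
    return total;
}
```

For n = 7 this gives the 13699 computed above; for n = 3 it gives the 15 arrangements listed for a, b, c.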
I suggest you implement your algorithm and test it first, then you can see the complexity better.
One egg is tougher than the other. That's implied in the question, because the question asks which one is tougher.
- oOZz July 02, 2013

Therefore, they can't break at the same floor, because that would mean they are equally tough and there would be no solution.