Loler
Questions deleted by OP?
I have noticed that a couple of questions have been deleted. (One was a very new maximum product sub-array question, and one was a relatively new zig-zag of a 1-D array, which has a nice answer by showell).
- Loler March 10, 2013
It seems unlikely that the mods are deleting those, as those seemed like valid questions.
Is it possible that the OP deleted them after getting an answer? (Possibly homework and didn't want to get caught...).
Is there a way to get back the questions (and the answers)? Can we prevent such things from happening in the future? (E.g. don't allow deleting a question after 2-3 answers have been posted.)
If you knew Lucas' theorem, then you could solve this easily.
The number is nothing but
(3000000 choose (3000000 - 30)) * ((3000000 - 30) choose (3000000 - 60)) * ((3000000 - 60) choose (3000000 - 90)) * ...
Each binomial coefficient can be computed modulo the prime p using Lucas's theorem (which involves writing the numbers in base p).
If you don't know Lucas' theorem, perhaps you can try and do some clever cancellations to simplify it...
@aka: Yes something like that.
@eugene: Oops. Well played by LAP I suppose :-), and looks like it might be too late to delete now.
@frager/oOZz: It is O(n). Just store the intermediate results of Kadane's algorithm... (Kadane's is what I mean by "standard dynamic programming algorithm".)
@oOZz: During an interview they don't expect perfect code that will compile at the get-go. If you want to use some library method, you can just assume it. I would say it is easier to write code in an interview than with a compiler: you won't get bogged down by useless details (like whether strcpy takes dst first or src) and can concentrate on the parts that matter.
@Anonymous: I don't see a problem asking DP questions. The point of the questions is not to see whether you get the answer or not, but to see how you tackle it. You will be judged relative to other candidates on the same question (at least in Google), so everyone is in the same boat.
Sorry if I missed responding to someone.
An O(n) solution is possible.
We consider all "split points", i.e. points such that one subarray lies to the left of it, and one to the right, and compute the best for each possible split point.
Dynamic programming works.
Given Array A[1,...n]
Using the standard dynamic programming algorithm, we can compute for a given i, the maximum and minimum sum subarrays in A[1...i] and A[i+1 ... n]. Note that the point between i and i+1 is a split point.
This can be done by making two passes, one from 1 to n and the other from n to 1, giving us four arrays with the max and min sub-array sums.
Now, given the above four arrays, for the split point between i and i+1 we can take the max and min combinations (max from left with max from right, and min from left with min from right) and pick whichever gives the better result.
Once we have the max for each split point, we get the global optimum.
O(n) time, O(n) space.
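Since the thread's question seems to be a max-product variant, the steps above can be sketched as follows; the function name and the reading of the objective (maximize the product of the two subarray sums) are my assumptions:

```python
def best_split_product(A):
    # Prefix pass: maximum and minimum subarray sums within A[0..i] (Kadane).
    n = len(A)
    maxL, minL = [0] * n, [0] * n
    cur_max = cur_min = best_max = best_min = A[0]
    maxL[0], minL[0] = A[0], A[0]
    for i in range(1, n):
        cur_max = max(A[i], cur_max + A[i])
        cur_min = min(A[i], cur_min + A[i])
        best_max = max(best_max, cur_max)
        best_min = min(best_min, cur_min)
        maxL[i], minL[i] = best_max, best_min
    # Suffix pass: same quantities within A[i..n-1].
    maxR, minR = [0] * n, [0] * n
    cur_max = cur_min = best_max = best_min = A[n - 1]
    maxR[n - 1], minR[n - 1] = A[n - 1], A[n - 1]
    for i in range(n - 2, -1, -1):
        cur_max = max(A[i], cur_max + A[i])
        cur_min = min(A[i], cur_min + A[i])
        best_max = max(best_max, cur_max)
        best_min = min(best_min, cur_min)
        maxR[i], minR[i] = best_max, best_min
    # Combine at every split point between i and i+1.
    best = None
    for i in range(n - 1):
        cand = max(maxL[i] * maxR[i + 1], minL[i] * minR[i + 1])
        best = cand if best is None else max(best, cand)
    return best
```

The min-with-min combination matters because two negative subarray sums can multiply to a large positive product.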
Ok, I had misunderstood you earlier. This works, and guaranteed O(nlog n). Good job!
I believe I have a proof, brief attempt below:
Given a permutation P of 1,2,3,...,n, call the signature S_P of P the function f which maps {1,2,...,n} to {0,1,2,...,n-1} such that f(k) = next-smallest according to your definition (replacing None by 0).
The claim is that two permutations P1 (a1,a2, ..,an) and P2 (b1,b2, ...,bn) give rise to the same binary tree if and only if S_P1 is identical to S_P2.
Assume S_P1 = S_P2
First we show that the first elements of P1 and P2 must be the same. Suppose b_1 = a_j for some j > 1. Then since S_P2(a_j) = 0, we must have that a_1 > a_j. So we must have that S_P1(a_1) = 0, but S_P2(a_1) >= a_j, as a_1 appears after a_j in P2.
Now we can partition and show the same for the subtrees.
Didn't try proving the other way.
Given x (the number to be looked for) and array A with the +-1 property.
One algorithm you can do is.
Set guess_index = 0
Compute the difference D = |x - A[guess_index]|
if D == 0, then we have found it. Otherwise we can skip D-1 elements (as we know they can't be equal to x), so set guess_index += D and repeat.
Pseudo code
int find(int x, int[] A) {
    int guess = 0;
    int D = Math.abs(x - A[guess]);
    while (D > 0) {
        guess += D;
        if (guess >= A.length) break;
        D = Math.abs(x - A[guess]);
    }
    return D == 0 ? guess : -1;
}
Worst case this is O(n) time (each step advances guess by at least 1), though it can skip far ahead in practice.
- Loler June 04, 2013
Another way of looking at this.
You want the number of triples (i,j,k) such that n > i >= j >= k >= 0
Now if you consider the triple (i+2, j+1, k), then we have that n+1 >= i+2 > j+1 > k >= 0. Thus all we need to do is select 3 distinct numbers among 0,1,2,...,n+1, sort them, subtract 1 from the middle one and 2 from the largest to get our possible i,j,k.
Thus, the number is
n+2 choose 3.
Since temp starts out as 1, the answer is
1 + (n+2 choose 3)
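A quick brute-force check of the count (it comes out to n+2 choose 3; note that with n = 1 there is exactly one triple, matching C(3, 3) = 1):

```python
from math import comb

def count_triples(n):
    # Count triples (i, j, k) with n > i >= j >= k >= 0 directly.
    return sum(1 for i in range(n)
                 for j in range(i + 1)
                 for k in range(j + 1))
```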
@aka: It is the same, but you seem to have skipped the implementation details of how to maintain the level information etc, which is not hard, but also not trivial. Also, I wanted to mention the drawbacks of DFS, and added an implementation.
btw, if one chooses to use BFS, the solutions will all be quite similar. Don't you think? :-)
I had to modify your code to run it (probably Java version assumptions). I gave it a test input where I thought it might fail, but it didn't!
So, now I am not sure what your algorithm is.
What exactly is next-smallest? Can you please define it and give a couple of examples?
Thanks.
EDIT: This is a failing test case
// Comment to prevent formatting problems.
int [] arr1 = {8, 7, 9};
int [] arr2 = {8, 10, 7};
System.out.println(willMakeSame(arr1, arr2));
prints true, instead of false.
- Loler June 04, 2013
BFS (breadth first search) is likely the way to go (unless utterly space constrained). DFS (depth first search) could be inefficient. Consider a tree with a million nodes in the left sub-tree, but just one in the right. If you traverse the left subtree first, you will waste a lot of time.
At each level you can count the number of leaves and bail out as soon as you can tell the number isn't going to equal the number of nodes at that level.
Something like this (haven't tried verifying it etc)
bool LeavesAtSameLevel(Tree<T> tree) {
    if (tree == null || tree.IsLeaf()) return true;
    Queue<Tree<T>> queue;
    queue.push(tree);
    uint expected = 1; // How many nodes we expect at the current level.
    uint seen = 0;     // How many nodes we have seen at the current level.
    uint queued = 0;   // How many we have queued for the next level.
    uint leaves = 0;   // How many leaves we have seen at the current level.
    while (queue.hasElements()) {
        Tree<T> cur = queue.pull();
        seen++;
        if (cur.Left) {
            queue.push(cur.Left);
            queued++;
        }
        if (cur.Right) {
            queue.push(cur.Right);
            queued++;
        }
        if (cur.IsLeaf()) {
            leaves++;
        }
        // Seen some leaves, but not as many as the total nodes seen.
        if (leaves > 0 && seen != leaves) {
            return false;
        }
        if (seen == expected) {
            // Finished traversing one level.
            expected = queued;
            seen = queued = 0;
        }
    }
    return true;
}
@Bharath: The catch is that you need a stable partition: The relative ordering of elements must not change. Once you do that, you can do recursively, by comparing the first elements of the two partitions you get. You don't have to do any other preprocessing.
If you actually just take the same element as pivot without the stability requirement (as you seem to suggest), then you will get wrong answers. Counterexample:
1 2 3
1 3 2
Assumption: All numbers are distinct (otherwise, you need to tell us about the BST insertion algorithm).
One solution (but essentially same as the tree compare solution in spirit): you can do something like quicksort:
Given Arrays A and B, check if A[0] = B[0] (if not, return false).
Now construct A_more, A_less and B_more, B_less, where A_more contains the elements of A which are > A[0] (in the same order as they appear in A). This is basically the partition step in quicksort. Note that you need the partition to be stable. It is possible to do that in-place, but it is very complex.
Now, recursively compare A_more, B_more and compare A_less and B_less. You can add optimizations to compare lengths of arrays to bail out quicker.
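A quick sketch of that recursion in Python (not in-place; `same_bst` is my name, and distinct elements are assumed, per the assumption above):

```python
def same_bst(A, B):
    # Do the two insertion orders produce the same BST?
    if len(A) != len(B):
        return False
    if not A:
        return True
    if A[0] != B[0]:
        return False  # different roots
    root = A[0]
    # Stable partition: relative order within each side is preserved.
    A_less = [x for x in A[1:] if x < root]
    A_more = [x for x in A[1:] if x > root]
    B_less = [x for x in B[1:] if x < root]
    B_more = [x for x in B[1:] if x > root]
    return same_bst(A_less, B_less) and same_bst(A_more, B_more)
```

On the counterexample above, [1, 2, 3] and [1, 3, 2] correctly compare as different trees.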
@Arun, you don't seem to have read the whole sentence. I repeat here:
r choose 0, r choose 1, r choose 2, ... r choose r, can be computed in O(r) time, by computing r choose t in O(1) time after already having computed r choose (t-1)
Note the part about computing r choose t in O(1) time _after_ having computed r choose (t-1).
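As a sketch, the incremental step uses C(r, t) = C(r, t-1) * (r - t + 1) / t; the function name is mine:

```python
def binomial_row(r):
    # C(r, 0), C(r, 1), ..., C(r, r) in O(r) total time:
    # each entry is derived from the previous one in O(1).
    row = [1]
    for t in range(1, r + 1):
        row.append(row[-1] * (r - t + 1) // t)
    return row
```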
@try: Here is an explanation using your sample. [-1 1 -1 3 -1]
We compute the S_i
[-1 0 -1 2 1]
The two properties which I mention in the answer together say that k+1 is a starting point.
Property 1 says that we can get to any station after the starting station (but before the n-th station).
Property 1 is checked starting from the right:
[1 2 -1 0 -1]
1 is min seen so far (in [])? True.
2 is min seen so far (in [1,2])? False.
-1 is min seen so far (in [1 2 -1])? True
0 ? False
-1 ? True.
So we get
[True, False, True, False, True] (note we need to reverse this)
Property 2 states that we can get to any station before the starting station.
[-1 0 -1 2 1], S_n = 1
For -1, is S_k - S_n = -1 - 1 <= -1? True
For 0, is S_k - S_n = 0 - 1 <= -1? True.
For -1, is S_k - S_n = -1 -1 <= -1? True.
For 2, 2 - 1 <= -1? False.
For 1 False.
So we get
[True, True, True, False, False]
Take the intersection, and we get the result.
This is because you are doing a `using namespace std`, which makes the call resolve to std::swap rather than the swap you defined.
Try this:
#include <iostream>
#include <stdlib.h>

using std::cout;
using std::endl;

struct node
{
    int a;
    int b;
};
typedef struct node Node;

void swap(void *a, void *b)
{
    void *temp;
    temp = a;
    a = b;
    b = temp;
}

int main()
{
    Node *a1, *b1;
    a1 = (Node*)malloc(sizeof(Node));
    b1 = (Node*)malloc(sizeof(Node));
    a1->a = 10;
    a1->b = 20;
    b1->a = 30;
    b1->b = 40;
    cout << a1->a << " " << a1->b << endl;
    cout << b1->a << " " << b1->b << endl;
    cout << a1 << " " << b1 << endl;
    swap(a1, b1);
    cout << a1 << " " << b1 << endl;
    cout << a1->a << " " << a1->b << endl;
    cout << b1->a << " " << b1->b;
}
Glad you liked it. It does seem a little complicated and perhaps you are right, there might be something simpler. One simplification we can do is to pick the locations of the minimums (in S_k). Those and (some of) the last segment (to the right of the last minimum) will contain the possible starting points.
- Loler March 28, 2013
@showell: Perhaps I am missing something.
I agree this seems to work for a single point, but how does this algorithm enumerate all the starting points?
Once you have gotten a possible starting point (declare victory) how will it find the next one? How do you eliminate the next starting point in O(1) time?
Seems like a positive surplus does not allow you to either eliminate or declare victory for the next starting city.
Say the differences were 100, 50, 100, -150.
From the first city, you do 100 + 50 + 100 - 150 and are back to starting point with 100 in the tank.
Note that you can start at the second station and finish the trip for this set of differences.
But if the differences were 100, 49, 100, -150, 1,
The trip for the starting point still finishes with 100 in the tank, but you cannot finish the trip starting at the second station (1 liter short).
I don't see the algorithm distinguishing between 49 and 50 case.
@PKT: The multipliers 1 -2 1, 1 -3 3 -1, 1 -4 6 -4 1, 1 -5 10 -10 5 -1 are all rows of Pascal's triangle (with minus signs alternating), and are binomial coefficients as I mentioned in Observation 2.
What part are you having difficulty with? (if any...)
@jay: It is only _expected_ linear. Theoretically it is worst case (emphasis on worst case) quadratic.
Note that, I am not at all saying that we should reject the hash based solution for that reason. In fact, this is a very practical solution!
Just mentioning a possible reason why people are not upvoting :-)
We need to find _all_ possible starting points, which makes this an interesting problem.
An O(n) time algorithm is possible.
Assume we are going clockwise starting with an empty tank, and the petrol stations are numbered 1 to n.
Let p_i be the amount that will be in the car when an (empty) car fills up at station i and goes to station i+1 (i.e. petrol available at station i minus the distance between i and i+1). Note: We allow p_i to be negative.
Let S_i = p_1 + p_2 + ... + p_i
If S_n < 0, there are no starting points, as we don't have enough petrol to complete the circle. So, assume S_n >= 0.
Now station k+1 is a valid starting point if the following holds true:
1) For every n >= j >= k+1 we have that S_j - S_k >= 0
2) For every 1 <= j <= k, we have that S_j - (S_k - S_n) >= 0
In O(n) time, we can find all k with property 1, by traversing the S[1,..n] array right to left and keeping track of the minimum seen so far, and checking if the minimum seen so far is >= current element.
A similar algorithm will work for finding all k with property 2.
Take the intersection of the k we found for 1 and the k we found for 2.
This is some python to demonstrate
def starting_points(p, d):
    n = len(p)
    S, count = [0] * n, 0
    s = 0
    while count < n:
        s += p[count] - d[count]
        S[count] = s
        count += 1
    # No possible starting points.
    if S[n - 1] < 0:
        return
    # Find k with property 1.
    prop1 = [False] * n
    k, min_so_far = n - 1, S[n - 1]
    while k >= 0:
        if S[k] <= min_so_far:
            prop1[k] = True
        min_so_far = min(S[k], min_so_far)
        k = k - 1
    # Find k with property 2.
    prop2 = [False] * n
    k, min_so_far = 0, S[0]
    while k < n:
        if (S[k] - S[n - 1]) <= min_so_far:
            prop2[k] = True
        min_so_far = min(S[k], min_so_far)
        k = k + 1
    # Starting points.
    k = 0
    while k < n:
        if prop1[k] and prop2[k]:
            yield (k + 1) % n
        k += 1

def main():
    prices = [3, 4, 5]
    distances = [7, 1, 1]
    for start in starting_points(prices, distances):
        print(start)

if __name__ == "__main__":
    main()
This approach is good, but two comments.
1) You are better off checking before inserting into the hashmap, so you can break early if needed; only one pass is needed. Also, you won't have the problem of dealing with the case where you add the same element to itself to reach the target (e.g. the array is [2] and you are looking for 4).
2) This is worst case quadratic (bad input for hash function).
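For illustration, point 1 might look like this in Python (checking before inserting; `two_sum` is my name):

```python
def two_sum(arr, target):
    # Check for the complement *before* inserting the current element,
    # so a single element is never paired with itself ([2] with target 4).
    seen = set()
    for x in arr:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```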
You tried to use the code formatting {, but forgot one.
Here is the formatted code. I ran it through an online formatter.
public void findStockRange(int[] input)
{
    int tempMinPos = 0, tempMaxPos = 0;
    int minPos = 0, maxPos = 0;
    int diff = Integer.MIN_VALUE;
    for (int i = 1; i < input.length; i++)
    {
        if (input[tempMinPos] > input[i])
        {
            tempMinPos = i;
        } else if (input[tempMaxPos] < input[i] && i > tempMinPos)
        {
            tempMaxPos = i;
            if ((input[tempMaxPos] - input[tempMinPos]) > diff)
            {
                diff = input[tempMaxPos] - input[tempMinPos];
                minPos = tempMinPos;
                maxPos = tempMaxPos;
            }
        }
    }
    System.out.println("Buy: " + minPos);
    System.out.println("Sell: " + maxPos);
    System.out.println("Profit: " + diff);
}
If the array is of size N, and we are required to do k (>= 1) iterations, then it is possible to do this in O(k) time (yes, independent of N!), using the following observations:
Observation 1)
If at some intermediate step the array was x_1, x_2, ..., x_M and we then performed one iteration on it, to get x_2 - x_1, ..., x_M - x_{M-1}, the sum of the elements is x_M - x_1.
Thus if you want to find the sum after k iterations, you need to find the first and last element after k-1 iterations and subtract.
Observation 2)
After r iterations, x_1, x_2, ..., x_M, the first element becomes
x_{r+1} - r x_r + (r choose 2) x_{r-1} - (r choose 3) x_{r-2} + ...
A similar formula can be given for the last element.
(Yes, those are binomial coefficients).
Observation 3)
r choose 0, r choose 1, r choose 2, ... r choose r, can be computed in O(r) time, by computing r choose t in O(1) time after already having computed r choose (t-1).
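Putting the three observations together, here is a sketch in Python (names are mine; it assumes 1 <= k < len(x), so the array after k-1 iterations is non-empty):

```python
def sum_after_k_iterations(x, k):
    # Sum after k iterations = (last element) - (first element)
    # after k-1 iterations (Observation 1).
    r = k - 1
    # Binomial coefficients C(r, 0..r), each in O(1) from the
    # previous one (Observation 3).
    c = [1] * (r + 1)
    for t in range(1, r + 1):
        c[t] = c[t - 1] * (r - t + 1) // t
    M = len(x)
    # First/last element after r iterations via the alternating
    # binomial formula (Observation 2).
    first = sum((-1) ** t * c[t] * x[r - t] for t in range(r + 1))
    last = sum((-1) ** t * c[t] * x[M - 1 - t] for t in range(r + 1))
    return last - first
```

For x = [1, 4, 9, 16]: one iteration gives [3, 5, 7] (sum 15), two give [2, 2] (sum 4), three give [0] (sum 0), matching the formula.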
@alex: I don't understand your question. Can you please elaborate?
btw, I ran the above code on your input and the algorithm gives a value of (8,8) for total_count and max_len and that matches the brute force value. (Of course, both the methods could have bugs...)
This seems essentially correct, apart from the fact that you are not handling the negative case correctly.
@Anonymous, it is not doubling, but tripling!
In case there are many ways to measure the same weight, this is more efficient than brute force enumeration: O(nL), where L is the total weight (the sum of all the weights).
In case the measurements are unique, this is possibly slightly worse than the brute force. Theta(3^n) vs Omega(n 3^n) (if my calculations are correct).
+1.
Yes, with n weights, Theta(3^n) is the worst case (I edited my answer to mention that).
btw, why talk about NP-Completeness (or more accurately NP-hardness as it is not a decision problem)? Why do you think it is even relevant?
(Checking if 0 is possible using at least one weight, is NP-Complete and same as subset sum though).
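For reference, the brute-force 3^n enumeration being compared against might look like this (my sketch: each weight goes on the object's pan, the opposite pan, or stays off the balance):

```python
from itertools import product

def measurable(weights):
    # Enumerate all 3^n assignments: -1 = object's pan,
    # +1 = opposite pan, 0 = unused. Keep the positive net sums.
    sums = set()
    for signs in product((-1, 0, 1), repeat=len(weights)):
        s = sum(sign * w for sign, w in zip(signs, weights))
        if s > 0:
            sums.add(s)
    return sums
```

With weights 1 and 3, for example, the measurable weights are 1, 2 (= 3 - 1), 3, and 4 (= 3 + 1).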
@msft...
- Loler November 29, 2013
Why not? Don't underestimate people, even if they are visitors of this site (;-))
I remember coming up with a linear time algorithm for this, based on cumulative sums.