0xF4
Yes, you are right. My proof has a mistake:
(x1+x2)/2 == (x1+x2+2)/2 (fmod2)
(y1+y2)/2 == (y1+y2+2)/2 (fmod2)
The above is obviously false. It should be rewritten as:
(x1+x2)/2 == (x1+x2+4)/2 (fmod2)
(y1+y2)/2 == (y1+y2+4)/2 (fmod2)
We need to consider more reflected rooms:
CCCC
BBBC
BABC
BBBC
or
DDDD
CCCD
BBCD
ABCD
It yields at most 16 points, exactly as you described.
Thanks for pointing to my mistake!
If I were the interviewer asking this problem, the ideal answer I'd expect is following:
This problem is similar to subset-sum, with the difference that the set size is fixed at k=N/2. Finding a set of size k summing to a given target appears to be "easier" than subset-sum, but this problem is still NP-complete.
Proof by contradiction.
Suppose there's a P algorithm to solve this problem. If that were the case, then we could use this algorithm to solve subset-sum by searching for subsets of sizes 1, 2, ..., k and picking the one which is closest. Note that repeating the P algorithm k times still yields a P algorithm, so we would have a P algorithm for subset-sum. Contradiction.
Conclusion: k-subset-sum is as "hard" as subset-sum.
That said, we can solve this problem by brute force (for small Ns) and/or by using approximate/heuristic methods. I can propose a few ideas:
1) Time constrained brute-force with printing best solution found
2) Applying aggressive pruning - it is possible if numbers are positive
3) Applying dynamic programming and/or memoization if numbers are integers and if range of possible sums allows that
4) Relaxation technique (already proposed) and other forms of gradient descent
5) Randomized swaps reporting the best result found, using the simulated annealing technique to escape local optima
6) Various fast O(N log N) heuristic/greedy algorithms based on pre-sorting
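Idea 1) can be sketched in a few lines. This is my own illustration, not from the original post: the function name and the exact objective (minimize the difference between the two halves) are assumptions, and `__builtin_popcount` is a GCC/Clang builtin.

```cpp
#include <algorithm>
#include <climits>
#include <cstdlib>
#include <vector>

// Brute force for idea 1: enumerate all subsets of size N/2 via bitmasks
// (practical only for small N) and minimize the absolute difference
// between the sum of the chosen half and the sum of the rest.
long long bestHalfSplit(const std::vector<long long>& a) {
    int n = (int)a.size();
    long long total = 0;
    for (long long x : a) total += x;
    long long best = LLONG_MAX;
    for (unsigned mask = 0; mask < (1u << n); mask++) {
        if (__builtin_popcount(mask) != n / 2) continue;
        long long s = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1u << i)) s += a[i];
        best = std::min(best, std::llabs(2 * s - total)); // |half - other half|
    }
    return best;
}
```

This is exactly the O(2^N) search that ideas 2)-5) try to prune or approximate.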
A friend of mine suggested an extended version of this problem, where we have a rectangular room and consider cases where the width/height ratio is rational or not.
It seems that a rational ratio always yields a finite solution. The problem can be reduced to the square version of size NxN, where N = LCM(p,q) and p,q are integers such that p/q = width/height.
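The reduction above hinges only on computing N; a one-line sketch using C++17's `std::lcm` (the function name is mine):

```cpp
#include <numeric>

// For a rectangular room with rational width/height ratio p/q (p, q integers),
// the problem reduces to a square of side N = LCM(p, q).
long long reducedSquareSide(long long p, long long q) {
    return std::lcm(p, q); // C++17, <numeric>
}
```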
The irrational version is awkward to reason about. If we exclude degenerate cases (where the line between killer and victim is perpendicular to the walls or lies on the diagonal), is the solution always infinite, or is there a class of inputs yielding a finite solution?
It's OK to derive a hypothesis from any assumption, as long as you later prove the hypothesis without relying on the original assumption. Instead of describing my thinking process, I could have just said that I guessed the hypothesis, without mentioning those assumptions. Assuming that a solution exists (without knowing it a priori) and using that assumption to derive properties of the solution is a pretty powerful trick which has helped me solve many algorithmic puzzles.
> I think your assumption about the guardians which must be at a symmetrical position is not true for every case.
Let me expand the symmetry claim to be crystal clear. Let X be the set of all intersection points of fatal bullet trajectories (whether it is finite or not doesn't matter at this point). The symmetry claim is the following: for every fatal path T, all points of X belonging to T are symmetrical with respect to the center of T. Note that these symmetry arguments aren't about guardians yet; they only provide hints about where it makes common sense to put guardians.
Now, the assumption that a solution exists allows us to claim that there exists a finite set G (a subset of X) which covers all the fatal paths. G is obviously a solution, and there may be many such Gs. While not all solutions can be constructed as such Gs, it can be shown that every _finite_ solution has one of the Gs as a subset. Thus, a minimal G must be an optimal solution (minimum number of guardians). Now, using all the other data (fmod 2 and midpoints), it is also possible to prove that the optimal solution is unique.
Wow... it seems people really like to find complicated solutions for very simple problems.
The simplest approach is to check whether the angle abc is right (the dot product of vectors (a-b) and (c-b) is zero), compute d' from those three points (d' = a + c - b, i.e. b + (a-b) + (c-b)), and if d' doesn't match d then the answer is "none of those".
Square vs. rectangle can be distinguished by checking |a-b| = |c-b|.
If the input points are in arbitrary order (not sorted cw or ccw), then run this algorithm for both abcd and bacd.
Note: this will work in 3D as well
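A minimal sketch of this check (the function name and the 0/1/2 return encoding are my own choices, not from the original post):

```cpp
#include <array>

using P3 = std::array<double, 3>;

static P3 sub(const P3& u, const P3& v) {
    return {u[0] - v[0], u[1] - v[1], u[2] - v[2]};
}
static double dot(const P3& u, const P3& v) {
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
}

// 0 = none of those, 1 = rectangle, 2 = square
int classify(P3 a, P3 b, P3 c, P3 d) {
    P3 ba = sub(a, b), bc = sub(c, b);
    if (dot(ba, bc) != 0.0) return 0;                 // angle abc must be right
    P3 dExpected = {a[0] + c[0] - b[0],               // d' = a + c - b
                    a[1] + c[1] - b[1],
                    a[2] + c[2] - b[2]};
    if (dExpected != d) return 0;                     // fourth corner must match
    return dot(ba, ba) == dot(bc, bc) ? 2 : 1;        // equal sides => square
}
```

With non-integer floating-point inputs you would compare against a small epsilon instead of using exact equality; for arbitrarily ordered input, call it for both abcd and bacd as described above.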
> Could you elaborate more about x1 x2 y1 y2
>and how do you understand that only B rooms
>are needed for consideration?
Killer pos: (x1,y1)
Victim pos: (x2,y2) (real one or mirrored within A/B area)
The middle point on the straight line between (x1,y1)-(x2,y2) is
((x1+x2)/2, (y1+y2)/2). This point is obviously within the A/B area.
Now let's look at rooms other than A and B. It's easy to see that if (x2,y2) is a victim position in A/B, then all other victim positions have coordinates of the form (x2+2*i, y2+2*j) for any pair of integers i,j. Note that if i==j==0 then this matches the original victim position within A/B.
OK, now let's try to compute a generic form of a midpoint coordinate:
((x1 + x2+2*i)/2, (y1+ y2 + 2*j)/2)
It is easy to see that this is exactly the same pair of numbers under fmod 2 as ((x1 + x2)/2, (y1 + y2)/2), which means that we already have a guardian there.
Thus, we only need to consider case where i==0 and j==0 which is precisely A/B area.
Let me know if this isn't clear enough, I'll try to explain better.
Follow-up: since there will be no more than 9 victims in the A/B area, 9 guardians are enough to protect them (after de-mirroring and deduplication there could be fewer than 9 for some inputs).
You are right, the solution (as provided in the example without bullet reflection) is not necessarily unique. This is totally fine, if problem allows multiple solutions it's typically OK to find just one of them.
However, surprisingly, the optimal solution (minimum number of guardians) for the case with bullet reflection is unique.
Hint: for the simple example with bullet reflection - you also need only one guardian to protect the victim.
This is actually pretty good interview question.
Let me describe the thought process which allowed me to find the solution.
First of all, the question is much more complex than it initially seems. For each fixed pair of killer and victim positions (minus degenerate cases) there is an infinite number of bullet trajectories which are fatal to the victim. Each of those trajectories kills the poor victim from a different angle. How can I protect the victim with a finite number of guardians!?
Blocking an infinite number of paths with a finite number of points is possible if and only if all those fatal paths have a finite number of intersection points where we can put guardians. By symmetry, those intersection points must be either exactly midway along the bullet's path (the center of the trajectory) or at least symmetrical with respect to the center of the trajectory.
I wanted to see the intersection pattern, so I drew a few trajectories and confirmed that the interesting intersection points are in the middle of the trajectories.
While drawing, I also found that it is very inconvenient to draw all the reflections, so I started to think about the problem in the following way:
The room walls are mirrors, and the killer points at the victim with a laser pointer instead of shooting a bullet. In this version of the problem I need to place guardians so that they block the killer's view of the victim. It's easy to observe that the killer sees an infinite number of the victim's reflections (except in degenerate cases).
Now I can actually draw the infinite reflected space on paper as an infinite grid of reflected rooms, as seen by the killer. Bullet trajectories are now straight lines instead of poly-lines.
If you start drawing the infinite space of reflections described above, you can see that it consists of repeated tiles where each tile is a block of 2x2 rooms. This was my “Aha” moment. I realized that I need to find intersection points using fmod 2 arithmetic (floating point modulo).
A little bit of fmod 2 math for coordinates of intersection points:
(x1+x2)/2 == (x1+x2+2)/2 (fmod2)
(y1+y2)/2 == (y1+y2+2)/2 (fmod2)
showed me that I don't need to consider all the reflections of the rooms – only the immediate neighborhood: reflections which mirror the x or y coordinate at most once. If A is the room and B is a reflection, this is the only fragment of space I need to consider:
BBB
BAB
BBB
The killer is in room A, the victim is in room A, and 8 reflections of the victim can be seen in the B rooms. In total the killer sees 9 victims.
Now, the algorithm:
1) Connect the killer with the 9 victims by straight segments and find the middle point of each segment
2) Some middle points might be outside of room A; in this case reflect them back into A.
3) De-duplicate the points (duplicates might occur in the previous step).
4) Place a guardian at each point
Update: the proof above has a mistake. The fmod2 identities should be written as:
(x1+x2)/2 == (x1+x2+4)/2 (fmod2)
(y1+y2)/2 == (y1+y2+4)/2 (fmod2)
Which means that 16 rooms have to be considered, not 9. So the victim needs at most 16 guardians.
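To make the corrected algorithm concrete, here is a sketch for a unit room with coordinates in [0,1]. The folding helper, the exact enumeration of mirror images, and all names are my assumptions, not from the original post: it enumerates the victim's mirror images in the 16-room tile, takes midpoints with the killer, folds them back into the room, and de-duplicates.

```cpp
#include <cmath>
#include <set>
#include <utility>
#include <vector>

// Fold a coordinate back into the unit room [0,1] (triangle-wave reflection).
double fold(double v) {
    v = std::fmod(std::fabs(v), 2.0);
    return v > 1.0 ? 2.0 - v : v;
}

// Guardian positions for killer (x1,y1) and victim (x2,y2) in a 1x1 room.
// Mirror images of the victim have the form (sx*x2 + 2i, sy*y2 + 2j); by the
// fmod-4 identity it suffices to scan both signs and i,j in {0,1} -- the
// 16-room tile. Midpoints are folded back into the room and de-duplicated.
std::vector<std::pair<double, double>> guardians(double x1, double y1,
                                                 double x2, double y2) {
    std::set<std::pair<long, long>> seen; // dedupe on rounded coordinates
    std::vector<std::pair<double, double>> out;
    for (int sx = -1; sx <= 1; sx += 2)
        for (int sy = -1; sy <= 1; sy += 2)
            for (int i = 0; i <= 1; i++)
                for (int j = 0; j <= 1; j++) {
                    double gx = fold((x1 + sx * x2 + 2.0 * i) / 2.0);
                    double gy = fold((y1 + sy * y2 + 2.0 * j) / 2.0);
                    auto key = std::make_pair(std::lround(gx * 1e9),
                                              std::lround(gy * 1e9));
                    if (seen.insert(key).second) out.push_back({gx, gy});
                }
    return out;
}
```

For many inputs the folded midpoints coincide, so the result is often well below the 16-point upper bound.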
OK. If I were the interviewer asking this question, I'd expect the ideal answer to be the following:
1) Prove that this problem is NP-hard by showing its equivalence to the well-known Set Cover problem
2) Reason about limits (e.g. the number of kids and dishes will be reasonably small), estimate the search space, and conclude that a "brute-force"-like solution (e.g. backtracking with pruning of the search space) will likely be practical.
3) Propose an approximate solution - a greedy algorithm (e.g. at each step take the most liked dish, remove "satisfied" kids, and repeat). Prove some important properties of this algorithm (for instance, if the answer is "1" or "2" then it is guaranteed to be optimal)
4) Code a solution which
a) starts with the greedy algorithm to find a first estimate
b) terminates if the answer is optimal (by the properties of the algorithm)
c) runs backtracking with pruning
d) optionally limits the execution time of backtracking, ending with the best solution found
5) Talk about possible alternatives to backtracking and mention at least one randomized search algorithm
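The greedy step 3) can be sketched as follows (all names and the bitmask interface are my assumptions; `__builtin_popcount` is a GCC/Clang builtin):

```cpp
#include <cstdint>
#include <vector>

// Greedy sketch for step 3: each dish is a bitmask of the kids it satisfies.
// Repeatedly pick the dish covering the most still-unsatisfied kids.
// Returns the number of dishes used (an approximation, not always optimal),
// or -1 if some kid likes no dish at all.
int greedyDishes(const std::vector<uint32_t>& dishes, uint32_t allKids) {
    uint32_t uncovered = allKids;
    int used = 0;
    while (uncovered) {
        uint32_t bestMask = 0;
        for (uint32_t d : dishes)
            if (__builtin_popcount(d & uncovered) >
                __builtin_popcount(bestMask & uncovered))
                bestMask = d;
        if (!(bestMask & uncovered)) return -1; // no dish helps => infeasible
        uncovered &= ~bestMask;
        used++;
    }
    return used;
}
```

This is the classic greedy set-cover heuristic, which is exactly why step 1)'s Set Cover equivalence matters: it bounds how far the greedy answer can be from optimal.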
Steps 2 and 3 find the shortest edges between each pair. Those steps aren't problematic - not all of those edges will be part of the final solution.
Step 4 - that's where we have a problem.
I think I miscalculated the complexity. I realized that the problem is isomorphic to the Rectilinear Steiner Tree problem. The latter is NP-complete.
As requested, here's explanation:
Let's list the powers 2^i for i between 1 and N (here N=4):
i = 1 -> 0000010 (2^1 or 1<<1)
i = 2 -> 0000100 (2^2 or 1<<2)
i = 3 -> 0001000 (2^3 or 1<<3)
i = 4 -> 0010000 (2^4 or 1<<4)
sum -> 0011110
We can compute 0011110 as 0100000 - 2 (binary 0000010)
So the answer is (1<<(N+1))-2 or (2<<N)-2
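The identity can be checked mechanically; a tiny sketch (function names are mine):

```cpp
#include <cstdint>

// Sum 2^1 + 2^2 + ... + 2^N computed directly...
uint64_t sumPowersLoop(int n) {
    uint64_t s = 0;
    for (int i = 1; i <= n; i++) s += uint64_t(1) << i;
    return s;
}

// ...and via the closed form (1 << (N+1)) - 2, equivalently (2 << N) - 2.
uint64_t sumPowersClosed(int n) { return (uint64_t(2) << n) - 2; }
```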
Solution 1) Standard BFS or DFS will do the job. O(N^2) time, O(N^2) space. The solution is linear in the number of elements in the array.
Solution 2) We can't do better than O(N^2) time, but we can save space. If we scan the matrix top-down, row by row, we can track the locations of snake ends and their lengths at the current row. For this we need O(N) space. When we process each row we extend snake ends and in some cases merge them.
Solution 3) We can optimize solution 2 down to O(1) space if we are allowed to destroy the original matrix: we just store the data there, updating rows in place.
Solution 4) If we are allowed to temporarily mutate the original matrix but must leave it intact at the end of the algorithm, we can still do it in O(1) space. We can store information in unused bits of the integers in the matrix, or re-encode numbers by extending them into the negative range in a way that carries extra information but can be restored on exit.
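Solution 4's reversible marking can be illustrated with a tiny helper (the names are hypothetical): storing the bitwise complement of a non-negative value makes it negative, which both marks the cell and preserves the original value.

```cpp
// Reversible in-place marking: for any non-negative int v, ~v is negative,
// so the sign bit acts as a "visited" flag, and ~ applied again restores v.
inline void mark(int& v)    { v = ~v; }
inline bool isMarked(int v) { return v < 0; }
inline void unmark(int& v)  { v = ~v; }
```

On exit, a single pass unmarking every marked cell restores the matrix exactly.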
This is an example of a hash-based algorithm with O(N) time and O(1) space. Similarly to a straightforward implementation, it adds characters to the prefix and checks whether the resulting string is a palindrome; however, all operations within the loop are performed in expected O(1) time.
// Returns length of a min palindrome which can be constructed from s
// by appending characters to the left of the string
// O(N) time, O(1) space
int minPalindrome(const string& s)
{
    int n = s.size();
    if (n <= 1) return n;
    // suppose s is a palindrome
    int x = n / 2 - 1;   // end of left part: abCdcba or abCcba
    int y = (n + 1) / 2; // start of right part: abcdCba or abcCba
    hash_helper leftSuffix;
    hash_helper right;
    for (int i = 0; i <= x; i++) {
        leftSuffix.push_back(s[i]);
        right.push_back(s[n - 1 - i]);
    }
    if (right.hash == leftSuffix.hash) {
        if (isPalindrome(s, n)) return n; // s is a palindrome
    }
    // s isn't a palindrome: append characters to the prefix until the combined string becomes a palindrome
    int j = 0; // pos of appended character
    int m = n;
    hash_helper leftPrefix;
    for (;;) {
        char c = s[n - 1 - j++];
        leftPrefix.push_back(c);
        if (m % 2 == 0)
            leftSuffix.pop_back(s[x--]);
        else
            right.push_back(s[--y]);
        m++;
        uint32_t leftHash = leftPrefix.concat(leftSuffix);
        if (leftHash == right.hash && isPalindrome(s, m))
            return m;
    }
}
isPalindrome() performs the full (expensive) palindrome check only when we have a hash match:
bool isPalindrome(const string& s, int m)
{
    int i = 0;
    int j = s.size() * 2 - m - 1;
    while (i < j) {
        if (s[i] != s[j])
            return false;
        i++; j--;
    }
    return true;
}
Finally, we need a hash function implementation which supports push_back, pop_back and concat operations.
The following implementation is based on a linear congruential generator; it is a very good hash function among those which are simple to implement.
// linear congruential generator based implementation
struct hash_helper
{
    uint32_t hash;                 // hash value
    const uint32_t MOD = 16777213; // prime number 2^24 - 3
    const uint32_t A = 127;        // factor
    const uint32_t AI = 10039907;  // modular multiplicative inverse of A
    uint32_t F;
    hash_helper() : hash(0), F(1) {}
    void push_back(unsigned char c)
    {
        hash = ((hash * A) % MOD + c) % MOD;
        F = (F * A) % MOD;
    }
    void pop_back(unsigned char c)
    {
        hash = (hash + MOD - c) % MOD;
        hash = ((uint64_t)hash * AI) % MOD;
        F = ((uint64_t)F * AI) % MOD;
    }
    uint32_t concat(const hash_helper& b)
    {
        return ((((uint64_t)hash * b.F) % MOD) + b.hash) % MOD;
    }
};
The following hash function is not as good as the previous one, but it is easier to understand: it is based purely on shifts and XORs. If you struggle to understand how the previous hash function works, start with this one:
// Cyclic polynomial implementation. Based on circular shift and XOR operations.
struct hash_helper
{
    uint32_t hash; // hash value
    int len;       // length of the hashed substring
    hash_helper() : hash(0), len(0) {}
    void push_back(unsigned char c)
    {
        hash = ((hash << 1) | (hash >> 31)) ^ c;
        len++;
    }
    void pop_back(unsigned char c)
    {
        assert(len);
        hash ^= c;
        hash = (hash >> 1) | (hash << 31);
        len--;
    }
    uint32_t concat(const hash_helper& b)
    {
        int q = b.len % 32;
        // guard q == 0: shifting a 32-bit value by 32 is undefined behavior
        uint32_t rotated = q ? ((hash << q) | (hash >> (32 - q))) : hash;
        return rotated ^ b.hash;
    }
};
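A quick way to convince yourself that the three rolling-hash operations are mutually consistent. This sketch re-declares the cyclic-polynomial helper so it is self-contained; the q == 0 guard in concat is my addition (a shift by 32 bits is undefined behavior):

```cpp
#include <cassert>
#include <cstdint>

// Self-contained copy of the cyclic-polynomial rolling hash.
struct hash_helper {
    uint32_t hash = 0;
    int len = 0;
    void push_back(unsigned char c) {
        hash = ((hash << 1) | (hash >> 31)) ^ c; // rotate left 1, mix in c
        len++;
    }
    void pop_back(unsigned char c) {             // undo the LAST push_back(c)
        assert(len > 0);
        hash ^= c;
        hash = (hash >> 1) | (hash << 31);       // rotate right 1
        len--;
    }
    uint32_t concat(const hash_helper& b) const {
        int q = b.len % 32;
        uint32_t rotated = q ? ((hash << q) | (hash >> (32 - q))) : hash;
        return rotated ^ b.hash;
    }
};
```

The invariants checked below are exactly what minPalindrome relies on: concat(hash("ab"), hash("c")) equals hash("abc"), and popping the last character inverts push_back.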
Here's a solution.
First of all, I'd explain to the interviewer that a precise solution is impossible (due to the impossibility of absolutely precise clock synchronization in a distributed system), so any solution will be an approximation anyway.
As a second step, I'd explain that aiming for high precision in finding the exact winner will unavoidably introduce an unwanted serialization point and kill the performance of the system (essentially it will kill the scale), so we need a practical and pragmatic solution which doesn't compromise on performance but relaxes the accuracy constraint. At the same time, we have to ensure that the banner is shown to exactly one user. Or, more precisely, to at MOST one user (since the reply with the banner can be lost).
This can be achieved by keeping a local counter of requests on each server. The servers periodically send their counters to an Aggregation System which is responsible for aggregation and global counting. They don't send updates very frequently; the time interval is tuned so it doesn't create a lot of network traffic. In response to updates, the last known global counter is returned, so each server has an approximate view of the global state. Now each server can use trivial statistical computation to estimate the growth rate of its local counter against its view of the global counter, so it can make its own decision about which of ITS clients is the billionth user. A server makes such a decision for one and only one request. Only for that request do we delay processing and perform a real-time message exchange with another system, let's call it the Coordination System, to confirm whether the server can display the banner.
The Coordination System has to be globally central and fault-tolerant. It has to implement a Paxos-based lock-step state machine or a similar protocol to guarantee that there will be no more than 1 winner even in the presence of failures of the coordination system's servers. If we don't want such complexity - and I guess FB has enough money to pay for N cars [for any N :)] - the system can be simplified and we can give probabilistic guarantees.
The Aggregation System should also be designed to be fault-tolerant, but the constraints here are relaxed and simple eventual consistency is enough. It makes sense for the Aggregation System to be hierarchical - we can have one child branch of the system per FB datacenter, so we can localize the update traffic.
> Still trying to understand how does this work
It's actually very easy. For simplicity of explanation, let's assume that local and global are not arrays but N*k matrices.
global[i][j] is the max profit which we can get in the first i days with at most j buy+wait+sell transactions.
local[i][j] is the max profit which we can get in the first i days with at most j transactions, where we sold the stock on the *last* day on which prices were growing.
global[n-1][k] gives the answer.
local[i][j] can be computed as the maximum profit over two possible options - either we sold the stock on the last diff>=0 day, or we didn't.
global just accumulates the maximum found so far.
Those relationships are encoded in C++ as follows:
local[i][j] = max(global[i-1][j-1]+max(diff, 0),local[i-1][j]+diff);
global[i][j] = max(local[i-1][j], global[i-1][j]);
Now, since we always refer to the previous index [i-1], we don't need to store the whole matrices - we can track only the last column, so all of that results in a nice O(N*k) time, O(k) space algorithm.
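The two recurrences plus the rolling-array trick can be put together as follows (the interface and names are my choice; prices are indexed from day 0, and j runs downward so that global[j-1] still holds the previous day's value):

```cpp
#include <algorithm>
#include <vector>

// Max profit with at most k buy-sell transactions,
// using the local/global recurrences above. O(N*k) time, O(k) space.
int maxProfit(const std::vector<int>& prices, int k) {
    if (prices.size() < 2 || k == 0) return 0;
    std::vector<int> local(k + 1, 0), global(k + 1, 0);
    for (size_t i = 1; i < prices.size(); i++) {
        int diff = prices[i] - prices[i - 1];
        // descending j: global[j-1] is still the value for day i-1
        for (int j = k; j >= 1; j--) {
            local[j]  = std::max(global[j - 1] + std::max(diff, 0),
                                 local[j] + diff);
            global[j] = std::max(local[j], global[j]);
        }
    }
    return global[k];
}
```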
This is not the only DP solution. Please find another O(N*k) time and O(k) space algorithm below, posted by me - it gives exactly the same result but is based on different recurrence relationships. You will see that those relationships model the same process, but in a different (more generic) representation. You might find them easier to understand (at least I tried to design them with that goal).
Here's detailed explanation how my algorithm works.
Note: in this interpretation, k is the number of transactions where a sell counts as a separate transaction from a buy, so k is always even here.
This DP solution is based on the following recurrence relationships:
s0[i][j] - max possible balance of our cash account on the ith day when at most j transactions were made and we don't hold a stock
s1[i][j] - max possible balance of our cash account on the ith day when at most j transactions were made and we do hold a stock
We need to maximize the cash in our account with the exit condition: at most k transactions and we don't hold a stock.
So our answer is s0[n][k]
We start with the condition where we have 0 transactions, a $0 balance, and no stock:
s0[0][0] = $0
The initial condition where we hold a stock is impossible - we make it unfavorable by setting a negatively-infinite balance:
s1[0][0] = -INFINITY$
Now we can use the following recurrence relationships:
1) If we hold a stock at the end of the ith day, this means we either just bought it (cash decreases by v[i]) or we did nothing.
Choose the maximum of those two options, since we want to maximize:
s1[j] = max(s1[j], s0[j-1]-v[i]);
2) If we don't hold a stock at the end of the ith day, this means we either just sold it (cash increases by v[i]) or we did nothing.
Choose the maximum of those two options, since we want to maximize:
s0[j] = max(s0[j], s1[j-1]+v[i]);
Now, notice that we always refer to [i-1], which means we don't have to store the whole matrix: we just need the last column.
This gives an O(N*k) time, O(k) memory algorithm.
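A sketch of these recurrences with the rolling column. Here I index by completed buy+sell pairs (pairs = k/2), which is equivalent since k is even; the names and interface are mine:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Max profit with at most `pairs` buy+sell pairs (i.e. k = 2*pairs
// transactions). s0[j]: best cash with j completed pairs, no stock held;
// s1[j]: best cash with j pairs started, stock held. O(N*k) time, O(k) space.
long long maxProfitS(const std::vector<int>& v, int pairs) {
    const long long NEG = LLONG_MIN / 2; // "impossible" state
    std::vector<long long> s0(pairs + 1, NEG), s1(pairs + 1, NEG);
    s0[0] = 0; // no transactions, $0, no stock
    for (int price : v)
        for (int j = pairs; j >= 1; j--) {
            s0[j] = std::max(s0[j], s1[j] + price);     // just sold
            s1[j] = std::max(s1[j], s0[j - 1] - price); // just bought
        }
    // "at most" pairs: take the best over all completed-pair counts
    return std::max<long long>(0, *std::max_element(s0.begin(), s0.end()));
}
```

Descending j plus the sell-before-buy update order ensures each day's updates read the previous day's values, matching the [i-1] references in the recurrences.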
BTW, I realized what is your intent, and I think I found the problem. Could you please try replacing the body of your loop with something like this:
if (str[i1] == str[i2]) {
    t++;
    i1++;
}
else {
    if (str[0] == str[i2]) {
        t = 0;
        i1 = 1;
    }
    else {
        t = -1;
        i1 = 0;
    }
}
i2++;
and let us know if it works as expected on edge cases?
If yes, that means we finally have a simple and clean O(N) time / O(1) space solution!
Thanks! This completely changes the picture. Without this condition I wasn't even able to understand your intent. Could you please edit your original post with the solution, updating the return expression and adding "if (n==0) return true", to make it 100% perfect.
Let's vote this up, this O(N) solution deserves to be on the top.
Here's my compact implementation of an O(log(m+n)) time algorithm. It is based on findKth, which has complexity O(log(min(k,m,n))). O(1) space.
Please note that O(log(m+n)) is better than the previously proposed O(log(m)+log(n)), since the latter equals O(log(nm)); it is also better than O(log(n)*log(m)).
int findKth(int A[], int m, int B[], int n, int k)
{
    assert(k < m + n && k >= 0);
    if (m == 0) return B[k];
    if (n == 0) return A[k];
    if (k == 0) return min(A[0], B[0]);
    int halfK = (k + 1) / 2;
    int asub = min(halfK, m);
    int bsub = min(halfK, n);
    if (A[asub - 1] < B[bsub - 1])
        return findKth(A + asub, m - asub, B, n, k - asub);
    else
        return findKth(A, m, B + bsub, n - bsub, k - bsub);
}
double findMedianSortedArrays(int A[], int m, int B[], int n) {
    if (m + n == 0) return INT_MIN;
    int k = (m + n) / 2;
    double result = findKth(A, m, B, n, k);
    if ((m + n) % 2 == 0)
        result = (result + findKth(A, m, B, n, k - 1)) / 2;
    return result;
}
Found an interesting property of applying Ukkonen's algorithm to this problem. Once all characters of the repeating pattern have been added to the tree, the algorithm enters a steady state and doesn't change state until it hits the first character which doesn't match the pattern. Such a character causes aggressive splitting. If there is a repetition of a (larger) pattern, the algorithm goes into a steady state again. The answer is 'true' if the last letter added to the tree doesn't change the state.
abababxabababxabababy
^ - adding a
^ - adding b
^^^^ - nothing happens (state isn't changed)
^ - adding x causes a lot of splitting in the tree
^^^^^.... nothing happens until y is hit
Nice idea. +1
So far this is the only O(N) scheme which works.
Let's try to brainstorm and simplify it so it can be done on a whiteboard in less than 30 minutes.
Here are my 5 cents:
In fact, we don't need a full implementation of Ukkonen's algo for this problem. First of all, it is not necessary to execute the last step of appending the end-of-string character. Also, the final answer can be provided while processing the last letter of the input string, without needing to complete the suffix tree construction.
I like the direction of your thinking. I also like your extension, where the last part of the string can be a prefix of the pattern. This doesn't contradict the problem definition, it makes sense, and, in fact, makes the problem more interesting by completely killing my o(N^(1+e)) implementation above, since the % trick can't be applied anymore.
I wonder whether this can be evolved into something which really works.
For now, let me prove that your algorithm (as written) is wrong by finding a long repeating pattern where t at the end is less than half the length. Let me do an even simpler thing: find a long string with a repeating pattern where t = 0 at the end.
This can be done by "executing" your algorithm in reverse order and reconstructing the input, given t = 0 at the end. If t = 0, then it must have been -1 at the previous step, and so on... Since we assumed that there is a repeating pattern and t starts at -1, the sequence of t's must also follow repeating cycles. Something like -1 -1 0 -1 -1 0 ... -1 -1 0
This yields input strings like
abaaba
abaabaaba
aba...abaaba
Those are counter-examples for your algorithm. I don't see an immediate way to fix it. Do you?
- 0xF4 January 06, 2015
If this is the case, that means the average time complexity is O(N*log log N). I think the worst case is still o(N^(1+epsilon)).
In fact, the average time complexity of the algorithm is even better than O(N*log log N), since the inner loop terminates early in most cases for average input.
> how can N processors find out max element in O(1)?
Easy. For instance, we create a distributed "linked list" using N processors, where each processor stores a single number and we keep all numbers/processors sorted. To organize the linked list, each processor keeps an index of its peer. Finding MAX() in this case is a single message round-trip to the last processor, since it holds the maximum.
Insert, delete and search operations are implemented by broadcasting to all processors; in the worst case a processor forwards the data to a peer at most once.
Insert is the most complex operation here. It can be implemented with O(1) time complexity in the following way. Let's say a processor receives a broadcast message with an instruction to insert number X. If it is the last processor, then a new processor is allocated and becomes the last one. If it is not the last processor, but X is greater than the number stored locally, it asks the processor on the right whether its number is greater than or equal to X - if the reply is positive, then a new processor is allocated and "inserted" between those two; otherwise nothing happens.
This is one of the most reasonable ways of implementing the constructor.
Why didn't I initialize the board? Because a good Minesweeper implementation doesn't place mines during construction, but defers it until the first move of the player. Assuming mines < dim**2, the first cell opened by the player is always empty, and the mines are randomly distributed among the remaining dim**2-1 unopened cells. This improves usability of the game: the user never loses on the first move. The implementation of Minesweeper in Windows behaves as I described.
- 0xF4 September 27, 2015