Alex M.
That's a good approach. I'm just not sure about the max-heap structure.
As far as I understand, it's not a regular max heap but some special heap: it supports a HeapPushPop operation, meaning that when we push a new element to the heap, it automatically pops the min element. I'm not sure that operation is still O(log K) for a regular max heap.
However, we can use a balanced binary search tree of size K instead, for instance:
 - find min: O(log K) worst case, O(1) average (with min tracking)
 - remove element: O(log K)
 - insert element: O(log K)
So we get the same O(log K) complexity for the required operations and O(N log K) overall.
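For what it's worth, Python's heapq exposes exactly this combined push-then-pop operation; a sketch (keeping the K largest elements of a stream, so a size-K min-heap plays the role of the bounded heap):

```python
import heapq

def k_largest(stream, k):
    """Keep the K largest elements seen so far in a size-K min-heap."""
    heap = []
    for x in stream:
        if len(heap) < k:
            heapq.heappush(heap, x)       # O(log K)
        elif x > heap[0]:
            heapq.heappushpop(heap, x)    # push + pop min in one O(log K) step
    return sorted(heap, reverse=True)
```

Each of the N elements costs O(log K), giving the O(N log K) total discussed above.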
That's an interesting solution, but how can you know how much water was consumed by the previous rows? It also depends on the total number of liters we're trying to pour, right?
For instance, when we calculate a nonzero value for a glass in the 4th row, the third row might contain either 2 liters (1/2 + 1 + 1/2) or 3 liters (1 + 1 + 1).
It's a good solution; however, one scenario is missing:
it won't count rectangles "nested by Y". Here is an example.
If we have rectangles {0, 0, 3, 5}, {1, 1, 4, 4} and {2, 2, 5, 3}, this algorithm reports 0, but there are 3 pairs of intersecting rectangles.
Alex's solution here in the thread covers this case: from the total number of rectangles that intersect the current one by X, we subtract all the rectangles that are completely below the current one and completely above it.
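A brute-force pairwise check (a sketch; rectangles given as (x1, y1, x2, y2) corners as in the example, with touching edges counted as intersection) confirms the count of 3 for the rectangles above:

```python
def intersects(a, b):
    """Axis-aligned rectangles as (x1, y1, x2, y2); touching edges count."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def count_intersecting_pairs(rects):
    """O(N^2) pairwise count, useful as a reference for faster solutions."""
    return sum(intersects(rects[i], rects[j])
               for i in range(len(rects))
               for j in range(i + 1, len(rects)))
```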
Good approach. I would also propose using List<ulong> rather than List<byte>, with Base = 10^18. That way you need only 64 bits for the list pointer and 64 bits for an 18-digit chunk, i.e. 128 bits (16 bytes) per 18 digits.
For List<byte> and Base = 10 you need (8 bytes for a pointer + 1 byte for a single digit) * 18 = 9 * 18 = 162 bytes to represent an 18-digit number.
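As an illustration of the chunked layout (Python ints are arbitrary-precision already, so this is purely a sketch of the base-10^18 representation, least-significant chunk first):

```python
BASE = 10 ** 18  # each list element stores an 18-digit chunk; fits in 64 bits

def add_chunked(a, b):
    """Add two chunked numbers stored least-significant chunk first."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % BASE)  # keep the low 18 digits in this chunk
        carry = s // BASE        # propagate the overflow to the next chunk
    if carry:
        result.append(carry)
    return result
```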
Good explanation, but I think you could still use the same linked-list approach for addition as well; that gives you better performance than m*n. The result will be a sparse matrix too: for result row i you'll have a new linked list of length at most A[i].Length (the length of the linked list for row i in matrix A) + B[i].Length (the same for matrix B).
Alex M. February 08, 2017
It's more efficient to keep values in a sorted List<Pair<int x, List<Pair<int y, value>>>>. That way you don't have to iterate through all N*M elements. That's the whole point of the question: the input matrix is huge but sparse, so you need to come up with something that avoids iterating through all N*M elements.
Alex M. February 08, 2017
Equals() is a 'standard' method of the 'object' class. The default implementation for reference types is reference equality. If you need a smarter equality comparison, you override it and possibly implement a complex, expensive comparison.
In addition to backing the hash table data structure, hash codes can also be used to make object comparison cheaper. They don't replace full comparison entirely (because of collisions): if the hash values are equal, you still call the real 'full' equality comparison.
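The same contract exists in other languages. A Python sketch with a hypothetical Point class, where __hash__ is the cheap fingerprint and __eq__ is the 'full' comparison:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __hash__(self):
        # Cheap fingerprint: equal objects MUST produce equal hashes.
        return hash((self.x, self.y))

    def __eq__(self, other):
        # Full (potentially expensive) comparison; still required on a hash
        # match, because different objects may collide on the same hash.
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)
```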
That doesn't seem to work. What if the last download reset happened 24h ago (as did the first download) and 8 more downloads occurred in the last hour? The next download, say an hour later, updates both the reset timestamp and the counter. Now we could start 9 more downloads, which gives us 18 downloads within the last 2 hours.
You need to keep a list (in a separate table) of the timestamps of the last 10 downloads per user, so that you can easily look up and update the earliest and latest downloads for that user.
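A sliding-window limiter over the stored timestamps, as a sketch (names and the 10-per-24h limit are illustrative assumptions):

```python
from collections import deque

class DownloadLimiter:
    """Allows at most `limit` downloads in any rolling `window` seconds."""
    def __init__(self, limit=10, window=24 * 3600):
        self.limit = limit
        self.window = window
        self.timestamps = deque()  # download times for one user, oldest first

    def try_download(self, now):
        # Drop downloads that have fallen out of the rolling window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.limit:
            return False  # quota exhausted within the window
        self.timestamps.append(now)
        return True
```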
It's a good solution for an array in external memory, but it only gives an approximate result (because of the division) where an exact result might exist.
I would also propose reading not a single value on every step but a buffer of values (say, 10) to minimize data read time:
M(10*(n+1)) = (M(10*n) * 10*n + A[10*n] + ... + A[10*n + 9]) / (10*(n+1))
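The buffered recurrence above can be sketched as follows (chunked_reader simulates the external storage and is an illustrative assumption):

```python
def chunked_reader(values):
    """Simulates external storage: each call returns up to n values."""
    it = iter(values)
    def read(n):
        return [v for _, v in zip(range(n), it)]
    return read

def running_mean(read_chunk, buf_size=10):
    """Streaming mean, reading buf_size values at a time to cut I/O calls."""
    mean, count = 0.0, 0
    while True:
        chunk = read_chunk(buf_size)
        if not chunk:
            return mean
        # M(count + len) = (M(count) * count + sum(chunk)) / (count + len)
        mean = (mean * count + sum(chunk)) / (count + len(chunk))
        count += len(chunk)
```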
It's clearer if you move the if(counter==h) check out of the method, like:
if (node.left == null
    && node.right == null
    && counter == h)
{
    printArray(arr, counter, h);
    return; // since left and right are null
}
This approach requires two passes over the tree. If you use extra space to keep the discovered paths in memory, you can do it in a single pass.
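A one-pass version that accumulates the current path on the way down, as a sketch (Node is a hypothetical binary-tree class):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def paths_of_length(node, h, path=None, found=None):
    """Collect all root-to-leaf paths of exactly h nodes in one traversal."""
    if found is None:
        path, found = [], []
    if node is None:
        return found
    path.append(node.value)  # extend the current root-to-node path
    if node.left is None and node.right is None and len(path) == h:
        found.append(list(path))  # leaf at the right depth: record a copy
    else:
        paths_of_length(node.left, h, path, found)
        paths_of_length(node.right, h, path, found)
    path.pop()  # backtrack before returning to the parent
    return found
```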
Alex M. January 11, 2017
There is an optimization even for brute force: memorize the prime numbers found so far, and for the next number K to check, test divisibility only by those memorized primes (not by all the numbers from 2 to Sqrt(K)).
And there is another pretty simple and efficient algorithm to compute prime numbers: the sieve of Eratosthenes.
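The sieve, sketched in Python:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n in O(n log log n)."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross out multiples of p starting at p*p; smaller multiples
            # were already crossed out by smaller primes.
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [i for i, flag in enumerate(is_prime) if flag]
```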
I don't think this works for all cases. Imagine the input slice set {2, 3, 10, 15} and 5 people. The smallest number of slices in the list is 2, but that is not the maximum number of slices a person could get (which is 5 in this case).
And what is the time complexity of your solution? O(N log N) to sort the input set and O(N * log(min(Slice(i)))) to find the right number of slices.
I think the second part can be solved in linear time, O(N).
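One way to see why the answer for the example is 5 is a binary search on the answer (a sketch; for each candidate slice count s we check how many people can be served):

```python
def max_equal_slices(slices, people):
    """Largest s such that giving s slices per person serves everyone."""
    lo, hi, best = 1, max(slices), 0
    while lo <= hi:
        s = (lo + hi) // 2
        # Each pizza with p slices serves p // s people at s slices each.
        if sum(p // s for p in slices) >= people:
            best, lo = s, s + 1   # s works, try a bigger portion
        else:
            hi = s - 1            # s is too big
    return best
```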
It's oversimplified. What if there are two consecutive calls, Read(1000) and Read(500)? Since the original ReadFromDisk API reads the next 256 bytes, it's reasonable for Read(int n) to behave the same way: continue from where the previous call stopped. In that case you need to track the position in the file and keep a buffer containing the remainder of the last chunk read.
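A sketch of that wrapper (make_disk simulates the fixed-size primitive and is an illustrative assumption):

```python
CHUNK = 256

def make_disk(data):
    """Simulates ReadFromDisk: each call returns the next <= CHUNK bytes."""
    state = {"pos": 0}
    def read_from_disk():
        chunk = data[state["pos"]: state["pos"] + CHUNK]
        state["pos"] += len(chunk)
        return chunk
    return read_from_disk

class BufferedReader:
    """read(n) continues where the previous call stopped, buffering the rest."""
    def __init__(self, read_from_disk):
        self.read_from_disk = read_from_disk
        self.leftover = b""  # tail of the last chunk not yet handed out

    def read(self, n):
        data = self.leftover
        while len(data) < n:
            chunk = self.read_from_disk()
            if not chunk:
                break  # end of file: return whatever we have
            data += chunk
        self.leftover = data[n:]  # keep the surplus for the next call
        return data[:n]
```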
Alex M. June 03, 2016
It looks like the order of the words matters in this question. Otherwise, the N-grams for doc1 would have been 'This is a', 'is a dog', 'This is dog', 'This a dog'.
Under this assumption the algorithm is incorrect.
I would propose a kind of modified KMP string comparison, replacing characters with words. We scan a document word by word and store hash(doc_word[i]...doc_word[i+n]) so that the order matters (similar to KMP). Then do the same for the next document and check whether the hash table already contains an element with that key.
This algorithm works with the correction in one of the comments.
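A sketch of order-sensitive word n-gram matching (tuples of words stand in here for the hashes, which sidesteps collision handling; a real implementation would use a rolling hash):

```python
def word_ngrams(text, n):
    """Set of order-preserving word n-grams of a document."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def share_ngram(doc1, doc2, n=3):
    """True if the two documents share at least one word n-gram, order intact."""
    return bool(word_ngrams(doc1, n) & word_ngrams(doc2, n))
```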
If the question is to find whether there are such subtrees of a given length L (yes/no), or just whether there are any such subtrees (equivalent to subtrees of length 2), it's possible to implement this in O(N). If the question is to find all such subtrees (of any possible length), it becomes O(N^3).
I think that's a good algorithm. Only a small correction, if I understood this correctly: in step 2, when two courses fall in the same semester, the weight should be set to 2, since to start the next course in the fall you need this one completed by that fall. So you could start the next one not in the same semester, and not even in the following one, but in the semester after that.
Alex M. May 19, 2016
It's a pretty good solution, but I think it requires a small fix.
This:
dp[i][j] = -1;
if (dp[i-1][j] < 0) continue;
needs to be replaced with this:
dp[i][j] = int.MaxValue;
if (dp[i-1][j] == int.MaxValue) continue;
Otherwise,
min(dp[i][k], dp[i-1][j] + abs(A[i] - k));
always returns dp[i][k], which has just been set to -1.
Alex M. April 26, 2016
Since tmp refers to the same object, calling CPeekIterator.Peek advances currIt, so it effectively works as a regular Next().
The right fix is to keep an internal variable that holds the current element:
private CIterator iter;
private int current;

public CPeekIterator(CIterator inputIter)
{
    this.iter = inputIter;
    this.current = int.MinValue;
}

public int Next()
{
    return this.current = this.iter.Next();
}

public int Peek()
{
    return this.current;
}
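A common alternative design caches the upcoming element instead of the last returned one, so peek() works even before the first next(); a Python sketch over any iterator:

```python
class PeekIterator:
    """Wraps an iterator, caching the next element so peek() never advances it."""
    _SENTINEL = object()  # marks "nothing cached"; safer than a magic value

    def __init__(self, it):
        self._it = iter(it)
        self._cached = self._SENTINEL

    def peek(self):
        if self._cached is self._SENTINEL:
            self._cached = next(self._it)  # fetch once, reuse until next()
        return self._cached

    def next(self):
        if self._cached is not self._SENTINEL:
            value, self._cached = self._cached, self._SENTINEL
            return value
        return next(self._it)
```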

Alex M. April 23, 2016
The idea behind it is good, but it took time to understand the explanations.
Here is the same in other words:
0. A.Length = N, B.Length = K. Hash[B[i]] = -1 for every i in [0..K-1]. x = 0. z = K - 1.
1. Keep increasing x and z until the first element of A that belongs to B is found (A[x]).
2. y = x + 1.
3. For A[y], while y <= z AND y < N AND z < N:
 * if Hash[A[y]] doesn't exist (A[y] is not an element of B), z = z + 1.
 * if Hash[A[y]] exists AND (Hash[A[y]] == -1 OR Hash[A[y]] < x) (A[y] is an element of B AND we either haven't met it before or last met it before the current start index), update Hash[A[y]] = y.
 * if Hash[A[y]] exists AND Hash[A[y]] != -1 AND Hash[A[y]] >= x (A[y] is an element of B that we have already met within the current [x, z] range), then z = z + 1 and update Hash[A[y]] = y. If Hash[A[x]] == x, then z = z + 1. Scan for the first A[q] that is an element of B and set x = q.
 * y = y + 1. Go to 3.
5. If y == z, push the interval [x, z] to the intervals list. y = y + 1. Go to 3.
6. If y == z == N - 1: if Hash[A[x]] == x, then z = z + 1. Scan for the first A[q] that is an element of B, set x = q, and go to 3.
The main idea here is that we always keep enough elements of array A between indexes x and z that we could potentially find all of B's elements in that window.
Hopefully this is a little clearer.
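The classic two-pointer formulation of the same idea, as a sketch (assumes the elements of B are distinct; returns the index range of the shortest window of A containing all of them, or None):

```python
def min_window(a, b):
    """Shortest window [left, right] of a containing every element of b."""
    need = set(b)
    have = {}            # counts of needed elements inside the current window
    missing = len(need)  # how many needed elements the window still lacks
    best = None
    left = 0
    for right, v in enumerate(a):
        if v in need:
            have[v] = have.get(v, 0) + 1
            if have[v] == 1:
                missing -= 1
        while missing == 0:  # window covers all of b: try to shrink it
            if best is None or right - left < best[1] - best[0]:
                best = (left, right)
            u = a[left]
            if u in need:
                have[u] -= 1
                if have[u] == 0:
                    missing += 1
            left += 1
    return best
```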
However, it can also be applied to strings of different lengths; the generateWildcardStrings method needs to be modified in that case, for example:
 - add a '*' symbol at each of the (target.Length + 1) positions to denote one additional character
 - add a '\' symbol at each of the (target.Length) positions to denote one missing character
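One possible reading of that modification, as a sketch (the '*' and '\' markers follow the convention above; the exact semantics of the original generateWildcardStrings are not shown in this thread):

```python
def wildcard_variants(target):
    """Patterns for strings one edit away from target, per the markers above."""
    # '*' in each of the len+1 gaps: the other string has one extra character here.
    extra = [target[:i] + '*' + target[i:] for i in range(len(target) + 1)]
    # '\' replacing each position: the other string is missing this character.
    missing = [target[:i] + '\\' + target[i + 1:] for i in range(len(target))]
    return extra + missing
```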
That's a correct and elegant O(N*K) solution, but the logic behind it is really not easy to follow without any explanation.
The idea is that for every element a[j] of the prime set we track the element target[last_index[a[j]]] of the result array that was used to build a new result element on the k-th step: target[k] = target[last_index[a[j]]] * a[j]. On the next, (k+1)-th, iteration it's therefore enough to check only the product target[last_index[a[j]] + 1] * a[j], since target[last_index[a[j]]] * a[j] was already produced, and target[last_index[a[j]] + 2] * a[j] is evidently bigger than target[last_index[a[j]] + 1] * a[j].
And a minor comment: x[0] is missing in the output.
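The last_index bookkeeping described above, sketched in Python (this is the classic k-th "ugly number" scheme; 1 is included as the first element):

```python
def kth_smallest_product(primes, k):
    """k-th smallest number whose prime factors all come from `primes` (1-based)."""
    target = [1]
    # For each prime, the index in `target` it should multiply next.
    last_index = {p: 0 for p in primes}
    while len(target) < k:
        nxt = min(target[last_index[p]] * p for p in primes)
        target.append(nxt)
        for p in primes:
            if target[last_index[p]] * p == nxt:
                last_index[p] += 1  # advance every producer of nxt (no duplicates)
    return target[k - 1]
```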
It's not an optimal solution in terms of overall time, in other words, the departure time of the last order. I'm not sure that is required in this problem though. But if you have something like:
the solution is not optimal, as might be expected:
 Alex M. April 28, 2017