Ehsan
I have a background in electrical engineering and telecommunications and am very interested in computer science and programming puzzles.
It can be solved with dynamic programming in O(n^2) as suggested before. Java code:
public static int solve(int[] buckets) {
    int[][][] dpValue = new int[buckets.length][buckets.length][2];
    int[] sums = new int[buckets.length];
    sums[0] = buckets[0];
    // cumulative sums, used later to find partial sums (from i to j)
    for (int i = 1; i < buckets.length; i++)
        sums[i] = sums[i - 1] + buckets[i];
    for (int i = buckets.length - 1; i > -1; i--) {
        // if only one bucket is left, the best decision is to pick it
        dpValue[i][i] = new int[] {buckets[i], buckets[i]};
        for (int j = i + 1; j < buckets.length; j++) {
            // total worth of gold in buckets i..j
            int sumIJ = sums[j] - sums[i] + buckets[i];
            for (int k = 0; k < 2; k++) {
                // gain the other player gets after our pick
                int pickFirst = dpValue[i + 1][j][1 - k];
                int pickLast = dpValue[i][j - 1][1 - k];
                // our gain = total worth - other player's best gain,
                // so maximize it by minimizing the other player's gain
                dpValue[i][j][k] = sumIJ - Math.min(pickFirst, pickLast);
            }
        }
    }
    return dpValue[0][buckets.length - 1][0];
}
You can use the same 2-sum algorithm over the tree. There is no need to serialize it into an in-order (or any other) traversal.
Pointer 1: tree.Min();
Pointer 2: tree.Max();
The only operations needed are successor and predecessor (the counterparts of moving right and left in the array). With parent pointers, a full sweep performs all the successor/predecessor steps in O(n) total, i.e., amortized O(1) each.
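As a concrete sketch, Java's TreeSet can stand in for the tree here: first()/last() play the roles of Min()/Max(), and higher()/lower() are the successor/predecessor operations (each call is O(log n) in TreeSet rather than amortized O(1), but the shape of the algorithm is identical):

```java
import java.util.TreeSet;

public class TwoSumBst {
    // Two-pointer 2-sum over an ordered tree: start at the min and the max,
    // and move the pointers inward via successor/predecessor instead of
    // array indices.
    public static boolean hasPairWithSum(TreeSet<Integer> tree, int target) {
        if (tree.size() < 2) return false;
        Integer lo = tree.first(); // tree.Min()
        Integer hi = tree.last();  // tree.Max()
        while (lo < hi) {
            int sum = lo + hi;
            if (sum == target) return true;
            if (sum < target) lo = tree.higher(lo); // successor
            else hi = tree.lower(hi);               // predecessor
        }
        return false;
    }

    public static void main(String[] args) {
        TreeSet<Integer> tree = new TreeSet<>();
        for (int x : new int[] {2, 5, 8, 11, 14}) tree.add(x);
        System.out.println(hasPairWithSum(tree, 13)); // true: 2 + 11
        System.out.println(hasPairWithSum(tree, 12)); // false
    }
}
```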
I can think of a shortest-path solution. Consider a graph with vertices arranged in rows. Row "r" has "Nr" vertices, where "Nr" is the number of locations of the r-th word.
Vertices of each row are connected to vertices of the row immediately after it.
In other words, the j-th vertex of row i is connected to those vertices of row "i + 1" whose positions come after the position of the j-th vertex of row i.
Add a super vertex at row 0 and a super vertex at row r + 1.
The super vertex at row 0 is connected to all of row 1, and all of row "r" is connected to the super vertex at row r + 1.
The total number of vertices equals the total number of given positions + 2.
The total number of edges is O((number of vertices)^2), since consecutive rows can be densely connected.
The weight of an edge between two vertices is the difference of their positions.
The edges touching row 0 and row r + 1 have weight 0.
Shortest path from 0 -> r + 1 is found in O(n^2).
Example for above:
Vij : vertex at row i, column j (i = 1, 2, 3; j = 1, 2, 3)
Edge list: (start, end, weight)
(V11, V22, 13 - 5)
(V11, V23, 18 - 5)
(V12, V22, 13 - 9)
(V12, V23, 18 - 9)
(V13, V23, 18 - 17)
(V21, V31, 8 - 4)
(V21, V32, 19 - 4)
(V21, V33, 21 - 4)
(V22, V32, 19 - 13)
(V22, V33, 21 - 13)
(V23, V32, 19 - 18)
(V23, V33, 21 - 18)
Then add V0 and edges (V0, V11, 0), (V0, V12, 0), and (V0, V13, 0).
Also add V4 and edges (V31, V4, 0), (V32, V4, 0), (V33, V4, 0).
Finally, find the shortest path from V0 to V4. In this case, it will go through V13, V23, V32.
Overall complexity is quadratic in total number of positions (regardless of number of rows).
It might be possible to find a faster approach due to the limited structure of the graph. I am assuming Dijkstra or something similar is used.
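In fact, because the graph is layered (edges only go from row i to row i + 1), the shortest path can be computed with a simple DP over the rows, no full Dijkstra needed. A minimal sketch, using the row positions inferred from the edge list above (row 1 = {5, 9, 17}, row 2 = {4, 13, 18}, row 3 = {8, 19, 21} is my reading of that example, so treat it as an assumption):

```java
import java.util.Arrays;

public class WordWindow {
    // Shortest path through the layered DAG, computed as a DP over rows:
    // dist[j] = cheapest cost to reach the j-th vertex of the current row.
    // rows[r][j] is the j-th position of the r-th word.
    public static int shortestSpan(int[][] rows) {
        int[] dist = new int[rows[0].length]; // row 1 is reached from V0 at cost 0
        for (int r = 1; r < rows.length; r++) {
            int[] next = new int[rows[r].length];
            Arrays.fill(next, Integer.MAX_VALUE);
            for (int j = 0; j < rows[r - 1].length; j++) {
                if (dist[j] == Integer.MAX_VALUE) continue; // unreachable vertex
                for (int k = 0; k < rows[r].length; k++) {
                    // edges only go to positions strictly after the current one
                    if (rows[r][k] > rows[r - 1][j]) {
                        next[k] = Math.min(next[k], dist[j] + rows[r][k] - rows[r - 1][j]);
                    }
                }
            }
            dist = next;
        }
        int best = Integer.MAX_VALUE; // 0-weight edges into the super sink
        for (int d : dist) best = Math.min(best, d);
        return best;
    }

    public static void main(String[] args) {
        int[][] rows = { {5, 9, 17}, {4, 13, 18}, {8, 19, 21} };
        System.out.println(shortestSpan(rows)); // V13 -> V23 -> V32: (18 - 17) + (19 - 18) = 2
    }
}
```

Because the weights telescope, the path cost is just (last position - first position), i.e., the span of the window containing one location per word.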
This is a DP solution, O(n^2) time.
Given a string x1, x2, x3, ..., xn you can do one of the following:
Either add a character at the beginning, which must be xn,
or pass if x1 == xn.
dp[1, n] = min(dp[2, n - 1] + COST_PASS, dp[1, n - 1] + COST_INSERT).
COST_INSERT = 1;
COST_PASS = 0 if the first and last characters of the substring are equal, otherwise INF.
Here is the code (it returns only the number of insertions; the total length will be the length of the string plus the number of insertions).
public class MakePalindromic {
    private int[][] dp;
    private String x;

    public int Solve(String x) {
        this.x = x;
        dp = new int[x.length()][x.length()];
        for (int[] arr : dp) Arrays.fill(arr, -1);
        return makePalindromic(0, x.length() - 1);
    }

    private int makePalindromic(int i, int j) {
        // empty or single-character substring is already a palindrome
        if (i >= j) return 0;
        if (dp[i][j] == -1) {
            char first = x.charAt(i), last = x.charAt(j);
            int costInsert = 1, costPass = 1000000; // INF
            if (first == last) {
                costPass = 0;
            }
            int pass = makePalindromic(i + 1, j - 1) + costPass;
            int ins = makePalindromic(i, j - 1) + costInsert;
            dp[i][j] = Math.min(pass, ins);
        }
        return dp[i][j];
    }

    public static void main(String[] args) {
        String test1 = "racexyzart";
        String test2 = "abddcdd";
        String test3 = "aacecaaa";
        MakePalindromic solve = new MakePalindromic();
        System.out.println(solve.Solve(test1));
        System.out.println(solve.Solve(test2));
        System.out.println(solve.Solve(test3));
    }
}
/// Returns: 5, 6, 1
(V^2 - V)/2 is the maximum number. Let's prove it.
Let G(V, E) be an acyclic graph with the maximum number of edges. Then:
Claim: There is a sink node with exactly V - 1 incoming edges, where a sink node is a vertex with no outgoing edges.
Proof: First note that there has to be a sink node. Otherwise, every vertex has at least 1 outgoing edge, and following outgoing edges from any vertex must eventually revisit a vertex, i.e., form a cycle.
Now note that in the optimal graph the sink node must have "V - 1" incoming edges. If it does not, you can add the missing edges into the sink node and still produce no cycles; therefore, the graph could not have been optimal.
Therefore, the sink node has "V - 1" incoming edges.
Now the rest is easy. Remove this sink node from the graph along with all edges connected to it. You get an optimal acyclic graph on V - 1 vertices. By recursing, we can go all the way down to the graph with one vertex and 0 edges. Along the way, we remove
(V - 1), (V - 2), ..., (1) edges. Therefore, the total number of edges is at most (V^2 - V) / 2.
On the other hand, as suggested above, you can indeed construct a graph with this many edges. Add an edge from vertex "0" to every other vertex. Then from vertex "1" to every vertex except {0, 1}. Iterate until exactly "(V^2 - V) / 2" edges are added.
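That construction can be sketched directly; ordering the vertices 0..V-1 and adding every forward edge (i, j) with i < j yields an acyclic graph with exactly (V^2 - V)/2 edges:

```java
import java.util.ArrayList;
import java.util.List;

public class MaxDagEdges {
    // Build the edge-maximal DAG on V vertices: one edge (i, j) for every i < j.
    // It is acyclic because every edge goes from a lower index to a higher one,
    // so no path can ever return to its starting vertex.
    public static List<int[]> build(int V) {
        List<int[]> edges = new ArrayList<>();
        for (int i = 0; i < V; i++)
            for (int j = i + 1; j < V; j++)
                edges.add(new int[] {i, j});
        return edges;
    }

    public static void main(String[] args) {
        int V = 5;
        // edge count matches the closed form (V^2 - V) / 2
        System.out.println(build(V).size() + " == " + (V * V - V) / 2);
    }
}
```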
Here is a non-recursive solution. The complexity is of course O(N1 x N2 x ... x Nk) where Ni is the number of elements in set i.
public static void main(String[] args) {
    String[][] sets = new String[][] {{"quick", "slow"}, {"brown", "red"}, {"fox", "dog"}};
    int[] state = new int[sets.length];
    int p = 0;
    while (true) {
        for (int i = 0; i < state.length; i++) {
            System.out.print(sets[i][state[i]] + " ");
        }
        System.out.println();
        state[p]++;
        while (state[p] == sets[p].length) {
            state[p] = 0;
            p++;
            if (p == sets.length) return;
            state[p]++;
        }
        p = 0;
    }
}
Just adding a minor comment that this problem is NP-complete: assume the general case with N servers and M tasks, and let all servers have the same capacity. Then the question of whether or not all tasks fit in the servers is exactly the decision version of bin packing, i.e., packing M items into N bins. If you can make it work for a given N, you keep reducing N until you find the smallest number of servers.
That being said, recursive DFS, i.e., backtracking, is the best bet as far as I can tell.
I would ask what are the EXACT operations to support on the new DS. I will assume the required DS is a doubly linked list like Java's LinkedList<>, so here are the main ops:
1- Add/Remove first
2- Add/Remove last
Use the first stack for the head end and the second stack for the tail end. Removing is simply popping from the corresponding stack. So far, everything is O(1).
The problem is when one stack becomes empty, e.g., you pop from stack 2 until it is empty while the next tail element now sits at the bottom of stack 1.
To avoid this situation and keep the stacks balanced, whenever one stack becomes much larger than the other we can use some external memory to rearrange and balance them. A single rebalance costs O(N); e.g.,
say stack 1 has size N and stack 2 has size 3N. In 4N operations we can make them
both of size 2N. We then need at least 4N inserts into one stack, or 4N/3 consecutive removes from one stack, to become unbalanced again. Either way, the amortized cost is between 1 and 3 basic operations per DS function.
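A minimal Java sketch of the idea, using two ArrayDeques as the stacks. The rebalancing policy here (move half of the non-empty stack when the other runs dry, using a scratch stack as the external memory) is one possible choice, not the only one:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TwoStackDeque<T> {
    // head: its top is the front of the deque; tail: its top is the back.
    private final Deque<T> head = new ArrayDeque<>();
    private final Deque<T> tail = new ArrayDeque<>();

    public void addFirst(T x) { head.push(x); }
    public void addLast(T x)  { tail.push(x); }

    // Both removals assume the deque is non-empty.
    public T removeFirst() {
        if (head.isEmpty()) rebalance(tail, head);
        return head.pop();
    }

    public T removeLast() {
        if (tail.isEmpty()) rebalance(head, tail);
        return tail.pop();
    }

    // Move the bottom half of 'from' onto 'to' (the pops reverse it into the
    // correct orientation), keeping the top half in 'from' via a scratch stack.
    // Costs O(n), but happens rarely enough to be amortized O(1).
    private void rebalance(Deque<T> from, Deque<T> to) {
        Deque<T> scratch = new ArrayDeque<>();
        int keep = from.size() / 2;
        for (int i = 0; i < keep; i++) scratch.push(from.pop());
        while (!from.isEmpty()) to.push(from.pop());
        while (!scratch.isEmpty()) from.push(scratch.pop());
    }

    public static void main(String[] args) {
        TwoStackDeque<Integer> d = new TwoStackDeque<>();
        d.addFirst(2); d.addFirst(1); d.addLast(3); d.addLast(4); // deque: 1 2 3 4
        System.out.println(d.removeFirst()); // 1
        System.out.println(d.removeLast());  // 4
        System.out.println(d.removeLast());  // 3
        System.out.println(d.removeLast());  // 2 (forces a rebalance from 'head')
    }
}
```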
Here is a DP-based solution running in O(N).
The first step is to ignore the leading 'a's until we hit the first 'b':
'aabbaba' => 'bbaba'
The second step is to parse the string and divide it into runs of the form 'bbbaaa', represented as tuples.
E.g., 'bbaba' => (2, 1), (1, 1)
The rest is done recursively:
Say we now have the pattern (B1, A1), (B2, A2), ..., (Bm, Am).
Let j be the best run to reverse through, i.e., the one producing the lexicographically smallest string:
Aj Bj Aj-1 Bj-1 ... A1 B1 Bj+1 Aj+1 ... Bm Am.
Then my claim is that "j" is either "1" or the best run for the suffix starting at B2.
Now all we need to do is determine whether A1 B1 B2 A2 ... or Aj Bj ... is the smaller string.
This can be done in O(max(B1 + A1, Bj + Aj)).
Therefore, the overall complexity is O(N).
Here is the full code (Tuple is assumed to be a simple pair class with first() and second() accessors):
public static Tuple<Integer, Integer> minReversedSubString(String s) {
    StringBuilder sb = new StringBuilder();
    // skip the leading 'a' characters
    for (int i = 0; i < s.length(); i++) {
        if (sb.length() == 0) {
            if (s.charAt(i) == 'b') sb.append('b');
        } else {
            sb.append(s.charAt(i));
        }
    }
    if (sb.length() == 0) return new Tuple<Integer, Integer>(0, 0);
    String str = sb.toString();
    // now our string starts with 'b'
    // get the different runs of 'bbbbaaaa'
    // ex: 'bbaabbbba' has runs of (2, 2) and (3, 1)
    ArrayList<Tuple<Integer, Integer>> runs = getRuns(str);
    int index = bestReverse(runs, 0);
    int pos = 0;
    for (int i = 0; i < index + 1; i++) {
        pos += runs.get(i).first();
        pos += runs.get(i).second();
    }
    int first_b = s.length() - str.length();
    return new Tuple<Integer, Integer>(first_b, first_b + pos - 1);
}

public static int bestReverse(ArrayList<Tuple<Integer, Integer>> runs, int i) {
    if (i == runs.size() - 1) return i;
    int index = bestReverse(runs, i + 1);
    if (runs.get(i).second() > runs.get(index).second()) return i;
    else if (runs.get(i).second() < runs.get(index).second()) return index;
    else {
        if (runs.get(i).first() < runs.get(index).first()) return i;
        else return index;
    }
}

public static ArrayList<Tuple<Integer, Integer>> getRuns(String str) {
    ArrayList<Tuple<Integer, Integer>> runs = new ArrayList<>();
    int b = 0, a = 0;
    for (int i = 0; i < str.length(); i++) {
        if (str.charAt(i) == 'b') {
            if (a > 0) {
                // a run of 'b's followed by 'a's just ended
                runs.add(new Tuple<>(b, a));
                a = 0;
                b = 0;
                i--; // reprocess this 'b' as the start of the next run
            } else {
                b++;
            }
        } else {
            a++;
        }
    }
    if (b > 0) {
        runs.add(new Tuple<>(b, a));
    }
    return runs;
}
I would use a min-heap as follows:
1- Read a number.
2- Store it in the heap.
3- If heap.size() > 10^5, extract the min (drop the current minimum).
At the end we have exactly the largest 10^5 elements.
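A sketch of this with Java's PriorityQueue (a min-heap), using a small k for illustration:

```java
import java.util.PriorityQueue;

public class TopK {
    // Keep the K largest items seen so far in a min-heap of size K: whenever
    // the heap overflows, the smallest element is exactly the one to evict.
    public static PriorityQueue<Integer> topK(int[] stream, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();
        for (int x : stream) {
            heap.add(x);
            if (heap.size() > k) heap.poll(); // drop the current min
        }
        return heap;
    }

    public static void main(String[] args) {
        PriorityQueue<Integer> top = topK(new int[] {7, 1, 9, 4, 8, 3, 6}, 3);
        System.out.println(top); // holds 7, 8, 9 (printed in heap order)
    }
}
```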
Assuming we can read the stream multiple times, there is an O(1) memory, O(n log n) time solution using binary search.
Initialize low = 0, high = 10^6.
Repeat (around 20 times):
1- Let mid = (low + high) / 2.
2- Read all the numbers and count how many are above mid.
3- Is the count greater than the targeted count (i.e., 10^5)?
YES- low = mid;
NO- if equal, return mid; otherwise, high = mid.
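A sketch of this binary search, simulating the re-readable stream with an array; returning on an exact count assumes, as above, that some threshold has precisely the targeted count above it:

```java
public class StreamSelect {
    // Each pass counts how many stream elements exceed 'mid', then binary-
    // searches for the threshold whose "count above" equals the target.
    // O(1) extra memory, one full pass per iteration.
    public static int threshold(int[] stream, int target, int low, int high) {
        while (low < high) {
            int mid = (low + high) / 2;
            int count = 0;
            for (int x : stream) if (x > mid) count++; // one pass over the stream
            if (count > target) low = mid + 1;         // need a larger threshold
            else if (count == target) return mid;
            else high = mid;                           // need a smaller threshold
        }
        return low;
    }

    public static void main(String[] args) {
        int[] stream = {12, 3, 45, 7, 30, 22, 9, 41};
        // exactly 3 elements (30, 41, 45) exceed the returned threshold
        System.out.println(threshold(stream, 3, 0, 100)); // 25
    }
}
```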
I suppose so. What I have done above is not all that different; it is a graph discovery. I did not use the right terminology, though, since the underlying graph is not a tree but a DAG. The idea of rank still applies, where the rank of a node is its distance to the furthest-away node. Since we calculate the rank of each node only once, the total run time is linear in the number of elements, which is the same as DFS.
I guess DFS is simpler to explain, though. All you need to remember is the depth of your stack.
You can use the void* type as the generic type. At the end of the day, all you need to store for each object is a "pointer" to it.
#include <vector>
using namespace std;

struct Type {}; // any user-defined type

int main() {
    vector<void*> arr;
    int* a = new int;
    *a = 1;
    double* b = new double;
    *b = 2.0;
    Type* t = new Type();
    arr.push_back(a);
    arr.push_back(b);
    arr.push_back(t);
    Type* tt = (Type*) arr[2];
    return 0;
}
However, I would not suggest this, since (void*) does not carry any type information. The better approach, as suggested above, is to use some inheritance, as is done in C#, Java, etc. (the Object class).
- Ehsan June 01, 2014
Unless I got the question wrong, in your example {10, 6, 7, 8, 9} should be the solution. The length is 5 and the difference between the max and min is 4.
The way I understand the examples, you are looking for a subset of the numbers (rather than a subsequence, since you mentioned it is not sequential) such that max - min = size - 1.
The algorithm I found for it is the following:
1) Sort the numbers (ascending).
2) Subtract "i" from the number at index "i".
3) After the subtraction, look for indices with the same value. The longest run of equal values gives your length.
In your example:
1) sort => {0, 6, 6, 7, 8, 9, 10}
2) subtract "i": {0, 5, 4, 4, 4, 4, 4}
3) equal keys (5 x "4") => {6, 7, 8, 9, 10}
For integers, this runs in linear time (count/radix sort). In general, O(N log N) for comparison-based sorting.
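A minimal Java sketch of the three steps. Note one caveat of my own: duplicate values can split a run, so for full generality you may want to de-duplicate after sorting; the example works because the duplicate falls outside the best run:

```java
import java.util.Arrays;

public class LongestConsecutiveSubset {
    // After sorting and subtracting the index, a block of consecutive values
    // becomes a run of equal keys; the longest such run is the answer.
    public static int longestRun(int[] nums) {
        int[] a = nums.clone();
        Arrays.sort(a);
        for (int i = 0; i < a.length; i++) a[i] -= i; // step 2
        int best = 1, run = 1;
        for (int i = 1; i < a.length; i++) {          // step 3: longest equal run
            run = (a[i] == a[i - 1]) ? run + 1 : 1;
            best = Math.max(best, run);
        }
        return best;
    }

    public static void main(String[] args) {
        // {6, 7, 8, 9, 10} is the consecutive block -> length 5
        System.out.println(longestRun(new int[] {10, 0, 6, 7, 8, 9, 6}));
    }
}
```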
You will be shifting everything left. Use pointers s and d, and copy s into d. If str[s] == ',', advance s so that only non-comma characters get copied into d.
#include <iostream>
using namespace std;

int main() {
    char str[20] = "123,45,6,7,9";
    int d = 0, s = 0;
    while (str[s] != '\0') if (str[s] == ',') s++; else str[d++] = str[s++];
    str[d] = '\0'; // terminate after the shifted content
    cout << str << endl;
    return 0;
}
Given that you were told the numbers are 32-bit integers, they were probably looking for some bit-level magic, or a probabilistic approach as explained above.
I decided to try a different approach just for fun and wrote a divide-and-conquer solution. I can argue that the complexity is at worst O(N^2) but on average O(N log(N)). It uses stack memory for recursion, so O(log(N)) memory. I am also assuming that I can modify the elements of the array and that all elements are positive.
#include <iostream>
#include <cstdio>
using namespace std;

bool exists(int, int[], int, int);
bool firstUnique(int[], int, int, int&);

int main() {
    int a[10] = { 1, 3, 5, 2, 5, 10, 3, 1, 9, 2 };
    int pos = -1;
    firstUnique(a, 0, 10, pos);
    cout << a[pos] << endl;
    getc(stdin);
}

bool firstUnique(int a[], int start, int end, int& pos) {
    if (end - start == 1) {
        pos = a[start] > 0 ? start : pos;
        return a[start] > 0;
    }
    if (firstUnique(a, start, (start + end) / 2, pos)) {
        if (!exists(a[pos], a, (start + end) / 2, end)) return true; // found it
        else {
            a[pos] = -a[pos];
            return firstUnique(a, pos + 1, end, pos);
        }
    } else {
        return firstUnique(a, (start + end) / 2, end, pos);
    }
}

bool exists(int num, int arr[], int start, int end) {
    bool found = false;
    for (int i = start; i < end; i++) if (num == arr[i]) {
        arr[i] = -arr[i]; // mark as seen
        found = true;
    }
    return found;
}
IDEA: Divide the array into two halves. Find the first unique element in the first half and check whether it occurs in the second half.
If not, return true and the corresponding index.
If yes, mark all of its occurrences as "seen" (i.e., negate the numbers) and solve the problem for the smaller range [pos + 1, end], where pos is the index of the unique element found in the first half.
At worst, the size decreases by one each time, which leads to O(n^2). The average performance, I believe, is much faster.
Please let me know your thoughts on this.
A run of length > 1 has the property that at some index, s[i] == s[i - 1]. However, since we want to report the beginning of the run, we must have s[i] == s[i - 1] and s[i - 1] != s[i - 2]. Then (i - 1) is the offset.
#include <iostream>
#include <string>
#include <vector>
using namespace std;
int main() {
    string s = "aabbccdaddddaddaddaeeeddedeeeeeeedsddfffkfkfkkk";
    vector<int> offset;
    if (s.size() > 1 && s[0] == s[1]) offset.push_back(0);
    for (int i = 2; i < (int) s.size(); i++)
        if (s[i] == s[i - 1] && s[i - 1] != s[i - 2]) offset.push_back(i - 1);
    for (auto it = offset.begin(); it != offset.end(); ++it) cout << *it << " ";
    return 0;
}
Thanks for the comment.
You only skip over a word if:
A) it does not belong to the list, or
B) it belongs to the list but has been counted more than once in the window.
In your example, if "first" has been counted multiple times, then we skip over it. But when we reach "seen", we check whether another "seen" has already been spotted (by the end-pointer at some previous iteration). If so, its count has to be > 1 and we can skip over it. Otherwise, the loop breaks and the begin-pointer does not increase.
Note that I only count a word when the end-pointer reaches one.
"First" and "Shortest" are two different things.
Let N = length of paragraph and W = length of word list
The following is an O(N) time and O(W) space solution
The idea is that you keep two pointers to the beginning and end of the window. Also, keep a list of the number of occurrences of each of the given words in the window.
A) Increase the end-pointer and each time, if the newly seen word belongs to our list, add its counter
B) If you have seen all the required pairs, report the window (add it to "pairs")
C) Increase the begin-pointer under either of the following conditions:
C-1) The word the begin-pointer refers to does not belong to the list
C-2) The word belongs to the list but occurs more than once in the window (this keeps the window as short as possible)
Continue until you reach the end of the paragraph.
Complexity analysis: O(W) space to remember the count and O(N) time for the while loop.
To see the O(N) time note that each word in paragraph is visited "at most" twice, once by end-pointer and once by begin-pointer.
words = ['list', 'of', 'words']
string = 'this part of text that includes words and lists but not list of words and if you need a list which contains many more words you should try a longer list of terms or list of words'
para = string.split(' ')
count_words = dict()
for w in words:
    count_words[w] = 0
pos_begin = 0
pos_end = 0
not_seen = len(words)
pairs = []
while pos_end < len(para):
    if para[pos_end] in words:
        if count_words[para[pos_end]] == 0:
            not_seen -= 1
        count_words[para[pos_end]] += 1
    while pos_begin < pos_end:
        if para[pos_begin] in words:
            if count_words[para[pos_begin]] == 1:
                break
            else:
                count_words[para[pos_begin]] -= 1
        pos_begin += 1
    # We might have found a shortest window
    if not_seen == 0:
        pairs.append([pos_begin, pos_end])
    pos_end += 1
print(pairs)
While this is a nice observation, it answers a different question, and perhaps under a different model.
You are asked to quantify the ratio of girls to boys, not what happens to the whole population. What if there are no couples and the villagers are self-cloning aliens?
Regardless, based on your model, at any generation "t" we have E[boys] = E[girls]. This expected value could be vanishingly small under a marriage model, or excessively large under a self-cloning model.
Remove the nested for loop. Instead of generating all pairs, simply add the following line:
count += len1 * len2;
In the main while loop, every iteration increases pos_1 or pos_2 (or both) by at least 1, so it takes at most 2N iterations.
When printing the pairs, the reason you get O(N^2) is an input like this:
int[] arr = {1, 1, 1, 1, 5, 5, 5, 5, 5, 5}
where there are N x M pairs to list (N = 4 and M = 6 in this example). But if you only need the count, then simply add N * M to the count and increase pos_1 by N and pos_2 by M.
- Ehsan March 18, 2014
The ratio is 1. Think about it this way.
Given "N" families, "N / 2" of them will have a single boy. The other "N / 2" will have a girl as their first child.
Now assume the second "N / 2" families give their first-born (a girl) to the first "N / 2" families. Then we have "N / 2" families with 1 boy and 1 girl, and "N / 2" families that keep trying until they get a boy. By recursion, you find that the ratio has to be one.
His approach is correct. Not the fastest, but correct, and it needs almost 2000N comparisons.
For each new number, you make a comparison with the min of the 1000 kept numbers. Then, if you replace it, you do another 1000 comparisons to find the next min. This adds up to 2000 comparisons, and 2000N in total since you repeat for each of the N numbers.
Thanks.
Actually, it is not O(N^2) as long as the number of pairs of indices is not O(N^2). In each iteration of the while loop, I increase either pos_1 or pos_2, so there are at most 2N iterations if there are no repeated elements.
It is best stated as O(N + M), where M is the number of pairs. In the example above, M = 9.
This idea is from the 3-SUM problem, but it only works on sorted arrays.
Technically, you could go through the array 1000 times, each time finding the max value that is smaller than the previous max you found. This will require almost 2000N comparisons.
Another approach, with an extra O(n) space, is this:
1) Build a max-heap in O(N) time and space (I guess it will take around 3N swaps and comparisons).
2) Extract-Max 1000 times, which takes at worst 2000 log(N) time.
We all spend time working on these types of problems. The reason is to learn and give feedback. While everyone is entitled to down/up-vote whatever they want, it is best if feedback is provided. How on earth would you downvote a correct solution? Unless you found an error and would like to share it, in which case your efforts would be much appreciated.
- Ehsan March 12, 2014
You should have asked for the dictionary API.
1) If it is a HashMap<String, String>, then there is not much to do except check all pairs.
2) If it is a binary search tree that supports ceil and floor, OR a more sophisticated structure that can tell you, given a string "abc", whether any word has "abc" as a prefix, then you can backtrack and search only through the words made of your letters.
Rough idea:
static int max_length;
static String longest_word;

backtrackingFunc(String prefix) {
    if (!dictionary.containsPrefix(prefix))
        return;
    if (prefix.length() > max_length) {
        longest_word = prefix;
        max_length = prefix.length();
    }
    for (int i = 0; i < given_str.length(); i++) {
        backtrackingFunc(prefix + given_str.charAt(i));
    }
}
With the second approach you will, at worst, search through all the words in the dictionary, but that is the case where the dictionary only contains words made of the given letters.
To get an idea, consider the case where you have the letters "a, b, c" and a very large dictionary.
Then, to find a word of length "N" made of "a, b, c", you will at worst call the dictionary 3^N times, but that is the case where all such combinations exist in the dictionary.
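A runnable sketch of the rough idea: containsPrefix is not a standard API, so here it is emulated with a sorted TreeSet, using the fact that some word starts with a prefix iff ceiling(prefix) does. Unlike the pseudocode above, this version also checks that the prefix is a complete word before recording it (the dictionary and letters are made-up examples):

```java
import java.util.TreeSet;

public class LongestWordFromLetters {
    // In a sorted set, some word has 'prefix' as a prefix
    // iff the smallest word >= prefix starts with it.
    public static boolean containsPrefix(TreeSet<String> dict, String prefix) {
        String c = dict.ceiling(prefix);
        return c != null && c.startsWith(prefix);
    }

    public static String longest = "";

    // Extend the prefix with each available letter (letters may repeat),
    // pruning any branch that cannot lead to a dictionary word.
    public static void backtrack(TreeSet<String> dict, String letters, String prefix) {
        if (!containsPrefix(dict, prefix)) return;
        if (dict.contains(prefix) && prefix.length() > longest.length()) longest = prefix;
        for (int i = 0; i < letters.length(); i++) {
            backtrack(dict, letters, prefix + letters.charAt(i));
        }
    }

    public static void main(String[] args) {
        TreeSet<String> dict = new TreeSet<>();
        for (String w : new String[] {"cab", "cabba", "dog", "abc"}) dict.add(w);
        longest = "";
        backtrack(dict, "abc", "");
        System.out.println(longest); // "cabba"
    }
}
```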
First of all, what I would do is use a
HashMap<Integer, LinkedList<Integer>> hashMap;
so that I could also store the locations of the matching indices. If you just need the count, use hashMap.get(i).size(). This way you can identify the pairs as well as the number of repetitions.
However, the array in your example is sorted. In that case, you can actually solve it in O(1) space and O(N + M) time, where "N" is the length of the input and "M" is the number of pairs of indices with a difference equal to "K". I used the idea from an O(1)-space solution for the 3-sum problem (google it).
Keep two pointers pos_1 <= pos_2. If the array is sorted ascendingly, then array[pos_2] >= array[pos_1].
1- Compute cmp = array[pos_2] - array[pos_1].
2- If "cmp" is smaller than "k", the desired difference, increase pos_2 to obtain a larger difference. Else, increase pos_1 to obtain a smaller one.
3- If cmp == k, then you have found a pair [pos_1, pos_2]. All that remains is to resolve the repeated-element case: increase pos_1 until the value changes, do the same for pos_2, and then generate all possible pairs (pos_1, pos_2) over the two ranges you found.
Here is the Java code.
public static void main(String[] args) {
    int[] arr = {1, 1, 5, 5, 5, 6, 9, 16, 27};
    listKdiff(arr, 4);
}

public static void listKdiff(int[] arr, int k) {
    int pos_1 = 0;
    int pos_2 = 0;
    while (pos_2 < arr.length) {
        int cmp = arr[pos_2] - arr[pos_1];
        if (cmp < k) {
            pos_2++;
        } else if (cmp > k) {
            pos_1++;
        } else {
            // lengths of the runs of equal values starting at pos_1 and pos_2
            int len1 = 0, len2 = 0;
            while (arr[pos_1 + len1] == arr[pos_1]) {
                len1++;
            }
            while (arr[pos_2 + len2] == arr[pos_2]) {
                len2++;
                if (len2 + pos_2 == arr.length - 1) {
                    break;
                }
            }
            // print all pairs of duplicates
            for (int i = 0; i < len1; i++) {
                int a = pos_1 + i;
                for (int j = 0; j < len2; j++) {
                    int b = pos_2 + j;
                    System.out.println("[" + a + ", " + b + "]");
                }
            }
            pos_1 = pos_1 + len1;
            pos_2 = pos_2 + len2;
        }
    }
}
This is the output I got. The first 6 pairs correspond to combinations of the values (1, 5) and the last three to combinations of (5, 9) (I printed the indices):
[0, 2]
[0, 3]
[0, 4]
[1, 2]
[1, 3]
[1, 4]
[2, 6]
[3, 6]
[4, 6]
Hi, thanks for the feedback. Having the parent in a node is reasonable; otherwise the tree cannot support many operations, including balancing and deletion. But this is not important here.
Also, please provide me with a test case where the code fails. It works for me here :).
I will first explain the code posted above, then attach a program that does the counting for tree nodes without parent pointers.
I should say that both algorithms run in O(n^2). I have a feeling faster than n^2 might not exist.
Let me give you an example; "A" is the root:
(A, 0)
(C, 0) (F, 0)
(B, 0) (D, 0) (E, 0) (G, 0)
1) A: by calling A.floodInc(), "A" increases its value and sends the message to C and F. "A" sets asked_children = true so that it will not ask them again.
(A, 1)
(C, 0) (F, 0)
(B, 0) (D, 0) (E, 0) (G, 0)
2) C, F do the same. But since they have a parent "A", they will also send a floodInc to "A", so A receives the "Inc" command twice. Note that this time "A" will not ask its children to increment, due to the asked_children flag. (C and F also forward the command to their own children, which is why B, D, E, G are already at 1 below.)
(A, 3)
(C, 1) (F, 1)
(B, 1) (D, 1) (E, 1) (G, 1)
3) B, D, E, G will do the same. Let's trace B's message
First, it is received at C, so C increases its value
(A, 3)
(C, 2) (F, 1)
(B, 1) (D, 1) (E, 1) (G, 1)
Then C will resend the message to its parent.
(A, 4)
(C, 2) (F, 1)
(B, 1) (D, 1) (E, 1) (G, 1)
The same process happens for D, E, G, ..
(A, 7)
(C, 3) (F, 3)
(B, 1) (D, 1) (E, 1) (G, 1)
Code for no parent case:
public class BSTSimple {
    private static BSTSimple dummy;

    public static void main(String[] args) {
        dummy = new BSTSimple();
        int[] arr = {8, 4, 12, 2, 1, 14, 11, 18};
        Node root = null;
        for (int i = 0; i < arr.length; i++) {
            Node n = insert(root, arr[i]);
            if (root == null) {
                root = n;
            }
        }
        while (incValue(root));
        System.out.println(root.value);
        System.out.println(root.left.value);
    }

    public class Node {
        public Node left, right;
        public int key;
        public int value;
        public boolean counted = false;

        public Node(int key, int value) {
            this.key = key;
            this.value = value;
        }
    }

    // Each call counts exactly one not-yet-counted node; returns false when done.
    public static boolean incValue(Node n) {
        if (n.left == null && n.right == null) {
            if (n.counted) {
                return false;
            }
            n.value += 1;
            n.counted = true;
            return true;
        }
        if (n.left != null) {
            if (incValue(n.left)) {
                n.value++;
                return true;
            }
        }
        if (n.right != null) {
            if (incValue(n.right)) {
                n.value++;
                return true;
            }
        }
        if (!n.counted) {
            n.counted = true;
            n.value++;
            return true;
        }
        return false;
    }

    public static Node insert(Node root, int key) {
        if (root == null) {
            return dummy.new Node(key, 0);
        }
        Node node = root;
        while (node.key != key) {
            if (node.key > key) {
                if (node.left == null) {
                    node.left = dummy.new Node(key, 0);
                }
                node = node.left;
            } else {
                if (node.right == null) {
                    node.right = dummy.new Node(key, 0);
                }
                node = node.right;
            }
        }
        return node;
    }
}
I am building the tree from the bottom up, non-recursively.
If N = 2^m, the algorithm runs in O(1) extra space and returns a perfectly balanced tree with "N" elements.
If N = 2^m + k, the algorithm still runs in O(1) extra space, but it first builds a tree that may have up to "m" extra placeholder nodes. These nodes are then removed.
public class Node {
    public Node left, right, parent;
    public int key;

    public Node(Node parent) {
        this.parent = parent;
    }
}

public Node getRoot(int[] sorted_array) {
    // Assuming the sort is ascending
    int N = sorted_array.length;
    int count = 0;
    int max_h = (int) (Math.log(N) / Math.log(2));
    int h = max_h;
    Node root = new Node(null);
    Node node = root;
    // Build the tree in O(N) time, descending to the leftmost leaf first
    while (count < N) {
        if (node.left == null && h == 0) {
            node.key = sorted_array[count++];
            node = node.parent;
            h++;
        } else if (node.left == null) {
            node.left = new Node(node);
            node = node.left;
            h--;
        } else if (node.left != null && node.right == null) {
            node.key = sorted_array[count++];
            node.right = new Node(node);
            node = node.right;
            h--;
        } else {
            node = node.parent;
            h++;
        }
    }
    // Remove the unfilled placeholder nodes. Note: key == 0 is used as the
    // "never filled" marker, so this assumes 0 is not a valid array element.
    while (node != root) {
        node = node.parent;
        if (node.key == 0) {
            node.parent.left = node.left;
            node.left.parent = node.parent;
        }
    }
    return root;
}
Java code.
public static void draw(int N) {
    if (N > 0) {
        int m = 2 * (N - 1) + 1;
        int mid = N - 1;
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < m; j++) {
                int d = Math.max(Math.abs(i - mid), Math.abs(j - mid)) + 1;
                System.out.print(d);
            }
            System.out.print("\n");
        }
    }
}
I attached a simple implementation in Java. The idea I used is this:
1- Flood an increment command from the root (this is recursive).
2- When a node receives the command, it calls inc() (adds 1), forwards the command to its parent, and forwards it to its children. A flag makes sure the children are asked only once.
To get the number of nodes in the tree, this method must be called from the root of the tree.
root.floodInc();
System.out.println(root.value);
Node:
public class Node {
    Node left, right, parent;
    int value;
    int key;
    boolean asked_children = false;

    public void inc() {
        this.value += 1;
    }

    public void floodInc() {
        this.inc();
        if (parent != null) {
            parent.floodInc();
        }
        if (!asked_children) {
            asked_children = true;
            if (left != null) {
                left.floodInc();
            }
            if (right != null) {
                right.floodInc();
            }
        }
    }

    public Node(int key, int value, Node parent) {
        this.parent = parent;
        this.value = value;
        this.key = key;
    }
}
The solution I have in mind has two parts:
- Ehsan November 28, 2014
A) From the words, figure out which letter precedes which. This gives you a DAG where there is an edge from char 'a' to char 'b' only if 'a' precedes 'b' in the order.
B) Run DFS to get a topological order. The topological order is the reverse of the order in which nodes finish (are marked fully visited).
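Part B can be sketched as follows (the precedence edges here are a made-up example, not derived from an actual word list):

```java
import java.util.*;

public class TopoSort {
    // DFS topological order: a node is appended when it is fully explored,
    // so the reverse of the finish order is a valid topological order.
    public static List<Character> topoOrder(Map<Character, List<Character>> adj) {
        List<Character> finished = new ArrayList<>();
        Set<Character> visited = new HashSet<>();
        for (Character v : adj.keySet()) dfs(v, adj, visited, finished);
        Collections.reverse(finished);
        return finished;
    }

    private static void dfs(Character v, Map<Character, List<Character>> adj,
                            Set<Character> visited, List<Character> finished) {
        if (!visited.add(v)) return; // already explored
        for (Character w : adj.getOrDefault(v, Collections.emptyList()))
            dfs(w, adj, visited, finished);
        finished.add(v); // fully explored
    }

    public static void main(String[] args) {
        // Example precedence edges: a -> b, a -> c, b -> c
        Map<Character, List<Character>> adj = new LinkedHashMap<>();
        adj.put('a', Arrays.asList('b', 'c'));
        adj.put('b', Arrays.asList('c'));
        adj.put('c', Collections.emptyList());
        System.out.println(topoOrder(adj)); // [a, b, c]
    }
}
```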