If the var is 0 initially, the minimum final value can be 2 and the maximum 200.
To answer Huwanda's question, here is one interleaving that produces the value 2 (each process increments 100 times).
P1 and P2 both read var = 0 into their registers. P1 gets the CPU and increments 99 times, so its register holds 99 and the cached value of var = 99; however, P2's register still holds 0.
P1 is context switched out and P2 gets the CPU. Because P2 already has a copy of var in its register, it increments it to 1. When P2 writes it back to the cache, it overwrites P1's value; the cache now contains var = 1.
P2 is context switched out and P1 gets the CPU. P1 reads var = 1 from the cache into its register (for its 100th increment) and is context switched out.
P2 gets the CPU, copies var into its register and performs its remaining 99 increments, making var = 100 in the cache. However, the value already copied into P1's register is still 1.
P2 finishes and P1 gets the CPU. P1 already has var = 1 in its register (if it read from the cache it would see 100, but since the value is already in a register, it won't consult the cache). P1 increments var to 2 and overwrites the cached value, which was 100 before. Final value = 2.
The cache can be replaced with memory (RAM) in the above analysis, because the two are kept coherent by the hardware architecture. However, variables copied into registers can do the damage if memory locations are not properly synchronized.
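The lost update above is exactly what makes an unsynchronized counter unsafe. Here is a minimal Java sketch (illustrative, not from the original post): two threads increment a plain int and an AtomicInteger 100 times each; the plain counter may lose updates under an unlucky interleaving, while the atomic one cannot.

```java
import java.util.concurrent.atomic.AtomicInteger;

class CounterRace {
    static int plain = 0;                            // unsynchronized shared counter
    static AtomicInteger atomic = new AtomicInteger(0);

    static void run() {
        Runnable work = () -> {
            for (int i = 0; i < 100; i++) {
                plain++;                             // read-modify-write, not atomic
                atomic.incrementAndGet();            // atomic read-modify-write
            }
        };
        Thread p1 = new Thread(work), p2 = new Thread(work);
        p1.start();
        p2.start();
        try {
            p1.join();
            p2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        run();
        // plain can be anywhere from 2 to 200 depending on the interleaving;
        // atomic is always 200
        System.out.println("plain = " + plain + ", atomic = " + atomic.get());
    }
}
```

With only 100 iterations per thread the race rarely manifests in practice, but nothing rules it out; the atomic counter is guaranteed to end at 200.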
We can use an ArrayList to store the old root and the new root of the list.
Before the call, the list contains only the old root; after the call, it contains the old root and the new root.
static void reverseList(Node head, ArrayList<Node> rootContainer) {
    if (head == null) return;
    else if (head.next == null) {
        rootContainer.add(head);
        return;
    }
    else {
        Node child = head.next;
        reverseList(child, rootContainer);
        head.next.next = head;
        if (head == rootContainer.get(0)) {
            head.next = null;
        }
    }
}
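A small harness for the method above (Node here is a minimal assumed class with just a value and a next link); the caller seeds rootContainer with the old root, and after the call the new root sits at index 1:

```java
import java.util.ArrayList;

class ReverseDemo {
    static class Node {
        int val;
        Node next;
        Node(int v) { val = v; }
    }

    static void reverseList(Node head, ArrayList<Node> rootContainer) {
        if (head == null) return;
        if (head.next == null) {              // last node becomes the new root
            rootContainer.add(head);
            return;
        }
        Node child = head.next;
        reverseList(child, rootContainer);
        head.next.next = head;                // reverse the link behind us
        if (head == rootContainer.get(0))     // old root becomes the tail
            head.next = null;
    }

    // helper: collect values by following next links
    static ArrayList<Integer> toList(Node head) {
        ArrayList<Integer> out = new ArrayList<>();
        for (Node n = head; n != null; n = n.next) out.add(n.val);
        return out;
    }

    public static void main(String[] args) {
        Node a = new Node(1), b = new Node(2), c = new Node(3);
        a.next = b;
        b.next = c;
        ArrayList<Node> roots = new ArrayList<>();
        roots.add(a);                         // seed with the old root
        reverseList(a, roots);
        System.out.println(toList(roots.get(1))); // prints [3, 2, 1]
    }
}
```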
You don't need that much complexity to get O(m+n) for two sorted arrays. This is merely a linear scan of the two arrays. E.g., for {1,3,5,6,7,9} and {1,2,5,6,7,12}, a linear scan suffices to find the longest common contiguous run:
1st match starts at 1, length = 1
scan both arrays forward till the next match
2nd match starts at 5, length = 3
done (there is no going forward and backward that could make it O(m*n)).
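The linear scan above can be sketched as follows; it returns the starting value and length of the longest common contiguous run of the two sorted arrays (names illustrative):

```java
class CommonRun {
    // returns {start value, length} of the longest common contiguous run
    // of two sorted arrays; {-1, 0} if there is no common element
    static int[] longestCommonRun(int[] a, int[] b) {
        int i = 0, j = 0;
        int best = 0, bestStart = -1;
        int cur = 0, curStart = -1;
        while (i < a.length && j < b.length) {
            if (a[i] == b[j]) {
                if (cur == 0) curStart = a[i];   // a new run begins here
                cur++;
                if (cur > best) { best = cur; bestStart = curStart; }
                i++; j++;
            } else {
                cur = 0;                          // run broken: advance the smaller side
                if (a[i] < b[j]) i++; else j++;
            }
        }
        return new int[]{bestStart, best};
    }

    public static void main(String[] args) {
        int[] r = longestCommonRun(new int[]{1, 3, 5, 6, 7, 9},
                                   new int[]{1, 2, 5, 6, 7, 12});
        System.out.println(r[0] + ", " + r[1]); // prints 5, 3
    }
}
```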
That's a great point Cerberuz, and it might make the problem more challenging. There are two possible interpretations of the problem statement.
1. Find k closest points such that the sum of distances among all the points is minimized.
2. Find a circle with the smallest diameter that can contain k points (k closest points in as small a space as possible).
I think the problem statement hints towards the second interpretation, so we need to find k elements such that the maximum distance between any two of them is minimized (smallest diameter). However, even in the second case, things are not easy. Minimizing the sum of distances of k-1 points from the center (the kth point) does not ensure minimizing the diameter of the circle (which my earlier post assumed). The diameter represents the maximum distance between any two points in the sub-cluster. Let us see how we can minimize the diameter:
(i) Initially maintain an NxN matrix and compute the distances of all points from each other. O(n^2)
(ii) Take a point p1 and find the k-1 closest elements to p1. O(n)
(iii) Maintain a heap of the distances of all k points from each other. Building the heap takes O(k^2) time and O(k^2) space. Also compute, for each of the k points, Sk: the sum of its distances to the other k-1 points.
(iv) Get the next point (k+1) and find its maximum distance to any of the k points. If this maximum is greater than the top of the max heap, ignore this point and move to the next one. Else, of the two points associated with the max-heap's top distance, remove the one whose Sk is larger. Remove all k distances of that point from the heap, and insert the new point's k distances. O(klgk + klgk) = O(klgk)
(v) Repeat step (iv) for the rest of the points.
Total time complexity = O(n^2) + O(k^2 + nklgk) = O(n^2), assuming klgk = O(n).
The issue with pre-order or post-order is that the order is not sorted. Looking for one node from tree a in tree b will no longer be a binary search, so pre-order and post-order traversals will not give any benefit. The linear scan that we could do in case of in-order traversal will not be possible here, therefore achieving O(n) complexity will be hard.
- famee January 09, 2013
We can pre-process the dictionary into a hashmap such that the key is the sorted letters of a word and the value is the list of all actual words (anagrams) in the dictionary with those letters. Then we can use a backtracking recursive algorithm: start with the smallest word that can be constructed from the leading letters and keep going until we find a mismatch in the dictionary, at which point we backtrack and try the next larger word, and so on. Time complexity: sorting a word's letters is O(klgk), where k is the length of the longest word (O(1) if words are short); a dictionary lookup is O(1); constructing the new sentence with spaces is O(n). The recursive solution in the best case (no backtracking) is O(n + klgk), where n is the total number of letters in the sentence. In the worst case, the algorithm becomes O(n^2).
If the dictionary has anagrams, we can replace the shuffled words with any of the anagrams (we are only required to make a sentence out of dictionary words).
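The pre-processing step can be sketched as follows; buildIndex and lookup are illustrative names:

```java
import java.util.*;

class AnagramIndex {
    // key = the word's letters sorted; value = all dictionary words with those letters
    static Map<String, List<String>> buildIndex(List<String> dict) {
        Map<String, List<String>> index = new HashMap<>();
        for (String w : dict) {
            char[] c = w.toCharArray();
            Arrays.sort(c);
            index.computeIfAbsent(new String(c), k -> new ArrayList<>()).add(w);
        }
        return index;
    }

    // look up a shuffled word: any stored anagram is an acceptable replacement
    static List<String> lookup(Map<String, List<String>> index, String shuffled) {
        char[] c = shuffled.toCharArray();
        Arrays.sort(c);
        return index.getOrDefault(new String(c), Collections.emptyList());
    }

    public static void main(String[] args) {
        Map<String, List<String>> index =
                buildIndex(Arrays.asList("cat", "act", "dog", "god"));
        System.out.println(lookup(index, "tca")); // prints [cat, act]
    }
}
```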
This looks like a clustering problem, except that we need to find just one cluster of the k closest elements rather than clusters for all elements (i.e., k-means clustering, which is NP-hard). We can have an O(n^2 lgn) solution: for each point we compute its distance from the other points, giving a list of n distances per point. We can sort each list in O(nlgn), take the k-1 smallest distances, and compare their sum with the minimum sum of distances found so far, replacing it if smaller.
Total time complexity = n * (O(n) + O(nlgn)) = O(n^2 lgn).
We can use the order-statistics (selection) method to find the k-1 smallest elements in each list, which makes the algorithm O(n^2).
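The O(n^2 lgn) version can be sketched as follows (Euclidean distances; the anchor point plays the role of the kth point, and names are illustrative):

```java
import java.util.Arrays;

class KClosest {
    // returns the minimum, over all anchor points, of the sum of
    // distances from the anchor to its k-1 nearest neighbours
    static double bestClusterSum(double[][] pts, int k) {
        int n = pts.length;
        double best = Double.MAX_VALUE;
        for (int i = 0; i < n; i++) {
            double[] d = new double[n];
            for (int j = 0; j < n; j++)
                d[j] = Math.hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]);
            Arrays.sort(d);                          // d[0] == 0 (distance to itself)
            double sum = 0;
            for (int j = 1; j < k; j++) sum += d[j]; // the k-1 nearest neighbours
            best = Math.min(best, sum);
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] pts = {{0, 0}, {1, 0}, {0, 1}, {10, 10}};
        // best anchor is (0,0) with neighbours (1,0) and (0,1): 1 + 1 = 2
        System.out.println(bestClusterSum(pts, 3)); // prints 2.0
    }
}
```

Replacing the full sort by a selection of the k-1 smallest distances gives the O(n^2) variant mentioned above.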
Let us say the number of nodes in tree a is M and in tree b is N. In the naive method, we can search for each element of one tree in the other tree. Searching costs O(M lg N) (or simply O(nlgn) if M ~ N), and then matching each candidate subtree costs O(M) in the worst case, until the nodes no longer match. We keep track of the node where the match started and the length of the longest common sub-tree discovered so far, and replace these variables if the next match is longer. Overall cost = O(M lg N) + O(M) = O(M lg N). If M ~ N, cost = O(nlgn).
Can we improve on this? Let us think of the BSTs as sorted arrays (which we can obtain by an O(n) in-order traversal of each tree). Our problem is then similar to finding the longest common substring of two sorted arrays, which takes O(N+M) time. Once we find the longest common run, we need to re-construct the common sub-tree from it. Overall complexity = O(n), which seems to beat the other strategy.
EDIT: Thanks to das. The longest common run in the in-order traversals also needs to be rechecked for whether the common sub-trees are isomorphic. If this check is performed a constant number of times, our complexity is still O(n); otherwise it may also become O(nlgn).
We can attempt this problem the following way:
Case 1: If the range [min, max] >> the number of elements N in the list, we can generate a random number "min + rand(max-min+1)". Sort the list once, an O(NlgN) preprocessing cost. Binary search the number in the list; if found, generate another number, else return the number. We have a high probability of hitting a number not in the list because the range [min, max] is very large. O(lg N) time, O(1) space to generate one random number.
Case 2: If the range [min, max] = O(N) and there are few elements not in the list, we can build another list of the missing elements (which will be far fewer than N) and generate a random number out of this missing-elements list. If there is just one missing element, it will be output every time. O(1) time, O(1) space because there are few elements in the new array.
Case 3: If the range [min, max] = O(N) and the missing numbers are within a constant factor of the numbers in the list, we can use Case 1, with the disadvantage of unbounded time to get a random number (it may keep hitting elements in the list) but with the advantage of no additional space; or we can use Case 2, with the disadvantage of O(N) additional space but O(1) worst-case generation time.
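Case 1 can be sketched with rejection sampling plus a binary search (method name illustrative):

```java
import java.util.Arrays;
import java.util.Random;

class RandomExcluding {
    // Case 1 above: range >> list size, so rejection sampling terminates quickly.
    // 'sorted' must be sorted ascending; the caller guarantees that at least one
    // value in [min, max] is not in the list.
    static int randomNotInList(int[] sorted, int min, int max, Random r) {
        while (true) {
            int candidate = min + r.nextInt(max - min + 1);
            if (Arrays.binarySearch(sorted, candidate) < 0)  // not found => usable
                return candidate;
        }
    }

    public static void main(String[] args) {
        int[] list = {3, 5, 7};
        int x = randomNotInList(list, 1, 1000, new Random());
        System.out.println(x); // some value in [1, 1000] other than 3, 5, 7
    }
}
```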
Good point Arvind. As long as we only need to output the line with the maximum number of points, the algo should still be okay with +1 increments. However, if we need to know the exact number of points that fall on a single line, we can do the following:
For each line, we nominate a parent point: whichever point on this line was encountered first. Then, in the two loops over points i and j (i being the outer loop), we increment the line count by 2 on the first occurrence of a line in the map, and by 1 on each subsequent occurrence of the same line with the same parent. On later occurrences of the line with a different parent, we do not increment the count. As an example, for the 3 points (1,1), (2,2), (3,3), all lying on the same line, we increment the count by 2 for (1,1),(2,2) and by 1 for (1,1),(3,3), both with parent (1,1), but we do not increment the count for (2,2),(3,3) because these two points have already been accounted for in an earlier loop with parent (1,1). So we have a count of 3 in the map, which is the correct answer. This means at each map check we also need to check the parent of the line, which adds to the complexity of the algorithm, but it is needed to get the exact number of points lying on a line.
If (pt1, pt2) and (pt3, pt4) have the same slope, the algo will consider them as passing through a single line even though they lie on parallel lines.
Without storing the y-intercept, it is not possible to decide whether different point pairs with the same slope lie on a single line or not.
Moreover, storing duplicate slopes in an array will waste space; the HashMap count mechanism takes care of that.
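A sketch of the (slope, y-intercept) bookkeeping; the maximum pair count p for any line is converted back to a point count k via p = k(k-1)/2. Floating-point slope keys are fragile in general, so this is illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

class MaxPointsOnLine {
    // counts point pairs per (slope, intercept) key; a line through k points
    // is counted p = k*(k-1)/2 times, so k = (1 + sqrt(1 + 8p)) / 2
    static int maxPointsOnLine(int[][] pts) {
        Map<String, Integer> counts = new HashMap<>();
        int bestPairs = 0;
        for (int i = 0; i < pts.length; i++) {
            for (int j = i + 1; j < pts.length; j++) {
                String key;
                if (pts[i][0] == pts[j][0]) {
                    key = "v:" + pts[i][0];   // vertical line x = const has no slope
                } else {
                    double m = (double) (pts[j][1] - pts[i][1]) / (pts[j][0] - pts[i][0]);
                    double c = pts[i][1] - m * pts[i][0];   // y-intercept
                    key = m + ":" + c;
                }
                bestPairs = Math.max(bestPairs, counts.merge(key, 1, Integer::sum));
            }
        }
        if (bestPairs == 0) return Math.min(pts.length, 1);
        return (1 + (int) Math.round(Math.sqrt(1 + 8.0 * bestPairs))) / 2;
    }

    public static void main(String[] args) {
        // (1,1),(2,2),(3,3) are collinear; (1,5) is not
        System.out.println(maxPointsOnLine(new int[][]{{1,1},{2,2},{3,3},{1,5}})); // 3
        // parallel lines y=x and y=x+1 are kept apart by the intercept in the key
        System.out.println(maxPointsOnLine(new int[][]{{0,0},{1,1},{0,1},{1,2}})); // 2
    }
}
```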
I think Grr1967's own solution is the best possible solution. It is safe to assume that each hashtable insert operation has amortized O(1) cost, so that the overall complexity of the algorithm is O(n^2).
We really don't need an auxiliary stack to construct the BST if we have links to parent nodes.
Here is a java implementation without stack:
TreeNode constructBST (List<Integer> preOrderWalk) {
    TreeNode root = new TreeNode(preOrderWalk.get(0));
    TreeNode n = root;
    TreeNode p = null;
    for (int i = 1; i < preOrderWalk.size(); i++) {
        int key = preOrderWalk.get(i);
        if (key < n.key) {
            TreeNode left = new TreeNode(key);
            left.parent = n;
            n.left = left;
            p = n;
            n = n.left;
        }
        else {
            p = getSuccessor(n);
            while (p != null && key > p.key) {
                n = p;
                p = getSuccessor(n);
            }
            TreeNode right = new TreeNode(key);
            right.parent = n;
            n.right = right;
            p = n;
            n = n.right;
        }
    }
    return root;
}
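The code above relies on a getSuccessor helper that is not shown; with parent links it simply climbs until the current node is no longer a right child. A self-contained sketch (TreeNode is assumed to carry key, left, right, and parent):

```java
import java.util.Arrays;
import java.util.List;

class PreorderBST {
    static class TreeNode {
        int key;
        TreeNode left, right, parent;
        TreeNode(int k) { key = k; }
    }

    // climbs parent links: the first ancestor of which n lies in the left subtree
    static TreeNode getSuccessor(TreeNode n) {
        TreeNode p = n.parent;
        while (p != null && n == p.right) { n = p; p = p.parent; }
        return p;
    }

    static TreeNode constructBST(List<Integer> preOrderWalk) {
        TreeNode root = new TreeNode(preOrderWalk.get(0));
        TreeNode n = root;
        for (int i = 1; i < preOrderWalk.size(); i++) {
            int key = preOrderWalk.get(i);
            if (key < n.key) {                       // descend left
                TreeNode left = new TreeNode(key);
                left.parent = n;
                n.left = left;
                n = left;
            } else {                                  // climb to the right attach point
                TreeNode p = getSuccessor(n);
                while (p != null && key > p.key) {
                    n = p;
                    p = getSuccessor(n);
                }
                TreeNode right = new TreeNode(key);
                right.parent = n;
                n.right = right;
                n = right;
            }
        }
        return root;
    }

    // in-order traversal to verify the BST property
    static void inOrder(TreeNode t, StringBuilder out) {
        if (t == null) return;
        inOrder(t.left, out);
        out.append(t.key).append(' ');
        inOrder(t.right, out);
    }

    public static void main(String[] args) {
        TreeNode root = constructBST(Arrays.asList(8, 5, 3, 6, 9));
        StringBuilder sb = new StringBuilder();
        inOrder(root, sb);
        System.out.println(sb.toString().trim()); // prints "3 5 6 8 9"
    }
}
```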
If we consider the total length of the x-axis as n, then we have a simpler O(n) solution as follows:
static void getSideView (int[] dist, int[][] dim) {
    int numBuildings = dist.length;
    int totalWidth = 0;
    for (int i = 0; i < numBuildings; i++) {
        totalWidth = Math.max(totalWidth, dist[i] + dim[i][1]);
    }
    int[] mask = new int[totalWidth + 1];
    for (int i = 0; i < numBuildings; i++) {
        for (int j = dist[i]; j < dist[i] + dim[i][1]; j++) {
            mask[j] = Math.max(dim[i][0], mask[j]);
        }
    }
    int prevMask = 0;
    int prevIndex = 0;
    for (int i = 0; i <= totalWidth; i++) {
        if (mask[i] == prevMask) {
            continue;
        }
        System.out.print((i - prevIndex) + "R ");
        if (mask[i] > prevMask) {
            System.out.print((mask[i] - prevMask) + "U ");
        }
        else if (prevMask > mask[i]) {
            System.out.print((prevMask - mask[i]) + "D ");
        }
        prevMask = mask[i];
        prevIndex = i;
    }
    System.out.println();
}
Here is an O(nlgn) solution in java. It creates a TreeMap of events (so that all events are in sorted order of occurrence) and maintains a maxHeap masking the active heights of buildings (if removing an element from the heap is not O(lgn), we can use another TreeMap instead of the maxHeap). Note that this sketch assumes distinct event coordinates; coinciding building starts/ends would need a list of events per coordinate.
static void getSideView (int[] dist, int[][] dim) {
    int numBuildings = dist.length;
    Map<Integer, Integer> events = new TreeMap<Integer, Integer>();
    for (int i = 0; i < numBuildings; i++) {
        events.put(dist[i], dim[i][0]);
        events.put(dist[i] + dim[i][1], (-1) * dim[i][0]);
    }
    int prevMask = 0;
    int prevIndex = 0;
    Queue<Integer> activeHeights = new PriorityQueue<Integer>
            (numBuildings, Collections.reverseOrder());
    for (Map.Entry<Integer, Integer> e : events.entrySet()) {
        int newMask = 0;
        if (e.getValue() > 0) {
            activeHeights.offer(e.getValue());
            newMask = activeHeights.peek();
            if (newMask > prevMask) {
                System.out.print((e.getKey() - prevIndex) + "R ");
                System.out.print((newMask - prevMask) + "U ");
                prevMask = newMask;
                prevIndex = e.getKey();
            }
        }
        else {
            activeHeights.remove((Integer) (e.getValue() * (-1)));
            if (activeHeights.isEmpty()) {
                newMask = 0;
            }
            else {
                newMask = activeHeights.peek();
            }
            if (newMask < prevMask) {
                System.out.print((e.getKey() - prevIndex) + "R ");
                System.out.print((prevMask - newMask) + "D ");
                prevMask = newMask;
                prevIndex = e.getKey();
            }
        }
    }
    System.out.println();
}

public static void main (String[] args) {
    int[] dist = new int[] {5, 10, 15};
    int[][] dim = new int[][] {{10, 25}, {20, 15}, {5, 5}};
    getSideView(dist, dim);
}
Nice one. The number of trees for N = 1,2,3,4,5,6,7 corresponds to 1,2,5,14,42,132,429 respectively (the Catalan numbers).
Here is a recursive code in java:
static int computeOrderedTrees (int n) {
    if (n <= 0) return 0;
    if (n == 1) return 1;
    else {
        // for trees rooted at the max or min value
        int sum = 2 * computeOrderedTrees(n - 1);
        // for trees rooted at neither the max nor the min value
        for (int i = 1; i < n - 1; i++) {
            sum += computeOrderedTrees(n - 1 - i) * computeOrderedTrees(i);
        }
        return sum;
    }
}
Sorry not to be clear. MaxHeapify(A) that comes from Algo book by CLRS assumes that all the nodes in the subtree rooted at A.left and A.right form a heap, while the root may be smaller than any of its children.
LEFT child index (0-based array): arr[i].left = 2 * i + 1;
RIGHT child index (0-based array): arr[i].right = 2 * i + 2;
static void maxHeapify(Node[] arr, int rootIndex) {
    if (arr.length <= rootIndex) return;
    int maxChild = rootIndex;
    int left = arr[rootIndex].left;
    int right = arr[rootIndex].right;
    if (left < arr.length && arr[left].key > arr[rootIndex].key) {
        maxChild = left;
    }
    if (right < arr.length && arr[right].key > arr[maxChild].key) {
        maxChild = right;
    }
    if (maxChild != rootIndex) {
        int maxKey = arr[maxChild].key;
        arr[maxChild].key = arr[rootIndex].key;
        arr[rootIndex].key = maxKey;
        maxHeapify(arr, maxChild);
    }
}
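For comparison, the same routine on a plain int array with 0-based child indices (left = 2i+1, right = 2i+2), which avoids storing child indices in the nodes; a minimal sketch:

```java
class HeapDemo {
    // assumes the subtrees below rootIndex already satisfy the max-heap property
    static void maxHeapify(int[] arr, int rootIndex) {
        int left = 2 * rootIndex + 1;
        int right = 2 * rootIndex + 2;
        int largest = rootIndex;
        if (left < arr.length && arr[left] > arr[largest]) largest = left;
        if (right < arr.length && arr[right] > arr[largest]) largest = right;
        if (largest != rootIndex) {
            int tmp = arr[largest];
            arr[largest] = arr[rootIndex];
            arr[rootIndex] = tmp;
            maxHeapify(arr, largest);   // the swap may break the subtree below
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 16, 10, 14, 7, 9, 3};   // both subtrees of index 0 are heaps
        maxHeapify(a, 0);
        System.out.println(java.util.Arrays.toString(a)); // [16, 14, 10, 1, 7, 9, 3]
    }
}
```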
It should be calculated as simply as the following, given totalLines:
Random r = new Random();
int lineNum = r.nextInt(totalLines) + 1;
return lineNum;
(Scaling r.nextDouble() and rounding would make the first and last lines only half as likely as the others; nextInt gives a uniform pick.)
An int can hold up to ~2^31 (over 2 billion line numbers), so a million line numbers are easily addressed with an int.
I think it should be something along these lines.
class SoccerLeague {
    public enum Result {
        WIN_BY_FIRST,
        WIN_BY_SECOND,
        DRAWN
    };
    List<Team> teams;
    public SoccerLeague (int numTeams) {
        teams = new ArrayList<Team>(numTeams);
    }
    void addTeam (String name) {
        teams.add(new Team(name));
    }
    public Team getTeam(String a) {
        for (Team t : teams) {
            if (t.name.equals(a)) {
                return t;
            }
        }
        return null;
    }
    void processMatchResult(String a, String b, Result result) {
        Team first;
        Team second;
        if ((first = getTeam(a)) == null) return;
        if ((second = getTeam(b)) == null) return;
        first.played++;
        second.played++;
        if (result == Result.WIN_BY_FIRST) {
            first.won++;
            first.points += 3;
            second.lost++;
        }
        else if (result == Result.WIN_BY_SECOND) {
            second.won++;
            second.points += 3;
            first.lost++;
        }
        else {
            first.drawn++;
            second.drawn++;
            first.points++;
            second.points++;
        }
    }
    void printResult(String a) {
        Team team = getTeam(a);
        if (team == null) return;
        System.out.format("%4s %6d %3d %4d %5d %6d\n", team.name,
                team.played, team.won, team.lost, team.drawn, team.points);
    }
    int getPoints (String a) {
        Team team = getTeam(a);
        if (team == null) return 0;
        return team.points;
    }
    class Team {
        String name;
        int points;
        int played;
        int won;
        int lost;
        int drawn;
        public Team(String name) {
            this.name = name;
            points = 0;
            played = 0;
            won = 0;
            lost = 0;
            drawn = 0;
        }
    }
    public static void main (String[] args) {
        SoccerLeague sl = new SoccerLeague(3);
        sl.addTeam("A");
        sl.addTeam("B");
        sl.addTeam("C");
        sl.processMatchResult("A", "B", Result.WIN_BY_FIRST);
        sl.processMatchResult("B", "C", Result.WIN_BY_FIRST);
        sl.processMatchResult("C", "A", Result.DRAWN);
        sl.processMatchResult("A", "B", Result.WIN_BY_SECOND);
        sl.processMatchResult("B", "C", Result.WIN_BY_SECOND);
        sl.processMatchResult("C", "A", Result.WIN_BY_FIRST);
        System.out.format("Team Played Won Lost Drawn Points\n");
        sl.printResult("A");
        sl.printResult("B");
        sl.printResult("C");
        int points;
        points = sl.getPoints("A");
        System.out.println("Team A points = " + points);
        points = sl.getPoints("B");
        System.out.println("Team B points = " + points);
        points = sl.getPoints("C");
        System.out.println("Team C points = " + points);
    }
}
Here is the DP approach to solve the problem.
First, we need to calculate the minimum number of boards of advertisement needed.
The next step is to find a recursive solution for the sub-problems.
If there are n boards left that need t advertisements and m consecutive boards have just been skipped, then:
numWays(n, t, m) = numWays(n-1, t-1, 0), if m = M-1 (we must place an advertisement now);
numWays(n, t, m) = numWays(n-1, t-1, 0) + numWays(n-1, t, m+1), otherwise.
Our final result will be numWays(N, numBoards, 0).
There are two ways to subdivide the problem: one is to put an advertisement on the nth billboard, and the other is not to. If m = M-1 billboards have been skipped, we cannot skip the current billboard and must place an advertisement on it (condition 1). If m < M-1 billboards have been skipped, we may choose whether to place an advertisement on the current billboard or not (condition 2).
A terminating value of 1 means the current placement is valid; a value of 0 means it is invalid.
Because of the overlapping recursive subproblems, the naive solution has exponential run time; however, with the bottom-up DP approach the solution is O(N*numBoards*M).
A recursive implementation follows directly from these conditions.
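The recursion described above can be sketched as follows. This is a minimal illustration, with names and the exact meaning of M (here: at most M-1 consecutive boards may stay empty) being assumptions; memoizing on (i, t, m) turns it into the O(N*numBoards*M) DP:

```java
class Billboards {
    // numWays(i, t, m): ways to fill boards i..N-1, placing t more advertisements,
    // given that the m boards just before board i were skipped
    static int numWays(int i, int t, int m, int N, int M) {
        if (t < 0) return 0;                  // placed too many: invalid
        if (i == N) return (t == 0) ? 1 : 0;  // valid iff exactly t were used up
        int ways = numWays(i + 1, t - 1, 0, N, M);        // place an ad on board i
        if (m < M - 1)                                     // allowed to skip board i
            ways += numWays(i + 1, t, m + 1, N, M);
        return ways;
    }

    public static void main(String[] args) {
        // 3 boards, 2 advertisements, at most M-1 = 1 consecutive empty board:
        // 110, 101, 011 are all valid -> 3 ways
        System.out.println(numWays(0, 2, 0, 3, 2)); // prints 3
    }
}
```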
- famee June 02, 2013