zortlord
Misread one little '!' while computing by hand. Funny how that can change a complete analysis. However, this is not an O(n) algorithm... I think it's closer to O(n log n). This can be seen when you test it with a fully populated tree. The following is test code (slightly modified to track the number of computation iterations):
import java.util.LinkedList;

public class TreeToDll {

    static int iterations = 0;

    public static void main(String[] args){
        Node root = constructTree(3);
        //run the test
        Node list = TreeToDll.convert(root);
        //move to head
        while(list.left != null){
            list = list.left;
        }
        //output list
        while(list != null){
            System.out.print("" + list.val + " -> ");
            list = list.right;
        }
        System.out.println("\niterations = " + iterations);
    }
    private static Node constructTree(int size) {
        Node root = new Node();
        root.val = 1;
        LinkedList<Node> list = new LinkedList<Node>();
        LinkedList<Node> alt = new LinkedList<Node>();
        LinkedList<Node> temp = null;
        int depth = 1;
        int val = 1;
        list.add(root);
        while(depth < size){
            while(!list.isEmpty()){
                Node node = list.removeFirst();
                Node left = new Node();
                val++;
                left.val = val;
                node.left = left;
                alt.add(left);
                Node right = new Node();
                val++;
                right.val = val;
                node.right = right;
                alt.add(right);
            }
            temp = list;
            list = alt;
            alt = temp;
            depth++;
        }
        return root;
    }
    static class Node{
        Node left, right;
        int val;
    }

    //return tail of double-linked list
    static Node convert(Node root) {
        while (true) {
            while (root.left != null && root != root.left.right) {
                iterations++;
                root = relinkLeft(root);
            }
            //go back
            while (root.right != null && root == root.right.left) {
                iterations++;
                root = root.right;
            }
            if (root.right == null) return root;
            iterations++;
            root = relinkRight(root);
        }
    }

    static Node nextOnLeft(Node n) {
        Node left = n.left;
        while (left.right != null && left != left.right.left){
            iterations++;
            left = left.right;
        }
        return left;
    }

    //return orphan
    static Node relinkLeft(Node n) {
        Node orphan = n.left;
        Node left = nextOnLeft(n);
        n.left = left;
        left.right = n;
        return orphan;
    }

    static Node nextOnRight(Node n) {
        Node right = n.right;
        while (right.left != null && right != right.left.right) {
            iterations++;
            right = right.left;
        }
        return right;
    }

    //return orphan
    static Node relinkRight(Node n) {
        Node orphan = n.right;
        Node right = nextOnRight(n);
        n.right = right;
        right.left = n;
        return orphan;
    }
}
Thus far, I think hiuhchan's solution and mine are the only non-recursive ones that can solve the following tree. If I'm wrong, please, someone show me a better way, because I don't like this approach. It seems "wrong".
.....1
../.....\
.2.......3
...\
.....4
.../
.5
Edit: CT's solution does work and it appears to be O(n)
- zortlord January 10, 2015

@Anonymous
I'm not sure that a trie would be of benefit. Finding all the applicable words would then take extra work (O(n!), where n is the number of characters in the entry) because you'd have to traverse the entire structure to pull out the words before returning them.
@Skorpius
But, assuming 'p' is the size of the entire set, either n or m could be roughly equivalent to p, and your algorithmic complexity returns to O(2^p), which is the same as the original solution.
With the way the algorithm is already written, it will perform the optimization that you suggest (all the negative numbers are considered first, but search branches that cannot be overcome by positive numbers are discarded). If you need to, follow the code by hand with a few trivial examples.
This is an NP-Complete problem, so it's going to be O(2^n) where n is the size of the input set. Memory consumption will be O(n) too.
public static void printZeroSubsets(long[] set){
    //used to prune branches that shouldn't be considered:
    //don't search further if the max or min value remaining cannot get the value back to 0
    long max[] = new long[set.length];
    long min[] = new long[set.length];
    //a sort could speed up the branching with the additional max / min tracking
    Arrays.sort(set);
    int index = set.length - 1;
    if(set[index] > 0){
        max[index] = set[index];
    }
    else{
        min[index] = set[index];
    }
    for(int i = set.length - 2; i > -1; i--){
        if(set[i] > 0){
            max[i] = max[i+1] + set[i];
            min[i] = min[i+1];
        }
        else{
            max[i] = max[i+1];
            min[i] = min[i+1] + set[i];
        }
    }
    Worker worker = new Worker(set, max, min);
    worker.execute();
}
class Worker{
    private long sum;
    private long[] set;
    private long[] max;
    private long[] min;
    StringBuilder b;

    public Worker(long[] set, long[] max, long[] min){
        this.set = set;
        this.max = max;
        this.min = min;
        this.b = new StringBuilder();
    }

    public void execute(){
        this.executeRecur(0, true);
    }

    private void executeRecur(int index, boolean notAdded){
        //case being searched for
        if(this.sum == 0 && notAdded){
            String str = this.b.toString();
            System.out.println("{"+str+"}");
            notAdded = false;
        }
        //normal search base case
        if(index == this.set.length){
            return;
        }
        //use the max / min tracking to stop branching early
        if(this.sum > 0 && this.sum + this.min[index] > 0){
            return;
        }
        else if(this.sum < 0 && this.sum + this.max[index] < 0){
            return;
        }
        //search if this value was NOT included
        this.executeRecur(index + 1, notAdded);
        //search if this value WAS included
        int bLength = this.b.length();
        this.sum += this.set[index];
        if(bLength > 0){
            this.b.append(',');
        }
        this.b.append(this.set[index]);
        this.executeRecur(index + 1, true);
        this.sum -= this.set[index];
        this.b.setLength(bLength);
    }
}
O(n) runtime complexity and O(n) max memory- the widest level of the tree can hold up to half the nodes (it seems like someone asks this question at least once a week...):
class Node{
    Node left, right;
    char c;
}

public static void print(Node node){
    if(node == null){
        return;
    }
    LinkedList<Node> list = new LinkedList<Node>();
    LinkedList<Node> alt = new LinkedList<Node>();
    LinkedList<Node> temp;
    list.add(node);
    StringBuilder b = new StringBuilder();
    while(!list.isEmpty()){
        while(!list.isEmpty()){
            node = list.removeFirst();
            if(node.left != null){
                alt.add(node.left);
            }
            if(node.right != null){
                alt.add(node.right);
            }
            b.append(node.c);
        }
        System.out.println(b.toString());
        b.setLength(0);
        temp = list;
        list = alt;
        alt = temp;
    }
}
I like this approach, but I have a huge improvement to your map. If the map used the counts of letters in 's' as keys and mapped each to the collection of valid words produced by those counts, the number of possible entries in the map would be significantly reduced, and the computational complexity of searching the map would drop from O(n!) to O(n^2), where n is the length of 's'. An off-the-cuff example of what I mean is something like
'e1g2' -> { 'egg', 'geg', 'gge' } //if all those strings are valid.
'e1g1o1' -> { 'ego', 'eog', 'geo', 'goe', 'oeg', 'oge' }
etc
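To make the idea concrete, here is a minimal sketch of how such a count-based key could be computed (the `countKey` helper name and the lowercase-ASCII assumption are mine, not from the original comment):

```java
// Sketch of the letter-count key described above: each distinct letter
// followed by its count, emitted in alphabetical order so that all
// anagrams collide onto the same key. Assumes lowercase a-z input.
class CountKey {
    static String countKey(String s) {
        int[] counts = new int[26];
        for (char c : s.toCharArray()) {
            counts[c - 'a']++;
        }
        StringBuilder b = new StringBuilder();
        for (int i = 0; i < 26; i++) {
            if (counts[i] > 0) {
                b.append((char) ('a' + i)).append(counts[i]);
            }
        }
        return b.toString();
    }
}
```

With this key, 'egg', 'geg', and 'gge' all map to 'e1g2', so one map entry covers every anagram of a word.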
Assumptions:
1. This function is only going to be called once. If it were to be called multiple times, it should use preprocessing, like building a trie.
2. Valid characters are ASCII (8-bit representation).
This function will operate in O(n) complexity and O(n) memory (only O(n) to store the results before returning them), where n is the number of words:
public static ArrayList<String> getValidWords(String str, String[] dict){
    if(str == null || dict == null){
        throw new NullPointerException();
    }
    int[] strSig = getSig(str);
    ArrayList<String> results = new ArrayList<String>();
    for(String word : dict){
        if(validSig(strSig, word)){
            results.add(word);
        }
    }
    return results;
}

private static int[] getSig(String str){
    int[] sig = new int[256];
    for(char c : str.toCharArray()){
        sig[cToI(c)]++;
    }
    return sig;
}

private static int cToI(char c){
    return (int)c;
}

private static boolean validSig(int[] sig, String word){
    if(word.length() > 4){
        return false;
    }
    int[] usableSig = new int[256];
    for(char c : word.toCharArray()){
        int i = cToI(c);
        if(usableSig[i] == sig[i]){
            return false;
        }
        usableSig[i]++;
    }
    return true;
}
If the dictionary file cannot fit in memory, then I would stream the file and pull each individual word out for comparison. This change would only affect the first method, and really only around its main loop.
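A rough sketch of that streaming variant, assuming the dictionary arrives one word per line (it takes a Reader rather than a file path so any source can be substituted; the names here are mine):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;

class StreamingWords {
    // Reads the dictionary a line at a time so the whole file never
    // needs to sit in memory; the signature check mirrors the idea above.
    static ArrayList<String> getValidWords(String str, BufferedReader dict) throws IOException {
        int[] sig = new int[256];
        for (char c : str.toCharArray()) {
            sig[c]++;
        }
        ArrayList<String> results = new ArrayList<String>();
        String word;
        while ((word = dict.readLine()) != null) {
            if (fits(sig, word)) {
                results.add(word);
            }
        }
        return results;
    }

    // true if word can be built from the character counts in sig
    static boolean fits(int[] sig, String word) {
        int[] used = new int[256];
        for (char c : word.toCharArray()) {
            if (used[c] == sig[c]) {
                return false;
            }
            used[c]++;
        }
        return true;
    }
}
```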
- zortlord January 08, 2015

Starting at convert using the example tree:
in left link while,
1.left <-> 5.right and 2 becomes root
2.left <-> 4.right and 4 becomes root
no more lefts so moving on
in 'go back' while
2 becomes root
5.left == null so moving on
relink right makes
2.right <-> 5.left and 5 becomes root.
repeating main while loop
in left link while: (this is the part that I think is repeating inappropriately)
5.left <-> 2.right and 2 becomes root.
2.left <-> 4.right and 4 becomes root.
4 has no left node, so loop breaks
in the next while loop, 1 becomes root after 3 iterations
...
I don't think the code as written is O(n). If I have a balanced tree like
____1
___/___\
__2____3
_/__\__/__\
4__5_6__7
Then the entire left side gets built correctly, but when relinkRight gets called and returns node 5, the while(true) re-executes and then traverses completely back over the left side again before working on the right side of node 1. I think this approach may be O(n^2) due to the re-traversals to the left.
- zortlord January 08, 2015

How about permutation generation... It will operate with O(n!) complexity and O(n) memory:
public static void printPerms(String str){
    if(str == null){
        throw new NullPointerException();
    }
    if(str.length() == 0){
        throw new IllegalArgumentException();
    }
    long max = fact(str.length());
    PermBuilder p = new PermBuilder(str.toCharArray());
    for(long i = 0; i < max; i++){
        System.out.println(p.build(i));
    }
}

private static long fact(long val){
    long res = val;
    while(val > 1){
        val--;
        res *= val;
    }
    return res;
}

static class PermBuilder{
    private char[] arr;

    public PermBuilder(char[] arr){
        this.arr = arr;
    }

    public String build(long val){
        char[] res = new char[this.arr.length];
        System.arraycopy(this.arr, 0, res, 0, res.length);
        for(int index = 0; index < res.length - 1; index++){
            int denom = res.length - index;
            //the factorial-base digit picks which remaining element fills this position
            int swapIndex = index + (int)(val % denom);
            char c = res[swapIndex];
            char t = res[index];
            res[index] = c;
            res[swapIndex] = t;
            val /= denom;
        }
        return new String(res);
    }
}
format FTFY:
public class quest {
public static void main (String[] args) {
int n =3 ;
int base =2 ;
int remainder =0 ;
String new=" " ;
while (n>0) {
remainder =(n/base) ;
n=(n/base) ;
new=remainder+new ;
}
System.out.println(new) ;
}
}
}
I assume that you are trying to print out n=3 in base 2 (there is no javadoc, so that could be considered an error too). However, there are a lot of squirrelly things going on and wrong. Firstly, remainder is being computed incorrectly in the loop (it should use '%', not '/'). Secondly, 'new' is a reserved word in Java and cannot be used as a variable name. Lastly, there is an extra '}'. I also don't like the String concatenation being done that way since it will create lots of extra String objects, but changing that would require a more significant repair and alteration of the code.
Fixed code:
public class quest {
    public static void main (String[] args) {
        int n = 3;
        int base = 2;
        int remainder = 0;
        String result = "";
        while (n > 0) {
            remainder = (n % base);
            n = (n / base);
            result = remainder + result;
        }
        System.out.println(result);
    }
}
How about O(n) complexity and O(n) memory:
public static Map<String, Set<String>> dupesMap(Map<String, String> table){
    HashMap<String, String> keyMap = new HashMap<String, String>();
    HashMap<String, Set<String>> resultsMap = new HashMap<String, Set<String>>();
    for(Entry<String, String> entry : table.entrySet()){
        String oldKey = entry.getKey();
        String oldVal = entry.getValue();
        String resultKey = keyMap.get(oldVal);
        if(resultKey == null){
            keyMap.put(oldVal, oldKey);
            resultsMap.put(oldKey, new HashSet<String>());
        }
        else{
            Set<String> dupes = resultsMap.get(resultKey);
            dupes.add(oldKey);
        }
    }
    return resultsMap;
}
Here is a non-recursive version that is O(1) in memory complexity and O(n^2) in algorithmic complexity for an unbalanced tree (O(n log n) for a balanced one):
static class Node{
    Node left, right;
    String val;
}

public static Node flatten(Node root){
    if(root == null){
        return null;
    }
    Node[] calc = popLeft(root);
    root = calc[1];
    Node head = calc[0];
    Node tail = head;
    while(root != null){
        calc = popLeft(root);
        tail.right = calc[0];
        calc[0].left = tail;
        tail = calc[0];
        root = calc[1];
    }
    return head;
}

//removes the leftmost node and fixes the tree;
//returns a Node array where [0] is the removed node and
//[1] is the new root of the tree
private static Node[] popLeft(Node node){
    if(node.left == null){
        return new Node[]{node, node.right};
    }
    Node parent = node;
    Node child = parent.left;
    while(child.left != null){
        parent = child;
        child = child.left;
    }
    Node[] results = new Node[]{child, node};
    parent.left = child.right;
    return results;
}
This can be EXTREMELY easy if you build up the list by recursive construction and returning the start and end of each sublist:
static class Node{
    Node left, right;
    String val;
}

public static Node flatten(Node root){
    if(root == null){
        return null;
    }
    return flattenRecur(root)[0];
}

private static Node[] flattenRecur(Node node){
    Node start = node;
    Node end = node;
    Node[] results = null;
    if(node.left != null){
        results = flattenRecur(node.left);
        start = results[0];
        results[1].right = node;
        node.left = results[1];
    }
    if(node.right != null){
        results = flattenRecur(node.right);
        end = results[1];
        results[0].left = node;
        node.right = results[0];
    }
    if(results == null){
        return new Node[]{start, end};
    }
    results[0] = start;
    results[1] = end;
    return results;
}
How about O(n) complexity and O(n) memory:
public static ArrayList<String> getPath(String[] arr){
    //set of all destination cities
    HashSet<String> destSet = new HashSet<String>(arr.length / 2);
    HashMap<String, String> toMap = new HashMap<String, String>(arr.length / 2);
    for(int i = 0; i < arr.length - 1; i += 2){
        destSet.add(arr[i+1]);
        toMap.put(arr[i], arr[i+1]);
    }
    //the start is the only origin that is never a destination
    String start = null;
    for(String str : toMap.keySet()){
        if(!destSet.contains(str)){
            start = str;
            break;
        }
    }
    ArrayList<String> results = new ArrayList<String>(arr.length / 2 + 1);
    while(start != null){
        results.add(start);
        start = toMap.get(start);
    }
    return results;
}
BFS approach that is O(n) in the number of nodes in the tree, and O(n) in memory consumption- more specifically, the maximal memory will never exceed half of n. Additionally, only two queue objects are constructed:
static class TreeNode{
    TreeNode left, right;
    char val;
}

public static void print(TreeNode node){
    LinkedList<TreeNode> list = new LinkedList<TreeNode>();
    LinkedList<TreeNode> alt = new LinkedList<TreeNode>();
    LinkedList<TreeNode> temp = null;
    list.add(node);
    while(!list.isEmpty()){
        print(list);
        while(!list.isEmpty()){
            node = list.removeFirst();
            if(node.left != null){
                alt.add(node.left);
            }
            if(node.right != null){
                alt.add(node.right);
            }
        }
        temp = list;
        list = alt;
        alt = temp;
    }
}

private static void print(LinkedList<TreeNode> list){
    Iterator<TreeNode> iter = list.iterator();
    StringBuilder b = new StringBuilder();
    while(iter.hasNext()){
        TreeNode node = iter.next();
        b.append(node.val);
    }
    System.out.println(b.toString());
}
This can be done in O(n) complexity using a process similar to the calculation of the T table for the KMP algorithm. If there is a pattern, then there should exist a run of ascending values at the end of T with a length at least half the total length of the string. This has the added benefit of not requiring perfectly formed repetitions of the pattern (i.e., 'xyzxyzx' would be positive since it's a repetition of 'xyz').
This is a sample of what I would expect as internal calculations:
Str: x y z x y z x y z x
t: -1 -1 -1 0 1 2 3 4 5 6
true since t ends as >= Str.length() / 2
Str: x y z d x y z x y z x
t: -1 -1 -1 -1 0 1 2 0 1 2 0
false
public static boolean hasPattern(String str){
    //build T
    if(str == null){
        throw new NullPointerException();
    }
    if(str.length() < 2){
        return false;
    }
    int t = -1;
    int i1 = 0;
    int i2 = 1;
    while(i2 < str.length()){
        if(str.charAt(i1) == str.charAt(i2)){
            t++;
            i1++;
        }
        else{
            t = -1;
            i1 = 0;
        }
        i2++;
    }
    //verify correct length
    return (t + 1) >= (str.length() >>> 1);
}
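As a quick sanity check, here is a self-contained copy of the routine above (logic unchanged, wrapped in a class only so it can be run), which agrees with both worked examples:

```java
// Self-contained copy of hasPattern for checking the worked examples.
class PatternCheck {
    static boolean hasPattern(String str) {
        if (str == null) {
            throw new NullPointerException();
        }
        if (str.length() < 2) {
            return false;
        }
        int t = -1;
        int i1 = 0;
        int i2 = 1;
        while (i2 < str.length()) {
            if (str.charAt(i1) == str.charAt(i2)) {
                t++;
                i1++;
            } else {
                t = -1;
                i1 = 0;
            }
            i2++;
        }
        //true when the trailing ascending run covers at least half the string
        return (t + 1) >= (str.length() >>> 1);
    }
}
```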
@haroldtreen
Algorithmically, the problem seems easy, so I would expect that they are not judging on slick algorithmic tricks but rather on perfect, usable, tested code.
If I were wrong about that, then I guess there may be ways to speed that general approach up, like:
1. Like you said, are the values in the tsvs in the same order and will they always have the same values (speeds the extraction of tsv values)?
2. Are there alternative ways to compute the bandwidth/latency (speeds extraction of tsv values)?
3. Is there some ordering to the bin values that could be exploited (filenames, date times, etc.) (do all files need to be computed)?
There could be lots of reasons why you weren't selected which may have nothing to do with your performance. Don't sweat it- "sometimes you get the bear and sometimes the bear gets you".
Here's a pseudo-algorithm that should solve the problem:
average = 0
bandwidth = 0
for each file
_tsv = convert file
_read tsv and populate tempLatency and tempBand
_delete tsv
_average += tempLatency
_bandwidth += tempBand
average /= number of files
return average and bandwidth
If you want to get spiffy, you could try some hashing in case the files could be identical. But I think the approach above is probably the best.
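The aggregation step of the pseudo-algorithm above can be sketched as follows. The tsv conversion itself is environment-specific, so this assumes each file's latency and bandwidth values have already been extracted; the `FlowStats` and `aggregate` names are mine:

```java
// Averages latency and sums bandwidth across files, mirroring the
// pseudo-algorithm: each row of perFile is one file's {latency, bandwidth}.
class FlowStats {
    static double[] aggregate(double[][] perFile) {
        double latencySum = 0;
        double bandwidthSum = 0;
        for (double[] f : perFile) {
            latencySum += f[0];   // average += tempLatency
            bandwidthSum += f[1]; // bandwidth += tempBand
        }
        // average /= number of files; return average and bandwidth
        return new double[]{latencySum / perFile.length, bandwidthSum};
    }
}
```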
Assumptions:
- Each number in the array can only be used once
- X is <= length of the array
This can be solved with a costly backtracking algorithm.
public static ArrayList<int[]> getSumPairs(int[] arr, int x){
    if(arr == null){
        throw new NullPointerException();
    }
    if(x < 2){
        throw new IllegalArgumentException();
    }
    Worker worker = new Worker(arr, x);
    worker.execute();
    return worker.getResults();
}

static class Worker{
    int[] arr;
    int[] vals;
    int limit;
    int index;
    int sum;
    ArrayList<int[]> results;

    Worker(int[] arr, int x){
        this.arr = arr;
        this.limit = x;
        this.vals = new int[x];
        this.results = new ArrayList<int[]>();
    }

    void execute(){
        this.executeRecur(0);
    }

    void executeRecur(int arrPos){
        if(this.index > 2 && this.sum % this.limit == 0){
            int[] pair = new int[this.index];
            for(int i = 0; i < this.index; i++){
                pair[i] = this.vals[i];
            }
            this.results.add(pair);
        }
        if(this.index == this.limit){
            return;
        }
        if(arrPos == this.arr.length){
            return;
        }
        int localPosition = this.index;
        this.index++;
        for(int i = arrPos; i < this.arr.length; i++){
            //track the running sum and advance past i so no element is reused
            this.vals[localPosition] = this.arr[i];
            this.sum += this.arr[i];
            executeRecur(i + 1);
            this.sum -= this.arr[i];
        }
        this.index--;
    }

    ArrayList<int[]> getResults(){
        return this.results;
    }
}
Without much more description of what you're asking, I'm assuming that you mean to remove duplicate numbers in the array while preserving the original value when possible.
I can do this in O(n) with the following algorithm:
-Scan the array to figure out which numbers are already present
-for each number, replace it if necessary using a 1-upped counter value that skips already-used numbers
public static void replaceDups(int[] arr){
    HashMap<Integer, Boolean> isSafe = new HashMap<Integer, Boolean>();
    for(int i : arr){
        isSafe.put(i, true);
    }
    int c = 1;
    for(int i = 0; i < arr.length; i++){
        Boolean isUsed = isSafe.get(arr[i]);
        if(!isUsed){
            //this value has already been kept once; replace with the next unused counter value
            while(isSafe.containsKey(c)){
                c++;
            }
            arr[i] = c;
            c++;
        }
        else{
            //first occurrence of this value; keep it and mark it as seen
            isSafe.put(arr[i], false);
        }
    }
}
First, let me preface this with saying that your approach is the best I can think of.
What about for the example [8, 8, 9, 9, 11, 11, 12, 12]?
At first it would appear that this, too, is O(n log n). But maybe this should be thought of in a different way: let 'k' represent the number of distinct ages in the array. Then the problem becomes O(k log n). The discrepancy only appears as k approaches n; when k is large, the performance approaches O(n log n).
Assumptions:
1. The times are not necessarily hours, but some measure. Could be days, could be millis, etc.
2. The flow rates are always positive.
Simple algorithm that will run in O(n log n):
1. Sort all the events by their start time into a priority queue (earliest first) (heap1)
2. Create a priority queue of events sorted by their end time (earliest first) (heap2)
3. Create a value to track the best flow rate total
4. Create a value to track the running flow rate total
5. While contents still exist in heap1:
___a. find the earliest time from heap1 and heap2
___b. while heap1 starts with that earliest starting time:
______i. remove the event from heap1 with that starting time
______ii. add the flow rate from the event to the running total
______iii. add the event to heap2
___c. while heap2 starts with that earliest ending time:
______i. remove the event from heap2 with that ending time
______ii. remove the flow rate from the event from the running total
___d. if the running flow rate total is better than the best flow rate total, store it
6. Return the best flow rate total
static class Event{
    int startTime, endTime, rate;
}

public static int getMaxFlow(Event[] events){
    //earliest start time first (the comparator head must be the least element)
    PriorityQueue<Event> heap1 = new PriorityQueue<Event>(events.length, new Comparator<Event>(){
        public int compare(Event e1, Event e2){
            return e1.startTime - e2.startTime;
        }
    });
    //earliest end time first
    PriorityQueue<Event> heap2 = new PriorityQueue<Event>(events.length, new Comparator<Event>(){
        public int compare(Event e1, Event e2){
            return e1.endTime - e2.endTime;
        }
    });
    for(Event event : events){
        heap1.add(event);
    }
    int bestFlowRate = Integer.MIN_VALUE;
    int totalFlowRate = 0;
    while(!heap1.isEmpty()){
        int earliest = heap1.peek().startTime;
        if(!heap2.isEmpty() && heap2.peek().endTime < earliest){
            earliest = heap2.peek().endTime;
        }
        while(!heap1.isEmpty() && heap1.peek().startTime == earliest){
            Event event = heap1.poll();
            totalFlowRate += event.rate;
            heap2.add(event);
        }
        while(!heap2.isEmpty() && heap2.peek().endTime == earliest){
            Event event = heap2.poll();
            totalFlowRate -= event.rate;
        }
        if(totalFlowRate > bestFlowRate){
            bestFlowRate = totalFlowRate;
        }
    }
    return bestFlowRate;
}
Edit: PriorityQueue constructor using a comparator must also have a capacity specified...
- zortlord December 23, 2014
@CT
- zortlord January 12, 2015

I didn't go any higher than 10 deep on the tree, and there appeared to be an increasing multiplier on the number of iterations. At about 2^20 nodes (20 deep), it appears that the multiplier bottoms out at about 2.5 * n iterations needed to solve each tree. So it would be O(2.5n) -> O(n).