Michael.J.Keating
Good eye :)
- Michael.J.Keating May 22, 2014
I think the issue comes when you have many horses in frequently alternating colors and fewer stables.
- Michael.J.Keating May 21, 2014
ak - Yes, that is why we would be reading from a buffered file reader. The hashtable and MinHeap should have no problem fitting into 1GB of memory. There are only about 100k commonly used words in the English language - which is far below our threshold.
- Michael.J.Keating May 20, 2014
Turns out to be a Fibonacci pattern. Consider:
width possibilities (1 = vertical, 0 = horizontal and must be in pairs)
1 1 - 1
2 2 - 11/00
3 3 - 111/100/001
4 5 - 1111/1100/1001/0011/0000
5 8 - 11111/11100/11001/10011/10000/00111/00001/00000
This simple logic computes it:
public static int countWays(int n) {
    if (n < 1)
        return 0;
    if (n == 1)
        return 1;
    else if (n == 2)
        return 2;
    else
        return countWays(n - 1) + countWays(n - 2);
}
To add DP to this (caching):
static int[] cache; // the caller sizes this first: cache = new int[w + 1]

static int countWays(int w) {
    if (w == 1)
        return 1;
    if (w == 2)
        return 2;
    // already computed this value?
    if (cache[w] != 0)
        return cache[w];
    // compute and cache
    cache[w] = countWays(w - 1) + countWays(w - 2);
    return cache[w];
}
Or, you could just do it iteratively:
static int countWays(int w) {
    if (w == 1)
        return 1;
    if (w == 2)
        return 2;
    int last = 2;
    int lastlast = 1;
    int ways = 0;
    for (int i = 3; i <= w; i++) {
        ways = lastlast + last;
        lastlast = last;
        last = ways;
    }
    return ways;
}
I think a trie would be a fine alternative to a hash table here, assuming there are no memory issues.
- Michael.J.Keating May 20, 2014
My suggestions:
-Use something like a Buffered File Reader to read from the 200GB text file.
-Use a HashTable (where key = word, value = count) to track the usage count for each word (assuming these strings are words).
-Use a MinHeap (sorted on frequency count) to track the top K words. Fill the heap initially with K words. Then, if the next word has a higher frequency count, pop the min value off the heap and replace it with the next word. Repeat till done reading the 200GB file.
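The count-then-heap pipeline above can be sketched with the standard library's `PriorityQueue` as the min-heap. This is only a sketch of the idea; the class and method names are mine, and it takes an in-memory iterable rather than a 200GB stream:

```java
import java.util.*;

public class TopKWords {
    // Returns the k most frequent words: a count map plus a
    // min-heap of size k (smallest count at the root).
    static List<String> topKWords(Iterable<String> words, int k) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words)
            counts.merge(w, 1, Integer::sum);

        // min-heap ordered by frequency; evict the minimum when full
        PriorityQueue<Map.Entry<String, Integer>> heap =
            new PriorityQueue<>((x, y) -> x.getValue() - y.getValue());
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            heap.add(e);
            if (heap.size() > k)
                heap.poll(); // drop the current least-frequent entry
        }

        List<String> result = new ArrayList<>();
        while (!heap.isEmpty())
            result.add(heap.poll().getKey());
        Collections.reverse(result); // most frequent first
        return result;
    }
}
```

The heap stage is O(m log k) for m distinct words, which matches the O(k)-extra-space point made below.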
Wouldn't the min heap option take only O(k) space - rather than O(n + k) - since we're only storing the top 10 items?
- Michael.J.Keating May 20, 2014
// link-to-parent version
Node findNextNode(Node n) {
    if (n == null)
        return null;
    // is there a right child?
    if (n.right != null) {
        n = n.right;
        // find the left-most node of the right branch
        while (n.left != null)
            n = n.left;
        return n;
    }
    // no right child -
    // find the first parent reached from a left child
    while (n.parent != null) {
        if (n == n.parent.left)
            return n.parent;
        n = n.parent;
    }
    return null;
}
// in-order traverse version - no link to parent
Node findNextNode(Node n, Node givenNode, BoolWrap nextIsIt) {
    if (n == null)
        return null;
    // search the left subtree first
    Node result = findNextNode(n.left, givenNode, nextIsIt);
    if (result != null)
        return result;
    if (nextIsIt.val)
        return n;
    if (n == givenNode)
        nextIsIt.val = true;
    // then the right subtree
    return findNextNode(n.right, givenNode, nextIsIt);
}
class BoolWrap {
    boolean val;
}
Using breadth-first search here:
public void drawGraph(Node n, Graphics g) {
    if (n == null)
        return;
    Queue<Node> q = new LinkedList<Node>();
    g.drawSmallCircle(n.point);
    n.visited = true;
    q.add(n);
    while (!q.isEmpty()) {
        n = q.remove();
        for (Node a : n.adjNodes) {
            if (!a.visited) {
                g.drawSmallCircle(a.point);
                g.drawLine(n.point, a.point);
                a.visited = true;
                q.add(a);
            }
        }
    }
}
Anon - I don't see why a hashtable or a trie wouldn't work just fine for this.
- Michael.J.Keating May 16, 2014
I also considered the reverse in-order traversal that geekvjaks mentioned. Here's my version:
void sumGreaterNodes(Node n, IntWrap sum) {
    if (n.right != null)
        sumGreaterNodes(n.right, sum);
    n.greaterValuesSum = sum.value;
    sum.value += n.value;
    if (n.left != null)
        sumGreaterNodes(n.left, sum);
}
class IntWrap {
    public int value;
}
Nice, but it's not 'in place' (memory-wise) - meaning you can't use the hashtable.
- Michael.J.Keating May 13, 2014
Recursion works well here:
public void firstMinusLast(Node first, Node lastPlusOne) {
    // are we done?
    if (first == lastPlusOne)
        return;
    // find the last node (it's just before lastPlusOne)
    Node last = first;
    while (last.next != lastPlusOne)
        last = last.next;
    // now subtract the last value from the first value
    first.val = first.val - last.val;
    // recurse!
    firstMinusLast(first.next, last);
}
// we start with
// 10->30->50->70->NULL
// curNode is 50, newNode is 40
// first, insert newNode after curNode;
newNode.next = curNode.next;
curNode.next = newNode;
// now we have
// 10->30->50->40->70->NULL
// now swap values of curNode and curNode.next
int tempVal = curNode.value;
curNode.value = curNode.next.value;
curNode.next.value = tempVal;
// now we have
// 10->30->40->50->70->NULL
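Put together as a runnable sketch (the `Node` class and method name are mine), this inserts a new node before `curNode` when only `curNode` itself is reachable:

```java
public class InsertBefore {
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    // Insert newNode before curNode without access to the head:
    // splice newNode in after curNode, then swap the two values.
    static void insertBefore(Node curNode, Node newNode) {
        newNode.next = curNode.next;
        curNode.next = newNode;
        int tempVal = curNode.value;
        curNode.value = newNode.value;
        newNode.value = tempVal;
    }

    public static void main(String[] args) {
        // 10 -> 30 -> 50 -> 70
        Node head = new Node(10);
        head.next = new Node(30);
        head.next.next = new Node(50);
        head.next.next.next = new Node(70);

        insertBefore(head.next.next, new Node(40)); // before the 50

        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next)
            sb.append(n.value).append(" ");
        System.out.println(sb.toString().trim()); // 10 30 40 50 70
    }
}
```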
"In this example the program should return 3."
Shouldn't it return 5?
For example, array[] = {6,10,6,7,8,9,0}:
the sequence {10,6,7,8,9} has diff 1 and length 5.
The bit vector would only need to be 32 bytes (256 bits).
- Michael.J.Keating May 06, 2014
While using a BitVector or a 256-element flag array, as in this solution, seems more elegant - using a hashtable is also O(n). This is because each put and get is constant time in hashmaps/hashtables.
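For byte-valued input, the 256-flag idea can be sketched with `java.util.BitSet` (the class and method names here are mine, added for illustration):

```java
import java.util.BitSet;

public class ByteFlags {
    // Marks each byte value seen; 256 bits covers every possible
    // byte value. Returns how many distinct values appear.
    static int distinctByteValues(byte[] data) {
        BitSet seen = new BitSet(256);
        for (byte b : data)
            seen.set(b & 0xFF); // mask to the 0..255 range
        return seen.cardinality();
    }
}
```

This is O(n) time with 32 bytes of flag storage, matching the point above.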
- Michael.J.Keating April 30, 2014
public static ArrayList<Byte> findCommonElements(byte[] a, byte[] b) {
    // record values in 'a' in a HashMap - O(n)
    HashMap<Byte, Boolean> mapA = new HashMap<Byte, Boolean>();
    for (byte val : a)
        mapA.put(val, true);
    // record common values in a new HashMap - O(n)
    HashMap<Byte, Boolean> mapAB = new HashMap<Byte, Boolean>();
    for (byte val : b) {
        if (mapA.get(val) != null)
            mapAB.put(val, true);
    }
    // the common values - O(n)
    return new ArrayList<Byte>(mapAB.keySet());
}
Thanks for the clarification.
- Michael.J.Keating April 28, 2014
I don't think the distance between adjacent vertices is 'additional information' in Dijkstra's algorithm. Also, I don't see how going uphill or downhill has any relevance to the usual goal of finding the shortest distance from source to destination nodes. So, nothing should change from Dijkstra's algorithm. What am I missing?
- Michael.J.Keating April 28, 2014
public static int pow(int a, int b) {
    if (b == 0)
        return 1;
    int result = 1;
    // iteratively square the base (a)
    // and halve the exponent (b),
    // multiplying the result by the base
    // whenever the exponent is odd
    while (b > 0) {
        if (b % 2 == 1)
            result *= a;
        a *= a;
        b /= 2;
    }
    return result;
}
If the numbers are too numerous to fit into memory, try reading them from a file stream. Then you are limited to file size. If that doesn't suffice, use multiple files on multiple hard drives, machines, etc.
- Michael.J.Keating April 24, 2014
This looks like O(n^2) when b[] is reversed (worst case).
- Michael.J.Keating April 23, 2014
My thought is to use a function pointer (that is, a callback) in place of an abstract class.
Take a polymorphic validator, for instance. It might be a PDF validator or a text validator, etc. But, the client code only knows the abstract version until runtime.
// OOP version
// validate a document - it might be text, pdf or whatever
public void readFile(String filename, AbstractValidator v) {
    // morphs into the specific validator at runtime
    if (v.validate(filename) == false)
        return;
    // do more stuff
}
/* C version */
void readFile(char *filename, int (*validate)(char*)) {
    /* morphs at runtime */
    if (validate(filename) == 0)
        return;
    /* do stuff */
}
Clever. The problem is
a[i] = p[a[i]-1];
which will overwrite values in 'a' that you'll need in subsequent iterations of that very loop.
- Michael.J.Keating April 20, 2014
Nice solution. However, consider the worst case when the array, As, is reversed. Say, [100000,99999,....1]. In this case, it seems to me, the double loop of this algorithm degrades into O(n^2).
- Michael.J.Keating April 18, 2014
My approach would be to use heapsort and mirror the changes. Heapsort is in-place and O(n log n).
public class HeapSorter2 extends Heapsorter {
    protected int[] _mirrorArray;

    public HeapSorter2(int[] arr, int[] mirrorArray) {
        super(arr); // must be the first statement in a Java constructor
        _mirrorArray = mirrorArray;
    }

    @Override
    protected void swap(int a, int b) {
        // the regular swap (all changes to the array are made here)
        int temp = _arr[a];
        _arr[a] = _arr[b];
        _arr[b] = temp;
        // the mirror-array swap (mirrors all array changes)
        temp = _mirrorArray[a];
        _mirrorArray[a] = _mirrorArray[b];
        _mirrorArray[b] = temp;
    }
}
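Since the `Heapsorter` base class isn't shown, here is a minimal self-contained sketch of the same idea (all names are mine): a heapsort in which every swap is applied to both arrays, so the mirror stays paired element-for-element.

```java
public class MirrorHeapSort {
    // In-place heapsort on arr; every swap is mirrored in mirror,
    // so mirror[i] stays paired with arr[i] throughout.
    static void sort(int[] arr, int[] mirror) {
        int n = arr.length;
        for (int i = n / 2 - 1; i >= 0; i--)    // build the max-heap
            siftDown(arr, mirror, i, n);
        for (int end = n - 1; end > 0; end--) { // pop max to the back
            swap(arr, mirror, 0, end);
            siftDown(arr, mirror, 0, end);
        }
    }

    static void siftDown(int[] arr, int[] mirror, int root, int n) {
        while (2 * root + 1 < n) {
            int child = 2 * root + 1;
            if (child + 1 < n && arr[child + 1] > arr[child])
                child++;                        // take the larger child
            if (arr[root] >= arr[child])
                return;
            swap(arr, mirror, root, child);
            root = child;
        }
    }

    static void swap(int[] arr, int[] mirror, int a, int b) {
        int t = arr[a]; arr[a] = arr[b]; arr[b] = t;
        t = mirror[a]; mirror[a] = mirror[b]; mirror[b] = t;
    }
}
```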
static public int rand7() {
    // This gives us a result of
    // 0 to 7 (binary 000 to 111),
    // which should be evenly distributed.
    // Reject 7 so we have 0 to 6.
    int result = 7;
    while (result == 7) {
        result = randBit() * 4 + randBit() * 2 + randBit();
    }
    return result;
}
// gets a random 0 or 1
static public int randBit() {
    // Eliminate zero so we have an equal number
    // of even/odd values (1, 2, 3, or 4)
    int result = 0;
    while (result == 0) {
        result = rand5();
    }
    return result % 2;
}
How do you do this in place? Each time you 'put a number in its correct position' you are overwriting a number you will need later on.
- Michael.J.Keating April 17, 2014
Quicksort is O(n^2) in the worst case. Heapsort is an in-place O(n log n) sort. Maybe that would work.
- Michael.J.Keating April 17, 2014
"I think using binary search is O( min(a.length, b.length) + max(log(a.length), log(b.length)) )."
Except it's O( min(a.length, b.length) * max(log(a.length), log(b.length)) ).
This is O(a.len + (a.len * log b.len)), which is not an improvement.
- Michael.J.Keating February 20, 2014
Even if you can assume a.length < b.length, it may very well be that its complexity of
O(a.len + (a.len * log b.len))
is worse than
O(a.len + b.len).
Assume b and c are integers:
int[] array = new int[2];
array[0] = b;
array[1] = c;
result = array[a];
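As a runnable sketch (the class and method names are mine), assuming `a` is 0 or 1:

```java
public class BranchlessSelect {
    // Returns b when a == 0 and c when a == 1, with no if/else:
    // the condition value is used directly as an array index.
    static int select(int a, int b, int c) {
        int[] array = new int[2];
        array[0] = b;
        array[1] = c;
        return array[a];
    }
}
```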
I'm guessing that accessing the pointers that make up the linked list (node->next->next..) doesn't count toward the 'one pointer' we can use. If this assumption is correct, 'head' is the one pointer in this solution.
- Michael.J.Keating November 02, 2013
Very nice. My only issue is that c[0] will be null (or undefined) if there is a carry on the most significant digit.
- Michael.J.Keating October 24, 2013
public static List<String> getPermutations(char[] source) {
// Algorithm:
//
// to start, create a list of one item for each letter - in this
// case: [a,b,c,d]
//
// {a} {b} {c} {d}
//
// the iterative logic:
// -add permutation list to all permutations
// -discard the first permutation leaving {b} {c} {d}
// -discard the last item from our source items
// [a,b,c,d] - leaving [a,b,c] in this case)
// -permute the source items (here: [a,b,c]) into the remaining
// lists - in this case: {b},{c},{d}
// 'a' into {b},{c},{d}
// 'b' into {c},{d}
// 'c' into {d}
//
// {ab,ba,ac,ca,ad,da} {bc,cb,db,bd} {cd,dc}
//
// -add original permutations lists to 'all permutations' and discard
//
// repeat iterative logic above:
// -add permutation list to all permutations
// -discard the first permutation leaving: {bc,cb,db,bd} {cd,dc}
// -discard last item in source items: [a,b,c] becomes [a,b]
// -permute source items [a,b] into remaining lists:
// 'a' into {bc,cb,db,bd}
// 'b' into {cd,dc}
//
// {abc,bac,bca,acb..} {bcd,dbc,cdb,bdc..}
//
// the last iteration should have one source item and one list to permute.
// here it would be 'a' into {bcd,dbc,cdb,bdc..}
List<ArrayList<String>> currentPermutationsLists =
new ArrayList<ArrayList<String>>();
List<String> allPermutations = new ArrayList<String>();
// populate currentPermutationsLists with the initial one-letter
// permutation lists (the while loop below adds them to 'all permutations')
for (char ch : source) {
ArrayList<String> permutations = new ArrayList<String>();
permutations.add("" + ch);
currentPermutationsLists.add(permutations);
}
while (source.length > 1) {
// add the currentPermutations to 'all permutations'
for (ArrayList<String> permutations : currentPermutationsLists)
allPermutations.addAll(permutations);
// discard the first permutation list
currentPermutationsLists.remove(0);
// discard the last letter from source
source = Arrays.copyOf(source, source.length - 1);
// now create new permutations
List<ArrayList<String>> newPermutationsLists =
new ArrayList<ArrayList<String>>();
for (char ch : source) {
ArrayList<String> newPermutations = new ArrayList<String>();
for (ArrayList<String> permutations :
currentPermutationsLists) {
newPermutations.addAll(
generatePermutations(permutations, ch));
}
newPermutationsLists.add(newPermutations);
currentPermutationsLists.remove(0);
}
currentPermutationsLists = newPermutationsLists;
// if this is the last iteration, add the currentPermutations
// to 'all permutations'
if (source.length == 1) {
allPermutations.addAll(currentPermutationsLists.get(0));
}
}
return allPermutations;
}
/**
* Creates a new set for permutations with
* currentPermutations and the added character, newChar
*/
private static ArrayList<String> generatePermutations(ArrayList<String>
currentPermutations, char newChar) {
ArrayList<String> generatedPermutations = new ArrayList<String>();
// no permutations? just use this letter as a start
if (currentPermutations.isEmpty()) {
generatedPermutations.add("" + newChar);
return generatedPermutations;
}
for (String str : currentPermutations) {
// insert newChar at each index of str
for (int i = 0; i < str.length(); i++) {
generatedPermutations.add(insertChar(str, newChar, i));
}
// add the final permutation (with newChar at the end)
generatedPermutations.add(str + newChar);
}
return generatedPermutations;
}
private static String insertChar(String str, char ch, int index) {
// buffer for new string creation
char[] array = new char[str.length() + 1];
// 'i' is the index into string
// 'j' is index into the new string
for (int i = str.length() - 1, j = array.length - 1; i >= 0; i--) {
char nextChar = str.charAt(i);
// chars to the left of the insertion index
// move to the corresponding index
if (i < index)
array[i] = nextChar;
// insertion char
else if (i == index) {
array[j] = nextChar;
array[i] = ch;
j = i;
}
// shift to the right
else {
array[j] = nextChar;
j = i;
}
}
return new String(array);
}
public static void getPermutationsTest() {
char[] chars = {'a', 'b', 'c'};
List<String> permutations = StringUtil.getPermutations(chars);
System.out.println(permutations);
}
// output
[a, b, c, ab, ba, ac, ca, bc, cb, abc, bac, bca, acb, cab, cba]
No, after pop() the new head node already has the min value for that level in the stack.
- Michael.J.Keating October 23, 2013
This looks like the same algorithm I used.
If N=rows it's O(N^3).
If n=elements it's O(n sqrt(n))
But I agree - time and energy would be better spent on the problem itself.
A couple of things my solution does not take advantage of are the table having the same width and height, and the columns being sorted in addition to the rows. But I couldn't see how to exploit either.
Yes, I understand they are dummy variables. But if the question read:
"Given a sorted 2D X * X array" instead of "Given a sorted 2D N x N array"
I imagine we would consider 'n' or 'N' the number of elements rather than the number of rows.
Also, it's not clear to me how taking 'n' to represent the row count rather than the element count is more helpful - particularly when it's clear that one must touch every element in the matrix.
It seems to me that, in something like O(n), 'n' is commonly understood as referring to the number of elements one is asked to sort, etc. NxN, in this particular question, refers to the width and height of a table, which is a totally different meaning. I think people have been confusing the two.
- Michael.J.Keating October 20, 2013
I'm not trying to trick anyone, really :)
If you have a table, for instance, of 10 rows and 10 columns (NxN) and you are asked to sort the entire table into a single array, you have 100 elements to deal with - not 10. n=100 for big O analysis. If your table was 5 rows and 20 columns, you would also have an n=100 situation - not n=5, not n=20. The N in NxN is not the number of elements - it's the height and width of the matrix. n, on the other hand, is the number of elements your algorithm is dealing with. That's how I see it :)
It's also interesting that the sorted nature of the matrix is irrelevant to this approach - which appears to be O(n log n) regardless of the input.
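For what it's worth, the row merge can also be driven by a min-heap over the row heads, which brings it to O(n log N) for n elements and N rows. A sketch of that variant (all names here are mine):

```java
import java.util.PriorityQueue;

public class MatrixMerge {
    // Merge N sorted rows into one sorted array using a min-heap
    // of {value, rowIndex, colIndex} entries: O(n log N).
    static int[] merge(int[][] rows) {
        int n = 0;
        for (int[] row : rows)
            n += row.length;

        PriorityQueue<int[]> heap =
            new PriorityQueue<>((x, y) -> x[0] - y[0]);
        // seed the heap with the head of each row
        for (int r = 0; r < rows.length; r++)
            if (rows[r].length > 0)
                heap.add(new int[]{rows[r][0], r, 0});

        int[] out = new int[n];
        for (int i = 0; i < n; i++) {
            int[] e = heap.poll();          // smallest remaining head
            out[i] = e[0];
            int r = e[1], c = e[2] + 1;
            if (c < rows[r].length)         // advance within that row
                heap.add(new int[]{rows[r][c], r, c});
        }
        return out;
    }
}
```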
- Michael.J.Keating October 19, 2013
Or, at least it's very close :). I'm using an iterator with an array of indexes - one for each row. Same algorithm, really.
- Michael.J.Keating October 19, 2013
This is what my solution is doing. See above.
- Michael.J.Keating October 19, 2013
On an NxN matrix, wouldn't we have n = NxN? So, for a 10x10 matrix: n = 100, as there are 100 elements?
- Michael.J.Keating October 19, 2013
On a 3x3 array (n=9), this solution does 27 comparisons if I'm not missing something.
Anyway, I believe this should be O(n x sqrt(n)) complexity.
public static int[] convert2DArrayto1DArray(int[][] array) {
    if (array == null)
        return null;
    // example inputs:
    // [1,4,7] [1,2,3]
    // [2,5,8] [4,5,6]
    // [3,6,9] [7,8,9]
    // desired output:
    // [1,2,3,4,5,6,7,8,9]
    // allocate the target array
    int[] target = new int[array.length * array.length]; // NxN
    // merge the sorted arrays
    int[] currentIndexes = new int[array.length];
    for (int i = 0; i < target.length; i++) {
        // which array has our next value?
        int arrayWithValue = 0;
        int minValue = Integer.MAX_VALUE;
        for (int j = 0; j < currentIndexes.length; j++) {
            // have we reached the end of this array already?
            if (currentIndexes[j] > array.length - 1)
                continue;
            // possible next value from this array?
            if (array[j][currentIndexes[j]] < minValue) {
                minValue = array[j][currentIndexes[j]];
                arrayWithValue = j;
            }
        }
        target[i] = minValue;
        currentIndexes[arrayWithValue]++;
    }
    return target;
}
public static void test() {
    int[][] arrays = {
        {1,4,7},
        {2,5,8},
        {3,6,9}};
    int[] array = Util.convert2DArrayto1DArray(arrays);
    for (int value : array)
        System.out.print(value + " ");
}
// output:
// 1 2 3 4 5 6 7 8 9
Correct - except this is Java, where object references are themselves passed by value. Since the method receives a reference to the same wrapper object, changes to its field are visible to the caller. That is the reason for the wrapper.
- Michael.J.Keating May 22, 2014
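A small sketch of why the wrapper works (class and method names are mine): Java passes object references by value, so a method can mutate the object the caller's reference points to, but reassigning the parameter has no effect outside.

```java
public class WrapperDemo {
    static class IntWrap {
        int value;
    }

    static void addTen(IntWrap w) {
        w.value += 10;      // mutation: visible to the caller
    }

    static void reassign(IntWrap w) {
        w = new IntWrap();  // reassignment: invisible to the caller
        w.value = 999;
    }

    public static void main(String[] args) {
        IntWrap w = new IntWrap();
        w.value = 5;
        addTen(w);
        System.out.println(w.value); // 15
        reassign(w);
        System.out.println(w.value); // still 15
    }
}
```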