Killedsteel
Because I love algorithms and math! I believe that every algorithm is backed by strong fundamental mathematics, and it is that mathematics that I like to uncover.
Interesting problem; it can be solved with a post-order traversal. Traverse the left child, then the right child, and have each child traversal return a "signature" that uniquely identifies the subtree rooted at that child.
Now build current node's signature by doing:
signature = left_signature + "<identifier>" + curr.val + "<identifier>" + right_signature
The "identifier" tags separate the values so that two different trees cannot produce the same signature string (e.g. a signature "33344" could mean left_signature="33", curr.val="3", right_signature="44", or left_signature="3", curr.val="334", right_signature="4").
Once we have this signature, simply check in a global hash set if this signature occurred before, if it did, return true (this solves the original problem). For fetching the dupes, we can do the following (Java):
import static org.junit.Assert.*;

import java.util.*;

import org.junit.Before;
import org.junit.Test;

public class TreeNodeTest {
    TreeNode testRootPos;
    TreeNode testRootNeg; // negative match, test no dupes

    @Before
    public void setUp() {
        testRootPos = new TreeNode(1);
        TreeNode a = new TreeNode(2);
        TreeNode b = new TreeNode(3);
        TreeNode c = new TreeNode(4);
        TreeNode d = new TreeNode(2);
        TreeNode e = new TreeNode(4);
        TreeNode f = new TreeNode(4);
        testRootPos.left = a;
        testRootPos.right = b;
        a.left = c;
        b.left = d;
        b.right = e;
        d.left = f;

        testRootNeg = new TreeNode(4);
        TreeNode _a = new TreeNode(2);
        TreeNode _b = new TreeNode(4);
        TreeNode _c = new TreeNode(1);
        TreeNode _d = new TreeNode(2);
        TreeNode _e = new TreeNode(4);
        TreeNode _f = new TreeNode(3);
        testRootNeg.left = _a;
        testRootNeg.right = _b;
        _a.left = _c;
        _b.left = _d;
        _b.right = _e;
        _d.left = _f;
    }

    @Test
    public void testHasDuplicateSubtreesTrue() {
        List<TreeNode> results = TreeNodeTest.hasDuplicateSubtrees(testRootPos);
        assertEquals(2, results.size());
        assertEquals(4, results.get(0).val);
        assertEquals(2, results.get(1).val);
        assertEquals(4, results.get(1).left.val);
        assertNull(results.get(1).right);
        assertNull(results.get(1).left.left);
        assertNull(results.get(1).left.right);
    }

    @Test
    public void testHasDuplicateSubtreesFalse() {
        List<TreeNode> results = TreeNodeTest.hasDuplicateSubtrees(testRootNeg);
        assertEquals(0, results.size());
    }

    private static Set<String> seen = new HashSet<>();
    private static Map<String, TreeNode> dupes = new HashMap<>();

    public static List<TreeNode> hasDuplicateSubtrees(TreeNode root) {
        seen.clear();  // reset static state so repeated calls (and tests) start fresh
        dupes.clear();
        recursiveVerify(root);
        return new ArrayList<>(dupes.values());
    }

    private static String recursiveVerify(TreeNode curr) {
        if(curr == null) return "N";
        String lVal = recursiveVerify(curr.left);
        String rVal = recursiveVerify(curr.right);
        String signature = lVal + "I" + Integer.toString(curr.val) + "I" + rVal;
        if(seen.contains(signature)) {
            if(!dupes.containsKey(signature)) {
                dupes.put(signature, curr);
            }
        }
        seen.add(signature);
        return signature;
    }
}

class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;
    TreeNode(int x) { val = x; }
}
Time complexity: O(n), space-complexity: O(n)
- Killedsteel March 03, 2017

Given a level m and k leaves to add for the (m+1)'th level, I can do it in 2*C(m, k/2) ways.
Assuming you mean height-balanced binary trees, and assuming the 'preorder sequence length' just means the number of nodes in the tree (n), we can find a minimum height h such that
2^h - 1 <= n && 2^(h+1) - 1 > n
We then compute 2 numbers:
m = 2^h
k = (n - 2^h + 1)
That's it! We return our previous formula: 2*C(m, k/2) = 2*C(2^h, (n - 2^h + 1)/2)
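To make the quantities concrete, here is a minimal sketch (the class and method names are mine) that computes h, m, and k from n using the definitions above:

```java
public class BalancedCount {
    // Smallest h with 2^h - 1 <= n < 2^(h+1) - 1, per the inequality above.
    static int minHeight(int n) {
        int h = 0;
        while ((1 << (h + 1)) - 1 <= n) h++;
        return h;
    }

    public static void main(String[] args) {
        int n = 10;                // preorder sequence length (number of nodes)
        int h = minHeight(n);      // 2^3 - 1 = 7 <= 10 < 15 = 2^4 - 1, so h = 3
        int m = 1 << h;            // m = 2^h = 8
        int k = n - (1 << h) + 1;  // k = n - 2^h + 1 = 3
        System.out.println(h + " " + m + " " + k);
    }
}
```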
Interesting problem, this can be solved by transposing to the minimum path problem. Transform your given input array to the following 2-D matrix:
A =
2 4 7 3
3 5 8 4
6 1 5 4
4 2 6 1
Now "moving to adjacent numbers on the row below" becomes moving only in the down and right directions. The start node is the top-left corner and the destination node is the bottom-right corner. This can be solved with a standard dynamic programming approach in O(n^2) time.
M[i][j] = min( M[i][j+1], M[i+1][j]) + A[i][j]
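The recurrence above can be filled in bottom-up over the sample matrix; a short sketch (names are mine):

```java
public class MinPathSum {
    // M[i][j] = min(M[i][j+1], M[i+1][j]) + A[i][j], computed from the
    // bottom-right (destination) back to the top-left (start).
    static int minPath(int[][] A) {
        int n = A.length;
        int[][] M = new int[n][n];
        for (int i = n - 1; i >= 0; i--) {
            for (int j = n - 1; j >= 0; j--) {
                if (i == n - 1 && j == n - 1) { M[i][j] = A[i][j]; continue; }
                int right = (j + 1 < n) ? M[i][j + 1] : Integer.MAX_VALUE;
                int down  = (i + 1 < n) ? M[i + 1][j] : Integer.MAX_VALUE;
                M[i][j] = Math.min(right, down) + A[i][j];
            }
        }
        return M[0][0];
    }

    public static void main(String[] args) {
        int[][] A = {{2, 4, 7, 3}, {3, 5, 8, 4}, {6, 1, 5, 4}, {4, 2, 6, 1}};
        System.out.println(minPath(A)); // minimum total cost, top-left to bottom-right
    }
}
```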
With no loss of generality, we can assume A <= B for all cases.
We can then compute the minimum number of squares formed by the recursive method:
recursiveCompute(A, B):
    if A == B then: return 1
    if A == 1 then: return B
    if B % A == 0 then: return B/A   // A divides B exactly; without this case the recursion would hit a division by zero
    return (B/A) + recursiveCompute(B%A, A)
Note that B/A here is integer division (e.g. 7/2 = 3)
In your 13 x 29 example, you have:
recursiveCompute(13, 29) = (29/13) + recursiveCompute(3, 13)
= 2 + (13/3) + recursiveCompute(1, 3)
= 2 + 4 + 3
= 9
Note the similarity to Euclid's GCD computing algorithm!
EDIT: edited my answer per sw's comment on A = B case. Thanks!
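A direct Java translation of the recursion above (a sketch; the B % A == 0 base case is mine, added so the recursion terminates cleanly when A divides B exactly):

```java
public class MinSquares {
    // Greedily tile an A x B rectangle with squares, assuming A <= B
    // (swap the arguments first if not), following the recursion above.
    static int recursiveCompute(int A, int B) {
        if (A == B) return 1;
        if (A == 1) return B;
        if (B % A == 0) return B / A; // A divides B: a single row of B/A squares
        return (B / A) + recursiveCompute(B % A, A);
    }

    public static void main(String[] args) {
        System.out.println(recursiveCompute(13, 29)); // the 13 x 29 example
    }
}
```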
Could you explain the problem a bit more, please? Is each entry in the int[48] array a number between 1 and 7 (vacation days for that week)?
We could simply create a 2D matrix with rows = city, columns = weeks (so 48 columns). Go through each column and pick the max from that column. Now just visit the cities at these max-vacation points for all 48 weeks.
if(k > n):
return (k/n) + ( k%n > n/2 ? (n - (k%n) + 1) : (k%n) );
if(k == n):
return 1;
if(k < n):
return k;
Simple O(1) solution :)
EDIT: edited based on observation by Anthony Mai's answer.
Good problem; it can be done in O(m*n) time per lookup, where m is the number of wildcard filters and n is the maximum filter length.
Since we are expected to receive a mixture of filter types (ones with wildcard, ones without) it makes sense to separate out the 'simple' filters (without wildcards) into a HashSet. That way, checking for those becomes O(1).
For the wildcard filters, we make sure we go through the prefix before the '*' and the suffix after the '*' and carefully compare characters from the input and the filter. We do this for each filter.
There's a more optimized way of doing this: ingest the wildcard filters into a trie structure, where a newly added pattern with a '*' at a node 'subsumes' all previously seen filters below that node (e.g. 'a*b' subsumes 'aa*b'). But the basic solution below is in Java.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RestrictedWildcardMatch {
    static final Set<String> simpleFilters = new HashSet<>();
    static final List<String> wildcardFilters = new ArrayList<>();

    public static void main(String[] args) {
        addFilter("a*b");
        addFilter("aa*b");
        addFilter("abc");
        addFilter("abdr*");
        System.out.println(isInBlackList("accb"));
        System.out.println(isInBlackList("ab"));
        System.out.println(isInBlackList("ax")); // false
        System.out.println(isInBlackList("aaxvfbb"));
        System.out.println(isInBlackList("abc"));
        System.out.println(isInBlackList("aa")); // false
        System.out.println(isInBlackList("bb")); // false
        System.out.println(isInBlackList("abdreee")); // true
    }

    public static void addFilter(String filter) {
        if(filter.contains("*")) {
            wildcardFilters.add(filter);
        } else {
            simpleFilters.add(filter);
        }
    }

    public static boolean isInBlackList(String input) {
        if(simpleFilters.contains(input)) return true;
        for(String wildcardFilter : wildcardFilters) {
            if(innerCheck(input, wildcardFilter)) return true;
        }
        return false;
    }

    private static boolean innerCheck(String input, String wildcardFilter) {
        if(input.length() < wildcardFilter.length()-1) return false;
        int i = 0;
        boolean passedFirst = true;
        // make sure the filter prefix until the '*' matches the input
        while(wildcardFilter.charAt(i) != '*') {
            if(wildcardFilter.charAt(i) != input.charAt(i)) {
                passedFirst = false;
                break;
            }
            i++;
        }
        if(!passedFirst) return false; // no point checking further
        // we're able to swing to the last char of the wildcard
        // filter and compare from there BECAUSE there is only
        // one '*' guaranteed by the problem statement
        int k = input.length() - 1;
        for(int j = wildcardFilter.length() - 1; j > i; j--) {
            if(wildcardFilter.charAt(j) != input.charAt(k)) {
                return false;
            }
            k--;
        }
        return true;
    }
}
public Table bookReservation(Reservation req) {
    Table ret = null;
    if(req.startTime.between(openTime, closeTime) &&
       req.startTime.plus(reservationDurations.get(req.startTime)).between(openTime, closeTime)) {
        int minDifference = Integer.MAX_VALUE;
        for(Table t : tables) {
            // the table must actually fit the party, otherwise a too-small
            // table would win with a negative difference
            if(!t.isTaken() && t.maxPartySize >= req.partySize
                    && t.maxPartySize - req.partySize < minDifference) {
                minDifference = t.maxPartySize - req.partySize;
                ret = t;
            }
        }
    }
    if(ret != null) {
        ret.markTaken(); // so that future requests don't use this table
    }
    return ret;
}
Although, I wouldn't use a List<> data structure to store all the tables, since they are constantly being allocated and de-allocated based on requests. I'd use a map with party size as its key, and a list of tables that serve that party size as its value.
Every time we get a reservation request, we can search the keys of this map for the smallest party size that is >= the requested partySize. Then we pull out the first available table of that size and return it. If this is the last such table, we remove the key from the map, so that further requests fall through to a table of larger size. Similarly, as tables become available again, we add them back to the map keyed by the table's maxPartySize. Java's TreeMap supports exactly this kind of ordered-key lookup.
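A minimal sketch of that TreeMap idea (the Table class here is a stand-in for whatever the real system uses):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.TreeMap;

public class TableAllocator {
    static class Table {
        final int maxPartySize;
        Table(int maxPartySize) { this.maxPartySize = maxPartySize; }
    }

    // free tables, grouped by capacity, ordered by key
    private final TreeMap<Integer, Deque<Table>> free = new TreeMap<>();

    void release(Table t) {
        free.computeIfAbsent(t.maxPartySize, k -> new ArrayDeque<>()).add(t);
    }

    // Smallest free table that fits the party, or null if none does.
    Table reserve(int partySize) {
        Map.Entry<Integer, Deque<Table>> e = free.ceilingEntry(partySize);
        if (e == null) return null;
        Table t = e.getValue().poll();
        if (e.getValue().isEmpty()) free.remove(e.getKey()); // last table of this size
        return t;
    }

    public static void main(String[] args) {
        TableAllocator a = new TableAllocator();
        a.release(new Table(2));
        a.release(new Table(6));
        System.out.println(a.reserve(4).maxPartySize); // fits in the 6-top
        System.out.println(a.reserve(4));              // nothing left that fits
    }
}
```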
Interesting problem. The first idea that comes to mind is a greedy algorithm, where I split my current array value v[i] only if v[i] > sum(v'[j]) where sum(v[j]*2^j) = v[i]*2^i for 0 <= j <= n
So the algorithm will be:
min_sum = 0
for i in 1 to n:
    if v[i] > sum(1's in binary representation of v[i]) then:
        min_sum = min_sum + sum(1's in binary representation of v[i])
    else:
        min_sum = min_sum + v[i]
return min_sum
As you mention that you can always add more empty spots to the input array v if need be, and as we're guaranteed that v[i] will only affect any elements coming AFTER it, we will always be able to get to the last index of v.
If we assume just 3-bit numbers for simplicity, a number n exceeds the sum of 1's in its binary representation for every n > 2, so we will almost always see a replacement. Since they are 3-bit numbers, each number can affect at most 2 bits after itself; with a bounded start, this means the algorithm terminates in worst-case (n + 3) iterations.
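The pseudocode above maps almost one-to-one onto Java, with Integer.bitCount giving the sum of 1's in the binary representation (a sketch under the greedy rule as stated; names are mine):

```java
public class MinSumSplit {
    // Greedy rule from the text: replace v[i] by its popcount whenever that is smaller.
    static int minSum(int[] v) {
        int minSum = 0;
        for (int x : v) minSum += Math.min(x, Integer.bitCount(x));
        return minSum;
    }

    public static void main(String[] args) {
        // 7 -> 3 one-bits, 2 -> 1 one-bit, 5 -> 2 one-bits
        System.out.println(minSum(new int[]{7, 2, 5}));
    }
}
```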
This can be modeled as a graph with an intersection as a node. Each road is an edge, so that a two-way road between an intersection and another is 2 directed edges between the 2 nodes.
To find a pair of intersections that have a unique path between them, we can do a two-pass BFS starting at a random node. In the first pass we note every node that was reached through 2 or more paths (say via a 'visited' counter that we increment each time the node is reached), and in the second pass we look for a node with visited == 1. We can return the first such node we find.
Time complexity: O(|V| + |E|), extra space O(|V|).
EDIT 1: I don't think it's best to model each lane as a directed edge between 2 intersections, since that means a 2-way road will always create a cycle. What we can do is instead model the intersections as nodes, and a generic "road" as an edge. Each edge can then have a property "1-way" or "2-way", along with a "sourceNode" property (in case it's a 1-way).
Additionally, when we do the BFS, we only traverse the "1-way" edges, since otherwise there is always a second route you can create by just looping between two nodes.
From your example, looks like corresponding words of each array index either match, or don't match (and will never match in the future). Given this, it's rather simple:
a = null, b = null;
for every time it.next() is called:
    a = itA.next()
    b = itB.next()
    if(a == b) {
        return a;
    } else {
        if(a != null) {
            tmp = a;
            a = null;
            return tmp;
        } else {
            tmp = b;
            b = null;
            return tmp;
        }
    }
Interesting problem. It can be solved in O(n) time and O(n) space. The idea of the algorithm is to store the string built so far, and the replication factor every time "[" is encountered in a stack, and pop from the stack whenever a "]" is encountered.
For example, if input string is "2[c3[ab]]" then the following sequence is executed:
1. See first "[" and push ("", 2) onto stack
2. See "c" and do stringSoFar = "c"
3. See second "[" and push (stringSoFar, 3) = ("c", 3) onto stack. Reset stringSoFar = ""
4. See "a" and set stringSoFar = "a"
5. See "b" and set stringSoFar = "ab"
6. See first "]" and pop ("c", 3) from stack. Now set stringSoFar = "c" + "ab"x3 = "cababab"
7. See second "]" and pop ("", 2) from stack. Now set stringSoFar = "" + "cababab"x2 = "cabababcababab"
8. Reach end of input. Return stringSoFar.
Here is my code implementation in Java:
import java.util.Stack;

public class StringDecoder {
    public static void main(String[] args) {
        System.out.println(decode("3[a2[bd]g4[ef]h]".toCharArray()));
    }

    public static String decode(char[] input) {
        Stack<Pair> history = new Stack<>();
        int multiplier = 1;
        int iterIdx = 0;
        StringBuilder localBuilder = new StringBuilder();
        while(iterIdx < input.length) {
            StringBuilder numBuilder = null;
            if(isNumber(input[iterIdx])) { // read replication lengths
                numBuilder = new StringBuilder();
                while(iterIdx < input.length && isNumber(input[iterIdx])) { // bounds guard
                    numBuilder.append(input[iterIdx]);
                    iterIdx += 1;
                }
            }
            if(numBuilder != null) {
                multiplier = Integer.valueOf(numBuilder.toString());
            }
            if(iterIdx >= input.length) break; // input ended on a digit run
            if(input[iterIdx] == '[') {
                // trigger - store state on stack
                history.push(new Pair(multiplier, localBuilder.toString()));
                localBuilder = new StringBuilder();
            } else if(input[iterIdx] == ']') {
                // trigger - pop stack and cash in on localBuilder
                Pair last = history.pop();
                String toAppend = replicate(localBuilder.toString(), last.nextMultiplier);
                localBuilder = new StringBuilder();
                localBuilder.append(last.builtSoFar)
                            .append(toAppend);
            } else {
                localBuilder.append(input[iterIdx]);
            }
            iterIdx += 1;
        }
        return localBuilder.toString();
    }

    private static String replicate(String template, Integer numReplicas) {
        StringBuilder sb = new StringBuilder();
        for(int j=0; j<numReplicas; j++) {
            sb.append(template);
        }
        return sb.toString();
    }

    private static boolean isNumber(char c) {
        return c >= '0' && c <= '9';
    }
}

class Pair {
    Integer nextMultiplier;
    String builtSoFar;

    Pair(Integer nextMultiplier, String builtSoFar) {
        this.nextMultiplier = nextMultiplier;
        this.builtSoFar = builtSoFar;
    }
}
Quick algorithm sketch to solve this:
1. Find all prime factors of x (using a sieve or trial division) and store each factor's power in an array prime_power[]
2. Initialize product = 1
3. Iterate over the prime factors and for each prime p do: product *= ((prime_power[p] % 2 == 0) ? 1 : p)
4. Return product
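A small sketch of the steps above, using plain trial division instead of a sieve for brevity (the method name is mine):

```java
public class OddPowerProduct {
    // Product of the primes that appear to an odd power in x
    // (i.e. the square-free part of x).
    static long oddPowerProduct(long x) {
        long product = 1;
        for (long p = 2; p * p <= x; p++) {
            int power = 0;
            while (x % p == 0) { x /= p; power++; }
            if (power % 2 == 1) product *= p;
        }
        if (x > 1) product *= x; // leftover prime factor has power 1
        return product;
    }

    public static void main(String[] args) {
        System.out.println(oddPowerProduct(12)); // 12 = 2^2 * 3
        System.out.println(oddPowerProduct(18)); // 18 = 2 * 3^2
    }
}
```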
See Careercup question id=5695158902325248 for a similar question
- Killedsteel January 07, 2017

This can be done in O(n^2) (we have to traverse each of the elements in the matrix).
I did this by first printing the upper-left triangle of the matrix (minor/anti-diagonal inclusive) and then printing the bottom-right triangle (minor diagonal exclusive). It just involves some index play; you need to make sure you don't get an ArrayIndexOutOfBoundsException.
Below solution is in Java:
public class PrintZigZag {
    public static void main(String[] args) {
        int[][] test1 = new int[][]{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
        printZigZag(test1);
        System.out.println(); // new line for a new test
        int[][] test2 = new int[][]{{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}, {13, 14, 15, 16}};
        printZigZag(test2);
    }

    // matrix is N x N
    public static void printZigZag(int[][] matrix) {
        int len = matrix.length;
        // upper left triangle (including minor diagonal)
        for(int i=1; i<=len; i++) {
            for(int j=1; j<=i; j++) {
                int next = matrix[i - j][j - 1];
                System.out.print(next + " ");
            }
        }
        // lower right triangle (excluding minor diagonal)
        for(int i=1; i<len; i++) {
            for(int j=i; j<len; j++) {
                int next = matrix[len + i - j - 1][j];
                System.out.print(next + " ");
            }
        }
    }
}
Running above two tests yields:
1 4 2 7 5 3 8 6 9
1 5 2 9 6 3 13 10 7 4 14 11 8 15 12 16
Oops, I didn't read the question carefully. A solution for an N x M matrix is required. Slight changes to the above code do the trick:
public class PrintZigZag {
    public static void main(String[] args) {
        int[][] test1 = new int[][]{{1, 2}, {4, 5}, {7, 8}};
        printZigZag(test1);
        System.out.println(); // new line for a new test
        int[][] test2 = new int[][]{{1, 2, 3}, {5, 6, 7}, {9, 10, 11}, {13, 14, 15}};
        printZigZag(test2);
    }

    // matrix is N x M
    public static void printZigZag(int[][] matrix) {
        int N = matrix.length;
        int M = matrix[0].length;
        // upper left triangle (including minor diagonal)
        for(int i=1; i<=N; i++) {
            for(int j=1; j<=Math.min(M, i); j++) {
                int next = matrix[i - j][j - 1];
                System.out.print(next + " ");
            }
        }
        // lower right triangle (excluding minor diagonal)
        for(int i=1; i<M; i++) {
            // cap j so the row index N + i - j - 1 never goes negative (matters when M > N)
            for(int j=i; j<Math.min(M, N + i); j++) {
                int next = matrix[N + i - j - 1][j];
                System.out.print(next + " ");
            }
        }
    }
}
Could you please explain the logic of your code and/or add comments to it for clarity?
- Killedsteel May 03, 2015

Complexity isn't really O(2^N). Rather, it's (N choose k), which comes out to O(n! / (k! (n-k)!)). Depending on what k is, it can be anywhere from O(n) up to roughly O(2^n / sqrt(n)) at k = n/2.
- Killedsteel April 28, 2015

My solution in Java using dynamic programming (with a test case):
public class ChildrenDropOffProblem {
    public static void main(String[] args) {
        System.out.println(computeMinWeightedSum(
                new int[]{10, 7, 5, 9}, // number of children that can be dropped off at each stop
                new int[]{20, 2, 2},    // distance between successive stops
                26,                     // number of children to start out with
                2                       // 1-indexed stop to start at (in this case, A[1])
        ));
    }

    public static int computeMinWeightedSum(int[] A, int[] B, int numChildren, int start) {
        int n = A.length;
        assert start >= 1 && start <= n;
        assert n >= 2 && B.length == n - 1;
        int[][] C = new int[n][numChildren];
        for(int i=0; i<n; i++) {
            for(int w=0; w<numChildren; w++) {
                if(w <= A[i]) {
                    C[i][w] = 0; // initialization
                } else {
                    C[i][w] = Integer.MAX_VALUE;
                }
            }
        }
        for(int w=0; w<numChildren; w++) {
            for(int k=0; k<n; k++) {
                int prevBest = Integer.MAX_VALUE;
                int wNew = 0;
                if(w <= A[k]) {
                    continue; // as per initialization, C[k][w] = 0
                } else {
                    wNew = w - A[k];
                }
                for(int m=0; m<n; m++) {
                    if(m != k) {
                        int innerBest = 0;
                        if(wNew >= A[m]) {
                            innerBest = C[m][wNew];
                        }
                        innerBest += (wNew+1)*distance(B, k, m);
                        if(innerBest < prevBest) prevBest = innerBest;
                    }
                }
                C[k][w] = prevBest;
            }
        }
        return C[start-1][numChildren-1];
    }

    public static int distance(int[] B, int start, int end) {
        assert start < B.length;
        assert end < B.length;
        int trueStart, trueEnd;
        if(start < end) {
            trueStart = start;
            trueEnd = end;
        } else {
            trueStart = end;
            trueEnd = start;
        }
        int dist = 0;
        for(int j=trueStart; j<trueEnd; j++) dist += B[j];
        return dist;
    }
}
And a fact that your answer uses (for clarity of others) is that when a number is XOR'd with 0, it yields itself.
- Killedsteel April 17, 2015

Here's working code for this problem, along with a test case (input expected to be clean):
public class LogCuttingProblem {
    public static void main(String[] args) {
        System.out.println(calculateLogCuttingCost(100, new int[]{23, 34, 35, 68, 71, 88, 93}));
    }

    public static int calculateLogCuttingCost(int n, int[] markings) {
        int m = markings.length;
        int[] newMarkings = new int[m+2];
        newMarkings[0] = 0;
        newMarkings[m+1] = n;
        for(int j=1; j<m+1; j++) {
            newMarkings[j] = markings[j-1];
        }
        m = newMarkings.length;
        int[][] cmatrix = new int[m][m];
        cmatrix[0][0] = 0;
        cmatrix[0][1] = newMarkings[1];
        cmatrix[m-2][m-1] = n - newMarkings[m-2];
        for(int i=1; i<m-2; i++) {
            cmatrix[i][i+1] = newMarkings[i+1] - newMarkings[i]; // sub-logs with no markings
        }
        for(int windowSize=1; windowSize<m; windowSize++) {
            for(int i=0; i+windowSize<m; i++) {
                int j = i + windowSize;
                int minCost = Integer.MAX_VALUE;
                for(int k=i+1; k<j; k++) {
                    int currCost = cmatrix[i][k] + cmatrix[k][j] + newMarkings[j] - newMarkings[i];
                    if(currCost < minCost) minCost = currCost;
                }
                if(j != i+1) cmatrix[i][j] = minCost;
            }
        }
        return cmatrix[0][m-1];
    }
}
C(i, j) = MIN over k of { C(i, k) + C(k, j) + L(j - i) }, for k = i+1, …, j-1
What the above line says is basically: if I cut at position k between i and j, then the minimum cost over this range is the minimum cost of further cutting up the sub-rods {i, k} and {k, j}, per the optimal substructure rule, plus the cost of making this cut at k, which is L(j - i).
L(j - i) is the length of the rod from marking i to marking j (remember, the markings aren't always uniform! :) )
Also, an edit: j ranges from {i, i+1, ..., m+1}.
We'd have to use dynamic programming to solve this one, since there are 2^(m) combinations of making even the first cut.
My algorithm is listed below in pseudocode:
Rod-cutting problem
Say we have range i...j in consideration.
C(i, j) = cost of cutting a rod between markings i and j
i, j = 0, 1, …, m+1
C(i, j) = MIN over k of { C(i, k) + C(k, j) + (j - i) }, for k = i+1, …, j-1
So,
i → 0, 1, …,m+1
j → 0, 1, …,m+1
for each {i,j}
k → i+1, …,j-1
Time complexity: O(m^3)
Space complexity: O(m^2)
Initialization: C(0, 0) = 0; C(i, i) = 1 for i = 1, …,m+1
Return: C(0, m+1)
By the way, another solution is to do this via a lookup table (you'd have to ask the interviewer how frequently this operation is to be performed though). If frequency and usage of this operation is high, it makes sense to keep in memory all the 8-choose-3 combinations of setting 3 bits from 8. We can then simply do a lookup to return true/false.
- Killedsteel April 13, 2015

Good solution, using Kernighan's algorithm.
- Killedsteel April 13, 2015The logic is basically at each match, to check if bestMatchIdx was set before. If it was, compare it with current value, and if currentValue < previousValue, change bestMatchIdx to point to i. Else, leave it as is.
In the end, return bestMatchIdx (it was initialized to -1, so if no assignment is done, it will return with -1).
One limitation of the above however, is I don't need to continue matching through the string if my second match happened immediately. However, that may need additional lines of code (i.e. > 6).
My solution (although it's 6 lines with some cheating ;) )
// some test cases...
public static void main(String[] args) {
    System.out.println(function("124609", '5', '5'));
    System.out.println(function("124609", '2', '0'));
    System.out.println(function("124609", '2', '4'));
}

public static int function(String s, char a, char b) {
    int bestMatchIdx = -1;
    for(int i=0; i<s.length(); i++) {
        if(s.charAt(i) == a || s.charAt(i) == b) {
            // >= 0, so a match at index 0 isn't overwritten by a larger later match
            bestMatchIdx = (bestMatchIdx >= 0 && s.charAt(i) > s.charAt(bestMatchIdx)) ? bestMatchIdx : i;
        }
    }
    return bestMatchIdx;
}
But how does this take into account the case where your iterator encounters the larger of a and b first? You're returning i immediately.
- Killedsteel April 12, 2015

Good solution.
- Killedsteel April 10, 2015

You're assuming that the graph is a Directed Acyclic Graph (DAG). I think the interviewer meant a graph in general.
- Killedsteel April 06, 2015

Upon rethinking: in the most general case (moving diagonally too), one can use DFS to solve this as a graph problem. So the worst case isn't exponential, but polynomial in V (i.e. O(V^2)), where V = the number of vertices/cells. So you were right on that one ;)
- Killedsteel April 06, 2015

Corrections:
- "The number of ways to go from entry point m to exit point n in the above case is 8."
- Your solution makes a fourth assumption (which can actually be derived from the three listed above): you never visit a given cell more than once.
FWIW, I think the above assumption is a valid one.
What if your entry point and exit point are right next to each other? Imagine like so
m 0 0
n 0 0
0 0 0
The number of ways to go from entry point m to exit point n in the above case is 7.
Your solution makes three assumptions (which should be clarified with the interviewer):
a. You can only travel in rightward and downward directions.
b. You cannot exit from an exit point if it exists on the same rail as an entry point, and you started from that entry point.
c. A valid traversal from an entry point to an exit point can only happen if i(m) >= i(n) where i(m) is the row index of the entry point, and i(n) is the row index of the exit point.
If traveling in all directions is feasible, then I don't see a polynomial solution to this problem. It can basically be reduced to a path enumeration problem, which is O(2^n).
How does this algorithm respond to the following types of inputs?
1. arr = [6,7,6,7,6,7,6,7] (expected answer is 8)
2. arr = [6,6,6,6,6,6,6,7] (expected answer is 8)
I believe in case (2) above, the runtime of your algorithm is O(n^2). For case (1), I believe that your code doesn't solve the problem, since it will always output "1" (it never sees farther than 1 element apart).
Additionally, you should add step 2A as follows
2A. If D > Rn return;
This basically avoids doing any computations if D > A[n].getRadius(). We need this step because otherwise, the algorithm will assign each T[i] = i, which is not as per the problem requirements.
- Killedsteel April 05, 2015

The above answer with whitespace preserved (hopefully ;)):
1. Sort {{R1,C1}, {R2, C2}, ..., {Rn, Cn}} on their radius value // notice how each Ci is satellite data for Ri
   --> Let's call this sorted array A
2. Initialize T[i] for i from 1 to n
3. For each i from 1 to n
       R(k) = D - R(i)
       If (R(k) < 0) R(k) = 0 // if R(i) is larger than D itself, min radius of second gear needed is 0
       minCostIdx = i
       j = ModifiedBinarySearch(A, R(k))
       For each m from j to n
           FindMinPairCost(minCostIdx, A, m)
       T[i] = minCostIdx
Why do we do the binary search inside the outer "for loop", you ask? The reason is that in the average case you expect it to eliminate half of the array A (thus bringing your checks down to only n/2). Basically, you can think of it as trading off n/2 checks for log(n) checks.
- Killedsteel April 05, 2015

This is one of those questions where you need to prompt the interviewer for more details. Specifically,
1. Are {R1, R2, ..., Rn} in increasing order of magnitude?
2. Do both arrays fit into memory? Or do you only have a buffer of size k <= n?
My solution assumes the answers to the above questions as
1. No
2. Yes
In this case, you can basically implement the following algorithm:
1. Sort {{R1,C1}, {R2, C2}, ..., {Rn, Cn}} on their radius value // notice how each Ci is satellite data for Ri
   --> Let's call this sorted array A
2. Initialize T[i] for i from 1 to n
3. For each i from 1 to n
       R(k) = D - R(i)
       If (R(k) < 0) R(k) = 0 // if R(i) is larger than D itself, min radius of second gear needed is 0
       minCostIdx = i
       j = ModifiedBinarySearch(A, R(k))
       For each m from j to n
           FindMinPairCost(minCostIdx, A, m)
       T[i] = minCostIdx
Notes:
1. ModifiedBinarySearch(A, R(k)) is basically binary search, but tweaked a little to find the smallest index j in array A that has R(j) >= R(k)
2. FindMinPairCost(minCostIdx, A, m) basically checks whether C(m) < C(minCostIdx). If C(m) = C(minCostIdx), it further compares R(m) to R(minCostIdx). It then keeps the index of the cost with the larger radius (as per problem requirements).
3. ABS is the absolute function in mathematics.
Complexity analysis:
1. Call 1 in the algorithm above takes O(nlog(n)) time.
2. Call 3 is O(n) on the outer "for loop". Within it, each time we do a binary search (i.e. O(log(n)). But in the worst case, we could end up with j going from 1 to n always (e.g. A = {{2, 2}, {3, 2}, {5, 1}} and D = 1). Hence, in the worst case, this is O(n). So, call 3 of the algorithm is O(n^2).
Hence,
Time complexity: O(n^2)
Space complexity: O(1) (assuming we're given space to store array T initially)
+1 for the hint of selection algorithm. Specifically, using Quickselect, we can achieve this in O(n):O(1) in time:space :)
- Killedsteel February 27, 2014

Sorry for the minor mistake in the code. Please replace
    if(isZeroFollower) {
with
    if(isInfluencer) {
Time complexity: O(N) (Assuming a square matrix with side sqrt(N))
Space complexity: O(1)
Imagine a matrix as follows:
a11 a12 a13 a14 ...... a1N
a21 a22 a23 a24 ...... a2N
..
..
aN1 aN2 aN3 aN4...aNN
So we have an NxN matrix of booleans. We are interested in that particular individual, for whom:
1. This individual does not follow anybody. That means, if the index of such an individual is j in
the above matrix, then ajk = false for all k = 1, 2, ...., N
2. This individual is followed by everybody. This means the column entries akj = true for all k = 1, 2, ..., N with k != j (since followingMatrix[k][j] denotes "k follows j", we get the sentence "all k follow j", which is what we want)
So basically, we are searching for a row, with all false values, and for a column with all but one true values (the value ajj will always be false, based on the definition in the problem statement).
This can be done using the following function:
public static int getInfluencer(boolean[][] followingMatrix) {
    for(int i=0; i<followingMatrix.length; i++) {
        boolean isInfluencer = true;
        // row i must be all false: the influencer follows nobody
        for(int j=0; j<followingMatrix[i].length; j++) {
            if(followingMatrix[i][j]) {
                isInfluencer = false;
                break;
            }
        }
        if(isInfluencer) {
            // column i must be all true: everybody follows the influencer
            // (note the negation, per the description above)
            // check for all (j, i) for j = 0 to i-1
            for(int j=0; j<i; j++) {
                if(!followingMatrix[j][i]) {
                    isInfluencer = false;
                    break;
                }
            }
            // (i, i) will always be false, so skip
            // check for all (j, i) for j = i+1 to N
            for(int j=i+1; j<followingMatrix.length && isInfluencer; j++) {
                if(!followingMatrix[j][i]) {
                    isInfluencer = false;
                }
            }
        }
        if(isInfluencer) return i;
    }
    return -1;
}
Doesn't radix sort have a worst-case space complexity of O(n) (for bucket allocation)?
I'd recommend something like quicksort (average-case time complexity O(n log n), O(log n) auxiliary space for the recursion stack) followed by a linear scan and count.
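For instance, a sort-then-scan count might look like this (a sketch; Arrays.sort on an int[] uses the JDK's dual-pivot quicksort):

```java
import java.util.Arrays;

public class SortAndCount {
    // Sort, then count occurrences of a value with one linear scan.
    static int countOccurrences(int[] a, int target) {
        Arrays.sort(a);
        int count = 0;
        for (int x : a) {
            if (x == target) count++;
            else if (x > target) break; // sorted: no more matches possible
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countOccurrences(new int[]{3, 1, 3, 2, 3}, 3));
    }
}
```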
Interesting problem. Very similar to the TopCoder problem found here:
http://community.topcoder.com/stat?c=problem_statement&pm=1889
Assume (0, 0) is top-left corner, and (m, n) is bottom-right.
Then, the basic idea is:
1. Let K(i, j) be the ways in which you can get to (m, n) from (i, j).
2. In a highly compressed formula, the recurrence relation is:
K(i, j) = ({(i+1, j) reachable} ? K(i+1, j) : 0) + ({(i, j+1) reachable} ? K(i, j+1) : 0)
where i = m-1, m-2, ..., 2, 1 & j = n-1, n-2, ..., 2, 1
For getting base conditions, we observe:
3. K(m-1, n-1) = 2 (Since you can go {East + South} or {South + East} to get to (m,n))
4. K(m-1, n-2) = 1 + K(m-1, n-1)
5. K(m-1, n-3) = 1 + K(m-1, n-2) and so on...
6. Similarly, for the vertical last column, K(m-2, n-1) = 1 + K(m-1, n-1)
7. K(m-3, n-1) = 1 + K(m-2, n-1) and so on.
Note*: There may be some points in the last column/row that are unreachable. This calls for a slight modification from steps (3) through (7), that are really similar to the formula in step (2). This is left as an exercise to the reader :)
Note**: The above algorithm also has within it, a neatly embedded heuristic: "I can only move in the downward or right directions"! This I think, is justified, since you constantly want to be moving in the direction of your target (m, n). This is also something you can talk to the interviewer about. If the interviewer wants to incorporate another direction into allowable directions (say, left), then we simply change the recursion formula to reflect so.
Time complexity: O(mn)
Space complexity: O(mn)
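A sketch of that DP with the down/right restriction (names are mine; here the destination cell itself counts as one path, a slightly different base case than the text's (m, n) indexing, and blocked[][] marks unreachable points):

```java
public class GridPaths {
    // Count down/right paths from (0,0) to (m-1,n-1), skipping blocked cells.
    static int countPaths(boolean[][] blocked) {
        int m = blocked.length, n = blocked[0].length;
        int[][] K = new int[m][n];
        for (int i = m - 1; i >= 0; i--) {
            for (int j = n - 1; j >= 0; j--) {
                if (blocked[i][j]) continue;           // K stays 0: unreachable
                if (i == m - 1 && j == n - 1) { K[i][j] = 1; continue; }
                if (i + 1 < m) K[i][j] += K[i + 1][j]; // step down
                if (j + 1 < n) K[i][j] += K[i][j + 1]; // step right
            }
        }
        return K[0][0];
    }

    public static void main(String[] args) {
        System.out.println(countPaths(new boolean[3][3])); // open 3x3 grid
    }
}
```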
Sorry for the partial answer. There is probably an extra step:
5. return error "no such occurrence of pairs found"
If this is N dimensional, then naturally the above approach breaks. It would then become O(2^n):O(2^n) in time:space. You can maybe then iterate over the input array, and compute the values for the midpoint for each of the N dimensions, for every M^2 combination of points. This would thus be a O(NM^2):O(N) solution in time:space.
1. Create a data structure called Pair {int x, y;} and assume that the input array comes in this format.
2. Initialize four variables of Type Pair to null. These four represent the four cases {Pair(evenX, evenY), Pair(evenX, oddY), Pair(oddX, evenY), Pair(oddX, oddY)}
3. Now start iterating over the input array of pairs. If you encounter a pair of a particular type from the four possible types listed above, set the respective Pair variable with those values.
4. There are 8 cases to consider here (two per parity class; a pair has an integer midpoint exactly when the two points agree in parity on both coordinates, so a match must come from the same class):
a1. while setting Pair(evenX, evenY), we discover that it was null - set it to the current (X, Y)
a2. while setting Pair(evenX, evenY), we discover that it was not null - return this variable and the current Pair(X, Y) as a solution
a3 & a4. same as a1 and a2, except for pairs of type Pair(oddX, oddY)
a5 & a6. same as a1 and a2, except for pairs of type Pair(oddX, evenY)
a7 & a8. same as a1 and a2, except for pairs of type Pair(evenX, oddY)
This would be an O(n):O(1) solution in time:space.
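The four-variable bookkeeping above can be sketched compactly; in this sketch the Pair type is replaced by a raw int[] of length 2 and the four variables by a four-slot array (illustrative choices of mine):

```java
// Sketch of the parity-bucket idea above: two lattice points have an
// integer midpoint exactly when their x's share parity and their y's share
// parity, so the second point to land in an occupied bucket yields a pair.
public class IntegerMidpointPair {
    public static int[][] findPair(int[][] points) {
        // Slots: {evenX,evenY}, {evenX,oddY}, {oddX,evenY}, {oddX,oddY}
        int[][] bucket = new int[4][];
        for (int[] p : points) {
            int key = (Math.floorMod(p[0], 2) << 1) | Math.floorMod(p[1], 2);
            if (bucket[key] != null) return new int[][]{ bucket[key], p };
            bucket[key] = p;
        }
        return null; // no such occurrence of pairs found
    }
}
```

Math.floorMod keeps the bucket key in {0, 1} even for negative coordinates.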
Important thing to note: this question demands reversing a linked list, followed by the return of the _tail_ element (unlike the usual reversal, which requires returning the head).
Please ignore any comments/code using the "this" keyword; that is something I need for this function to work with my class.
Edge case left as an exercise: the original input list is of length less than k. I'd recommend adding a small while loop at the beginning of the function that counts until (currHead.getNext() == null || elemCount >= k)
Here is the function that works with a LinkedList class that I implemented.
public Element<T> reverseEveryK(int k) {
    Element<T> currHead = head;      // head of the current k-block
    Element<T> iterator = head;      // to iterate
    Element<T> tempIterator = iterator;
    Element<T> prevHead = null;
    int len = 1;
    while (iterator.getNext() != null) {
        if (len % k == 0 || iterator.getNext().getNext() == null) {
            tempIterator = iterator.getNext();            // store next in chain
            iterator.setNext(null);                       // make this the new tail
            Element<T> kListTail = reverseList(currHead); // tail-end recursion
            kListTail.setNext(prevHead);
            prevHead = iterator;
            currHead = tempIterator;
            iterator = tempIterator;                      // move to the next element
        } else {
            iterator = iterator.getNext();
        }
        len++;
    }
    tempIterator.setNext(prevHead);
    this.head = tempIterator;        // for printing the entire list
    return tempIterator;
}
The reversal function implements tail-end recursion. Here it is:
private Element<T> reverseList(Element<T> head) {
    if (head.getNext() == null) {
        return head;
    }
    Element<T> curr = reverseList(head.getNext());
    curr.setNext(head);
    head.setNext(null);
    return head;
}
EDIT: ...number of characters in text that the star* represents.
Perhaps another modification exercise for someone? :)
@Julien: Oh, you're right! I came up with a quick fix for this. Change the following:
char currTextChar = text.charAt(i);
char prevPatternChar = pattern.charAt(j-1);
to this:
if (text.substring(i).equals(pattern.substring(j + 1)))
    return matchStrings(text.substring(i), pattern.substring(j + 1)); // '*' may represent no characters at all
char currTextChar = text.charAt(i);
char prevPatternChar = pattern.charAt(j - 1);
Note: This is a temporary fix, only to address the above issue. But in general, I understand what the problem here is. We need to measure the number of matching characters in the text, say A, then measure the number of characters in the pattern that match the character before the '*', say B. We then compute A - B: that is the number of characters in the text that the star represents. We then change
i = i - 1; // same as above. ready for
j = j + 1; // next piece of string matching
to
i = i - B; // same as above. ready for
j = j + A - B; // next piece of string matching
I just tested it. It seemed to work.
Assert.assertFalse(StringMatchWithWildCards.matchStrings("Facebook", "Facebo*ok"));
If you meant it to be the other way, I tried that too. And that worked as well.
Assert.assertFalse(StringMatchWithWildCards.matchStrings("Facebo*ok", "Facebook"));
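A table-based matcher is handy for cross-checking greedy fixes like the one above. This is a minimal DP sketch, assuming glob-style semantics where '*' matches any (possibly empty) run of characters; it reuses the thread's method name but is an independent reimplementation, not the original:

```java
// dp[i][j] is true when text[0..i) matches pattern[0..j).
// Assumption: '*' matches any run of characters, including the empty run.
public class StringMatchWithWildCards {
    public static boolean matchStrings(String text, String pattern) {
        int n = text.length(), m = pattern.length();
        boolean[][] dp = new boolean[n + 1][m + 1];
        dp[0][0] = true;
        for (int j = 1; j <= m; j++)               // leading '*'s can match the empty text
            dp[0][j] = dp[0][j - 1] && pattern.charAt(j - 1) == '*';
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                char p = pattern.charAt(j - 1);
                if (p == '*')
                    dp[i][j] = dp[i][j - 1]        // '*' matches nothing
                            || dp[i - 1][j];       // '*' absorbs one more text character
                else
                    dp[i][j] = dp[i - 1][j - 1] && p == text.charAt(i - 1);
            }
        }
        return dp[n][m];
    }
}
```

O(nm) in time and space, and it sidesteps the A/B counting entirely; under a different star semantics (e.g. "zero or more of the preceding character") the recurrence would need adjusting.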
Can you provide space/time bounds please? I think this will be O(n):O(n) in space:time, where n = number of digits in original number
- Killedsteel February 13, 2014
Many questions to ask the interviewer on this.
- Killedsteel March 04, 2017
1. How many band names are expected to be stored? Is there some upper limit? (e.g. it HAS to be < 7.5 billion, since that's more than the population of the Earth :)
2. How many songs can a band create? Again, it has to be a small/limited amount.
3. What is the expected response time of the server?
4. How frequently will both API's be called? Will the API for play() be called much much more often than topSong(), or vice-versa?
5. Is any other metadata provided with the play() request other than in the API? e.g. header information?
Based on these questions, we can design a service for the problem. One implementation uses a hash map with key = band name and value = TreeMap{key = song name, value = play count}. This costs O(log n) per play() insertion, O(1) for band look-ups, and O(n) space (since we're storing all counts). Note that the TreeMap is ordered by song name, not by count, so topSong() needs either a scan or a separately maintained running maximum per band.
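One way to sketch that design, with one extra cached field per band so topSong() stays O(1) (class and method names here are my assumptions, not a spec from the question):

```java
import java.util.HashMap;
import java.util.Map;

// Per-band play counts plus a cached running maximum. Because counts only
// ever increase, updating the cache on each play() keeps it correct.
public class SongService {
    private static class BandStats {
        final Map<String, Long> plays = new HashMap<>();
        String topSong;
        long topCount;
    }

    private final Map<String, BandStats> bands = new HashMap<>();

    public void play(String band, String song) {
        BandStats s = bands.computeIfAbsent(band, b -> new BandStats());
        long c = s.plays.merge(song, 1L, Long::sum); // O(1) expected
        if (c > s.topCount) {                        // refresh the cached maximum
            s.topCount = c;
            s.topSong = song;
        }
    }

    public String topSong(String band) {
        BandStats s = bands.get(band);
        return s == null ? null : s.topSong;         // O(1)
    }
}
```

A plain HashMap of counts replaces the TreeMap here, trading the ordered iteration for O(1) expected updates; the cached maximum is what makes topSong() constant time.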