ttsou
Just a homeless guy in the Bay Area slinging code.
This is an extension of the simpler problem of computing a dot product on two sparse arrays. We do not compute any products with zeros.
Given 'N' and 'M' filled entries in the sparse matrices A and B, we can brute force over the filled entries (still substantially better than brute forcing the full array dimensions) to reach O(n*m) time. However, matrix multiplication, like the simpler dot product, is an ordered process, and that ordering can be exploited for further improvement.
So define ordering operations on both arrays: prioritize horizontal ordering on A and vertical ordering on B, plus an A-to-B compare that serves as an index transformation to sort and 'flatten' the matrices into a linear operation. That way we can iterate unidirectionally over both matrices in a single pass.
The final result is O(n) space. Time is O(n) if the inputs are sorted, or O(nlogn) if they are not. In both cases 'n' is the number of filled entries; the dimensions of the matrices are irrelevant.
/*
 * 'Dot Product on Sparse Matrices'
 *
 * O(n) time and space complexity for the multiply, not including sorting,
 * where 'n' is the number of used entries in sparse matrices A or B. Actual
 * matrix dimensions are irrelevant (square, 1d array, etc.) and have no
 * effect on the algorithm or running time.
 *
 * If the input arrays are not sorted then the running time extends to
 * O(nlogn) where, again, 'n' is the number of elements in the sparse array
 * irrespective of the matrix dimensions.
 */
import java.util.Arrays;
import java.util.Comparator;
class Value {
    public int val, x, y;

    public Value(int x, int y, int val)
    {
        this.x = x;
        this.y = y;
        this.val = val;
    }

    public String toString()
    {
        return ("(" + x + ", " + y + ") = " + val);
    }
}
public class DotProduct {
    // Order A entries by row (y) first, then by column (x)
    private class ComparatorA implements Comparator<Value>
    {
        public int compare(Value a, Value b)
        {
            if (a.y < b.y)
                return -1;
            else if (a.y == b.y && a.x == b.x)
                return 0;
            else if (a.y == b.y && a.x < b.x)
                return -1;
            else
                return 1;
        }
    }

    // Order B entries by column (x) first, then by row (y)
    private class ComparatorB implements Comparator<Value>
    {
        public int compare(Value a, Value b)
        {
            if (a.x < b.x)
                return -1;
            else if (a.x == b.x && a.y == b.y)
                return 0;
            else if (a.x == b.x && a.y < b.y)
                return -1;
            else
                return 1;
        }
    }

    // Index transformation comparing an A entry against a B entry
    private int compareIndexAB(Value a, Value b)
    {
        if (a.y < b.x)
            return -1;
        else if (a.y == b.x && a.x == b.y)
            return 0;
        else if (a.y == b.x && a.x < b.y)
            return -1;
        else
            return 1;
    }
    public Value[] compute(Value[] A, Value[] B)
    {
        Arrays.sort(A, new ComparatorA());
        Arrays.sort(B, new ComparatorB());

        int max = A.length < B.length ? A.length : B.length;
        Value[] C = new Value[max];
        int c = 0;
        int b = 0;
        boolean nonzero = false; // For empty-result handling

        for (int a = 0; a < A.length; a++) {
            while (b < B.length && compareIndexAB(A[a], B[b]) > 0)
                b++;
            if (b >= B.length)
                break;
            if (compareIndexAB(A[a], B[b]) == 0) {
                nonzero = true;
                int prod = A[a].val * B[b].val;
                if (C[c] == null)
                    C[c] = new Value(A[a].y, 0, prod);
                else if (C[c].x == A[a].y)
                    C[c].val += prod;
                else
                    C[++c] = new Value(A[a].y, 0, prod);
            }
        }
        if (!nonzero)
            return new Value[0];

        Value[] resizeC = new Value[c + 1];
        for (int i = 0; i <= c; i++)
            resizeC[i] = C[i];
        return resizeC;
    }
    public static void main(String[] args)
    {
        Value[] matrixA = {
            new Value(45, 1000000, 1),
            new Value(46, 1000000, 2),
            new Value(47, 1000000, 3),
            new Value(48, 1000000, 4),
            new Value(1234, 123, 5),
        };
        Value[] matrixB = {
            new Value(1000000, 45, 1),
            new Value(1000000, 46, 2),
            new Value(1000000, 47, 3),
            new Value(1000000, 48, 4),
            new Value(789, 5678, 5),
        };
        DotProduct dp = new DotProduct();
        Value[] matrixC = dp.compute(matrixA, matrixB);
        for (int i = 0; i < matrixC.length; i++)
            System.out.println(matrixC[i]);
    }
}

ttsou
December 05, 2016

The critical phrase here is 'positive integers': if we merely extended the value set from natural numbers to whole numbers by adding zero, the subarray length would no longer be bounded by the sum; it could reach 'N', the full size of the array.
But with natural numbers, the subarray size is bounded by the target sum, which is maximally reached with all ones. That limits the running time to O(nk) with constant extra space, where k is the target sum and n is the array size.
One could go further with an early exit once the subarray sum exceeds the target, but there's little point as that has no effect on the worst-case running time.
#include <stdio.h>

int findSubArray(unsigned *d, unsigned d_size, unsigned X)
{
    for (unsigned n = 0; n < d_size; n++) {
        unsigned sum = 0;
        /* With natural numbers, a qualifying subarray has at most X elements */
        for (unsigned i = 0; i < X; i++) {
            if (n + i >= d_size)
                break;
            sum += d[n + i];
            if (sum == X)
                return 1;
        }
    }
    return 0;
}
int main(int argc, char **argv)
{
    unsigned in[] = { 1, 3, 5, 18, };

    printf("Sum exists for %u? %s\n", 9,
           findSubArray(in, 4, 9) ? "True" : "False");
    printf("Sum exists for %u? %s\n", 10,
           findSubArray(in, 4, 10) ? "True" : "False");
    printf("Sum exists for %u? %s\n", 40,
           findSubArray(in, 4, 40) ? "True" : "False");
}

ttsou
December 04, 2016

I'm not a fan of this question. I'm not sure whether the problem is the original question itself or the representation given here, but I'm calling it ambiguous. Here's why.
Additional information provided in the comments states that the traversals are inorder and that the root node is given. Let's presume the tree itself is ordered; otherwise the question becomes even more ambiguous.
For example, looking at inorder traversals of the first tree starting from node 'D', it's easy to see that at least two trees can yield the same traversal ordering. This affects which nodes are leaves and which are not, and that is an essential part of the question.
      D
     / \
    C   E
   /
  B
 /
A

    D
   / \
  B   E
 / \
A   C
If the traversal were preorder, the situation would be clearer: leaf nodes could be detected with a stack and a single array traversal in O(nlogn) time and O(logn) space. But that's not how the problem is stated. I'm moving on unless somebody tells me I'm wrong or provides additional details that have not yet been given.
ttsou
December 04, 2016

Presumably the array length 'n' is very large relative to the history depth 'k'.
The naive answer is then sorting on 'n' and taking the k'th index. That is O(n) space and O(nlogn) time, both of which are 'very large' because 'n' is 'very large'.
Using a heap (or even insertion sort) over 'k' elements, iterating without modifying the array, gives the main improvement: O(k) space complexity. Time must still cover the entire array, so O(nlogk) for the heap implementation or O(nk) for insertion sort.
The heap is clearly better here, but given that n >> k, even the simple sliding insertion sort on k elements still beats any answer that involves sorting on n.
For these types of questions, it's usually safe to assume that 'sparse' implies few filled entries N relative to a very large dimensionality M. For example, N=100 values in a sparse matrix of size M=2^50, which puts direct array storage completely out of the question.
The related (separate) question involves dot product computation (a linear operation), which hints at what the running order should be. Regardless, the answer to this question, as it stands, is really any structure whose space grows with the number of loaded entries independently of the matrix dimensions.
Sparse matrices should almost never be represented with dense arrays because the space requirement is quadratic in the dimension. In this case we only need to track the nonzero values on the board, which takes us down to O(n) space where 'n' is the number of nonzero values in the sparse matrix.
From the set of nonzero values we can construct an undirected graph with an underlying adjacency list implementation. To do so we sort values by location and connect any two values on the same axis that are one unit apart.
Once the graph is constructed, which takes O(nlogn) time because of the sorting, the 'big plus' calculation is a BFS per nonzero value. Space is O(n) for the BFS queue. Amortized time (remember the matrix is sparse) comes out to O(n). In both cases 'n' is the number of entries. We completely ignore the dimensions of the board.
/*
 * 'Largest Plus'
 *
 * Implement points as an undirected graph with an underlying adjacency
 * list implementation.
 *
 * O(n) running time for sparse arrays and O(n) space, where n is the number
 * of '1's, regardless of the number of '0's or the board dimensionality.
 *
 * A sparse matrix calls for an adjacency list as opposed to a direct matrix
 * representation; the latter is not scalable as space grows quadratically
 * with dimension.
 */
import java.util.Arrays;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.Comparator;
class Point {
    public int x, y;
    public int i; // Graph vertex index

    public Point(int x, int y)
    {
        this.x = x; // Board X coordinate
        this.y = y; // Board Y coordinate
    }

    public String toString()
    {
        return "(" + x + ", " + y + ")";
    }
}
public class LargestPlus {
    private ArrayList<Integer>[] adj; // Graph adjacency list
    private Point[] points;           // Points representing '1's
    private int V;                    // Number of '1's

    // Compare points by X coordinate first
    private class PointCompareX implements Comparator<Point>
    {
        public int compare(Point a, Point b)
        {
            if (a.x < b.x)
                return -1;
            else if ((a.x == b.x) && (a.y == b.y))
                return 0;
            else if ((a.x == b.x) && (a.y < b.y))
                return -1;
            else
                return 1;
        }
    }

    // Compare points by Y coordinate first
    private class PointCompareY implements Comparator<Point>
    {
        public int compare(Point a, Point b)
        {
            if (a.y < b.y)
                return -1;
            else if ((a.y == b.y) && (a.x == b.x))
                return 0;
            else if ((a.y == b.y) && (a.x < b.x))
                return -1;
            else
                return 1;
        }
    }
    // Conditionally connect two graph nodes if they are adjacent on the board
    private void condConnect(Point a, Point b)
    {
        if ((a.y == b.y && Math.abs(a.x - b.x) == 1) ||
            (a.x == b.x && Math.abs(a.y - b.y) == 1)) {
            adj[a.i].add(b.i);
            adj[b.i].add(a.i);
        }
    }
    // Build the adjacency representation of the undirected graph
    private void createGraph()
    {
        // Vertical connections
        Arrays.sort(points, new PointCompareX());
        points[0].i = 0;
        for (int i = 0; i < V - 1; i++) {
            // The X ordering is used for the vertex-to-point mapping
            points[i + 1].i = i + 1;
            condConnect(points[i], points[i + 1]);
        }

        // Horizontal connections; maintain the previous ordering
        Point[] yPoints = new Point[V];
        for (int i = 0; i < V; i++)
            yPoints[i] = points[i];
        Arrays.sort(yPoints, new PointCompareY());
        for (int i = 0; i < V - 1; i++)
            condConnect(yPoints[i], yPoints[i + 1]);
    }
    // Check that two points are on the same X or Y axis
    private boolean checkLine(int i, int j)
    {
        return points[i].x == points[j].x || points[i].y == points[j].y;
    }
    // Breadth First Search outward from a candidate plus center
    private int bfs(int s)
    {
        // A plus center needs neighbors in all four directions
        if (adj[s].size() != 4)
            return 0;

        boolean[] marked = new boolean[V];
        marked[s] = true;
        LinkedList<Integer> q = new LinkedList<Integer>();
        q.add(s);

        int size = 0;
        while (true) {
            // Expand all four arms by one step per round
            LinkedList<Integer> next = new LinkedList<Integer>();
            for (int v : q) {
                for (int w : adj[v]) {
                    // Don't backtrack, and stay on the axes through the center
                    if (!marked[w] && checkLine(w, s)) {
                        marked[w] = true;
                        next.add(w);
                    }
                }
            }
            // A full ring has exactly one node per arm; stop when any arm ends
            if (next.size() < 4)
                break;
            size++;
            q = next;
        }
        return size;
    }
    // Return the 'plus size' for a given vertex index
    public int plusSize(int s)
    {
        return bfs(s);
    }

    // Return the point for a given vertex index. Needed because we sort the points.
    public Point point(int s)
    {
        return points[s];
    }
    public LargestPlus(Point[] points)
    {
        if (points == null)
            throw new NullPointerException();
        this.points = points;
        this.V = points.length;
        adj = (ArrayList<Integer>[]) new ArrayList[V];
        for (int i = 0; i < V; i++)
            adj[i] = new ArrayList<Integer>();
        createGraph();
    }
    public static void main(String[] args)
    {
        Point[] points = {
            new Point(0, 1),
            new Point(0, 2),
            new Point(1, 2),
            new Point(2, 0),
            new Point(2, 1),
            new Point(2, 3),
            new Point(2, 4),
            new Point(3, 2),
            new Point(4, 1),
            new Point(4, 2),
            new Point(5, 2),
            new Point(6, 1),
            new Point(6, 2),
            new Point(5, 0),
            new Point(2, 2),
            new Point(1000000, 1000000),
        };
        LargestPlus lp = new LargestPlus(points);
        int max = 0;
        Point maxPoint = null;
        for (int v = 0; v < points.length; v++) {
            int size = lp.plusSize(v);
            if (size > max) {
                max = size;
                maxPoint = lp.point(v);
            }
        }
        System.out.println("Size " + max + ", Position " + maxPoint);
    }
}

ttsou
December 04, 2016
Beating the horse...
The obvious naive solution is to square and sort, which is O(nlogn) time and O(1) space, assuming the sort is performed in place.
The better solution is a play on the merge step of merge sort to reach O(n) time, with O(n) space for the spare output array.
The insane solution performs the merge in place (someone posted the reference above) for O(1) space, but you'd be insane to implement that algorithm during an interview (or even outside of one).
Straight C version
 ttsou December 06, 2016