## Amazon Interview Question

Hey Luffy

Your solution looks correct to me. It's almost like the coin-change problem, where the denominations are the squares of 1 to sqrt(n) and each denomination is available in unlimited supply. It's also similar to the knapsack problem if we associate a value of -1 with each coin added, so that maximizing value minimizes the number of coins used. Time complexity is O(n*sqrt(n)); space complexity is O(n). For the DP, it's better to use a bottom-up approach.
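A minimal bottom-up sketch of that coin-change-style DP (the class and method names are mine, not from the thread):

```
// Bottom-up DP: dp[i] = minimum number of perfect squares summing to i.
// The squares 1, 4, 9, ... play the role of coin denominations.
public class MinSquaresDP {
    static int minSquares(int n) {
        int[] dp = new int[n + 1];
        java.util.Arrays.fill(dp, Integer.MAX_VALUE);
        dp[0] = 0; // zero squares sum to 0
        for (int i = 1; i <= n; i++)
            for (int j = 1; j * j <= i; j++)
                // dp[i - j*j] is already final (i - j*j < i), so no overflow
                dp[i] = Math.min(dp[i], dp[i - j * j] + 1);
        return dp[n];
    }

    public static void main(String[] args) {
        System.out.println(minSquares(106)); // 2: 9^2 + 5^2
        System.out.println(minSquares(12));  // 3: 4 + 4 + 4
    }
}
```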

1) Get the largest a > 0 such that a^2 <= N. If N is a perfect square (a^2 == N), return 1.

2) Now take the set of numbers 0, 1, 4, 9, ..., a^2 and use dynamic programming to find the optimal combination that reaches N. This is exactly the coin-change combination problem.

The basic idea is

MinNoOfSquares(N) = min(MinNoOfSquares(N - 1), MinNoOfSquares(N - 4), MinNoOfSquares(N - 9), ...) + 1
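The recurrence above can be sketched top-down with memoization, so each subproblem 0..N is solved once, for O(N*sqrt(N)) total time (class and method names are mine):

```
import java.util.HashMap;
import java.util.Map;

// Top-down evaluation of MinNoOfSquares(N) with memoization.
public class MinSquaresMemo {
    static final Map<Integer, Integer> memo = new HashMap<>();

    static int minNoOfSquares(int n) {
        if (n == 0) return 0;            // base case: nothing left to cover
        Integer cached = memo.get(n);
        if (cached != null) return cached;
        int best = Integer.MAX_VALUE;
        for (int a = 1; a * a <= n; a++) // try every square a^2 <= n
            best = Math.min(best, minNoOfSquares(n - a * a) + 1);
        memo.put(n, best);
        return best;
    }

    public static void main(String[] args) {
        System.out.println(minNoOfSquares(21)); // 3: 16 + 4 + 1
    }
}
```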

My first idea, kind of brute-force: consider the squares 1, 4, 9, ..., [sqrt(N)]^2 as nodes in a weighted graph where w(i,j) = i+j (i and j are two nodes) - the graph will be complete.

For each node (or would half of them be enough?), do a DFS/BFS while computing the path length.

If a partial solution has 2 nodes, return it.

If path length < N, continue with next node.

If path length == N, store the partial solution in a set of partial solutions.

If path length > N, remove the current node and continue.

When all nodes are complete, get the minimum solution (i.e. the smallest number of nodes) from partial solutions set.

I think the complexity is O(m^2) in time and space, where m is the number of nodes in the graph.

I really doubt you have convinced yourself that your method works. When you say w(i,j) = i+j, don't you see that you are counting the same value twice if you choose two edges like w(i,j) and then w(j,k)? Hence, this graph-theoretic method is wrong at its core conception!

We can solve this problem using a recursive function.

1. Create a function which returns the largest integer whose square does not exceed the number.

2. Call the above function recursively until the number is reduced to 0.

Ex: n = 21

1. nearest sqrt: 4, call function(21-(4^2)) i.e., function(5)

2. nearest sqrt: 2, call function(5-(2^2)) i.e., function(1)

3. nearest sqrt: 1, call function(1-(1^2)) i.e., function(0) //stop recursion

Solution: 4, 2, 1

What you are trying to do is called greedy. It fails. For N = 106, you'd pick 100, then 4, then 1, then 1, hence length = 4. But the optimum is 9^2 + 5^2 = 106, length 2.

Why don't you work through a few inputs before posting something so simple? That way you'd learn something!
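A quick sketch of that greedy strategy (class and method names are mine) makes the failure easy to reproduce:

```
// Greedy: repeatedly subtract the largest square <= n and count the steps.
// This is the approach criticized above; it is NOT optimal.
public class GreedySquares {
    static int greedyCount(int n) {
        int count = 0;
        while (n > 0) {
            int a = (int) Math.sqrt(n); // largest a with a*a <= n
            while (a * a > n) a--;      // guard against floating-point error
            n -= a * a;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(greedyCount(106)); // 4: greedy picks 100 + 4 + 1 + 1
        // but the optimum is 2 (9^2 + 5^2)
    }
}
```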

You have a sorted array of squares: 1^2, 2^2, ..., [sqrt(N)]^2.

Determine if that array has two elements, a,b which sum to N.

Determine if that array has three elements a,b,c, which sum to N.

Checking up to 3 elements is enough: if no combination of 1, 2, or 3 squares works, the answer is 4 by Lagrange's four-square theorem.

So to clarify.

Let P^2 be the largest perfect square <= N.

Have three arrays:

1^2, 2^2, ..., P^2

2*1^2, 2*2^2, ..., 2*P^2

3*1^2, 3*2^2, ..., 3*P^2

Check whether N appears in the last two, etc.

But why have the same elements repeated 6 times across 3 different arrays? Can't they be repeated up to 4 times in the same array, including 4*0^2?

It was mentioned only to dispel doubts about this working. Optimization is left to the reader.

I think there are 3 cases:

- find a, b such that a^2 + b^2 = n => takes O(m) where m is number of elements from which to select a and b.

- find a, b, c such that a^2 + b^2 + c^2 = n => takes O(m^2)

- find a, b, c, d such that a^2 + b^2 + c^2 + d^2 = n => takes O(m^2 log m)

As m = sqrt(n), the total time complexity would be O(n log n).
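For the O(m) two-element case, a two-pointer scan over the sorted squares works; here is a sketch (class and method names are mine):

```
// Check whether n = a^2 + b^2 for some 1 <= a <= b, by scanning the sorted
// squares 1, 4, ..., m^2 (m = floor(sqrt(n))) from both ends in O(m).
public class TwoSquares {
    static boolean isSumOfTwoSquares(int n) {
        int lo = 1, hi = (int) Math.sqrt(n);
        while (hi * hi > n) hi--;       // guard against floating-point error
        while (lo <= hi) {
            int sum = lo * lo + hi * hi;
            if (sum == n) return true;  // found n = lo^2 + hi^2
            if (sum < n) lo++;          // need a larger sum
            else hi--;                  // need a smaller sum
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isSumOfTwoSquares(106)); // true: 81 + 25
        System.out.println(isSumOfTwoSquares(21));  // false
    }
}
```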

1. Find the number which is a perfect square closest to the given number.

2. Get the remainder (number - perfect square) and then repeat step 1.

3. By Lagrange's four square theorem we know that the maximum we can go is 4.

Your algo doesn't work... please read the discussion above; your algo is the greedy approach, not the optimal one.

E.g., N = 106:

- according to your algo it's 10^2 + 2^2 + 1^2 + 1^2 (four squares)

- but it's also 9^2 + 5^2, which is optimal (two squares)

Hey genius! What's your problem? Guest123 had already talked about this greedy approach... Someone posted this in reply:

```
What you are trying to do is called greedy. It fails. For N = 106, you'd pick 100, then 4, then 1, then 1, hence length = 4. But the optimum is 9^2 + 5^2 = 106, length 2.
Why don't you work through a few inputs before posting something so simple? That way you'd learn something!
```

Correct me if I'm wrong: can we calculate all possible sums over the perfect squares posted in the solution? Is this related to dynamic programming?

O(2^n) without memoization; O(kn) if memoization is used.

```
min = INF
// k: remaining target, i: index into a[], c: squares used so far
MKS(k, i, c)
    if(k == 0)
        if(c < min)
            min = c
        return
    if(i < 0)
        return
    if(a[i] <= k)
        MKS(k - a[i], i, c + 1)   // use a[i]; the same square may be reused
    MKS(k, i - 1, c)              // skip a[i]

// build a[0..n-1] = 1, 4, 9, ... up to k
CA(k)
    n = 0
    i = 1
    while(i*i <= k)
        a[n] = i*i
        n = n + 1
        i = i + 1
```

An improved version:

```
// a[0..n-1] = 1, 4, 9, ..., built by CA(); INF means "impossible"
MKS(k, i)
    if(k == 0)
        return 0
    if(i >= n || a[i] > k)
        return INF                     // no remaining square fits
    return min(MKS(k - a[i], i) + 1,   // use a[i]; reuse is allowed
               MKS(k, i + 1))          // skip a[i]

CA()
    n = 0
    i = 1
    while(i*i <= k)
        a[n] = i*i
        n = n + 1
        i = i + 1

main()
    CA()
    print MKS(k, 0)
```

Isn't this just the simple knapsack problem, where the item sizes are i*i, the value of each item is -1, and we maximize the value for a knapsack of size N?

- luffy August 24, 2011

```
int getMin(int N) {
    int[] m = new int[N + 1];
    m[0] = 0;                          // zero squares sum to 0
    for (int i = 1; i <= N; i++) {
        m[i] = Integer.MAX_VALUE;
        for (int j = 1; j * j <= i; j++)
            m[i] = Math.min(m[i], 1 + m[i - j * j]);
    }
    return m[N]; // to print the chosen squares, also store the j picked at
                 // each i, then backtrace from N
}
```

Please comment if I am wrong.