## Chronus Interview Question

Java Developers | **Country:** India

@gen-y-s

As I understand it, your solution also takes O(n*k) when the given elements are in descending order. Please correct me if I am missing any part of your solution.

That'll be just O(n). You'll compare at most n times, and traversing takes n steps, so it's 2n in total, which is O(n).

```
public static void printMin(int[] input, int windowSize) {
    for (int i = 0; i <= input.length - windowSize; i++) {
        int localMin = Integer.MAX_VALUE;
        for (int j = i; j < i + windowSize; j++) {
            if (input[j] < localMin) {
                localMin = input[j];
            }
        }
        System.out.println("Min : " + localMin);
    }
}
```

Since consecutive windows overlap each other, I think using the previous window's min to get the next one might be a good idea; see my reply for more details...

How about using a priority queue like below:

```
private static void printMin2(int[] input, int windowSize) {
    PriorityQueue<Integer> queue = new PriorityQueue<>();
    for (int i = 0; i < windowSize; i++) {
        queue.add(input[i]);
    }
    System.out.println("Min : " + queue.peek());
    for (int i = windowSize; i < input.length; i++) {
        queue.remove(input[i - windowSize]);
        queue.add(input[i]);
        System.out.println("Min : " + queue.peek());
    }
}
```

Complexity is O(n * log (windowSize))

This is not the optimal one

Here is the reason why it is not.

PriorityQueue is essentially a heap.

Removing an arbitrary element (not the min or max) from a heap of size k takes O(k) time, since it must be found first. And your second for loop runs n-k times, which means the total time is O(k*(n-k)), similar to the brute-force one.

Edit: forgot to mention that every time you add an element, the heapify operation takes O(log k) time. So your algo would take

O((k + log k) * (n - k)), which is slower than the brute-force one.

The PriorityQueue in this implementation "provides O(log(n)) time for the enqueuing and dequeuing methods (offer, poll, remove() and add)" - I copied that from the documentation.

Why don't you look at the rest of the sentence in the documentation:

" linear time for the remove(Object) and contains(Object) methods;"

There are two methods named "remove": one removes the root element of the heap and takes O(log N) time, and the other removes a specific item from the heap and takes O(N) time.

Guess which one he's using here ?

This solution is O(K*N), which is no better than the trivial solution of scanning each window. There are better solutions, including an O(N) solution.

Because to remove the root (e.g. the largest element from a MAX heap) you just remove the root, replace it with the last item in the heap, and then push the new root down until the heap property is restored.

To remove a specific element from the heap you have to scan the whole heap to find it first - O(N).
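The root-removal path described above can be sketched as follows; this is a minimal illustration of the "replace root with last item and sift down" step, not anyone's posted code:

```java
public class HeapRootRemoval {
    // Sketch: delete the root of an array-based min-heap by moving the last
    // item to the root and sifting it down. O(log n) in the heap size.
    static int removeRoot(int[] heap, int size) {
        int min = heap[0];
        heap[0] = heap[--size];                 // replace root with last item
        int i = 0;
        while (true) {
            int l = 2 * i + 1, r = 2 * i + 2, smallest = i;
            if (l < size && heap[l] < heap[smallest]) smallest = l;
            if (r < size && heap[r] < heap[smallest]) smallest = r;
            if (smallest == i) break;           // heap property restored
            int t = heap[i]; heap[i] = heap[smallest]; heap[smallest] = t;
            i = smallest;
        }
        return min;
    }

    public static void main(String[] args) {
        int[] h = {0, 3, 1, 9, 5};              // a valid min-heap
        System.out.println(removeRoot(h, 5));   // prints 0; new root is h[0] == 1
    }
}
```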

Okay, let's say it's not just a priority queue but a binary heap. The complexity of the insert and delete operations on a binary heap is O(log N), right?

gen-y-s is correct. Searching an element in a binary heap (which is required to remove it) provably cannot be faster than O(k) in the worst case where k is the size of the heap, unless the binary heap is augmented with some other data structure. Deleting an element with a specified value takes O(k) in the worst case. The O(log k) removal time is for deleting an element at a known position once you've found it. For example, the min can be found right away in a min-heap, so it can be deleted in O(log k).

That said, you can use a map to associate values with positions in the heap. You can then locate values in the heap in log(k) or better, and cause their removal in log(k). This becomes the O(n * log(k)) solution you were hoping for, but it's now more complicated than the O(n) solutions to this problem.

If you keep such a map of the values to heap positions, you will have to update the map every time an item is moved in the heap, which happens quite often during the heap push-down operation.

This will add quite a significant overhead to solution, both in time and space, as well as code complexity.

@gen-y-s: That's very true, but there are only log(k) elements moved during each heap deletion. If you use a HashMap, you can get expected O(log k) per heap operation overall.

If you didn't implement the heap as an array but as a tree with dynamically allocated nodes, you could not have to update pointers in the map. You'd just have to switch around child and parent pointers in the heap itself, achieving deterministically O(log k) time heap operations.
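For what it's worth, a simpler way to reach the same O(n * log k) bound without a hand-rolled indexed heap is to swap the heap for a balanced-tree multiset. This is a sketch of that alternative (using `TreeMap` as a value-to-count multiset), not the structure discussed above:

```java
import java.util.TreeMap;

public class SlidingMinMultiset {
    // Sliding-window minimum in O(n log k): the TreeMap acts as a multiset
    // (value -> count), so insert, evict, and min lookup each cost O(log k).
    static int[] mins(int[] a, int k) {
        TreeMap<Integer, Integer> counts = new TreeMap<>();
        int[] out = new int[a.length - k + 1];
        for (int i = 0; i < a.length; i++) {
            counts.merge(a[i], 1, Integer::sum);               // element enters the window
            if (i >= k) {                                       // element leaves the window
                int old = a[i - k];
                if (counts.merge(old, -1, Integer::sum) == 0) counts.remove(old);
            }
            if (i >= k - 1) out[i - k + 1] = counts.firstKey(); // current window min
        }
        return out;
    }

    public static void main(String[] args) {
        for (int m : mins(new int[]{3, 9, 0, 3, 18, 6}, 3)) System.out.print(m + " ");
        // prints: 0 0 0 3
    }
}
```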

```
#include <stdio.h>
#include <stdlib.h>   /* for atoi */

#define size(a) (sizeof(a)/sizeof(a[0]))

int main(int argc, char *argv[]) {
    int arr[] = {3, 9, 0, 3, 18, 6}, k = atoi(argv[1]), i = 0, j = 0;
    int n = size(arr) - k + 1;
    for (i = 0; i < n; i++) {
        int min = arr[i];
        for (j = i + 1; j < (i + k); j++)
            if (arr[j] < min) min = arr[j];
        printf("%d ", min);
    }
    printf("\n");
    return 0;
}
```

```
/* Note: sort() and find_position() are helper routines not shown in the post.
   temp[] holds the current window in sorted order, used as a circular buffer. */
void least(int arr[], int k, int n)
{
    int *temp = (int *)malloc(sizeof(int) * k);
    int i, j, start = 0, end = 0;
    for (i = 0; i < k; i++)
    {
        temp[i] = arr[i];
    }
    sort(temp, k);
    for (i = 0; i < k; i++)
    {
        if (temp[i] == arr[k - 1])  /* was '=', an assignment */
        {
            end = i;
            break;
        }
    }
    printf("%d\t", temp[start]);
    j = k;
    while (j < n)
    {
        if (arr[j] < temp[start])
        {
            temp[start] = arr[j];
            end = start;
        }
        else if (arr[j] >= temp[end])
        {
            end = (end + 1) % k;  /* was % (k+1), which indexes past temp[] */
            temp[end] = arr[j];
        }
        else
        {
            end = find_position(temp, start, end, arr[j]);
            temp[end] = arr[j];
        }
        if (arr[j - k] == temp[start])
        {
            start = (start + 1) % k;
        }
        printf("%d\t", temp[start]);
        j++;  /* was missing, so the loop never advanced */
    }
    free(temp);
}
```

```
//this is a C version if anyone needs it.
void SmallestNumber()
{
    int arr[] = {3, 9, 0, 3, 18, 6};
    int n = 6;
    int k = 3; /* window */
    int s = 0;
    for (int i = 0; i <= n - k; i++)
    {
        for (int j = 0; j < k; j++)  /* was a second 'n', shadowing the outer one */
        {
            int t = arr[i + j];
            if (j == 0 || t < s)
            {
                s = t;
            }
        }
        if (i == 0)
            printf("%d", s);
        else
            printf(",%d", s);
    }
    printf("\n");
}
```

All solutions posted so far have time complexity O(n*k) at best.

It's possible to achieve an O(n*log k) solution by combining a min-heap with a queue. We need the heap to extract the min element in O(log k) time, and we need the queue to keep track of the oldest element (the one that has to be removed from the heap).

The problem with a heap alone is that when an element falls out of the window, we need O(k) time to find it in the heap and remove it. The idea is to keep a queue inside the heap: heap nodes have a next pointer pointing to the next element in the window, and we also keep head and tail pointers into that queue.

Now when the window slides, the node pointed to by head falls out of it, so we can find it in constant time, replace it with the new element that enters the window, and then re-heapify (that's an O(log k) operation). Thus, the total time complexity is O(n*log k).

Here is the C++ code (sorry if it's long, but I had to implement a heap myself since the C++ standard library does not provide an O(log k) heapify operation).


```
#include <iostream>
#include <vector>
#include <cstdlib>
#include <algorithm>
#include <cassert>
using namespace std;

template<class T>
struct Node {
    Node * next;
    size_t index;
    T value;
    Node(const T& value, size_t index = 0) : next(NULL), index(index), value(value) {}
};

// Swap two node pointers in the heap array, keeping their index fields
// in sync so nodes always know their own position.
template<class T>
void swap(Node<T> *&a, Node<T> *&b)
{
    std::swap(a->index, b->index);
    std::swap(a, b);
}

template<class T>
bool max_node(Node<T> * a, Node<T> * b)
{
    return a->value < b->value;
}

template<class T>
bool min_node(Node<T> * a, Node<T> * b)
{
    return a->value > b->value;
}

template<class It, class Comp>
void heapify(It begin, It end, It pos, Comp comparator)
{
    It left = begin + 2 * distance(begin, pos) + 1;
    It right = begin + 2 * distance(begin, pos) + 2;
    It largest = pos;
    if (left < end && comparator(*pos, *left)) {
        largest = left;
    }
    if (right < end && comparator(*largest, *right)) {
        largest = right;
    }
    if (largest != pos) {
        swap(*pos, *largest);
        heapify(begin, end, largest, comparator);
    }
}

// Sift a node up toward the root; needed when a replaced value is smaller
// than its parent (heapify only pushes down).
template<class It, class Comp>
void sift_up(It begin, It pos, Comp comparator)
{
    while (pos > begin) {
        It parent = begin + (distance(begin, pos) - 1) / 2;
        if (!comparator(*parent, *pos)) break;
        swap(*parent, *pos);
        pos = parent;
    }
}

template<class It, class Comp>
void build_heap(It begin, It end, Comp comparator)
{
    for (It it = begin + distance(begin, end) / 2; it >= begin; --it) {
        heapify(begin, end, it, comparator);
    }
}

template<class T>
void min_sliding_window(vector<T> vals, size_t k)
{
    Node<T> *head, *tail;
    vector<Node<T>*> heap;
    heap.push_back(new Node<T>(vals[0], 0));
    for (size_t i = 1; i < k; ++i) {
        Node<T> *tmp = new Node<T>(vals[i], i);
        heap.back()->next = tmp;
        heap.push_back(tmp);
    }
    head = heap[0];
    tail = heap[heap.size() - 1];
    build_heap(heap.begin(), heap.end(), min_node<T>);
    for (size_t i = k; i < vals.size(); ++i) {
        cout << heap[0]->value << " ";
        assert(head->next);
        tail->next = head;      // recycle the node leaving the window as the new tail
        tail = tail->next;
        head = head->next;
        tail->value = vals[i];
        sift_up(heap.begin(), heap.begin() + tail->index, min_node<T>); // new value may beat its parent
        heapify(heap.begin(), heap.end(), heap.begin() + tail->index, min_node<T>);
    }
    cout << heap[0]->value << endl; // min of the last window (was never printed)
    for (size_t i = 0; i < heap.size(); ++i) {
        delete heap[i];
    }
}

int main()
{
    size_t k = 4;
    size_t n = 20;
    vector<int> vals;
    for (size_t i = 0; i < n; ++i) {
        int v = rand() % 100; // was a second 'n', shadowing the outer one
        vals.push_back(v);
        cout << v << " ";
    }
    cout << endl;
    min_sliding_window(vals, k);
}
```

```
void minWindow(vector<int> &v, unsigned k)
{
    // Min of the first window, v[0..k-1].
    size_t mi = 0;
    int m = v[0];
    for (size_t i = 1; i < min(v.size(), (size_t)k); ++i) {
        if (v[i] < m) {
            m = v[i];
            mi = i;
        }
    }
    cout << m << endl;
    // Slide the window; window i covers v[i..i+k-1].
    for (size_t i = 1; i + k <= v.size(); ++i) {
        if (mi < i) {
            // The previous minimum fell out of the window: rescan it.
            m = v[i];
            mi = i;
            for (size_t j = i + 1; j < i + k; ++j) {
                if (v[j] < m) {
                    m = v[j];
                    mi = j;
                }
            }
        } else if (v[i + k - 1] < m) {
            // The element entering the window is the new minimum.
            // (Was v[i+k], which is the wrong index and reads out of bounds.)
            m = v[i + k - 1];
            mi = i + k - 1;
        }
        cout << m << endl;
    }
}
```

```
import java.util.Random;

public class MinimumNumberInWindow {
    static int[] inputArray;

    @SuppressWarnings("unused")
    public static void main(String[] args) {
        final int ARRAY_SIZE = 15;
        final int WINDOW_SIZE = 3;
        if (WINDOW_SIZE > ARRAY_SIZE) {
            throw new IllegalArgumentException("Window Size greater than Array Size");
        }
        // Prepare data
        System.out.println("input: ");
        inputArray = new int[ARRAY_SIZE];
        Random random = new Random();
        for (int i = 0; i < ARRAY_SIZE; i++) {
            inputArray[i] = random.nextInt(100);
            System.out.print(inputArray[i] + " ");
        }
        // Find min in the initial window
        System.out.println("");
        System.out.println("Result: ");
        int minIntIndex = findMinEleInWindow(0, WINDOW_SIZE);
        int minElement = inputArray[minIntIndex];
        System.out.print(minElement + " ");
        for (int i = 1; i <= (ARRAY_SIZE - WINDOW_SIZE); i++) {
            if (inputArray[WINDOW_SIZE + i - 1] <= minElement) {
                minElement = inputArray[WINDOW_SIZE + i - 1];
                minIntIndex = WINDOW_SIZE + i - 1;
            } else if (minIntIndex < i) {
                minIntIndex = findMinEleInWindow(i, WINDOW_SIZE);
                minElement = inputArray[minIntIndex];
            }
            System.out.print(minElement + " ");
        }
    }

    public static int findMinEleInWindow(int startIndex, int windowSize) {
        int minElement = inputArray[startIndex];
        int minIndex = startIndex;
        for (int i = startIndex + 1; i < (windowSize + startIndex); i++) {
            if (inputArray[i] <= minElement) {
                minElement = inputArray[i];
                minIndex = i;
            }
        }
        return minIndex;
    }
}
```

```
private static int FindMinElementIndex(int[] input, int startIndex, int endIndex)
{
    var index = -1;
    for (var i = startIndex; i <= endIndex; i++)
    {
        if (index == -1 || input[i] < input[index])
            index = i;
    }
    return index;
}

private static void PrintMinimumElements(int[] input, int windowSize)
{
    var minElementIndex = -1;
    for (var i = 0; i <= input.Length - windowSize; i++)
    {
        if (minElementIndex != -1 && minElementIndex >= i)
        {
            minElementIndex = input[minElementIndex] < input[i + windowSize - 1]
                ? minElementIndex
                : i + windowSize - 1;
        }
        else
        {
            minElementIndex = FindMinElementIndex(input, i, i + windowSize - 1);
        }
        if (i > 0)
            Console.Write(", ");
        Console.Write(input[minElementIndex]);
    }
    Console.WriteLine();
}

static void Main()
{
    var array = new[] {51, 23, 47, 89, 123, 12, 78, 90, 32};
    PrintMinimumElements(array, 3);
}
```

Let's calculate the complexity of the algorithm.

In the brute-force way, finding the min element in a window of size k is O(k), and there are n-k+1 windows, so the total complexity would be O(k*(n-k)).

Now your algo would also give the same complexity if the input is sorted in increasing order.

I agree that your algo is better than the brute-force way, but we still end up with the same worst-case complexity. Is there any better approach using some data structure?

Using dynamic programming:

Use an n*W matrix and, as we do for matrix chain multiplication,

start from the diagonal. The diagonal elements are the same as the array values.

Then keep increasing the interval width as 2, 3, 4, ... up to the window size.

The recurrence would be

A[i][j] = min(A[i][j-1], A[j-1][j]).

This algo fills a row of width W for each of the n rows, so the time complexity would be O(n*W).

The space complexity is the same.

Could you post the code? I'm not sure what "start from the diagonal" means, since the matrix is n*W, not n*n.
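Here is one possible reading of that recurrence, as a hedged sketch only: it assumes `A[i][j]` holds the minimum of `arr[i..j]` and fills the table diagonal by diagonal. It uses `A[i][j] = min(A[i][j-1], arr[j])`, which agrees with the posted recurrence since `A[j-1][j] = min(arr[j-1], arr[j])` and `arr[j-1]` is already covered by `A[i][j-1]`:

```java
public class WindowMinDP {
    // Hypothetical DP sketch: A[i][j] = min(arr[i..j]), built by widening the
    // interval from the diagonal up to width W. O(n*W) time; the full n*n
    // table is allocated here for simplicity, though only W diagonals are used.
    static int[] mins(int[] arr, int W) {
        int n = arr.length;
        int[][] A = new int[n][n];
        for (int i = 0; i < n; i++) A[i][i] = arr[i];        // width-1 diagonal
        for (int w = 1; w < W; w++)                           // widen intervals
            for (int i = 0; i + w < n; i++)
                A[i][i + w] = Math.min(A[i][i + w - 1], arr[i + w]);
        int[] out = new int[n - W + 1];
        for (int i = 0; i + W <= n; i++) out[i] = A[i][i + W - 1];
        return out;
    }

    public static void main(String[] args) {
        for (int m : mins(new int[]{3, 9, 0, 3, 18, 6}, 3)) System.out.print(m + " ");
        // prints: 0 0 0 3
    }
}
```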

I have a doubt: so we need to print the smallest number from every window as it slides over the array?

Just find the first window's minimum, then compare it with the next element being added to the window, since it is the only new element. Also, since the first element is removed, if it happens to be the smallest you will have to rescan the window for the new minimum; that should just be another value to keep track of. Worst-case complexity is O(n*k), but the average cost per window is O(1).

Why is everyone trying their own way? The best one is to use a double-ended queue, which reduces it to O(n). It's the classic sliding-window minimum problem; you can find the solution everywhere.

I think that all the known solutions have been discussed here:

1. heap

2. track the minimum

3. ascending minima

@S.Abakumoff all right. my bad. I missed the correct solution.

@eugene.yarovoi well, I mean for a typical problem like this, the best one is to use the queue, which I believe is what most interviewers expect. OK, if we don't have O(k) memory, I guess the only way is to compare within every window, which is rarely the case in an interview problem.

I don't really like the label "best one". For example, the first time someone asked me this problem, I came up with a solution that was nothing like any of the solutions here so far. Mine also ran in O(n). Was mine any better or worse? Well, the queue solution is very elegant in my opinion; my solution looks more like a hack. My solution (at least to me) is far more natural though, in that I think someone who has never seen this problem solved before would be more likely to come up with it than with the queue solution, but maybe that's just me. I don't know about the constant factor. I'd have to look at it carefully. The point is that even if two solutions are both O(n) there may be tradeoffs between code complexity, complexity of the idea, elegance of the idea, constant factor for the time, locality of memory accesses (will you get lots of cache hits, etc.), and space needed. For instance, my solution was able to be adjusted to use only O(sqrt(windowSize)) space at the expense of doubling the constant factor (still O(n) though, so there are methods that use less space and are still O(n)).

That's generally why people think it's worth discussing different solutions to the same problem.

@eugene.yarovoi Ok, I'm not saying there's only one best solution. And I like discussions as long as they are trying to achieve better time and space complexity than the best we know till now. I just mean in the time complexity, the queue solution is the best one, which is O(n) and the best we can do. If your solution runs in O(n), I will say it's also the best we can do. Why don't you post your O(n) time with O(sqrt(windowSize)) space solution? It's surely better in the overall time and space complexity.

The idea is to keep a queue of the last K items, enqueuing each new item and dequeuing the oldest item. When enqueuing an item, compare it to the elements in front of it in the queue, and discard them if they are larger (since they can never be the smallest now that a new smaller item has been added to the queue):

- gen-y-s January 30, 2013
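The queue idea described in the last comment (the "ascending minima" technique listed earlier) can be sketched like this; this is an illustrative version using `ArrayDeque`, not the commenter's own code:

```java
import java.util.ArrayDeque;

public class SlidingMinDeque {
    // Sliding-window minimum in O(n): the deque holds indices whose values
    // are increasing from front to back; larger elements are discarded from
    // the back when a new element arrives, since they can never be the min.
    // Each index is pushed and popped at most once.
    static int[] mins(int[] a, int k) {
        ArrayDeque<Integer> dq = new ArrayDeque<>();
        int[] out = new int[a.length - k + 1];
        for (int i = 0; i < a.length; i++) {
            while (!dq.isEmpty() && a[dq.peekLast()] >= a[i]) dq.pollLast();
            dq.addLast(i);
            if (dq.peekFirst() <= i - k) dq.pollFirst(); // oldest index left the window
            if (i >= k - 1) out[i - k + 1] = a[dq.peekFirst()];
        }
        return out;
    }

    public static void main(String[] args) {
        for (int m : mins(new int[]{3, 9, 0, 3, 18, 6}, 3)) System.out.print(m + " ");
        // prints: 0 0 0 3
    }
}
```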