## Samsung Interview Question for Software Developers


You are given the arrival time and length of each packet, and a set of queues that process packets. The total processing time for a packet equals its length plus its waiting time in the queue. For example, suppose there is only one queue: a packet of length 5 arrives at t = 1 and another of length 4 arrives at t = 3. The first packet's total processing time is 5 (no waiting, since the queue is empty at t = 1). By t = 3, 2 units of the first packet have been processed and 3 units remain, so the second packet waits 3 units in the queue plus 4 units for its own length — a total processing time of 7 units. If there are multiple queues, a new packet may be added to any of them. Given the arrival time and length of all incoming packets, find the minimum number of queues required so that the total processing time of every packet is less than 10 units. At most 5 queues may be used; if more than 5 queues would be required, print -1.

Test Case Format: the first line contains N, the total number of packets; each of the N following lines contains two numbers ti li, where li is the length of the packet arriving at time t = ti.

Test case1:

2

2 7

5 8

Test Case 2:

3

1 3

2 3

3 5

Test Case 3:

3

1 5

2 4

3 8

Output:

Case1: 2

Case2: 1

Case3: 2

Consider the following timetable of incoming packets: length 8 at t = 1, length 5 at t = 2, length 2 at t = 3, length 6 at t = 4.

- ak4017 April 25, 2018 in United States

If you always put each packet in the queue with the least remaining work, this greedy rule leads to 3 queues:

t = 1:

q1: 8

t = 2:

q1: 7

q2: 5

t = 3:

q1: 6

q2: 4, 2

t = 4:

q1: 5

q2: 3, 2

q3: 6

But the correct output is 2 queues:

1) 8 in queue 1

2) 5 in queue 2

3) 2 in queue 1

4) 6 in queue 2


Here's my shot.

The hardest part for me is modelling the problem; the coding itself is easy and needs no advanced data structures.

One way to look at it: at each instant (time unit) we have queues that may contain one or more packets to consume. At each instant the state of the queues changes. A brute-force implementation can iterate over time, update the state of the queues, and derive the output from that state.
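As a rough sketch of that brute-force simulation — all names are mine, under the assumption that packets are given in arrival order and each queue drains one unit of length per time unit:

```python
def processing_times(packets, assignment, num_queues):
    """Simulate the queues for a fixed packet-to-queue assignment.

    packets: list of (arrival_time, length), sorted by arrival_time.
    assignment: assignment[i] is the queue index for packet i.
    Returns the total processing time (waiting + length) of each packet.
    """
    busy_until = [0] * num_queues          # time at which each queue becomes idle
    totals = []
    for (t, length), q in zip(packets, assignment):
        wait = max(0, busy_until[q] - t)   # backlog still ahead of this packet
        busy_until[q] = t + wait + length  # queue stays busy until the packet is done
        totals.append(wait + length)
    return totals

# The worked example from the problem statement: one queue, (t=1, len=5), (t=3, len=4)
print(processing_times([(1, 5), (3, 4)], [0, 0], 1))  # [5, 7]
```

Running it on the worked example from the problem statement reproduces the stated totals of 5 and 7.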

Another line of thought: let's consider a _single_ queue. *Let T be the processing time still available to that queue*. At time=0, T=10, as no packet has been allocated yet. After time=3 secs, T=7, because 3 secs have been wasted waiting for packets and doing nothing. If a packet of length 3 arrives at time=4 secs, the queue would be busy until time=7 and T=3.

|----|===|---| (idle from t = 0 to 4, busy from t = 4 to 7, T = 3 units left until t = 10)

So T is really how much processing the queue can still take on, considering all the allocated packets.
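In code, that reading of T — a sketch under my interpretation that the 10-unit budget is measured from time 0; `available_time` is a name I made up:

```python
def available_time(busy_until, horizon=10):
    """How much processing the queue can still take on before the horizon."""
    return horizon - busy_until

print(available_time(0))  # 10: nothing allocated yet
print(available_time(7))  # 3: queue busy until t = 7, as in the example above
```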

While considering one queue, we can define

[P1..Pn] the packets to receive. Note each P has _receiveTime_ and _length_ properties.

T* = process(T, [P1..Pn])

as the function returning (T*) the available processing time _after_ processing [P1..Pn].

Now we have that

T* = process(T, [P1..Pn]) = process(T**, [P2..Pn])

T** = process(T, [P1])

That is, processing [P1..Pn] from the initial time T leaves the same available time as processing all the packets but the first from T**, the initial time reduced by the time consumed by the first packet.

We can use this idea to do a recursive implementation of the process function.

But we have multiple queues! We can change the signature of the process function to have multiple times, but the same logic still applies.

[T1..T5]* = process([T1..T5], [P1..Pn])

[T1..T5]* = process([T1..T5], [P1..Pn]) = process([T1..T5]**, [P2..Pn])

[T1..T5]** = process([T1..T5], [P1])

One important difference between one queue and multiple queues is how we compute the time left after processing P1. For a single queue that has been idle since time 0, the resulting time is simply

T - (P1.receiveTime + P1.length)

(if the queue is already busy when P1 arrives, only P1.length is subtracted). When we move to multiple queues, we apply the same subtraction, but *only to one of the times, since we allocate the packet to one specific queue*. So we need to choose which queue to allocate to!

As the problem asks for the minimum number of queues required, we are trying to *use as few queues as possible*.

When a new packet arrives, should we send it to an empty queue or a non-empty one? We should always prefer a non-empty queue — specifically *the one with the least remaining processing time*. Ideally we would fit every packet into a single queue.
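Greedy rules can over-count on some inputs — the timetable in the question above defeats the least-time rule — so a safe way to get the true minimum, comfortably feasible with at most 5 queues, is to backtrack over every assignment. A sketch under the per-packet reading of the 10-unit limit (all names are mine):

```python
def feasible(packets, k, deadline=10):
    """Can k queues give every packet a total processing time < deadline?"""
    busy_until = [0] * k

    def place(i):
        if i == len(packets):
            return True
        t, length = packets[i]
        for q in range(k):
            wait = max(0, busy_until[q] - t)
            if wait + length < deadline:
                saved = busy_until[q]
                busy_until[q] = t + wait + length
                if place(i + 1):
                    return True
                busy_until[q] = saved        # undo and try the next queue
        return False

    return place(0)

def min_queues(packets, max_queues=5):
    packets = sorted(packets)                # process in arrival order
    for k in range(1, max_queues + 1):
        if feasible(packets, k):
            return k
    return -1                                # more than max_queues needed

# The question's timetable: two queues suffice, as the poster argues
print(min_queues([(1, 8), (2, 5), (3, 2), (4, 6)]))  # 2
```

On the three test cases above this returns 2, 1, and 2 respectively (under this reading, test case 2 fits in one queue).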

Below is the code; the process function is implemented by the method calculateRemainingProcessingTime().

Please note the function is recursive, matching the definition above, but it is trivial to make it iterative, since all the parameters are mutable.
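The code itself is not shown in this excerpt, so here is a hedged iterative sketch of the greedy rule described above (prefer the non-empty queue with the least remaining work, opening a new queue only when forced). Note it is a heuristic: the question's timetable shows that such greedy choices can over-count queues.

```python
def min_queues_greedy(packets, deadline=10, max_queues=5):
    busy_until = []                          # one busy-until time per opened queue
    for t, length in sorted(packets):
        if length >= deadline:
            return -1                        # impossible even in an empty queue
        waits = [max(0, b - t) for b in busy_until]
        # prefer non-empty queues, least-loaded first
        order = sorted(range(len(waits)), key=lambda q: (waits[q] == 0, waits[q]))
        for q in order:
            if waits[q] + length < deadline:
                busy_until[q] = t + waits[q] + length
                break
        else:                                # no existing queue fits: open a new one
            if len(busy_until) == max_queues:
                return -1
            busy_until.append(t + length)
    return len(busy_until)
```

It reproduces the three expected outputs (2, 1, 2), but answers 3 on the timetable from the question, where 2 is optimal.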

PS: whether test case 2 needs 1 or 2 queues depends on how the 10-unit limit is read. If each packet's budget runs from its own arrival (as the worked example in the problem suggests), one queue suffices: the last packet waits 4 units and has length 5, for a total of 9 < 10. If instead T is a single horizon measured from time 0, the third packet overruns it and a second queue is needed.

---------

- ograndedienne April 25, 2018