Here's my shot.
The hardest part for me is modelling the problem. The coding itself is very easy and does not require any advanced data structures.
One way to look at this: at each instant (or time unit), the queues contain zero or more packets to consume, and their state changes from one instant to the next. A brute-force implementation can iterate over time, update the state of the queues at each step, and derive the output from that state.
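As a rough sketch of that brute force (the problem shape is assumed from the discussion below: packets arrive as (receiveTime, length) pairs, each queue consumes one time unit of its head packet per tick, and the 10-unit horizon and the least-loaded delivery policy are my assumptions, not part of the original statement):

```python
def simulate(packets, num_queues, horizon=10):
    """Tick-by-tick simulation; True if every packet is fully consumed in time."""
    queues = [[] for _ in range(num_queues)]   # pending packet lengths per queue
    arrivals = sorted(packets)                 # (receiveTime, length) pairs
    i = 0
    for t in range(horizon):
        # deliver the packets arriving at this instant (policy: least-loaded queue)
        while i < len(arrivals) and arrivals[i][0] == t:
            target = min(range(num_queues), key=lambda q: sum(queues[q]))
            queues[target].append(arrivals[i][1])
            i += 1
        # each queue consumes one time unit of the packet at its head
        for q in queues:
            if q:
                q[0] -= 1
                if q[0] == 0:
                    q.pop(0)
    # success only if everything arrived within the horizon and was consumed
    return i == len(arrivals) and all(not q for q in queues)
```

A tick-by-tick simulation like this is easy to reason about, but its running time grows with the horizon; the recursive formulation discussed next avoids iterating over every instant.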
Another line of thought: let's consider a _single_ queue. *Let's call T the processing time available to that queue*. At time=0, T=10, as no packet has been allocated yet. After 3 secs, T=7, because 3 secs have been wasted waiting for packets and doing nothing. If a packet of length 3 then arrives at time=4 secs, the queue will be busy until time=7, and T=3.
|----|===|---|
(idle 0-4, busy 4-7, still available 7-10)
So T is really how much processing the queue can still take on, considering all the allocated packets.
While considering one queue, we can define
[P1..Pn] the packets to receive. Note each P has _receiveTime_ and _length_ properties.
T* = process(T, [P1..Pn])
as the function returning T*, the available processing time _after_ processing [P1..Pn].
Now we have that
T* = process(T, [P1..Pn]) = process(T**, [P2..Pn])
T** = process(T, [P1])
The available time after processing [P1..Pn], starting from the initial time T, is the same as the available time after processing all the packets but the first, starting from the time left once the first packet has been processed.
We can use this idea to do a recursive implementation of the process function.
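A minimal sketch of that recursive process function for a single queue (assuming the simple subtraction the text uses below for the time left after one packet; the `Packet` fields come from the definitions above):

```python
from collections import namedtuple

# receiveTime and length are the packet properties named in the text
Packet = namedtuple("Packet", ["receiveTime", "length"])

def process(T, packets):
    """Remaining processing time after serving packets, per the recurrence above."""
    if not packets:
        return T                      # base case: nothing left to serve
    first, rest = packets[0], packets[1:]
    # time left once the first packet has been received and fully processed
    # (hedge: mirrors the simple single-queue subtraction used in the text)
    T_after_first = T - (first.receiveTime + first.length)
    return process(T_after_first, rest)
```

For the worked example above, `process(10, [Packet(4, 3)])` returns 3.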
But we have multiple queues! We can change the signature of the process function to take a list of times; the same logic still applies.
[T1..T5]* = process([T1..T5], [P1..Pn]) = process([T1..T5]**, [P2..Pn])
[T1..T5]** = process([T1..T5], [P1])
One important difference between one queue and multiple queues is how we compute the time left after processing P1. With a single queue, the resulting time is simply
T - (P1.receiveTime + P1.length)
When we move to multiple queues, we apply the same subtraction, but *only to one of the times, as we allocate the packet to one specific queue*. So we need to choose which queue to allocate it to!
As the problem asks for the minimum number of required queues, we are trying to *minimize the number of queues used*.
When a new packet arrives, should we send it to an empty queue or a non-empty one? We should always prefer a non-empty queue, specifically *the one with the least processing time left*. Ideally we would stick every packet into a single queue.
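That greedy choice can be sketched as follows (a sketch under assumptions: a 10-unit window per queue, at most 5 queues, packets tried in arrival order; `fits` and `min_queues` are hypothetical names, and the tie-breaking mirrors the heuristic in the text rather than a proven-optimal strategy):

```python
from collections import namedtuple

Packet = namedtuple("Packet", ["receiveTime", "length"])

def fits(packets, num_queues, window=10):
    """Can num_queues queues serve all packets within their time window?"""
    free_at = [0] * num_queues        # instant at which each queue is idle again
    for p in sorted(packets, key=lambda p: p.receiveTime):
        best = None
        for q in range(num_queues):
            start = max(free_at[q], p.receiveTime)
            if start + p.length <= window:
                # prefer the busiest feasible queue, i.e. the one with the
                # least processing time left (largest free_at)
                if best is None or free_at[q] > free_at[best]:
                    best = q
        if best is None:
            return False              # this packet cannot be served in time
        free_at[best] = max(free_at[best], p.receiveTime) + p.length
    return True

def min_queues(packets, max_queues=5, window=10):
    """Smallest number of queues that serves every packet, or None."""
    for n in range(1, max_queues + 1):
        if fits(packets, n, window):
            return n
    return None
```

Preferring the busiest feasible queue keeps empty queues in reserve, which is what drives the queue count down.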
Below is the code; the process function is implemented by the method calculateRemainingProcessingTime().
Please note the function is recursive, matching the definition above. But it's trivial to make it iterative, as all the parameters are mutable.
PS: test case 2 has output 2, not 1!
---------
- ograndedienne April 25, 2018