The problem statement looks a little incomplete, but a hashmap plus a linked list is the standard LRU cache implementation. The hashmap's keys point to nodes in the linked list, and each node's payload holds the value (the cached page content). After every fetch, the fetched node is moved to the head of the list, so the tail is always the least recently used entry and is the one thrown out when the cache is already full with n keys. This way the caching server performs well: frequently visited pages get a high cache hit rate, while cache misses occur mostly for infrequently visited pages.
- NEO May 06, 2014
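
The hashmap-plus-linked-list idea above can be sketched in Python; `collections.OrderedDict` provides both structures in one (a hash map whose entries also form a recency-ordered list). The class name `LRUCache` and the capacity parameter are illustrative, not from the question:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: OrderedDict acts as the hashmap
    whose entries are kept in recency order (a doubly linked list
    internally)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # key -> cached page content

    def get(self, key):
        if key not in self.cache:
            return None  # cache miss
        # Move the fetched entry to the most-recently-used end.
        self.cache.move_to_end(key)
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            # Evict from the least-recently-used end.
            self.cache.popitem(last=False)
```

With capacity 2, inserting a third key evicts whichever of the first two was touched least recently, matching the eviction rule described above.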