If I understood correctly, you first have an O(n) operation when you look for the list with the maximum length; then you have O(n*log(m)) to do a binary search on each row. But why do you have only one more O(log(m)) factor -- don't you repeat this, in the worst case, O(log(n*m)) times? So wouldn't the final complexity be O(n^2 * log(m) * log(n*m))?
- justguy September 20, 2015

I think this can be improved to O(n * log(n) * log(m) * log(n*m)): instead of doing a linear search for the row with the maximum number of elements, keep the rows in a max-heap keyed by length. In a language like C/C++/Java this can be done efficiently, because you can store a reference/pointer to each list and getting its size is an O(1) operation.
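The max-heap idea above can be sketched as follows. This is a minimal illustration (in Python rather than C/C++/Java; the function name `longest_list` and the sample data are made up, not from the original problem): the heap stores (negated length, index) pairs, so finding the longest row becomes an O(1) peek instead of an O(n) scan, at the cost of O(log n) per heap update.

```python
import heapq

def longest_list(lists):
    # Python's heapq is a min-heap, so store negated lengths to
    # simulate a max-heap. len() on a list is O(1), heapify is O(n).
    heap = [(-len(lst), i) for i, lst in enumerate(lists)]
    heapq.heapify(heap)
    neg_len, idx = heap[0]  # peek at the longest list: O(1)
    return idx, -neg_len

# Hypothetical sample data for illustration only.
rows = [[1, 2], [3, 4, 5, 6], [7]]
idx, size = longest_list(rows)  # index 1 has the longest row (4 elements)
```

After an element is removed from the longest row, you would push the updated (length, index) pair back onto the heap (O(log n)), which is where the extra log(n) factor in the improved bound comes from.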