A high-throughput distributed messaging system such as Kafka can handle millions of updates per second.
- colin August 03, 2016

For high scalability, use an intermediary data-processing stage (e.g. a Spark Streaming job) to perform first-level aggregation on the incoming data.
Each message contains all the information for a trip id. This data can be aggregated into sub-totals per batch at various granularities, e.g. per day, hour, minute, and city: total trips, total fare, total new users.
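As a minimal sketch of the first-level aggregation step, the snippet below groups one micro-batch of trip messages into per-hour, per-city sub-totals in plain Python. The message fields (trip id, city, fare, user id, timestamp) and the function name are assumptions for illustration, not part of the original answer; in practice this grouping would run inside the streaming job.

```python
from collections import defaultdict

def aggregate_batch(messages):
    """First-level aggregation over one micro-batch of trip messages.

    Returns sub-totals keyed by (hour, city): total trips and total fare.
    The message schema here is a hypothetical example.
    """
    subtotals = defaultdict(lambda: {"total_trips": 0, "total_fare": 0.0})
    for msg in messages:
        hour = msg["timestamp"][:13]  # e.g. "2016-08-03T14"
        key = (hour, msg["city"])
        subtotals[key]["total_trips"] += 1
        subtotals[key]["total_fare"] += msg["fare"]
    return dict(subtotals)

batch = [
    {"trip_id": 1, "city": "SF", "fare": 12.5, "user_id": "u1",
     "timestamp": "2016-08-03T14:02:11"},
    {"trip_id": 2, "city": "SF", "fare": 7.0, "user_id": "u2",
     "timestamp": "2016-08-03T14:30:45"},
    {"trip_id": 3, "city": "NYC", "fare": 20.0, "user_id": "u1",
     "timestamp": "2016-08-03T14:55:03"},
]
print(aggregate_batch(batch))
```

Only the small sub-total records flow downstream, which is what makes the intermediary stage cheap to store and query.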
The sub-totals can be stored in an RDBMS, with queries run against it to populate the dashboard. The queries should run frequently (well under 5 minutes apart) so the dashboard stays close to real time. These queries should update summary tables in the DB, which the dashboard then reads, keeping dashboard reads low-latency.
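A minimal sketch of the RDBMS step, using stdlib sqlite3 as a stand-in for whichever database is actually used; the table and column names are assumptions. Each periodic update merges a batch's sub-total into a running total, and the dashboard reads only the pre-aggregated table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE trip_subtotals (
        hour TEXT, city TEXT,
        total_trips INTEGER, total_fare REAL,
        PRIMARY KEY (hour, city)
    )
""")

def upsert_subtotal(hour, city, trips, fare):
    # Merge one batch's sub-total into the running total for (hour, city).
    conn.execute("""
        INSERT INTO trip_subtotals (hour, city, total_trips, total_fare)
        VALUES (?, ?, ?, ?)
        ON CONFLICT (hour, city) DO UPDATE SET
            total_trips = total_trips + excluded.total_trips,
            total_fare  = total_fare  + excluded.total_fare
    """, (hour, city, trips, fare))

upsert_subtotal("2016-08-03T14", "SF", 2, 19.5)
upsert_subtotal("2016-08-03T14", "SF", 1, 8.0)

# The dashboard reads the pre-aggregated table, so reads stay cheap.
row = conn.execute(
    "SELECT total_trips, total_fare FROM trip_subtotals "
    "WHERE hour = ? AND city = ?", ("2016-08-03T14", "SF")).fetchone()
print(row)
```

Because the dashboard never touches raw trip events, read latency is bounded by the size of the summary table rather than the event volume.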
All existing users can be stored in a hash set or a DB table. An interesting question is what counts as a new user: the same person might create a new ID.
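The hash-based check above can be sketched as follows; note that, as the answer points out, this only detects new *ids*, so the same person creating a second account would still count as new. The function name and ids are illustrative assumptions.

```python
# Set of previously seen user ids, standing in for the hash/DB table.
known_users = {"u1", "u2"}

def is_new_user(user_id):
    """Return True the first time a user id is seen, and record it.

    Caveat: this identifies new ids, not new people; a returning person
    with a fresh account is indistinguishable from a genuinely new user.
    """
    if user_id in known_users:
        return False
    known_users.add(user_id)
    return True

print(is_new_user("u3"))  # first sighting of this id
print(is_new_user("u3"))  # id is now known
```

In production the set would typically live in the DB or a fast key-value store rather than in process memory.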