Facebook Interview Question for SDE1s
Country: United States
- Chris, August 01, 2017

I think it's important to define the problem first:
- what does "slow" mean? High latency? For which percentile of requests: all of them, or only a few (e.g. the slowest 5%)? Where is it measured? (It could be that the server itself is fine but there is too much traffic in front of it, so the server receives the request late and the response takes too long to reach the client.) Best is to draw a picture with the server(s) and the surrounding infrastructure, up to the point of measurement. Worst case, it's an end customer reporting slowness with no additional information.
- are there any service level objectives (SLOs) defined that one can compare against?
- is there historical data? Did latency increase gradually or suddenly?
- when did it change? Did the change coincide with a new release, or were there no modifications?
- what diagnostics are available?
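Once latency history is available (e.g. from access logs or a metrics store), the percentile question above becomes concrete. A minimal sketch in Python, using a nearest-rank percentile; the latency samples are invented for illustration:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty list of numbers."""
    ordered = sorted(samples)
    # Index of the smallest value that covers pct% of the samples.
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

# Fabricated per-request latencies in milliseconds: mostly fast,
# with a few slow outliers -- the pattern that makes "for which
# percentile?" an important question.
latencies_ms = [12, 15, 14, 13, 210, 16, 12, 980, 14, 15]

p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
print(f"p50={p50}ms p95={p95}ms")
```

Note how the median looks healthy while the tail is terrible; comparing p50 only would hide the problem the complaining users actually see.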
Potential issues are:
- the server itself is overloaded (e.g. too much disk usage)
- the server contacts other services and waits for them (e.g. a slow backend service)
- the CPU is overloaded because some CPU-bound processes eat away CPU power
- too many services use too much memory, so the system keeps swapping pages in and out
- the data grew, and the current sharding scheme may not be effective anymore
- an OS update, a new network stack, or a driver change?
- etc.
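A couple of the host-level suspects above (CPU overload, full disks) can be checked with a few lines of stdlib Python. This is a Unix-only sketch; the thresholds are illustrative assumptions, not recommendations:

```python
import os
import shutil

def quick_host_check(path="/", load_factor=1.0, disk_pct=90):
    """Return a list of human-readable findings for obvious host problems.

    load_factor and disk_pct are made-up example thresholds.
    Unix-only: os.getloadavg() is not available on Windows.
    """
    findings = []

    # CPU: compare the 1-minute load average against the core count.
    load1, _, _ = os.getloadavg()
    cpus = os.cpu_count() or 1
    if load1 > load_factor * cpus:
        findings.append(f"CPU: load {load1:.1f} exceeds {cpus} cores")

    # Disk: percentage used on the given mount point.
    usage = shutil.disk_usage(path)
    used_pct = 100 * usage.used / usage.total
    if used_pct > disk_pct:
        findings.append(f"disk: {used_pct:.0f}% of {path} used")

    return findings

print(quick_host_check())
```

In practice you would reach for `top`, `vmstat`, or `iostat` first; the point is that these bullets correspond to cheap, mechanical checks, not guesswork.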
I think a systematic approach could be:
- verify the objectives, or assume reasonable ones if none are defined
- narrow down the service(s) causing the delay (DB cluster, app server cluster, and which services specifically). Start at the front-end service: what is the delay between receiving a request and serving it, and does it include the delay causing problems? If yes, go one level down and check the call graph to other services: is any single call causing the delay? If yes, move to that server and repeat.
- look into the affected server(s): what is causing the delay (disk, CPU, memory, e.g. page faults)?
- check the historical data if available (sudden change vs. a continuous trend)
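The narrowing-down step above can be sketched as simple call-graph timing: wrap each downstream call in a timer, then rank services by accumulated elapsed time. The service names and sleep durations below are invented stand-ins for real backend calls:

```python
import time
from contextlib import contextmanager

# Accumulated wall-clock time per downstream service name.
timings = {}

@contextmanager
def timed(name):
    """Record elapsed time of the enclosed block under the given name."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

# Simulate one front-end request fanning out to two backends.
with timed("auth-service"):
    time.sleep(0.01)   # stand-in for a fast call
with timed("db-cluster"):
    time.sleep(0.05)   # stand-in for the slow call we want to find

slowest = max(timings, key=timings.get)
print(f"slowest downstream call: {slowest}")
```

Real systems get this from distributed tracing (request IDs propagated through every call), but the principle is the same: measure at each hop, find the hop that dominates, then descend into that server and repeat.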