Google Interview Question for SDE1s
Country: United States
1) A request consists of a memory address (in RAM), the number of bytes to copy, and a hard disk address.
2) All requests are queued on the hard disk controller, which continuously rearranges the queue based on the disk's current state, favoring the lowest seek distance.
3) The controller processes the requests on the queue, and asynchronously the DMA (Direct Memory Access) engine copies the n bytes into RAM.
4) When a request completes, an interrupt is raised with the request number, so the process waiting on it can be woken up.
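The flow above can be sketched as a toy model. This is an illustrative sketch only, not a real driver: the class and field names (`DiskController`, `submit`, `next_request`) are invented, and "rearranging the queue for lowest seek distance" is modeled as shortest-seek-time-first selection from the pending list.

```python
class DiskController:
    """Toy model of a controller queue that always serves the pending
    request closest to the current head position (shortest seek distance).
    All names here are illustrative, not a real API."""

    def __init__(self, head=0):
        self.head = head
        self.pending = []  # list of (hd_addr, ram_addr, n_bytes, req_id)

    def submit(self, req_id, hd_addr, ram_addr, n_bytes):
        # Step 1: a request = RAM address + byte count + disk address.
        self.pending.append((hd_addr, ram_addr, n_bytes, req_id))

    def next_request(self):
        # Step 2: pick the request with the lowest seek distance
        # from the current head position, then move the head there.
        best = min(self.pending, key=lambda r: abs(r[0] - self.head))
        self.pending.remove(best)
        self.head = best[0]
        return best  # steps 3-4 (DMA copy, completion interrupt) not modeled

ctl = DiskController(head=50)
ctl.submit(1, hd_addr=90, ram_addr=0x1000, n_bytes=512)
ctl.submit(2, hd_addr=55, ram_addr=0x2000, n_bytes=512)
ctl.submit(3, hd_addr=10, ram_addr=0x3000, n_bytes=512)
order = [ctl.next_request()[3] for _ in range(3)]
print(order)  # [2, 1, 3]: 55 is closest to 50, then 90, then 10
```

Note that always taking the nearest request can starve far-away requests under heavy load, which is what elevator-style scheduling addresses.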
Probably a bit late on this question...but I thought I would take a shot at it.
There are a couple of ways to reduce read latency.
+ On sequential reads, read-ahead (prefetching upcoming blocks) can reduce latency.
+ Parallelizing the read, i.e. reading from multiple disks as in a RAID group, can reduce latency.
+ Caching blocks can improve latency as well, provided the hit rate is good.
+ For random reads, read-ahead will not help, and caching helps little unless the same blocks are re-read. So using a disk with good random-read performance, such as SSDs or SAS disks, helps here.
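The read-ahead idea from the first bullet can be sketched in a few lines. This is a hypothetical helper (the names `make_readahead_reader` and `disk_read` are invented stand-ins); in a real system the prefetch would be issued as one larger sequential I/O rather than separate calls.

```python
def make_readahead_reader(disk_read, window=4):
    """Wrap a block-read function with a simple read-ahead cache:
    on a miss, also prefetch the next `window` blocks."""
    cache = {}
    def read(block):
        if block in cache:
            return cache.pop(block)   # cache hit: no disk access needed
        data = disk_read(block)
        for b in range(block + 1, block + 1 + window):
            cache[b] = disk_read(b)   # prefetched in the same pass
        return data
    return read

accesses = []
def disk_read(block):                  # stand-in for the real device read
    accesses.append(block)
    return f"data-{block}"

read = make_readahead_reader(disk_read, window=2)
read(0); read(1); read(2)
print(accesses)  # [0, 1, 2]: blocks 1 and 2 were prefetched with block 0
```

The win is that the three logical reads cost one seek instead of three; for random access the prefetched blocks would simply go unused, which is why the last bullet says read-ahead does not help there.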
For reducing write latency, there are a couple of options as well.
+ Write to persistent memory (e.g. a battery-backed or NVRAM buffer) and lazily flush to disk at a later time.
+ Parallelize the writes, i.e. stripe them across multiple disks.
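The lazy-write bullet can be sketched as a write-behind buffer: the caller's write returns as soon as the data is buffered (assumed to sit in persistent memory), and a background thread drains it to the slow device. All names here are illustrative.

```python
import threading
import queue

log = []              # stands in for the slow disk
buf = queue.Queue()   # assumed to live in persistent/battery-backed memory

def flusher():
    # Background thread: lazily drain buffered writes to disk.
    while True:
        item = buf.get()
        if item is None:          # shutdown sentinel
            break
        addr, data = item
        log.append((addr, data))  # the slow disk write happens here
        buf.task_done()

t = threading.Thread(target=flusher)
t.start()

def write(addr, data):
    buf.put((addr, data))  # returns immediately: low observed write latency

write(10, b"a")
write(20, b"b")
buf.join()             # wait for the flush (only needed for the demo)
buf.put(None)
t.join()
print(log)  # [(10, b'a'), (20, b'b')]
```

The trade-off is durability: if the buffer is ordinary DRAM rather than persistent memory, acknowledged writes can be lost on power failure.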
In addition to the other answers, use an elevator (SCAN) disk scheduling algorithm: service requests in the current direction of head travel, then reverse, which keeps seeks short without starving far-away requests.
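A minimal sketch of elevator (SCAN) ordering, with an invented function name; it returns the service order for a set of pending track numbers given the head position and travel direction.

```python
def elevator_order(head, requests, direction=+1):
    """SCAN ('elevator') scheduling sketch: serve all requests in the
    current direction of head travel, then sweep back for the rest.
    Unlike pure shortest-seek-first, no request is starved."""
    ahead  = sorted(r for r in requests if (r - head) * direction >= 0)
    behind = sorted((r for r in requests if (r - head) * direction < 0),
                    reverse=True)
    if direction < 0:
        # Moving toward lower tracks: descend first, then ascend.
        ahead, behind = ahead[::-1], behind[::-1]
    return ahead + behind

print(elevator_order(50, [90, 55, 10, 70]))  # [55, 70, 90, 10]
```

With the head at 50 moving upward, the sweep serves 55, 70, 90, then reverses for 10; a pure nearest-first policy could keep deferring 10 indefinitely as new nearby requests arrive.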
- smm March 03, 2016