Amazon Interview Question
Software Engineer / Developer
Country: United States
Interview Type: In-Person
Good way to think about this problem. My solution is similar, but it brought up another question:
My original design is a 2 column table:
Table(day, details) --- the detail column is a concatenated string of numbers, like a CSV file.
The day, or the timestamp in your design, is the index.
The index is normally a B-tree (I'm not considering other index types), so the search on day in my design is about O(log(365*5)), ignoring leap years.
Searching within the detail string is then O(60*60*12).
On the other hand, in your design,
searching the index is O(log(5*365*12*60)) and searching the detail is O(60).
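Plugging in the numbers above makes the comparison concrete. A quick sketch, keeping the estimate's own assumptions (5 years of data, 365-day years, 12-hour days):

```python
import math

# Assumptions carried over from the estimate above: 5 years, 365 days/year,
# 12 hours/day, 60 minutes/hour, 60 seconds/minute.
YEARS, DAYS, HOURS, MINUTES, SECONDS = 5, 365, 12, 60, 60

# Design 1: one row per day; the detail column holds every per-second count.
# Cost = B-tree lookup over the day rows + linear scan of the detail string.
design1 = math.log2(YEARS * DAYS) + HOURS * MINUTES * SECONDS

# Design 2: one row per minute; the detail column holds 60 per-second counts.
# Cost = B-tree lookup over the minute rows + scan of at most 60 values.
design2 = math.log2(YEARS * DAYS * HOURS * MINUTES) + SECONDS

print(f"design 1 ~ {design1:.0f} ops, design 2 ~ {design2:.0f} ops")
```

The detail-scan term dominates: the daily table pays tens of thousands of operations per point lookup, while the per-minute table pays well under a hundred, since the deeper B-tree costs only a handful of extra comparisons.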
So this becomes a math problem of finding the crossover point between the two designs (it also depends on which kinds of searches users run most often).
I am not going to do the math here. Another thing we can do, though, is partition the table (perhaps by range: quarters of the year, or months) to divide the big data set. I suspect your design is more reasonable than mine once partitioned: my original design is daily based, so the number of index entries is small and partitioning won't help much, while your design suits it better.
So to improve this, we could either do the math to balance the index cost against the detail-scan cost, or find out how often users run each kind of search.
I think there are two parts to this question: how would I design the database now to start logging requests, and how would I store historic data? I have no choice but to log requests at the finest granularity requested, with rollups to the desired coarser granularities when fetching them from the table. Once the data gets too big, I would back it up to the Archive storage engine to save space (this isn't a great deal if we plan to query historic data frequently, because archive tables don't support indexes).
I would use a star schema with a single dimension. The fact table would hold the number of requests processed, and the DateTime dimension table would hold attributes such as second, minute, hour, date, day, month, quarter, and year.
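A minimal sketch of that star schema, using SQLite for illustration (table and column names are my own, not from the answer above):

```python
import sqlite3

# In-memory database just for the sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_datetime (          -- one row per time grain
    datetime_id INTEGER PRIMARY KEY,
    second INTEGER, minute INTEGER, hour INTEGER,
    day INTEGER, month INTEGER, quarter INTEGER, year INTEGER
);
CREATE TABLE fact_requests (         -- counts keyed by the dimension
    datetime_id INTEGER REFERENCES dim_datetime(datetime_id),
    request_count INTEGER
);
""")

# Hypothetical sample row: 1500 requests at 09:30:00 on 2012-06-21.
conn.execute("INSERT INTO dim_datetime VALUES (1, 0, 30, 9, 21, 6, 2, 2012)")
conn.execute("INSERT INTO fact_requests VALUES (1, 1500)")

# Any attribute of the dimension becomes a rollup key via a join,
# e.g. requests per hour:
rows = conn.execute("""
    SELECT d.hour, SUM(f.request_count)
    FROM fact_requests f JOIN dim_datetime d USING (datetime_id)
    GROUP BY d.hour
""").fetchall()
print(rows)  # [(9, 1500)]
```

The point of the single wide dimension is that rolling up to minute, hour, day, quarter, or year is just a different GROUP BY column, with no schema change.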
The granularity of the historical data can be used to design this system.
- Anonymous June 21, 2012

There can be two in-memory data points, for the current second and the current minute, which can be inserted into the DB every minute. A separate process can then aggregate the data on a monthly and yearly basis.
To deal with the data volume, the per-second counts can be rolled up into a per-minute row.
Minutes Data table:
Timestamp | Count | Details
Timestamp = the start of each minute
Count = total requests in that minute
Details = per-second breakdown: a comma-separated string of counts for each second in that minute
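The rollup described above can be sketched in a few lines. The function name and timestamp format are my own choices, not specified in the answer:

```python
from datetime import datetime, timezone

def rollup_minute(minute_ts, per_second_counts):
    """Roll 60 per-second counts into one row for the minutes table.

    Returns (timestamp, total count, comma-separated details),
    matching the Timestamp | Count | Details layout above.
    """
    assert len(per_second_counts) == 60, "one count per second of the minute"
    details = ",".join(str(c) for c in per_second_counts)
    return (minute_ts, sum(per_second_counts), details)

# Hypothetical minute of traffic: 2 requests in each of its 60 seconds.
row = rollup_minute(datetime(2012, 6, 21, 9, 30, tzinfo=timezone.utc), [2] * 60)
print(row[1])  # 120
```

Storing one row per minute instead of 60 rows per minute cuts the row count (and index size) by 60x, while the Details column still lets a reader recover any individual second.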