Morgan Stanley Interview Question
Software Engineer / Developer

Do you mean the frequency of a particular/given word, like a search string, or the frequencies of all the words in the dictionary?
It doesn't matter; you still have to do hashing, because the query can be ANY word, so you have to run through the entire book or file once for all the words. You can hash it once and 'cache' it, or write the hashed structure out to another file, so that if the user queries again it will be fast the second time. It's mostly going to be slow the first time, as with MOST THINGS in life.
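A minimal sketch of the one-pass hash-and-cache idea above, using Python's `Counter` as the hash table (the tokenization by whitespace is a simplifying assumption):

```python
from collections import Counter

def build_word_counts(text):
    # One linear pass over the book: hash every word and count it.
    # This is the expensive first run; the result can be cached.
    return Counter(text.lower().split())

def frequency(counts, word):
    # After the one-time pass, each lookup is O(1) on average.
    return counts[word.lower()]

book = "the quick brown fox jumps over the lazy dog the end"
counts = build_word_counts(book)
print(frequency(counts, "the"))  # 3
```

In practice the `counts` structure would be serialized to disk so a second session skips the linear pass entirely.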
It can be done in linear time using the Boyer-Moore algorithm. If preprocessing is allowed, we can build a hash table and keep a count of all the words seen in the book, but that will be expensive. We can also use a suffix tree, which can be constructed in O(n) time and requires O(n) space; the search for the pattern can then be carried out in O(length of pattern). But this also takes a lot of space. Can anyone think of a better solution?
1. Linear lookup of this word.
- rp March 15, 2007

2. Is that saying that it is a least-squares regression problem?

3. Make the bigger set a tree, then query each item of set2 in that tree, so the complexity is O(n log n).
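A small sketch of point 3. Python has no built-in balanced search tree, so a sorted list with binary search stands in for one here; sorting the bigger set is O(n log n) and each membership query is O(log n), the same bound as the tree-based approach:

```python
import bisect

def intersect(set1, set2):
    # Build the "tree" (here: a sorted list) from the bigger set,
    # then binary-search each item of the smaller set in it.
    bigger, smaller = (set1, set2) if len(set1) >= len(set2) else (set2, set1)
    tree = sorted(bigger)                 # O(n log n) construction
    result = []
    for x in smaller:                     # O(m log n) total queries
        i = bisect.bisect_left(tree, x)
        if i < len(tree) and tree[i] == x:
            result.append(x)
    return result

print(intersect([5, 1, 9, 3, 7], [3, 8, 5]))  # [3, 5]
```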