Goldman Sachs Interview Question for Software Engineer / Developers


Team: Strategies Group
Country: India
Interview Type: In-Person





Why not use a doubly linked list plus a map built from red-black trees (like C++'s std::map)? Inserting would involve first checking whether the value is already present, which takes O(log n) time. The map would store each node's address, keyed by the value stored in that node.

- anantkaushik89 June 12, 2013 | Flag Reply

This is how LRU is implemented.

- Geek July 20, 2014 | Flag

A max-heap with the key being each unit's last access time. Would this be sufficient?

- Jacques January 02, 2013 | Flag Reply

This won't be sufficient. This way, replacement and insertion will be O(log n), but searching would be O(n). Correct me if I am wrong.

- aaa April 24, 2013 | Flag

Search time in a max-heap can only be optimized to about N/2 comparisons, so in my view a regular max-heap won't help. You could instead choose:
1. Skip list: takes O(log n) time on average for insertion, search, and replacement. You can always keep a pointer to the last node for the LRU entry; updating it takes O(1) time if the skip list is implemented with quad nodes.
2. Heap-structured search trees (HSSTs) look like another good option, but since the skip list is a more standard and mature data structure, I would give it a shot first.

- Anshul January 02, 2013 | Flag Reply

Searching a skip list is O(log n) only if the list is sorted. In this case, on what parameter would you sort: access time or the item?

- aaa April 24, 2013 | Flag
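If the map is keyed by access time, the LRU entry is simply the smallest key. A minimal sketch in Java, using ConcurrentSkipListMap (the JDK's skip-list implementation) together with a HashMap from each item to its current timestamp so the stale key can be removed on every access; the class and field names are illustrative, and a logical clock is assumed so keys stay unique:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

class SkipListLRU {
    private final int capacity;
    private long clock = 0;  // logical access time; strictly increasing

    // Skip list sorted by access time: firstKey() is the least recently used.
    private final ConcurrentSkipListMap<Long, String> byTime =
            new ConcurrentSkipListMap<Long, String>();

    // Item -> its current timestamp key in byTime.
    private final Map<String, Long> lastAccess = new HashMap<String, Long>();

    SkipListLRU(int capacity) {
        this.capacity = capacity;
    }

    public boolean contains(String item) {
        return lastAccess.containsKey(item);
    }

    public void access(String item) {
        Long old = lastAccess.get(item);
        if (old != null) {
            byTime.remove(old);                    // O(log n): drop the stale key
        } else if (lastAccess.size() == capacity) {
            Long lru = byTime.firstKey();          // smallest time = least recent
            lastAccess.remove(byTime.remove(lru)); // evict the LRU item
        }
        long t = clock++;
        byTime.put(t, item);                       // O(log n) insert
        lastAccess.put(item, t);
    }
}
```

Each access costs a couple of O(log n) skip-list operations, and eviction only has to read the oldest timestamp via firstKey().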

A priority queue should be fine. It can be implemented with a linked list; all nodes are kept sorted, so inserting, searching, and replacing will take O(log n) time.

- zyfo2 January 04, 2013 | Flag Reply

Sorted according to what? If it is sorted by page ID, then how will you maintain priority, and vice versa?

- isandesh7 February 01, 2013 | Flag

I think we can implement it with an AVL tree.

- Ajinkya January 06, 2013 | Flag Reply

If there are n elements, the best data structure to use is an array of size log n. This array acts as a frame and replaces the least recently used entry.
Hence insertion, search, and replacement will take at most O(log n).

- Anonymous January 20, 2013 | Flag Reply

Use an array of structures:

struct page
{
    int page_no;
    int flag;  /* 0: page not in memory; otherwise it is present */
};

The array is kept sorted, so we never have to delete any element; we just search and update the flag.
Searching takes O(log n) time and updating the flag takes O(1) time.

- Sidharth June 09, 2013 | Flag Reply

1) Construct a linked list of the pages present in the cache such that the least recently used page is always at the end and the most recently used page is always at the front of the list. Each node of the list contains the page number and its address in the cache.
2) Construct a height-balanced binary search tree (such as an AVL tree). Each node of the tree stores a page number and the address of the corresponding node of the linked list.

FOR EACH REQUEST: Search for the page number in the tree (O(log n)).
=> If the page is present, delete its node from the list and insert it at the front of the list (O(1)).
=> If the page is not present, load it from memory and insert it at the front of the list (O(1)). Insert the corresponding node into the tree (O(log n)).
=> If the list is full, delete the last node from the list (O(1)) and the corresponding node from the tree (O(log n)).

NOTE that we can also perform all the operations in constant average time using HashMaps.

- EOF June 13, 2013 | Flag Reply
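The steps above can be sketched in Java, with TreeMap (a red-black tree, also height-balanced) standing in for the AVL tree and a hand-rolled doubly linked list for recency order; the class and method names here are illustrative:

```java
import java.util.TreeMap;

// Doubly linked list ordered by recency + a balanced-tree map (TreeMap)
// from key to list node. Lookup is O(log n); moving a node to the front
// of the list is O(1); eviction removes the tail node.
class TreeMapLRU {
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final TreeMap<Integer, Node> index = new TreeMap<Integer, Node>();
    private Node head, tail;  // head = most recently used, tail = least

    TreeMapLRU(int capacity) { this.capacity = capacity; }

    public Integer get(int key) {
        Node n = index.get(key);          // O(log n) tree search
        if (n == null) return null;
        moveToFront(n);                   // O(1) list surgery
        return n.value;
    }

    public void put(int key, int value) {
        Node n = index.get(key);
        if (n != null) { n.value = value; moveToFront(n); return; }
        if (index.size() == capacity) {   // full: evict the LRU tail
            index.remove(tail.key);       // O(log n) tree delete
            removeNode(tail);             // O(1) list delete
        }
        n = new Node(key, value);
        index.put(key, n);                // O(log n) tree insert
        addToFront(n);                    // O(1)
    }

    private void removeNode(Node n) {
        if (n.prev != null) n.prev.next = n.next; else head = n.next;
        if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
    }

    private void addToFront(Node n) {
        n.prev = null; n.next = head;
        if (head != null) head.prev = n;
        head = n;
        if (tail == null) tail = n;
    }

    private void moveToFront(Node n) {
        if (n == head) return;
        removeNode(n);
        addToFront(n);
    }
}
```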

We can extend LinkedHashMap.

- Ravi July 29, 2013 | Flag Reply
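For reference, a minimal sketch of this approach: construct the LinkedHashMap with accessOrder = true and override removeEldestEntry, and eviction happens automatically on put (the capacity field here is an assumption for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache by extending LinkedHashMap: the third constructor argument
// (accessOrder = true) makes iteration order follow access recency, and
// removeEldestEntry evicts the least recently used entry on overflow.
class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LRUCache(int capacity) {
        super(16, 0.75f, true);  // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

With accessOrder set, get() counts as an access and moves the entry to the end of the iteration order, so the eldest entry is always the least recently used one.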

Can't we use a binary search tree with the key being each unit's last access time?
For searching, use FindMaximumKey(struct node *root), which takes O(log n).
Insertion will also take O(log n).

- da anmol August 09, 2013 | Flag Reply

Use a doubly linked list to store the pages, with the most recently used page at the head of the list and the least recently used page at the tail.
APage<=>BPage<=>CPage

Then use a map to map the page ID to the page.
A->APage
B->BPage
C->CPage

Search: O(1) from the map, O(1) remove and add to front of list
Insert: O(1) in the map, O(1) by adding to the front of the list
Replace: O(1) in the map, O(1) by removing from back of list and adding to front of list

- maximus_algorius August 12, 2013 | Flag Reply
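A sketch of this design in Java, with sentinel head and tail nodes to keep the list splicing branch-free; the names (PageCache, Node) are illustrative, not from the original comment:

```java
import java.util.HashMap;
import java.util.Map;

// Map from page ID to list node for O(1) search, plus a doubly linked
// list (with sentinel head/tail) for O(1) recency updates and eviction.
class PageCache {
    private static class Node {
        String id, page;
        Node prev, next;
    }

    private final int capacity;
    private final Map<String, Node> map = new HashMap<String, Node>();
    private final Node head = new Node();  // sentinel: head.next is the MRU page
    private final Node tail = new Node();  // sentinel: tail.prev is the LRU page

    PageCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    // Search: O(1) map lookup, O(1) move to front.
    public String get(String id) {
        Node n = map.get(id);
        if (n == null) return null;
        unlink(n);
        linkFront(n);
        return n.page;
    }

    // Insert/Replace: O(1); removes from the back of the list when full.
    public void put(String id, String page) {
        Node n = map.get(id);
        if (n == null) {
            if (map.size() == capacity) {
                Node lru = tail.prev;      // least recently used page
                unlink(lru);
                map.remove(lru.id);
            }
            n = new Node();
            n.id = id;
            map.put(id, n);
        } else {
            unlink(n);
        }
        n.page = page;
        linkFront(n);
    }

    private void unlink(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    private void linkFront(Node n) {
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
    }
}
```

All three operations the comment lists (search, insert, replace) reduce to a HashMap lookup plus constant-time pointer surgery on the list.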

Data is deleted by a scheduler thread using the cleanData method. (Note that this evicts entries by age, like a TTL cache, rather than by strict LRU order.)

package com.learning.lru;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class MyLRUCache {

	private final ArrayList<Data> dataHolder = new ArrayList<Data>();

	private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);

	// Entries older than this many milliseconds are evicted by cleanData().
	private final long dataLifetime = 5000;

	public Data getData(int i) {
		lock.readLock().lock();
		try {
			if (i < 0 || i >= dataHolder.size()) {
				System.out.println(" No data found");
				return null;
			}
			return dataHolder.get(i);
		} finally {
			lock.readLock().unlock();
		}
	}

	public boolean putData(Data d) {
		lock.writeLock().lock();
		try {
			dataHolder.add(d);
			System.out.println(" Data size after put: " + dataHolder.size());
			// A return inside the finally block would discard this value,
			// so the finally block only releases the lock.
			return true;
		} finally {
			lock.writeLock().unlock();
		}
	}

	public boolean cleanData() {
		lock.writeLock().lock();
		try {
			if (dataHolder.isEmpty()) {
				System.out.println(" No data to clean");
				return false;
			}
			System.out.println(" Data size " + dataHolder.size());
			boolean removed = false;
			// Use an explicit Iterator so removal during traversal does not
			// throw ConcurrentModificationException.
			for (Iterator<Data> it = dataHolder.iterator(); it.hasNext(); ) {
				Data d = it.next();
				if (System.currentTimeMillis() - d.getTimeStamp().getTime() > dataLifetime) {
					it.remove();
					removed = true;
				}
			}
			return removed;
		} finally {
			lock.writeLock().unlock();
		}
	}
}

// The Data class was referenced but not shown; a minimal version:
class Data {
	private final int id;
	private final java.util.Date timeStamp;

	Data(int id, java.util.Date timeStamp) {
		this.id = id;
		this.timeStamp = timeStamp;
	}

	public java.util.Date getTimeStamp() {
		return timeStamp;
	}

	@Override
	public String toString() {
		return "Data#" + id;
	}
}

//Client code

package com.learning.lru;

import java.util.Date;

public class LRUClient {

	public static void main(String[] args) {

		MyLRUCache cache = new MyLRUCache();

		LRUClient client = new LRUClient();
		Thread lruc = new Thread(client.new LRUConsumer(cache));
		Thread lrup = new Thread(client.new LRUProducer(cache));
		Thread lrucl = new Thread(client.new LRUCleaner(cache));

		lrup.start();
		lruc.start();
		lrucl.start();

		// A ScheduledExecutorService would be a better fit for the cleaner thread.
	}

	class LRUConsumer implements Runnable {
		MyLRUCache cache;

		LRUConsumer(MyLRUCache cache) {
			this.cache = cache;
		}

		@Override
		public void run() {
			try {
				Thread.sleep(10);  // give the producer a head start
			} catch (InterruptedException e) {
				e.printStackTrace();
			}
			System.out.println(" Consuming  Data ---------->>>>>>>>");
			for (int i = 0; i < 20; i++) {
				System.out.println(" DATA :>>>" + cache.getData(i));
			}
		}
	}

	class LRUProducer implements Runnable {
		MyLRUCache cache;

		LRUProducer(MyLRUCache cache) {
			this.cache = cache;
		}

		@Override
		public void run() {
			System.out.println(" Putting  Data ---------->>>>>>>>");
			for (int i = 0; i < 20; i++) {
				cache.putData(new Data(i, new Date(System.currentTimeMillis())));
			}
		}
	}

	class LRUCleaner implements Runnable {
		MyLRUCache cache;

		LRUCleaner(MyLRUCache cache) {
			this.cache = cache;
		}

		@Override
		public void run() {
			try {
				while (true) {
					Thread.sleep(3000);  // sweep periodically instead of busy-looping
					System.out.println(" Cleaning Data ---------->>>>>>>>");
					cache.cleanData();
				}
			} catch (InterruptedException e) {
				e.printStackTrace();
			}
		}
	}
}

- java.interviews.questions September 10, 2013 | Flag Reply

Maybe maintain a tree-like structure with the data as node values, plus a queue that stores elements in the order they were added to the tree. Searching the tree would be O(log n), and for deletion/insertion we can take the element at the front of the queue to know which one is least recently used.

- Puja January 27, 2014 | Flag Reply

We can use a min-priority queue keyed on timestamp, so we can extract the least recently used process in O(log n) time; insertion and deletion in a priority queue also take O(log n) time. :)

- Anonymous June 18, 2014 | Flag Reply

Google "implement lru cache geeksforgeeks".

- Priyan October 10, 2014 | Flag Reply

