//Simple DFS approach
int components() {
    int count = 0;
    for (Node n : Graph.vertices()) {
        if (!visited(n)) {
            count++;
            DFS(n);
        }
    }
    return count;
}

void DFS(Node node) {
    visit(node);
    for (Node adj : node.getAdjacentNodes()) {
        if (!visited(adj)) { // guard against cycles, otherwise the recursion never terminates
            DFS(adj);
        }
    }
}

//To implement visited we can use an array or a Map/Dictionary
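For completeness, a self-contained sketch of the same idea, assuming the graph is given as an adjacency list (Map<Integer, List<Integer>> is a representation chosen here for illustration) and using a HashSet for the visited check:

import java.util.*;

static int components(Map<Integer, List<Integer>> graph) {
    Set<Integer> visited = new HashSet<>();
    int count = 0;
    for (int v : graph.keySet()) {
        if (visited.add(v)) { // add() returns false when v was already visited
            count++;
            dfs(graph, v, visited);
        }
    }
    return count;
}

static void dfs(Map<Integer, List<Integer>> graph, int v, Set<Integer> visited) {
    for (int adj : graph.getOrDefault(v, List.of())) {
        if (visited.add(adj)) {
            dfs(graph, adj, visited);
        }
    }
}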
public static int getWater(int[] heights) {
    if (heights == null) {
        return 0;
    }
    int n = heights.length;
    // left[i] = highest bar in heights[0..i]
    int max = -1;
    int[] left = new int[n];
    for (int i = 0; i < n; i++) {
        if (heights[i] > max) {
            max = heights[i];
        }
        left[i] = max;
    }
    // right[i] = highest bar in heights[i..n-1]
    max = -1;
    int[] right = new int[n];
    for (int i = n - 1; i >= 0; i--) {
        if (heights[i] > max) {
            max = heights[i];
        }
        right[i] = max;
    }
    // Water above bar i is bounded by the lower of the two surrounding maxima.
    int ans = 0;
    for (int i = 0; i < n; i++) {
        if (heights[i] == left[i] || heights[i] == right[i]) {
            continue; // bar i is itself a maximum, so it holds no water
        }
        ans = ans + Math.min(left[i], right[i]) - heights[i];
    }
    return ans;
}
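For example, getWater(new int[]{2, 0, 3, 0, 1}) returns 3: index 1 holds min(2, 3) - 0 = 2 units and index 3 holds min(3, 1) - 0 = 1 unit.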
It is a classic problem - implement it using one queue (in practice a doubly linked list, so middle nodes can be unlinked in O(1)) and a HashMap/table.
Each time you access an element through the map, unlink it from its current position and re-add it at the front of the queue.
The HashMap holds direct references to the nodes in the queue, so finding an element in the queue is an O(1) operation.
The most recently used pages stay near the front and the least recently used pages drift toward the rear, so eviction removes from the rear.
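A minimal sketch of this design, assuming int keys/values and a fixed capacity of at least 1 (both assumptions for illustration):

import java.util.HashMap;
import java.util.Map;

class LRUCache {
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int k, int v) { key = k; value = v; }
    }

    private final int capacity;
    private final Map<Integer, Node> map = new HashMap<>();
    private final Node head = new Node(0, 0); // sentinel on the most-recently-used side
    private final Node tail = new Node(0, 0); // sentinel on the least-recently-used side

    LRUCache(int capacity) { // assumes capacity >= 1
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    int get(int key) {
        Node n = map.get(key);
        if (n == null) return -1;
        unlink(n);
        addFront(n); // accessed, so move to the front
        return n.value;
    }

    void put(int key, int value) {
        Node n = map.get(key);
        if (n != null) {
            n.value = value;
            unlink(n);
            addFront(n);
            return;
        }
        if (map.size() == capacity) { // evict the least recently used (rear)
            Node lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);
        }
        n = new Node(key, value);
        map.put(key, n);
        addFront(n);
    }

    private void unlink(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    private void addFront(Node n) {
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
    }
}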
1. If "peak" means the highest element, then we can find it via the smallest element: find the index i of the smallest element; if i is greater than 0, then index i-1 holds the largest element, otherwise the last element of the array is the largest. Here is the code for that:
public static int search(int A[]) {
    int N = A.length;
    int L = 0;
    int R = N - 1;
    // Invariant: the smallest element always lies within A[L..R].
    while (A[L] > A[R]) {
        int M = L + (R - L) / 2;
        if (A[M] > A[R])
            L = M + 1;
        else
            R = M;
    }
    // L is now the index of the smallest element; the largest sits just before it.
    if (L == 0) {
        return N - 1;
    } else {
        return L - 1;
    }
}
2. To restore the array to its original (sorted) shape we use the following property of rotation:
After X rotations, array[i] moves to array[(i + X) % arrayLength].
So if we rotate an array of size N exactly N times (i.e. X == N), the array is unchanged - it stays in sorted order.
So just rotate it (N - X) more times to get the original array back.
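A minimal sketch of this step (rotate is an assumed helper name, not from the original):

static int[] rotate(int[] a, int times) {
    int n = a.length;
    int[] out = new int[n];
    for (int i = 0; i < n; i++) {
        out[(i + times) % n] = a[i]; // the element at i moves to (i + times) % n
    }
    return out;
}

Usage: if the array of size n was rotated X times, rotate(a, n - X) restores the original order.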
1. We will maintain a HashMap/dictionary to store the count/quantity sold:
HashMap<String, Integer> frequency; // It will store <ItemID, quantitySold>
2. Maintain a min-heap of size i; it will contain only itemIDs.
3. Now iterate over the list of items (see the sketch after these steps):
Update the HashMap:
1. If the item exists in the HashMap, then add item.quantitySold to its count.
2. If the item does not exist in the HashMap, then add it.
Update the min-heap:
1. If the itemID already exists in the heap, then re-heapify with the new quantity.
2. Else check whether its quantity is greater than the quantity at the head of the heap. If it is bigger, replace the head and heapify.
Time: O(N log i), since each operation on a heap of size i costs O(log i) (assuming heap positions are tracked so the update in step 1 avoids a linear scan)
Space: O(N)
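A minimal offline sketch of the same idea - it aggregates the whole list first and then makes one pass over the map with a min-heap of size i, which sidesteps the in-place heap update of step 1 (method and type names here are assumptions):

import java.util.*;

static List<String> topSellers(List<Map.Entry<String, Integer>> sales, int i) {
    // Step 1: aggregate quantity sold per itemID.
    Map<String, Integer> frequency = new HashMap<>();
    for (Map.Entry<String, Integer> sale : sales) {
        frequency.merge(sale.getKey(), sale.getValue(), Integer::sum);
    }
    // Step 2: min-heap keyed by total quantity; the head is the smallest
    // of the current top i, so it is the one to evict.
    PriorityQueue<String> heap =
            new PriorityQueue<>(Comparator.comparingInt(frequency::get));
    for (String id : frequency.keySet()) {
        heap.offer(id);
        if (heap.size() > i) heap.poll(); // drop the smallest of the i+1
    }
    return new ArrayList<>(heap);
}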
We can solve it by using a simple flood fill:
public final class Maze {
    private static boolean[][] m_visited;
    private static int[][] m_maze;

    public static boolean hasPath(int[][] maze, int startRow, int startCol, int destRow, int destCol) {
        if (maze == null || maze.length == 0) {
            return false;
        }
        m_visited = new boolean[maze.length][maze[0].length];
        m_maze = maze;
        visit(startRow, startCol);
        return m_visited[destRow][destCol];
    }

    private static void visit(int i, int j) {
        if (i < 0 || i >= m_visited.length || j < 0 || j >= m_visited[0].length || m_maze[i][j] == 1) {
            return; // out of bounds, or a wall
        }
        if (!m_visited[i][j]) {
            m_visited[i][j] = true;
            visit(i - 1, j); // move up
            visit(i + 1, j); // move down
            visit(i, j - 1); // move left
            visit(i, j + 1); // move right
        }
    }

    public static void main(String[] args) {
        int[][] maze = {{0, 0, 1, 1, 1}, {0, 1, 0, 0, 0}, {1, 1, 1, 1, 1}, {0, 0, 0, 0, 1}};
        System.out.println(hasPath(maze, 0, 0, 1, 2));
    }
}
We can use BFS with a path-length property: if any node is at distance 3 from the source, then the source and this node are critical nodes.
In case of A-C-B and A-D-E-B, A-D-E will be the critical path.
private int[] bfs(char s) {
    boolean[] visited = new boolean[26];
    int[] count = new int[26]; // count[v] = path length from the source to v
    Queue<Character> q = new LinkedList<Character>();
    q.offer(s);
    visited[s - 'A'] = true;
    count[s - 'A'] = 1;
    while (!q.isEmpty()) {
        char tmp = q.poll();
        for (char x : getAdjacentVertices(tmp)) {
            int index = x - 'A';
            if (!visited[index]) {
                q.offer(x);
                visited[index] = true;
            }
            count[index] = count[tmp - 'A'] + 1; // updated even if already visited
        }
    }
    return count;
}

public List<Character> findCriticalNodes() {
    int[] count = bfs(source);
    LinkedList<Character> nodes = new LinkedList<Character>();
    for (int x = 0; x < 26; x++) {
        if (count[x] == 3) {
            nodes.add((char) (x + 'A'));
        }
    }
    if (!nodes.isEmpty()) {
        nodes.addFirst(source);
    }
    return nodes;
}
O(1) solution: XOR the two numbers and count the set bits of the result - each set bit marks a bit that must be flipped.
int countSwaps(int x, int y) {
    int tmp = x ^ y; // the bits where x and y differ
    return numberOfSetBits(tmp);
}

int numberOfSetBits(int i) {
    // Classic parallel bit count (SWAR popcount).
    i = i - ((i >> 1) & 0x55555555);                // count bits per 2-bit group
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333); // sum into 4-bit groups
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24; // sum all bytes
}
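For example, countSwaps(10, 7): 10 is 1010 and 7 is 0111, so 10 ^ 7 = 1101, which has three set bits - three bit flips are needed.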
We can use a simple DFS approach ...
private static int count = 0;

private static void dfs(Node root) {
    dfs(root, null);
}

private static void dfs(Node root, Node parent) {
    if (root == null) return;
    if ((root.getLeft() != null && root.getLeft().getVal() == root.getVal())
            && (root.getRight() != null && root.getRight().getVal() == root.getVal())) {
        // Check whether we are extending an existing unival tree or found a new one
        if (parent == null) {
            count++;
        }
        parent = root;
    } else if (root.getLeft() != null && root.getRight() == null && root.getLeft().getVal() == root.getVal()) {
        // Found a tree with a single child - left child
        if (parent == null) {
            count++;
        }
        parent = root;
    } else if (root.getRight() != null && root.getLeft() == null && root.getRight().getVal() == root.getVal()) {
        // Found a tree with a single child - right child
        if (parent == null) {
            count++;
        }
        parent = root;
    } else {
        parent = null;
    }
    dfs(root.getLeft(), parent);
    dfs(root.getRight(), parent);
}

After the dfs(root) operation, count will be equal to the number of unival trees.
We can start from this, and we can improve it by adding more fields like Brand, Catalog, etc. ...
ShoppingCart
List<ItemOrder> items;
add(ItemOrder)
remove(ItemOrder)
applyCoupon(Coupon)
getTotal()

ItemOrder
Item item;
int quantity;
double price;
getPrice()
getQuantity()
changeQuantity()

enum ItemType (or Category)

Item
String name;
ItemType type;
double price;
String itemID;
getName()
getItemID()
getItemPrice()
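A minimal sketch of this outline; the Coupon interface with a discount(double) method is an assumption added for illustration:

import java.util.ArrayList;
import java.util.List;

class Item {
    String itemID;
    String name;
    double price;
}

class ItemOrder {
    Item item;
    int quantity;
    double getPrice() { return item.price * quantity; } // line total
}

interface Coupon {
    double discount(double total); // returns the discounted total (assumed API)
}

class ShoppingCart {
    private final List<ItemOrder> items = new ArrayList<>();
    private Coupon coupon;

    void add(ItemOrder order) { items.add(order); }
    void remove(ItemOrder order) { items.remove(order); }
    void applyCoupon(Coupon c) { coupon = c; }

    double getTotal() {
        double total = 0;
        for (ItemOrder o : items) total += o.getPrice();
        return coupon == null ? total : coupon.discount(total);
    }
}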
The HLD documentation presents the structure of the system, such as the database architecture, application architecture (layers), application flow (navigation), and technology architecture. The HLD uses non-technical to mildly technical terms.
Low Level Design (LLD) is like detailing the HLD. It defines the actual logic for each and every component of the system.
Short answer: if we want to maintain a single object of a class per JVM .. :)
Long answer: 1. If we have to keep some information in a single place, or don't want it spread everywhere in the application, then we prefer a singleton ... like objects for configurations, factories, etc.
2. A single object of a particular functionality (class) is enough ... like starters, initializers, etc.
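For instance, a minimal thread-safe singleton using the initialization-on-demand holder idiom (Config is an assumed example class):

public final class Config {
    private Config() { } // no outside instantiation

    private static final class Holder {
        static final Config INSTANCE = new Config(); // created lazily on first access
    }

    public static Config getInstance() {
        return Holder.INSTANCE;
    }
}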
Issue:
wait() will not work.
Root cause:
wait() (and notify()) work on resources/objects, not on threads. But in this piece of code you are calling wait() on this (the thread instance), not on the shared object/resource.
Solution:
In place of the static int variable, use an Object as a lock:

private static final Object lock = new Object();

wait() -> lock.wait();

synchronized (lock) {
    lock.wait(); // the lock is already held here, so wait (sleep) till someone wakes me up
                 // (in production code, call wait() inside a loop that re-checks the condition)
    try {
        /* Critical section - increment g */
    } finally {
        lock.notify();
    }
}
It does not depend on the stack, the operating system, or the JVM. It will be 256, because there are 256 possible values of a byte.
Of the 256 possible byte-long opcodes, 202 are currently in use, 51 are reserved for future use, and 3 are set aside as permanently unimplemented.
Yes, a trie is the data structure for the name (first name and last name), and it will require one HashTable for the number, for fast searching.
1. The trie will contain two roots - one for first names and one for last names - to provide search on both first and last name.
2. Each terminal trie node will contain the phone number, used to fetch the contact details (name, address, etc.) from the HashTable.
I am considering only 26 capital letters and 26 small letters for names, so each trie node can have up to 52 children.
Time complexity: O(L), where L = length of the key (first name or last name).
Disadvantage:
A trie is unbeatable in terms of time, but it is a space monster. So we can use a Ternary Search Tree (each node has at most 3 children) in place of the trie.
Time complexity: roughly O(L + log n) per lookup, where L = length of the key and n = number of keys.
If you want to provide prefix-based search capability on numbers too, then use one more trie for the numbers in place of the HashMap.
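A minimal sketch of one name trie; contactId stands in for the key into the HashTable of contact details (names here are assumptions):

import java.util.HashMap;
import java.util.Map;

class TrieNode {
    Map<Character, TrieNode> children = new HashMap<>();
    String contactId; // non-null only where a complete name ends

    void insert(String name, String id) {
        TrieNode node = this;
        for (char c : name.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new TrieNode());
        }
        node.contactId = id;
    }

    String search(String name) { // O(L), where L = name length
        TrieNode node = this;
        for (char c : name.toCharArray()) {
            node = node.children.get(c);
            if (node == null) return null;
        }
        return node.contactId;
    }
}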
We can do it using a simple character scan. I am building the result as a String (we could also use a StringBuilder to avoid the overhead of intermediate String objects) ...

public static String reverse(String input) {
    if (input == null) {
        return null;
    }
    String output = "";
    String tmp = String.valueOf(input.charAt(0));
    for (int i = 1; i < input.length(); i++) {
        if (input.charAt(i) == ' ') {
            output = output + tmp + ' ';
            tmp = "";
        } else {
            tmp = input.charAt(i) + tmp; // prepend: reverses the current word
        }
    }
    // Append the last word (the space separator was already added above)
    output = output + tmp;
    return output;
}
Space O(N), Time O(N)
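For example, reverse("abc def") returns "cba fed": each word's characters are reversed while the word order is preserved.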
@variksla
Yes, we can, but it will not make any difference.
@Psycho
For a missing index we will use HashMap's "linear probing" concept. Suppose we draw random index 5 but there is no element at index 5; then we continue a linear search from 5 towards the end of the array until we find an element.
Just like HashMap's technique to resolve a hash collision.
It will require a combination of a HashMap, a DoublyLinkedList and a random method.
1. Use a HashMap/table to store elements as key-value pairs (we can use the element itself as the key too).
Now all of these operations will be O(1):
1. void add(e)
2. void delete(e)
3. boolean contains(e)
2. We need a linked list to perform getMostRecent() in O(1), and I prefer a doubly linked list so that removing the last element is also O(1).
Each time you access an element through the map, move its reference to the end of the list. If you delete an element, find its reference through the map and remove it from the list too.
3. We will need a random method for getRandom():
//Class variable
int previousRandom = 0;

E getRandom() {
    // Select any two prime numbers as A and B (a linear congruential step)
    previousRandom = (A * previousRandom + B) % sizeOfMap;
    // Use previousRandom as a hash code to find an element in the map.
}
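If getMostRecent() were not required, a commonly used alternative (not the design above) pairs the HashMap with an ArrayList, so getRandom() is a uniform random pick and delete() is a swap-with-last:

import java.util.*;

class RandomizedSet<E> {
    private final Map<E, Integer> index = new HashMap<>(); // element -> list position
    private final List<E> items = new ArrayList<>();
    private final Random random = new Random();

    boolean add(E e) {
        if (index.containsKey(e)) return false;
        index.put(e, items.size());
        items.add(e);
        return true;
    }

    boolean delete(E e) {
        Integer i = index.remove(e);
        if (i == null) return false;
        E last = items.remove(items.size() - 1); // O(1) at the tail
        if (i < items.size()) {                  // e was not the last element
            items.set(i, last);                  // move the last element into the gap
            index.put(last, i);
        }
        return true;
    }

    E getRandom() {
        return items.get(random.nextInt(items.size()));
    }
}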
Just check whether it is a BST:
public boolean isBST() {
    return isBST(root, null, null);
}

private boolean isBST(Node x, Key min, Key max) {
    if (x == null) return true;
    if (min != null && x.key.compareTo(min) <= 0) return false;
    if (max != null && x.key.compareTo(max) >= 0) return false;
    return isBST(x.left, min, x.key) && isBST(x.right, x.key, max);
}
Yes, my bad - I forgot to assign it back to level1. Thanks for the correction:
Algorithm:
==========

Queue level1 = new Queue();
level1.add(rootNode);
Node lastNode = null;
while (!level1.isEmpty()) {
    Queue level2 = new Queue();
    while (!level1.isEmpty()) {
        Node currentParent = level1.dequeue();
        if (currentParent.hasLeftNode()) {
            Node leftChild = currentParent.leftNode();
            if (lastNode != null) {
                lastNode.setNext(leftChild);
            }
            lastNode = leftChild;
            level2.add(leftChild);
        }
        if (currentParent.hasRightNode()) {
            Node rightChild = currentParent.rightNode();
            if (lastNode != null) {
                lastNode.setNext(rightChild);
            }
            lastNode = rightChild;
            level2.add(rightChild);
        }
    }
    level1 = level2;
}

Space O(L) where L = number of nodes at one level, Time O(N)
Approach:
=========
We can implement it using BFS ...
1. Use a queue and add the root node to it.
2. Now iterate over the queue until it is empty.
Algorithm:
==========

Queue level1 = new Queue();
level1.add(rootNode);
Node lastNode = null;
while (!level1.isEmpty()) {
    Node currentParent = level1.dequeue();
    Queue level2 = new Queue();
    if (currentParent.hasLeftNode()) {
        Node leftChild = currentParent.leftNode();
        if (lastNode != null) {
            lastNode.setNext(leftChild);
        }
        lastNode = leftChild;
        level2.add(leftChild);
    }
    if (currentParent.hasRightNode()) {
        Node rightChild = currentParent.rightNode();
        if (lastNode != null) {
            lastNode.setNext(rightChild);
        }
        lastNode = rightChild;
        level2.add(rightChild);
    }
}
// Note: this is the original version; the corrected one above assigns level2 back to level1.
Can't we implement it just using the Dutch flag algorithm ..... we have only four grades:
1. Create four lists, one for each grade (see the sketch after these steps):
A for grade a
B for grade b
C for grade c
D for grade d
2. Scan the already-sorted list, check the grade of each student, and append the student to the corresponding linked list.
3. Finally concatenate the lists: A -> B -> C -> D
Time complexity: O(N)
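A minimal sketch of this bucketing, with a small Student class assumed for illustration:

import java.util.*;

class Student {
    String name;
    char grade; // 'A'..'D' assumed
    Student(String name, char grade) { this.name = name; this.grade = grade; }
}

static List<Student> sortByGrade(List<Student> students) {
    Map<Character, List<Student>> buckets = new LinkedHashMap<>();
    for (char g : new char[] {'A', 'B', 'C', 'D'}) {
        buckets.put(g, new ArrayList<>());
    }
    for (Student s : students) {
        buckets.get(s.grade).add(s); // O(1) per student
    }
    List<Student> result = new ArrayList<>();
    for (List<Student> bucket : buckets.values()) {
        result.addAll(bucket); // concatenate A -> B -> C -> D
    }
    return result;
}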
Algorithm:
• Run a BFS and colour all nodes in odd layers red, all others blue.
• Go through all edges in the adjacency list and check whether each of them has two different colours at its ends - if so, then G is bipartite, otherwise it is not.
You can implement it using BFS with an additional array colour[]: when we add a node v at layer L, set colour[v] = red if L is even, else colour[v] = blue.
At the end of the BFS, just scan all edges; if every edge has differently coloured endpoints, the graph is bipartite (a single same-coloured edge means it is not).
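A minimal sketch of the colour check, assuming the graph is an adjacency list over nodes 0..n-1:

import java.util.*;

static boolean isBipartite(List<List<Integer>> adj) {
    int n = adj.size();
    int[] colour = new int[n]; // 0 = uncoloured, 1 = red, 2 = blue
    Deque<Integer> queue = new ArrayDeque<>();
    for (int s = 0; s < n; s++) { // cover disconnected components
        if (colour[s] != 0) continue;
        colour[s] = 1;
        queue.add(s);
        while (!queue.isEmpty()) {
            int u = queue.poll();
            for (int v : adj.get(u)) {
                if (colour[v] == 0) {
                    colour[v] = 3 - colour[u]; // opposite colour of the parent
                    queue.add(v);
                } else if (colour[v] == colour[u]) {
                    return false; // same colour at both ends of an edge
                }
            }
        }
    }
    return true;
}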
If we say "huge file", then it can grow arbitrarily ... even the number of unique elements can grow beyond what fits in memory (whether we use a trie or a HashTable). So I think:
The obvious answer is of course to keep a hash map/trie and store a counter of the occurrences of elements as you move through the file. This is (in terms of time complexity) the optimal solution.
If however your space requirements are tight, you can perform an external sort on the file and then find the longest consecutive run of equal elements. This has a constant memory footprint and can be done in Θ(n log n).
Node insert(Node node, int x) {
    if (node == null) {
        Node head = new Node(x);
        head.next = head; // a single node points to itself
        return head;
    }
    Node nextP = node; // to move forward
    Node prevP = null; // to hold a pointer to the previous node
    do {
        prevP = nextP;
        nextP = nextP.next;
        if (x <= nextP.data && x >= prevP.data) {
            // x lies between the values of prevP and nextP - insert here
            break;
        }
        if ((prevP.data > nextP.data) && (x < nextP.data || x > prevP.data)) {
            // x is the largest (larger than all elements in the list)
            // or the smallest (smaller than all elements in the list)
            break;
        }
    } while (nextP != node);
    Node newNode = new Node(x);
    newNode.next = nextP;
    prevP.next = newNode;
    return node;
}
High Level Design (HLD) is the overall system design - covering the system architecture and database design. It describes the relations between the various modules and functions of the system. Data flow, flow charts and data structures are covered under HLD.
Low Level Design (LLD) is like detailing the HLD. It defines the actual logic for each and every component of the system. Class diagrams with all the methods and the relations between classes come under LLD. Program specs are covered under LLD.
This requires a stack data structure to validate the structure, because with a plain count we would not differentiate between "{123*}23 }{" and "{123*{23 }}" - the first string is invalid.
Algorithm (see the sketch after the note):
1. Use a stack implemented with a linked list.
2. If "{" comes, push it onto the stack.
3. If "}" comes, pop from the stack (if the stack is empty, the file is invalid).
4. At end of file: if the stack is empty, it is a valid file.
Note: If there is any chance that the stack can run out of memory (too many pushes), we can implement this stack using a BitVector (to reduce the space).
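A minimal sketch for '{' and '}' only:

import java.util.ArrayDeque;
import java.util.Deque;

static boolean isValid(String text) {
    Deque<Character> stack = new ArrayDeque<>();
    for (char c : text.toCharArray()) {
        if (c == '{') {
            stack.push(c);
        } else if (c == '}') {
            if (stack.isEmpty()) return false; // a '}' with no matching '{'
            stack.pop();
        }
    }
    return stack.isEmpty(); // valid only if every '{' was closed
}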
We can achieve it using extra space O(P),
where P = size of the path between A and B.
Algorithm:
1. Node a = A;
2. Node b = B;
3. Map<Node, Node> path = new LinkedHashMap<Node, Node>(); // LinkedHashMap keeps insertion order
4. while (a.p != null && b.p != null && a != b) {
       if (!path.containsKey(a)) {
           path.put(a, a.p);
           a = a.p;
       } else if (!path.containsKey(b.p)) {
           path.put(b.p, b);
           b = b.p;
       } else {
           return path;
       }
   }
5. return path;
Just store these numbers in arrays (digits stored most-significant first) and add the two numbers by performing addition cell by cell ... like:

int[] add(int[] A, int[] B) {
    int m = Math.max(A.length, B.length);
    int[] result = new int[m + 1]; // one extra cell for a possible final carry
    int carry = 0;
    // Iterate both arrays from the last element to the first
    for (int k = 0; k < m; k++) {
        int a = k < A.length ? A[A.length - 1 - k] : 0;
        int b = k < B.length ? B[B.length - 1 - k] : 0;
        int x = a + b + carry;
        carry = x / 10;
        result[m - k] = x % 10;
    }
    result[0] = carry; // may be 0 (leading cell unused)
    return result;
}
class VendingMachine
- boolean takeMoney(Money money);
// Here item will contain type & quantity, no smart search required -
// say the user needs half a litre of Coke.
- Item pickItem(Item item);
- boolean isAvailable(Item item);
- Double askForAmount(Input input);
- Double validateAndReturnBalance(Money money);

class Money
- CurrencyType type;
- Double amount;

class MoneyValidator
- boolean validate(Money amount);

class Item
- ItemType type;
- Double quantity;
- String barcode; // may be anything else to identify it

class Input
- CommandType type;
- Double quantity;
- ItemType itemType;

class Display
- void updateDisplay(String instructions);

class KeyPad
- Input readUserInput(Key key);

class Key
- KeyType type;
- Double value;

enum CurrencyType
enum ItemType
enum KeyType
enum CommandType
A text file can be too large to fit in RAM, so I prefer a Map-Reduce approach:
1. Split the text file into small chunks that fit in RAM.
2. Within each chunk check every word for being a palindrome and update its frequency in a HashTable (see the sketch after these steps). Then write the table to secondary storage.
3. Collect the result tables for all chunks and combine them.
4. To combine the results we can use an "external sort" kind of approach.
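A minimal sketch of step 2 for one in-memory chunk (helper names are assumptions):

import java.util.HashMap;
import java.util.Map;

static Map<String, Integer> countPalindromes(String[] words) {
    Map<String, Integer> frequency = new HashMap<>();
    for (String w : words) {
        if (isPalindrome(w)) frequency.merge(w, 1, Integer::sum);
    }
    return frequency;
}

static boolean isPalindrome(String w) {
    for (int i = 0, j = w.length() - 1; i < j; i++, j--) {
        if (w.charAt(i) != w.charAt(j)) return false;
    }
    return true;
}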
I think the trick here is about reading the file. If we read it in small pieces, we can restrict the space complexity to less than O(N) - only O(k) should be required, where k is the chunk size.
But we can't reduce the time complexity, because in the worst case we need to iterate over the whole file.