oOZz
1. Compute the total sum of the array.
2. For each element of the array:
2.a. Keep contSum, the running sum of the elements before the current index.
2.b. If (sum - A[i] - contSum) == contSum, return the index. (This is the point where the left sum and the right sum are equal.)
3. Return -1 if no equilibrium index is found.
public static int equilibriumIndex(int[] A) {
    int sum = 0;
    for (int i = 0; i < A.length; i++)
        sum += A[i];
    int contSum = 0;
    for (int i = 0; i < A.length; i++) {
        if ((sum - A[i] - contSum) == contSum)
            return i; // left sum equals right sum
        contSum += A[i];
    }
    return -1; // not found
}
The key point is to keep a running sum of the contiguous subarray and reset it to zero whenever it drops below zero; otherwise keep adding. Here is the O(n) solution:
public static int maxContiguousSub(int[] A) {
    int maxSub = 0; // running sum, reset to 0 when it goes negative
    int maxSum = 0; // best sum seen so far
    for (int i = 0; i < A.length; i++) {
        maxSub = Math.max(maxSub + A[i], 0);
        maxSum = Math.max(maxSub, maxSum);
    }
    return maxSum;
}
@Apostle Yes, you have a good point. The question asks for the "best" data structure, and that's vague; I gave a solution based on time complexity.
I don't see any problem with synchronization; it can be handled easily. The data you're talking about is a very small amount: there are only about 3,200 cities, 812 states, and 196 countries in the world.
@Apostle The solution explained and the code written have nothing to do with DP, but it's a working solution, I'll give you that. The problem is not asking for the "best pairs", and I see that you've added gMin, gMax, etc.
I am just pointing out that this solution does not require DP and can be implemented in a simple way with fewer lines of code; to me that's valuable, and I am sure it is to the interviewer too.
There is a very simple O(n) solution. The maximum benefit is the case where you buy at the lowest price and sell at the highest, with the condition that the buy has to come before the sell.
public static int getMaxBenefit(int[] S) {
    int maxBenefit = 0;
    int minPrice = Integer.MAX_VALUE;
    for (int i = 0; i < S.length; i++) {
        maxBenefit = Math.max(maxBenefit, S[i] - minPrice);
        minPrice = Math.min(S[i], minPrice);
    }
    return maxBenefit;
}
1 and 2 can be done in O(1) time using hash tables:
Map<String, ArrayList<String>> countries_ht = new HashMap<>();
where the key is the country name and the value is the list of its state names, and
Map<String, ArrayList<String>> states_ht = new HashMap<>();
where the key is the state name and the value is the list of its city names. Then:
1) Find the list of states for a country:
List<String> states = countries_ht.get("USA");
2) Find the list of cities for a state:
List<String> cities = states_ht.get("California");
For the third query you also need reverse lookup tables, again using hash tables:
Map<String, String> city_to_state_rlt = new HashMap<>();
where the key is the city name and the value is the state the city is in, and
Map<String, String> state_to_country_rlt = new HashMap<>();
where the key is the state name and the value is the country the state is in.
3) Find the name of the country and state for a city:
String state = city_to_state_rlt.get("Los Angeles"); // returns "California"
String country = state_to_country_rlt.get(state);    // returns "USA"
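Putting the four maps together, a minimal runnable sketch could look like this; the sample cities and the addCity helper are hypothetical, just to illustrate the three lookups:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GeoLookup {
    static Map<String, List<String>> countries_ht = new HashMap<>();
    static Map<String, List<String>> states_ht = new HashMap<>();
    static Map<String, String> city_to_state_rlt = new HashMap<>();
    static Map<String, String> state_to_country_rlt = new HashMap<>();

    // Hypothetical helper that keeps all four tables consistent.
    static void addCity(String country, String state, String city) {
        List<String> states = countries_ht.computeIfAbsent(country, k -> new ArrayList<>());
        if (!states.contains(state))
            states.add(state);
        states_ht.computeIfAbsent(state, k -> new ArrayList<>()).add(city);
        city_to_state_rlt.put(city, state);
        state_to_country_rlt.put(state, country);
    }

    public static void main(String[] args) {
        addCity("USA", "California", "Los Angeles");
        addCity("USA", "California", "San Francisco");
        addCity("USA", "Texas", "Austin");

        System.out.println(countries_ht.get("USA"));              // states of USA
        System.out.println(states_ht.get("California"));          // cities of California
        System.out.println(city_to_state_rlt.get("Los Angeles")); // state of a city
        System.out.println(state_to_country_rlt.get("California")); // country of a state
    }
}
```

Each lookup is a single HashMap get, so all three queries run in O(1) expected time.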
Realistically, though, this sounds like a database entity-relationship question, where you have entities for country, state, and city and query those tables. Most databases are implemented using B-trees, a generalization of a binary search tree in which a node can have more than two children. You index the names of the countries, states, and cities so the given queries run faster.
- oOZz July 25, 2013
I can think of a brute-force solution, which is O(NM), where N is the length of the text and M is the length of the pattern. I am also assuming from the question that the only wildcard character in the pattern is '*'.
The idea is to use recursion to try the possible matches when there is a wildcard character, and to backtrack if there isn't a match.
#include <stdbool.h>

bool match_pattern(const char *text, const char *pattern) {
    if (*pattern == 0)
        return *text == 0;
    if (*pattern == '*') /* '*' matches zero or more characters */
        return match_pattern(text, pattern + 1) ||
               (*text != 0 && match_pattern(text + 1, pattern));
    return *text == *pattern && match_pattern(text + 1, pattern + 1);
}
People are missing the point that this is modular exponentiation, and you got it right, my friend: +1. Your program works fine, but I suggest you use the code formatter next time; I think that's why people don't understand the code.
This is a nice implementation from the book Applied Cryptography by Bruce Schneier.
Here is the implementation, courtesy of GeeksForGeeks:
void power(int F[2][2], int n);
void multiply(int F[2][2], int M[2][2]);

/* function that returns the nth Fibonacci number */
int fib(int n)
{
    int F[2][2] = {{1, 1}, {1, 0}};
    if (n == 0)
        return 0;
    power(F, n - 1);
    return F[0][0];
}

/* Optimized version of power() in method 4 */
void power(int F[2][2], int n)
{
    if (n == 0 || n == 1)
        return;
    int M[2][2] = {{1, 1}, {1, 0}};
    power(F, n / 2);
    multiply(F, F);     /* F = F^2 */
    if (n % 2 != 0)
        multiply(F, M); /* extra factor of M when n is odd */
}

void multiply(int F[2][2], int M[2][2])
{
    int x = F[0][0] * M[0][0] + F[0][1] * M[1][0];
    int y = F[0][0] * M[0][1] + F[0][1] * M[1][1];
    int z = F[1][0] * M[0][0] + F[1][1] * M[1][0];
    int w = F[1][0] * M[0][1] + F[1][1] * M[1][1];
    F[0][0] = x;
    F[0][1] = y;
    F[1][0] = z;
    F[1][1] = w;
}
This is a good solution, considering that the recursive solution has exponential time complexity and DP with memoization has O(n) space complexity. Though you could have written it better, with fewer lines of code.
There is also another solution using matrix exponentiation and exponentiation by squaring, which has O(log n) time complexity. You basically take the nth power of the matrix {{1,1},{1,0}}, and the resulting matrix is {{F(n+1), F(n)}, {F(n), F(n-1)}}, where F(n) is the nth Fibonacci number.
+1. This is a classic problem invented by G.J.E. Rawlins. The key point is that the solution is a "randomized" quicksort, where the nuts and bolts are picked randomly, and therefore the expected running time is Big-Theta(n log n). The randomness in the solution is important; I think that's why @D,tox'mi's points matter. Otherwise, just like quicksort, this solution is O(n^2) in the worst case, so +1 for @D,tox'mi's comment too.
- oOZz July 14, 2013
I can't think of a better way of measuring the running time than running the program multiple times. You need to pick a big N and then also measure for 2N; don't bother with sizes like 1 or N^2. Run the test many times for both N and 2N and then pick the best-case running time; if you run the program often enough, you'll see the best running times more and more during the benchmark. Then you can verify the complexity by comparing the best running times for N and 2N, e.g.,
if T(2N) ~= T(N) => O(1)
if T(2N) ~= 2 T(N) => O(n)
The other alternatives I can think of:
1. Look at the source, if it is an open-source library.
2. For binary versions, profilers may help, if the library is built with profile/debug flags enabled.
3. In Java, you can decompile the binary (byte code) back to source, then analyze the source.
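The doubling measurement described above can be sketched roughly as follows; the workload (summing integers) and the sizes are hypothetical placeholders for the library function you actually want to measure:

```java
public class DoublingBenchmark {
    static long sink; // keeps the JIT from eliminating the workload

    // Stand-in for the function under test; replace with the real call.
    static long workload(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    // Run the workload many times and keep the best (minimum) running time.
    static long bestTimeNanos(int n, int runs) {
        long best = Long.MAX_VALUE;
        for (int r = 0; r < runs; r++) {
            long t0 = System.nanoTime();
            sink += workload(n);
            long elapsed = System.nanoTime() - t0;
            if (elapsed < best) best = elapsed;
        }
        return best;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        double ratio = (double) bestTimeNanos(2 * n, 50) / bestTimeNanos(n, 50);
        // ratio near 1 suggests O(1), near 2 suggests O(n), near 4 suggests O(n^2)
        System.out.println("T(2N)/T(N) = " + ratio);
    }
}
```

Taking the minimum over many runs filters out scheduler and GC noise, which is why the best case (not the average) is compared.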
This question has been asked before, so if you search you can find the implementations as well. You can do this in two different ways.
Method 1
1. Sort the number array A => A'
2. Keep two pointers, start and end: start points to the first element and end points to the last element.
3. While start < end, compute sum = A'[start] + A'[end]
3.a. if sum == k, return true
3.b. if sum < k, start++
3.c. if sum > k, end--
4. Return false at the end (no match found at this point)
This is O(n log n) time due to sorting, and O(1) space.
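The steps of Method 1 can be sketched as follows (method and class names are my own):

```java
import java.util.Arrays;

public class PairSumSorted {
    public static boolean hasPairWithSum(int[] A, int k) {
        int[] sorted = A.clone();
        Arrays.sort(sorted);
        int start = 0, end = sorted.length - 1;
        while (start < end) {
            int sum = sorted[start] + sorted[end];
            if (sum == k) return true; // match found
            if (sum < k) start++;      // need a bigger sum
            else end--;                // need a smaller sum
        }
        return false;                  // no match found
    }
}
```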
Method 2
1. First pass: put all the elements in a hashtable, where the key is the number and the value is the frequency of the number in the array.
2. Second pass: calculate diff = k - A[i] and check if diff is a key in the hashtable.
2.a. if diff is in the hashtable
2.a.a. special case (diff == A[i]): if the frequency is >= 2, return true, else continue
2.a.b. otherwise return true (diff exists in the hashtable and diff != A[i])
3. Return false at the end (no match found at this point)
O(n) time, and O(n) space.
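A sketch of Method 2 (names are my own), including the frequency check for the diff == A[i] special case:

```java
import java.util.HashMap;
import java.util.Map;

public class PairSumHash {
    public static boolean hasPairWithSum(int[] A, int k) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int x : A)
            freq.merge(x, 1, Integer::sum);   // key: number, value: frequency
        for (int x : A) {
            int diff = k - x;
            Integer count = freq.get(diff);
            if (count == null) continue;
            if (diff == x) {
                if (count >= 2) return true;  // need the same value twice
            } else {
                return true;                  // diff exists and diff != x
            }
        }
        return false;                         // no match found
    }
}
```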
I've also answered this in the other post. Basically, the solution depends on the problem. If the problem is the 0-1 knapsack problem, where items are not divisible (you either take an item or not), then you can ONLY use DP. However, the fractional knapsack problem (where items are divisible) can be solved BOTH with the greedy algorithm and with DP.
- oOZz July 09, 2013
The greedy algorithm DOES NOT work for the 0-1 knapsack problem, where items are not divisible; you either take an item or not. However, the fractional knapsack problem (where items are divisible) can be solved with the greedy algorithm.
DP can solve both the 0-1 and the fractional knapsack problems.
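For the fractional case, the greedy algorithm mentioned above can be sketched like this: sort items by value/weight ratio and take as much as fits of the best remaining item. The item data in the test is a standard textbook example, not from the original post:

```java
import java.util.Arrays;

public class FractionalKnapsack {
    public static double maxValue(double[] values, double[] weights, double capacity) {
        Integer[] idx = new Integer[values.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort item indices by value-per-weight ratio, descending.
        Arrays.sort(idx, (a, b) ->
            Double.compare(values[b] / weights[b], values[a] / weights[a]));
        double total = 0;
        for (int i : idx) {
            if (capacity <= 0) break;
            double take = Math.min(weights[i], capacity); // take a fraction if needed
            total += values[i] * (take / weights[i]);
            capacity -= take;
        }
        return total;
    }
}
```

This greedy choice is optimal only because items are divisible; the same strategy fails for 0-1 knapsack.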
This is a coding competition question: w w w.hackerrank.com/contests/july13/challenges/game-of-throne-ii
I will make the same argument that you did. Imagine the string is 200 characters long; then 100! will overflow even a 64-bit unsigned long/integer. The question in the contest says the length of the string is 1 <= length of string <= 10^5, so imagine a string 100,000 characters long and calculating 50000!.
I am sure there must be a better way of doing this.
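To illustrate the overflow point: factorials this size need arbitrary-precision arithmetic, which Java's BigInteger provides (at a real performance cost). A quick sketch:

```java
import java.math.BigInteger;

public class BigFactorial {
    public static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++)
            result = result.multiply(BigInteger.valueOf(i));
        return result;
    }

    public static void main(String[] args) {
        // 100! has 158 decimal digits -- far beyond any 64-bit integer.
        System.out.println(factorial(100).toString().length());
    }
}
```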
The code you posted provides absolutely no improvement over my solution. Starting your loop from 0 or 1 won't change the complexity; it's still O(n). Besides, you've added a third loop to print the values, whereas you could have done that in the second loop.
The main point of this question is the BigInteger part and the implementation of multiplication and division for the BigInteger. The code you've provided does not address the overflow condition, and therefore it's incomplete and incorrect.
1. First pass: calculate the product P of all the numbers in array A
2. Second pass: recreate the array as A[i] = P / A[i]
As the interviewer has indicated, the product can be very big if the numbers in the array are big and/or the array is long. Some languages, like Python, support big-integer arithmetic natively, and Java has a BigInteger class. If using the implementation provided by the programming language is not an option, then you'll need to implement your own "BigInteger" class: you only need a constructor, which takes the number and converts it to a string, plus two methods for multiplication and division. A C++ version would probably overload the multiplication (*) and division (/) operators.
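The two passes above, sketched with Java's BigInteger so the running product cannot overflow (assuming the array contains no zeros):

```java
import java.math.BigInteger;

public class ProductExceptSelf {
    public static BigInteger[] compute(int[] A) {
        BigInteger product = BigInteger.ONE;
        for (int x : A)                                    // pass 1: total product P
            product = product.multiply(BigInteger.valueOf(x));
        BigInteger[] result = new BigInteger[A.length];
        for (int i = 0; i < A.length; i++)                 // pass 2: P / A[i]
            result[i] = product.divide(BigInteger.valueOf(A[i]));
        return result;
    }
}
```

If zeros can appear, the division approach needs a special case (count the zeros); the sketch keeps to the two-pass scheme as stated.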
@Dumbo Good point. Though the known problems with DCL are not due to the double-checked locking pattern itself, but due to the programming language used to implement it. For example, for the non-static singleton instance version, like you've suggested, Java fixed this issue by using volatile for the singleton instance (note that this requires Java 5 and up). You don't have to use a volatile instance; for the static version, a final static wrapper (holder) object can also be used in Java.
- oOZz July 03, 2013
There is another way to find the LIS, similar to patience sort, where you keep piles (stacks) holding the increasing sequences. Here is a link with an explanation of the algorithm: wordaligned.org/articles/patience-sort.
By using this method, you can not only print the longest increasing subsequence, but also print all the longest subsequences if there is more than one of maximum length.
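A sketch of the patience-sort idea for the LIS length (to recover the actual subsequences you would also keep back-pointers, omitted here): each value goes on the leftmost pile whose top is at least as large, or starts a new pile, and the number of piles equals the LIS length.

```java
import java.util.ArrayList;
import java.util.List;

public class PatienceLIS {
    public static int lisLength(int[] A) {
        List<Integer> tops = new ArrayList<>();   // top card of each pile
        for (int x : A) {
            int lo = 0, hi = tops.size();         // binary search for the pile
            while (lo < hi) {
                int mid = (lo + hi) / 2;
                if (tops.get(mid) < x) lo = mid + 1;
                else hi = mid;
            }
            if (lo == tops.size()) tops.add(x);   // start a new pile
            else tops.set(lo, x);                 // place on an existing pile
        }
        return tops.size();
    }
}
```

The binary search over pile tops is what makes this O(n log n) rather than the O(n^2) DP.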
Did you actually run this code yourself? There are a few bugs in your program:
1. Your recursive function does not return. You need to return after you print the word.
2. The function doesn't print all the combinations, specifically case 10, so it only prints 3 words. Check out @nescent's comment; you'll need to add that line, otherwise you will miss one of the combinations.
Also, a friendly suggestion: the code snippets that you're posting are not readable. This is not a coding competition, so you should use better variable names and more comments in the code, not to mention formatting.
If you keep posting code like this, people won't understand and learn. If they don't understand, they will also downvote your solution, so all your effort will be wasted.
Problem: You have to synchronize the getInstance() method, because while one thread is creating the singleton instance for the first time (just before new is called to create the instance), another thread might also be executing getInstance() and checking whether the instance has already been created. Since the instance has not yet been created by the first thread, the second thread will see that the instance is null and try to create it again.
Solution: You could synchronize the whole getInstance() method, but that is expensive. There is a better approach called "double-checked locking", where you only synchronize the part where the instance is created. This is less expensive than synchronizing the whole method, but you must check the instance again for null inside the synchronized block before creating it.
It's not clear from the question, but if this is the majority-find problem, then you can use Moore's Voting Algorithm, which is O(n) time and O(1) space. (see w w w.geeksforgeeks.org/majority-element/)
- oOZz July 25, 2013
If the problem is finding the most frequent element in the array, then the given hashtable solutions will work.
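Moore's Voting Algorithm mentioned above can be sketched as follows: one pass to pair off differing elements and find a candidate, a second pass to verify it really is the majority.

```java
public class MajorityElement {
    // Returns the majority element, or null if no element appears > n/2 times.
    public static Integer findMajority(int[] A) {
        int candidate = 0, count = 0;
        for (int x : A) {                 // pass 1: find a candidate
            if (count == 0) { candidate = x; count = 1; }
            else if (x == candidate) count++;
            else count--;
        }
        int occurrences = 0;              // pass 2: verify the candidate
        for (int x : A)
            if (x == candidate) occurrences++;
        return occurrences > A.length / 2 ? candidate : null;
    }
}
```

The verification pass is essential: the first pass always produces some candidate, even when no majority exists.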