MehrdadAP
@blckembr, Yes, absolutely it is :D
The reason is that in a Fenwick tree, update and read are O(logN). Also, as I explained in this problem, for each update query we just need to do two updates on the Fenwick tree (one at each end of the range) and one read for each isOn query. So in total it's O(logN) per query.
I assumed the range is 1-based to simplify the implementation. So for toggle(1,N) you just need to update positions 1 and N+1 in the array.
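A minimal sketch of that range-toggle idea (the struct and names are mine, not from the original code): toggle(l,r) does the two point updates described above, and isOn(i) reads one prefix sum, whose parity tells whether lamp i is on.

```cpp
#include <vector>
using namespace std;

struct Fenwick {
    int n;
    vector<int> t;
    Fenwick(int n): n(n), t(n+2,0) {}
    void update(int i,int v){ for (; i<=n+1; i+=i&-i) t[i]+=v; }              // point update
    int  read(int i) const { int s=0; for (; i>0; i-=i&-i) s+=t[i]; return s; } // prefix sum
    void toggle(int l,int r){ update(l,1); update(r+1,1); }                   // two updates per toggle
    bool isOn(int i) const { return read(i)%2==1; }                           // parity = on/off
};
```

Both operations walk O(logN) tree nodes, matching the per-query cost above.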
@plutoman, Thanks.
There is another form of the Master Theorem (known on Wikipedia as the generic form, and sometimes called the special case) which covers the case I used.
(See the Master Theorem page on Wikipedia for more explanation.)
Also, you could compute it without the Master Theorem, using a recursion tree. It's the same as merge sort.
@Larry: it's guaranteed because, as I said, when we are at position i, only the prefix sums before i are in the BST.
@myzn: I came up with another solution, and I tried to explain it more coherently. Part of the new solution is the same as the solution above. I hope it helps.
@Jar: Because the right answer is 4 :D
[4] [4,100,-100] [4,100,-100,3] [3]
Let's do it step by step for more clarification:
0: S_0=4 --> ans=1
1: X_0=4 --> pos1=pos2=null
2: X_0=4, X_1=104 --> pos1=pos2=null
3: X_0=4, X_1=4, X_2=104 --> S_3=7, so ans++ and pos1=0, pos2=1, so ans+=2
total ans = 4.
Probably in the case of equal numbers we need to define pos1 and pos2 more precisely, but the whole idea works.
I managed to solve it with an easier implementation in O(nlogn), thanks to @emb's hint.
The idea is based on the divide and conquer technique (almost the same as what we do in merge sort).
Let's call the prefix sums S_i = arr[0]+arr[1]+...+arr[i].
Basically, I need to count all pairs with A <= (S_i - S_j) <= B and i>j, plus all A <= S_i <= B.
Now, let's think about it with the divide and conquer (D&C) technique.
That means we split the prefix sums array S into two equal parts; also, assume each part is sorted individually.
Based on D&C, we can say we know the answer for the left part and the right part. We just need to count all S_i in the right part and all S_j in the left part which satisfy A <= S_i - S_j <= B (note that since all the S_i are in the right part, they already satisfy i>j).
We can find them with two approaches.
Approach 1)
For every i in right part, we can do two binary searches on left part array (which is sorted) and find a range which all of them satisfy the condition.(for more explanation, you could see my first solution based on BST)
In this approach the time complexity can be computed as following recurrence:
T(n) = 2T(n/2) + O(nlogn) --> Using Master theorem: T(n) = O(n*logn^2)
Approach 2)
In approach 1, we didn't take advantage of the fact that the numbers in the right part are also sorted.
So if for S_i we have p1 and p2 as a range such that all numbers in [p1,p2] satisfy the condition, then to find the range for S_{i+1} we just need to move p1 and p2 forward. (It takes a little playing with the numbers to convince yourself this is true!) So in this case we visit every number of the left part twice (once for moving p1 and once for moving p2).
So the recurrence would be:
T(n) = 2T(n/2) + O(n) --> Using the Master Theorem: T(n) = O(nlogn)
Very simple and cool question. I'm always excited about D&C solutions/problems :D
Here's my C++ implementation:
int solve(int s,int e,int a,int b,vector<int> &arr)
{
    if (e<s) return 0;
    if (s==e) return (a<=arr[s] && arr[s]<=b);
    int mid=(e+s)>>1;
    int ans = solve(s,mid,a,b,arr) + solve(mid+1,e,a,b,arr);
    int p1=s,p2=s;
    for (int i=mid+1;i<=e;i++){
        while (p1<=mid && (arr[i]-arr[p1])>=a)
            p1++;
        while (p2<=mid && (arr[i]-arr[p2])>b)
            p2++;
        // now for all k>=p2 we have arr[i]-arr[k]<=b
        // and for all k<p1 we have arr[i]-arr[k]>=a
        if (p2<=mid && p2<=p1)
            ans+=(p1-p2);
    }
    // merge step: sort the subarray arr[s..e]
    vector<int> tmp;
    p1=s,p2=mid+1;
    while (p1<=mid && p2<=e){
        if (arr[p1]<=arr[p2]){
            tmp.push_back(arr[p1]);
            p1++;
        }
        else{
            tmp.push_back(arr[p2]);
            p2++;
        }
    }
    while (p1<=mid){
        tmp.push_back(arr[p1]);
        p1++;
    }
    while (p2<=e){
        tmp.push_back(arr[p2]);
        p2++;
    }
    for (int i=0;i<tmp.size();i++){
        arr[i+s]=tmp[i];
    }
    return ans;
}

int countPartialSums(vector<int> &nums,int a,int b)
{
    int N=nums.size();
    // turn nums into its prefix sums in place
    for (int i=1;i<N;i++){
        nums[i]=nums[i-1]+nums[i];
    }
    return solve(0,N-1,a,b,nums);
}
@emb, that's not my approach. The x_i are sorted at each step.
It was my bad on the definition of x_k.
Let me explain more with your example.
x_0 is not between [a,b] --> no answer
at position 1 we have: x_0 = 1 --> no answer
at position 2 we have: x_0 = -1, x_1 = 1 --> no answer
So answer =0.
As another example:
count([-2,5,-1],-2,2)
position 0: -2<=-2<=2 --> ans+=1
position 1: x_0=-2 --> ans+=0
position 2: x_0=-2, x_1=3 --> (S_2=2, so S_2 itself and S_2-x_1=-1 are within [-2,2]) ans+=2
finally: ans = 3
It can be solved in O(nlogn) using cumulative sums and a balanced binary search tree.
Start iterating from left to right. When we are at position i, we need to have the cumulative sums of elements 0..(i-1) in a balanced binary search tree.
For simplicity, assume we have the cumulative (prefix) sums in a sorted array:
x_0, x_1, x_2, ..., x_(i-1)
where x_k is the (k+1)-th smallest prefix sum seen so far (note that X is a sorted array of prefix sums).
(More clarification on X:
Let's say we have the prefix sums S_i = arr[0]+arr[1]+...+arr[i].
Now X is the sorted version of S. Therefore X_0 is the smallest S_j, X_1 is the second smallest, and so on.)
Now, to find all possible answers for position i, we need to find all x_j such that a <= S_i - x_j <= b, i.e. S_i - b <= x_j <= S_i - a.
Since the array is sorted, we can do binary searches on (S_i - b) and (S_i - a):
pos1 = position of the smallest number which is greater than or equal to (S_i - b).
pos2 = position of the greatest number which is smaller than or equal to (S_i - a).
Now all numbers in the range [pos1,pos2] satisfy a <= S_i - x_j <= b, so we can add (pos2-pos1+1) to the final answer.
We can't use a set container (in C++) instead of a binary search tree, since we need the position (rank) of a given number in the set, which C++'s set doesn't provide. So basically we need to write our own set (a red-black BST) and keep at each node the number of nodes in its subtree.
The time complexity is O(nlogn), since we do a binary search (and one insertion) for each position.
Here's my O(nlogn) solution. Basically it's the same as what @zd said.
More explanation:
Think of the time intervals as a bunch of segments (with start and end points). Now we want to find a point which has the most weighted (memory usage) intersection with the other segments.
For sure the answer is at one of the start points.
We can add all start and end points to a vector (with their type (start or end) and their original index in segments) and sort it.
Now we sweep from left to right, keeping track of the sum of the memory usages of all segments that currently intersect. At each end point we just subtract the memory usage of that segment, and at each start point we add the new memory usage and check whether we have a new maximum.
The tricky point is that in case of equal dates (times), start points should be processed before end points.
Here's my implementation in C++:
int findBusyHour(vector< pair<pair<int,int>,int> > times)
{
    int N=times.size();
    vector< pair<int, pair<int,int> > > points; // (time,(isEndPoint,originalIndex))
    for (int i=0;i<N;i++){
        points.push_back({times[i].first.first,{0,i}});
        points.push_back({times[i].first.second,{1,i}});
    }
    sort(points.begin(),points.end());
    int cur=0,maxi=0;
    int ans=0;
    for (int i=0;i<2*N;i++){
        int ind=points[i].second.second;
        int isEnd=points[i].second.first;
        int h=points[i].first;
        if (isEnd){ // end point
            cur-=times[ind].second;
        }
        else{
            cur+=times[ind].second;
            if (cur>maxi){
                maxi=cur;
                ans=h;
            }
        }
    }
    return ans;
}
UPD: I found a way to optimize my code to O(N). The main idea is the same. I added the updated part after my old explanation.
Here's my O(N^2) idea, which is absolutely correct :D (proved, and tested with many random test cases).
Let's say we have string S.
I define next[i] = the next position (after i) whose character equals S[i], or -1 if there is none.
So if S="bacdbcb", next={4,-1,5,-1,6,-1,-1}
Now, I iterate through S from left to right, and at each position I decide whether to pick the current character or not. There is also an array called visited, where visited[c]=true means character c has already been picked.
Greedy part:
If we are at position i (let's say x=S[i]), we need to find the first j>i with next[j]=-1 and visited[S[j]]=false; call it y=S[j]. I also need to know the smallest unvisited character between i and j; call it z. If z is not smaller than x, then add x to the answer and set visited[x]=true; otherwise skip x and continue to i+1.
Here's the reason. We have something like this:
....x .... z ... y ...
It means y is definitely part of the answer string (since its next is -1). So if there is nothing smaller than x between x and y (including y), it is better to choose x now, to get a lexicographically smaller answer. (Note that all the characters between x and y have another copy after y, since y is the first position which has no next, so it's safe to postpone picking them if needed.) On the other hand, if z<x, it is better to skip x and pick another copy of x later.
Because at each position i we may need to scan, in the worst case, all of the following characters, the time complexity is O(N^2).
I couldn't find a way to find y in O(logn) or O(1); if anyone has a suggestion, please let me know.
Also, I really enjoyed the problem; it took me a day to come up with this solution!
string removeRepeated(string s)
{
    int N=s.size();
    vector<int> next(N);
    int lastPos[26];
    for (int i=0;i<26;i++)
        lastPos[i]=-1;
    for (int i=N-1;i>=0;i--){
        int ind=s[i]-'a';
        next[i]=lastPos[ind];
        lastPos[ind]=i;
    }
    vector<bool> visited(26,false);
    string ans;
    for (int i=0;i<N;i++){
        int cur=s[i]-'a';
        if (visited[cur]) continue;
        if (next[i]==-1){
            ans+=s[i];
            visited[cur]=true;
        }
        else{
            int j=i+1;
            char smaller=s[i];
            while (j<N){
                if (visited[s[j]-'a']==false){
                    smaller=min(smaller,s[j]);
                    if (next[j]==-1) break;
                }
                j++;
            }
            if (s[i]<=smaller){
                ans+=s[i];
                visited[cur]=true;
            }
        }
    }
    return ans;
}
UPD: As I said, it seemed possible to find y without visiting all the following characters. Using a stack per character, I optimized the solution to O(26*N) = O(N).
For each character, I push all of its positions in the string onto a stack (iterating S from right to left).
I also keep the last position of each character in a lastPos array.
After processing the character at position i, I pop its position from its stack. So while I'm at position i, every element of every stack is at position i or later (all positions before i have already been popped from their stacks).
Now I can find y and z much faster:
position of y = min(lastPos[k]) over all characters k that are not visited yet.
Is there a z<x? Yes, iff there is a character smaller than x whose stack top is at most the position of y.
So the time complexity is O(26*N) = O(N).
Here's the code:
string removeRepeatedLinear(string s)
{
    int N=s.size();
    stack<int> pos[26]; // positions of each character
    int lastPos[26];
    for (int i=0;i<26;i++)
        lastPos[i]=-1;
    for (int i=N-1;i>=0;i--){
        int cur=s[i]-'a';
        if (pos[cur].empty())
            lastPos[cur]=i;
        pos[cur].push(i);
    }
    vector<bool> visited(26,false);
    string ans;
    for (int i=0;i<N;i++){
        int cur=s[i]-'a';
        if (visited[cur]) continue;
        if (pos[cur].size()==1){ // last occurrence of cur
            ans+=s[i];
            visited[cur]=true;
        }
        else{
            bool isSmaller=false;
            int minpos=N;
            for (int k=0;k<26;k++){
                if (!visited[k] && !pos[k].empty())
                    minpos=min(minpos,lastPos[k]);
            }
            for (int k=0;k<cur && !isSmaller;k++){
                if (visited[k] || pos[k].empty()) continue;
                if (pos[k].top()<=minpos)
                    isSmaller=true;
            }
            if (isSmaller==false){
                ans+=s[i];
                visited[cur]=true;
            }
        }
        pos[cur].pop();
    }
    return ans;
}
Let N=A.size and M=Q.size.
By using a hashmap, it can be solved in O(N).
We apply the sliding window idea.
Assume there is a window of size M. Move it from the beginning of the array to the end, one element at a time. At each step we just need to know how many elements in the window match Q's elements. Once that count reaches M, we have found the answer.
The only thing left is updating the window's information when we move to the next element. We can use a hashmap to keep track of the frequencies of Q's elements inside the window.
Simply, when an element leaves the window we decrease its frequency, and when a new element enters the window we increase its frequency. Also, using a variable called "cur", we keep track of the number of distinct matched elements inside the window at each step.
See the code for more clarifications.
pair<int,int> solve(vector<int> A,vector<int> Q)
{
    int N=A.size(),M=Q.size();
    if (N<M) return {-1,-1};
    unordered_map<int,int> mp;
    for (int i=0;i<M;i++)
        mp[Q[i]]=0;
    int cur=0; // number of Q's elements currently present in the sliding window
    for (int i=0;i<N;i++){
        if (cur==M){
            return {i-M,i-1};
        }
        // the window is still growing to size M, so there is no removal yet
        if (i<M){
            if (mp.find(A[i])!=mp.end()){
                mp[A[i]]++;
                if (mp[A[i]]==1)
                    cur++;
            }
        }
        // we should handle both removal and insertion
        else{
            // remove the oldest element from the window
            if (mp.find(A[i-M])!=mp.end()){
                mp[A[i-M]]--;
                if (mp[A[i-M]]==0)
                    cur--;
            }
            // add the new element to the window
            if (mp.find(A[i])!=mp.end()){
                mp[A[i]]++;
                if (mp[A[i]]==1)
                    cur++;
            }
        }
    }
    if (cur==M)
        return {N-M,N-1};
    return {-1,-1};
}
Let |S1|=n and |S2|=m.
If the arrays are not sorted, we have to look at each element at least once. So we can simply do a normal merge and then run the selection algorithm on the merged array; that would be O(n+m).
But the problem is interesting when both arrays are sorted.
An O(k) approach is to simulate the merge operation of merge sort and stop at the k-th element.
But it can be solved in O(logn+logm).
You can find a clear and comprehensive explanation and implementation on the LeetCode website. I can't copy the link here, so just search Google for "finding kth element of two sorted arrays leetcode" and hit the first link!
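Since the link can't be posted, here is a rough sketch of that logarithmic idea under my own naming (a simplified variant that discards k/2 elements per step, so it runs in O(log k) rather than the exact O(logn+logm) formulation): compare the k/2-th candidates of the two arrays and drop the prefix that cannot contain the k-th element.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// k is 1-based; assumes 1 <= k <= a.size()+b.size() and both arrays are sorted.
int kthOfTwo(const vector<int>& a, const vector<int>& b, int k)
{
    int i = 0, j = 0;                                  // current starts of a and b
    while (true) {
        if (i == (int)a.size()) return b[j + k - 1];   // a exhausted
        if (j == (int)b.size()) return a[i + k - 1];   // b exhausted
        if (k == 1) return min(a[i], b[j]);
        int half = k / 2;
        int ia = min(i + half, (int)a.size()) - 1;     // candidate index in a
        int ib = min(j + half, (int)b.size()) - 1;     // candidate index in b
        if (a[ia] <= b[ib]) { k -= ia - i + 1; i = ia + 1; }  // discard a[i..ia]
        else                { k -= ib - j + 1; j = ib + 1; }  // discard b[j..ib]
    }
}
```

Each round removes about k/2 remaining candidates, which is where the logarithmic bound comes from.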
Let's say we have a 10*10 chessboard and we are at row i, and we want to check whether it's possible to put a bishop at column j. As an example, let i=3, j=4.
Basically, there are one primary diagonal and one secondary diagonal that can threaten cell (3,4). The primary diagonal is (0,1) (1,2) (2,3). It doesn't matter whether the bishop is at (0,1), (1,2) or (2,3); we just need to know whether any of them holds a bishop, and that's where we have overlapping subproblems. The same is true for the secondary diagonals.
When we are at row i, we just need to know the state of M primary diagonals and M secondary diagonals (by state, I mean whether they contain a bishop that can threaten row i's cells). So each time we just pass the state of the M primary and M secondary diagonals to the next row.
So I define the dp like this:
dp[i][state1][state2] = the number of ways I can put bishops from row i to the end, given the constraints in state1 (for primary diagonals) and state2 (for secondary diagonals).
For handling blocked cells: when I'm at row i, I iterate through its cells, and when I reach a blocked cell, I free the diagonals that pass through it (since previous bishops can no longer threaten any cells beyond the blocked cell).
So state1 and state2 can each take 2^M different values. Since M,N<=10, we can use memoization to store the answers of the states.
Moreover, in case M,N>10 we can skip storing the states in memory (since the table would be large) and just use the recursion, which is still much faster than your backtracking.
I posted my implementation, so you can look at it for more clarification. I tested my solution against yours on many random boards and the answers were the same, so you can safely trust it :D
Nice!
Actually, I also wanted to solve it with backtracking, but I thought the answer might be large enough that backtracking wouldn't work and it would need Dynamic Programming.
After seeing your solution, I generated many random boards to see how large the answers can get.
This is one of the largest I could generate:
10 10
...*.**...
.***......
.*....**..
..**..*...
....*.*...
...**.*...
.**.*.....
..........
..........
..........
the answer is: 40307036
I ran both my solution and yours on Codeforces's server:
your solution took 1887 ms and mine 15 ms.
(Although we should keep in mind that C++ is faster than Java.)
It seems backtracking is fast enough for this problem.
UPD: I optimized my code; now it works in O(N*2^N*2^N). I added the UPD part at the end of my old explanation.
Here's my not-optimal DP solution:
It's O(N*2^(4*N)), which for this problem is 10*2^40.
dp[i][state1][state2] = number of ways of putting bishops in rows >= i, where state1 is the state of the primary diagonals and state2 the state of the secondary diagonals.
With one iteration I free the diagonals that are blocked by cells of the current row,
and then I add up the ways of putting a bishop in each valid cell of the current row and going to the next row.
UPD:
I realized that when we are at row i, we don't actually need the state of every diagonal; all we need is the state of the diagonals which can threaten the current row.
So we just need to know about N primary diagonals and N secondary diagonals. So the total complexity is N*2^N*2^N, which is 10*2^20 ~ 10^7, and that is fine for this problem.
/*
MehrdadAP
Count the number of ways of putting exactly one bishop in each row of a chessboard with obstacles.
O(N*2^N*2^N)
*/
#include <iostream>
#include <string.h>
using namespace std;

const int maxn = 10;
int N,M;
int dp[maxn][1<<maxn][1<<maxn];
char grid[maxn][maxn];

int solve(int row,int state1,int state2)
{
    if (row==N) return 1;
    // we actually don't need all the information of
    // state1 and state2, so we can reduce them to ss1 and ss2.
    // To make the implementation easier, I use state1 and state2
    // when passing to the next row, but for storing in the dp table
    // I use ss1 and ss2.
    int ss1=0,ss2=0;
    for (int col=0;col<M;col++){
        int diagInd = (row<=col ? col-row : row-col+M-1);
        int diagInd2 = row+col;
        if ((state1 & (1<<diagInd))!=0)
            ss1 |= (1<<col);
        if ((state2 & (1<<diagInd2))!=0)
            ss2 |= (1<<col);
    }
    int &ref = dp[row][ss1][ss2];
    if (ref!=-1)
        return ref;
    for (int col=0;col<M;col++){
        int diagInd = (row<=col ? col-row : row-col+M-1);
        int diagInd2 = row+col;
        if (grid[row][col]=='*'){
            state1 &= ~(1<<diagInd);
            state2 &= ~(1<<diagInd2);
        }
    }
    int ans=0;
    for (int col=0;col<M;col++){
        int diagInd = (row<=col ? col-row : row-col+M-1);
        int diagInd2 = row+col;
        if (grid[row][col]=='*') continue;
        if ((state1 & (1<<diagInd))==0 && (state2 & (1<<diagInd2))==0){
            ans += solve(row+1,state1 | (1<<diagInd), state2 | (1<<diagInd2));
        }
    }
    return ref = ans;
}

int main()
{
    while (cin >> N >> M){
        for (int i=0;i<N;i++)
            for (int j=0;j<M;j++)
                cin >> grid[i][j];
        memset(dp,-1,sizeof dp);
        cout << solve(0,0,0) << endl;
    }
    return 0;
}
Yes, it takes O(m) for each pair of rows, but you have to do that for every two consecutive (merged) rows at every level. Each level of merging looks at each number once, which means each level takes O(nm), and there are logn levels in total. So it's O(nmlogn).
- MehrdadAP September 13, 2015
Here's my O(nlogn) solution:
Main Idea: Binary search on the answer.
Basically, we are looking for the smallest substring which contains all the distinct characters of the original string S; let T be that number of distinct characters.
Now, we need to notice the following fact:
If there is a substring of size X that has T different characters, then for sure every size greater than X also has a substring with T different characters. In other words, the maximum number of distinct characters is a non-decreasing function of the substring size X. So we can safely apply binary search to find the optimal point (here, the minimum size).
Here's my code:
I used sliding window idea for counting different characters.
//MehrdadAP
#include <iostream>
#include <string>
using namespace std;

pair<string,int> findSmallest(string s,int size)
{
    int frq[26]={0};
    int unique=0,ansUnique;
    int ansInd=0;
    for (int i=0;i<size;i++){
        frq[s[i]-'a']++;
        if (frq[s[i]-'a']==1)
            unique++;
    }
    ansUnique=unique;
    for (int i=size;i<s.size();i++){
        frq[s[i]-'a']++;
        if (frq[s[i]-'a']==1)
            unique++;
        int tp=s[i-size]-'a';
        frq[tp]--;
        if (frq[tp]==0)
            unique--;
        if (unique>ansUnique){
            ansInd=i-size+1;
            ansUnique=unique;
        }
    }
    return {s.substr(ansInd,size),ansUnique};
}

string smallestSlidingWindow(string s)
{
    int N=s.size();
    int lo=0,hi=N;
    int unique = findSmallest(s,N).second;
    string ans=s;
    while (lo<=hi){
        int mid=(lo+hi)>>1;
        auto tmp = findSmallest(s,mid);
        if (tmp.second<unique){
            lo=mid+1;
        }
        else{
            ans=tmp.first;
            hi=mid-1;
        }
    }
    return ans;
}

int main()
{
    string s;
    while (cin >> s){
        cout << smallestSlidingWindow(s) << endl;
    }
    return 0;
}
Here's my O(nlogm) solution:
Main Idea: Binary search on the answer.
I do binary search on the integer values to find the median (in the range -2^30 to 2^30).
Two terms:
lower_bound(V,x): returns the position of the first item in V which is greater than or equal to x.
upper_bound(V,x): returns the position of the first item in V which is greater than x.
Let's say I want to check whether x is the median or not.
All I need to know is how many numbers are less than x (I'll explain about equality later); let's call it cnt1. Because each row is sorted, I can simply do a lower_bound on each row and add the result to cnt1.
Now, based on cnt1, I can decide whether x is the median, or whether I have to look at bigger or smaller values of x.
So the total time complexity is log(2^30)*N*log(M) = 30*N*logM = O(N*logM).
The tough part of implementing this idea is handling equality:
1 1 (3 3 3) 5 5
Let x=3.
Now I need to know the range of all the x's.
I handled it using an upper_bound; then, based on lower_bound and upper_bound, we can decide whether x is the median or not.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

pair<int,int> countElements(vector< vector<int> > &V,int val)
{
    int N=V.size();
    int ansLow=0,ansHigh=0;
    for (int i=0;i<N;i++){
        int cnt = lower_bound(V[i].begin(),V[i].end(),val)-V[i].begin();
        ansLow+=cnt;
        cnt = upper_bound(V[i].begin(),V[i].end(),val)-V[i].begin();
        ansHigh+=cnt;
    }
    return {ansLow,ansHigh};
}

double findMedian(vector< vector<int> > V)
{
    int N=V.size(),M=V[0].size();
    int needed=(N*M)/2;
    int s=-(1<<30);
    int e=(1<<30);
    bool found=false;
    int ans1;
    while (s<=e){
        int mid=(s+1LL+e)>>1;
        auto tmp = countElements(V,mid);
        int low=tmp.first,hi=tmp.second;
        if ((N*M)%2==0){
            if (low==needed && hi==low){ // 1 2 4 5 --> (3)
                found=true;
                ans1=mid;
                break;
            }
            if (hi-low>1 && low<needed && hi>needed){ // (1 1) 2 5 3 3 1 1
                found=true;
                ans1=mid;
                break;
            }
            if (hi==needed){ // 1 (2) 3 4
                ans1=mid;
                break;
            }
        }
        else{
            if (hi-low>1 && low<=needed && hi>needed){ // 1 (3 3 3) 5 or 1 1 (3 3) 5
                found=true;
                ans1=mid;
                break;
            }
            if (low==needed && hi-low==1){ // 1 2 (3) 5 7
                ans1=mid;
                found=true;
                break;
            }
        }
        if (hi<=needed){ // (1 1) 3 4 5
            s=mid+1;
        }
        else
            e=mid-1;
    }
    if (found)
        return ans1;
    int ans2=(1<<30);
    for (int i=0;i<N;i++){
        int ind = upper_bound(V[i].begin(),V[i].end(),ans1)-V[i].begin();
        if (ind != V[i].size()){
            ans2=min(ans2,V[i][ind]);
        }
    }
    return (ans1+ans2)/2.0;
}

int main()
{
    int N,M;
    vector< vector<int> > V;
    cin >> N >> M;
    V.resize(N);
    for (int i=0;i<N;i++){
        V[i].resize(M);
        for (int j=0;j<M;j++){
            cin >> V[i][j];
        }
    }
    cout << findMedian(V) << endl;
    return 0;
}
Two things to consider:
1- Your solution is wrong. Example:
1 4 5
2 12 20
3 7 8
Based on your algorithm the medians are {4, 7, 12} and 7 is the answer,
while the correct answer is 5.
2- For finding the median of an array we don't need to sort it; using the selection algorithm we can find the median in O(n).
Two things to consider:
1- We can find the median of an array in O(n) with the selection algorithm. So the time complexity for this problem can easily be O(n*m) by just creating an array of all the elements and running the selection algorithm. It can surely be solved in better than O(nm), and O(nmlogn) is certainly not an acceptable solution.
2- Creating a heap is O(n), not O(nlogn).
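To make point 1 concrete, here's a sketch (my code, not from the original comment) that flattens the matrix and uses std::nth_element, the standard library's selection algorithm with expected O(n) per call, for an overall O(n*m):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

double medianOfMatrix(const vector< vector<int> >& V)
{
    vector<int> all;                       // flatten the matrix
    for (const auto& row : V)
        all.insert(all.end(), row.begin(), row.end());
    int n = all.size();
    nth_element(all.begin(), all.begin()+n/2, all.end());     // selection, not a full sort
    int hi = all[n/2];
    if (n % 2 == 1) return hi;
    nth_element(all.begin(), all.begin()+n/2-1, all.end());   // second middle element
    return (all[n/2-1] + hi) / 2.0;
}
```

On the 3x3 counter-example above ({1,4,5},{2,12,20},{3,7,8}) this returns 5, the correct median.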
Let's initialize the answer to the longest common subsequence (LCS) of s1 and s2:
ans = LCS(s1,s2)
Now we have to add the remaining characters of s1 and s2 into ans. This can be done greedily.
Here's why and how:
Assume that LCS(s1,s2) = abc.
So s1 and s2 look like this:
s1 = (x1) a (x2) b (x3) c (x4)
s2 = (y1) a (y2) b (y3) c (y4)
in which (xi) and (yi) can be any strings of any size (>=0).
Obviously, (xi) and (yi) share no common characters, because otherwise we could extend the LCS and get a bigger one, which contradicts the definition of the LCS.
So now we can add all the characters of the xi and yi, in any order, to the answer.
So the answer would be:
ans = (x1)(y1) a (x2)(y2) b (x3)(y3) c (x4)(y4)
Finding the LCS is a classic Dynamic Programming problem which can be solved in O(N*M).
The length of S3 is S1.size()+S2.size()-LCS.size(), which is the minimum size we can get, so this is an optimal answer.
So the final time complexity is O(N*M).
I really enjoyed the problem.
//Mehrdad AP
// Find the LCS (by DP) and add the remaining characters between the LCS characters greedily.
// Time Complexity: O(N*M)
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

string LCS(string s1,string s2)
{
    int N=s1.size(),M=s2.size();
    vector< vector<int> > dp(N),parent(N);
    for (int i=0;i<N;i++){
        dp[i].resize(M);
        parent[i].resize(M);
    }
    for (int i=0;i<N;i++){
        for (int j=0;j<M;j++){
            int maxi=0,par=1;
            if (i!=0 && dp[i-1][j]>maxi){
                maxi=dp[i-1][j];
                par=1;
            }
            if (j!=0 && dp[i][j-1]>maxi){
                maxi=dp[i][j-1];
                par=2;
            }
            if (s1[i]==s2[j]){
                if (i!=0 && j!=0 && dp[i-1][j-1]+1>maxi){
                    maxi=dp[i-1][j-1]+1;
                    par=3;
                }
                if (1>maxi){
                    maxi=1;
                    par=3;
                }
            }
            dp[i][j]=maxi;
            parent[i][j]=par;
        }
    }
    int i=N-1,j=M-1;
    string ans="";
    while (i>=0 && j>=0){
        int par=parent[i][j];
        if (par==3){
            ans+=s1[i];
            i--,j--;
        }
        else if (par==2)
            j--;
        else if (par==1)
            i--;
    }
    reverse(ans.begin(),ans.end());
    return ans;
}

string findingMix(string s1,string s2)
{
    string lcs=LCS(s1,s2);
    int i=0,j=0;
    string ans="";
    for (int k=0;k<lcs.size();k++){
        while (i<s1.size() && s1[i]!=lcs[k]){
            ans+=s1[i];
            i++;
        }
        while (j<s2.size() && s2[j]!=lcs[k]){
            ans+=s2[j];
            j++;
        }
        ans+=lcs[k];
        i++,j++;
    }
    // append the leftover characters of s1 and s2
    while (i<s1.size()){
        ans+=s1[i];
        i++;
    }
    while (j<s2.size()){
        ans+=s2[j];
        j++;
    }
    return ans;
}

int main()
{
    string s1,s2;
    while (cin >> s1 >> s2){
        cout << findingMix(s1,s2) << endl;
    }
    return 0;
}
#include <iostream>
#include <vector>
using namespace std;

vector< vector<int> > rotate(vector< vector<int> > matrix)
{
    int N=matrix.size();
    vector< vector<int> > rotatedMatrix(N);
    for (int i=0;i<N;i++)
        rotatedMatrix[i].resize(N);
    for (int i=0;i<N;i++){
        for (int j=0;j<N;j++){
            rotatedMatrix[j][N-i-1]=matrix[i][j];
        }
    }
    return rotatedMatrix;
}

int main()
{
    int N;
    while (cin >> N){
        vector< vector<int> > V(N);
        for (int i=0;i<N;i++){
            V[i].resize(N);
            for (int j=0;j<N;j++)
                cin >> V[i][j];
        }
        V=rotate(V);
        for (int i=0;i<N;i++){
            for (int j=0;j<N;j++)
                cout << V[i][j] << " ";
            cout << endl;
        }
    }
    return 0;
}
I know there are lots of correct codes here, but here's mine :)
vector<int> add(vector<int> A,vector<int> B) // pass by value: we reverse our own copies
{
    vector<int> ans;
    reverse(A.begin(),A.end());
    reverse(B.begin(),B.end());
    int carry=0;
    int i=0;
    while (i<A.size() || i<B.size()){
        int sum=(i<A.size() ? A[i] : 0)+(i<B.size() ? B[i] : 0)+carry;
        ans.push_back(sum%10);
        carry=sum/10;
        i++;
    }
    if (carry>0)
        ans.push_back(carry);
    reverse(ans.begin(),ans.end());
    return ans;
}
int firstOne = (x & -x);
int i = log(firstOne)/log(2.0);
y = (x >> (i+1)) | (1 << 31);
First we need to find the lowest set bit of x; let's say the first 1 (from the right) occurs at bit i. If we shift x right by (i+1) bits, it has one set bit fewer than before. Now, if we set the most significant bit (bit 31 of a 32-bit int), the result has the same number of 1's as x and is different from x.
Update: When I wrote this solution, the writers of the question hadn't mentioned the constraint of minimizing |y-x|.
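Here's a self-contained version of the same trick (my wrapper, not part of the original snippet; it uses GCC/Clang's __builtin_ctz instead of floating-point log, and unsigned arithmetic so the shifts are well-defined; it assumes x is nonzero and has bit 31 clear):

```cpp
#include <cstdint>

uint32_t sameOnes(uint32_t x)
{
    int i = __builtin_ctz(x);             // index of the lowest set bit
    return (x >> (i + 1)) | (1u << 31);   // drop one 1 from the bottom, add one at the top
}
```

The shift removes exactly one set bit and the OR adds exactly one back, so the popcount is preserved while bit 31 guarantees the result differs from x.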
Dynamic Programming:
Let's call the array of numbers num.
dp[i][j] = the maximum value we can get from the numbers num[i] to num[j]
base case: dp[i][i] = num[i]
dp[i][j] = max(dp[i][k]*dp[k+1][j], dp[i][k]+dp[k+1][j]) over all i<=k<j
and the final answer is dp[0][N-1]
Time Complexity: O(N^3)
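The recurrence above translates almost line-by-line into code; a sketch (my naming, and it assumes non-negative numbers so that taking the max interacts safely with both + and *):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

long long maxValue(const vector<long long>& num)
{
    int n = num.size();
    vector< vector<long long> > dp(n, vector<long long>(n, 0));
    for (int i = 0; i < n; i++) dp[i][i] = num[i];            // base case
    for (int len = 2; len <= n; len++)                        // interval length
        for (int i = 0, j = len-1; j < n; i++, j++)
            for (int k = i; k < j; k++)                       // split point
                dp[i][j] = max(dp[i][j],
                               max(dp[i][k]*dp[k+1][j],
                                   dp[i][k]+dp[k+1][j]));
    return dp[0][n-1];
}
```

For example, maxValue({1,2,3}) is 9, from (1+2)*3.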
Using a hashmap:
//MehrdadAP
void solve(int N)
{
    unordered_map<long long int, vector< pair<int,int> > > umap;
    for (int i=0;i<=N;i++)
        for (int j=i;j<=N;j++){
            long long int tmp = i*1LL*i*1LL*i + j*1LL*j*1LL*j;
            umap[tmp].push_back({i,j});
        }
    for (auto it:umap){
        if (it.second.size()==1) continue;
        for (auto x:it.second)
            cout << "(" << x.first << "," << x.second << ")" << " ";
        cout << endl;
    }
}
Dynamic Programming:
Time Complexity O(100*N) = O(N)
Space Complexity O(20) = O(1)
//Mehrdad AP
int solve(int N)
{
    vector<int> dpCur(10,0),dpPrev(10,0);
    dpPrev[0]=0;
    for (int i=1;i<10;i++)
        dpPrev[i]=1;
    for (int i=1;i<N;i++){
        dpCur.assign(10,0); // reset the counts for the current length
        for (int j=0;j<10;j++)
            for (int k=0;k<10;k++)
                if (abs(j-k)>=4)
                    dpCur[j]+=dpPrev[k];
        dpPrev=dpCur;
    }
    int ans=0;
    for (int i=0;i<10;i++)
        ans+=dpPrev[i];
    return ans;
}
@manazhao, your code is O(n^2) because of inner loop (for (auto iter = lbIter; iter != ubIter; iter++))
- MehrdadAP November 02, 2015
It's not possible to implement it with a C++ set in the same way as the Java solution.