MehrdadAP
If, for each cell, we knew the following information, the problem would be solved:
Left(x,y) = maximum number of consecutive 1's to the left of cell (x,y), including cell (x,y) itself
Right(x,y) = the same, in the right direction
Up(x,y) = the same, in the up direction
Down(x,y) = the same, in the down direction
Then the largest + centered at each cell is the minimum over its four directions, min(Left(x,y), Right(x,y), Up(x,y), Down(x,y)),
and the final answer is the cell with the largest +.
The mentioned arrays can be computed with a simple dynamic programming. For example:
Left(x,y) = if (cell(x,y)==0) 0 else Left(x,y-1)+1
So the total time complexity is O(N*N).
It can be solved using the following trick:
When we need to toggle the range (s,e), we add 1 at cell s and -1 at cell e+1.
Then, whenever we want to know the state of the i-th cell, we just check whether the cumulative sum from 1 to i is even or odd.
For an efficiently updatable cumulative-sum array, we can use a Binary Indexed Tree, so both updating and reading the state of a cell take O(logN).
The code is extremely simple.
//Mehrdad AP
//Binary Indexed Tree (Fenwick Tree) code from Topcoder's tutorials
#include <iostream>
using namespace std;

const int maxn = 100000;
int sum[maxn + 1]; //1-based; size maxn+1 so update(maxn) stays in bounds

int read(int idx){
    int ans = 0;
    while (idx > 0){
        ans += sum[idx];
        idx -= (idx & -idx);
    }
    return ans;
}

void update(int idx, int val){
    while (idx <= maxn){
        sum[idx] += val;
        idx += (idx & -idx);
    }
}

bool isOn(int i)
{
    return read(i) % 2 == 1;
}

void toggle(int start, int end)
{
    update(start, 1);
    update(end + 1, -1);
}

int main()
{
    int Q, x, y;
    char ch;
    cin >> Q; //number of queries
    while (Q--){
        cin >> ch;
        if (ch == 'T'){
            cin >> x >> y; //1-based indices
            toggle(x, y);
        }
        else{
            cin >> x;
            cout << isOn(x) << endl;
        }
    }
    return 0;
}
The greedy approach by @RecruitingForSandvineCanada and the DP approach by soneyoung are both correct; I think the time-based DP for this specific problem is a very clever answer.
But in general, when the intervals are not time intervals, you can also solve it with another DP approach in O(N logN). This DP becomes more useful when there is a benefit attached to choosing each interval.
Anyway, here's my code, which is a dynamic programming approach for this problem in general. (You could easily add per-interval benefits to the code.)
//finding the maximum number of non-overlapping intervals
//could also be solved greedily
//Mehrdad AP
#include <iostream>
#include <algorithm>
#include <vector>
#include <stdio.h>
using namespace std;

bool cmpBasedEnd(const pair<int,int> &x, const pair<int,int> &y)
{
    if (x.second != y.second)
        return x.second < y.second;
    else
        return x.first < y.first;
}

int maximumNonOverlapIntervals(vector< pair<int,int> > intervals)
{
    int N = intervals.size();
    if (N == 0) return 0;
    //sort the intervals by their end points
    sort(intervals.begin(), intervals.end(), cmpBasedEnd);
    vector<int> dp(N, 0);
    for (int i = 0; i < N; i++){
        //j = index of the nearest interval (party) that could precede the i-th one
        int j = upper_bound(intervals.begin(), intervals.begin() + i,
                            make_pair(100000, intervals[i].first), cmpBasedEnd)
                - intervals.begin();
        j--;
        dp[i] = 1;
        if (j >= 0 && intervals[j].second <= intervals[i].first)
            dp[i] += dp[j];
        if (i > 0)
            dp[i] = max(dp[i], dp[i-1]);
    }
    return dp[N-1];
}

int main()
{
    int N;
    cin >> N;
    vector< pair<int,int> > V(N);
    for (int i = 0; i < N; i++)
        cin >> V[i].first >> V[i].second;
    cout << maximumNonOverlapIntervals(V) << endl;
    return 0;
}
We can keep three different hash tables, keyed by ID, name, and salary. Each table entry acts as a bucket containing the list of all employees with that ID (or name, or salary).
This way each query is answered in O(L), where L is the number of result items, which is the most efficient time complexity we could hope for.
Note that inserting an employee means inserting it into all three tables, so the space is 3*N (where N is the total number of employees), which is still O(N).
By chain, do you mean the following:
If we remove a character from string S1 and it becomes S2, then the next step must remove a character from S2 to make another word in the inventory?
If so, here's my dynamic programming approach:
- Sort the words by length (shortest to longest).
- Define dp[i] = the length of the longest chain that can be formed from word[0] up to word[i] and ends at word[i].
- Then dp[i] = max(dp[j]+1) over all j from 0 to i such that removing one character from word[i] yields word[j]. (A hash map can check this condition.)
- The final answer is max(dp[i]) for 0<=i<N.
- The complexity is O(N*L*L), where N is the number of words and L is the maximum word length.
#include <bits/stdc++.h>
using namespace std;

const int maxn = 10000;
unordered_map<string,int> hashTable;
int dp[maxn];

int LongestChain(vector<string> V)
{
    if (V.empty()) return 0;
    vector< pair<int,string> > list;
    for (auto s : V){
        list.push_back({(int)s.size(), s});
    }
    sort(list.begin(), list.end());
    int N = list.size();
    hashTable.insert({list[0].second, 0});
    dp[0] = 1;
    int answer = dp[0];
    for (int i = 1; i < N; i++){
        dp[i] = 1;
        string s = list[i].second;
        int size = s.size();
        for (int j = 0; j < size; j++){
            //try removing each character and look the result up
            string tmpStr = s;
            tmpStr.erase(tmpStr.begin() + j);
            auto it = hashTable.find(tmpStr);
            if (it != hashTable.end())
                dp[i] = max(dp[i], dp[it->second] + 1);
        }
        answer = max(answer, dp[i]);
        hashTable.insert({s, i});
    }
    return answer;
}

int main()
{
    int N;
    cin >> N;
    vector<string> V(N);
    for (int i = 0; i < N; i++)
        cin >> V[i];
    cout << LongestChain(V) << endl;
    return 0;
}
It depends on the range of R. If it's less than about 10^7, we can pre-compute the primes using the Sieve of Eratosthenes. Afterwards, two binary searches find the block of primes between L and R, and we simply count the frequency of each digit.
Also, to make it more efficient, we can precompute the cumulative frequency of each digit over the primes, and answer a query [L,R] by subtracting the cumulative frequency just below L from the cumulative frequency at R.
If R can be a very large number, we need to run the Miller-Rabin primality test on each number between L and R instead.
Here's my implementation with a suffix tree.
N = number of numbers (strings)
K = length of each number (10 in this problem)
The time complexity for building the tree is O(N*K*K). Does anyone have an idea how to reduce it?
Also, to avoid repeated answers, I used a set to store the indices at each node. Any better idea?
#include <bits/stdc++.h>
using namespace std;

struct Node
{
    set<int> indices;
    Node* child[10];
} *root;

Node* add(Node* root, string s, int cur, int index)
{
    if (root == nullptr){
        root = new Node(); //value-initialization zeroes the child pointers
    }
    root->indices.insert(index);
    if (cur != (int)s.size())
        root->child[s[cur]-'0'] = add(root->child[s[cur]-'0'], s, cur+1, index);
    return root;
}

void createSuffixTree(vector<string> numbers)
{
    int N = numbers.size();
    for (int i = 0; i < N; i++){
        for (int j = 0; j < (int)numbers[i].size(); j++)
            //I know it would be better to avoid substr here
            //(or at least preprocess all the substrings),
            //but please try to optimize the other parts of the code
            root = add(root, numbers[i].substr(j), 0, i);
    }
}

set<int> count(Node* root, string s, int cur)
{
    if (root == nullptr) return set<int>();
    if (cur == (int)s.size())
        return root->indices;
    else
        return count(root->child[s[cur]-'0'], s, cur+1);
}

int main()
{
    int N;
    cin >> N;
    vector<string> numbers(N);
    for (int i = 0; i < N; i++)
        cin >> numbers[i];
    createSuffixTree(numbers);
    string t;
    while (cin >> t){
        set<int> ans = count(root, t, 0);
        for (auto x : ans)
            cout << x << " ";
        cout << endl;
    }
    return 0;
}
If I understood your approach correctly, we have to keep the indices in the tree nodes, right? In that case the size of the suffix tree would be O(N*K*K)?
And to avoid duplicate reports, we need a binary tree, a hash table, or a set, right?
I've implemented your approach; if you have any optimization for it, please let me know. (My code is in the next comment.)
Also, I have no idea how to implement the construction part in O(NK); mine is O(NK^2). (For this specific problem, we cannot use the edge-compression idea to reduce the building time.)
Here's my code, which is a straightforward implementation without any optimization.
O(M logM), where M = 2^N

vector<int> Folding(int N, string S)
{
    vector< vector<int> > prev, cur;
    vector<int> tmp;
    for (int i = 0; i < (1<<N); i++){
        tmp.clear();
        tmp.push_back(i + 1);
        prev.push_back(tmp);
    }
    for (int t = 0; t < N; t++){
        cur.clear();
        int sz = prev.size();
        if (S[t] == 'R'){
            //fold the right half over onto the left half
            for (int i = 0; i < sz/2; i++)
                cur.push_back(prev[i]);
            for (int i = 0; i < sz/2; i++)
                for (int j = (int)prev[i].size()-1; j >= 0; j--)
                    cur[i].push_back(prev[sz-i-1][j]);
        }
        else{
            //fold the left half over onto the right half
            for (int i = sz/2; i < sz; i++)
                cur.push_back(prev[i]);
            for (int i = 0; i < sz/2; i++)
                for (int j = (int)prev[i].size()-1; j >= 0; j--)
                    cur[i].push_back(prev[sz/2-i-1][j]);
        }
        prev = cur;
    }
    reverse(prev[0].begin(), prev[0].end()); //prev == cur here; also safe when N == 0
    return prev[0];
}
First of all, we can compute the cumulative sum of the changes; then the answer for maxGain is the pair of elements with the maximum difference in which the bigger element lies to the right of the smaller one.
So we can simply iterate over the array from left to right, maintain the minimum value seen so far, and at each step update the best answer found so far.
Here's my code for 1.a and 1.b.
O(N)
pair<int,int> maxGain(vector<int> changes)
{
    vector<int> stock;
    int N = changes.size();
    stock.push_back(changes[0]);
    for (int i = 1; i < N; i++)
        stock.push_back(stock[stock.size()-1] + changes[i]);
    //tmp is the index of the minimum stock value seen so far
    int tmp = 0, startDay = 0, endDay = N-1;
    for (int i = 1; i < N; i++){
        if (stock[i] - stock[tmp] > stock[endDay] - stock[startDay]){ //for maxLoss change it to <
            startDay = tmp;
            endDay = i;
        }
        if (stock[i] < stock[tmp]) //for maxLoss, change it to >
            tmp = i;
    }
    return make_pair(startDay, endDay);
}
For 1.c, we need the minimum value among the last K elements seen so far. I used a multiset to store the values; once its size reaches K+1, we remove the oldest (now invalid) element, so the multiset always holds exactly the last K valid elements.
Also, at each step the first element of the multiset is the minimum we are looking for.
O(N*logK)
pair<int,int> maxGainWithLimit(vector<int> changes, int K)
{
    vector<int> stock;
    int N = changes.size();
    stock.push_back(changes[0]);
    for (int i = 1; i < N; i++)
        stock.push_back(stock[stock.size()-1] + changes[i]);
    //multiset, since values may repeat
    multiset< pair<int,int> > st;
    int startDay = 0, endDay = N-1;
    st.insert(make_pair(stock[0], 0));
    for (int i = 1; i < N; i++){
        //the first element of the set is the minimum stock within the last K days
        if (stock[i] - (*st.begin()).first > stock[endDay] - stock[startDay]){
            startDay = (*st.begin()).second;
            endDay = i;
        }
        st.insert(make_pair(stock[i], i));
        if ((int)st.size() == K + 1){
            st.erase(make_pair(stock[i-K], i-K));
        }
    }
    return make_pair(startDay, endDay);
}
@zortlord: I think my code performs the minimum number of steps to reach the permuted string, although I can't prove it yet.
It seems to me that in the worst case we need O(n^2) steps to produce the permuted string.
Can you do the following example in fewer than 10 steps?
ABCDE
EDCBA
Here's a simple code in C++.
Iterate over the permuted string, find the first position in the original string that matches the current character of the permuted one, then walk it back with adjacent swaps, one at a time.
O(n^2)
void Swapping(string S, string T)
{
    int N = S.size();
    for (int i = 0; i < N; i++){
        int j = i;
        while (S[j] != T[i]) j++;
        while (j > i){
            swap(S[j], S[j-1]);
            cout << S << endl; //print the string after each adjacent swap
            j--;
        }
    }
}
This is a naive solution, which is O(N^3). It can be solved in O(N^2) time.
- MehrdadAP August 19, 2015