Chris in United States

A new species has been discovered and we are trying to understand their form of communication. We start by identifying their alphabet. Mr. Brown managed to steal a vocabulary that has its words in ascending order, and a team has already identified the letters. So we now have the ordered words as sequences of numbers; print every letter once, in ascending order.
[3,2]
[4,3,2]
[4,1,1]
[1, 2]
[1, 1]
alphabet: [3, 4, 2, 1]
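A sketch of how the alphabet can be derived: each adjacent word pair contributes one precedence constraint at the first position where the words differ, and a topological sort of those constraints yields the letter order (function and variable names are mine; assumes the constraints are consistent):

```cpp
#include <algorithm>
#include <map>
#include <set>
#include <vector>
using namespace std;

// Every adjacent word pair in the sorted vocabulary gives one precedence
// constraint at the first differing position; Kahn's topological sort of
// those constraints yields the alphabet.
vector<int> deriveAlphabet(const vector<vector<int>>& words)
{
    map<int, set<int>> succ; // letter -> letters known to come after it
    map<int, int> indegree;  // letter -> number of known predecessors
    for (auto& w : words)
        for (int c : w) indegree.emplace(c, 0);
    for (size_t i = 1; i < words.size(); ++i) {
        const auto& a = words[i - 1];
        const auto& b = words[i];
        for (size_t j = 0; j < min(a.size(), b.size()); ++j) {
            if (a[j] != b[j]) {
                if (succ[a[j]].insert(b[j]).second) indegree[b[j]]++;
                break; // only the first difference carries information
            }
        }
    }
    vector<int> order, ready;
    for (auto& p : indegree)
        if (p.second == 0) ready.push_back(p.first);
    while (!ready.empty()) {
        int c = ready.back(); ready.pop_back();
        order.push_back(c);
        for (int d : succ[c])
            if (--indegree[d] == 0) ready.push_back(d);
    }
    return order; // shorter than the letter count if the constraints are cyclic
}
```

For the vocabulary above this yields [3, 4, 2, 1]; letters with no constraint between them are emitted in an arbitrary relative order.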
It depends on the exact semantics of "all paths". Can we jump back and forth between fields to extend the path, or is the intention that each path we count visits a field only once?
I assume the first: no matter how I walk, as long as I can get to (0,0) from the given start (x,y) in k steps, it's a valid path, even if I pass (0,0) earlier and come back to it on the kth step. Further, I assume I can go everywhere in the matrix.
There is an obvious solution: try all paths. There are 4^k such paths, which results in an O(4^k) (time) solution.
Given that the graph problems of finding all paths or finding a kth-order path are NP-hard, this isn't much of a surprise.
But here we have a matrix, so I would try to "exploit" the fact that this isn't an arbitrary graph, and it seems feasible to have some subproblems one can reuse.
I think that's possible by keeping a vector of size k for each cell: in each iteration I count for each cell in how many ways I can reach it, and I do this k times.
This is an O(k*n*m) time and space solution. So it's polynomial and I solved a special case of an NP-hard problem; not exactly (of course), but it sounds a bit cool...
The space complexity of the latter can be reduced to n*m, but the code then doesn't look as neat anymore.
#include <vector>
#include <unordered_set>
#include <iostream>
using namespace std;
using uint = unsigned int;
const uint MOVES[][2]{ { (uint)-1,0 },{ 1,0 },{ 0,(uint)-1 },{ 0,1 } }; // unsigned wrap-around lets the >= bounds checks below catch negative coordinates
size_t count_paths_bruteforce(size_t m, size_t n, uint sx, uint sy, uint k)
{
// lazy recursive impl.
if (sx + sy > k) return 0; // can't reach {0, 0} no need to further try (to be not totally silly)
if (k == 0 && sx == 0 && sy == 0) return 1; // reached {0, 0} in k steps
if (sx >= m || sy >= n) return 0;
uint sum = 0;
for (size_t d = 0; d < sizeof(MOVES) / sizeof(decltype(MOVES[0])); ++d) {
// ignore overflow here, should check
sum += count_paths_bruteforce(m, n, sx + MOVES[d][0], sy + MOVES[d][1], k - 1);
}
return sum;
}
size_t count_paths_poly(size_t m, size_t n, uint sx, uint sy, uint k)
{
vector<vector<vector<uint>>> dp(m, vector<vector<uint>>(n, vector<uint>(k + 1, 0))); // can optimize space to n*m
dp[sx][sy][0] = 1; // start condition
for (uint i = 1; i <= k; i++) {
for (uint x = 0; x < m; ++x) { // could be optimized to only iterate over visited fields
for (uint y = 0; y < n; ++y) { // same optimization as with x
for (size_t d = 0; d < sizeof(MOVES) / sizeof(decltype(MOVES[0])); ++d) {
uint ox = x + MOVES[d][0]; // go from ox to x
uint oy = y + MOVES[d][1]; // go from oy to y
if (ox >= m || oy >= n) continue; // inside the field?
dp[x][y][i] += dp[ox][oy][i - 1]; // += 0 doesn't harm
}
}
}
}
return dp[0][0][k];
}
void test(size_t m, size_t n, uint x, uint y, uint k)
{
cout << "test case (mxn): " << m << "x" << n << " (x,y): " << x << "," << y << " k:" << k << endl;
cout << "poly algo: " << count_paths_poly(m, n, x, y, k) << endl;
cout << "brute force: " << count_paths_bruteforce(m, n, x, y, k) << endl << endl;
}
int main()
{
//test(2, 2, 1, 1, 1); // 0
test(2, 2, 1, 1, 2);
test(2, 2, 1, 1, 3);
test(2, 2, 1, 1, 4);
test(5, 5, 4, 4, 5);
test(5, 8, 4, 6, 10);
test(5, 8, 4, 6, 16);
test(5, 8, 4, 6, 17);
test(5, 8, 4, 6, 18);
test(5, 8, 4, 6, 20);
test(5, 8, 4, 6, 21); // the last few to experience exp vs. poly
}

Chris, October 23, 2017

My first thought was the following simple approach (easy to code):
1) sort the letters in each word and put them into a hash table / hash set
2) sort the letters of the input word, create the power set, and check for each subset (= string) whether it's in the hash set: if the word is 10 chars, that's 2^10 = 1K lookups; with 20 chars, 1 million; with 30, 1 billion
The problem here is that a lot of probes into the hash table will not hit a "word". If I knew that a prefix of a subset I am trying doesn't exist, I could stop early ... see the alternative below.
This HT approach would solve the follow-up question neatly. For the initial question it's not efficient.
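A minimal sketch of this hash-table approach (names are mine; I use a multimap keyed by sorted letters so the original words can be returned):

```cpp
#include <algorithm>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>
using namespace std;

// Index each dictionary word under its sorted letters, then probe every
// letter subset of the (sorted) query word. The probing is exponential in
// the query length, as discussed above.
vector<string> findWords(const vector<string>& dict, const string& query)
{
    unordered_multimap<string, string> byKey; // sorted letters -> original word
    for (auto& w : dict) {
        string key(w);
        sort(key.begin(), key.end());
        byKey.insert({ key, w });
    }
    string q(query);
    sort(q.begin(), q.end());
    unordered_set<string> probed; // avoid probing duplicate subsets
    vector<string> results;
    for (unsigned mask = 1; mask < (1u << q.size()); ++mask) { // 2^|q| - 1 subsets
        string key;
        for (size_t i = 0; i < q.size(); ++i)
            if (mask & (1u << i)) key += q[i];
        if (!probed.insert(key).second) continue;
        auto range = byKey.equal_range(key);
        for (auto it = range.first; it != range.second; ++it)
            results.push_back(it->second);
    }
    return results;
}
```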
Alternative with a trie:
1) Store the words in a trie data structure (after sorting the chars, so "CAT" becomes "ACT"). The trie can be optimized for an alphabet of 26 chars and compacted (a Patricia tree, to save memory), but the original word must still be stored so it can be returned (since the order is not preserved in the trie...).
2) Sort the chars in the search word (e.g. "ACRAT" -> "AACRT"). Now try to enter the trie with the first letter. The basic idea is: if the letter is in the trie, I later need to backtrack and try the next letter from the input string without the previous letter; but, and that's the catch, if it is not in the trie, I can stop exploring quite a lot of subsets, so I do much fewer lookups that will be "empty".
Theoretically this is faster, but tries are hard to implement cache-friendly, so it is not immediately obvious whether it's much faster on a modern machine. It surely is if we can add some additional constraints, like limiting the number of results returned to e.g. 100, 1,000, or 10,000.
I would just write the search code, assuming some (hypothetical) trie interface:
vector<string> findInTrie(string searchStr, Trie* trie, size_t maxResults)
{
vector<string> results;
if (searchStr.length() == 0) return results;
sort(searchStr.begin(), searchStr.end());
stack<pair<Trie::iterator, size_t>> s; // trie iterator, index in string
s.push({trie->begin(), 0});
while (!s.empty() && results.size() < maxResults) {
auto trieIt = s.top().first;
size_t resIdx = s.top().second;
s.pop();
while (trieIt != trie->end() && results.size() < maxResults) {
if (trieIt.isWord()) {
results.push_back(trieIt.getWord());
}
if (resIdx >= searchStr.size()) break; // no more characters to try
char c = searchStr[resIdx];
resIdx++;
if (resIdx < searchStr.size()) {
s.push({trieIt, resIdx}); // skip c (c is not in the probed subset)
}
trieIt.next(c); // advance within the trie with character c
}
}
return results;
}

Chris, October 19, 2017

I thought of an O(n) solution:
If I encounter a closing parenthesis, the only thing I must ensure is that there was an opening parenthesis before it. If not, the expression isn't valid. An opening parenthesis can be a star as well. Stars I don't use don't matter; they are null.
This doesn't account for too many open parentheses. But if I reverse the problem, I can do the same again and find out whether all opening parentheses actually match a closing one or a "*".
It passes all the test cases and a few more. But the proof isn't that easy. It would be, if I used a stack and changed the '*' into the according parenthesis, but I think it's not necessary. Any thoughts?
#include <algorithm>
#include <iostream>
#include <string>
bool validExpression(const string& expression)
{
string expr(expression); // modifiable copy of the expression, used within the auxiliary function
auto aux = [&expr]() -> bool {
int openCtr = 0;
int wildCtr = 0;
for (char& c : expr) {
if (c == '(') {
c = ')';
openCtr++;
} else if (c == ')') {
c = '(';
if (openCtr > 0) openCtr--; // match ')' to a '('
else if (wildCtr > 0) wildCtr--; // match ')' to a '*'
else return false; // can't match ')' to anything, error
} else if (c == '*') {
wildCtr++;
}
}
reverse(expr.begin(), expr.end()); // reverse string
return true;
};
return aux() && aux();
}
int main()
{
cout << "TC1: " << (validExpression("(*)") == true) << endl;
cout << "TC2: " << (validExpression("((*)") == true) << endl;
cout << "TC3: " << (validExpression("((*))") == true) << endl;
cout << "TC4: " << (validExpression("((*)))") == true) << endl;
cout << "TC5: " << (validExpression("((*))))") == false) << endl;
cout << "TC6: " << (validExpression("*((*))))") == true) << endl;
cout << "TC7: " << (validExpression("(*((*))))") == true) << endl;
cout << "TC8: " << (validExpression("(*)(*)(**") == true) << endl;
cout << "TC9: " << (validExpression("(*)(*)(***") == true) << endl;
cout << "TC10: " << (validExpression("((*)(*)(***") == true) << endl;
cout << "TC11: " << (validExpression("(()(*)(") == false) << endl;
cout << "TC12: " << (validExpression("(()(*)(") == false) << endl;
cout << "TC13: " << (validExpression("****()))))") == true) << endl;
cout << "TC12: " << (validExpression("") == true) << endl;
cout << "TC13: " << (validExpression("()") == true) << endl;
cout << "TC14: " << (validExpression("(*)") == true) << endl;
cout << "TC15: " << (validExpression("(*)(*)") == true) << endl;
cout << "TC16: " << (validExpression("*") == true) << endl;
cout << "TC17: " << (validExpression("**") == true) << endl;
cout << "TC18: " << (validExpression("*)") == true) << endl;
cout << "TC19: " << (validExpression("*(((()())()))())") == true) << endl;
cout << "TC20: " << (validExpression("*()(") == false) << endl;
cout << "TC21: " << (validExpression("**()(") == false) << endl;
cout << "TC22: " << (validExpression("**(**)*(") == false) << endl;
cout << "TC23: " << (validExpression(")**") == false) << endl;
cout << "TC24: " << (validExpression("****()))))") == true) << endl;
cout << "TC25: " << (validExpression("****())))") == true) << endl;
cout << "TC26: " << (validExpression("****())))))") == false) << endl;
cout << "TC27: " << (validExpression("***********************(((((()") == false) << endl;
cout << "TC28: " << (validExpression("*(((()())())())") == true) << endl;
cout << "TC29: " << (validExpression("(((()())()))())") == false) << endl;
cout << "TC30: " << (validExpression("*(((()())())*())") == true) << endl;
}

Chris, October 17, 2017

@kiran, I checked; there was a problem with the loop invariants when moving the window. I wrote it in a coffee break and didn't go through it thoroughly. It should be:
int countSubArrayProductLessThanK(const vector<int>& arr, int k)
{
if (arr.size() == 0) return 0;
int start = 0; // start of interval
int end = 1; // end of half-open interval [start, end)
long long prod = arr[0]; // current prod is arr[0] for [0, 1)
int count = 0; // no product found yet
while (start < end && end <= (int)arr.size()) {
if (prod <= k) { // if product fits in
count += end - start; // all subarrays with fitting product ending with element end-1
if (end < (int)arr.size()) prod *= arr[end]; // try to push end further and increase product
end++;
} else {
prod /= arr[start++];
// note: for the current end, I haven't found anything yet.
// I can't move end any more, but I can move start until I pass end or until
// the product falls below k
}
}
return count;
}

Chris, October 12, 2017

@Alex, that's a correct observation; your code is elegant and works correctly ;)
Depending on the interval definition, whether it's closed or open, and whether you want to merge those "connected" intervals, the case I mentioned is valid or not. It's more of a "think about it" to encourage thinking of the other case, not mentioned, which is
A = {[0, 1], [3, 5]},
B = {[0, 10]},
where the result would be {[0,1], [3,5]}
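For that conjunctive case with sorted interval lists, a two-pointer pass reproduces the stated result; a minimal sketch (function name and the closed-interval reading are my assumptions):

```cpp
#include <algorithm>
#include <utility>
#include <vector>
using namespace std;

// Two-pointer intersection ("and") of two sorted lists of closed intervals.
vector<pair<int, int>> intersectIntervals(const vector<pair<int, int>>& a,
                                          const vector<pair<int, int>>& b)
{
    vector<pair<int, int>> out;
    size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        int lo = max(a[i].first, b[j].first);
        int hi = min(a[i].second, b[j].second);
        if (lo <= hi) out.push_back({ lo, hi }); // overlapping part, if any
        (a[i].second < b[j].second ? i : j)++;   // advance the interval ending first
    }
    return out;
}
```

With A = {[0,1], [3,5]} and B = {[0,10]} this returns {[0,1], [3,5]}, as stated above.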
@studip.innovates: I think you can move the summing up into the start of the recursion, so you don't need to do it redundantly for left and right (not absolutely sure about the Java syntax), which is neater.
public static void max(T node, int l, Map<Integer, Integer> map)
{
if(node == null) return;
Integer sum = map.get(l);
map.put(l, sum == null ? node.value : node.value + sum);
max(node.left, l+1, map);
max(node.right, l+1, map);
}
Alternatively, you can use a vector instead of a hash map; the hash map is more convenient to use here, but a vector would be faster by a constant factor, with the same O(n) time complexity.
Another alternative: do it without recursion, to spare the final step of stepping through the map/vector. Note the different space requirements: the recursive code needs O(h), whereas the non-recursive, queue-based solution needs O(n), where n is the number of elements and h is the height of the tree.
int minLevelSum(Node *root)
{
if(root == nullptr) return 0; // avoid endless loop below
size_t level = 1; // root is level 1 as "defined" in the question
int levelSum = 0; // current level's sum
int minLevelSum = numeric_limits<int>::max();
int minLevelSumLevel = 0;
deque<Node*> q({root, nullptr}); // just a queue, where null marks end of current level
while(true) { // last node is always nullptr
Node* node = q.back();
q.pop_back();
if(node == nullptr) {
if(levelSum < minLevelSum) {
minLevelSum = levelSum;
minLevelSumLevel = level;
}
if(q.empty()) break; // last level, stop here
q.push_front(nullptr); // level separator for next level
levelSum = 0; // start new level, sum = 0
level++;
} else {
levelSum += node->value_;
if(node->left_ != nullptr) q.push_front(node->left_);
if(node->right_ != nullptr) q.push_front(node->right_);
}
}
return minLevelSumLevel;
}

Chris, October 12, 2017

@PenChief: agree, except that to calculate a single-digit prefix you don't need a loop.
// for number > 0
uint div = pow(10, static_cast<int>(log10(number))); // static_cast<int> to floor the double
prefix = number / div; // e.g. from 91234 -> 9
suffix = number % div; // e.g. from 91234 -> 1234
or suffix = number % 10; // if you only want the '4' as in your code
Another detail: the question didn't say the number itself must be prime either, so I wouldn't agree with your first if statement. E.g. 21 is no prime, but 2 is a prime and 1 is a prime.
Chris, October 12, 2017

Probably there is no parent pointer in the node, since the problem would be trivial that way (as mentioned in the question).
Maybe some observations are interesting:
- If the fire starts at the root, it takes the tree's height (in time) to burn down the whole tree.
- If the fire starts on the left branch of the root, it takes max(height of the left branch, 2 + height of the right branch).
- We can recursively formulate the problem for each root:
if it starts the fire:
burntime = max(height(left) + 1, height(right) + 1)
if it doesn't start the fire, but one of its subtrees starts the fire:
burntime = max(burntime of the fire-starting subtree, height of the other subtree + 1 + distance from the fire-starting node + 1)
if it doesn't start the fire and neither the left nor the right subtree starts the fire:
return the height (max of left height and right height)
In code (if I didn't miscount; that's the kind of thing one should check carefully, but I actually only wanted to post the idea):
struct BurnDownState {
int height; // height of (sub)tree
int distFromFireStart; // if tree starts fire, distance to the fire starting node
int burnDownTime; // time needed to burn that tree
};
BurnDownState burnDownTimeRec(Node* root, Node* fireStart)
{
if(root == nullptr) return {0, -1, -1};
auto ls = burnDownTimeRec(root->left, fireStart);
auto rs = burnDownTimeRec(root->right, fireStart);
if(root == fireStart) {
return {1 + max(ls.height, rs.height), 0, max(ls.height, rs.height)};
}
if(rs.distFromFireStart >= 0) swap(ls, rs); // avoid redundant code below for rs
if(ls.distFromFireStart >= 0) {
return {1 + max(ls.height, rs.height), ls.distFromFireStart + 1,
max(ls.burnDownTime, ls.distFromFireStart + 1 + rs.height)};
}
return {1 + max(ls.height, rs.height), -1, -1};
}
int burnDownTime(Node* root, Node* fireStart)
{
return burnDownTimeRec(root, fireStart).burnDownTime;
}

Chris, October 11, 2017

I interpret the question as follows:
- suppose number n is split into prefix p and suffix s in such a way that p * 10^(log(s)+1) + s = n (assuming log returns the integer floor of the 10-based logarithm)
e.g. n = 192 -> {(p=1, s=92), (p=19, s=2)}
- it's a super-prime if and only if isPrime(s) and isPrime(p)
For a number n, there are log(n) such combinations.
I just create a loop which checks for each split whether s and p are prime. I could "fast-check" before calling isPrime and only do it if s%2 != 0 and p%2 != 0, and thus avoid at least one potentially expensive call, but I suppose that's not the big catch here.
bool isSuperPrime(unsigned int n)
{
unsigned int prefix = n / 10; // e.g. 19
unsigned int suffix = n % 10; // e.g. 2
unsigned int mul = 10;
while(prefix != 0) {
if(isPrime(suffix) && isPrime(prefix)) return true;
suffix += (prefix % 10) * mul; // increase suffix, e.g. 9*10 + 2 = 92
prefix /= 10; // reduce prefix, e.g. 1
mul *= 10;
}
return false;
}

Chris, October 11, 2017

1) Offline algo: check each file for whether it has changed since the last sync; keep the sync state on the master. Easy and robust; slow, with long replication delays.
2) Online, non-transactional: monitor filesystem changes and replicate asap. "Asap" is the challenge if the slave is not reachable, so add a persistent queue on the master or a reliable 3rd system. Needs a setup and resync to initialize; depends on the transaction safety of the queue.
3) Online, transactional: hook into the filesystem and only accept a master write if the slave write was accepted (2-phase commit): transactional, but introduces a runtime dependency from master to slave, which does not scale.
Alternative thoughts: issue transactions on both, use quorums and vector clocks; optimize and trade off write speed, read speed, and reliability requirements. Hard to do generically (at the filesystem level), as you might end up with full ACID requirements: look at the specific use case, if possible.
@mani0119: You didn't see the values, probably due to the non-ascending order...
But since we are now clear on the problem: there's no linear solution, it's clearly exponential, with an upper bound (not tight though) of O(3^n). There must be a misunderstanding, or maybe the interviewer tested you on some behavioral thing?
Here is some code with sample output to keep it less abstract; the exponential nature is obvious.
#include <vector>
#include <iostream>
using namespace std;
vector<vector<int>> getJumbledNumbers(int n)
{ // n being the number of digits
vector<vector<int>> results;
if (n == 0) return results;
for (int i = 1; i < 10; ++i) {
results.push_back({ i });
}
for (size_t i = 1; i < n; ++i) {
size_t lastResultCount = results.size();
for (size_t j = 0; j < lastResultCount; ++j) {
int previous = results[j].back();
results[j].push_back(previous); // 1,1
if (previous < 9) { // 1,2 .. 8,9
vector<int> new_res(results[j]);
new_res.back()++;
results.push_back(move(new_res));
}
if (previous > 0) { //1,0 .. 9,8
vector<int> new_res(results[j]);
new_res.back()--;
results.push_back(move(new_res));
}
}
}
return results;
}
// helper to print
void printJumbledNumbers(int n) {
auto results = getJumbledNumbers(n);
cout << results.size() << " jumbled numbers with " << n << " digits";
if (results.size() < 90) {
cout << ": ";
for (auto& result : results) {
for (auto& digit : result) {
cout << digit;
}
cout << " ";
}
}
cout << endl << endl;
}
int main()
{
for (int i = 1; i < 10; ++i) {
printJumbledNumbers(i);
}
}
/* output
9 jumbled numbers with 1 digits: 1 2 3 4 5 6 7 8 9
26 jumbled numbers with 2 digits: 11 22 33 44 55 66 77 88 99 12 10 23 21 34 32 45 43
56 54 67 65 78 76 89 87 98
75 jumbled numbers with 3 digits: 111 222 333 444 555 666 777 888 999 122 100 233 211
344 322 455 433 566 544 677 655 788 766 899 877 988 112 110 223 221 334 332 445 443 556
554 667 665 778 776 889 887 998 123 121 101 234 232 212 210 345 343 323 321 456 454 434
432 567 565 545 543 678 676 656 654 789 787 767 765 898 878 876 989 987
217 jumbled numbers with 4 digits
629 jumbled numbers with 5 digits
1826 jumbled numbers with 6 digits
5307 jumbled numbers with 7 digits
15438 jumbled numbers with 8 digits
44941 jumbled numbers with 9 digits
*/

Chris, October 08, 2017

@studip.innovates:
my thoughts on your documented output:
- 1 is not of n=3 digits unless you interpret it as 100, but then 2 as 200 would violate the "1" difference between 2 and 0
- if it should be 001, a general question is whether "001" has three digits or one, and "002" would have the same issue as "200"
- "101" is missing, as well as "121", "132", etc.
... it seems we have a different understanding of the problem.
The first digit can be anything from 1..9; the next digit can be the previous, previous-1, or previous+1, and so on. There are O(9*3^(n-1)) jumbled numbers (lines of output); how should one produce this amount of output with an O(n) or O(lg(n)) algo? Was the question maybe to return how many jumbled numbers exist?
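If only the count is wanted, a digit DP answers it in O(n) constant-width steps; a sketch (names mine) that reproduces the counts from the enumeration above (9, 26, 75, 217, ...):

```cpp
#include <vector>
using namespace std;

// Digit DP: ways[d] = number of jumbled numbers of the current length ending
// in digit d; each length extends the previous one from digits d-1, d, d+1.
long long countJumbled(int n)
{
    if (n <= 0) return 0;
    vector<long long> ways(10, 1);
    ways[0] = 0; // no leading zero
    for (int len = 2; len <= n; ++len) {
        vector<long long> next(10, 0);
        for (int d = 0; d <= 9; ++d) {
            next[d] = ways[d];
            if (d > 0) next[d] += ways[d - 1];
            if (d < 9) next[d] += ways[d + 1];
        }
        ways = next;
    }
    long long total = 0;
    for (long long w : ways) total += w;
    return total;
}
```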
Chris, October 08, 2017

If the interviewer tells you "it's not sorted" and the first step you do is "sort", or you sort over time using a heap, you might lose him/her. At least it's a strong indication that you should also think about a solution that does not include sorting (since the sorting one is very obvious).
A similarly obvious alternative is finding out whether you have discrete time and can walk through the time in, let's say, m steps; if m << n (time is n*log(n)), then you might use or indicate a different approach than priority queues (aka sorting, heaps):
1) run through the n intervals of the first list and put each [start time .. end time] into a hash set. Optimize it by keeping a max-end value and only putting in [max-end .. end time]. So you end up with an O(n+m) loop.
2) do the same for the second list, but instead of putting into a hash set, check the first hash set for whether an interval existed at that time and, depending on the or/and operation, extend, end, or start a new result interval.
You end up with an O(n+m) time and O(m) space complexity algorithm that might significantly over- or under-perform the heap/sorting solution, depending on the problem constraints given.
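Steps 1) and 2) can be sketched as follows for the "and" operation, assuming closed integer-time intervals (function name and that assumption are mine):

```cpp
#include <climits>
#include <unordered_set>
#include <utility>
#include <vector>
using namespace std;

// Discretized-time intersection: mark every covered time step of list a in a
// hash set, then scan list b and emit maximal runs of covered steps.
vector<pair<int, int>> intersectDiscrete(const vector<pair<int, int>>& a,
                                         const vector<pair<int, int>>& b)
{
    unordered_set<int> covered;
    for (auto& iv : a)
        for (int t = iv.first; t <= iv.second; ++t) covered.insert(t);
    vector<pair<int, int>> out;
    for (auto& iv : b) {
        int runStart = INT_MIN; // INT_MIN = currently not inside a run
        for (int t = iv.first; t <= iv.second; ++t) {
            if (covered.count(t)) {
                if (runStart == INT_MIN) runStart = t;
            } else if (runStart != INT_MIN) {
                out.push_back({ runStart, t - 1 });
                runStart = INT_MIN;
            }
        }
        if (runStart != INT_MIN) out.push_back({ runStart, iv.second });
    }
    return out;
}
```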
Always, state your assumptions and practice by making assumptions conscious.
a) shall we only create distinct permutations? e.g. from "aaaa" -> 1, from "aab" -> 2
b) in the sample, the first element "2" is actually swapped twice:
12345
*2*1345, swapped 2 with 1
13*2*45, swapped 2 with 3
thus I conclude the task is to swap an element only once per permutation
as a result the recursion would look like
curly brackets "{}" indicate the optional conditions if only distinct permutations shall be created.
dp(i) = 0, if i < 0
= 1, if i == 0
= dp(i-1) {if dp[i] == dp[i-1]}
= dp(i-2) + 1 + dp(i-1) - 1 {if dp[i] != dp[i-1]} ==> dp(i-2) + dp(i-1)
reasoning:
dp(i) = dp(i-2) + 1: if the elements are different they can be swapped, thus extending the dp(i-2) permutations with one option. This path counts e.g. "(123)+54" and "(123)+45"
dp(i) = dp(i-1) - 1: this counts "(1234)+5", but because "(1234)+5" = "(123)+45" I have to remove one permutation, because I already counted that with (123)+45
@kustav.adorable: I think that answers your question, too, right?
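For the non-distinct case the recurrence collapses to dp(i) = dp(i-1) + dp(i-2), i.e. a Fibonacci-style iteration; a sketch under that reading (function name is mine):

```cpp
// dp(0) = dp(1) = 1; dp(i) = dp(i-1) + dp(i-2) for i >= 2, per the
// recurrence above with all-distinct elements.
long long countSwapPermutations(int n)
{
    if (n < 0) return 0;
    long long prev2 = 1, prev1 = 1; // dp(0), dp(1)
    for (int i = 2; i <= n; ++i) {
        long long cur = prev1 + prev2;
        prev2 = prev1;
        prev1 = cur;
    }
    return n == 0 ? 1 : prev1;
}
```

For the sample 12345 (n = 5) this gives 8 permutations.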
Usually a subarray is a contiguous subarray; if not contiguous, one usually says subsequence. Thus there are O(n^2) such subarrays. One can now test all of those, or move a sliding window and solve it in O(n) if there are only positive integers in the array.
For the given sample, to briefly verify the interpretation and explain the algo:
start=0, end=0: {2} -> count = count + 1 = 1
start=0, end=1: {2,3} -> count = count + 2 = 3, from {2,3} or from {3}
start=0, end=2: {2,3,6} -> 2*3*6 > 10, start = 1
start=1, end=2: {3,6} -> 3*6 > 10, start = 2
start=2, end=2: {6} -> 6, count = count + 1 = 4, from {6}
For only positive integers with no 0's this would be:
for only positive integers with no "0"'s this would be:
int countSubArrayProductLessThanK(const vector<int>& arr, int k)
{
if (arr.size() == 0) return 0;
int start = 0;
int end = 1;
long long prod = arr[0];
int count = 0;
while (start < end && end <= arr.size()) {
if (prod <= k) {
count += end - start;
if (end < arr.size()) prod *= arr[end];
end++;
} else {
prod /= arr[start++];
}
}
return count;
}
If there are 0's, exclude them from the product and count them in the window. If more than one zero exists, the product will be 0. With negatives it's a bit trickier, because the adjustment of the window size doesn't work the same way anymore. Any ideas?
Chris, October 02, 2017

@Alex, hi, sorry to pinch you, but I consider this page a learning platform ... anyway, your solution is not O(n), because within your "n loop" you compare the "hash", which is a loop over the number of distinct characters, so it becomes O(n*f), where f is the number of distinct characters.
Rabin-Karp maintains a hash (some integer, comparable in O(1)) using modulo arithmetic ... you could do the same within your hash class, so it would become close to O(n) expected average case. But Rabin-Karp has the problem that if the hash matches, it has to deep-compare, so one needs to distinguish between worst-case and average-case behavior.
For pattern matching, RK is bad when the pattern occurs often in the search string (e.g. "a" in "aaaaaaaaaaa"), which is not the case here. So Rabin-Karp is a cool idea, but it will not work exactly as-is, because it takes the order into account. You would need to come up with a "flying" hash that ignores order (e.g. by applying a prime to each character ...). It's a cool idea!
The problem is finding at least one anagram of a string as a substring in another string.
An anagram can be any permutation of a string S. Obviously we could search all permutations in the next string L. This is the brute-force approach and maybe codeable in 15 minutes (probably with errors) if we accept creating repeated permutations. Then it's O(|S|! * |L| * |S|) if |L| >= |S|
(|X| denotes the length of the string X; a larger string is never an anagram of a shorter one).
Improved and easier to code: sort the characters of S, extract all substrings of size |S| from L, sort the characters of each substring, and compare the substring against the sorted S. This is easier to code and faster, but still not optimal.
Time complexity: O(|S|*lg(|S|) + (|L|-|S|)*|S|*lg(|S|) + |S|) = O((|L|-|S|)*|S|*lg(|S|))
Best is:
1) An anagram of a string P is any string with the same multiset of characters.
note: a set or multiset ignores the order.
2) A substring B is a contiguous interval of the string L with length |P|.
For all such substrings B in L: if multiset(B) = multiset(P), then at least one substring of L is an anagram of P.
How to code it:
1) build a frequency table F for P (how many times each character occurs)
2) initialize a substring B = empty string
3) for each character c in L:
append c to the substring, decrement the occurrence of c in F
if the substring is now larger than |P|, remove the first character from the substring and increment its occurrence in F
check whether all frequencies in F are 0; if so, it's a matching substring
Time complexity for a single pair of strings would be O(m + (n-m)) = O(n) if n >= m; that's better. In the given problem we do this at most |A|-1 times, where A is the given array of strings.
#include <string>
#include <vector>
#include <unordered_map>
#include <algorithm>
#include <iostream>
using namespace std;
// solution with sorting, O((|b|-|a|)*|a|*lg(|a|))
bool isAnagramSubstring_sort(const string& a, const string& b)
{
if (a.size() > b.size()) return false;
vector<char> av(a.begin(), a.end());
sort(av.begin(), av.end());
auto bb = b.begin();
auto be = next(b.begin(), a.size());
while(true) {
vector<char> sbv(bb, be);
sort(sbv.begin(), sbv.end());
if (equal(av.begin(), av.end(), sbv.begin())) return true;
if (be == b.end()) break;
++bb;
++be;
}
return false;
}
// O(|b|) solution
bool isAnagramSubstring(const string& a, const string& b)
{
if (a.size() > b.size()) return false;
unordered_map<char, int> am;
for_each(a.begin(), a.end(), [&am](char v) { am[v]++; });
int mismatch_ct = a.size();
auto be = b.begin();
auto bb = b.begin();
while (be != b.end()) {
int c = am[*be]; // *be: element entering the window
am[*be] = c - 1;
mismatch_ct += abs(c - 1) - abs(c);
if (be - bb >= (int)a.size()) { // *bb: is an element leaving?
c = am[*bb];
am[*bb] = c + 1;
mismatch_ct += abs(c + 1) - abs(c);
++bb;
}
if (mismatch_ct == 0) return true;
++be;
}
return false;
}
// driver
int countNoNextAnagramOfPrev(const vector<string>& arr)
{
int count = 0;
for (size_t i = 1; i < arr.size(); ++i) {
if (isAnagramSubstring(arr[i - 1], arr[i])) count++; // or isAnagramSubstring_sort
}
return count;
}
int main()
{
cout << countNoNextAnagramOfPrev({ "ab", "ab", "abc", "bca" }) << endl;
cout << countNoNextAnagramOfPrev({ "ab", "aqb" }) << endl;
}

Chris, September 30, 2017

Edit: cleaned up code.
The idea of returning only unique elements is to prevent duplicates from entering the heap, because once there, they are ordered at O(log(m)) cost just to be ignored afterwards (which is unnecessary work).
I did it using a set containing all "next values" of the current positions in the arrays (probably streams in reality). When advancing in an array, I check whether the element I am about to put into the heap is already in the set. If so, I advance further until I find an element that is not already in the heap. This way, skipping a single duplicate takes O(1) vs. O(lg(m)) when entering the heap.
#include <vector>
#include <algorithm>
#include <iostream>
#include <unordered_set>
#include <cassert>
using namespace std;
class Iterator
{
private:
// struct that is used within the heap; contains a reference to the "array"
// and its current position (basically a reference to a stream)
struct StreamRef
{
const vector<int>* array_; // reference to array
size_t pos_; // index in array
void next() { pos_++; }
bool isValid() const { return pos_ < array_->size(); }
int value() const { return array_->at(pos_); }
bool operator < (const StreamRef& rhs) const { return value() > rhs.value(); }
};
vector<vector<int>> arrays_; // copy of arrays to "simulate" streams
vector<StreamRef> streams_; // heap of streams
unordered_set<int> headValues_; // all valid values of current streampositions
public:
Iterator(const vector<vector<int>>& arrays) : arrays_(arrays) {
for (auto& arr : arrays_) {
StreamRef first{ &arr, static_cast<size_t>(-1) }; // -1: advanceToNextUniqueValue() increments to position 0 first
if (advanceToNextUniqueValue(first)) streams_.push_back(first);
}
make_heap(streams_.begin(), streams_.end());
}
// is there a next element
bool hasNext() const { return streams_.size() > 0; }
// get next element
int getNext()
{
assert(hasNext());
pop_heap(streams_.begin(), streams_.end());
auto& next = streams_.back();
int current = next.value();
if (advanceToNextUniqueValue(next)) push_heap(streams_.begin(), streams_.end());
else streams_.pop_back(); // this stream is exceeded
return current;
}
private:
// advances stream until it encounters an element that is not already in
// the heap. Maintains the headValues_ set containing all values in the heap
bool advanceToNextUniqueValue(StreamRef& e)
{
bool removePrevious = e.isValid();
int previousValue = removePrevious ? e.value() : 0;
do { e.next(); } while (e.isValid() && headValues_.count(e.value()) > 0);
if (removePrevious) headValues_.erase(previousValue);
if (e.isValid()) headValues_.insert(e.value());
return e.isValid();
}
};
int main()
{
Iterator it({ {1,2,3,4,5,10},{1,2,2,7,7,11},{0,9},{6,22},{1,1,1,1,1} });
while (it.hasNext()) {
cout << it.getNext() << ", ";
}
}

Chris, September 30, 2017

Code for the above, without the duplicate extension:
#include <vector>
#include <algorithm>
#include <iostream>
using namespace std;
class Iterator
{
private:
const vector<vector<int>> arrays_; // (lame) copy of input arrays
vector<pair<size_t, size_t>> offsets_; // arrayidx, arraypos
struct HeapComparer {
Iterator* it_;
HeapComparer(Iterator* it) : it_(it) { }
bool operator()(const pair<int, int>& a, const pair<int, int>& b) {
return it_->arrays_[a.first][a.second] > it_->arrays_[b.first][b.second];
}
};
public:
Iterator(const vector<vector<int>>& arrays) : arrays_(arrays) {
reset();
}
bool hasNext() const { return offsets_.size() > 0; }
int getNext() {
pop_heap(offsets_.begin(), offsets_.end(), HeapComparer(this));
pair<int, int> smallest = offsets_.back();
if (smallest.second < arrays_[smallest.first].size() - 1) {
offsets_.back().second++; // more to fetch
push_heap(offsets_.begin(), offsets_.end(), HeapComparer(this)); // bubble element up (O(lg(m)))
} else {
offsets_.pop_back(); // this array is exceeded
}
return arrays_[smallest.first][smallest.second];
}
void reset() {
for (size_t i = 0; i < arrays_.size(); ++i) {
if (arrays_[i].size() > 0) {
offsets_.push_back({ i, 0 });
}
}
make_heap(offsets_.begin(), offsets_.end(), HeapComparer(this));
}
};
int main()
{
Iterator it({ {1,2,3,4,5,10},{2,2,7,7,11},{0,9},{6,22} });
while (it.hasNext()) {
cout << it.getNext() << ", ";
}
}

Chris
September 29, 2017 use a binary min-heap to select the next lowest element. The heap holds the next element of each of the m arrays plus a pointer or index to its array, so when accessing the top of the min-heap one can as well access the array and fetch its next element if it's not at the end.
1) O(m*lg(m)) to build the heap the first time, assuming m is the number of arrays
2) in each iteration O(lg(m)) is required for "heapify"
3) total time is O(n*lg(m)), assuming n is the number of all distinct elements in the arrays
to remove duplicates: when taking an element, compare it against the last element fetched. This is inefficient when long runs of equal values exist, so we should instead skip elements when inserting into the heap (when fetching the next element from an array). If very long runs of equal elements are expected, we could use a binary search to skip them within the input array. Maybe implement a cache-friendly version of this.
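The binary-search skip mentioned above could look like this; a minimal sketch, where the function name `skipRun` is my own, not from the discussion:

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// given a position i in a sorted array, return the first position whose value
// differs from arr[i]; O(lg n) via binary search instead of stepping one by one
size_t skipRun(const vector<int>& arr, size_t i)
{
    return upper_bound(arr.begin() + i, arr.end(), arr[i]) - arr.begin();
}
```

When fetching the next element for the heap, one would jump to `skipRun(arr, pos)` instead of `pos + 1`, so a run of equal values costs O(lg n) instead of O(run length).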
@Alex, it's not O(N * target): assuming S is your target in your O(N*S) runtime observation.
- sequence {1,2,3,..,30}: N=30
- target = 10: N*target = 300, number of returned subsets: 10
- target = 20: N*target = 600, number of returned subsets: 64
- target = 50: N*target = 1500, number of returned subsets: 3351
- target = 100: N*target = 3000, number of returned subsets: 198732
(results are actually the same from your and my code)
you can't create more subsets than your upper bound allows time for.
As I mentioned, you can count in O(N*S), or you can decide whether any subset sums to target in O(N*S), but you can't create all subsets in that time, because their number grows faster than N*S.
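The counting variant mentioned above really is O(N*S); a minimal sketch, assuming distinct positive numbers, each usable at most once:

```cpp
#include <vector>
#include <cstdint>
using namespace std;

// dp[s] = number of subsets of nums summing to s; O(N*S) time, O(S) space.
// Assumes positive, distinct numbers.
uint64_t countSubsetSums(const vector<int>& nums, int target)
{
    vector<uint64_t> dp(target + 1, 0);
    dp[0] = 1; // the empty subset
    for (int num : nums) {
        for (int s = target; s >= num; --s) { // descending: each num used at most once
            dp[s] += dp[s - num];
        }
    }
    return dp[target];
}
```

Counting is cheap; it is only the enumeration of the actual subsets that can exceed the O(N*S) bound.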
cool question. It really depends how exact the comparison must be and what this is used for (e.g. fraud detection). An alternative to Alex's exact match is:
1) extract all words in any order and transform them into word-ids using a growing dictionary.
2) create a vector for each page, where the word-id is the index and the value is the number of occurrences of this word.
3) apply a similarity measure between the vectors (like Pearson similarity etc.); if you get a very high similarity, there is a very high chance the pages are equal.
This method is harder to fake if e.g. somebody has a high interest in bypassing the "exact match" method, because "same text" might have meant "looks the same for the user", and there the order of leaves definitely is not relevant, since the creator can apply any mean tricks to avoid being recognized as a duplicate.
If you want some tolerance in spelling as well, you might transform the words into phonetic versions (where similar sounding words get the same phonetic "value" and thus count as the same word). ...
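Step 3 could be sketched like this, using cosine similarity instead of Pearson (Pearson would additionally mean-center the vectors); the word-count vectors from steps 1 and 2 are assumed to be already built:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
using namespace std;

// cosine similarity between two word-count vectors (index = word-id,
// value = occurrence count); for count vectors the result is in [0, 1],
// values close to 1 suggest near-identical pages
double cosineSimilarity(const vector<double>& a, const vector<double>& b)
{
    double dot = 0, na = 0, nb = 0;
    size_t n = min(a.size(), b.size());
    for (size_t i = 0; i < n; ++i) dot += a[i] * b[i];
    for (double x : a) na += x * x;
    for (double x : b) nb += x * x;
    if (na == 0 || nb == 0) return 0; // define similarity to an empty page as 0
    return dot / (sqrt(na) * sqrt(nb));
}
```

A threshold close to 1 (e.g. 0.98) would then flag candidate duplicates for a closer look.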
and maybe the interesting part, a simple runtime comparison of the different algorithms (memo, is basically Alex's version)

numbers: {1, 3, 4, 5, 6, 15}
target: 15
algo brute force ran in 1.419 ms
algo dp (memo) ran in 5.286 ms
algo dp (p.poly) ran in 2.706 ms

numbers: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
target: 50
algo brute force ran in 353.478 ms
algo dp (memo) ran in 149.093 ms
algo dp (p.poly) ran in 15.262 ms

numbers: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}
target: 25
algo brute force ran in 4505.34 ms
algo dp (memo) ran in 36.832 ms
algo dp (p.poly) ran in 12.317 ms

numbers: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30}
target: 465
skip brute force, takes too long
algo dp (memo) ran in 244.602 ms
algo dp (p.poly) ran in 85.866 ms

numbers: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30}
target: 444
skip brute force, takes too long
algo dp (memo) ran in 257.729 ms
algo dp (p.poly) ran in 95.207 ms

numbers: {1, 2, 10000, 20000, 30000}
target: 30003
algo brute force ran in 0.326 ms
algo dp (memo) ran in 1.561 ms
algo dp (p.poly) ran in 673.041 ms
as expected, brute force is quite slow in most cases, and the pseudo polynomial (ala knapsack) is not good on large values.
Chris
September 28, 2017 this is the iterative, pseudo-polynomial time algorithm (like knapsack) which has expensive costs when many results are generated in the backtracking step.
// reduced problem: only positive nums, positive target
#include <vector>
#include <stack>
#include <utility>
using namespace std;
struct StackItem {
vector<int> set; // elements chosen so far
int rem_sum; // remaining sum still to distribute
size_t dp_idx; // current row in the dp table
};
vector<vector<int>> all_subset_sums_dp(const vector<int>& nums, int target)
{
// go through all nums and check in how many ways target can be built
auto nums_set = remove_dublicates(nums);
vector<vector<bool>> dp(nums_set.size() + 1, vector<bool>(target + 1));
dp[0][0] = true; // base case
for (size_t i = 0; i < nums_set.size(); ++i) {
int num = nums_set[i];
dp[i + 1] = dp[i];
for (int s = num; s <= target; ++s) {
if (dp[i][s - num]) { // can extend using num
dp[i + 1][s] = true;
}
}
}
// backtrack to reconstruct the solutions
stack<StackItem> stack;
vector<vector<int>> results;
if (dp.back().back()) { // was there at least one solution
stack.push({ {}, target, dp.size() - 1 });
}
while (!stack.empty()) {
if (stack.top().rem_sum > 0) { // can stop here, no need to backtrack 'till row 1
--stack.top().dp_idx;
size_t dp_idx = stack.top().dp_idx;
int rem_sum = stack.top().rem_sum;
if (rem_sum >= nums_set[dp_idx] && dp[dp_idx][rem_sum - nums_set[dp_idx]]) { // sum has been achieved with nums_set[dp_idx]
if (dp[dp_idx][rem_sum]) { // sum has as well been achieved without it
stack.push({ stack.top() }); // clone
}
stack.top().rem_sum -= nums_set[dp_idx];
stack.top().set.push_back(nums_set[dp_idx]);
}
} else {
results.push_back(move(stack.top().set));
stack.pop();
}
}
return results;
}

Chris
September 28, 2017 this is the non-recursive version, based on NoOne's hint; time complexity: O(2^N)
vector<vector<int>> all_subset_sums_bf(const vector<int>& nums, int target)
{
// try all combinations in an efficient "binary counting way"
int sum = 0; // current sum
vector<int> nums_set = remove_dublicates(nums);
vector<vector<int>> results;
vector<bool> big_int(nums_set.size(), false); // specialized vector, is a bit vector
while (true) {
// do manual increment:
//  if I set a bit to 1, I add the nums_set[bitidx] to the sum
//  if I clear a bit, I subtract the nums_set[bitidx] from the sum
size_t idx;
for (idx = 0; idx < big_int.size(); idx++) {
if (big_int[idx]) {
sum -= nums_set[idx];
big_int[idx] = false;
} else {
big_int[idx] = true;
sum += nums_set[idx];
break;
}
}
if (idx >= big_int.size()) break; // all combinations tried
if (sum == target) {
results.push_back({});
for (idx = 0; idx < big_int.size(); idx++) {
if (big_int[idx]) {
results.back().push_back(nums_set[idx]);
}
}
}
}
return results;
}

Chris
September 28, 2017 there is a pseudo-polynomial algorithm for this. Basically you apply each coin as often as it is available
and mark each reachable target value by adding up the number of ways you can get there.
for the sample:
dp[0][0..S] = 0
dp[0][0] = 1, base case
- use coin with value 1
dp[1][0..S] = dp[0][0..S]
dp[1][1] = dp[0][1] + dp[0][0] = 0 + 1 = 1
- use coin with value 2
dp[2][0..S] = dp[1][0..S]
dp[2][2] = dp[1][2] + dp[1][0] = 0 + 1 = 1
dp[2][3] = dp[1][3] + dp[1][1] = 0 + 1 = 1
- use 1 coin with value 3
dp[3][0..S] = dp[2][0..S]
dp[3][3] = dp[2][3] + dp[2][0] = 1 + 1 = 2
dp[3][4] = dp[2][4] + dp[2][1] = 0 + 1 = 1
dp[3][5] = dp[2][5] + dp[2][2] = 0 + 1 = 1
dp[3][6] = dp[2][6] + dp[2][3] = 0 + 1 = 1
- use 2 coins with value 3 = value 6
dp[4][0..S] = dp[3][0..S]
dp[4][6] = dp[3][6] + dp[3][0] = 1 + 1 = 2
result = dp[4][6] = 2
in c++ 11
#include <vector>
#include <iostream>
using namespace std;
typedef unsigned int uint;
unsigned int numberOfWaysForChange(const vector<pair<uint, uint>>& coins, uint S)
{ // pair<int, int>: 1st: value, 2nd: count
if (S < 1) return 0;
vector<uint> dpc(S + 1, 0); // current
vector<uint> dpl(S + 1, 0); // last
dpl[0] = 1;
for (const auto& coin : coins) {
for (uint value = coin.first, c = 0; c < coin.second; value += coin.first, ++c) {
dpc = dpl; // every way to build s so far also exists without this coin
for (uint s = 0; s + value <= S; s++) {
if (dpl[s] == 0) continue;
dpc[s + value] += dpl[s]; // extend each way to s by this coin
}
swap(dpl, dpc);
}
}
return dpl.back();
}
int main()
{
cout << numberOfWaysForChange({ {1,1},{2,1},{3,3} }, 6) << endl;
}

Chris
September 27, 2017 what's the goal? Maximize the number of days the rooms are booked? Minimize the number of rejected bookings? Can we assume there are more bookings than we can serve with the two rooms? Can we further assume we have all bookings at hand and can accept or deny them?
Anyway, if there are only 10 upcoming bookings there are 2^10 = 1024 options, which I would just enumerate using brute force, maximizing whatever the question is supposed to maximize.
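A brute-force sketch under the stated assumptions; the `Booking` struct (half-open day intervals) and the objective (maximize booked days) are my own choices, as the question leaves them open. A subset fits into two rooms exactly when no day is covered by more than two of its bookings (interval graphs can always be colored with max-overlap many colors):

```cpp
#include <vector>
#include <map>
#include <algorithm>
using namespace std;

struct Booking { int start, end; }; // half-open [start, end)

// try all 2^n subsets; keep the best feasible one
int maxBookedDays(const vector<Booking>& bookings)
{
    int n = bookings.size();
    int best = 0;
    for (int mask = 0; mask < (1 << n); ++mask) {
        map<int, int> delta; // sweep-line events: +1 at start, -1 at end
        int days = 0;
        for (int i = 0; i < n; ++i) {
            if (mask & (1 << i)) {
                delta[bookings[i].start]++;
                delta[bookings[i].end]--;
                days += bookings[i].end - bookings[i].start;
            }
        }
        int overlap = 0;
        bool feasible = true;
        for (const auto& d : delta) {
            overlap += d.second;
            if (overlap > 2) { feasible = false; break; } // needs a 3rd room
        }
        if (feasible) best = max(best, days);
    }
    return best;
}
```

For a different objective (e.g. fewest rejections), only the `days` bookkeeping and the `max` would change; the enumeration stays the same.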
it's a duplicate question, but I feel honored it was adapted with this map representation ;)
Actually it is NP-hard and consists of two parts:
- find the shortest distances among all "@", S and G -> all-pairs shortest paths on the map.
- find the shortest route between S and G while visiting each "@". Think of all the "@" as cities and imagine the distances among the cities were already calculated. It's clear there are count(@)! (factorial) options. It's as well clear that if you calculated the shortest paths properly, the triangle inequality holds. There are many approximation algorithms for the problem you can use then. But the exact solution is NP-hard.
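For the first part, one BFS per point of interest yields the all-pairs distances; a minimal sketch, assuming '#' marks a wall and every other character is walkable:

```cpp
#include <vector>
#include <string>
#include <queue>
using namespace std;

// BFS from (sr, sc) over a grid; returns the distance to every cell,
// -1 meaning unreachable. Run once per '@', 'S' and 'G' to obtain the
// all-pairs shortest distances.
vector<vector<int>> bfsDistances(const vector<string>& grid, int sr, int sc)
{
    const int DR[] = { 0, 0, -1, 1 }, DC[] = { -1, 1, 0, 0 };
    int rows = grid.size(), cols = grid[0].size();
    vector<vector<int>> dist(rows, vector<int>(cols, -1));
    queue<pair<int, int>> q;
    dist[sr][sc] = 0;
    q.push({ sr, sc });
    while (!q.empty()) {
        auto u = q.front(); q.pop();
        for (int m = 0; m < 4; ++m) {
            int r = u.first + DR[m], c = u.second + DC[m];
            if (r >= 0 && r < rows && c >= 0 && c < cols
                && grid[r][c] != '#' && dist[r][c] == -1) {
                dist[r][c] = dist[u.first][u.second] + 1;
                q.push({ r, c });
            }
        }
    }
    return dist;
}
```

The second, NP-hard part then works purely on the resulting distance table between the count(@)+2 points of interest.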
@avv: you're wrong: stackoverflow.com/questions/10657503/find-running-median-from-a-stream-of-integers.
it's obvious that a single heap won't help, but it's a standard algorithm to calculate a running median with two heaps. One holds the upper part and the other the lower part of the values received from the stream. Please at least use google before posting such nonsense.
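The standard two-heap running median looks roughly like this (a sketch, not tied to any particular stream interface):

```cpp
#include <queue>
#include <vector>
#include <functional>
using namespace std;

// max-heap keeps the lower half, min-heap the upper half;
// sizes differ by at most one, so the median sits at the top(s)
class RunningMedian
{
    priority_queue<int> lo_; // lower half
    priority_queue<int, vector<int>, greater<int>> hi_; // upper half
public:
    void add(int value)
    {
        if (lo_.empty() || value <= lo_.top()) lo_.push(value);
        else hi_.push(value);
        // rebalance so that |lo_| == |hi_| or |lo_| == |hi_| + 1
        if (lo_.size() > hi_.size() + 1) { hi_.push(lo_.top()); lo_.pop(); }
        else if (hi_.size() > lo_.size()) { lo_.push(hi_.top()); hi_.pop(); }
    }
    double median() const
    {
        if (lo_.size() == hi_.size()) return (lo_.top() + hi_.top()) / 2.0;
        return lo_.top();
    }
};
```

Each `add` costs O(lg n), reading the median O(1).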
@Alex, that's correct, Netr added an extra "2" in between in the list. For the "132241043" pattern I agree, 6 combinations; for "1322241043" there are 10.
@studip.innovations: I think you are supposed to interpret the whole string and count after that. "132241043" definitely can't be interpreted as "1" unless you skip all following characters. "0" alone is no valid character, as the "0" comes from "10" and the range is 1..26, not 0..25.
@Netr: Do I interpret nCnv right if I assume you mean nC(n-v) = "choose (n-v) from n"?
for "1010" it has two valid groups, so it's 4C(4-2), which is 6 (type "4 choose 2" into wolfram alpha...). this is hard to understand, same with "1111" where the interpretations are (11)11; 1(11)1; 11(11); 1111 which are 4.
Two things I don't understand from your proposal:
1) "1010" can't be treated the same way as "1111"
2) the formula in general seems incomplete, because you don't choose 2 from 4; you choose 0, 2 or 4 from 4, but not exactly either, because you can't choose any 2, only indices 0,1 or indices 2,3 ... I don't understand how you come up with the nC(n-v).
same as above solution without O(n) space and less constant cost in the loop.
unsigned int countNumberOfWaysToDecode(const vector<unsigned char>& arr)
{
if (arr.size() == 0) return 1; // only interpretation is "empty" or "null"
unsigned int dpim2 = arr[0] > 0 ? 1 : 0; // dp[i-2]
unsigned int dpim1 = arr[0] > 0 && arr[0] < 10 ? 1 : 0; // dp[i-1]
unsigned int dpi; // dp[i]
for (size_t i = 1; i < arr.size(); ++i) {
dpi = (arr[i] > 0 && arr[i] < 10 ? dpim1 : 0) + (arr[i - 1] > 0 && arr[i - 1] * 10 + arr[i] < 27 ? dpim2 : 0);
dpim2 = dpim1;
dpim1 = dpi;
dpi = 0;
}
return dpim1;
}

Chris
September 26, 2017 There is a solution without merge, thus O(1) space complexity and a time complexity of something like O(lg(m)*lg(n)) or O(lg(m+n)) [depending on how smart the implementation is], which is basically a simultaneous binary search over the two arrays. It goes like this:
- since the arrays are sorted, you can jump into the middle of the first array, pick that element and binary search it in the second array; then you know the order of that element (e.g. k-th order). Since you are looking for element n/2, or n/2-1 and n/2, you can correct and repeat, but only consider the first array's upper or lower part etc.
- The corner cases are ugly, especially if you want to optimize constant factors etc.
if you are interested, there is the exact same question on leetcode, including explanations in quite some detail. However, you'll need to write it out on a piece of paper to fully understand the corner cases and cleanly code it.
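The simultaneous binary search can also be phrased as "k-th smallest of two sorted arrays", discarding about k/2 candidates per step; a sketch under that framing (the names are mine):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// k-th smallest (1-based) element of two sorted arrays in O(lg k):
// in each step, drop k/2 elements that cannot contain the answer
int kthOfTwo(const int* a, int m, const int* b, int n, int k)
{
    if (m > n) return kthOfTwo(b, n, a, m, k); // keep a the shorter array
    if (m == 0) return b[k - 1];
    if (k == 1) return min(a[0], b[0]);
    int i = min(m, k / 2), j = k - i;
    if (a[i - 1] < b[j - 1]) return kthOfTwo(a + i, m - i, b, n, k - i);
    return kthOfTwo(b + j, n - j, a, m, k - j);
}

double medianOfTwo(const vector<int>& a, const vector<int>& b)
{
    int total = a.size() + b.size();
    int hi = kthOfTwo(a.data(), a.size(), b.data(), b.size(), total / 2 + 1);
    if (total % 2 == 1) return hi;
    int lo = kthOfTwo(a.data(), a.size(), b.data(), b.size(), total / 2);
    return (lo + hi) / 2.0;
}
```

The corner-case handling (`m == 0`, `k == 1`, keeping `a` the shorter array) is exactly the ugly part hinted at above.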
assumption: "01" is not valid and does not mean "1"
I don't see a combinatorics approach that will lead to success, but it can be written as a recursion:
dp(i) = sum[
dp(i-1) if 0 < arr[i] < 10
dp(i-2) if i > 0 and 0 < (arr[i-1] * 10 + arr[i]) < 27
0 if arr[i] == 0 || (i > 0 && arr[i-1] * 10 + arr[i] > 26)
0 if i < -1
1 if i == -1
]
with a functional programming language that would be it.
With an imperative language, this can be implemented in O(n) time complexity and O(1) space complexity. Below the simplified version with O(n) time and O(n) space.
#include <vector>
#include <iostream>
using namespace std;
int countNumberOfWaysToDecode(const vector<unsigned char>& arr)
{
if (arr.size() == 0) return 1; // only interpretation is "empty" or "null"
vector<unsigned int> dp(arr.size(), 0);
// note the repeated comparisons for i > 0, i > 1 could be simplified by
// extracting the base case from the loop (which would be nicer, but less compact)
for (size_t i = 0; i < arr.size(); ++i) {
if (arr[i] > 0 && arr[i] < 10) {
dp[i] = (i > 0) ? dp[i - 1] : 1;
}
if (i > 0 && arr[i - 1] > 0 && (arr[i - 1] * 10 + arr[i]) < 27) {
dp[i] += (i > 1) ? dp[i - 2] : 1;
}
}
return dp.back();
}
// some test cases
int main()
{
cout << (countNumberOfWaysToDecode({ 2,6,2,6 }) == 4) << endl; // 2,6,2,6; 26,2,6; 26,26; 2,6,26
cout << (countNumberOfWaysToDecode({ 0,1,1 }) == 0) << endl; // invalid
cout << (countNumberOfWaysToDecode({ 1,1 }) == 2) << endl; // 1,1; 11
cout << (countNumberOfWaysToDecode({ }) == 1) << endl; // null
cout << (countNumberOfWaysToDecode({ 2,7 }) == 1) << endl; // 2,7;
cout << (countNumberOfWaysToDecode({ 2,1,1,1 }) == 5) << endl; // 2,1,1,1; 21,1,1; 21,11; 2,11,1; 2,1,11
}

Chris
September 26, 2017 I assume the sequence in which they move is not strictly alternating; otherwise the problem would be a bit primitive. E.g. best might be left, left, left = 3 moves for
{1,2,3,1,2,3,1,2,3}
so, it really depends from which side we approach, e.g.
{1,2,3,1,2,3,10,9,8,7,6,5,4,3,2,1,1,2,3,1,2,3}
is solved by left, left, right, right, right, right, right, right, right.
it seems best to approach it from the left and the right side and figure out where to stop so that the minimum of moves from left and right is needed.
I create two arrays with moves only from left and only from right and build the min sum of the two arrays A1, A2 as min(A1[i]+A2[i]) for all 0 <= i < n.
This is an O(n) time and O(n) space approach.
#include <vector>
#include <algorithm>
#include <iostream>
using namespace std;
int calcMinMoves(const vector<int>& skyline)
{
size_t n = skyline.size();
// from left
vector<unsigned int> costL(n, 1);
for (size_t i = 1; i < n; ++i) {
if (skyline[i  1] <= skyline[i]) {
costL[i] = costL[i  1];
} else {
costL[i] = costL[i  1] + 1;
}
}
// from right
// note: "from left" and "from right" should be placed in an auxiliary
// function, but for better visibility of the actual algo, I left it
// as redundant code
// the aux function had the following signature:
// vector<unsigned int> calcMovesFromOneSide(skyline, int start, int direction)
vector<unsigned int> costR(n, 1);
for (size_t i = n - 2; i < n; --i) { // unsigned wrap-around ends the loop at i == 0
if (skyline[i + 1] <= skyline[i]) {
costR[i] = costR[i + 1];
} else {
costR[i] = costR[i + 1] + 1;
}
}
// find min moves
costR[n  1] = 0; // special case when only from left (last on right, could be from left or right)
costL[0] = 0; // special case when only from right
unsigned int minMoves = n;
for (size_t i = 0; i < n; i++) {
minMoves = min(minMoves, costR[i] + costL[i]);
}
return minMoves;
}
int main()
{
cout << (calcMinMoves({1,2,3,1,2,3,1,2,3}) == 3) << endl;
cout << (calcMinMoves({ 3,2,1,3,2,1,3,2,1 }) == 3) << endl;
cout << (calcMinMoves({ 1,2,3,4,5,6,7,8,9 }) == 1) << endl;
cout << (calcMinMoves({ 1,2,3,4,5,4,3,2,1 }) == 2) << endl;
cout << (calcMinMoves({ 1,2,3,4,5,5,5,4,3,2,1 }) == 2) << endl;
cout << (calcMinMoves({ 1,1,1,1,1,1 }) == 1) << endl;
cout << (calcMinMoves({ 1,2,2,2,2,2 }) == 1) << endl;
cout << (calcMinMoves({ 2,2,2,2,2,1 }) == 1) << endl;
cout << (calcMinMoves({ 1,2,3,1,2,3,9,8,7,6,5,4,3,2,1,1,2,3,1,2,3 }) == 7) << endl;
}

Chris
September 24, 2017 You are looking for the largest connected component in that undirected graph.
There is a linear-time algorithm, linear in the number of vertices in this example (sparse graph). However, to find all vertices you need to look at least once at each element in the matrix; therefore it is O(n*m) if n, m are the dimensions of the matrix. It's easy to see that's the best conceivable runtime, thus no need for much improvement or dp.
Be aware you do not actually need to construct the graph.
I assume connected means here that the left, right, top or bottom square has an X; diagonals (bottom left, top right, etc.) are not considered. If that's wanted, just add these moves to the MOVES array below.
DFS can be implemented recursively and non-recursively. Below a non-recursive example in C++ 11, modifying the input array for simplicity to remember visited fields.
#include <vector>
#include <stack>
#include <algorithm>
#include <iostream>
using namespace std;
int getMaxShape(vector<vector<char>>& arr)
{
pair<int, int> MOVES[]{ { 0,-1 },{ 0,1 },{ -1,0 },{ 1,0 } };
int max_comp_size = 0;
for (size_t i = 0; i < arr.size(); ++i) {
for (size_t j = 0; j < arr[i].size(); ++j) {
if (arr[i][j] != 'X') continue;
arr[i][j] = '-'; // mark visited
int comp_size = 1;
stack<pair<size_t, size_t>> s({ {i, j} });
while (!s.empty()) {
auto u = s.top(); s.pop();
for (const auto& m : MOVES) {
pair<size_t,size_t> v { u.first + m.first, u.second + m.second };
if (v.first < arr.size() && v.second < arr[v.first].size()
&& arr[v.first][v.second] == 'X') {
arr[v.first][v.second] = '-'; // do not visit again
s.push(v);
comp_size++;
}
}
max_comp_size = max(max_comp_size, comp_size);
}
}
}
return max_comp_size;
}
int main()
{
vector<vector<char>> tc {
{'.', 'X', 'X', 'X', 'X', 'X', 'X', '.', '.', '.', '.' },
{'.', '.', '.', 'X', '.', '.', 'X', 'X', '.', '.', 'X' },
{'.', '.', '.', 'X', 'X', 'X', 'X', '.', '.', '.', '.' },
{'.', '.', 'X', '.', '.', '.', '.', '.', 'X', '.', '.' },
{'.', '.', 'X', 'X', 'X', '.', '.', 'X', 'X', '.', '.' },
{'.', '.', '.', '.', '.', 'X', 'X', '.', '.', '.', '.' }
};
cout << getMaxShape(tc);
}

Chris
September 21, 2017
@Alex: for n = 3, k = 2: "AAA", "EEE", "III", "OOO", "UUU" = 5
Chris
October 24, 2017 I don't see how your code comes to this result (if it does; as I read it, it returns 1). What did I miss?