Sergey
In-place reverse a sentence
You are given a sentence of English words with single spaces between them.
Nothing crazy:
1) no double spaces
2) no empty words
3) no spaces at either end of the sentence

void inplace_reverse(char* arr, int length) { /* your solution */ }
Example:
input "I wish you a merry Christmas"
output "Christmas merry a you wish I"
Constraints: O(1) additional memory
Facebook Software Engineer / Developer Algorithm
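The question above asks for a C implementation over a `char*` buffer, but the standard two-pass idea can be sketched in Python on a mutable list of characters: reverse the whole buffer, then reverse each word back in place. This is only a sketch of the technique, not the C solution the interviewer expects; the function name is mine.

```python
def inplace_reverse(chars):
    """Reverse the word order in place, using O(1) extra memory."""
    def reverse_range(lo, hi):
        # reverse chars[lo..hi] in place
        while lo < hi:
            chars[lo], chars[hi] = chars[hi], chars[lo]
            lo, hi = lo + 1, hi - 1

    reverse_range(0, len(chars) - 1)   # pass 1: reverse the whole buffer
    start = 0
    for i in range(len(chars) + 1):    # pass 2: reverse each word back
        if i == len(chars) or chars[i] == ' ':
            reverse_range(start, i - 1)
            start = i + 1

s = list("I wish you a merry Christmas")
inplace_reverse(s)
print(''.join(s))  # Christmas merry a you wish I
```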
The closest common ancestor in a tree forest.
class Node {
    Node* parent; // == NULL for the root of a tree
    Node* left;
    Node* right;
};
Node* tree_forest[]; // array of pointers to the roots of each tree
Node* closest_common_ancestor(Node* n1, Node* n2) { /* your solution */ }
Example:
    a       j
   / \     /
  b   c   h
 /   / \
d   e   f
for e and d CCA is a
for e and f CCA is c
for e and c CCA is c
for h and d CCA is null
Constraints: O(1) additional memory
Facebook Software Engineer / Developer Algorithm
If the array is sorted, this can be solved in O(n) time.
Python solution:
def pair_of_sum(A, S):
    left = 0
    right = len(A) - 1
    while left < right:
        current_sum = A[left] + A[right]
        if current_sum == S:
            return left, right
        if current_sum < S:
            left += 1
        if current_sum > S:
            right -= 1
    return None  # no such pair
If the array is not sorted, we need to sort it first, and the total complexity becomes O(n log n).
— Sergey, October 30, 2016

Could you please clarify the question.
Every element of this array has a "next" element to go to (including itself). So we can follow "next" starting from any element infinitely many times. Thus EVERY possible array has a loop, because the array is finite. The loop length can be anywhere from 0 to N (the whole array).
Which case should be considered a no-loop array?
For real?? And this is how they want to measure candidates? Shame...
1. Quicksort complexity (worst or average) belongs to the class of functions O(e^n), simply by definition of that class. In other words, Big-O notation means "grows asymptotically no faster than ...".
2, 3, 4. Disputable questions... From my perspective, Big-O notation / asymptotic complexity analysis is a very good way to compare different approaches. However, it is not the ONLY property that matters in algorithm complexity analysis.
I would mention algorithms with the same asymptotics but different constants.
5. Insertion sort will be good enough.
6. Counting sort, because of the small set of distinct Unicode chars.
P.S. These questions show nothing about a candidate's skills. Even if you don't know the answers you could still be a good candidate, and vice versa.
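To illustrate point 6, here is a minimal counting-sort sketch over characters (assuming, as stated, a small set of distinct characters); the function name is mine:

```python
def counting_sort_chars(s):
    """Sort a string's characters by counting occurrences.
    O(n + k log k), where k is the number of distinct characters;
    with a small alphabet this is effectively linear in len(s)."""
    counts = {}
    for ch in s:                       # count each character
        counts[ch] = counts.get(ch, 0) + 1
    # emit each distinct character, repeated by its count
    return ''.join(ch * counts[ch] for ch in sorted(counts))

print(counting_sort_chars("mississippi"))  # iiiimppssss
```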
@ChrisZ
The problem definition is very blurry from my perspective.
If you're allowed to tune the hash function until you're sure it has no collisions on this set... well, it might work.
But what if you get only one pass over the input?
I agree that the trie and compression approaches are not a great solution. But we have the strong constraint "no false positives", so we have to store some of the words without loss of information.
At the very least, I can construct a set of words with many hash collisions for the popular hashing algorithms. If the words are random in nature, the probability of collisions is low; but an artificially designed set of words would be a problem.
My conclusion for now: not enough information about this problem, and about the nature of the words in particular. What probability of a mistake is acceptable?
P.S. By "merging approaches" I mean: if the alphabet is wide but the words share a lot of common parts, you can switch to another alphabet, the way compression algorithms do, and then build the trie over the new alphabet.
But I have never done this in practice. Such tricks are usually used in math proofs.
P.P.S. If you build a trie, you don't always need pointers for all edges. You can build a quasi-trie, with sub-tries indexed inside small blocks, much like the functional blocks inside a CPU.
I hope you get the idea, because it's hard to describe with my level of English :)
This problem is not about hashing. Why? If you use a hash you can't guarantee "no false positives". That means if you say "YES" it must be 100% "YES". If you store words with loss of information, you can't say "yes, I'm sure".
PLEASE stop using hashes EVERYWHERE.
So we need a data structure that says "Yes, I'm sure" or "Probably no".
1st approach
Use a trie (aka digital tree / radix tree / etc.). Append as many words as you can. A trie is a good choice because of its good search time complexity, and because it is more memory-efficient than plain storage when words share common prefixes.
2nd approach
Use lossless compression (archiving) algorithms. Append as much as you can.
Pros: you will be close to the best possible memory efficiency.
Cons: veeeery sloooow.
Both approaches can be combined, depending on the nature of the data.
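A minimal sketch of the 1st approach: a trie stores shared prefixes once, and an exact lookup gives a guaranteed "yes" with no false positives. Class and function names here are mine.

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # edge character -> child node
        self.is_word = False

def trie_insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.is_word = True

def trie_contains(root, word):
    # Exact membership: a True here is a certain "yes".
    node = root
    for ch in word:
        node = node.children.get(ch)
        if node is None:
            return False
    return node.is_word

root = TrieNode()
for w in ["car", "card", "cat"]:   # "car"/"card"/"cat" share the "ca" prefix
    trie_insert(root, w)
print(trie_contains(root, "card"))  # True
print(trie_contains(root, "ca"))    # False (prefix, not a stored word)
```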
Dynamic programming:
Assume we have an array "a" of length n and a subarray "sa" of length k.
In the worst case this takes O(nk) operations.
Idea:
Walk through a, and at each step compute the closest position where the best match for sa[1] starts; then for sa[1]sa[2], and so on, up to sa[1]sa[2]...sa[k].
Example:
i  =  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
a  =  1  2  5  2  8  5  7  1  1  1  8  7  7  7  5  8  7
sa =  5  8  7

res[0]: -1 -1  2  2  2  5  5  5  5  5  5  5  5  5 14 14 14
res[1]: -1 -1 -1 -1  2  2  2  2  2  2  5  5  5  5  5 14 14
res[2]: -1 -1 -1 -1 -1 -1  2  2  2  2  2  5  5  5  5  5 14

At i = 6 we find solution (2, 5); at i = 11 solution (5, 7), but we already have a better one; at i = 16 solution (14, 3), and it is the best.
Hope you get the idea.
Code in Python 3:
def find(a, sa):
    res = [-1] * len(sa)
    min_i = -1
    min_len = -1
    for i in range(len(a)):
        for j in range(len(sa)):
            if a[i] == sa[j]:
                res[j] = res[j - 1] if j > 0 else i
                # count a full match only once the whole prefix has matched
                if j == len(sa) - 1 and res[j] != -1:
                    if min_len == -1 or min_len > i - res[j] + 1:
                        min_len = i - res[j] + 1
                        min_i = res[j]
        # print('{:3} {:3} {}'.format(a[i], i, res))
    return min_i, min_len
print(find([1,2,5,2,8,5,7,1,1,1,8,7,7,7,5,8,7], [5,8,7]))
print(find([1,2,5,2,2,8,5,1,1,7,1,8,7,7,7], [5,8,7]))
print(find([1,2,3,4,5], [1,2,3,4,5]))
print(find([1,2,3,4,5], [1,2,3,4,5,6]))
Output:
(14, 3)
(6, 7)
(0, 5)
(-1, -1)

Sergey
September 23, 2016

1st approach
1) sort the array using abs(x) as the key (the squares are then non-decreasing)
2) return [x**2 for x in array]
O(n log n) time
O(1) additional space
2nd approach
1) square all the negatives and reverse that part; square the positives
2) merge the two arrays as in merge sort, but only one pass is needed
O(n) time
O(n) additional space (because of merge)
Actually it IS possible to do the merge in linear time and constant extra space. But the algorithm is not trivial :)
Google for it: Bing-Chao Huang, Michael A. Langston, "Practical In-Place Merging" (1988).
So best solution will be:
O(n) time
O(1) extra space
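A sketch of the 2nd approach as described above (square the negative prefix and reverse it, square the rest, then one merge pass); the function name is mine:

```python
def sorted_squares(a):
    """a is sorted ascending and may contain negatives.
    The squares of the negative prefix, read right-to-left, are ascending;
    the squares of the non-negative suffix are already ascending.
    One merge pass gives O(n) time, O(n) extra space for the output."""
    split = 0
    while split < len(a) and a[split] < 0:
        split += 1
    neg = [x * x for x in reversed(a[:split])]   # ascending squares of negatives
    pos = [x * x for x in a[split:]]             # ascending squares of the rest
    result, i, j = [], 0, 0
    while i < len(neg) and j < len(pos):
        if neg[i] < pos[j]:
            result.append(neg[i])
            i += 1
        else:
            result.append(pos[j])
            j += 1
    return result + neg[i:] + pos[j:]            # append whichever side remains

print(sorted_squares([-4, -1, 0, 3, 5]))  # [0, 1, 9, 16, 25]
```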
UPD:
def merge_k(k, a, b):
    result = []
    i, j = 0, 0
    while i + j < k and i < len(a) and j < len(b):
        if a[i] < b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    while i + j < k and i < len(a):
        result.append(a[i])
        i += 1
    while i + j < k and j < len(b):
        result.append(b[j])
        j += 1
    return result
print(merge_k(4, [1,2,3,4], [2,3,4,5])) #[1, 2, 2, 3]
print(merge_k(8, [1,2,3,4], [2,3,4,5])) #[1, 2, 2, 3, 3, 4, 4, 5]
print(merge_k(15, [1,2,3,4], [2,3,4,5])) #[1, 2, 2, 3, 3, 4, 4, 5]
print(merge_k(15, [1], [2,3,4,5])) #[1, 2, 3, 4, 5]

Sergey
September 16, 2016

1) sort all words by length (if not already sorted)
2) sort the chars of each word, e.g.:
{a, b, ba, bca, bda, bdca} → {a, b, ab, abc, abd, abcd}
3) dynamic programming: add each word and check the longest chain over all previous words
f(word n+1) = max {f(n), f(n-1), ..., f(1)} + 1
{a} → {1}
{a, b} → {1, 1}
{a, b, ab} → {1, 1, 2}
{a, b, ab, abc} → {1, 1, 2, 3}
{a, b, ab, abc, abd} → {1, 1, 2, 3, 3}
{a, b, ab, abc, abd, abcd} → {1, 1, 2, 3, 3, 4}
The worst-case complexity is O(n^2 k), where n is the number of words and k is the length of the longest word.
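The steps above can be sketched in Python. One assumption on my part: the chaining condition (which the post leaves implicit) is that an earlier, shorter canonical word must embed in the later one as a subsequence, which matches the {a, ab, abc, abcd} example. Names are mine.

```python
def is_subseq(small, big):
    """True if `small` embeds in `big` as a subsequence."""
    it = iter(big)
    return all(ch in it for ch in small)  # `in` consumes the iterator

def longest_chain(words):
    """Canonicalize each word by sorting its characters, sort by length,
    then DP: f[i] = 1 + max f[j] over earlier words that embed in word i.
    Worst case O(n^2 * k)."""
    canon = sorted((''.join(sorted(w)) for w in words), key=len)
    f = [1] * len(canon)
    for i in range(len(canon)):
        for j in range(i):
            if len(canon[j]) < len(canon[i]) and is_subseq(canon[j], canon[i]):
                f[i] = max(f[i], f[j] + 1)
    return max(f, default=0)

print(longest_chain(["a", "b", "ba", "bca", "bda", "bdca"]))  # 4
```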
There are 52! possible permutations of 52 cards (all of them unique, of course).
Thus we need ceil(log_2(52!)) bits to encode such a number.
There are two possible situations:
1) You have a computer: simply evaluate this expression and you get 226.
2) You have no computer: use Stirling's approximation, n! ~ (n/e)^n * sqrt(2*Pi*n), and then take logarithms to finish the estimate.
I personally got 52! < 2^265, but maybe you will do better =)
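Both routes from the answer above can be checked in a few lines: the exact count, and the Stirling estimate, which for n = 52 lands on the same bit count.

```python
import math

# 1) With a computer: bits needed to index all 52! permutations.
exact = math.ceil(math.log2(math.factorial(52)))
print(exact)  # 226

# 2) Without one: Stirling's approximation n! ~ (n/e)^n * sqrt(2*pi*n),
#    taking log base 2 of both sides.
n = 52
stirling = n * math.log2(n / math.e) + 0.5 * math.log2(2 * math.pi * n)
print(math.ceil(stirling))  # 226
```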
This problem tests your knowledge of recursion pitfalls (e.g. stack overflow).
A naive recursive solution will work, but in the worst case it requires O(n) additional memory (when the tree is "a linked list", i.e. node->right == NULL for every node).
Here is a solution which is much more complicated, but works with O(1) additional memory and without recursion.
This solution is essentially a finite state machine.
void swap_branches(Node* node) {
    Node* tmp = node->left;
    node->left = node->right;
    node->right = tmp;
}

void mirror(Node* root) {
    if (root == NULL) return;
    Node *current, *previous;
    int state;
    previous = NULL;
    current = root;
    state = 1; // initial state
    while (true) {
        switch (state) {
        case 1: // try left
            if (current->left != NULL) {
                previous = current;
                current = current->left;
                state = 1;
            } else {
                state = 2;
                continue;
            }
            break;
        case 2: // try right
            if (current->right != NULL) {
                previous = current;
                current = current->right;
                state = 1;
            } else {
                state = 3;
                continue;
            }
            break;
        case 3: // mirror current node
            swap_branches(current);
            state = 4;
            break;
        case 4: // try parent
            if (current->parent != NULL) {
                previous = current;
                current = current->parent;
                if (previous == current->left) {
                    state = 2;
                } else {
                    state = 3;
                }
            } else {
                return; // done
            }
            break;
        default:
            break;
        }
    }
}
P.S. My C++ skills are far from perfect, but I hope you get the idea.
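For comparison, the naive recursive baseline mentioned above, sketched in Python with a simplified Node (no parent pointer, since recursion does not need one). It uses O(h) stack space, which is O(n) for a degenerate list-shaped tree.

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def mirror_recursive(node):
    """Swap the children, then recurse into each subtree."""
    if node is None:
        return
    node.left, node.right = node.right, node.left
    mirror_recursive(node.left)
    mirror_recursive(node.right)

left, right = Node(), Node()
root = Node(left, right)
mirror_recursive(root)
print(root.left is right)  # True: the children have been swapped
```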
— Sergey, January 19, 2015

The interviewer gave me a hint: it's better to first consider the case when n1 and n2 lie on the same level of the tree (in my example b, c, and h lie on the same levels of their trees).
Case 1.
Assume n1 and n2 lie on the same level.
In this case we can simply walk up the trees simultaneously.
When we reach n1 == n2, we are done.
Case 2.
1) Count the level of both nodes.
2) Align the levels by following the parent pointer of the deeper node.
3) Apply the Case 1 solution.
int node_level(Node* node) {
    int level = 0;
    while (node->parent != NULL) {
        node = node->parent;
        level++;
    }
    return level;
}

Node* closest_common_ancestor(Node* n1, Node* n2) {
    int n1level = node_level(n1);
    int n2level = node_level(n2);
    while (n1level < n2level) {
        n2 = n2->parent;
        n2level--;
    }
    while (n1level > n2level) {
        n1 = n1->parent;
        n1level--;
    }
    // if the nodes are in different trees we stop at
    // n1 == n2 == NULL, which is still a correct result
    while (n1 != n2) {
        n1 = n1->parent;
        n2 = n2->parent;
    }
    return n1;
}

Sergey
January 17, 2015
It's still not clear to me :)
— Sergey, October 30, 2016

Need more test cases:
[1, 1, 1, 1, 1, 1, 1] — loop? For me it's a full-array loop.
[1, 1, 1, 1, 1, 1, 0] — loop? For me it's a zero-length/self loop on the last element.
[1, 1, 1, 0, 1, 1, 1] — loop? Same as the previous, but some elements can't be reached from the first.
[1, 1, 1, 1, 1, 1, -2, 1, 1] — loop? A loop of length three.
[10] — loop? A full-array self loop :)
[] — loop?
From the math point of view (the theory of permutations), all of these examples have loops, of length anywhere from 0 to N (the whole array), except the last one, which is a degenerate/edge case.
Is it clear why I'm so confused? :)
Conceptually this is the problem of finding a loop in a singly linked list. Usually people solve such problems by reversing links, or by using two iterators (with step sizes 1 and 2, respectively).
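The two-iterator technique (Floyd's cycle detection) can be sketched directly on the array walk. I assume here that "next" means jumping by the stored offset modulo the array length, which matches the test cases above; since the walk on a finite array always enters a cycle, the two iterators must meet.

```python
def cycle_meeting_point(a):
    """Floyd's two-iterator check on the walk i -> (i + a[i]) % len(a).
    The slow pointer moves one step per round, the fast pointer two;
    they are guaranteed to meet on the cycle, and the meeting index
    (some index inside the cycle) is returned."""
    def step(i):
        return (i + a[i]) % len(a)
    slow = fast = 0
    while True:
        slow = step(slow)
        fast = step(step(fast))
        if slow == fast:
            return slow

# The "loop of length three" example: indices 4 -> 5 -> 6 -> 4.
print(cycle_meeting_point([1, 1, 1, 1, 1, 1, -2, 1, 1]))  # an index in {4, 5, 6}
```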