SycophantEve
Second year Computer Science/Mathematics dual major at Wayne State University.
Answers
Given an array that is increasing until a point and then decreasing, return the index of the number n in the array in sublinear time.
 SycophantEve in United States
Ex.
[1,2,5,8,13,9,3,1] n = 5
Output = 2
Next: Can you do it without finding the maximum element?
Algorithm
Can we assume that each node knows how many children it has, i.e., that they're stored in a vector of pointers rather than a raw array with no count? (C++)
 SycophantEve June 26, 2015
Can such a hashcode function exist? I mean, technically, the equal objects could be anywhere in an array of a different size, which I could shuffle at any moment. How do you deterministically get its index from a hash function that is run on a completely different array?
{a, b, c, d} {e, b, d, q, f, g, v}
vs
{a, b, c, d} {e,q, b, v, f, g, d}
vs
{a, b, c, d} {d,b,c,q,a}
The first array is exactly the same, so the hash keys should be the same. But the second one is completely different and you have no way of knowing that. How do you get the index from the hashcode? Note that the second array in the first example and the second array in the second example have the same lengths and thus hashcode%length would need to give you two different values on the same hashcode.
This is assuming that I understand you right in that you only hash the smaller array values and then try to "make indices come out of thin air" from that hash.
If the short array is m and the long array is n, step 2 would be n log m.
Yes binary search is O(log m) on the short array but you have to do it n times.
So it's O((n + m) log m).
Pretty much. The idea is, that given any array that is sorted in this way, we can find two things. 1. Which side the peak is on and 2. If we find an element greater than the element we're looking for, then the left side sub array is sorted in increasing order relative to what we're looking for and the right side is sorted in decreasing order relative to what we're looking for.
{1, 4, 5, 7, 8, 9, 10, 9, 6, 1}
So we check mid = 8
It's greater than 6 so we break up the arrays
{1, 4, 5, 7} {9, 10, 9, 6, 1}.
So even though either array may be unsorted, they still have the invariant required for binary search: on the right side, everything greater than the element we're looking for comes before our element.
Basically you do a binary search, moving closer to the peak each time the middle element is less than what we're looking for. Once we find any element greater than the searched-for element, we do an ascending-order binary search on the left half and a descending binary search on the right.
Basically the trick is, the sub-arrays are "sorted enough" as long as you split at an element greater than the searched-for element.
In the example above, you also notice that the right array is "sorted enough".
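A minimal sketch of the approach described above (the function names are mine, and this is my reading of the idea rather than production code):

```cpp
#include <vector>

// Ascending binary search on a[lo..hi].
int searchAsc(const std::vector<int>& a, int lo, int hi, int n) {
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == n) return mid;
        if (a[mid] < n) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
}

// Descending binary search on a[lo..hi].
int searchDesc(const std::vector<int>& a, int lo, int hi, int n) {
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == n) return mid;
        if (a[mid] > n) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
}

// Walk toward the peak until a[mid] > n, then search both halves:
// the halves are "sorted enough" relative to n, as argued above.
int bitonicSearch(const std::vector<int>& a, int n) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == n) return mid;
        if (a[mid] > n) {
            int left = searchAsc(a, lo, mid - 1, n);
            return left != -1 ? left : searchDesc(a, mid + 1, hi, n);
        }
        // a[mid] < n: step toward the peak.
        if (mid + 1 <= hi && a[mid] < a[mid + 1]) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
}
```

For the example at the top, bitonicSearch({1,2,5,8,13,9,3,1}, 5) returns 2.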
Given a piece of wood:
________________
We want to find the fewest, and cheapest, cuts for it.
We also want the largest possible piece to come out of a "free" cut (since one cut yields two pieces).
Let's say we have 21 and we want to cut (7,7,6).
7+7+6 = 20 < 21
21 0
subtract 7 and add to right side
14 7
subtract 7 and add to right side
7 14
subtract 6 add to right side
1 19
Now pick the split that minimizes |l - r|
which is 7 14
We already got 7 for "free"
Cost 14
Then recurse on the smallest side that you can take the largest cut of using the same technique:
14 0
7 7
0 14
min |l - r|: 7 7
7 0
0 7
etc.
Cost = 27
Let's say we cut (9, 8, 3, 1)
21 0
12 9
4 17
1 20
0 21
Take the one that minimizes |l - r| = 12 9
We get 9 for "free"
Cost 12
12 0
4 8
1 11
4 8
etc.
Total cost 12 + 8 + 3
(6,6,6)
21 0
15 6
9 12
3 18
Take the one that minimizes |l - r| = 9 12
Cost 12
9 0
3 6
0 9
min |l - r|: 3 6
12 0
6 6
0 12
min |l - r|: 6 6
Cost = 12 + 6 + 6
(1, 1, 1)
21 0
20 1
19 2
18 3
Minimize |l - r| = 18 3
cost 18 + 2 + 1
I believe this should work. The main idea is we want to make our first cut as close to n/2 as possible without making it impossible to get the correct lengths down the line. By finding the cuts in this way, we are guaranteed that, by continuing to cut, we can get the correct lengths afterwards.
Should work better than my previous attempt. Once again, if someone sees a problem, I would like to know.
It can be done without finding the peak, same complexity though. Only takes two binary searches instead of three, however.
Take, as an example, the array {1, 4, 5, 7, 10, 11, 9, 6, 1}
And, while looking for 6 through some variant of binary search, I get the two sub-arrays {1, 4, 5} {7, 10, 11, 9, 6, 1}. See if you notice something interesting about the arrays.
I don't believe they actually use a table, as a precalculated table for all possible numbers between [0, maxDouble] up to 6 digits of precision would take a gigantic amount of memory. A square root table maybe, as shown below, but you still need to know how to convert that to an actual power.
I personally believe they use an optimized, base 2, approximation algorithm similar to fast square root.
All Credit goes to Spektre on Stack Overflow, this was posted in the link I posted below.
I am using fixed-point long arithmetic and my pow is log2/exp2 based. Numbers consist of:
int sig = { -1; +1 } signum
DWORD a[A+B] number
A is the number of DWORDs for the integer part of the number
B is the number of DWORDs for the fractional part
My simplified solution is this:
//
longnum exp2 (const longnum &x)
{
    int i,j;
    longnum c,d;
    c.one();
    if (x.iszero()) return c;
    i=x.bits()-1;
    for (d=2,j=_longnum_bits_b;j<=i;j++,d*=d) // integer part
        if (x.bitget(j))
            c*=d;
    for (i=0,j=_longnum_bits_b-1;i<_longnum_bits_b;j--,i++) // fractional part
        if (x.bitget(j))
            c*=_longnum_log2[i];
    if (x.sig<0) { d.one(); c=d/c; }
    return c;
}
//
longnum log2 (const longnum &x)
{
    int i,j;
    longnum c,d,dd,e,xx;
    c.zero(); d.one(); e.zero(); xx=x;
    if (xx.iszero()) return c; //**** error: log2(0) = -infinity
    if (xx.sig<0) return c; //**** error: log2(negative x) ... no result possible
    if (d.geq(x,d)==0) { xx=d/xx; xx.sig=-1; }
    i=xx.bits()-1;
    e.bitset(i); i-=_longnum_bits_b;
    for (;i>0;i--,e>>=1) // integer part
    {
        dd=d*e;
        j=dd.geq(dd,xx);
        if (j==1) continue; // dd > xx
        c+=i; d=dd;
        if (j==2) break; // dd == xx
    }
    for (i=0;i<_longnum_bits_b;i++) // fractional part
    {
        dd=d*_longnum_log2[i];
        j=dd.geq(dd,xx);
        if (j==1) continue; // dd > xx
        c.bitset(_longnum_bits_b-i-1); d=dd;
        if (j==2) break; // dd == xx
    }
    c.sig=xx.sig;
    c.iszero();
    return c;
}
//
longnum pow (const longnum &x,const longnum &y)
{
    // x^y = exp2(y*log2(x))
    int ssig=+1; longnum c; c=x;
    if (y.iszero()) { c.one(); return c; } // ?^0=1
    if (c.iszero()) return c; // 0^?=0
    if (c.sig<0)
    {
        c.overflow(); c.sig=+1;
        if (y.isreal()) { c.zero(); return c; } //**** error: negative x ^ non-integer y
        if (y.bitget(_longnum_bits_b)) ssig=-1;
    }
    c=exp2(log2(c)*y); c.sig=ssig; c.iszero();
    return c;
}
//
where:
_longnum_bits_a = A*32
_longnum_bits_b = B*32
_longnum_log2[i] = 2 ^ (1/(2^(i+1))) ... precomputed sqrt table:
_longnum_log2[0] = sqrt(2)
_longnum_log2[1] = sqrt(tab[0])
_longnum_log2[i] = sqrt(tab[i-1])
longnum::zero() sets *this = 0
longnum::one() sets *this = +1
bool longnum::iszero() returns (*this == 0)
bool longnum::isnonzero() returns (*this != 0)
bool longnum::isreal() returns true if the fractional part != 0
bool longnum::isinteger() returns true if the fractional part == 0
int longnum::bits() returns the number of used bits in the number, counted from the LSB
longnum::bitget()/bitset()/bitres()/bitxor() are bit access
longnum::overflow() rounds the number if there was an overflow:
X.FFFFFFFFFF...FFFFFFFFF??h -> (X+1).0000000000000...000000000h
int longnum::geq(x,y) is a comparison of x,y; returns 0,1,2 for (<,>,==)
All you need to understand this code is that numbers in binary form consist of a sum of powers of 2. When you need to compute 2^num, it can be rewritten as
2^(b(-n)*2^(-n) + ... + b(+m)*2^(+m)), where the -n bits are fractional and the +m bits are integer. Multiplication/division by 2 in binary form is simple bit shifting, so if you put it all together you get code for exp2 similar to mine. log2 is based on changing the result bits from MSB to LSB until the value matches the searched-for value (a very similar algorithm to fast sqrt computation). Hope this helps clarify things.
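To make the bit-decomposition idea concrete, here is a minimal double-precision analogue (my sketch, not Spektre's fixed-point code): the integer bits are handled by square-and-multiply, and the fractional bits by walking the precomputed-sqrt chain 2^(1/2), 2^(1/4), ...

```cpp
#include <cmath>

// 2^x as a product over the set bits of x: for integer bit j the factor is
// 2^(2^j) (built by repeated squaring), for fractional bit i the factor is
// 2^(2^-i), i.e. sqrt(2), sqrt(sqrt(2)), ...
double exp2_bits(double x) {
    bool neg = x < 0;
    if (neg) x = -x;
    unsigned long long ip = (unsigned long long)x;  // integer part
    double frac = x - (double)ip;                   // fractional part
    double result = 1.0;
    double base = 2.0;                              // 2^(2^0), squared each step
    while (ip) {                                    // integer bits: square-and-multiply
        if (ip & 1ULL) result *= base;
        base *= base;
        ip >>= 1;
    }
    double factor = std::sqrt(2.0);                 // 2^(1/2), then 2^(1/4), ...
    for (int i = 0; i < 52; ++i) {                  // fractional bits, MSB first
        frac *= 2.0;
        if (frac >= 1.0) { result *= factor; frac -= 1.0; }
        factor = std::sqrt(factor);
    }
    return neg ? 1.0 / result : result;
}
```

exp2_bits(10.0) gives exactly 1024, and for fractional inputs it agrees with std::pow(2.0, x) to well within 1e-9.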
Note that the sqrt version is quick and dirty, as it's about 3 lines of code (6-7 if you have to write your own sqrt function, and it'll be slower).
Take a look here: http://stackoverflow.com/questions/2882706/how-can-i-write-a-power-function-myself
for different methods, including the square-root method, the logarithm/Taylor-series methods, and the code I posted above.
Yeah, for a^b where b is a real number. I find a^(integer portion of b) then multiply by a^(floating point portion of b).
To find the floating point portion we use the following observation:
a^b = sqrt(a^(2b))
a^b = sqrt(sqrt(a^(4b)))
a^b = sqrt(sqrt(sqrt(a^(8b))))
As an example, let's say 4b < 1 <= 8b;
then a^b = sqrt(sqrt(sqrt(a * a^(8b-1))))
.
.
Notice that this could continue infinitely, or at least way past the precision we care about, which is why we keep a precision counter. (If you plug in a giant number like 2000000 for precision in the ideone code below, you'll notice that for 1.7 the power actually converges to 1 around 40-60 iterations, so it stops on its own.)
For example, 5^1.7
5^1 * 5^.7
5^.7 = sqrt(5^1.4)
= sqrt(5^1*5^.4)
5^.4 = sqrt(5^.8) = sqrt(sqrt(5^1.6)) = sqrt(sqrt(5^1 * 5^.6))
Continue until you reach the level of precision desired at which point you just estimate with sqrt(5).
Unwinding gives us 5^1.7 = 5*sqrt(5*sqrt(sqrt(5*sqrt(5))))
The precision number could be replaced by a counter that counts down. For example, the log_2(1/0.00001) = 16.609... so I picked 20 in the ideone code below to be safe.
Here is edited C++11 code that, instead of outputting the answer, outputs the closed form 5*sqrt(5*sqrt(sqrt(....
Feel free to mess around with it:
(If you haven't used ideone before, just hit edit and under input type two numbers separated by a space: base exponent. I have it default to 5 1.7)
Note that the actual algorithm gives you NaN, just like std::pow, for negative bases with fractional exponents, but the string version will most likely just spit out the positive version with negatives in front of everything, incorrectly.
http://ideone.com/2mcE5h
I hope that helps.
Corrected the negative exponents. However, the more difficult part is the fractional exponents.
#include <iostream>
#include <cstdio>
#include <cmath>
#include <ctime>
float powe(float x, int exp)
{
    printf("%f %d\n", x, exp);
    if (exp < 0)
        return powe(1 / x, -exp);
    if (exp == 0)
        return 1;
    if (exp == 1)
        return x;
    if (exp % 2 == 0)
        return powe(x*x, exp / 2);
    else
        return x*powe(x*x, (exp - 1) / 2);
}
int main() {
std::cout << pow(5, 1.7);
std::cout << std::endl;
std::cout << powe(5, 1.7);
}
pow(5, 1.7) outputs 15.4258.
powe(5, 1.7) outputs 5.00001.
Think about a^4.3 = a^4 * a^.3,
where a^.3 = (a^.5)^.5 * a^.05 (that is, a^.25 * a^.05),
where a^.05 = ((((a^.5)^.5)^.5)^.5)^.5 * a^.01875 (that is, a^(1/32) * a^.01875).
Of course you'll never end this descent but you can get as close as desired. (6 decimals of precision)
It's not the fastest (you can do better with a Taylor-series expansion and logarithms), but using repeated square roots is a quick and dirty way to handle fractional (or irrational) exponents.
Disclaimer: Not my code - Square Root Method
#include <iostream>
#include <cmath>
// Not my code, edited to be more efficient
double sqr(double x) { return x * x; }
// meaning of 'precision': the returned answer should be base^x, where
// x is in [power - precision/2, power + precision/2]
double mypow(double base, double power, double precision)
{
    if (power < 0) return 1 / mypow(base, -power, precision);
    if (power >= 1) return base * mypow(base, power - 1, precision);
    if (precision >= 1) return sqrt(base);
    return sqrt(mypow(base, power * 2, precision * 2));
}
// End not my code
// My code
double mypowfast(double base, int power) {
    if (power == 0) return 1;
    if (power == 1) return base;
    if (power % 2 == 0) return mypowfast(base * base, power / 2);
    else return base * mypowfast(base*base, (power - 1) / 2);
}
// My code
double mypow(double base, double power) {
    if (power < 0) {
        power = -power;
        base = 1 / base;
    }
    int intpower = (int)power;
    double t = mypowfast(base, intpower);
    power -= intpower;
    return t*mypow(base, power, .000001);
}
int main() {
std::cout << mypow(5, 1.734) << std::endl;
std::cout << pow(5, 1.734) << std::endl;
}

SycophantEve
June 20, 2015 Here's working code of the algorithm:
#include <iostream>
#include <string>
// I tried to take into account all edge cases but I may have missed one.
std::string findMajoritySorted(int a[], int beg, int end) {
    if (beg < 0 || beg >= end) return "Invalid Input";
    int mid = beg + (end - beg) / 2;
    int candidate = a[mid];
    int startMid = mid;
    int end1 = end, beg1 = startMid;
    if (a[beg] == candidate && a[end - 1] == candidate) return std::to_string(candidate);
    if (a[mid - 1] != candidate && a[mid + 1] != candidate) return "No Majority by test.";
    int j = 0, k = 0;
    while (end1 >= beg1) {
        int mid = beg1 + (end1 - beg1) / 2;
        if (a[mid + 1] > candidate && a[mid] == candidate) {
            k = mid;
            break;
        } else if (a[mid] > candidate) {
            end1 = mid - 1;
        } else {
            beg1 = mid + 1;
        }
    }
    int end2 = startMid, beg2 = beg;
    while (end2 >= beg2) {
        int mid = beg2 + (end2 - beg2) / 2;
        if (a[mid - 1] < candidate && a[mid] == candidate) {
            j = mid;
            break;
        } else if (a[mid] < candidate) {
            beg2 = mid + 1;
        } else {
            end2 = mid - 1;
        }
    }
    return (k - j + 1 > ((beg + end) / 2) ? std::to_string(candidate) : "No Majority by binary search.");
}
int main() {
int sortedMajority[] = { 1, 1, 2, 2, 2, 2, 3 };
int sortedNotMajority[] = { 1, 1, 2, 2, 2, 3, 3 };
int NotMajority[] = { 1, 2, 3, 4 };
int singleElement[] = { 1 };
int fullDuplicatesElement[] = { 5,5,5,5 };
int twoElements[] = { 1, 2 };
std::cout << findMajoritySorted(sortedMajority, 0, 7) << std::endl;
std::cout << findMajoritySorted(sortedNotMajority, 0, 6) << std::endl;
std::cout << findMajoritySorted(NotMajority, 0, 4) << std::endl;
std::cout << findMajoritySorted(singleElement, 0, 1) << std::endl;
std::cout << findMajoritySorted(fullDuplicatesElement, 0, 1) << std::endl;
std::cout << findMajoritySorted(twoElements, 0, 2) << std::endl;
}

SycophantEve
June 20, 2015 Your code failed on
{1,1,2,2,2,2,3}, which has a majority element [2], but neither the start nor the end is the candidate,
and it's because of this line:
if (array[start] != candidate && array[end] != candidate) {
    return false;
}
There's no guarantee that the first or last element (even less so both) must be the candidate.
It also fails on
{ 1, 2, 2, 2, 2, 3, 3 };
Even with the incorrect line of code removed. You need to check how many of the candidate elements there are, which can be done by binary searching both sides of the array, still in O(log N).
 SycophantEve June 20, 2015
What about negative or fractional exponents? std::pow can handle those.
 SycophantEve June 20, 2015
No, as that is inefficient.
 SycophantEve June 20, 2015
If the array isn't sorted, see my answer in your other post. Short answer: no.
If the array is sorted, then the majority element is A[mid] if one exists. And I believe you'd be able to check whether it is the majority element in O(log N) by binary searching to find the length of the run.
{1,1,1,2,3}: 1 is the majority element A[mid], and you can binary search to find that there are three 1s, which is greater than n/2.
{1, 1, 2, 3, 3}: there is no majority element, and you can find this by binary searching to find that there is one 2, which is less than n/2.
You could also speed this up by noting that if A[mid-1] != A[mid] and A[mid+1] != A[mid], then it cannot have a majority element. However, the converse isn't true, i.e., if A[mid-1] == A[mid] == A[mid+1], it still might not have a majority element.
For example:
{1,1, 2, 2, 2, 2, 3, 3} passes the test but doesn't have a majority element.
If anyone could correct me if I'm wrong I would be much obliged.
I'd like to know as well. I doubt it, since the algorithm to find the majority element is O(n). If you look through the array and it's of even length, then the only way to know it doesn't have a majority element is if no adjacent pairs are equal; but some pairs may be equal and it still might not have a majority element. Thus in the best case (no pairs) you need to scan to find pairs, which is still O(n).
{1, 2, 3, 2, 4, 2} has no majority element for sure
{1, 2, 2, 3, 4, 2} might have a majority element (it doesn't)
{2, 1, 2, 3, 2, 5, 2} has a majority element and no pairs (odd length)
If it's odd, then the majority element is either one of the paired elements of the n-1 subarray, or the last element, which means no matter what you have to check the last element, which is O(n).
However, I could be wrong and would like to be corrected.
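For reference, the O(n) majority-element algorithm alluded to above is (I assume) Boyer-Moore voting; a sketch with the mandatory verification pass:

```cpp
#include <vector>

// Boyer-Moore voting: keep a candidate and a counter, then verify with a
// second pass, since pass 1 alone can nominate a non-majority element.
// Writes the majority element to `out` and returns true; false if none.
bool majorityElement(const std::vector<int>& a, int& out) {
    int candidate = 0, count = 0;
    for (int x : a) {                  // pass 1: pick a candidate
        if (count == 0) { candidate = x; count = 1; }
        else if (x == candidate) ++count;
        else --count;
    }
    int occurrences = 0;               // pass 2: verify it really is a majority
    for (int x : a)
        if (x == candidate) ++occurrences;
    if (occurrences * 2 > (int)a.size()) { out = candidate; return true; }
    return false;
}
```

For {1,1,2,2,2,2,3} it reports 2; for {1,2,3,2,4,2} it reports that there is no majority.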
Bug Found. Will Edit.
 SycophantEve June 19, 2015
Note any ugliness added beyond the two binary searches is due to the ugliness of C++ and deleting from a vector without doing a linear scan (I had to change from using indices to iterators). However, this function does the same thing as described in words above but uses binary search to do it in O(log N)**** time with O(1) space, assuming the intervals are sorted as stated in the problem. If not, then use the linear scan given above.
****Note I feel like it is cheating to say this can be done in O(log N) at all, as erasing from an array is O(n), and if one uses a linked list then searching is O(n) regardless of sorted order. However, this is still a "faster" O(n) algorithm than the previous one.
void interval(std::vector<std::pair<int, int>> &a, std::pair<int, int> interv) {
    std::vector<std::pair<int, int>>::iterator j = a.end(), k = a.end();
    int min = std::numeric_limits<int>::max();
    int max = std::numeric_limits<int>::min();
    int beg = 0;
    int end = a.size() - 1;
    while (end >= beg) {
        int mid = beg + (end - beg) / 2;
        if (interv.first <= a[mid].second && a[mid].second < min) {
            k = a.begin() + mid; // Sets the iterator to point to a[mid]
            min = a[mid].second;
            end = mid - 1;
        } else {
            beg = mid + 1;
        }
    }
    beg = 0;
    end = a.size() - 1;
    while (end >= beg) {
        int mid = beg + (end - beg) / 2;
        if (interv.second >= a[mid].first && a[mid].first > max) {
            j = a.begin() + mid; // Sets the iterator to point to a[mid]
            max = a[mid].first;
            beg = mid + 1;
        } else {
            end = mid - 1;
        }
    }
    if (j == a.end() || k == a.end()) {
        a.push_back(interv);
        return;
    }
    j->first = std::min(k->first, interv.first);
    j->second = std::max(j->second, interv.second);
    if (k != j) {
        a.erase(k);
    }
}

SycophantEve
June 18, 2015 I believe this simple O(n log n) solution with O(n) space should work: sort an array of pairs of (value, index). Find the largest consecutive subsequence sum such that index_1 < index_2 < index_3 ... < index_n.
Works for your test case as well as adding [1] to the beginning. However, I didn't fully test it out. If anyone could give a counter example, or give a faster algorithm I would like to know.
for (int i = 0; i < a.size(); ++i) {
    if (interv.first <= a[i].second && a[i].second < min) {
        k = i;
        min = a[i].second;
    }
    if (interv.second >= a[i].first && a[i].first > max) {
        j = i;
        max = a[i].first;
    }
}
if (j == -1 || k == -1) {
    a.push_back(interv);
    return;
}
This is the following:
Find the index into the array such that the lower bound of the added interval is less than the upper bound of another interval. If it's less than the upper bounds of more than one interval, take the one with the minimum upper bound.
At the same time, find the index into the array such that the upper bound of the added interval is greater than the lower bound of another interval. If it is greater than the lower bounds of more than one interval, take the one with the maximum lower bound.
If no such interval exists, simply add the new interval to the array.
auto i = a.begin();
a[std::max(k, j)].first = std::min(a[k].first, interv.first);
a[std::max(k, j)].second = std::max(a[j].second, interv.second);
if (k != j) {
    for (int v = std::min(k, j); v > 0; v--, i++);
    a.erase(i);
}
This code simply changes one of the intervals into the new one and then erases the other. The max and min functions, of course, return the max and min, and this handles the case where an interval is fully contained in a previous interval.
I.e., placing [2, 5] in an array that already contains [0, 12]: the min of the two lower bounds is 0 and the max of the two upper bounds is 12, so no changes are made.
I took the max of the two indices to edit just to simplify the next deletion step.
Then, if our interval connected two old ones into one (the indices of the min and max are different), it deletes the one we didn't edit (which will be the first one, as I took the second one to edit by using max on the indices). I did this to keep the array in the same order as before instead of deleting the two old ones and adding a new one to the end.
Big Theta is not the average case...
Yes, O(f(n)) says that our algorithm is bounded above by some constant multiple of f(n), so it cannot run slower than that.
Big Theta, however, is a tighter bound. It says that our algorithm cannot run slower than some constant times f(n), but it also cannot run FASTER than some constant times f(n). For example, mergesort is Theta(n log n) because it takes n log n time no matter the input, while quicksort is O(n^2) but runs in n log n time in most cases; thus it is NOT Theta(n^2).
In mathematical terms: g(n) = O(f(n)) implies g(n) <= c*f(n) for some constant c, for all n past some n_0,
and g(n) = Theta(f(n)) implies c_1*f(n) <= g(n) <= c_2*f(n) for some constants c_1, c_2, for all n past some n_0.
The basic idea of Fisher-Yates is this:
1 2 3 4 5
Do a swap of 1 and random(1, 2, 3, 4, 5) = 3
3 2 1 4 5
Do a swap of 2 and random(1, 2, 4, 5) = 4
3 4 1 2 5
Notice how after something is swapped into place, it is no longer available for swapping.
Therefore on the first pass there was a 1/n chance of any one element being picked to swap, and on the second we never look at that element again, so there is a 1/(n-1) chance of picking any single element on the second pass.
So on the n'th pass there is a 1/1 chance of picking that element.
Therefore each permutation has probability 1/(n!), which is perfectly "random".
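The shuffle described above can be sketched in a few lines (using <random> for the uniform picks; seeding is left to the caller):

```cpp
#include <random>
#include <vector>

// Fisher-Yates: each position i is swapped with a uniformly random
// position in [i, n-1], so every permutation has probability 1/n!.
void fisherYates(std::vector<int>& a, std::mt19937& rng) {
    for (size_t i = 0; i + 1 < a.size(); ++i) {
        std::uniform_int_distribution<size_t> pick(i, a.size() - 1);
        std::swap(a[i], a[pick(rng)]);
    }
}
```

Note the distribution's lower bound advances with i, which is exactly the "no longer available for swap" property described above.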
I feel like it's close to correct, but am I wrong in thinking this function doesn't actually do anything as-is? It doesn't seem to be able to return anything but 1. I think you forgot to check the sum property.
Shouldn't it be something more like:
typedef struct NAryTree
{
    int content;
    int n;                       // number of children
    struct NAryTree** children;
} tNAryTree, *tNAryTreePtr;
int sumProperty(tNAryTreePtr root)
{
    if (root == NULL)
        return 1;
    if (root->n == 0)            // leaves satisfy the property trivially
        return 1;
    int sum = 0;
    for (int i = 0; i < root->n; ++i) {
        if (root->children[i] != NULL)
            sum += root->children[i]->content;
    }
    int isSumPropertySatisfied = 0;
    // iterate over all the children of this node
    if (sum == root->content) {
        isSumPropertySatisfied = 1;
        for (int i = 0; i < root->n && isSumPropertySatisfied; i++)
        {
            // if a child is null, skip it
            if (root->children[i] == NULL)
                continue;
            isSumPropertySatisfied = isSumPropertySatisfied && sumProperty(root->children[i]);
        }
    }
    return isSumPropertySatisfied;
}

SycophantEve
June 18, 2015 The problem is numbers greater than 26 cannot be encoded. Thus 893 can only be encoded as 8,9,3, and no three-digit numbers are allowed. I feel like recursion would be useful here, as it would let you avoid checking "unallowed" permutations more than once at each level.
1,2,2,2
12,2,2
1,22,2
then
12,22
1,2,22
Adding a 5th is all of the above encodings plus
1,2,2,22
12,2,22
1,22,22
which happens to be the 3 digit encodings
6 gives all of the above plus
1,2,2,2,22
1,22,2,22
12,2,2,22
1,2,22,22
12,22,22
Which once again is all the 4 digit encodings
Thus the number of encodings for an N-digit string (where every combination can be encoded) is the number of encodings of an (N-1)-digit string plus the number of encodings of an (N-2)-digit string. Note if every combination is not possible, such as 12,35, then make sure you check that each two-digit number is <= 26, or else don't count it.
I would definitely use a dynamic programming approach, or else this will become exponential in run time like naive Fibonacci.
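A sketch of that DP, assuming the usual mapping where '1'..'26' are valid codes and a lone '0' is not (the function name is mine):

```cpp
#include <string>

// ways(i) = ways(i-1) if s[i-1] is a valid single digit,
//         + ways(i-2) if s[i-2..i-1] forms a number in 10..26.
// Two rolling values give the Fibonacci-like recurrence in O(n) time, O(1) space.
int countDecodings(const std::string& s) {
    if (s.empty() || s[0] == '0') return 0;
    int prev2 = 1, prev1 = 1;          // ways for prefix lengths 0 and 1
    for (size_t i = 2; i <= s.size(); ++i) {
        int cur = 0;
        if (s[i - 1] != '0') cur += prev1;                     // single digit
        int two = (s[i - 2] - '0') * 10 + (s[i - 1] - '0');
        if (two >= 10 && two <= 26) cur += prev2;              // double digit
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}
```

countDecodings("1222") returns 5, matching the enumeration above, and countDecodings("893") returns 1 since 89 and 93 are both out of range.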
Do you really need the search at the start? I believe this simple loop will work ( same idea though):
int absCount(std::vector<int> a) {
    int count = 0;
    for (int i = 0, j = a.size() - 1; i <= j;) {
        if (std::abs(a[i]) < std::abs(a[j])) {
            count++;
            j--;
        } else if (std::abs(a[i]) > std::abs(a[j])) {
            count++;
            i++;
        } else {
            count++;
            i++;
            j--;
        }
    }
    return count;
}
Correct me if I'm wrong. But since, as you said, the array is broken up into two subarrays already, we don't actually need to find the midpoint. The fact that it exists is enough. Still O(n) but should be about 1.5x as fast.
 SycophantEve June 18, 2015
Here is an O(n) algorithm assuming the intervals aren't guaranteed to be in sorted order. If they are, then you can of course speed it up by doing binary search, as per Rohit, instead of a linear scan, but the thought is the same. In C++:
#include <iostream>
#include <vector>
#include <limits>
#include <algorithm>
void interval(std::vector<std::pair<int, int>> &a, std::pair<int, int> interv) {
    int j = -1, k = -1;
    int min = std::numeric_limits<int>::max();
    int max = std::numeric_limits<int>::min();
    for (int i = 0; i < a.size(); ++i) {
        if (interv.first <= a[i].second && a[i].second < min) {
            k = i;
            min = a[i].second;
        }
        if (interv.second >= a[i].first && a[i].first > max) {
            j = i;
            max = a[i].first;
        }
    }
    if (j == -1 || k == -1) {
        a.push_back(interv);
        return;
    }
    auto i = a.begin();
    a[std::max(k, j)].first = std::min(a[k].first, interv.first);
    a[std::max(k, j)].second = std::max(a[j].second, interv.second);
    if (k != j) {
        for (int v = std::min(k, j); v > 0; v--, i++);
        a.erase(i);
    }
}
int main() {
std::vector<std::pair<int, int>> a = {
std::pair<int, int>(0, 2), std::pair<int, int>(-10, -1), std::pair<int, int>(4, 10), std::pair<int, int>(14, 19), };
interval(a, std::pair<int, int>(-5, 1));
std::cout << "Adding [-5,1]:\n";
for (auto i : a) {
std::cout << "[" << i.first << ", " << i.second << "] ";
}
std::cout << "\nAdding [3,9]:\n";
interval(a, std::pair<int, int>(3, 9));
for (auto i : a) {
std::cout << "[" << i.first << ", " << i.second << "] ";
}
std::cout << "\nAdding [1,3]:\n";
interval(a, std::pair<int, int>(1, 3));
for (auto i : a) {
std::cout << "[" << i.first << ", " << i.second << "] ";
}
std::cout << "\nAdding [-15,-13]:\n";
interval(a, std::pair<int, int>(-15, -13));
for (auto i : a) {
std::cout << "[" << i.first << ", " << i.second << "] ";
}
std::cout << "\nAdding [3,21]:\n";
interval(a, std::pair<int, int>(3, 21));
for (auto i : a) {
std::cout << "[" << i.first << ", " << i.second << "] ";
}
std::cout << "\nAdding [30, 50]:\n";
interval(a, std::pair<int, int>(30, 50));
for (auto i : a) {
std::cout << "[" << i.first << ", " << i.second << "] ";
}
std::cout << std::endl;
}

SycophantEve
June 18, 2015 I would make sure to ask whether the integers are unsigned or not. If they are, then negating them would instead make them wrap around in certain languages, and depending on that behavior would be inadvisable. I would instead go with the first comment: add N and check if a number is >= N.
 SycophantEve June 14, 2015
I believe the following should be an O(n) solution. Scan once to find the first element where a[i] >= a[i+1]. If it's equal, keep scanning until you either meet a number greater than it, in which case you throw out that guess and continue, or a number less than it, in which case you save it. In other words, save the FIRST index of a run of equal numbers. Then find the next element out of order, i.e., a j such that a[j-1] >= a[j]. Do the same as above but in reverse, to save the last index of a run of equal numbers. (Note this shouldn't actually need to be done for j, as if there is a run of equal numbers out of order at this point, there is automatically more than two out of order and it's false.) Swap a[j] and a[i], then scan one more time to see if it's sorted.
If nothing else is out of order, only then do you let j = i+1; otherwise you'll always let j = i+1 by definition.
Counter Example posted above:
1, 2, 9, 9, 2
Scan 1: A[i] = 9 as 9 >= 9 > 2, A[j] = 2 as 2 < 9
swap 9 and 2
1,2,2,9,9 is sorted
1, 2, 2, 2, 1, 9
Scan 1: a[i] = 2 as 2 >= 2 >= 2 > 1, a[j] = 1 as 1 < 2
swap 2 and 1
1,1,2,2,2,9 is sorted.
1 2 3 7 5 6 4 8 9
Scan 1: a[i] = 7 a[j] = 4
Swap 4 and 7
Scan 2: It's in order
8 1 2 3 4 5 6 7 0
Scan 1:a[i] = 8 a[j] = 0
swap
scan 2: in order
1 2 3 5 4 7 6 8 9
Scan 1: a[i] = 5, a[j] = 4
Swap
Scan 2: out of order.
1 2 3 5 4 6 7 8 9
Scan 1: a[i] = 5 a[j] = 4
Swap
Scan 2 in order.
0, -10, 10, 20, 30, 40, 50
Scan 1: a[i] = 0 and a[j] = -10
swap a[i] and a[i+1]
scan2 in order
0, 2, 4, 6, 8, 10, 9
Scan 1: a[i] = 10, a[j] = 9
swap a[i] and a[j]
scan 2: sorted.
It's important that you never let j = i + size-of-equal-run unless you didn't find a candidate j elsewhere in the array. Therefore set j to a dummy value, and if it's still that dummy value after the scan, set it to "defaultJ", which will be either i+1 or the index just past the run of numbers equal to a[i]. Here's the code to show it's not hard to implement.
bool ifoneswapp(std::vector<int> a) {
    int i = a.size(), j = a.size();
    int defaultJ = a.size();
    for (int k = 0; k < a.size() - 1; ++k) {
        if (a[k] >= a[k + 1]) {
            i = k;
            while (k < a.size() - 1 && a[k + 1] == a[k]) {
                k++;
            }
            if (k + 1 < a.size() && a[k + 1] < a[k]) {
                defaultJ = k + 1;
                break;
            }
        }
    }
    for (int k = defaultJ + 1; k < a.size(); ++k) {
        if (a[k] < a[k - 1]) j = k;
    }
    if (j == a.size()) j = defaultJ;
    if (i >= a.size() || j >= a.size()) return true;
    int temp = a[i];
    a[i] = a[j];
    a[j] = temp;
    for (int i = 0; i < a.size() - 1; ++i) {
        std::cout << a[i] << " ";
        if (a[i] > a[i + 1]) return false;
    }
    return true;
}
If anyone can think of a counterexample, I would be much obliged. I only ran the tests above, but I believe it makes sense. The only way for a single swap to make a sorted array is if there are two numbers A[i] and A[j] that split the whole array A into 3 sorted (possibly empty) subarrays, where A[i] > A[j] for i < j. If there's more than that, a single swap won't work, and if A[i] < A[j] for i < j, then a swap leaves you in the same position, as the 3 subarrays are already sorted and you're placing a smaller (or greater) number at the end.
 SycophantEve June 14, 2015
You can do it in O(n), where n is the length of the shortest string, while returning a count of the distance. Of course, a boolean in practice would allow it to short-circuit on average cases, but it's the same big-O.
 SycophantEve June 14, 2015
Note that with the array you posted, no swaps will be done, as there is no i s.t. a[i] > a[i+1] until 0 as the last element, and it will correctly find that it cannot be sorted in one swap with the second scan.
 SycophantEve June 14, 2015
Note that your array is false, as you can't make it sorted in one swap; you can only make it sorted in one rotation.
I can think of a 2N, so O(n), solution. Scan once to find the first element where a[i] > a[i+1]. Then find the next element out of order, i.e., a j such that a[j-1] > a[j]. Swap a[j] and a[i], then scan one more time to see if it's sorted.
1 2 3 7 5 6 4 8 9
Scan 1: a[i] = 7 a[j] = 4
Swap 4 and 7
Scan 2: It's in order
8 1 2 3 4 5 6 7 0
Scan 1:a[i] = 8 a[j] = 0
swap
scan 2: in order
1 2 3 5 4 7 6 8 9
Scan 1: a[i] = 5, a[j] = 4
Swap
Scan 2: out of order.
1 2 3 5 4 6 7 8 9
Scan 1: a[i] = 5 a[j] = 4
Swap
Scan 2 in order.
0, -10, 10, 20, 30, 40, 50
Scan 1: a[i] = 0 and a[j] = -10
swap a[i] and a[i+1]
scan2 in order
0, 2, 4, 6, 8, 10, 9
Scan 1: a[i] = 10, a[j] = 9
swap a[i] and a[j]
scan 2: sorted.

SycophantEve
June 14, 2015 Ask "What door would the other one say is the heaven door?"
Case 1: Liar - he would say the door that is not the heaven door, as he would lie and claim the truth-teller would point to the hell door.
Case 2: Truth-teller - he would say the door that is not the heaven door, as he would truthfully report that the liar would lie.
This works because if both were liars, the answer would be easy: you would just go in the opposite door. And this question forces the truth-teller to "lie," as he is answering honestly for the liar.
So either way just take the other door.
Note that P[i] in both cases = P[i-1] + A[i], setting P[0] to S[0] in both cases.
Decode:
func decode(type a[]) {
    type p1[0] = 0, p2[0] = 1;
    for (int i = 0; i < a.size() - 1; ++i) {
        p1[i + 1] = a[i] + p1[i];
        p2[i + 1] = a[i] + p2[i];
        if (p1[i + 1] != 1 || p1[i + 1] != 0 || p2[i + 1] != 0 || p2[i + 1] != 2) return none;
    }
    print p1, p2;
}

SycophantEve
June 14, 2015 the output shouldn't be vector[i1] but instead vector[counter+i1] sorry.
 SycophantEve June 14, 2015
Store N lines in a vector. When N lines have been read, start adding at the beginning again, incrementing a counter (mod the size of the array) for each line read.
readNLines() {
    int counter = 0;
    while (file NOT empty) {
        if (vector.size() < n)
            vector.push_back(file.line)
        else {
            vector[(counter + 1) % n] = file.line
            counter = (counter + 1) % n
        }
    }
    for (int i = 1; (counter + i) % n != counter; ++i) {
        output vector[i - 1]
    }
}
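A C++ sketch of the same circular-buffer idea (the names are mine; it reads from any istream so it can be tested with a stringstream):

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Keep at most n lines; once full, overwrite the oldest slot. `head` ends
// up pointing at the oldest surviving line, so the result can be unrolled
// into chronological order at the end.
std::vector<std::string> lastNLines(std::istream& in, size_t n) {
    std::vector<std::string> buf;
    size_t head = 0;
    std::string line;
    while (std::getline(in, line)) {
        if (buf.size() < n) buf.push_back(line);
        else { buf[head] = line; head = (head + 1) % n; }
    }
    std::vector<std::string> out;
    for (size_t i = 0; i < buf.size(); ++i)
        out.push_back(buf[(head + i) % buf.size()]);
    return out;
}
```

Feeding it five lines with n = 3 yields the last three lines in their original order.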

SycophantEve
June 14, 2015 Instead of compressing, couldn't you just hash the longest prefix that fits into memory, and then hash that hash together with the rest, repeating if needed? No two non-identical strings should collide, because at some point their hashes will differ.
(Not an expert on cryptographic hash functions, so I don't know how much of an effect rehashing has on collisions, although I know it does increase the likelihood.)
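A toy sketch of that chunk-by-chunk idea using std::hash (the chunk size and combining scheme are arbitrary choices of mine, not a vetted construction, and the "too big for memory" string is simulated in memory here):

```cpp
#include <functional>
#include <string>

// Hash the first chunk, then fold each subsequent chunk together with the
// running hash, so only one chunk would ever need to be in memory at a time.
size_t chunkedHash(const std::string& data, size_t chunkSize) {
    std::hash<std::string> h;
    size_t acc = 0;
    for (size_t pos = 0; pos < data.size(); pos += chunkSize) {
        std::string chunk = data.substr(pos, chunkSize);
        // combine the running hash with this chunk
        acc = h(std::to_string(acc) + chunk);
    }
    return acc;
}
```

Identical strings chunked the same way always produce the same value; as the comment above notes, folding hashes together does raise the collision risk compared with hashing the whole string at once.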
That's also assuming that the integers are small relative to the size of the list so the k in the O(n+k) doesn't overpower the n.
 SycophantEve June 14, 2015
Yes, the point is you can do it in O(log N), as the question asks for better than O(N).
Binary search for an index such that A[j] == 3 and A[j-1] < 3, and then again for an index such that A[k] == 3 and A[k+1] > 3.
return k-j+1;
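Those two binary searches are exactly what std::lower_bound and std::upper_bound compute, so the count can be written as:

```cpp
#include <algorithm>
#include <vector>

// lower_bound gives the first position where x could go, upper_bound gives
// one past the last; their distance is the number of occurrences, in O(log N).
int countOccurrences(const std::vector<int>& a, int x) {
    auto lo = std::lower_bound(a.begin(), a.end(), x);
    auto hi = std::upper_bound(a.begin(), a.end(), x);
    return (int)(hi - lo);
}
```

For {1, 2, 3, 3, 3, 4, 5} and x = 3 this returns 3.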
If you pick the middle element and it doesn't equal either of its neighbors, you're done. Otherwise, recurse into the odd half of the array.
 SycophantEve July 03, 2015
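One common way to implement "recurse into the odd half" (a sketch of my own; it relies on the observation that pairs before the single element start at even indices, and after it at odd indices):

```cpp
#include <vector>

// Sorted array, every element appears exactly twice except one.
// Align mid to an even index; if a[mid] == a[mid+1] the singleton is to the
// right, otherwise at mid or to the left. O(log n) overall.
int singleElement(const std::vector<int>& a) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (mid % 2 == 1) --mid;            // align mid to an even index
        if (a[mid] == a[mid + 1]) lo = mid + 2;
        else hi = mid;
    }
    return a[lo];
}
```

For {1, 1, 2, 3, 3} this returns 2.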