Google Interview Question
Software Engineer / Developer (United States)
If N is the number itself, then the number of bits is roughly equal to log(N). Since the obvious algorithm requires inspecting each bit, one could argue it's O(log(N)) where N is the decimal number itself.
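As a quick sanity check of the bits-versus-log(N) relationship (a hypothetical Python snippet, not part of the original comment):

```python
import math

# For a positive integer N, the number of bits in its binary
# representation is floor(log2(N)) + 1.
def bits_in(N):
    return N.bit_length()

for N in (1, 2, 133, 1024):
    assert bits_in(N) == math.floor(math.log2(N)) + 1
```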
http://www.quora.com/Algorithms/How-can-we-find-the-longest-continuous-subsequence-of-0s-in-binary-representation-of-an-integer-in-O-log-n-time-complexity/answer/Michal-Fori%C5%A1ek
The linked answer above is the correct one, I think. However, the question is how fast '<<' is. Is it constant-time?
I think it can be solved in O(lg n) with the divide-and-conquer method explained in Programming Pearls:
1) divide the given string into two halves
2) find continuous zeros count in left half using recursion
3) find continuous zeros count in right half using recursion
4) find continuous zeros count in left part that touches the mid point
5) find continuous zeros count in right part that touches the mid point
6) add counts found in (4) & (5)
7) return max of (2), (3) & (6)
int get_max_zeros(char* binstr, int lo, int hi) {
    if (lo > hi)
        return 0;
    if (lo == hi)
        return (binstr[lo] == '0') ? 1 : 0;
    int mid = lo + (hi - lo) / 2;
    /* zeros in the left half that touch the midpoint */
    int left_crossing_count = 0;
    for (int i = mid; i >= lo; i--)
        if (binstr[i] != '0')
            break;
        else
            left_crossing_count++;
    /* zeros in the right half that touch the midpoint */
    int right_crossing_count = 0;
    for (int i = mid + 1; i <= hi; i++)
        if (binstr[i] != '0')
            break;
        else
            right_crossing_count++;
    int crossing_count = left_crossing_count + right_crossing_count;
    int left_count = get_max_zeros(binstr, lo, mid);
    int right_count = get_max_zeros(binstr, mid + 1, hi);
    int max1 = crossing_count > left_count ? crossing_count : left_count;
    int max2 = max1 > right_count ? max1 : right_count;
    return max2;
}
Yes, this could be done in O(log n): you can use the right-shift operator, which essentially divides the number by 2, leading to a log n solution. Here is a sample:
#include <stdio.h>

int findLongestzeros(int n)
{
    int maxzeroes = 0;
    int count = 0;
    while (n)
    {
        if ((n & 1) == 0)
        {
            count++;
            if (count > maxzeroes)
                maxzeroes = count;
        }
        else
            count = 0;
        n = n >> 1;
    }
    return maxzeroes;
}

int main()
{
    printf("%d\n", findLongestzeros(133));
    return 0;
}
Before sending the number to this method, we need to convert an int to its binary representation. I tried Integer.toBinaryString(133), which returns a String that I couldn't use with this function. How would you code that part? By the way, I ported your code to Java and it doesn't seem to give the correct output. Could you help me?
Using the right-shift operator would still be O(n): it has to go through each individual bit for n bits (which is the input size).
This is not what the OP asks for. The input is already a binary string, and the n in the O(log n) requirement is the number of bits, not the decimal value.
@julius
How do you know?
Another ambiguous question. 1 min. to type by OP, then hours wasted by everyone else.
Inefficient.
A faster method to count runs of 1s and 0s:
#include <math.h>
#include <stdio.h>

void countint0(unsigned int p)
{
    int c0 = 0;
    int c1 = 0;
    if (p == 0)
    {
        c0 = 1;
    }
    while (p > 0)
    {
        if ((p & 1) == 1)
        {
            if (p == 0xffffffff)   /* all 32 bits set */
            {
                c0 = 0;
                c1 = 32;
                break;
            }
            else
            {
                unsigned q = p + 1;
                int cc1 = ((p ^ q) + 1) / 4;
                cc1 = 1 + (int)(log(cc1) / log(2));  /* length of the run of 1s */
                c1 += cc1;
                p = p >> cc1;
            }
        }
        else
        {
            unsigned q = p - 1;
            int cc0 = ((p ^ q) + 1) / 4;
            cc0 = 1 + (int)(log(cc0) / log(2));      /* length of the run of 0s */
            c0 += cc0;
            p = p >> cc0;
        }
    }
    printf("zeros: %d, ones: %d\n", c0, c1);
}
It is possible. The question reduces to representing the number as a sum of powers of 2:
N = 133
= 128 + 5
= 128 + 4 +1
Now, for each such term, you know how many bits you need to represent it, and the max difference in bit counts between two consecutive terms is your answer.
So for N = 128 + 4 + 1, you need 8 bits (for 128), 3 (for 4) and 1 (for 1), so the answer is
Max[(7-3), (2-1)] = 4. Why 7 and 2? Because with 8 bits to represent 128 the MSB is already set, so at most 7 bits can be 0; same with 4, where the MSB is set, so at most 2 remaining bits.
int max = 0;
while(N)
{
    int NumBits = FLOOR(log2 N);       // position of the highest set bit
    int NextN = N - pow(2, NumBits);
    if(NextN == 0)
    {
        // the number is a power of 2: all remaining lower bits are trailing 0s
        if(NumBits > max)
            max = NumBits;
        return max;
    }
    int NextNumBits = FLOOR(log2 NextN);
    if(NumBits - NextNumBits - 1 > max)
        max = NumBits - NextNumBits - 1;
    N = NextN;
}
return max;
As an example, N = 133 ==> 10000101
NumBits = 7 (= FLOOR(log2 133), the position of the highest set bit)
NextN = 133 - 128 (= pow(2,7)), so NextN becomes 5
NumZero = NumBits - FLOOR(log2 NextN) - 1 = 7 - 2 - 1 = 4
Check if NumZero is the max so far,
make N = NextN, repeat the loop, then return max.
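A runnable rendering of the pseudocode above (hypothetical Python sketch; `int.bit_length()` plays the role of the log/pow calls). Note the loop runs once per set bit, so it is O(number of 1s) rather than O(log N) in the worst case:

```python
def longest_zero_run(N):
    # Peel off the highest set bit each iteration; the zero run between two
    # consecutive set bits is the difference of their positions minus one.
    best = 0
    while N:
        high = N.bit_length() - 1      # position of the most significant 1
        rest = N - (1 << high)         # drop that 1
        if rest == 0:
            # N was a power of two: every remaining lower bit is a trailing 0
            return max(best, high)
        gap = high - (rest.bit_length() - 1) - 1  # zeros between the top two 1s
        best = max(best, gap)
        N = rest
    return best
```

For example, `longest_zero_run(133)` follows the trace above: gaps 4 (between bits 7 and 2) and 1 (between bits 2 and 0), so it returns 4.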
>>If N is equal to a ((power of 2) - 1), I think the complexity will still be O(n).
You are correct anonyco. Worst case complexity will be O(N) and I don't think you can do better than that.
This is O(n) worst case, imagine you'd have 11111110 instead of 10000101, means you'll have 8 iterations == O(n)
Running time for this is O(NumberOfSetBits), which you can argue is just O(N), but this code is slick. And if you could think of this solution on the spot, I think your interviewer would be delighted :)
I rarely downvote, but I am going to downvote this.
The code is inefficient and a bit messy.
The idea used here can be implemented far more efficiently with much cleaner code.
In fact, some of the statements in the code are unnecessary (there are redundancies).
@mithya, sorry, but the code is not slick.
{All complexities are w.r.t. total value of number}
you seem to be using floating point log function (then using ceil around it), and then using some sort of pow function (floating point?)
and you do each twice inside a loop, at start and end... this is a clear sign that there is redundancy
We can find "the highest set bit" in upper bound log(log(n)) time which will replace your code segments that look like
int NumBits = CEILING(log N);
int NextN = N - pow(2, NumBits)
Then in the middle of your loop you have this
if(NextN == 0)
{
// this means, the number is power of 2, just find trailing 0s and return max
if(NumBits > max)
max = NumBits;
return max;
}
Can you make do without this? Even if you keep it, why set max and then return immediately?
And then at the end of the loop, you repeat the whole "find highest set bit" log/pow statements.
Please try writing a slick for or while loop to replace this.
@urik: I appreciate your reply. However, I see a lot of 'could'/'should' in it without anything concrete. It would be nice to replace the 'could'/'should' with some actual code, to benefit everyone here (especially me).
#!/usr/bin/python
def longest0s(i):
    # m maps bit index -> length of the zero run ending there
    m = {}
    nxt = i >> 1
    if i == 0:
        return 1
    m[-1] = 0
    count = 0
    while i != 0:
        bit = i ^ (nxt << 1)        # the lowest bit of i
        if bit == 0:
            m[count] = m[count - 1] + 1
        else:
            m[count] = 0
        i = nxt
        nxt = nxt >> 1
        count = count + 1
    best = -1
    for key in m:
        if best < m[key]:
            best = m[key]
    return best

if __name__ == '__main__':
    i = 200
    print(longest0s(i))
#include <iostream>
#include <math.h>
#include <stdlib.h>
using namespace std;

int count_bits(int number)
{
    int count = 0;
    while (number)
    {
        number = number >> 1;
        count++;
    }
    return count;
}

int main()
{
    int number = 0;
    int bits = 0;
    int max = 0;
    int next_no = 0;
    int next_bits = 0;
    cout << "Enter Number= ";
    cin >> number;
    while (number)
    {
        bits = count_bits(number) - 1;           // position of the MSB
        next_no = number - (int)pow(2.0, bits);  // drop the MSB
        if (next_no == 0 && bits > max)
        {
            max = bits;                          // power of 2: all trailing zeros
            break;
        }
        next_bits = count_bits(next_no) - 1;
        if (bits - 1 - next_bits > max)
        {
            max = bits - 1 - next_bits;          // zeros between the two 1s
        }
        number = next_no;
    }
    cout << "Max " << max << endl;
    system("pause");
    return 0;
}
#include <stdio.h>

int Find_longest_zeros(int n)
{
    if (n == 0) {
        return 8 * sizeof(int);
    }
    int stack[32];
    int top = 0;
    int max_length = 0;
    // initialize the stack: assign -1 to every element
    for (int i = 0; i < 32; i++) {
        stack[i] = -1;
    }
    while (n != 0) {
        if (n % 2 != 0) {
            // the remainder is 1: the current run of zeros (the stack depth) ends here
            if (top > max_length) {
                max_length = top;
            }
            while (top > 0) {
                stack[--top] = -1;
            }
        } else {
            stack[top++] = 0;
        }
        n = n / 2;
    }
    if (top > max_length) {
        max_length = top;
    }
    printf("%d\n", max_length);
    return max_length;
}

int main() {
    Find_longest_zeros(133);
    return 0;
}
A method that in some cases can be slightly better than the obvious bit-by-bit examination:
#include <stdio.h>

int bit_dif(unsigned greater, unsigned lesser)
{   // length of the run of zeros between two set bits
    int c = 0;
    while (!(greater & lesser)) {
        c++;
        lesser <<= 1;
    }
    return c;
}

int main(void)
{
    unsigned n = 0x22CFF;
    unsigned temp;
    unsigned lw_bit;            // lowest set bit
    unsigned con_bgn = 1;       // the beginning of the next zero run
    int max_con = 0;
    while (n) {
        lw_bit = (temp = n ^ (n - 1)) ^ (temp >> 1);
        if ((temp = bit_dif(lw_bit, con_bgn)) > max_con)
            max_con = temp;
        con_bgn = (n ^ (temp = n + lw_bit)) & temp;
        n = n & temp;
    }
    printf("%d\n", max_con);
    return 0;
}
These are all great solutions. However, they operate on an integer value input. I guess the question is how to use the binary representation of a given integer; if that's a valid guess, we can assume the input to the function is a string of 0-1 values. I found an O(n lg n) solution but still have trouble turning it into O(lg n). Here's the O(n lg n) solution. Let's see if we can use this as a starting point to find the O(lg n) solution, thanks!
#include <iostream>
#include <string>
#include <climits>
using namespace std;

int FindOne(string s, int b, int e)
{
    try
    {
        if (s.size() == 0 || b < 0 || e < 0) throw "\nInvalid\n";
    }
    catch (const char *p)
    {
        puts(p);
        return -1;
    }
    if (b == e)
    {
        if (s[b] == '1')
            return b;
        return -1;
    }
    int m = b + ((e - b) / 2);
    int t = FindOne(s, b, m);
    if (t != -1)
        return t;
    return FindOne(s, m + 1, e);
}
void LongestcontinuousZeroSeq(string str)
{
    if (str.size() == 0)
        return;
    int i = 0;
    int max = INT_MIN;
    int flag = 0;
    int LastOne = -1;
    int n = str.size();
    int begin = -1;
    int end = -1;
    while (i < n)
    {
        if (str[i] == '1' && flag == 0)
        {
            LastOne = i;
            if (i != 0 && max == INT_MIN)
                max = i;
            else
                flag = 1;
            i++;
        }
        else if (str[i] == '0')
            i++;
        else
        {
            if (flag == 0)
            {
                max = i;
                begin = 0;
                end = i - 1;
            }
            else
            {
                if (i - LastOne > max)
                {
                    max = i - LastOne - 1;
                    begin = LastOne + 1;
                    end = i - 1;
                }
                LastOne = i;
                flag = 0;
            }
        }
    }
    if ((i - 1) - LastOne > max)
    {
        max = (i - 1) - LastOne;
        begin = LastOne + 1;
        end = i - 1;
    }
    if (max == 0)
        cout << max << " [ -- ]" << endl;
    else if (max == 1)
        cout << max << " [" << begin << "]" << endl;
    else
        cout << max << " [" << begin << "..." << end << "]" << endl;
}

int main()
{
    string str = "000100110000100000"; // other test cases: "1001100001010000000", "10000101", "00000000", "1001"
    cout << str << endl;
    LongestcontinuousZeroSeq(str);
    return 0;
}
Sorry, there is a redundancy in the above code. Please see below for correction:
while (i < n)
{
    if (str[i] == '1' && flag == 0)
    {
        LastOne = i;
        if (i != 0 && max == INT_MIN)
        {
            max = i;
            begin = 0;
            end = i - 1;
        }
        else
            flag = 1;
        i++;
    }
    else if (str[i] == '0')
        i++;
    else
    {
        if (i - LastOne > max)
        {
            max = i - LastOne - 1;
            begin = LastOne + 1;
            end = i - 1;
        }
        LastOne = i;
        flag = 0;
    }
}
I think I have an O(lg n) solution. It is based on the binary representation of a value in the form of a string.
int FindOne(string str, int b, int e) // first occurrence of '1' within indices [b, e]
{
    if (str.size() == 0 || b < 0 || e < 0 || b > e || e > str.size() || b > str.size())
        return -1;
    if (b == e)
    {
        if (b >= str.size())
            return -1;
        if (str[b] == '1')
            return b;
        return -1;
    }
    int m = b + ((e - b) / 2);
    if (m >= str.size())
        return -1;
    int t = FindOne(str, b, m);
    if (t != -1)
        return t;
    return FindOne(str, m + 1, e);
}

void LongestcontinuousZeroSeq(string str)
{
    if (str.size() == 0)
        return;
    int i = 0;
    int n = str.size();
    int index1 = -1;
    int index2 = -1;
    int max = INT_MIN;
    while (i < n)
    {
        if (i == 0)
        {
            index1 = FindOne(str, i, n);
            if (index1 == -1 || index1 == n - 1)
            {
                max = n - 1;
                break;
            }
            index2 = FindOne(str, i + 1, n);
            if (index1 != -1 && index2 == -1)
            {
                max = n - index1 - 1;
                break;
            }
            if (index1 != 0 || index1 == index2)
                max = index1;
            else if (index2 - index1 > max)
                max = index2 - index1 - 1;
            index1 = index2;
        }
        else
        {
            index2 = FindOne(str, i, n);
            if (index1 == index2)
                break;
            else if (index2 - index1 > max)
                max = index2 - index1 - 1;
            if (index2 == -1 && max != INT_MIN)
                break;
            index1 = index2;
        }
        if (index2 != -1 && index2 <= n - 1)
            i = index2 + 1;
    }
    if (n - 1 - index1 > max)
        max = n - 1 - index1;
    cout << "MAX:\t" << max << endl;
}
Maybe I missed it, but it seems like the above solutions can be done in a much simpler way with bitwise operations:
public int maxZeros(long x) {
    int i = -1, di = 0;           // i = position of the previous 1 bit
    while (x > 0) {
        long p = ~(x - 1) & x;    // single out the lowest 1 bit
        int pos = Long.numberOfTrailingZeros(p);
        if (pos - i - 1 > di)     // zeros strictly between consecutive 1s
            di = pos - i - 1;
        i = pos;                  // save the last 1's position
        x = x - p;                // remove that 1 from x
    }
    return di;
}
I agree with all of the comments that this isn't O(log(N)), because in the worst case you visit all of the bits. However, it's equivalent to the best answers I've seen, just with less obfuscation.
This isn't possible, as log n is smaller than the input size n of the binary string. This problem requires you to look at all the bits as well.
Processors can do a lot with groups of bits at the same time. For example, 4 or 8 bytes of bits can be used in O(1) to do indirect addressing.
And other bit operations can do byte or integer wide bit operations in the common machine cycle on most CPUs.
The two ideas above can lead to two different possible solutions (LUT for first, popcount type thing second. The second method is someone else's specialty, and I suspect he will remember this thread and post his solution eventually.)
An idea is to shift 2 bits at a time instead of 1 bit, then compare the 2 bits against the four possible patterns 00, 01, 10, 11 to track the maximum run of contiguous zeros. This is analogous to shifting 1 bit at a time and comparing the single bit against the two possible values 0 and 1.
public static int findMaxZero(int number) {
    int max = 0;
    int current = 0;
    while (number != 0) {
        int temp = number & 0x3;     // examine two bits at a time
        number = number >>> 2;
        if (temp == 0) {             // 00: the run grows by two
            current += 2;
        } else if (temp == 1) {      // 01: run ends at the low 1; a new run starts
            if (current > max) {
                max = current;
            }
            current = 1;
        } else if (temp == 2) {      // 10: run grows by one, then ends
            if ((current + 1) > max) {
                max = current + 1;
            }
            current = 0;
        } else {                     // 11: run ends
            if (current > max) {
                max = current;
            }
            current = 0;
        }
    }
    return max;
}
This is such a hard problem, we are doomed! I did some searching: someone posted it on Stack Overflow, and someone else referred to a formidable book (formidable like the Necronomicon or Principia Mathematica; it's a heavy read, and I am in awe of the author's mental abilities) called Hacker's Delight. I pieced together the code and got it running, but understanding was still slow to come, so I added lots of print statements and my own print-as-binary function.
In the first step we do a sort of binary search using a shift-and-AND process. When it manages to destroy all the 1s, you know where the end of the longest run of 1s is.
Then you do a goto and jump into a set of steps that follow a similar binary-search logic to get from that end back to the start. This depends on the operations you did before, so some information is carried forward by your choice of where to jump.
Finally you call nlz, which locates that bit using yet another binary-search process.
This book is about writing ultra-fast code deep in the OS, in drivers, or perhaps in control systems, which is why they worry about such strange things and why the loops are unrolled. I'm not up to turning it into a loop or something recursive; that would probably kill the efficiency several times over versus the brute-force O(n) solution anyway.
Put some numbers into it and single-step it for a while and see if you can grasp the idea. I can just barely do it myself.
I think the key here is to try to learn a little about how to think like the wizards and logicians of Hacker's Delight, and to flounder around as gracefully as you can when Google makes a query of you.
Good luck
// bit flip 1.cpp : Defines the entry point for the console application.
//
// c++ on windows MFX
#include "stdafx.h"
void print_b(unsigned x)
{
unsigned m = 1 << 31;
for(int i = 0; i < 32; i++)
{
if((x & m)!= 0)
{
printf("1");
}
else
{
printf("0");
}
m = m >> 1;
}
}//----------------------------------------------------------
void print_b(char * l, unsigned x)
{
printf("%s\t", l);
print_b(x);
printf("\n");
}//----------------------------------------------------------
int nlz(unsigned x) {
print_b("nlz(",x);
int n;
if (x == 0) return(32);
n = 1;
print_b("x>>16= ",x>>16);
if ((x >> 16) == 0)
{
n = n +16;
x = x <<16;
print_b("x = ",x);
}
print_b("x>>24= ",x>>24);
if ((x >> 24) == 0)
{
n = n + 8;
x = x << 8;
print_b("x = ",x);
}
print_b("x>>28= ",x>>28);
if ((x >> 28) == 0)
{
n = n + 4;
x = x << 4;
print_b("x = ",x);
}
print_b("x>>30= ",x>>30);
if ((x >> 30) == 0)
{
n = n + 2;
x = x << 2;
print_b("x = ",x);
}
n = n - (x >> 31);
return n;
}//-----------------------------------------------------------------------------------------
int fmaxstr1(unsigned x, int *apos)
{
// invert bits.
//x = ~x;
print_b("fmax(",x);
unsigned x2, x4, x8, x16, y, t;
int s;
if (x == 0)
{
*apos = 32;
return 0;
}
x2 = x & (x << 1);
print_b("x2 = ",x2);
if (x2 == 0)
{
s = 1;
y = x;
goto L1;
}
x4 = x2 & (x2 << 2);
print_b("x4 = ",x4);
if (x4 == 0)
{
s = 2;
y = x2;
goto L2;
}
x8 = x4 & (x4 << 4);
print_b("x8 = ",x8);
if (x8 == 0)
{
s = 4;
y = x4;
goto L4;
}
x16 = x8 & (x8 << 8);
print_b("x16 = ",x16);
if (x16 == 0)
{
s = 8;
y = x8;
goto L8;
}
if (x == 0xFFFFFFFF)
{
*apos = 0;
return 32;
}
s = 16; y = x16;
L16:t = y & (x8 << s);
print_b("y = ",y);
print_b("x8 = ",x8);
printf("x8<<%i\t",s);print_b(x8 << s);
print_b("\nt = ",t);
if (t != 0)
{
s = s + 8;
y = t;
print_b("y = ",y);
}
L8: t = y & (x4 << s);
print_b("y = ",y);
print_b("x4 = ",x4);
printf("x4<<%i\t",s);print_b(x4 << s);
print_b("\nt = ",t);
if (t != 0)
{
s = s + 4;
y = t;
print_b("y = ",y);
}
L4: t = y & (x2 << s);
print_b("y = ",y);
print_b("x2 = ",x2);
printf("x2<<%i\t",s);print_b(x2 << s);
print_b("\nt = ",t);
if (t != 0)
{
s = s + 2;
y = t;
print_b("y = ",y);
}
L2: t = y & (x << s);
print_b("y = ",y);
print_b("x = ",x);
printf("x<<%i\t",s);print_b(x << s);
print_b("\nt = ",t);
if (t != 0)
{
s = s + 1;
y = t;
print_b("y = ",y);
}
L1: print_b("y = ",y);
*apos = nlz(y);
return s;
}//--------------------------------------------------------------------------------------------
int _tmain(int argc, _TCHAR* argv[])
{
unsigned x = (~(16+15)) << 8 | 15;
print_b("x = ", x);
unsigned y = x & ~(3 << 17);
print_b("y = ", y);
y = ~y;
print_b("y = ", y);
int apos;
int z = fmaxstr1(y,&apos);
return 0;
}//--------------------------------------------------------------------------------------------
If I am not missing any boundary condition, I don't think it requires any special algorithm to find it in O(log N):
int maxZeroLen(int N)
{
    int maxLen = 0;
    int curLen = 0;
    while (N)
    {
        int remainder = N % 2;
        N = N >> 1;   // shift by one bit, matching the mod-2 test above
        if (!remainder)
            curLen++;
        else
            curLen = 0;
        maxLen = max(maxLen, curLen);
    }
    return maxLen;
}
This can be done with even lower complexity: the complexity is the number of 1s in the number, i.e. O(number of 1s).
public static int CountZeros(int no) {
    if (no == 0) {
        return 32;
    }
    if (no == -1) {
        return 0;
    }
    int max = 0;
    int pre = 0;
    while (no != 0) {
        int cur = no & ~(no - 1);                      // lowest set bit
        cur = (int) (Math.log10(cur) / Math.log10(2)); // its position
        if (cur - pre > max) {
            max = cur - pre;
        }
        pre = cur + 1;
        no = no & (no - 1);                            // clear the lowest set bit
    }
    return max;
}
#include <stdio.h>
#include <stdlib.h>

int max(int a, int b)
{
    return (a > b ? a : b);
}

int solution(unsigned long int N)
{
    int flag = 0, count = 0, max_gap = 0;
    unsigned long int temp = N;
    while (temp > 0)
    {
        if (flag == 1 && temp % 2 == 0)
        {
            count++;
            max_gap = max(max_gap, count);
        }
        else if (flag == 0 && temp % 2 == 1)
        {
            flag = 1;
            count = 0;
        }
        else if (flag == 1 && temp % 2 == 1)
        {
            count = 0;
        }
        else // flag == 0 && temp % 2 == 0
        {
            count = 0;
        }
        temp = temp / 2;
    }
    return max_gap;
}

int main(void) {
    unsigned long int N;
    scanf("%lu", &N);
    printf("%d\n", solution(N));
    return 0;
}
// Code is based on string operations and can be applied to any character sequence
public static void findMaxZeroSeq(String binNumber) {
    int curr_state = 1;
    int curr_max_index = -1, curr_start_index = -1;
    int curr_max_seq = 0, curr_seq = 0;
    for (int i = 0; i < binNumber.length(); ++i) {
        if (curr_state == 1) {
            if (binNumber.charAt(i) == '1') {
                continue;
            } else if (binNumber.charAt(i) == '0') {
                curr_start_index = i;
                curr_seq = 1;
                curr_state = 0;
            }
        } else if (curr_state == 0) {
            if (binNumber.charAt(i) == '0') {
                ++curr_seq;
                continue;
            } else if (binNumber.charAt(i) == '1') {
                // Compare current sequence with current max
                if (curr_seq > curr_max_seq) {
                    curr_max_index = curr_start_index;
                    curr_max_seq = curr_seq;
                }
                curr_state = 1;
            }
        }
    }
    // Compare the final sequence with the current max
    if (curr_seq > curr_max_seq) {
        curr_max_index = curr_start_index;
        curr_max_seq = curr_seq;
    }
    System.out.println("Max sequence starts at: " + curr_max_index + "; with value of: " + curr_max_seq);
}
This solves it, but it would be O(n), since that is the complexity of string.split...
# A binary gap is the largest run of consecutive 0s in a binary number.
def binary_gap(N):
    b = bin(N)
    b = b[2:]             # strip the '0b' prefix
    b = b.rstrip('0')     # trailing 0s are not bounded by a 1, so no gap
    gaps = b.split('1')   # every run of 0s between 1s
    maxGap = 0
    for gap in gaps:
        maxGap = len(gap) if len(gap) > maxGap else maxGap
    return maxGap
I agree that the question as stated is ambiguous. If N is the number of bits of the integer then all the implementation using shift operators won't result in a O(log(n)) solution.
As many people have pointed out in the worst case for a 32bit int you still need 32 iterations to find the solution = O(n).
I think the people who posed this thought of it as a "string" problem, e.g. a string of "0" and "1" chars of unspecified length.
In this case the shortest interesting sequence of contiguous "0" is 2.
So, the easy O(n) algorithm is counting the "0"s in a string of "0" and "1" characters.
Knowing that we want to find the maximum run of contiguous "0"s, a better algorithm is to count contiguous pairs of "0". By definition this should lead to an O(log n) algorithm. (Note: the assumption is that we can make the decision "00" -> 1, ("01", "10") -> 0 in O(1).)
A further refinement of the algorithm, which leads to parallelization, is to divide the counting of the "0" pairs into subsets. Of course one has to account for the edge cases.
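One concrete way to realize this pairwise/parallel idea is to summarize each block by (leading zeros, best run, trailing zeros, length) and merge adjacent blocks level by level. The merges within a level are independent, so the depth of the reduction is O(log n) even though the total work is still O(n). A sketch (hypothetical function names, Python):

```python
def block(c):
    # summary of a single character: (leading zeros, best run, trailing zeros, length)
    z = 1 if c == '0' else 0
    return (z, z, z, 1)

def merge(a, b):
    al, ab, at, an = a
    bl, bb, bt, bn = b
    lead = al + bl if al == an else al      # left block all zeros extends the lead
    trail = bt + at if bt == bn else bt     # right block all zeros extends the trail
    best = max(ab, bb, at + bl)             # runs inside, or crossing the seam
    return (lead, best, trail, an + bn)

def longest_zeros_pairs(s):
    if not s:
        return 0
    blocks = [block(c) for c in s]
    while len(blocks) > 1:                  # each level could run in parallel
        nxt = [merge(blocks[i], blocks[i + 1]) for i in range(0, len(blocks) - 1, 2)]
        if len(blocks) % 2:
            nxt.append(blocks[-1])
        blocks = nxt
    return blocks[0][1]
```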
If the bsr assembly instruction is available, my solution runs in O(M), where M is the count of 1 bits in the integer; M = O(log N).
#include <stdio.h>
#include <assert.h>

// Time complexity: O(log(pow)). Use it if the bsr assembly instruction is
// unavailable.
int log(unsigned pow)
{
    assert(pow > 0);
    int i = 0;
    while (pow > 1) {
        pow >>= 1;
        i++;
    }
    return i;
}

// O(1)
int bsr(int n)
{
    assert(n > 0);
    int idx;
    asm("bsr %1, %0" : "=r" (idx) : "r" (n));
    return idx;
}

int longest_seq_of_zero(unsigned n)
{
    assert(n > 0);
    unsigned pow;
    int max_len, len, prev_log, cur_log;
    max_len = 0;
    prev_log = -1;
    while (n > 0) {
        pow = n & -n;                   // lowest set bit
        n ^= pow;                       // clear it
        // cur_log = log(pow);
        cur_log = bsr(pow);
        len = cur_log - prev_log - 1;   // zeros between consecutive 1s
        if (max_len < len)
            max_len = len;
        prev_log = cur_log;
    }
    return max_len;
}

int main(int argc, const char *argv[])
{
    unsigned int N;
    N = 15;    // 1111
    assert(longest_seq_of_zero(N) == 0);
    N = 0x85;  // 10000101
    assert(longest_seq_of_zero(N) == 4);
    N = 0x48;  // 1001000
    assert(longest_seq_of_zero(N) == 3);
    return 0;
}
If n represents the integer, it is clear that we can have an O(lg n) solution.
However, when N represents the number of bits, it is impossible. Note that in this model a bit operation, like bitwise AND, also takes O(N) time.
An adversary argument proves this lower bound. Partition the N bits into two parts. If you inspect only the left part, the adversary can hide a longer run of consecutive 0s in the right part, and vice versa. We therefore need to search both parts, so O(lg N) is impossible.
A verified O(logN) solution in Java where N is the number of bits (not the number).
The basic idea is eliminate zeros in a binary-search-like manner.
public class LongestZeroSequence {
    private static int countLongestZeroSeq(int z) {
        int nBits = Integer.SIZE;
        // check nBits
        if (z == 0) return nBits;
        // check 1
        int mask = (z | (z << 1)) | 1;
        if (mask == -1) return 1;
        int i;
        for (i = 1; i < nBits; i <<= 1) {
            int cur = mask | mask << i;
            if (cur == -1) {
                break;
            }
            mask = cur;
        }
        int p = i, q = i << 1;
        int savedP = p;
        if (q >= nBits) q = nBits - 1;
        while (p < q) {
            int k = (p + q) / 2;
            if (allOnes(mask, k - savedP)) {
                q = k;
            } else {
                p = k + 1;
            }
        }
        return p;
    }

    private static boolean allOnes(int mask, int k) {
        int res = mask | (mask << k);
        return (res) == -1;
    }
}
If the input is in string format, then we can do this in O(log N) using recursion:
void longestSequence(string binary, int start, int end, int* maxAtStart, int* maxAtEnd, int* maxOverall)
{
    if (start == end)
    {
        *maxAtStart = binary[start] == '0' ? 1 : 0;
        *maxAtEnd = *maxAtStart;
        *maxOverall = *maxAtStart;
        return;
    }
    int maxAtStartLeft, maxAtEndLeft, maxOverallLeft;
    int maxAtStartRight, maxAtEndRight, maxOverallRight;
    longestSequence(binary, start, (start + end) / 2, &maxAtStartLeft, &maxAtEndLeft, &maxOverallLeft);
    longestSequence(binary, (start + end) / 2 + 1, end, &maxAtStartRight, &maxAtEndRight, &maxOverallRight);
    if ((maxAtEndLeft + maxAtStartRight > maxOverallLeft) && (maxAtEndLeft + maxAtStartRight > maxOverallRight))
    {
        *maxOverall = maxAtEndLeft + maxAtStartRight;
        *maxAtStart = (maxAtStartLeft == ((start + end) / 2 - start + 1)) ? *maxOverall : maxAtStartLeft;
        *maxAtEnd = (maxAtEndRight == (end - (start + end) / 2)) ? *maxOverall : maxAtEndRight;
    }
    else
    {
        *maxOverall = maxOverallLeft > maxOverallRight ? maxOverallLeft : maxOverallRight;
        *maxAtStart = maxAtStartLeft;
        *maxAtEnd = maxAtEndRight;
    }
    return;
}

int longestSequence(string binary)
{
    int maxAtStart = 0, maxAtEnd = 0, maxOverall = 0;
    longestSequence(binary, 0, binary.length() - 1, &maxAtStart, &maxAtEnd, &maxOverall);
    return maxOverall;
}
The best time complexity we could get is O(m), where m is the number of 1s in the number.
Idea: start from the rightmost bit.
while(N) {
    if ((N & 1) == 0) {
        run a variant of binary search to reach the left end of the zeros,
        and keep a count of the number of zeros
    }
    N = N >> 1
}
after the loop, return the max value
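A sketch of this idea in Python (hypothetical names; `n & -n` isolates the lowest set bit and plays the role of the binary search, so the loop body runs once per 1 bit):

```python
def longest_zeros_jump(n):
    best, prev = 0, -1                     # prev = position of the previous 1 bit
    while n:
        low = n & -n                       # isolate the lowest set bit
        pos = low.bit_length() - 1         # its position
        best = max(best, pos - prev - 1)   # zeros up to this 1
        prev = pos
        n ^= low                           # clear the bit and jump to the next 1
    return best
```

Note this counts trailing zeros below the lowest 1 as a run, like most of the shift-based solutions above.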
use lookup table? value->max 0s len. Then, lookup for each byte and (exercise for the reader)
Speaking generally, would that reduce the solution to O(log n)? As far as I can see, it will be O(n/k), where k is the size of each fragment, which is essentially O(n). Though it will definitely speed up the calculation.
For this particular case: assuming an int is 32-bit and we use an 8-bit table, that should be 4 steps, which is actually better than log 32 (= 5).
Also, the elements of that table would need to hold three values: (a) the max number of consecutive 0s in that element (between two 1s), (b) the number of consecutive 0s from the left until the first 1 is encountered, and (c) the number of consecutive 0s from the right until the first 1 is encountered. The program would need to add the current (b) to the previous (c) and compare that against (a) and the max so far (another variable). And this doesn't handle the edge cases where a fragment contains one 1 or no 1s at all.
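A sketch of the three-value byte table just described (hypothetical Python): each entry stores (trailing zeros, max inner run, leading zeros) for one byte value, the scan stitches one byte's run onto the next, and an all-zero byte simply extends the current run, which also covers the one-1/no-1 edge cases. Zeros above the MSB are not counted, matching the shift-based solutions above.

```python
def byte_info(b):
    # (trailing zeros, max run between 1s, leading zeros) of an 8-bit value
    if b == 0:
        return (8, 0, 8)
    bits = format(b, '08b')                 # MSB first
    trail = len(bits) - len(bits.rstrip('0'))
    lead = len(bits) - len(bits.lstrip('0'))
    inner = max(len(r) for r in bits.strip('0').split('1'))
    return (trail, inner, lead)

TABLE = [byte_info(b) for b in range(256)]

def longest_zeros_lut(n):
    best, carry = 0, 0                      # carry = zero run growing past byte boundaries
    while n:
        b = n & 0xFF
        n >>= 8
        if b == 0:
            carry += 8                      # whole byte of zeros: run continues
        else:
            trail, inner, lead = TABLE[b]
            best = max(best, carry + trail, inner)
            carry = lead                    # run restarts at this byte's high end
    return best
```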
The solution should be a recursive approach (divide and conquer):
1) Divide the given binary string into two halves (left and right)
2) Recurse on the left and right halves
3) Each recursive call returns both the max run of continuous zeros that can be concatenated with the other half (i.e. touching the boundary) and the best run that cannot
4) Compute the same return values from the returned values and return them
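The four steps above can be sketched as follows (hypothetical Python; each call returns the zero runs touching the left edge and the right edge, plus the best run overall). Note the recursion still visits every character, so the total work is O(n) even though the depth is O(log n):

```python
def scan(s, lo, hi):
    # returns (prefix, suffix, best): zeros touching the left edge, the right
    # edge, and the best run anywhere in s[lo:hi]
    if hi - lo == 1:
        z = 1 if s[lo] == '0' else 0
        return (z, z, z)
    mid = (lo + hi) // 2
    lp, ls, lb = scan(s, lo, mid)
    rp, rs, rb = scan(s, mid, hi)
    prefix = lp + rp if lp == mid - lo else lp     # left half all zeros?
    suffix = rs + ls if rs == hi - mid else rs     # right half all zeros?
    return (prefix, suffix, max(lb, rb, ls + rp))  # ls + rp crosses the middle

def longest_zeros_dc(s):
    if not s:
        return 0
    return scan(s, 0, len(s))[2]
```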
public class FindZero {
    static int findMaxZero(int n) {
        int count = 0;   // current run of zeros
        int max = 0;     // longest run seen so far
        if (n == 0)
            return 1;
        while (n != 0) {
            if (n % 2 == 0) {
                count++;
                if (max < count)
                    max = count;
            } else {
                count = 0;
            }
            n = n / 2;
        }
        return max;
    }

    public static void main(String args[]) {
        System.out.println(findMaxZero(11));
    }
}
Easy task.
Just think about how you convert a number to binary.
Code in Java:
int maxNumberOfContinuousZero(int n) {
    int result = 0;
    int tmp = 0;
    while (n > 1) {
        if (n % 2 == 0) {
            tmp++;
            n = n / 2;
        } else {
            if (tmp > result) {
                result = tmp;
            }
            tmp = 0;          // the run of zeros ends at this 1
            n = (n - 1) / 2;
        }
    }
    if (tmp > result) {
        result = tmp;
    }
    return result;
}
Yours is O(N), while the interviewer is asking for O(log N), where N is the number of binary digits of the integer; that is the same as the number of operations you are doing.
The logic would be to do this recursively:
if i is divisible by 2, then add 1 to the max run of zeroes of i/2;
if i is odd, then terminate the current run, compute the max run of zeroes for i/2, and compare the max run seen before i with the max run from i/2;
the max for 1 is 0.
I think this should work in log n complexity, since we keep dividing by 2.
I know. That's why I said to terminate adding zeroes once an odd number is encountered, and then compute the same way for i/2; if the value for i/2 is greater than what we had for i, then the max contiguous run of zeroes is that one. Sorry if it wasn't clear.
e.g. suppose 38:
38 is even, so the current run = 1;
19 is odd, so the run resets to 0;
9 is odd, so the run stays 0;
4 is even, so the run = 1;
2 is even, so the run = 2;
1 is odd, so the run resets; max = 2.
Now check that it's correct (each line shows the number and its lowest bit):
38 -> 0
19 -> 1
9 -> 1
4 -> 0
2 -> 0
1 -> 1
binary: 100110 (2 + 4 + 32 = 38)
max contiguous run = 2
@Kirk
This iterates over every bit in the number; look, you have 5 iterations for 6 bits, which is O(N), while O(lg N) would require only around 2 iterations.
This question is ambiguous. If N is the integer, then it is easy to find a solution in O(log(N)) time. But if N is the length of the binary representation of n, is there an O(log(N)) = O(log(log(n))) implementation? This DOES have a lower bound of log(log(n)) comparisons, because there are N possible results for the longest continuous sequence of 0s. But that doesn't say it is impossible. However, I feel that if there were an O(log(N)) solution, it would be a pretty impressive result deserving of a research paper, not something they would expect to be solved in an interview.
That depends on what 'N' is. If it is the number, say '15', then the solution is easily found and it will be in O(log(N)). If N is the number of digits, it is not possible: clearly you need to read all the digits first to say anything.
- memo October 24, 2013