Actually, we can achieve O(n*n) time complexity and O(1) space complexity without a hash. First replace each value with its square, keeping the original sign (for example 3 => 9, -3 => -9). Then sort the elements by absolute value (abs(9) = 9, abs(-9) = 9) in O(n log n) time. Now look for a triple satisfying A + B = C, where A = abs(a), B = abs(b), C = abs(c) and a, b, c are the squared values. For each fixed C we can search for a pair with A + B = C in O(n) time, so overall the time complexity is O(n*n); if we choose an in-place quicksort, the space complexity stays O(1). A small sketch of the pair search is below.
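Assuming the underlying question is to find a, b, c in the array with a*a + b*b = c*c, here is a minimal Python sketch of the fixed-C two-pointer search (the function name and the sample input are mine, for illustration only):

def find_square_triple(nums):
    # squared values sorted ascending; signs no longer matter once squared
    sq = sorted(x * x for x in nums)
    # fix the largest remaining value as C, then two-pointer scan for A + B = C
    for k in range(len(sq) - 1, 1, -1):
        i, j = 0, k - 1
        while i < j:
            s = sq[i] + sq[j]
            if s == sq[k]:
                return (sq[i], sq[j], sq[k])   # the squared values A, B, C
            elif s < sq[k]:
                i += 1
            else:
                j -= 1
    return None

print(find_square_triple([3, -5, 1, 4, 7]))   # (9, 16, 25)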
- chenlc626 March 29, 2013
In order to generate 0 with probability p/q, first find an integer n such that 2^n is just bigger than q. Machine B then reads n bits from machine A at a time, treats them as an unsigned integer value v, and does the following (see the sketch after this list):
1> if 0 <= v < p, output 0
2> if p <= v < q, output 1
3> otherwise, output nothing.
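A minimal Python sketch of machine B, assuming machine A is a fair bit source (simulated here with random.getrandbits); instead of emitting nothing on rejection it simply draws again, which gives the same distribution over the emitted symbols:

import random

def machine_a_bit():
    # stand-in for machine A: one fair random bit
    return random.getrandbits(1)

def machine_b(p, q):
    # emits 0 with probability p/q and 1 with probability (q - p)/q
    n = 1
    while (1 << n) < q:        # smallest n with 2^n >= q
        n += 1
    while True:
        v = 0
        for _ in range(n):     # read n bits from machine A
            v = (v << 1) | machine_a_bit()
        if v < p:
            return 0
        if v < q:
            return 1
        # v >= q: discard this draw and try again

print(sum(machine_b(1, 3) == 0 for _ in range(30000)) / 30000.0)   # roughly 1/3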
Please add more information about the question. For example, I am not sure whether the set has x elements, whether elements can repeat (for example two 3s and two 4s), and whether different subgroups can pick the same element of the set as their size. I am also not sure about the meaning of "however the ordering of the group should be intact (cannot switch positions)": does it mean that only a contiguous sub-partition of the sequence can form one subgroup? For example, if 3 is chosen as the size of the first subgroup, it would contain elements 0-2 of the sequence, and if 4 is chosen as the size of the second subgroup, it would contain elements 3-6, and so on. I am also not sure about the meaning of "sum of squares of the differences of 1's and 2's in each sub group": is it the sum of the squares of the differences between the number of 1's and the number of 2's in each subgroup? So if a subgroup is 121111, the difference would be 4 and the square would be 16? Please give an example showing the question's requirement. Thanks.
And one more question: is X fixed?
Pick a sorting algorithm based on further information about the elements of the array; some radix sort may be usable since the elements are numbers and extra space is available to hold the result. Then shrink the final sorted array to remove duplicates using two pointers, one writer and one reader, as in the sketch below.
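A minimal Python sketch of the writer/reader de-duplication on an already sorted list (the function name is mine; the sorting step is assumed already done):

def dedup_sorted(arr):
    # writer/reader two-pointer compaction of a sorted list, in place
    if not arr:
        return arr
    write = 1                                  # next slot to write to
    for read in range(1, len(arr)):            # reader scans every element
        if arr[read] != arr[write - 1]:        # keep only the first of each run
            arr[write] = arr[read]
            write += 1
    del arr[write:]                            # drop the tail left over after compaction
    return arr

print(dedup_sorted([1, 1, 2, 3, 3, 3, 7]))     # [1, 2, 3, 7]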
- chenlc626 March 25, 2013
Here is the Python code:
# points to search (origPtr) and a hash for O(1) membership tests (origSet)
origPtr = [(0,0), (1,0), (1,1), (0,1), (-1,-1), (-1,0), (-1,1), (0,-1), (1,-1), (3,6), (4,9)]
foundSquare = {}
origSet = {}
for elem in origPtr:
    origSet[elem] = None

# clockRotate rotates ptr2 by -90 degrees around ptr1; antiClockRotate by +90 degrees
def clockRotate(ptr1, ptr2):
    return (ptr2[1] - ptr1[1] + ptr1[0], ptr1[0] - ptr2[0] + ptr1[1])

def antiClockRotate(ptr1, ptr2):
    return (ptr1[1] - ptr2[1] + ptr1[0], ptr2[0] - ptr1[0] + ptr1[1])

# canonical ordering of a square's corners so duplicates collapse in the hash
def orderPtr(ptr1, ptr2, ptr3, ptr4):
    return tuple(sorted([ptr1, ptr2, ptr3, ptr4]))

for index in range(len(origPtr)):
    ptr1 = origPtr[index]
    for ptr2 in origPtr[index + 1:]:
        # square lying on one side of the segment ptr1-ptr2
        ptr3 = antiClockRotate(ptr2, ptr1)
        ptr4 = clockRotate(ptr1, ptr2)
        if ptr3 in origSet and ptr4 in origSet:
            #print "found one clock square", ptr1, ptr2, ptr3, ptr4
            foundSquare[orderPtr(ptr1, ptr2, ptr3, ptr4)] = 1
        # square lying on the other side
        ptr3 = clockRotate(ptr2, ptr1)
        ptr4 = antiClockRotate(ptr1, ptr2)
        if ptr3 in origSet and ptr4 in origSet:
            #print "found one anti clock square", ptr1, ptr2, ptr3, ptr4
            foundSquare[orderPtr(ptr1, ptr2, ptr3, ptr4)] = 1

#print foundSquare
for elem in foundSquare.keys():
    print elem
1> First pick one point p1 from the list.
2> Then pick another point p2 from the rest of the list.
3> Using these two points as one side of a square, compute the two other possible pairs of points that would complete a square. If both points of a pair exist, record one solution.
4> Loop the same logic over all remaining points in the list.
5> Then remove p1 from the list and start from 1> again.
Notes:
1> There will be some repeats during the search; add a flag (or a canonical ordering of the corners) to avoid counting them twice.
2> You do not need to calculate distances. You only need to rotate the vector (p2 - p1) by 90 or -90 degrees to find the 4 possible completing points.
3> An N-entry hash is needed to check whether a point is in the list.
Test result:
orig 1234567890 ==> 0123456789 in 1 right shift
orig 1234567890 ==> 9012345678 in 2 right shift
orig 1234567890 ==> 8901234567 in 3 right shift
orig 1234567890 ==> 7890123456 in 4 right shift
orig 1234567890 ==> 6789012345 in 5 right shift
orig 1234567890 ==> 5678901234 in 6 right shift
orig 1234567890 ==> 4567890123 in 7 right shift
orig 1234567890 ==> 3456789012 in 8 right shift
orig 1234567890 ==> 2345678901 in 9 right shift
My idea is:
1> If k is relatively prime to len, just keep moving each element to its next place one by one; after len moves, all elements are in place.
2> If k is not relatively prime to len, find gcd(k, len) and divide the array into gcd virtual sub-arrays, where sub-array i consists of the elements array[i + k*p], with i in [0, gcd-1] identifying the sub-array and p in [0, len/gcd - 1] the index inside it. Within sub-array i, the rotation simply increases its index p by 1, so the same logic as in 1> can be used to rotate each sub-array.
Here is my C code:
void inplaceRotate(char *str, int len, int k)
{
    int cur, next;
    int count;
    int gcd, loopLen, a, b;
    char temp;

    if (NULL == str || k > len || k < 1)
        return;

    /* First find gcd(len, k) by repeated subtraction. */
    for (a = len, b = k; a != b; )
    {
        if (a > b)
            a -= b;
        else
            b -= a;
    }
    gcd = a;
    loopLen = len / gcd;
    //printf("gcd is %d\n", gcd);

    /* Rotate each of the gcd cycles; each cycle has loopLen elements. */
    for (gcd--; gcd >= 0; gcd--)
    {
        for (cur = gcd, temp = str[cur], count = 0; count < loopLen; count++)
        {
            next = (cur + k) % len;
            /* XOR-swap temp with str[next]: deposit the carried value here
               and pick up the value that must move next. */
            str[next] ^= temp;
            temp ^= str[next];
            str[next] ^= temp;
            cur = next;
        }
    }
}
It is the same logic as finding the next permutation of a string. Please check Wikipedia (en.wikipedia.org/wiki/Permutation), section "Generation in lexicographic order":
Generation in lexicographic order
There are many ways to systematically generate all permutations of a given sequence[citation needed]. One classical algorithm, which is both simple and flexible, is based on finding the next permutation in lexicographic ordering, if it exists. It can handle repeated values, for which case it generates the distinct multiset permutations each once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using the factorial number system) and converting those to permutations. To use it, one starts by sorting the sequence in (weakly) increasing order (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back to Narayana Pandita in 14th century India, and has been frequently rediscovered ever since.[12]
The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.
1. Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation.
2. Find the largest index l such that a[k] < a[l]. Since k + 1 is such an index, l is well defined and satisfies k < l.
3. Swap a[k] with a[l].
4. Reverse the sequence from a[k + 1] up to and including the final element a[n].
After step 1, one knows that all of the elements strictly after position k form a weakly decreasing sequence, so no permutation of these elements will make it advance in lexicographic order; to advance one must increase a[k]. Step 2 finds the smallest value a[l] to replace a[k] by, and swapping them in step 3 leaves the sequence after position k in weakly decreasing order. Reversing this sequence in step 4 then produces its lexicographically minimal permutation, and the lexicographic successor of the initial state for the whole sequence.
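A minimal Python sketch of the four steps above (next_permutation is my own name for the function, not something given in the question):

def next_permutation(a):
    # advances list a to its lexicographic successor in place;
    # returns False if a was already the last permutation
    n = len(a)
    k = n - 2
    while k >= 0 and a[k] >= a[k + 1]:   # step 1: largest k with a[k] < a[k+1]
        k -= 1
    if k < 0:
        return False
    l = n - 1
    while a[l] <= a[k]:                  # step 2: largest l with a[k] < a[l]
        l -= 1
    a[k], a[l] = a[l], a[k]              # step 3: swap a[k] and a[l]
    a[k + 1:] = reversed(a[k + 1:])      # step 4: reverse the suffix
    return True

p = [1, 2, 3]
while True:
    print(p)                 # prints the 6 permutations of [1, 2, 3] in order
    if not next_permutation(p):
        break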
1> First pick one point p1 and calculate its distances to the other three points: d12, d13, d14.
2> Two of these three distances should be equal (the sides); otherwise it is not a square. Suppose d12 = d13; then d14 (the diagonal) should be bigger than d12, otherwise it fails.
3> Check that the inner product of the vectors (p2 - p1) and (p3 - p1) is zero; otherwise it fails.
4> If the midpoint of p1 and p4 is the same as the midpoint of p2 and p3, it is a square; otherwise it is not.
For the four points, one should not assume their order in the coordinate plane, but determine it through the calculation; otherwise it may be enough to check just two sides and make sure they are equal. A sketch of this check is below.
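A minimal Python sketch of the check, assuming integer (or exactly representable) coordinates; midpoints are compared via coordinate sums to avoid division, and is_square and the sample points are illustrative names only:

def is_square(p1, p2, p3, p4):
    def d2(a, b):                       # squared distance
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # the point farthest from p1 is the diagonal end; the other two are side ends
    others = sorted([p2, p3, p4], key=lambda p: d2(p1, p))
    a, b, diag = others
    if d2(p1, a) == 0 or d2(p1, a) != d2(p1, b):
        return False                    # the two sides must be equal and non-degenerate
    va = (a[0] - p1[0], a[1] - p1[1])
    vb = (b[0] - p1[0], b[1] - p1[1])
    if va[0] * vb[0] + va[1] * vb[1] != 0:
        return False                    # the two sides must be perpendicular
    # the diagonals must share a midpoint: p1 + diag == a + b componentwise
    return (p1[0] + diag[0], p1[1] + diag[1]) == (a[0] + b[0], a[1] + b[1])

print(is_square((0, 0), (1, 1), (1, 0), (0, 1)))   # True
print(is_square((0, 0), (2, 1), (1, 0), (0, 1)))   # False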
My understanding is:
1> Build a simple array-based hash mapping '0'-'9', 'a'-'z', 'A'-'Z' to buckets, making sure 'a'-'z' points to the same node as 'A'-'Z'.
2> Go through the list and count each key accordingly, then list them in the order digits, consonants, vowels. You can also reuse the old list if you want. A sketch of the ordering step is below.
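A minimal Python sketch of step 2>, assuming the task is to output the characters with digits first, then consonants, then vowels (case-insensitive); the function name is mine, and characters that are neither letters nor digits are simply dropped in this sketch:

def order_chars(chars):
    vowels = set("aeiou")
    digits, consonants, vowel_list = [], [], []
    for ch in chars:
        if ch.isdigit():
            digits.append(ch)
        elif ch.lower() in vowels:
            vowel_list.append(ch)
        elif ch.isalpha():
            consonants.append(ch)
    return digits + consonants + vowel_list

print("".join(order_chars("Hello123World")))   # 123HllWrldeoo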
For the question raised by Romonov, how to provide a snapshot of a thread at the application level without digging into the operating system: there are two cases. One is that the memory leak is caused by the API's own calls to malloc. The other is that the leak is caused by some function (other than a direct memory-allocation API) called by the API logic, where the logic either forgets the side effect of that call and does not follow the predefined protocol to free the memory, or the called function itself leaks; this latter case is difficult to handle. We may really need to replace part of the libc memory-allocation implementation with one that calls the underlying system calls directly and reports errors, and let gcc link the test application against that libc.
- chenlc626 March 23, 2013
As for the question itself, it asks people to make sure that an API does not leak memory. If the "people" are the API designers, there is a list of strategies for this, as given above. But all in all, the advice amounts to "do not allocate memory, or free everything you allocate before exiting"; that is a good wish, not a systematic way to make sure of it. So in the end the people are API testers, whether unit testers or real users, and they need a way to check it in black-box style.
- chenlc626 March 23, 2013
1> That the binary itself does not output anything does not mean the system libraries it uses output nothing, unless the binary calls no system API at all, which is unlikely.
2> Same as 1>. I do not care what is inside the binary code itself; I care which other libraries or services it depends on to work. Have you used dmesg for a system service?
3> If it calls a dynamic library of the system, you should be able to see the dependency with nm or another binary utility; otherwise the system loader would not prepare the .so file when the program starts running. Or it may load the .so itself, in which case it must also call some system API to do so. You should be able to figure out the name of the .so it will use, and substitute a debug build of that .so if you can.
4> It depends on the virtual machine you are using. I know some antivirus companies use virtual machines to perform this kind of analysis.
The whole idea is that unless the software is just a ball of junk code that calls no external system service and uses no system resources, you should be able to get some information from its interaction with external system services.
We need to build a hash from each character of the boggle board to its positions, and do a recursive search on each character of the string:
# the boggle board
bbr1 = ['S', 'M', 'E', 'F']
bbr2 = ['R', 'A', 'T', 'D']
bbr3 = ['L', 'O', 'N', 'I']
bbr4 = ['K', 'A', 'F', 'B']
bb = [bbr1, bbr2, bbr3, bbr4]
rowmax = len(bb)
colmax = len(bb[0])

# map each letter to the list of board positions where it appears
wordPos = {}
for row in range(rowmax):
    for col in range(colmax):
        if bb[row][col] in wordPos:
            wordPos[bb[row][col]].append([row, col])
        else:
            wordPos[bb[row][col]] = [[row, col]]
print wordPos

# returns 0 if the word can be traced on the board, -1 otherwise
# note: visited cells are not tracked, so a cell may be reused
def searchWord(pos, curPos, wordList):
    #print curPos, wordList    # debug trace
    if len(wordList) == 0:
        return 0
    if curPos is None:
        # first letter: start from every position where it appears
        if wordList[0] in pos:
            return searchWord(pos, pos[wordList[0]], wordList[1:])
        else:
            return -1
    # otherwise try every neighbour of every current position
    for onePos in curPos:
        for h in range(-1, 2):
            for v in range(-1, 2):
                if h == 0 and v == 0:
                    continue
                row = onePos[0] + v
                col = onePos[1] + h
                if row < 0 or row > rowmax - 1:
                    continue
                if col < 0 or col > colmax - 1:
                    continue
                if bb[row][col] == wordList[0]:
                    if 0 == searchWord(pos, [[row, col]], wordList[1:]):
                        return 0
    return -1

print searchWord(wordPos, None, "STAR")
print searchWord(wordPos, None, "TONE")
print searchWord(wordPos, None, "NOTE")
print searchWord(wordPos, None, "SAND")
print searchWord(wordPos, None, "AFB")
print searchWord(wordPos, None, "UU")
print searchWord(wordPos, None, "")
I would build a hash whose key is the remainder of each integer in the list divided by K, and whose value is the counter left after pairs are removed. If any non-zero counter remains in the hash after all integers have been checked, the test fails; otherwise it passes. See the code below:
#input = [1, 5, 7, 0, 4, 5, 9]
#input = [1, 9, 5, 11, 14, 15]
input = [1, 2, 3, 5, 7, 15]
k = 3
print "input", input
print "k", k

# remainderCnt[r] = net count of elements with remainder r still waiting for a partner
remainderCnt = {}
for elem in input:
    remain = elem % k
    if remain == 0:
        # elements divisible by k pair with each other: toggle the parity
        if remain in remainderCnt:
            remainderCnt[remain] = (remainderCnt[remain] + 1) % 2
        else:
            remainderCnt[remain] = 1
        continue
    if (k - remain) in remainderCnt:
        # a complementary remainder is waiting: consume it
        remainderCnt[k - remain] -= 1
    else:
        if remain in remainderCnt:
            remainderCnt[remain] += 1
        else:
            remainderCnt[remain] = 1

for elem in remainderCnt:
    print elem, ":", remainderCnt[elem]

pair_complete = 1
for elem in remainderCnt:
    if remainderCnt[elem]:
        pair_complete = 0
        break
if pair_complete:
    print "Test passed\n"
else:
    print "Test Failed\n"
1> Turn on all system-level logging and see if there are any error messages.
2> objdump the binary to see whether it calls APIs of other dynamic libraries; replace those libraries with debug versions and see whether they output any errors.
3> You can also replace system-level libraries, such as the memory-allocation part of libc, with debug versions and dump more information.
4> Finally, you can run the binary in a virtual machine and dump its execution sequence (API calls with parameters) to see whether that gives you an idea of what is going on.
With proper play, the loser sequence would be as below if A plays first:
ABBABBABBABBA
A is the loser only if both of its predecessors in the list are B: whichever amount A takes (1 or 2), B becomes the first player for the remaining coins, and since the list says the second player loses at those counts, A loses either way. A small sketch verifying the pattern is below.
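A small Python sketch verifying the pattern, assuming the rules are: players alternate, each turn removes 1 or 2 coins, and whoever is forced to take the last coin loses:

def first_player_loses(n):
    # wins[m] is True if the player to move with m coins left can force a win
    wins = [False] * (n + 1)           # wins[1] is False: forced to take the last coin
    for m in range(2, n + 1):
        # a move is winning if it leaves the opponent in a losing position
        wins[m] = any(not wins[m - take] for take in (1, 2) if m - take >= 1)
    return not wins[n]

print("".join("A" if first_player_loses(n) else "B" for n in range(1, 14)))
# ABBABBABBABBA -- matches the sequence above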
1> Set up two pointers, Left and Right, at the start and end of the string.
2> If Left points at whitespace or a punctuation character, move Left right to the next character that is neither, unless the end of the string is reached. If Right points at whitespace or punctuation, move Right left to the next such character, unless the start of the string is reached. If Left >= Right, return TRUE (either everything is whitespace or punctuation, or only one other character remains).
3> Otherwise compare LOWERCASE(*Left) and LOWERCASE(*Right); if they are the same, advance both pointers and go to 2>, otherwise return FALSE. A sketch is below.
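A minimal Python sketch of this two-pointer check, treating every non-alphanumeric character as something to skip:

def is_palindrome(s):
    left, right = 0, len(s) - 1
    while left < right:
        if not s[left].isalnum():          # skip whitespace / punctuation on the left
            left += 1
        elif not s[right].isalnum():       # ... and on the right
            right -= 1
        elif s[left].lower() != s[right].lower():
            return False
        else:
            left += 1
            right -= 1
    return True

print(is_palindrome("A man, a plan, a canal: Panama!"))   # True
print(is_palindrome("race a car"))                        # False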
1> If the lengths of the strings are already known, sort the strings in ascending order of length.
2> Pick two strings from the list and build a suffix tree for each.
3> Starting from the suffix tree of the shortest string, prune it against the other suffix tree.
4> If the resulting tree is empty, return NULL.
5> Pick another string from the list if one exists, build its suffix tree, and go to 3>.
6> If there are no more strings in the list, return the longest path of the resulting tree as the result. A simpler (non-suffix-tree) sketch of the same goal is below.
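Building suffix trees is too long to sketch here, so below is a much simpler (and slower) Python sketch with the same goal, the longest common substring of all the strings; it enumerates substrings of the shortest string from longest to shortest instead of pruning suffix trees:

def longest_common_substring(strings):
    if not strings:
        return ""
    shortest = min(strings, key=len)
    # try candidate lengths from the longest possible downwards
    for length in range(len(shortest), 0, -1):
        for start in range(len(shortest) - length + 1):
            candidate = shortest[start:start + length]
            if all(candidate in s for s in strings):
                return candidate
    return ""

print(longest_common_substring(["bakery", "breakable", "unmistakable"]))   # "ak"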
Write wrapper functions for malloc/free and the other dynamic memory allocation APIs, and add statistics keyed by timestamp and thread ID. Each time before calling the API under test, take a snapshot of the current memory usage of that thread; after the call, check that the thread's memory usage matches the snapshot, and raise a fatal error if they differ, unless you know the API itself legitimately needs to keep allocated memory.
- chenlc626 March 19, 2013
1> The issue is that you need to lock the related accounts during the transaction.
2> You should avoid a global lock since it will block all other threads.
3> With one lock per account, you need to take care of the locking order, otherwise you get deadlocks.
4> Use a unique id of the account, such as the account number, and translate it into a unique lock-order value Order(account) such that Order(A) > Order(B) and Order(B) > Order(C) imply Order(A) > Order(C), and Order(A) == Order(B) implies A == B. Lock the accounts involved in a transaction following the order of Order(account), as in the sketch below.
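A minimal Python sketch of 4> using threading locks; Account, its fields and transfer are hypothetical names, and the account number itself is used as the lock-order value:

import threading

class Account(object):
    def __init__(self, number, balance):
        self.number = number               # unique id, also the lock-order value
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    # always acquire locks in increasing account-number order to avoid deadlock
    first, second = sorted((src, dst), key=lambda acct: acct.number)
    with first.lock:
        with second.lock:
            src.balance -= amount
            dst.balance += amount

a = Account(1001, 500)
b = Account(1002, 300)
transfer(a, b, 100)
print(a.balance)   # 400
print(b.balance)   # 400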
I would have one log directory per URL and one log file per day. Each log line holds a time in seconds and the accumulated number of accesses since the start of the log; all accesses within the same second share one line.
To compute the number of accesses for a specified URL, find that URL's directory and all log files within the time frame. For each log file, the number of accesses is the latest accumulated count within the time frame minus the earliest one, and summing these per-file numbers gives the final result. Some design concerns (a sketch of the range query is below):
1> The work is distributed per URL and per time frame.
2> The access count is accumulated within each day, so we avoid scanning whole files.
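A small Python sketch of the per-day range query; the log layout (one (second, cumulative count) entry per active second) and the numbers are made up for illustration:

import bisect

# hypothetical one-day log for one URL: (epoch second, accesses accumulated so far)
day_log = [(1000, 3), (1004, 10), (1007, 11), (1020, 42)]

def accesses_between(log, t_start, t_end):
    # cumulative count at or before time t, or 0 if nothing was logged yet
    def cumulative_at(t):
        i = bisect.bisect_right(log, (t, float("inf")))
        return log[i - 1][1] if i else 0
    return cumulative_at(t_end) - cumulative_at(t_start - 1)

print(accesses_between(day_log, 1001, 1010))   # 11 - 3 = 8 accesses in [1001, 1010]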
This is an extension of string permutation; please see my code:
#include <stdio.h>

/* Print all permutations of permLen symbols chosen from the wordLen symbols
 * starting at pNum.  pOrig points at the start of the whole buffer so the
 * chosen prefix can be printed. */
int numPerm(unsigned char *pNum, unsigned int wordLen, unsigned int permLen, unsigned char *pOrig)
{
    unsigned int count;
    unsigned char temp;

    if (NULL == pNum || NULL == pOrig)
        return -1;

    if (!permLen)
    {
        /* temporarily terminate the string after the chosen prefix and print it */
        temp = pNum[0];
        pNum[0] = '\0';
        printf("%s\n", (char *)pOrig);
        pNum[0] = temp;
        return 0;
    }

    for (count = 0; count < wordLen; count++)
    {
        /* swap the count-th candidate into the current position ... */
        temp = pNum[0];
        pNum[0] = pNum[count];
        pNum[count] = temp;

        numPerm(pNum + 1, wordLen - 1, permLen - 1, pOrig);

        /* ... and swap it back */
        pNum[count] = pNum[0];
        pNum[0] = temp;
    }
    return 0;
}

int main()
{
    unsigned char numList[] = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9'};
    numPerm(numList, 10, 4, numList);
    return 0;
}
When I think about ordering the accounts, I assume there is a unique identifier for each account, normally the account number. The account number is usually a string of digits; one can map it to a U64 value and then synchronize the accounts according to the order of this value, for example locking the account with the bigger account number first and the smaller one later. As long as the U64 values do not repeat, no cycle of locks can form.
- chenlc626 March 16, 2013
Write a function that converts a number below 1000 to its string description, as suggested by the question. Then write a function that splits the original number into X billions, Y millions, Z thousands and the rest R, calls the first function to convert X, Y, Z and R into descriptions, appends the unit name (billion, million, thousand) to each, and combines these short phrases into the final result, as in the sketch below.
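A minimal Python sketch of this two-level approach (the function names and word tables are mine, and hyphenation of numbers like twenty-one is skipped to keep it short):

ONES = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
        "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
        "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]

def below_thousand(n):
    # words for 1 <= n <= 999
    words = []
    if n >= 100:
        words += [ONES[n // 100], "hundred"]
        n %= 100
    if n >= 20:
        words.append(TENS[n // 10])
        n %= 10
    if n:
        words.append(ONES[n])
    return " ".join(words)

def number_to_words(n):
    if n == 0:
        return "zero"
    parts = []
    for chunk, unit in ((10**9, "billion"), (10**6, "million"), (10**3, "thousand"), (1, "")):
        if n >= chunk:
            parts.append((below_thousand(n // chunk) + " " + unit).strip())
            n %= chunk
    return " ".join(parts)

print(number_to_words(1234567890))
# one billion two hundred thirty four million five hundred sixty seven thousand eight hundred ninety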
- chenlc626 March 15, 2013
This is an extended form of finding two numbers whose sum equals a specified value.
1> Sort the array in ascending order.
2> Starting from the end of the list, pick each number as P1.
3> Among the remaining numbers, try to find P2 + P3 = -P1 with two pointers, which takes O(n).
4> If one is found in 3>, terminate the search.
Worst-case complexity: O(n*n).
My Python code, according to charbot:
input = [ {"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "ttads", "page-type-id":"Product Details"},
{"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "tttad", "page-type-id":"Product Details"},
{"user_id" : "ttads", "page-type-id":"Product Details"},
{"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "tttad", "page-type-id":"Product Details"},
{"user_id" : "ttads", "page-type-id":"Product Details"},
{"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "tttad", "page-type-id":"Product Details"},
{"user_id" : "tttad", "page-type-id":"Product Details"},
{"user_id" : "tttad", "page-type-id":"Product Details"},
{"user_id" : "tttad", "page-type-id":"Product Details"},
{"user_id" : "tttad", "page-type-id":"Product Details"},
{"user_id" : "tttad", "page-type-id":"Product Details"},
{"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "tttad", "page-type-id":"Home Page"},
{"user_id" : "tttad", "page-type-id":"Home Page"}
];
seq_rank = {}      # encoded 3-page sequence -> number of times it was seen
user_set = {}      # user_id -> {"page_type_fifo": ..., "count": ...}
page_type_inv = ["Home Page", "Product Details"]
page_type = {}
index = 0
page_type_size = 0
for index in range(len(page_type_inv)):
    page_type[page_type_inv[index]] = index
    page_type_size = index
page_type_size += 1
print page_type
#page_type = {"Home Page": 1,
#             "Product Details": 2,
#             "type size !!!": 3}
print page_type_size

for item in input:
    if item["user_id"] in user_set.keys():
        # keep a base-page_type_size encoding of the last three pages this user saw
        user_set[item["user_id"]]["page_type_fifo"] %= page_type_size * page_type_size
        user_set[item["user_id"]]["page_type_fifo"] *= page_type_size
        user_set[item["user_id"]]["page_type_fifo"] += page_type[item["page-type-id"]]
        user_set[item["user_id"]]["count"] += 1
        if user_set[item["user_id"]]["count"] > 2:
            # the user has viewed at least three pages: count this 3-page sequence
            if user_set[item["user_id"]]["page_type_fifo"] in seq_rank.keys():
                seq_rank[user_set[item["user_id"]]["page_type_fifo"]] += 1
            else:
                seq_rank[user_set[item["user_id"]]["page_type_fifo"]] = 1
    else:
        user_set[item["user_id"]] = {}
        user_set[item["user_id"]]["page_type_fifo"] = page_type[item["page-type-id"]]
        user_set[item["user_id"]]["count"] = 1
print seq_rank

# find the most frequent encoded sequence and decode it back to page names
rank_max = 0
rank_max_key = 0
for key in seq_rank.keys():
    if rank_max < seq_rank[key]:
        rank_max = seq_rank[key]
        rank_max_key = key
third_page_type = page_type_inv[rank_max_key % page_type_size]
rank_max_key /= page_type_size
second_page_type = page_type_inv[rank_max_key % page_type_size]
first_page_type = page_type_inv[rank_max_key / page_type_size]
print "Max Order Seq is:", first_page_type, "==> ", second_page_type, "==> ", third_page_type
print input
Here is my thinking; I am not sure it is optimal. Let I be the input array and assume all elements of I are positive. Let X be the part of the array given a + sign and Y the part given a - sign, so sum(X) - sum(Y) is the value we want. We have:
sum(X) + sum(Y) = sum(I)
sum(X) - sum(Y) = sum(I) - 2*sum(Y) = 2*(0.5*sum(I) - sum(Y)). So the question becomes: find the subset Y of I whose sum is closest to the constant 0.5*sum(I) for the given input.
For this question I am thinking of the algorithm below:
1> Sort I.
2> Build an array sumInput[x], the sum of the elements of I from index 0 to x.
3> Define a function find_closest(I, index, *pCurrentMin, target). If index < 0, return. Otherwise:
a> if target - sumInput[index] > *pCurrentMin, you cannot get closer, return.
b> if I[0] - target > 0, you cannot get closer, return.
c> if I[index] == target, set *pCurrentMin = 0 and return.
d> if I[index] < target and *pCurrentMin > target - I[index], set *pCurrentMin = target - I[index].
e> call find_closest(I, index-1, pCurrentMin, target - I[index]); if *pCurrentMin == 0, return.
f> call find_closest(I, index-1, pCurrentMin, target).
4> Call find_closest(I, n-1, &currentMin, 0.5*sumInput[n-1]), with currentMin initialized to 0.5*sumInput[n-1]. A runnable (unpruned) sketch is below.
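A runnable Python sketch of the reduction; it uses a plain exhaustive recursion instead of the pruning in steps a> and b>, so it only illustrates the idea on small inputs (all names are mine):

def closest_subset_sum(nums, target):
    # subset sum of nums closest to target, by trying every subset
    best = [None]

    def walk(index, current):
        if best[0] is None or abs(target - current) < abs(target - best[0]):
            best[0] = current
        if index == len(nums):
            return
        walk(index + 1, current + nums[index])   # put nums[index] into Y
        walk(index + 1, current)                 # leave it out

    walk(0, 0)
    return best[0]

def min_signed_sum(nums):
    # smallest possible |sum(X) - sum(Y)| = |sum(I) - 2 * sum(Y)|
    total = sum(nums)
    return abs(total - 2 * closest_subset_sum(nums, total / 2.0))

print(min_signed_sum([1, 2, 4, 9]))   # 2, e.g. 1 + 2 + 4 - 9 = -2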
Suppose the matrix is p[5][5]; then the number at (x, y) is p[x+2][2-y]. For a p[2k+1][2k+1] matrix, I guess the number at (x, y) would be p[k+x][k-y]. There is a pattern in the array: n(k, k) = p[2k][0] = (2k+1)*(2k+1).
The C implementation of n(x, y) is as below:
#include <stdio.h>
#include <stdlib.h>

/* number written at coordinate (x, y) of the spiral */
int circle_digit(int x, int y)
{
    int maxXY;          /* which "ring" of the spiral (x, y) lies on */

    maxXY = abs(x);
    if (maxXY < abs(y))
        maxXY = abs(y);

    if (x == maxXY && y > -maxXY && y < maxXY)
    {
        /* right side of the ring */
        return (2 * maxXY - 1) * (2 * maxXY - 1) + maxXY - y;
    }
    else if (y == -maxXY && x > -maxXY && x <= maxXY)
    {
        /* bottom side */
        return (2 * maxXY - 1) * (2 * maxXY - 1) + 3 * maxXY - x;
    }
    else if (x == -maxXY && y >= -maxXY && y < maxXY)
    {
        /* left side */
        return (2 * maxXY - 1) * (2 * maxXY - 1) + 5 * maxXY + y;
    }
    else if (y == maxXY && x >= -maxXY && x <= maxXY)
    {
        /* top side */
        return (2 * maxXY - 1) * (2 * maxXY - 1) + 7 * maxXY + x;
    }
    return 0;   /* not reachable: every (x, y) falls on one side of its ring */
}
I would think it should use the A* search algorithm.
- chenlc626 April 23, 2013