If you look at the assignment to i, you can see that it is the combined offset (counter + offset), taken modulo the size of the range, plus the start index.
This means that i always stays within the range start <= i < end, and its value wraps around: for example, end-2, end-1, start, start+1.
Therefore, if there are no non-null elements at or after the (start + offset) position (and the linear search reaches end), the search continues from the start position.
If there are no non-null elements at all, the loop finishes (after end - start iterations) and returns -1 (to indicate there are no non-null elements).
public class ExcelLetters {
    public static void main(String[] args) {
        int i = Integer.parseInt(args[0]);
        System.out.println(convert(i));
    }

    public static String convert(int i) {
        // Bijective base-26: 0 -> A, 25 -> Z, 26 -> AA, 701 -> ZZ, 702 -> AAA.
        // Subtracting 1 before the next division is what makes the numbering
        // bijective; without it, inputs like 676 produce characters below 'A'.
        String s = "";
        while (i >= 0) {
            s = (char) ('A' + i % 26) + s;
            i = i / 26 - 1;
        }
        return s;
    }
}
An alternative solution is to iterate through the array once to filter out the null elements. You can then choose an element at random from the filtered array.
This has a time complexity of O(n) and a space complexity of O(n). However, if the array does not change regularly, you only need to do the filter stage once (O(n)), which makes each subsequent selection constant time.
Where space matters, where the array changes regularly, or where the method is only called once, my other answer is better (as it is constant space and linear time).
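The filter-then-pick approach might be sketched as follows (the class and field names here are mine, not from the original answer):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class FilteredRandom {
    private final List<Integer> nonNullIndices = new ArrayList<>();
    private final Random rng = new Random();

    // O(n) filter step, done once (assuming the array does not change afterwards).
    public FilteredRandom(Object[] a) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] != null) {
                nonNullIndices.add(i);
            }
        }
    }

    // O(1) selection; returns -1 if every element is null.
    public int randomElement() {
        if (nonNullIndices.isEmpty()) {
            return -1;
        }
        return nonNullIndices.get(rng.nextInt(nonNullIndices.size()));
    }
}
```

Because every non-null index appears exactly once in the filtered list, this version is also unbiased, unlike the linear-search version below when elements are grouped.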
If you take 2^(n-1) coins from bucket n, every possible case will have a unique total weight.
For example, with 5 buckets (genuine coins weighing 1.0 g, fakes 0.9 g), if the second, third and fourth buckets are fake, the sample will weigh 1.0 + 1.8 + 3.6 + 7.2 + 16.0 = 29.6 grams. This is the only combination with that weight, so you can identify exactly which buckets are fake.
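Decoding the measured weight back into the set of fake buckets amounts to reading the weight deficit as a bitmask: each fake bucket n removes 2^(n-1) × 0.1 g from the all-genuine total. A sketch, assuming the 1.0 g / 0.9 g weights from the example (class and method names are mine):

```java
public class FakeBuckets {
    // Assumed setup: genuine coins weigh 1.0 g, fakes 0.9 g,
    // and 2^(n-1) coins are taken from bucket n (1-indexed).
    public static boolean[] fakeBuckets(double measuredGrams, int buckets) {
        // Weight if every coin were genuine: 1 + 2 + ... + 2^(buckets-1) grams.
        double fullWeight = (1L << buckets) - 1;
        // Each fake bucket n contributes a deficit of 2^(n-1) tenths of a gram,
        // so the rounded deficit in 0.1 g units is a bitmask of the fake buckets.
        long deficitUnits = Math.round((fullWeight - measuredGrams) / 0.1);
        boolean[] fake = new boolean[buckets + 1]; // index 0 unused
        for (int n = 1; n <= buckets; n++) {
            fake[n] = (deficitUnits & (1L << (n - 1))) != 0;
        }
        return fake;
    }
}
```

With the example's 29.6 g sample, the deficit is 31.0 - 29.6 = 1.4 g, i.e. 14 units, whose binary form 01110 marks buckets 2, 3 and 4 as fake.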
A simple (and quick, O(n)) answer is to choose a random start position and then linearly search for a non-null element.
// Requires: import java.util.Random;
public static int randomElement(Object[] a, int start, int end) {
    int offset = new Random().nextInt(end - start);
    for (int counter = 0; counter < end - start; counter++) {
        // i is start + counter + offset, wrapped so that start <= i < end
        int i = start + ((counter + offset) % (end - start));
        if (a[i] != null) {
            return i;
        }
    }
    // Return -1 if all elements are null
    return -1;
}
This will be fine if the non-null elements are evenly distributed, but if they are grouped, the element at the start of a group will be returned more often. For example, in the array [null, null, null, foo, bar], foo is reached from four of the five possible offsets, so it is returned four times out of five, while bar is returned only once.
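A quick simulation makes the bias concrete. The harness below inlines the same wrap-around search; the class name, seed and trial count are mine:

```java
import java.util.Random;

public class BiasDemo {
    // Same wrap-around linear search as randomElement above,
    // with the Random passed in so the demo is repeatable.
    static int randomElement(Object[] a, int start, int end, Random rng) {
        int offset = rng.nextInt(end - start);
        for (int counter = 0; counter < end - start; counter++) {
            int i = start + ((counter + offset) % (end - start));
            if (a[i] != null) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        Object[] a = {null, null, null, "foo", "bar"};
        int[] hits = new int[a.length];
        Random rng = new Random(42);
        for (int t = 0; t < 100_000; t++) {
            hits[randomElement(a, 0, a.length, rng)]++;
        }
        // foo (index 3) is reached from offsets 0..3, bar (index 4) only from
        // offset 4, so the counts should come out in roughly a 4:1 ratio.
        System.out.println("foo: " + hits[3] + ", bar: " + hits[4]);
    }
}
```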
- Jay March 15, 2014
I agree. I don't think better than O(n) is possible if there are duplicates.
- Jay March 20, 2014
If you ignore duplicates, you can do O(log n). Using a modified binary search, you can have a system that is logarithmic on average but linear in the worst case.