dev.cpu
What's your definition of one's complement? "One's complement" can mean either a binary representation of integers or the bitwise complement operator. In any case, this answer assumes an arbitrary definition of one's complement, and the question is ambiguous too.
- dev.cpu February 01, 2013

All solutions posted so far have time complexity O(n*k) at best.
It's possible to achieve an O(n*log k) solution by combining a min-heap with a queue. We need the heap to extract the min element in O(log k) time, and we need the queue to keep track of the oldest element (the one that has to be removed from the heap).
The problem with a heap alone is that when an element falls out of the window, we need O(k) time to find it in the heap and remove it. The idea is to keep a queue inside the heap: each heap node has a next pointer to the next element in window order, and we also keep head and tail pointers into that queue.
Now when the window slides, the node pointed to by head falls out of it, so we can find it in constant time, replace its value with the new element that enters the window, and restore the heap property (an O(log k) operation). Thus, the total time complexity is O(n*log k).
Here is the C++ code (sorry if it's too long, but I had to implement a heap myself since the C++ standard library does not provide an O(log k) heapify on an arbitrary element):
#include <iostream>
#include <vector>
#include <cstdlib>
#include <algorithm>
#include <cassert>
using namespace std;

template<class T>
struct Node {
    Node *next;    // next element in window order (queue link)
    size_t index;  // current position of this node in the heap array
    T value;
    Node(const T& value, size_t index = 0) : next(NULL), index(index), value(value) {}
};

// Swap two heap slots, keeping each node's stored index accurate.
template<class T>
void swap(Node<T> *&a, Node<T> *&b)
{
    std::swap(a->index, b->index);
    std::swap(a, b);
}

template<class T>
bool max_node(Node<T> *a, Node<T> *b)
{
    return a->value < b->value;
}

template<class T>
bool min_node(Node<T> *a, Node<T> *b)
{
    return a->value > b->value;
}

// Sift the node at pos down until the heap property holds below it.
template<class It, class Comp>
void heapify(It begin, It end, It pos, Comp comparator)
{
    It left = begin + 2 * distance(begin, pos) + 1;
    It right = begin + 2 * distance(begin, pos) + 2;
    It largest = pos;
    if (left < end && comparator(*pos, *left)) {
        largest = left;
    }
    if (right < end && comparator(*largest, *right)) {
        largest = right;
    }
    if (largest != pos) {
        swap(*pos, *largest);
        heapify(begin, end, largest, comparator);
    }
}

// Sift the node at pos up while it beats its parent. This is needed when a
// node's value is replaced by a smaller one: heapify alone only sifts down.
template<class It, class Comp>
void sift_up(It begin, It pos, Comp comparator)
{
    while (pos != begin) {
        It parent = begin + (distance(begin, pos) - 1) / 2;
        if (!comparator(*parent, *pos))
            break;
        swap(*parent, *pos);
        pos = parent;
    }
}

// Count down with an index to avoid decrementing an iterator before begin.
template<class It, class Comp>
void build_heap(It begin, It end, Comp comparator)
{
    for (size_t i = distance(begin, end) / 2 + 1; i-- > 0; ) {
        heapify(begin, end, begin + i, comparator);
    }
}

template<class T>
void min_sliding_window(vector<T> vals, size_t k)
{
    Node<T> *head, *tail;
    vector<Node<T>*> heap;
    heap.push_back(new Node<T>(vals[0], 0));
    for (size_t i = 1; i < k; ++i) {
        Node<T> *tmp = new Node<T>(vals[i], i);
        heap.back()->next = tmp;
        heap.push_back(tmp);
    }
    head = heap[0];
    tail = heap[heap.size() - 1];
    build_heap(heap.begin(), heap.end(), min_node<T>);
    for (size_t i = k; i < vals.size(); ++i) {
        cout << heap[0]->value << " ";
        assert(head->next);
        // Recycle the node that falls out of the window as the new tail.
        tail->next = head;
        tail = tail->next;
        head = head->next;
        tail->value = vals[i];
        // The new value may need to move either up or down the heap.
        sift_up(heap.begin(), heap.begin() + tail->index, min_node<T>);
        heapify(heap.begin(), heap.end(), heap.begin() + tail->index, min_node<T>);
    }
    cout << heap[0]->value << endl; // min of the last window
    for (size_t i = 0; i < heap.size(); ++i) {
        delete heap[i];
    }
}

int main()
{
    size_t k = 4;
    size_t n = 20;
    vector<int> vals;
    for (size_t i = 0; i < n; ++i) {
        int v = rand() % 100;
        vals.push_back(v);
        cout << v << " ";
    }
    cout << endl;
    min_sliding_window(vals, k);
}
Use tail recursion. Plain recursion is inefficient and will most likely blow up the stack. The following code allows tail-call optimization (with -O2, at least in gcc); try it, it won't blow up.
#include <stdio.h>
#include <stdlib.h>

int tailSum(int *buf, int n, int sum) {
    if (n == 0)
        return sum;
    return tailSum(buf + 1, n - 1, sum + buf[0]);
}

int main() {
    int i;
    int *buf = (int*)malloc(sizeof(int) * 1000000);
    for (i = 0; i < 1000000; ++i)
        buf[i] = 1; /* initialize buf contents */
    printf("%d\n", tailSum(buf, 1000000, 0)); /* prints 1000000 */
    free(buf);
    return 0;
}
Bubble sort, as we know, has O(n^2) worst-case performance and O(n) best case (when the data are already sorted).
Consider the following extreme scenarios:
a) In the first run the data were already sorted, so bubble sort ran in O(n) time; that is, it did 10,000 operations in 100 seconds, or 100 ops per second. Now assume that in the second run the data are sorted in reverse order, so bubble sort runs in O(n^2) time. At 100 ops per second it can do 5,000 ops in 50 seconds, enough only to sort sqrt(5000) ~= 70 elements.
b) In the first run the data were sorted in reverse order, so bubble sort took O(n^2) time; that means 10,000^2 = 100,000,000 ops, or 1,000,000 ops per second. Assume the data are sorted in the second run: now bubble sort, in 50 seconds, can process 50,000,000 elements.
Don't pay attention to the exact numbers; I'm talking about orders of magnitude, and it varies a lot.
No, it's not.
If the characters a to z are 8-bit ASCII values, there are at most 2^8 = 256 possible values for the xor result. On the other hand, for each row/column there are choose(51,26) possible combinations with repetition (multisets of 26 letters), which is a far greater number.
Therefore, by the pigeonhole principle, there are at least two different combinations that give the same xor result.
6 is not undefined.
The comma operator is a sequence point, which means the order of the operations inside (++i, i++, i) is well defined. That is, the above expression can be translated to something like this, which is well defined:
++i; i++; i = i;
See this answer (it contains examples similar to 1 and 6):
stackoverflow.com/a/4176333
- dev.cpu March 24, 2013