Sandy
Hmm, without using a temp variable it would definitely have to be done recursively.
Anyway, here is the solution with temp variables (in Python):
>>> total = 0
>>> num = 123
>>> while num:
...     digit = num % 10
...     total = total * 10 + digit
...     num = num // 10
...
>>> total
321
Unfortunately, even with recursion I had to use an extra variable (the accumulator). I wonder if that's allowed.
>>> def reverse(num, acc):
...     if num == 0:
...         return acc
...     return reverse(num // 10, acc * 10 + num % 10)
...
>>> reverse(123, 0)
321
You could even build a BST of unique keys and, whenever a duplicate is found, append it to a result array.
That way you save looping through the BST after creation.
That gives a running time of O(n lg n).
Using hashing instead makes it even better: O(n).
Since we need to go through all the elements in the array, we can't do better than O(n), so using a hash table is the best approach.
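As a sketch of the hash-based approach (the function name and the choice to report duplicates in first-repeat order are my assumptions, since the original question isn't quoted here):

```python
def find_duplicates(arr):
    # Hash-based duplicate detection: one O(n) pass with O(1) membership checks.
    seen = set()    # every element encountered so far
    dups = set()    # elements already recorded as duplicates
    result = []     # duplicates, in the order their first repeat appears
    for elem in arr:
        if elem in seen and elem not in dups:
            dups.add(elem)
            result.append(elem)
        seen.add(elem)
    return result

print(find_duplicates([1, 2, 3, 2, 1, 2]))  # → [2, 1]
```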
The running time is O(N) with hardly any extra space. The only trick here is to maintain each node's 1-based position ("height") in the node object and the length of the linked list in the linked-list instance.
Here is the Python implementation:
def swap_kth_elem(ip_list, k):
    # Swap the k-th node from the front with the k-th node from the end.
    # Each node stores its 1-based position in node.height, and the list
    # stores its size in ip_list.length.
    if k > ip_list.length:
        return "error: k is larger than the list length"
    final_pos = ip_list.length - k + 1
    keynode = finalnode = None
    node = ip_list.head
    while node is not None:
        if node.height == k:
            keynode = node
        if node.height == final_pos:
            finalnode = node
        node = node.next
    # If k and final_pos name the same middle node, the swap is a no-op.
    keynode.key, finalnode.key = finalnode.key, keynode.key
    return ip_list.print_list()
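To exercise the idea end to end, here is a self-contained sketch with minimal node and list classes (these class names and constructors are my assumptions; the code above relies on pre-existing list classes):

```python
class Node:
    def __init__(self, key, height):
        self.key = key
        self.height = height  # 1-based position in the list
        self.next = None

class LinkedList:
    def __init__(self, keys):
        self.head = None
        self.length = len(keys)
        prev = None
        for pos, key in enumerate(keys, start=1):
            node = Node(key, pos)
            if prev is None:
                self.head = node
            else:
                prev.next = node
            prev = node

    def to_list(self):
        out, node = [], self.head
        while node is not None:
            out.append(node.key)
            node = node.next
        return out

def swap_kth(ip_list, k):
    # Swap the k-th node from the front with the k-th from the end, O(N).
    if k > ip_list.length:
        raise ValueError("k exceeds list length")
    mirror = ip_list.length - k + 1
    first = second = None
    node = ip_list.head
    while node is not None:
        if node.height == k:
            first = node
        if node.height == mirror:
            second = node
        node = node.next
    first.key, second.key = second.key, first.key

ll = LinkedList([10, 20, 30, 40, 50])
swap_kth(ll, 2)
print(ll.to_list())  # → [10, 40, 30, 20, 50]
```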
Since you have to parse the whole log anyway, the average complexity of the program is Θ(N), where N is the number of records.
Here is a simple program in Python:
import datetime

def find_repeated_customer(log_path):
    # A repeat customer is one who visits again on a later day.
    customer_last_visit = {}
    repeat_customers = set()
    with open(log_path, "r") as file_obj:
        for line in file_obj:
            timestamp, customer_id, page_id = line.split(" : ")
            # assuming the timestamp looks like 2013-03-29 01:03:26.947000
            year, month, day = timestamp.split(" ")[0].split("-")
            current_visit = datetime.date(int(year), int(month), int(day))
            last_visit = customer_last_visit.get(customer_id)
            if last_visit is not None and (current_visit - last_visit).days >= 1:
                repeat_customers.add(customer_id)
            customer_last_visit[customer_id] = current_visit
    return repeat_customers
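To try the parsing logic without a log file on disk, the same per-line logic can be written over any iterable of lines (a sketch; the function name and the sample log lines are made up):

```python
import datetime

def find_repeat_customers_from_lines(lines):
    # Same idea as above, but over any iterable of "timestamp : customer : page" lines.
    last_seen = {}
    repeats = set()
    for line in lines:
        timestamp, customer_id, _page_id = line.strip().split(" : ")
        y, m, d = timestamp.split(" ")[0].split("-")
        visit = datetime.date(int(y), int(m), int(d))
        prev = last_seen.get(customer_id)
        if prev is not None and (visit - prev).days >= 1:
            repeats.add(customer_id)
        last_seen[customer_id] = visit
    return repeats

log = [
    "2013-03-29 01:03:26.947000 : alice : /home",
    "2013-03-29 09:10:00.000000 : bob : /home",
    "2013-03-30 11:00:00.000000 : alice : /cart",  # alice returns the next day
]
print(find_repeat_customers_from_lines(log))  # → {'alice'}
```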
Arrays are contiguous memory blocks, so indexing and searching are faster than in a linked list.
Linked lists are nodes pointing to other nodes, so to find the 200th node in a linked list of 10,000 nodes you have to traverse 200 nodes, whereas an array indexes it in O(1).
On the other hand, it is faster to add a node anywhere in a linked list, since there is no need to push the right half of the elements over by one place: it just involves changing the previous node's pointer and pointing the new node at the next node, which is constant time, O(1), once you are at the position.
The same goes for deleting.
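A minimal sketch of why insertion is O(1) once you hold a reference to the position (the class and function names are my own):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.next = None

def insert_after(node, key):
    # O(1): repoint two links; no later elements are shifted.
    new_node = Node(key)
    new_node.next = node.next
    node.next = new_node
    return new_node

# Build 1 -> 3, then splice 2 in after the head: 1 -> 2 -> 3.
head = Node(1)
insert_after(head, 3)
insert_after(head, 2)
```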
This can be done using heap sort, which has O(n lg n) running time.
The key is to build a min-heap here and keep shrinking the heap from the left, so the prefix of the array ends up sorted in increasing order.
The condition the min-heap needs to follow is that the value of each child is greater than or equal to its parent's. This can be considered the loop invariant, i.e. the condition that must be satisfied on every iteration in order to sort the sequence.
Here is the algorithm:
1) Create the min-heap - O(n).
2) For i ranging from 0 to n-1, call min_heapify on Array[i..n], which pushes the minimum value to the top; the next iteration then considers the heap from index i+1, thereby building the sorted array.
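For reference, here is a sketch of the standard in-place formulation, which mirrors the description above: it uses a max-heap with the sorted region growing from the right, rather than a min-heap with it growing from the left:

```python
def sift_down(arr, root, end):
    # Restore the max-heap property for the subtree rooted at `root`,
    # considering only indices < end.
    while True:
        child = 2 * root + 1
        if child >= end:
            return
        if child + 1 < end and arr[child + 1] > arr[child]:
            child += 1  # pick the larger child
        if arr[root] >= arr[child]:
            return
        arr[root], arr[child] = arr[child], arr[root]
        root = child

def heapsort(arr):
    n = len(arr)
    # Build the max-heap bottom-up: O(n).
    for i in range(n // 2 - 1, -1, -1):
        sift_down(arr, i, n)
    # Repeatedly move the max to the end and shrink the heap: O(n lg n).
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(arr, 0, end)
    return arr

print(heapsort([5, 1, 4, 2, 3]))  # → [1, 2, 3, 4, 5]
```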
Solution with running time O(n) in Python:

def find_max_num(arr):
    counter_dict = {}  # maintains the count of each element
    for elem in arr:
        counter_dict[elem] = counter_dict.get(elem, 0) + 1
        # once an element's count exceeds n/2, no other element can
        # occur more often, so we can return early
        if counter_dict[elem] > len(arr) // 2:
            return elem
    # no return was encountered during the loop: no majority element exists
    return -1
Why sort at O(n lg n) when you can loop once at O(n) and maintain a hash table where you can check for an element in O(1)?
- Sandy April 19, 2013