waiging.lau
As the author of the API, we ensure there are no memory leaks by
1. making object ownership clear.
2. protecting heap allocations with smart pointers.
3. having output arguments provided by the caller, or using shared_ptr.
As a user of the API, we can run some tests.
1. Use tools with memory-leak detection, e.g. Valgrind.
2. Limit the program's heap memory and try to amplify a possible leak by repeatedly calling the API.
Great answer. Also, accessing memory on the stack tends to be faster because of its high memory locality.
- waiging.lau March 31, 2013

Starting from intuition:
Sol1: For each element a in A, insert a into B. This would be O(n*(n+m)).
Sol2: Make room (n empty cells) at the head of B, then Merge(A, B, B): O(n+m + n+m).
Sol3: Do it the other way around from Sol2: Merge(A, B, B) starting from the ends.
I think Sol3 is the answer.
- waiging.lau February 21, 2013

Basically, as its name implies, encapsulation means
1) grouping related data and subroutines together.
2) hiding irrelevant information from users.
3) exposing necessary interfaces to users.
For example, a class is one construct that 1) groups related data and subroutines into objects, and 2-3) hides implementations and exposes interfaces through access control (visibility of its members and of its base-class members).
Besides, in the old days, the module was the construct that provided encapsulation in non-OOP languages. In C, a compilation unit is the body of a module: it 1) groups data and subroutines, and 2-3) hides implementation details in the body and exports interfaces via its header.
One addition to the comments above.
1. Cache
Consider a big array long ar[100][100].
a) for (int i = 0; i < 100; ++i)
       for (int j = 0; j < 100; ++j)
           use(ar[i][j]);
b) for (int j = 0; j < 100; ++j)
       for (int i = 0; i < 100; ++i)
           use(ar[i][j]);
a) is better and faster, most of the time, than b), because a)'s traversal is sequential and uses the cache well, while b) makes big strides and causes cache misses. This also suggests using vectors as the default container. As Stroustrup mentioned at GoingNative 2012, benchmarks show that vectors run faster than lists even under heavy insertion and deletion.
1. Reference counting for each object: deletion is triggered when the counter drops to 0.
2. Periodically check all heap objects: deletion is triggered for any object to which no reference points.
A naive way to implement 2) through customization:
a) customize or overload the new operator so that all heap addresses in use are recorded in @HeapObjects.
b) customize a reference/pointer class so that all live references are recorded in @References.
c) delete the set of objects (HeapObjects - References).
The trigger for 2) is debatable. Besides periodic checking, some designs set an upper bound on heap usage, etc. (not much more off the top of my head).
1. I think vtables belong to classes, not to objects.
2. There is 1 vtable for Base, since it has virtual functions.
3. There is 1 vtable for Derived as well; its entries point to Derived's overrides where they exist and to Base's functions otherwise.
The objects themselves contain no vtables, only vptrs. All objects of the same class point to the same vtable. Vtables live as static data.
Agree. VM size is independent of physical memory size. Note, though, that VM does depend on the disk's swap size. In the problem, if the swap size is less than 100 MB, the program can't actually run, because the total physical backing storage is simply not enough.
- waiging.lau July 23, 2013