Symantec Interview Question
Software Engineer / Developer
A context switch occurs in only three situations (exception, interrupt, and system call). It looks like we could write a system call to measure this. I'm just thinking out loud here. Any comments?
It depends on the number of processes or threads running concurrently and the nature of the processes.
The context switch time is usually measured as the time interval elapsed between a thread invoking a yield operation, and the next thread starting to execute.
In applications like a media player, context switching occurs more often because the player has to decode the stream and play back the video at the same time, so it involves many context switches.
An application like the Apache web server also involves lots of context switches, because of the number of concurrent independent processes running on the server, and a good scheduler is essential here.
This is straight from Wikipedia: a context switch is the computing process of storing and restoring the state (context) of a CPU so that multiple processes can share a single CPU resource.
Here is the article. http://en.wikipedia.org/wiki/Context_switch
The time interval between a thread yielding and another thread taking over is the context switch. But how do we measure this interval? Can someone provide an example? I was thinking of taking a timestamp just before starting a new thread, taking a timestamp again as the first thing in the new thread, and taking the difference. Would that be the right approach?
That's the idea; you can use rdtsc() to read the time-stamp counter. The problem is that there might be other (kernel) threads running, and sched_yield() might schedule one of them instead. That's why I think it's always an approximation.
To measure the time spent on context switching, the user needs to know two things:
1- time spent per context switch and 2- total number of context switches.
The time spent per context switch is typically processor dependent; it is the time taken to store the complete register set and restore it back.
The total number of context switches on the system can be obtained from /proc/stat under the ctxt tag.
Is it possible to do this with thread serialization?
That requires deep knowledge of how thread context switching is done. If it is just a matter of moving the thread off the run list, with everything staying in memory, there may be nothing you can do without hooking into the kernel. But if certain things happen, such as a volatile variable being touched or an object being externalized/serialized, then there is something you can do.
Assume there are only two processes, P1 and P2.
P1 is executing and P2 is waiting for execution. At some point, the operating system must swap P1 and P2, let’s assume it happens at the nth instruction of P1. If t(x, k) indicates the timestamp in microseconds of the kth instruction of process x, then the context switch would take t(2, 1) – t(1, n).
Another issue is that swapping is governed by the scheduling algorithm of the operating system and there may be many kernel-level threads that are also doing context switches. Other processes could be contending for the CPU or the kernel handling interrupts. The user does not have any control over these extraneous context switches. For instance, if at time t(1, n) the kernel decides to handle an interrupt, then the context switch time would be overstated.
In order to avoid these obstacles, we must construct an environment such that after P1 executes, the task scheduler immediately selects P2 to run. This may be accomplished by constructing a data channel, such as a pipe between P1 and P2.
That is, let's allow P1 to be the initial sender and P2 the receiver. Initially, P2 is blocked (sleeping) as it awaits the data token. When P1 executes, it delivers the data token over the data channel to P2 and immediately attempts to read the response token. A context switch results, and the task scheduler must select another process to run. Since P2 is now in a ready-to-run state, it is a desirable candidate for the task scheduler to select for execution. When P2 runs, the roles of P1 and P2 are swapped: P2 is now acting as the sender and P1 as the blocked receiver.
1. P2 blocks awaiting data from P1.
2. P1 marks the starting time.
3. P1 sends a token to P2.
4. P1 attempts to read a response token from P2. This induces a context switch.
5. P2 is scheduled and receives the token.
6. P2 sends a response token to P1.
7. P2 attempts to read a response token from P1. This induces a context switch.
8. P1 is scheduled and receives the token.
9. P1 marks the end time.
The key is that the delivery of a data token induces a context switch. Let Td and Tr be the time it takes to deliver and receive a data token, respectively, and let Tc be the amount of time spent in a context switch. At step 2, P1 records the timestamp of the delivery of the token, and at step 9, it records the timestamp of the response. The amount of time elapsed, T, between these events, may be expressed by:
T = 2 * (Td + Tc + Tr)
Credits: GeeksforGeeks
- sk_seeker April 25, 2009
I will add one thing about this. A context switch has three parts:
+ picking a new thread/process to run: this means looking at the scheduler queues to pick the best process to run; if there is one better than the current one, pick it
+ saving the context of the current thread/process
+ restoring the context of the newly picked process
One could save the timer-tick value before the first step and read it again after the last step; the difference gives the time for one context switch.
The interesting point to note is that this value has to be stored in global memory. If the value is stored in the u-area or any per-thread storage, then after the context switch that address is no longer accessible.
The other interesting point is the variable part of the context switch. Depending on how many scheduler queues need to be checked, the time to search for and pick the next best process can vary. And if the memory location we save to and restore from is NUMA memory, the save and restore time can also vary.