How does the OS scheduler regain control of CPU?

Tags: Multithreading, Operating System, Multiprocessing, Scheduling, Multitasking

Multithreading Problem Overview


I recently started to learn how the CPU and the operating system works, and I am a bit confused about the operation of a single-CPU machine with an operating system that provides multitasking.

Supposing my machine has a single CPU, this would mean that, at any given time, only one process could be running.

Now, I can only assume that the scheduler used by the operating system to control the access to the precious CPU time is also a process.

Thus, in this machine, either the user process or the scheduling system process is running at any given point in time, but not both.

So here's a question:

Once the scheduler gives up control of the CPU to another process, how can it regain CPU time to run itself again to do its scheduling work? I mean, if any given process currently running does not yield the CPU, how could the scheduler itself ever run again and ensure proper multitasking?

So far, I had been thinking that if the user process requested an I/O operation through a system call, then within that system call we could ensure the scheduler is allocated some CPU time again. But I am not even sure whether it works this way.

On the other hand, if the user process in question were inherently CPU-bound, then, from this point of view, it could run forever, never letting any other process, not even the scheduler, run again.

Supposing time-sliced scheduling, I have no idea how the scheduler could slice the time for the execution of another process when it is not even running itself.

I would really appreciate any insight or references that you can provide in this regard.

Multithreading Solutions


Solution 1 - Multithreading

The OS sets up a hardware timer (a programmable interval timer, or PIT) that generates an interrupt every N milliseconds. That interrupt is delivered to the kernel, and user code is interrupted.
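This behaviour can be mimicked in user space. The sketch below is only an analogy, not the real PIT mechanism: a POSIX interval timer (via Python's `signal` module, on a Unix system) plays the role of the hardware timer, and the handler stands in for the kernel's timer-interrupt service routine. The key point it demonstrates is that a CPU-bound loop which never yields is still interrupted periodically.

```python
import signal

ticks = 0

def timer_handler(signum, frame):
    """Stands in for the kernel's timer interrupt handler: it runs
    asynchronously, preempting whatever code was executing."""
    global ticks
    ticks += 1

signal.signal(signal.SIGALRM, timer_handler)
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)  # fire every 10 ms

# A CPU-bound loop that never yields voluntarily; the timer still
# interrupts it, just as the PIT interrupts a user process.
while ticks < 5:
    pass

signal.setitimer(signal.ITIMER_REAL, 0, 0)  # cancel the timer
print(ticks)
```

In a real kernel, the handler would not just count ticks; it could invoke the scheduler and switch to a different process before returning.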

It works like any other hardware interrupt. For example, your disk will force a switch to the kernel when it has completed an IO operation.

Solution 2 - Multithreading

Google "interrupts". Interrupts are at the centre of multithreading, preemptive kernels like Linux/Windows. With no interrupts, the OS will never do anything.

While investigating/learning, try to ignore any explanations that mention "timer interrupt", "round-robin" and "time-slice", or "quantum" in the first paragraph – they are dangerously misleading, if not actually wrong.

Interrupts, in OS terms, come in two flavours:

  • Hardware interrupts – those initiated by an actual hardware signal from a peripheral device. These can happen at (nearly) any time and switch execution from whatever thread might be running to code in a driver.

  • Software interrupts – those initiated by OS calls from currently running threads.

Either interrupt may request the scheduler to make threads that were waiting ready/running or cause threads that were waiting/running to be preempted.

The most important interrupts are the hardware interrupts from peripheral drivers – those that make ready the threads that were waiting on IO from disks, NICs, mice, keyboards, USB, etc. The overriding reason for using preemptive kernels, and for accepting all the attendant problems of locking, synchronization, signaling, etc., is that such systems have very good IO performance: hardware peripherals can rapidly make ready the threads that were waiting for data from that hardware, without any latency caused by threads that do not yield or by waiting for a periodic timer reschedule.
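A toy model of that wakeup path, with all thread and device names invented for illustration: threads block waiting for a device, and a hardware interrupt from that device moves the waiting thread straight back onto the ready queue, with no timer tick involved at all.

```python
from collections import deque

ready = deque(["editor"])           # threads runnable right now
waiting = {"disk": "db_writer",     # threads blocked, keyed by device
           "nic": "web_server"}

def hardware_interrupt(device):
    """Driver-side handler: the awaited IO finished, so make the
    waiting thread ready immediately."""
    thread = waiting.pop(device, None)
    if thread is not None:
        ready.append(thread)

hardware_interrupt("nic")   # a packet arrives: web_server is ready at once
print(list(ready))          # ['editor', 'web_server']
```

This is why the answer stresses device interrupts over the timer: the latency from "data arrived" to "thread is ready" is one interrupt, not up to a whole time slice.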

The hardware timer interrupt that causes periodic scheduling runs is important because many system calls have timeouts in case, say, a response from a peripheral takes longer than it should.

On multicore systems the OS has an interprocessor driver that can cause a hardware interrupt on other cores, allowing the OS to interrupt/schedule/dispatch threads onto multiple cores.

On seriously overloaded boxes, or those running CPU-intensive apps (a small minority), the OS can use the periodic timer interrupts, and the resulting scheduling, to cycle through a set of ready threads that is larger than the number of available cores, and allow each a share of available CPU resources. On most systems this happens rarely and is of little importance.
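For that overloaded case, the cycling the paragraph describes can be sketched as a toy round-robin over a single core (thread names invented): on each timer interrupt the running thread is preempted and the next ready thread is dispatched.

```python
from collections import deque

# More ready threads than cores (here: one core, three threads).
ready = deque(["A", "B", "C"])
timeline = []

for tick in range(6):            # six timer interrupts
    running = ready.popleft()    # dispatch the head of the ready queue
    timeline.append(running)     # ...it runs for one time slice...
    ready.append(running)        # preempted, back to the tail

print(timeline)  # ['A', 'B', 'C', 'A', 'B', 'C']
```

Each thread gets an equal share of the CPU, which is exactly the "cycle through a set of ready threads" behaviour described above – useful when overloaded, but not the common case on most systems.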

Every time I see "quantum", "give up the remainder of their time-slice", "round-robin" and similar, I just cringe...

Solution 3 - Multithreading

To complement @usr's answer, quoting from Understanding the Linux Kernel:

The schedule( ) Function

> schedule( ) implements the scheduler. Its objective is to find a process in the runqueue list and then assign the CPU to it. It is invoked, directly or in a lazy way, by several kernel routines. [...]

Lazy invocation

> The scheduler can also be invoked in a lazy way by setting the need_resched field of current [process] to 1. Since a check on the value of this field is always made before resuming the execution of a User Mode process (see the section "Returning from Interrupts and Exceptions" in Chapter 4), schedule( ) will definitely be invoked at some close future time.
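The "lazy" pattern the book describes can be sketched as follows (the names loosely mirror the quoted kernel fields, but this is an illustration, not kernel code): the interrupt path only sets a flag, and the actual schedule() call happens at the checkpoint before returning to user mode.

```python
current = {"name": "user_proc", "need_resched": 0}
log = []

def schedule():
    log.append("schedule() ran")

def timer_interrupt():
    # Cheap: just mark that a reschedule is needed; don't switch here.
    current["need_resched"] = 1

def return_to_user_mode():
    # The check always made before resuming a User Mode process.
    if current["need_resched"]:
        current["need_resched"] = 0
        schedule()

timer_interrupt()
return_to_user_mode()
print(log)  # ['schedule() ran']
```

This answers the original question directly: the scheduler is not a process that must be "given" the CPU – it is kernel code that runs on the back of every interrupt's return path.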

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
|---|---|---|
| Question | Edwin Dalorzo | View Question on Stackoverflow |
| Solution 1 - Multithreading | usr | View Answer on Stackoverflow |
| Solution 2 - Multithreading | Martin James | View Answer on Stackoverflow |
| Solution 3 - Multithreading | Tudor | View Answer on Stackoverflow |