CPU Scheduling

26 important questions on CPU Scheduling

How is the maximum CPU utilisation obtained?

Through multiprogramming
-> the CPU switches from one process to another (so that the CPU does not sit idle waiting for one process to end before another starts)

-> an I/O request is a natural point at which the processor switches to another program

There are 2 types of burst cycles within the CPU - what are they?

1. CPU burst
- used to execute a process
2. I/O burst 
- when the CPU is waiting for the I/O request to complete

- process execution alternates between CPU bursts and I/O bursts

What does the short-term scheduler (CPU scheduler) do?

It selects from among the processes in the ready queue and allocates the CPU to one of them

When may CPU scheduling decisions take place?

1. Switches from running to waiting state (e.g. waiting for I/O request) (nonpreemptive)
2. Switches from running to ready state (preemptive)
3. Switches from waiting to ready (preemptive)
4. Terminates  (nonpreemptive)

Nonpreemptive -> once the CPU has been allocated to a process, the process will keep using the CPU until it releases it

Preemptive
- processes can be interrupted
- the CPU can be switched from one process to another
e.g. preempting a process while it is accessing shared data can leave that data inconsistent, so synchronisation is needed

What is dispatch latency?

Time it takes for the dispatcher to stop one process and start another running

What are the scheduling criteria?

  • CPU utilisation - keep the CPU as busy as possible
  • Throughput - number of processes that complete their execution per time unit
  • Turnaround time - amount of time to execute a particular process
  • Waiting time - amount of time a process has been waiting in the ready queue
  • Response time - amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)
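
As a small illustration of these criteria, a minimal Python sketch using the standard definitions; the arrival, completion and burst values are assumed, not taken from the notes:

    # Hypothetical single-process example using the standard definitions:
    #   turnaround time = completion time - arrival time
    #   waiting time    = turnaround time - total CPU burst time
    arrival, completion, burst = 0, 30, 24   # assumed values, in ms
    turnaround = completion - arrival        # 30 ms from submission to completion
    waiting = turnaround - burst             # 6 ms spent in the ready queue
    print(turnaround, waiting)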

What are the criteria for the optimum scheduling algorithm? Which one is typically used to compare algorithms?

  • max CPU utilisation - in practice around 40-90% (if it were constantly at 100% the CPU would run too hot and risk hardware failure)
  • max throughput
  • min turnaround time
  • min waiting time
  • min response time


Average waiting time is used to compare algorithms

What is the CPU burst?

The amount of time a process uses the processor before it is no longer ready (e.g. because it blocks for an I/O request or terminates)

i.e. one stretch of CPU execution, not the total time the process takes to complete

What is First-Come, First-Served (FCFS) Scheduling?

A scheduling algorithm in which the CPU is allocated to processes in the order in which they request it (arrive in the ready queue)

- non-preemptive, i.e. once a process begins executing it is not interrupted until it terminates or is stopped by an I/O request

What is the problem with the FCFS scheduling algorithm?

The average waiting time depends on the order in which processes arrive at the CPU

This leads to the convoy effect

What is the convoy effect?

If short processes arrive after long ones, they must wait behind the long ones, so the average waiting time is much longer (as in the sketch below)
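
A minimal Python sketch (burst times are hypothetical, all processes assumed to arrive at time 0) showing how the arrival order changes the FCFS average waiting time:

    # FCFS average waiting time: each process waits for everything queued before it.
    def fcfs_avg_wait(bursts):
        total_wait, elapsed = 0, 0
        for burst in bursts:
            total_wait += elapsed        # this process waits for all earlier arrivals
            elapsed += burst
        return total_wait / len(bursts)

    print(fcfs_avg_wait([24, 3, 3]))     # long job first  -> average wait 17.0 ms (convoy effect)
    print(fcfs_avg_wait([3, 3, 24]))     # short jobs first -> average wait 3.0 ms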

What is the Shortest-Job-First (SJF) scheduling algorithm?

- solves the problem with FCFS

  • Associates with each process the length of its next CPU burst
--> use these lengths to schedule the process with the shortest time

  • SJF is optimal - it gives minimum average waiting time for a given set of processes

The difficulty is knowing the length of the next CPU request before its execution

How does the SJF determine the length of the next CPU burst?

The SJF can only estimate the length of the next CPU burst

This can be done using exponential averaging
- takes the measured length of previous CPU bursts

IMPORTANT - the result is not 100% accurate, but it gives a good approximation of the next CPU burst
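
The usual exponential-averaging rule predicts the next burst from the last measured burst and the previous prediction. A minimal Python sketch; alpha = 0.5, the initial guess and the measured bursts are all assumed values:

    def predict_next_burst(t_last, tau_prev, alpha=0.5):    # alpha = 0.5 is an assumed value
        # tau_next = alpha * t_last + (1 - alpha) * tau_prev
        # t_last   = measured length of the most recent CPU burst
        # tau_prev = previous prediction
        return alpha * t_last + (1 - alpha) * tau_prev

    tau = 10.0                        # assumed initial guess for the first burst (ms)
    for measured in (6, 4, 6, 4):     # hypothetical measured burst lengths (ms)
        tau = predict_next_burst(measured, tau)
        print(tau)                    # 8.0, 6.0, 6.0, 5.0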

What is the preemptive version of the SJF algorithm?

The shortest-remaining-time-first

We add arrival times and preemption (i.e. the running process can be interrupted)

- if a newly arrived process has a (predicted) CPU burst shorter than the remaining time of the currently executing process, it preempts that process; otherwise the running process continues (see the sketch below)
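
A minimal shortest-remaining-time-first sketch in Python; the arrival and burst times are hypothetical, and for simplicity the known burst lengths are used instead of exponential-averaging estimates:

    def srtf_avg_wait(procs):                       # procs: list of (arrival, burst) in ms
        remaining = [b for _, b in procs]
        time, done, total_wait = 0, 0, 0
        while done < len(procs):
            ready = [i for i, (a, _) in enumerate(procs) if a <= time and remaining[i] > 0]
            if not ready:
                time += 1
                continue
            i = min(ready, key=lambda j: remaining[j])   # shortest remaining time wins
            remaining[i] -= 1                            # run the chosen process for 1 ms
            time += 1
            if remaining[i] == 0:                        # process finished
                done += 1
                total_wait += time - procs[i][0] - procs[i][1]   # waiting = turnaround - burst
        return total_wait / len(procs)

    print(srtf_avg_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))   # 6.5 ms with these assumed values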

What is the priority scheduling algorithm?

- A priority number is associated with each process
- The CPU is allocated to the process with the highest priority (in many systems the smallest priority number denotes the highest priority)
- Can run in preemptive and nonpreemptive modes

How does priority scheduling compare to SJF?

SJF is priority scheduling where priority is the inverse of predicted next CPU burst time

What is the problem with priority scheduling? What is the solution?

Starvation -> low priority processes may never execute

Solution:
- Ageing - as time progresses, gradually increase the priority of processes that have been waiting (sketched below)
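
A minimal Python sketch of nonpreemptive priority scheduling with ageing; the priorities and the ageing step are assumed, and the convention used here is that a smaller number means a higher priority:

    def priority_schedule(procs, age_step=1):       # procs: list of (priority, name)
        waiting = list(procs)
        order = []
        while waiting:
            waiting.sort()                           # best (smallest) priority value first
            _, name = waiting.pop(0)
            order.append(name)                       # this process gets the CPU next
            waiting = [(p - age_step, n) for p, n in waiting]   # age everyone still waiting
        return order

    # Hypothetical priorities; P2 (priority 1) runs first, P4 (priority 5) still runs eventually
    print(priority_schedule([(3, "P1"), (1, "P2"), (4, "P3"), (5, "P4"), (2, "P5")]))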

What is the round robin algorithm?

Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds
i.e. each process will get the same time quantum to execute

- after this time, the process is preempted and added to the end of the ready queue  

- A timer interrupts at the end of every quantum in order to schedule the next process (see the sketch below)
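
A minimal round-robin sketch in Python; the process names, burst times and the quantum q = 4 ms are hypothetical:

    from collections import deque

    def round_robin(bursts, q=4):                    # bursts: list of (name, burst in ms)
        queue = deque(bursts)
        timeline = []
        while queue:
            name, remaining = queue.popleft()
            run = min(q, remaining)
            timeline.append((name, run))             # the process runs for this slice
            if remaining > run:
                queue.append((name, remaining - run))    # preempted, back of the queue
        return timeline

    print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q=4))
    # [('P1', 4), ('P2', 3), ('P3', 3), ('P1', 4), ('P1', 4), ('P1', 4), ('P1', 4), ('P1', 4)]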

What is the problem with round robin?

- If the value of q is large, RR behaves like FIFO (FCFS)
- If q is small, it must still be large relative to the context-switch time, otherwise the overhead is too high
i.e. if q is small compared to the context-switch cost, a large share of CPU time goes to switching rather than useful work

How does the RR algorithm compare to SJF?

Typically, the RR algorithm gives a higher average turnaround time than SJF, but a better response time

What happens when the q is too small?

If q is too small, there will be too many context switches
- this will add up to produce a lot of overhead
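
A rough, back-of-the-envelope illustration in Python (the context-switch cost is an assumed value, not a measurement) of why q must be large relative to the context-switch time:

    switch_cost = 0.1           # ms per context switch (an assumed, illustrative value)
    for q in (10, 1, 0.1):      # candidate time quanta in ms
        overhead = switch_cost / (q + switch_cost)   # fraction of time lost to switching
        print(f"q = {q} ms -> about {overhead:.0%} of CPU time spent on context switches")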

What is the multilevel feedback queue?

Allows processes to move between various queues - ageing can be implemented in this way
- therefore the multilevel feedback queue fights against starvation

What is a disadvantage with multilevel feedback queue?

When it comes to implementation, it is quite complex because you need to decide which scheduling algorithm to use for each queue and what quantum value works best for each queue (see the sketch below)
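
As an illustration of the parameters that have to be chosen, a sketch in Python of one possible configuration; the policies and quantum values are assumed, mirroring a common textbook-style setup:

    # One possible multilevel feedback queue configuration (values assumed): new
    # processes enter the top queue; a process that uses up its whole quantum is
    # demoted, so CPU-bound jobs sink while short-burst (I/O-bound) jobs stay on top.
    queues = [
        {"level": 0, "policy": "RR",   "quantum_ms": 8},     # highest priority
        {"level": 1, "policy": "RR",   "quantum_ms": 16},
        {"level": 2, "policy": "FCFS", "quantum_ms": None},   # lowest priority, runs to completion
    ]
    for q in queues:
        print(q)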

How does scheduling happen in a multi-processor system?

CPU scheduling is more complex when multiple CPUs are available
- here, homogeneous (identical) processors within the multiprocessor are assumed

1. Asymmetric multiprocessing --> among all the processors, only one processor (the master) accesses the system data structures, alleviating the need for data sharing. The rest of the processors are slaves - only used for executing the processes

2. Symmetric multiprocessing (SMP) - this is the most common
- each processor is self-scheduling; processes may sit in a common ready queue, or each processor may have its own private queue of ready processes

What is the problem with SMP?

If the processors share a common ready queue, two processors may pick up the same process for execution at exactly the same time
- this is essentially the critical section problem
- access to the shared ready queue therefore has to be synchronised (see the sketch below)
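
A minimal Python sketch (threads stand in for CPUs, process names are hypothetical) of the synchronisation needed so that two processors never dequeue the same process from a shared ready queue:

    import threading
    from collections import deque

    ready_queue = deque(f"P{i}" for i in range(6))   # hypothetical ready processes
    lock = threading.Lock()
    picked = []                                      # (cpu_id, process) pairs, for inspection

    def cpu_worker(cpu_id):
        while True:
            with lock:                               # critical section around the shared queue
                if not ready_queue:
                    return
                proc = ready_queue.popleft()
            picked.append((cpu_id, proc))            # "execute" the process outside the lock

    workers = [threading.Thread(target=cpu_worker, args=(i,)) for i in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(picked)                                    # every process appears exactly once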

How is scheduling done in the Windows OS?

■  The scheduler is called the Dispatcher in Windows
■  Windows uses priority-based preemptive scheduling, i.e. each thread is given a number that defines its priority -> the highest-priority thread runs next
■  A thread runs until it (1) blocks, (2) uses up its time slice, or (3) is preempted by a higher-priority thread
■  32-level priority scheme, divided into a variable class (priorities 1-15) and a real-time class (priorities 16-31)
■  Priority 0 is reserved for the memory-management thread
■  There is a queue for each priority
■  If there is no runnable thread, the idle thread runs
