
What are the different states of a process? What are the process scheduling algorithms? What is preemptive scheduling? What is non-preemptive scheduling?

What are the different states of a process?

  • In the five-state model, a process is in one of five states: created (new), ready, running, blocked, or terminated.
  • Running: the process is executing on the CPU. On a single-processor system, at most one process is running at any time.
  • Ready: the process has obtained all the resources it needs except the CPU, and can run as soon as the CPU is assigned to it.
  • Blocked: the process is suspended while waiting for an event, such as a resource becoming available or an I/O operation completing. A blocked process cannot run even if the CPU is idle.

Running state → blocked state: the process must wait, typically for a peripheral, for a resource such as main memory, or for manual intervention.

Blocked state → ready state: the awaited event has occurred; the process can run again once it is allocated the processor.

Running state → ready state: the process surrenders the processor for external reasons rather than its own, for example its time slice is used up or a higher-priority process preempts the CPU.

Ready state → running state: the scheduler selects a process from the ready queue according to some policy and gives it the processor.
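The transitions above can be sketched as a small table of legal moves in the five-state model. This is an illustrative sketch only; the names (`State`, `TRANSITIONS`, `can_transition`) are my own, not any real kernel's API.

```python
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()

# Legal transitions of the five-state model described above.
TRANSITIONS = {
    State.CREATED: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.READY, State.BLOCKED, State.TERMINATED},
    State.BLOCKED: {State.READY},      # only back to ready, never straight to running
    State.TERMINATED: set(),
}

def can_transition(src: State, dst: State) -> bool:
    """Return True if the five-state model allows moving from src to dst."""
    return dst in TRANSITIONS[src]
```

Note that a blocked process cannot jump directly to running: when its event occurs it becomes ready, and only the scheduler can then dispatch it.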

 

What are the process scheduling algorithms?

First come, first served (FCFS)

A non-preemptive algorithm that schedules processes in the order their requests arrive.

It favors long jobs but penalizes short ones: a short job queued behind a long job must wait for the entire long job to finish, so its waiting time can be far longer than its own run time. It is also detrimental to I/O-bound processes, which must rejoin the back of the queue after every I/O operation.
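The effect on short jobs can be seen by computing per-job waiting times under FCFS. This is a minimal sketch assuming all jobs arrive at time 0; the function name is hypothetical.

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each job when run strictly in arrival order.

    burst_times: CPU bursts in arrival order; all jobs arrive at t=0.
    """
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # a job waits for everything queued before it
        elapsed += burst
    return waits
```

With a long job first, `fcfs_waiting_times([10, 1, 1])` gives waits of `[0, 10, 11]`; reversing the order to `[1, 1, 10]` gives `[0, 1, 2]`, which is why FCFS hurts short jobs stuck behind long ones.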

 

Shortest job first (SJF)

A non-preemptive algorithm that schedules jobs in ascending order of estimated run time.

Long jobs may starve while waiting for short jobs to complete: if short jobs keep arriving, a long job may never be scheduled.
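A minimal non-preemptive SJF sketch, again assuming all jobs arrive at t=0 and estimated run times are known exactly (the hard part in practice); the function name is my own.

```python
def sjf_schedule(jobs):
    """Non-preemptive shortest-job-first.

    jobs: {name: estimated_burst}, all arriving at t=0.
    Returns {name: finish_time} in scheduled order.
    """
    finish, t = {}, 0
    # Run jobs in ascending order of estimated burst time.
    for name, burst in sorted(jobs.items(), key=lambda kv: kv[1]):
        t += burst
        finish[name] = t
    return finish
```

For example, `sjf_schedule({"A": 6, "B": 2, "C": 4})` runs B, then C, then A, finishing them at times 2, 6, and 12 respectively.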

 

Shortest remaining time first (SRTF)

The preemptive version of shortest job first, which schedules by remaining run time. When a new job arrives, its total run time is compared with the remaining time of the current process; if the newcomer needs less time, the current process is preempted and the new one runs. Otherwise, the newcomer waits.
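A sketch of an SRTF simulation using a min-heap keyed on remaining time. Preemption is modeled by only running the current job until the next arrival, then re-inserting it if unfinished. The function name and event format are assumptions for illustration.

```python
import heapq

def srtf(arrivals):
    """Simulate shortest-remaining-time-first scheduling.

    arrivals: list of (arrival_time, name, burst).
    Returns {name: finish_time}.
    """
    events = sorted(arrivals)          # process arrivals in time order
    ready = []                         # min-heap of (remaining_time, name)
    finish, t, i = {}, 0, 0
    while i < len(events) or ready:
        if not ready:                  # CPU idle: jump to the next arrival
            t = max(t, events[i][0])
        while i < len(events) and events[i][0] <= t:
            at, name, burst = events[i]
            heapq.heappush(ready, (burst, name))
            i += 1
        rem, name = heapq.heappop(ready)   # job with least remaining time
        # Run only until completion or the next arrival, whichever is sooner;
        # a new arrival may preempt if its burst is shorter.
        next_arrival = events[i][0] if i < len(events) else float("inf")
        run = min(rem, next_arrival - t)
        t += run
        rem -= run
        if rem == 0:
            finish[name] = t
        else:
            heapq.heappush(ready, (rem, name))
    return finish
```

With arrivals `[(0, "A", 8), (1, "B", 4), (2, "C", 2)]`, A is preempted by B at t=1 and B by C at t=2; C finishes at 4, B at 7, and A last at 14.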

 

Round robin (time-slice rotation)

All ready processes are kept in a queue in FCFS order. On each scheduling decision, the CPU is given to the process at the head of the queue, which may run for one time slice. When the slice expires, a timer interrupt fires, the scheduler moves the process to the tail of the ready queue, and the CPU is given to the new head of the queue.

The efficiency of round robin depends heavily on the size of the time slice:

  • A context switch must save the state of the outgoing process and load that of the incoming one. If the time slice is too small, switches become so frequent that a large fraction of CPU time is spent on switching itself.
  • If the time slice is too long, responsiveness suffers and real-time behavior cannot be guaranteed.
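The queue-rotation mechanics above can be sketched with a deque (ignoring context-switch cost and assuming all jobs arrive at t=0; names are my own).

```python
from collections import deque

def round_robin(jobs, quantum):
    """Round-robin scheduling sketch.

    jobs: list of (name, burst) in FCFS order, all arriving at t=0.
    quantum: length of one time slice.
    Returns {name: finish_time}.
    """
    queue = deque(jobs)
    finish, t = {}, 0
    while queue:
        name, rem = queue.popleft()        # head of the ready queue gets the CPU
        run = min(rem, quantum)            # run for at most one time slice
        t += run
        rem -= run
        if rem == 0:
            finish[name] = t
        else:
            queue.append((name, rem))      # slice expired: back to the tail
    return finish
```

For example, `round_robin([("A", 3), ("B", 5)], 2)` interleaves A and B in two-unit slices, finishing A at t=5 and B at t=8.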

 

Priority scheduling

Assign each process a priority and always schedule the highest-priority ready process.

To prevent low-priority processes from waiting forever (starvation), the priority of a waiting process can be raised gradually over time. This technique is known as aging.
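Aging can be sketched as an effective priority that grows with waiting time, so a long-waiting low-priority job eventually beats a freshly arrived high-priority one. The function name, data shape, and linear aging rate are illustrative assumptions.

```python
def pick_next(waiting, now, aging_rate=1):
    """Pick the next process under priority scheduling with aging.

    waiting: {name: (base_priority, enqueue_time)}; higher priority wins.
    now: current time; aging_rate: priority gained per unit of waiting.
    """
    def effective(name):
        base, since = waiting[name]
        # Effective priority rises the longer a process has waited.
        return base + aging_rate * (now - since)
    return max(waiting, key=effective)
```

Early on, the high-priority job wins; but once the low-priority job has waited long enough, its aged effective priority overtakes and it finally gets scheduled.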

 

What is preemptive scheduling? What is non-preemptive scheduling?

In preemptive scheduling, the operating system may forcibly suspend the running process, and the scheduler reallocates the CPU to another ready process.

In non-preemptive scheduling, once the scheduler assigns the processor to a process, that process keeps it until it either completes or blocks on an event (such as I/O); only then is the processor given to another process.

 
