
What is the difference between concurrency and parallelism? What is the difference between a monolithic kernel and a microkernel? What is the difference between a time-sharing system and a real-time system? What is the difference between static linking and dynamic linking? What are the stages of compilation?

What is the difference between concurrency and parallelism?

Concurrency means that multiple tasks make progress within a period of time, but at any given instant only one task is actually executing. A single-core processor can achieve concurrency. For example, with two processes A and B, A runs for a time slice, then the CPU switches to B; B runs for a time slice, then the CPU switches back to A. Because the switching is fast enough, macroscopically it appears that multiple programs are running at the same time.

Parallelism means that multiple tasks are genuinely executing at the same instant. This requires a multi-core processor: at the micro level, multiple instructions execute simultaneously, with different programs running on different cores. Here multiple processes are physically running at the same time.
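The distinction can be sketched in a POSIX shell: two background jobs run concurrently, and on a multi-core machine the kernel is free to schedule them in parallel on different cores (the function and file names below are made up for the sketch).

```shell
# A toy task that does three steps of "work".
task() {
  for i in 1 2 3; do
    echo "task $1 step $i"
  done
}

task A > task_a.log &   # launch task A as a background job
task B > task_b.log &   # launch task B as a background job
wait                    # block until both jobs have finished
cat task_a.log task_b.log
```

On a single core the two jobs would be interleaved by the scheduler (concurrency); on multiple cores they can truly run at the same time (parallelism). Either way, both complete all of their steps.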

 

What is the difference between a monolithic kernel and a microkernel?

A monolithic kernel puts all operating-system functionality into the kernel, including scheduling, the file system, networking, device drivers, and memory management, forming one tightly coupled whole. Its advantage is high efficiency; its drawbacks are that bugs are hard to locate and extensibility is poor: every time a new feature is added, the new code must be recompiled together with the existing kernel code.

A microkernel is different from a monolithic kernel: it keeps only the core functionality in the kernel, including IPC, address-space management, and basic scheduling, all of which run in kernel mode. Other services run in user space as modules invoked through the kernel. A microkernel is easier to maintain and extend, but it may be less efficient because it must switch frequently between kernel mode and user mode.

 

What is the difference between a time-sharing system and a real-time system?

A time-sharing system divides CPU time into short time slices and allocates them to multiple jobs in turn. Its advantage is that it guarantees a sufficiently fast response time for the interactive operations of multiple users while effectively improving resource utilization.

A real-time system must process and respond to external input within a specified time (a deadline). Its advantages are timely, predictable processing and responses, high reliability, and safety.

General-purpose computers usually use time sharing, that is, sharing the CPU among multiple processes/users to achieve multitasking. Scheduling among users/processes need not be especially precise; if a process is blocked on a lock, more time can simply be allocated to it later. A real-time operating system is different: software and hardware must comply with strict time limits, and a process that exceeds its deadline may be terminated outright. In such an operating system, every lock requires careful consideration.

 

What is the difference between static linking and dynamic linking?

Static linking means that at build time the compiler and linker copy the static library into the application, producing object files and an executable that can run independently. A static library is generally a collection of external functions and variables.

Static libraries are convenient, but even if we only want one function from a library, the library's entire contents get linked in. A more modern approach is to use shared libraries, which avoids duplicating the same static library code across many executables.

Dynamic linking can be performed when the program is first loaded, or deferred until the program starts executing; it is done by the dynamic linker. For example, the standard C library (libc.so) is usually dynamically linked, so all programs can share the same copy of the library instead of each packaging its own.

 

What are the stages of compilation?

Preprocessing stage: process preprocessor directives beginning with #;

Compilation stage: translate the preprocessed source into an assembly file;

Assembly stage: translate the assembly file into a relocatable object file;

Linking stage: combine the relocatable object file with separately precompiled object files such as printf.o to obtain the final executable object file.

 
