Process vs Thread

Process: An executing instance of a program is called a process.

Thread: a lightweight unit of execution within a process (a thread is a subset of a process).

Differences:

  • Threads share the address space of the process that created them; processes have their own address space (see the sketch after this list).
  • Threads have direct access to the data segment of their process; processes have their own copy of the data segment of the parent process.
  • Threads can directly communicate with other threads of their process; processes must use interprocess communication to communicate with sibling processes.
  • Threads have almost no overhead; processes have considerable overhead (e.g., a much more expensive context switch).
  • New threads are easily created; new processes require duplication of the parent process.
  • Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process; changes to the parent process do not affect child processes.
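A minimal C sketch of the address-space difference, assuming a POSIX system (compile with -lpthread); the variable and function names are illustrative. A thread created with pthread_create sees the parent's data directly, while a child created with fork only gets its own copy:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <pthread.h>

static int counter = 0;          /* lives in the process's data segment */

static void *thread_body(void *arg) {
    counter++;                   /* threads share the same counter */
    return NULL;
}

int main(void) {
    /* Thread: shares the parent's address space, so the change is visible. */
    pthread_t tid;
    pthread_create(&tid, NULL, thread_body, NULL);
    pthread_join(tid, NULL);
    printf("after thread: counter = %d\n", counter);   /* prints 1 */

    /* Process: fork() duplicates the address space, so the child's
       increment is invisible to the parent. */
    pid_t pid = fork();
    if (pid == 0) {              /* child */
        counter++;
        exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:   counter = %d\n", counter);    /* still 1 */
    return 0;
}
```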

Different types of threads

| User-level thread | Kernel-level thread |
| --- | --- |
| Implemented in user space by a thread library | Implemented by the OS |
| Not recognized by the OS | Recognized by the OS |
| Easy to implement | More complicated to implement |
| Context switch time is shorter | Context switch time is longer |
| Context switch requires no hardware support | Hardware support is needed |
| If one user-level thread performs a blocking operation, the entire process is blocked | If one kernel thread performs a blocking operation, other threads can continue execution |
| Examples: Java threads, POSIX threads | Examples: Windows, Solaris |

context switch

A context switch is the process of storing and restoring the state (more specifically, the execution context) of a process or thread so that execution can be resumed from the same point at a later time. This enables multiple processes to share a single CPU and is an essential feature of a multitasking operating system.

interrupt and system call

interrupt: an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention.

  • hardware interrupt
    • keyboard, mouse
  • software interrupt
    • exceptions, e.g., divide by zero
    • services from device drivers

system call: a system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on. This may include hardware-related services (for example, accessing a hard disk drive), creation and execution of new processes, and communication with integral kernel services such as process scheduling. System calls provide an essential interface between a process and the operating system.

When a process makes a system call, it triggers a trap (a software interrupt) and the processor switches to kernel mode.
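A small sketch, assuming Linux with glibc: both the write() wrapper and the raw syscall() interface trap into the kernel, which is exactly the user-mode-to-kernel-mode switch described above.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    /* The write() wrapper traps into the kernel (user mode -> kernel mode);
       the kernel copies the bytes to the file descriptor and returns. */
    write(STDOUT_FILENO, "hello via system call\n", 22);

    /* On Linux the same request can be issued through the raw syscall
       interface; SYS_write is the syscall number for write. */
    long n = syscall(SYS_write, STDOUT_FILENO, "hello again\n", 12);
    printf("syscall reported %ld bytes written\n", n);
    return 0;
}
```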

kernel mode
In Kernel mode, the executing code has complete and unrestricted access to the underlying hardware. It can execute any CPU instruction and reference any memory address. Kernel mode is generally reserved for the lowest-level, most trusted functions of the operating system. Crashes in kernel mode are catastrophic; they will halt the entire PC.

user mode
In User mode, the executing code has no ability to directly access hardware or reference memory. Code running in user mode must delegate to system APIs to access hardware or memory. Due to the protection afforded by this sort of isolation, crashes in user mode are always recoverable. Most of the code running on your computer will execute in user mode.

concurrent technologies

process

  • An executing instance of a program
  • Regions: text (code) segment, data segment, heap, and stack

  • Five states

    • created: by being loaded from a secondary storage device
    • waiting: it waits for the scheduler to do a so-called context switch and load the process into the processor.
    • running
    • blocked: If a process needs to wait for a resource (wait for user input or file to open, for example), it is assigned the “blocked” state.
    • terminated: Once the process finishes execution, or is terminated by the operating system, it is no longer needed.
  • three schedulers

    • Long-Term Scheduler (job scheduler): selects processes from the pool and loads them into memory for execution
    • Short-Term Scheduler (CPU scheduler): selects among the processes that are ready to execute
    • Medium-Term Scheduler (process swapping scheduler): can re-introduce a swapped-out process into memory so its execution can be continued
  • preemption & non-preemption

    • Preemption is the act of temporarily interrupting a task being carried out by a computer system, without requiring its cooperation, and with the intention of resuming the task at a later time
    • Non-preemption: once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
  • process schedule algorithms

    • FIFO or First Come, First Served (FCFS): jobs are executed on a first come, first served basis; it is a non-preemptive scheduling algorithm; easy to understand and implement; poor in performance, as the average wait time is high (see the sketch after this list).
    • Shortest Job First (SJF): best approach to minimize waiting time; easy to implement in batch systems where the required CPU time is known in advance; impossible to implement in interactive systems where the required CPU time is not known, since the scheduler must know in advance how much time each process will take.
    • Priority Based Scheduling: Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems; Each process is assigned a priority. Process with highest priority is to be executed first and so on; Processes with same priority are executed on first come first served basis; Priority can be decided based on memory requirements, time requirements or any other resource requirement.
    • Round Robin Scheduling: Round Robin is the preemptive process scheduling algorithm; Each process is provided a fix time to execute, it is called a quantum; Once a process is executed for a given time period, it is preempted and other process executes for a given time period; Context switching is used to save states of preempted processes
    • Multiple-Level Queues Scheduling: Multiple queues are maintained for processes with common characteristics; Each queue can have its own scheduling algorithms; Priorities are assigned to each queue.
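A tiny C sketch, using made-up burst times, of why FCFS performs poorly on average waiting time when a long job arrives first:

```c
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* hypothetical CPU bursts (ms) */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;              /* each job waits for all earlier bursts */
        wait += burst[i];
    }
    printf("FCFS average waiting time: %.1f ms\n", (double)total_wait / n);
    /* Prints 17.0 ms; serving the shortest jobs first (3, 3, 24) would
       give (0 + 3 + 6) / 3 = 3.0 ms. */
    return 0;
}
```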

critical region & resource

In concurrent programming, concurrent accesses to shared resources can lead to unexpected or erroneous behavior, so parts of the program where the shared resource is accessed are protected. This protected section is the critical section or critical region.

  • Entry section: check whether the critical section can be entered; if it can, proceed to the critical section, otherwise the process is blocked.
  • Critical section: perform the operations on the shared resource.
  • Exit section: clear the flag marking the critical section as occupied.
  • Remainder section: the rest of the process's code, unrelated to the critical section.

  • Solutions:

    • Semaphore: P (wait) and V (signal) operations (see the sketch after this list)
    • mutex
  • Semaphore
    • wait (P): decrements the value of the semaphore variable by 1. If the resulting value is negative, the process executing wait is blocked (i.e., added to the semaphore’s queue); otherwise, the process continues execution, having used a unit of the resource.
    • signal (V): increments the value of the semaphore variable by 1. After the increment, if the pre-increment value was negative (meaning there are processes waiting for the resource), it transfers a blocked process from the semaphore’s waiting queue to the ready queue.
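A minimal sketch of the entry/exit sections using a POSIX unnamed semaphore initialized to 1 (Linux assumed; compile with -lpthread); the shared counter and worker function are illustrative. Without the sem_wait/sem_post pair, the two threads would race on the counter.

```c
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t sem;
static long shared = 0;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);      /* P: entry section, blocks if sem == 0 */
        shared++;            /* critical section */
        sem_post(&sem);      /* V: exit section, wakes a waiter if any */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&sem, 0, 1);    /* unnamed semaphore, initial value 1 */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %ld\n", shared);   /* 200000, no lost updates */
    sem_destroy(&sem);
    return 0;
}
```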

deadlock

Two or more competing processes are each waiting for the other to release a resource, and none of them ever does.

  • four conditions

    • Mutual exclusion: The resources involved must be unshareable
    • Hold and wait: a process is currently holding at least one resource and requesting additional resources which are being held by other processes.
    • No preemption: a resource can be released only voluntarily by the process holding it.
    • Circular wait: each process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource (see the sketch after this list)
  • Solutions:

    • Ostrich algorithm (ignore the problem).
    • Allocate resources carefully so deadlock cannot occur (Banker's algorithm).
    • Detect deadlock and recover from it.
    • Break any one of the four conditions above.
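A small C sketch of the circular-wait condition: two threads acquire the same two mutexes in opposite order, so running this will hang. Taking the locks in one fixed global order (breaking circular wait) removes the deadlock. The lock and thread names are illustrative.

```c
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg) {
    pthread_mutex_lock(&lock_a);   /* holds A ... */
    sleep(1);
    pthread_mutex_lock(&lock_b);   /* ... and waits for B */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *t2(void *arg) {
    pthread_mutex_lock(&lock_b);   /* holds B ... */
    sleep(1);
    pthread_mutex_lock(&lock_a);   /* ... and waits for A -> deadlock */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);         /* never returns: both threads block forever */
    pthread_join(y, NULL);
    return 0;
}
```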

Inter-process communication

  • pipe, message queue (see the pipe sketch after this list)
  • mutex, semaphore, read-write lock
  • shared memory
  • RPC
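A minimal POSIX sketch of the first mechanism in the list: an anonymous pipe between a parent and its forked child. The message text is just an example.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    char buf[64];

    pipe(fds);                       /* fds[0] = read end, fds[1] = write end */
    pid_t pid = fork();
    if (pid == 0) {                  /* child: writes into the pipe */
        close(fds[0]);
        const char *msg = "hello from child";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        return 0;
    }
    close(fds[1]);                   /* parent: reads from the pipe */
    read(fds[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```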

memory management

MEMORY

  • virtual address: segment + segment offset
  • logical address: the address at which an item (memory cell, storage element, network host) appears to reside from the perspective of an executing application program; here, the segment offset.
  • physical address

address mapping

First take a logical address and extract its virtual page number, then look that number up in the page table (the case without a TLB). If a matching entry is present, the page has already been brought into memory; if not, a page fault is raised, which starts I/O to bring the page in from secondary storage.
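A toy C sketch of that lookup, assuming 4 KiB pages, a tiny single-level page table, and no TLB; the frame chosen on a page fault is faked rather than produced by real I/O.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 16u

typedef struct {
    bool     present;   /* is the page currently in memory? */
    uint32_t frame;     /* physical frame number if present */
} pte_t;

static pte_t page_table[NUM_PAGES];

static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr / PAGE_SIZE;   /* virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;   /* offset within the page */

    if (!page_table[vpn].present) {
        /* Page fault: a real OS would start I/O to load the page from
           disk, update the page table, then restart the instruction. */
        printf("page fault on page %u\n", vpn);
        page_table[vpn].present = true;
        page_table[vpn].frame   = vpn + 8;   /* pretend frame chosen by the OS */
    }
    return page_table[vpn].frame * PAGE_SIZE + offset;
}

int main(void) {
    uint32_t v = 0x2345;                   /* page 2, offset 0x345 */
    printf("virtual 0x%x -> physical 0x%x\n", v, translate(v));
    printf("virtual 0x%x -> physical 0x%x\n", v, translate(v));  /* hit now */
    return 0;
}
```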


page replacement algorithms

  • FIFO
  • LRU (see the sketch after this list)
  • Pseudo-LRU
  • LFU
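A short C sketch of LRU over a made-up reference string with three frames: on a fault, evict the frame whose page was used longest ago.

```c
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};          /* example reference string */
    int n = sizeof refs / sizeof refs[0];
    int frame[FRAMES], last_used[FRAMES];
    int faults = 0;

    for (int i = 0; i < FRAMES; i++) frame[i] = -1;  /* -1 means empty frame */

    for (int t = 0; t < n; t++) {
        int hit = -1, victim = 0;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == refs[t]) hit = i;

        if (hit >= 0) {
            last_used[hit] = t;                      /* hit: refresh its timestamp */
        } else {
            faults++;
            for (int i = 1; i < FRAMES; i++)         /* pick an empty or LRU frame */
                if (frame[i] == -1 ||
                    (frame[victim] != -1 && last_used[i] < last_used[victim]))
                    victim = i;
            frame[victim] = refs[t];
            last_used[victim] = t;
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}
```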

disk scheduling

  • Seek Time: the time taken to move the disk arm to the track where the data is to be read or written.
  • Rotational Latency: the time taken by the desired sector of the disk to rotate into position under the read/write head.
  • Transfer Time: the time to transfer the data.
  • Disk Access Time: the sum of the above (seek time + rotational latency + transfer time).
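As a rough worked example with assumed values: a 4 ms average seek, a 7200 RPM drive (one revolution ≈ 8.33 ms, so average rotational latency ≈ 4.17 ms), and 0.5 ms of transfer time give an access time of about 4 + 4.17 + 0.5 ≈ 8.7 ms.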


disk scheduling algorithms

  • FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue

  • SSTF: In SSTF (Shortest Seek Time First), requests with the shortest seek time are executed first. So, the seek time of every pending request is calculated in advance, and requests are scheduled according to their calculated seek time. As a result, the request nearest to the disk arm gets executed first. SSTF is certainly an improvement over FCFS, as it decreases the average response time and increases the throughput of the system (see the sketch after this list).

  • SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services the requests coming in its path; after reaching the end of the disk, it reverses its direction and again services the requests arriving in its path. Because it works like an elevator, it is also known as the elevator algorithm.
  • CSCAN: In the SCAN algorithm, the disk arm re-scans the path it has just scanned after reversing direction, so many requests may be waiting at the other end while few or none are pending in the area just scanned. C-SCAN avoids this: after reaching one end, the arm returns to the other end without servicing requests on the way back, and then scans in the same direction again.
  • LOOK & C-LOOK: like SCAN and C-SCAN, except the arm only travels as far as the last pending request in each direction before reversing (or jumping back), instead of going all the way to the end of the disk.
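A small C sketch of SSTF: at each step, service the pending request closest to the current head position. The request queue and the starting head position (cylinder 53) are made-up example values.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int reqs[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof reqs / sizeof reqs[0];
    int head = 53, total_movement = 0;

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++) {             /* pick nearest unserved track */
            if (reqs[i] < 0) continue;            /* already served */
            if (best < 0 || abs(reqs[i] - head) < abs(reqs[best] - head))
                best = i;
        }
        total_movement += abs(reqs[best] - head);
        head = reqs[best];
        printf("service track %d\n", head);
        reqs[best] = -1;                          /* mark as served */
    }
    printf("total head movement: %d cylinders\n", total_movement);
    return 0;
}
```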