
Chapter 2 - Second Study Guide

Introduction to Operating Systems

Overview

An Operating System (OS) manages computer hardware and software resources, providing common services for computer programs. It acts as an intermediary between the user and the hardware, aiming for convenient and efficient program execution.

Types of Operating Systems

- Batch OS: Processes are executed in batches without user interaction. Suitable for tasks that need no user input (e.g., payroll).
- Time-sharing OS: Multiple users share a computer simultaneously via rapid CPU switching (time slicing), creating the illusion of exclusive access.
- Distributed OS: Multiple networked computers function as a single coherent system, offering increased power, reliability, and resource access.
- Real-time OS (RTOS): Designed for real-time applications with strict deadlines (e.g., industrial control, robotics).

System Components and Structure

- Process Management: Creation, scheduling, and termination of processes.
- Memory Management: Manages primary and secondary memory, tracking usage and allocating/deallocating space.
- File Management: Organizes, stores, and retrieves files and directories.
- I/O Systems Management: Manages device drivers and I/O hardware.
- Security and Protection: Protects system resources and user data.
- System Calls: The interface between a process and the OS kernel, used to request services such as file access or process creation.

Process Management

A process is an instance of a running computer program and a fundamental unit of work.
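As a minimal illustration (not from the guide), process creation via a system call can be sketched with Python's os module, which wraps the POSIX fork() and wait() calls (this snippet therefore runs only on Unix-like systems):

```python
import os

# os.fork() wraps the fork() system call: the kernel clones the calling
# process, and both copies continue executing from this point.
pid = os.fork()

if pid == 0:
    # Child process: fork() returned 0 here.
    print(f"child:  pid={os.getpid()}, parent={os.getppid()}")
    os._exit(0)          # terminate the child immediately
else:
    # Parent process: fork() returned the child's process ID.
    os.waitpid(pid, 0)   # wait() system call: reap the terminated child
    print(f"parent: pid={os.getpid()}, created child {pid}")
```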

Process States

- New: The process is being created.
- Ready: The process is waiting to be assigned to the CPU.
- Running: The process is currently being executed by the CPU.
- Waiting/Blocked: The process is waiting for an event (e.g., I/O completion).
- Terminated: The process has finished execution.

Process Control Block (PCB)

A data structure in the OS kernel containing all process information (state, program counter, CPU registers, memory management details).
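As a rough sketch (the fields below are a simplified assumption, not an exact kernel layout), a PCB can be pictured as a record of per-process bookkeeping:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

@dataclass
class PCB:
    pid: int                                          # unique process identifier
    state: State = State.NEW                          # current process state
    program_counter: int = 0                          # address of the next instruction
    registers: dict = field(default_factory=dict)     # saved CPU register values
    page_table_base: int = 0                          # memory-management information
    open_files: list = field(default_factory=list)    # I/O status information

# The kernel keeps one PCB per process, e.g. linked into a ready queue:
ready_queue = [PCB(pid=1, state=State.READY), PCB(pid=2, state=State.READY)]
```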

Threads and Multithreading

A thread is a light-weight process and the basic unit of CPU utilization. A process can have multiple threads.

- Benefits: Increased responsiveness, resource sharing, and better multi-core utilization (illustrated in the sketch after this list).
- User vs. Kernel Threads:
  - User-level: Managed by a user-level thread library (fast), but a blocking system call can block the entire process.
  - Kernel-level: Supported and managed by the OS kernel (slower to manage), allowing true parallelism on multi-core systems.
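A minimal sketch using Python's standard threading module, showing several threads sharing one process's memory (note that CPython's global interpreter lock limits true parallelism, so this illustrates sharing and synchronization rather than speedup):

```python
import threading

counter = 0
lock = threading.Lock()          # threads share the process's address space,
                                 # so updates to shared data must be protected

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # mutual exclusion around the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for all threads to finish

print(counter)                   # 40000: every thread updated the same variable
```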

Process Scheduling

Maximizes CPU utilization by switching the CPU among processes.

Scheduling Criteria:

- CPU Utilization: Keep the CPU as busy as possible.
- Throughput: Number of processes completed per unit of time.
- Turnaround Time: Total time from submission to completion.
- Waiting Time: Total time spent in the ready queue.
- Response Time: Time from request submission until the first response is produced.

Scheduling Algorithms:

- First-Come, First-Served (FCFS): Processes are served in the order they arrive (non-preemptive). Problem: the convoy effect (a long process holds up all subsequent processes).
- Shortest Job First (SJF): Schedules the process with the shortest CPU burst. Can be preemptive (Shortest-Remaining-Time-First) or non-preemptive. Advantage: minimum average waiting time. Problem: the length of the next CPU burst is hard to predict.
- Priority Scheduling: The CPU is allocated to the process with the highest priority. Problem: starvation (a low-priority process might never execute). Solution: aging (gradually increase the priority of long-waiting processes).
- Round-Robin (RR): Each process gets a small unit of CPU time (the time quantum). When the quantum expires, the process is preempted and moved to the end of the ready queue. Advantage: fair scheduling and good response time. Problem: performance depends on the size of the time quantum. (A simulation sketch follows this list.)
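A minimal Round-Robin simulation (a sketch, not from the guide; the process names and burst times are made-up example data) that computes completion and waiting times for a given time quantum:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: CPU burst time}; all processes assumed to arrive at time 0."""
    remaining = dict(bursts)
    queue = deque(bursts)                     # ready queue in arrival order
    clock = 0
    completion = {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run for one quantum or less
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = clock          # process finished
        else:
            queue.append(name)                # preempted: back of the ready queue
    # waiting time = turnaround time - burst time (all arrivals at t=0)
    waiting = {n: completion[n] - bursts[n] for n in bursts}
    return completion, waiting

completion, waiting = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(completion)   # {'P2': 7, 'P3': 10, 'P1': 30}
print(waiting)      # {'P1': 6, 'P2': 4, 'P3': 7}
```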

Context Switching

Saving the state of the running process and loading the state of the next process. Context switching is pure overhead: the system does no useful work while switching.

Synchronization and Deadlocks

Interprocess Communication (IPC)

Mechanisms for processes to exchange data.

- Shared Memory: Processes share a region of memory (fast communication).
- Message Passing: Processes communicate by exchanging messages (slower, but flexible and well suited to distributed systems; see the sketch after this list).
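A minimal message-passing sketch using Python's multiprocessing module (the producer/consumer roles and message contents are illustrative assumptions, not from the guide):

```python
from multiprocessing import Process, Queue

def producer(q):
    for i in range(3):
        q.put(f"message {i}")   # send: the data is copied into the shared queue
    q.put(None)                 # sentinel: tell the consumer to stop

def consumer(q):
    while (msg := q.get()) is not None:   # receive: blocks until a message arrives
        print("received:", msg)

if __name__ == "__main__":
    q = Queue()                 # IPC channel between the two processes
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```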

Synchronization Mechanisms

Ensuring that only one process at a time executes its critical section.

- Semaphores: An integer variable accessed only through the atomic operations wait() and signal().
  - wait(): Decrements the semaphore value; if the value becomes negative, the calling process is blocked.
  - signal(): Increments the semaphore value; if the value is not positive, a waiting process is unblocked.
- Mutexes: A simpler form of semaphore used for mutual exclusion. A process acquires the mutex before entering its critical section and releases it upon exit.
- Monitors: A high-level synchronization construct combining a lock with condition variables, simplifying critical-section management.

Classic Problems of Synchronization

- Bounded-Buffer Problem: A producer and a consumer share a fixed-size buffer; the producer must not add to a full buffer and the consumer must not remove from an empty one. (A semaphore-based sketch follows this list.)
- Readers-Writers Problem: Multiple readers may read a shared resource simultaneously, but a writer requires exclusive access.
- Dining-Philosophers Problem: Five philosophers share chopsticks (resources); the challenge is to let them eat without deadlock or starvation.
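A bounded-buffer sketch using counting semaphores and a mutex from Python's threading module (the buffer size and item count are arbitrary assumptions):

```python
import threading
from collections import deque

BUFFER_SIZE = 4
buffer = deque()
mutex = threading.Lock()                    # mutual exclusion on the shared buffer
empty = threading.Semaphore(BUFFER_SIZE)    # counts free slots
full = threading.Semaphore(0)               # counts filled slots

def producer():
    for item in range(8):
        empty.acquire()          # wait(empty): block if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()           # signal(full): one more item is available

def consumer():
    for _ in range(8):
        full.acquire()           # wait(full): block if the buffer is empty
        with mutex:
            item = buffer.popleft()
        empty.release()          # signal(empty): one more free slot
        print("consumed", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```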

Deadlocks

A deadlock occurs when two or more processes are blocked forever, each waiting for a resource held by another.

Conditions for Deadlock (all four must hold simultaneously):

- Mutual Exclusion: A resource can be used by only one process at a time.
- Hold and Wait: A process holds at least one resource while waiting for another.
- No Preemption: A resource cannot be forcibly taken from a process.
- Circular Wait: A circular chain of processes exists, each waiting for a resource held by the next.

Deadlock Handling

- Deadlock Prevention: Ensure that at least one of the four conditions cannot hold.
- Deadlock Avoidance: The OS uses advance knowledge of resource requests to stay in a safe state (e.g., the Banker's Algorithm; a sketch follows this list).
- Deadlock Detection: Allow deadlocks to occur, then run a detection algorithm to find them.
- Deadlock Recovery: Terminate processes or preempt resources to break the deadlock.
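A sketch of the Banker's Algorithm safety check (the matrices below are made-up example data, not from the guide): the system is safe if every process can finish in some order using only the currently available resources.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety algorithm: return (safe?, a safe sequence if one exists)."""
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)            # resources currently free
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # pick any unfinished process whose remaining need fits in `work`
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # it finishes and releases resources
                finished[i] = True
                sequence.append(i)
                progress = True
    return all(finished), sequence

# 3 processes, 2 resource types (illustrative numbers)
safe, order = is_safe(available=[3, 3],
                      max_need=[[7, 4], [3, 2], [8, 1]],
                      allocation=[[0, 1], [2, 0], [3, 0]])
print(safe, order)   # True [1, 2, 0] -> safe sequence P1, P2, P0
```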

Memory Management

The OS function that manages main memory.

- Logical vs. Physical Address Space: A logical address is generated by the CPU; a physical address is the address seen by the memory unit. The Memory-Management Unit (MMU) translates logical addresses to physical addresses.
- Dynamic Loading/Linking: A routine is loaded only when it is called (useful for large programs).
- Swapping: Temporarily moving a process out of main memory to disk and back in.

Contiguous Memory Allocation

Main memory is divided into one contiguous region for the OS and the remainder for user processes; each process occupies a single contiguous block.

Fragmentation:

- External Fragmentation: Enough total free memory exists, but it is scattered in non-contiguous chunks.
- Internal Fragmentation: The allocated memory is larger than the requested memory, wasting space inside a partition.

Paging

Non-contiguous memory allocation that solves external fragmentation.

- Main memory is divided into fixed-size blocks called frames.
- Logical memory is divided into blocks of the same size called pages.
- The OS maintains a page table for each process to map pages to frames.

Example: Given a logical address of 2,500 and a page size of 1,000 bytes:

Page number = ⌊2500 / 1000⌋ = 2
Offset = 2500 mod 1000 = 500
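That translation can be sketched in code as follows (the page table below is a made-up example mapping, not from the guide):

```python
PAGE_SIZE = 1000
page_table = {0: 5, 1: 9, 2: 3, 3: 7}   # hypothetical page -> frame mapping

def translate(logical_address):
    page = logical_address // PAGE_SIZE      # page number = floor(address / page size)
    offset = logical_address % PAGE_SIZE     # offset within the page
    frame = page_table[page]                 # page-table lookup (done by the MMU)
    return frame * PAGE_SIZE + offset        # physical address

print(translate(2500))   # page 2 -> frame 3, offset 500 => 3500
```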

Segmentation

Logical address space is a collection of variable-sized segments. Each segment has a name and a length. It provides a user-oriented view of memory.

Virtual Memory Management

Allows a program to execute even if it's not entirely in memory.

- Demand Paging: Pages are loaded into memory only when they are needed (on demand).
- Page Replacement: When a page fault occurs and no frame is free, a resident page must be selected for replacement (two policies are compared in the sketch after this list).
  - FIFO (First-In, First-Out): Replaces the oldest page in memory.
  - Optimal Page Replacement: Replaces the page that will not be used for the longest time (impractical; requires knowledge of future references).
  - LRU (Least Recently Used): Replaces the page that has not been used for the longest time.
  - MFU (Most Frequently Used): Replaces the page that has been referenced most often.
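A sketch comparing FIFO and LRU page-fault counts (the reference string and frame count are arbitrary example data):

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)                # evict the oldest resident page
            memory.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0        # keys ordered from least to most recently used
    for page in refs:
        if page in memory:
            memory.move_to_end(page)         # referenced again: now most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)   # evict the least recently used page
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 10 9 for this reference string
```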

I/O Systems

Manages communication between the computer and its I/O devices.

- Device Drivers: Software modules that provide an interface between the OS and I/O hardware.
- Interrupt Handling: A signal from a device to the CPU notifying it of a state change; the OS's interrupt handler processes the signal.
- DMA (Direct Memory Access): I/O devices transfer data directly to and from main memory without involving the CPU.

Disk Scheduling

The OS is responsible for using the disk hardware efficiently by minimizing disk head movement.

- FCFS (First-Come, First-Served): Services requests in the order they arrive.
- SSTF (Shortest Seek Time First): Services the request with the shortest seek time from the current head position (compared with FCFS in the sketch after this list).
- SCAN (Elevator Algorithm): The disk arm starts at one end and moves toward the other, servicing every request in its path; when it reaches the end, it reverses direction.
- C-SCAN (Circular SCAN): Like SCAN, but when the head reaches the end it returns quickly to the beginning without servicing requests on the return trip.
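A sketch comparing total head movement for FCFS and SSTF on an example request queue (the cylinder numbers and starting head position are made-up example data):

```python
def fcfs_movement(requests, head):
    total = 0
    for cyl in requests:              # service requests in arrival order
        total += abs(cyl - head)
        head = cyl
    return total

def sstf_movement(requests, head):
    pending, total = list(requests), 0
    while pending:
        # always pick the pending request closest to the current head position
        nearest = min(pending, key=lambda cyl: abs(cyl - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # example cylinder request queue
print(fcfs_movement(queue, head=53), sstf_movement(queue, head=53))   # 640 236
```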

Security Concepts

Protection of system resources against unauthorized access or modification.

- Threats and Attacks: Malware (viruses, worms), phishing, denial-of-service attacks, and unauthorized access.
- User Authentication: Mechanisms that verify a user's identity (passwords, biometrics, multi-factor authentication; a password-hashing sketch follows this list).
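A sketch of password-based authentication using salted hashing (assuming Python's standard hashlib, hmac, and secrets modules; real systems typically use dedicated password-hashing schemes such as bcrypt or scrypt):

```python
import hashlib, hmac, secrets

def register(password):
    salt = secrets.token_bytes(16)     # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                # the system stores the salt and hash, never the password

def authenticate(password, salt, stored_digest):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored_digest)   # constant-time comparison

salt, stored = register("correct horse battery staple")
print(authenticate("correct horse battery staple", salt, stored))   # True
print(authenticate("wrong password", salt, stored))                 # False
```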

Protection

A mechanism for controlling the access of processes to resources.

- Access Control Lists (ACLs): A list associated with each file or resource that specifies which users or processes hold which access rights (a toy lookup follows this list).
- Capabilities: A token or key held by a user or process that grants specific access rights to a resource.
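A toy ACL lookup (the users, paths, and rights below are invented for illustration):

```python
# ACL: each resource maps principals to the set of rights they hold
acl = {
    "/etc/passwd": {"root": {"read", "write"}, "alice": {"read"}},
    "/home/alice/notes.txt": {"alice": {"read", "write"}},
}

def check_access(user, path, right):
    """Return True if `user` holds `right` on `path` according to the ACL."""
    return right in acl.get(path, {}).get(user, set())

print(check_access("alice", "/etc/passwd", "read"))    # True
print(check_access("alice", "/etc/passwd", "write"))   # False
```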

Cryptography in Operating Systems

Used to secure data stored on the system.

- Disk Encryption: Encrypting entire disk partitions to protect data against physical theft.
- Secure Boot: Using digital signatures to ensure that the OS kernel and other system components have not been tampered with.
