OS TT2
Find solved example numbers here:
Module 4
Deadlock can arise if four conditions hold simultaneously (Page 2 of attached document):
1. Mutual exclusion: at least one resource must be held in a non-sharable mode, so only one process can use it at a time.
2. Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
3. No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 waits for a resource held by P1, P1 waits for P2, …, and Pn waits for a resource held by P0.
The hold-and-wait condition can be prevented by ensuring that whenever a process requests a resource, it does not hold any other resources (Page 14 of attached document). This can be achieved in two primary ways:
1. Require each process to request and be allocated all of its resources before it begins execution.
2. Allow a process to request resources only when it currently holds none; it must release everything it holds before requesting a new set.
The no-preemption condition can be prevented by allowing the operating system to preempt resources from a process (Page 15 of attached document). Here are two ways to achieve this:
1. If a process holding some resources requests another resource that cannot be immediately allocated, all resources it currently holds are preempted (implicitly released); the process is restarted only when it can regain its old resources together with the new ones it is requesting.
2. If the requested resource is held by another process that is itself waiting for additional resources, preempt the resource from that waiting process and allocate it to the requester.
The circular-wait condition can be prevented by imposing a total ordering of all resource types and requiring that each process requests resources in an increasing order of enumeration (Page 15 of attached document).
1. For example, number the resource types R1 < R2 < … < Rn and require that a process request Rj only if j is greater than the number of every resource type it currently holds (a minimal lock-ordering sketch follows).
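As a minimal illustration of resource ordering (the resource names, the numbering, and the use of Python locks below are assumptions for the sketch, not part of the course material), a process that always acquires locks in increasing order of a global enumeration can never take part in a circular wait:

import threading

# Impose a global order on resources and always acquire locks in increasing
# order of that enumeration, so a cycle of waits cannot form.
RESOURCE_ORDER = {"disk": 1, "printer": 2, "tape": 3}   # assumed resources
locks = {name: threading.Lock() for name in RESOURCE_ORDER}

def acquire_in_order(resource_names):
    """Acquire the locks for the given resources in global order."""
    ordered = sorted(resource_names, key=lambda r: RESOURCE_ORDER[r])
    for name in ordered:
        locks[name].acquire()
    return ordered

def release(ordered_names):
    """Release in reverse acquisition order."""
    for name in reversed(ordered_names):
        locks[name].release()

# A process needing the printer and the disk always takes the disk first,
# because disk (1) precedes printer (2) in the enumeration.
held = acquire_in_order(["printer", "disk"])
# ... use the resources ...
release(held)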
Deadlock handling in operating systems can be approached in three main ways: prevention, avoidance, and detection, each with its own strategy and implications (Page 6 of attached document).
In summary, prevention acts proactively to constrain resource allocation and eliminate deadlock conditions. Avoidance dynamically assesses resource requests based on future needs to maintain a safe state. Detection identifies and resolves deadlocks after they occur, allowing for more flexible resource allocation at the cost of occasional deadlock resolution.
Thrashing is a phenomenon in virtual memory operating systems where the system spends a disproportionate amount of time swapping pages in and out of memory, resulting in little productive work being done. (Not found in attached document)
Thrashing occurs when a process does not have "enough" pages in memory to support its execution. This leads to frequent page faults. When a page fault occurs, the operating system must retrieve the required page from disk, which is a slow operation. If the system is thrashing, processes are constantly faulting, leading to a high degree of disk activity and low CPU utilization. Processes spend more time waiting for pages to be swapped in than executing instructions.
Thrashing can be handled through several techniques:
Working-set model: track each process's working set and allocate enough frames to hold it; suspend a process whose working set cannot fit in memory.
Page-fault frequency (PFF) control: monitor each process's fault rate, giving it more frames when the rate is too high and reclaiming frames when it is too low (a minimal sketch follows the summary below).
Local (per-process) page replacement, so one thrashing process cannot steal frames from the others.
Reducing the degree of multiprogramming (swapping some processes out) when total memory demand exceeds physical memory.
Thrashing is a significant performance issue in virtual memory systems. Effective handling requires a combination of techniques to ensure that processes have adequate memory resources and that system resources are not wasted on excessive page swapping.
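As a rough sketch of one of these techniques, page-fault-frequency control can be expressed as a simple feedback rule (the thresholds, the dictionary fields, and the frame-pool handling below are illustrative assumptions, not a specific OS interface):

# Minimal page-fault-frequency (PFF) sketch.
UPPER_BOUND = 0.10   # faults per reference above which the process needs frames
LOWER_BOUND = 0.01   # rate below which frames can be reclaimed

def adjust_frames(process, free_frames):
    """Grow or shrink a process's frame allocation based on its fault rate."""
    rate = process["faults"] / max(process["references"], 1)
    if rate > UPPER_BOUND:
        if free_frames > 0:
            process["frames"] += 1          # give it one more frame
            free_frames -= 1
        else:
            process["suspended"] = True     # swap out: reduce multiprogramming
    elif rate < LOWER_BOUND and process["frames"] > 1:
        process["frames"] -= 1              # reclaim a frame for other processes
        free_frames += 1
    return free_frames

p = {"faults": 30, "references": 200, "frames": 4, "suspended": False}
print(adjust_frames(p, free_frames=2), p)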
Demand paging is a virtual memory technique where pages are loaded into physical memory only when they are needed (on demand), rather than loading all pages of a process at once (Not found in attached document).
In a demand-paging system, when a process is started, none of its pages are loaded into physical memory. Instead, the operating system sets up the page table so that any attempt to access a page that is not in memory will cause a page fault. When the process accesses a page that is not in memory (a page fault), the operating system retrieves the page from secondary storage (usually a disk) and loads it into a free frame in physical memory. The page table is updated to reflect the new location of the page, and the process can continue execution. If no free frames are available, a page replacement algorithm is used to select a page to be swapped out.
Advantages of demand paging:
Less I/O and less memory are needed per process, so more processes can be kept in memory (higher degree of multiprogramming).
Faster process startup, since only the pages that are actually referenced are loaded.
Programs larger than physical memory can run.
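A minimal sketch of the on-demand loading described above (the page count, frame numbers, and reference string are assumed values):

# Pages start out not loaded (invalid); the first reference to each page
# triggers a "page fault" that loads it into a free frame.
page_table = {p: {"valid": False, "frame": None} for p in range(5)}
free_frames = [0, 1, 2, 3]
faults = 0

def access(page):
    """Reference a virtual page, loading it on demand."""
    global faults
    entry = page_table[page]
    if not entry["valid"]:
        faults += 1                          # page fault: bring the page in
        frame = free_frames.pop(0)           # assume a free frame is available
        entry["frame"], entry["valid"] = frame, True
    return entry["frame"]

for page in [0, 1, 0, 2, 1]:
    access(page)
print("page faults:", faults)                # -> 3 (pages 0, 1 and 2 loaded once each)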
Virtual memory is a memory management technique that allows processes to execute even if they are not entirely loaded into physical memory (Not found in attached document). It creates an abstraction that provides each process with the illusion of having a large, contiguous address space, regardless of the amount of physical memory available.
Virtual memory is typically implemented using demand paging or demand segmentation. Each process has its own virtual address space, which is divided into pages (in demand paging) or segments (in demand segmentation). These pages or segments can be stored either in physical memory or on secondary storage (e.g., a hard disk).
The operating system maintains a mapping between the virtual addresses used by a process and the physical addresses in memory. This mapping is typically implemented using a page table or segment table. When a process accesses a virtual address, the operating system translates it into a physical address using the mapping table. If the corresponding page or segment is not in physical memory (a page fault or segment fault), the operating system retrieves it from secondary storage and loads it into memory.
Key aspects of virtual memory:
Separation of logical (virtual) memory from physical memory, so the logical address space can be much larger than the available physical memory.
Only part of a process needs to be resident in memory for it to execute.
Pages or segments are brought in on demand and replaced when memory is full.
Simplifies programming, enables sharing of address spaces, and supports memory protection.
Q67. Consider a system consisting of m resources of the same type, being shared by n processes. Resources can be requested and released by processes only one at a time. Show that the system is deadlock free if the following two conditions hold:
a. The maximum need of each process is between 1 and m resources.
b. The sum of all maximum needs is less than m + n.
m = total identical resources
n = number of processes
For each process Pᵢ:
Maxᵢ ∈ [1, m] is its maximum demand
Aᵢ is its current allocation
Needᵢ = Maxᵢ − Aᵢ is its remaining need
Available = m − ∑₁ⁿ Aᵢ
Given conditions:
1. 1 ≤ Maxᵢ ≤ m for each i
2. ∑₁ⁿ Maxᵢ < m + n
We must show no deadlock can occur.
— Step 1: Bound the total remaining need
In any reachable state ∑₁ⁿ Aᵢ = m − Available, so
∑₁ⁿ Needᵢ = ∑₁ⁿ (Maxᵢ − Aᵢ) = (∑ Maxᵢ) − (m − Available) < (m + n) − (m − Available) = n + Available.
In particular, if all resources were allocated (Available = 0), then ∑ Needᵢ < n, so at least one process would have Needᵢ = 0: it already holds everything it will ever request and can run to completion. Hence the processes can never all be blocked at once.
— Step 2: Find a process whose Need ≤ Available
Suppose, to the contrary, that every process has Needᵢ > Available, i.e. Needᵢ ≥ Available + 1. Then
∑₁ⁿ Needᵢ ≥ n (Available + 1) = n·Available + n.
Combining this with the bound from Step 1,
n·Available + n ≤ ∑₁ⁿ Needᵢ < n + Available,
so (n − 1)·Available < 0, which is impossible since n ≥ 1 and Available ≥ 0.
Thus there must be at least one process Pₖ with
Needₖ ≤ Available.
— Step 3: Induction to finish everyone
1. By Step 2 there is some Pₖ with Needₖ ≤ Available, so Pₖ can acquire its remaining Needₖ units (one at a time) and run to completion.
2. When Pₖ finishes, it releases all Aₖ + Needₖ = Maxₖ units, so Available does not decrease.
3. Remove Pₖ from consideration (now n − 1 processes, total resources still m).
4. The two conditions still hold for the remaining set: each remaining Maxᵢ is still between 1 and m, and ∑ of the remaining Maxᵢ < (m + n) − Maxₖ ≤ m + (n − 1), since Maxₖ ≥ 1. So Steps 1 and 2 apply again.
By induction, every process eventually runs to completion, so no deadlock can occur.
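A small script can mirror the Step 2 and Step 3 argument by repeatedly picking a process whose remaining need fits in Available and letting it finish (the Max values below are assumed: m = 5, n = 3, ∑Max = 7 < m + n = 8):

# Construct a completion order under the two conditions of Q67.
m = 5
max_need = {"P1": 2, "P2": 3, "P3": 2}        # assumed maximum demands
alloc = {p: 0 for p in max_need}              # start with nothing allocated

def completion_order(m, max_need, alloc):
    alloc = dict(alloc)
    remaining = list(max_need)
    available = m - sum(alloc.values())
    order = []
    while remaining:
        # Step 2 guarantees such a process exists in every reachable state.
        p = next(q for q in remaining if max_need[q] - alloc[q] <= available)
        available += alloc[p]                 # Pk acquires Need_k, finishes, releases Max_k
        alloc[p] = 0
        remaining.remove(p)
        order.append(p)
    return order

print(completion_order(m, max_need, alloc))   # -> ['P1', 'P2', 'P3']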
Q68
Q69
FIFO :
Optimal replacement:
LRU same example :
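The solved example itself is not reproduced in these notes. As a sketch, the following counts page faults under FIFO, LRU, and Optimal for an assumed reference string with three frames (both values are illustrative, not the values from the class example):

def count_faults(refs, frames, policy):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":                 # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        if policy in ("FIFO", "LRU"):
            memory.pop(0)                       # FIFO: oldest loaded; LRU: least recently used
        else:                                   # Optimal: evict the page used farthest in the future
            future = {p: refs.index(p, i + 1) if p in refs[i + 1:] else float("inf")
                      for p in memory}
            memory.remove(max(future, key=future.get))
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]   # assumed reference string
for policy in ("FIFO", "LRU", "OPT"):
    print(policy, count_faults(refs, 3, policy))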

Aspect           | Paging                                                        | Segmentation
Memory Division  | Divides memory into fixed-size pages/frames                   | Divides memory into variable-sized segments
Addressing       | Logical address divided into page number and offset           | Logical address divided into segment number and offset
Fragmentation    | Avoids external fragmentation, causes internal fragmentation  | Avoids internal fragmentation, may cause external fragmentation
View of Memory   | Uniform view (all pages same size)                            | Logical view based on program structure (functions, data)
Hardware Support | Uses page tables and frame mapping                            | Uses segment tables with base and limit registers
1.
2.
3.
4.
5.
6.
Memory Scheme         | External Fragmentation                                                            | Internal Fragmentation
Contiguous Allocation | High, due to variable-sized holes created when processes are loaded and removed  | Low, as processes are allocated exactly the required size
Segmentation          | Possible, because segments vary in size and may not fit into available holes     | None, since segments are variable-sized and allocated exactly as needed
Paging                | None, because pages are fixed size and can be placed anywhere in physical memory | Possible, since the last page/frame may not be fully used
Contiguous allocation requires processes to be loaded into a single contiguous block of memory, which leads to external fragmentation as free memory gets split into small unusable holes.
Segmentation divides memory based on logical segments, avoiding internal fragmentation but still suffers from external fragmentation.
Paging divides memory into fixed-size blocks, eliminating external fragmentation but causing internal fragmentation when the last page is partially filled[1, pp. 15-25].
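A small worked example of the internal fragmentation in the last page (the process size and page size are assumed values):

import math

process_size = 72_766      # bytes (assumed)
page_size    = 4_096       # bytes per page/frame (assumed)

pages_needed  = math.ceil(process_size / page_size)          # 18 pages
internal_frag = pages_needed * page_size - process_size      # wasted bytes in the last page
print(pages_needed, internal_frag)                           # -> 18 962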
The Translation Lookaside Buffer (TLB) is a small, fast associative memory cache that stores recent translations of page numbers to frame numbers. When the CPU generates a logical address, the page number is first checked in the TLB:
If the page number is found in the TLB (TLB hit), the corresponding frame number is retrieved directly, and the physical address is formed by combining the frame number with the page offset. This avoids the need to access the page table in main memory, speeding up the address translation.
If the page number is not found in the TLB (TLB miss), the system accesses the page table in main memory to find the frame number, updates the TLB with this entry, and then forms the physical address.

This hardware support reduces the average memory access time by avoiding the two-memory-access problem inherent in paging without TLB[1, pp. 25-27].
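The effect can be quantified with the usual effective-access-time calculation (the hit ratio and latencies below are assumed values):

tlb_time  = 10        # ns to search the TLB (assumed)
mem_time  = 100       # ns for one main-memory access (assumed)
hit_ratio = 0.90      # assumed

# TLB hit: one memory access; TLB miss: page-table access plus data access.
eat = hit_ratio * (tlb_time + mem_time) + (1 - hit_ratio) * (tlb_time + 2 * mem_time)
print(eat)            # -> 120.0 ns, versus 200 ns with no TLB at all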
Thus, while demand paging optimizes memory usage and process startup, it can degrade performance if page faults are frequent or if the system does not have enough physical memory to hold the working set of processes[1, pp. 28-30; additional explanation from internet sources].
In contiguous allocation, each file occupies a set of contiguous blocks on the disk. This method is simple and supports fast sequential and direct access. However, it suffers from external fragmentation and may require knowing the file size in advance, making it difficult to grow files[1, p. 12, general OS knowledge].
Linked allocation stores each file as a linked list of disk blocks, which may be scattered anywhere on the disk. Each block contains a pointer to the next block. This eliminates external fragmentation and allows files to grow easily. However, it is inefficient for direct access since all blocks must be traversed sequentially to reach a specific point in the file[1, p. 13, general OS knowledge].
In indexed allocation, each file has its own index block, which contains the addresses of all the file’s blocks. This allows direct access to file blocks and supports dynamic file growth. However, it requires extra space for the index block, and if the file is very large, multiple levels of indexing may be needed[1, p. 13, general OS knowledge].
Method                | Advantages                                    | Disadvantages
Contiguous Allocation | Fast access, simple implementation            | External fragmentation, hard to grow files
Linked Allocation     | No external fragmentation, easy to grow files | Slow direct access, pointer overhead
Indexed Allocation    | Fast direct access, dynamic file growth       | Index block overhead, complex for large files
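A minimal sketch of how indexed allocation turns a byte offset into a disk block (the block size and block numbers are assumed values):

BLOCK_SIZE = 512
index_block = [97, 14, 230, 55, 301]      # the file's data blocks, in order (assumed)

def locate(offset):
    """Return (disk block, offset inside that block) for a byte offset in the file."""
    i, within = divmod(offset, BLOCK_SIZE)
    return index_block[i], within

print(locate(0))      # -> (97, 0)    first byte of the file
print(locate(1300))   # -> (230, 276) direct access without reading blocks 97 and 14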
Segmentation provides a user’s view of memory, making it easier to organize and protect code and data. It supports sharing and protection by associating different permissions with each segment. However, segmentation can lead to external fragmentation since segments are variable-sized and may not fit perfectly into available memory holes[1, p. 19].
Logical division of memory into segments.
Each segment can grow or shrink independently.
Facilitates sharing and protection.
May cause external fragmentation.
Q77
Suppose a process requires five pages but only the first two are loaded into memory initially. When the process tries to access the third page, a page fault occurs. The OS then loads the third page into memory, possibly replacing another page if memory is full. This approach saves memory and reduces initial load time, but frequent page faults can degrade performance (a condition known as thrashing)[1, p. 28; internet sources].
Efficient memory usage.
Faster program startup.
Page faults can cause delays.
Excessive page faults lead to thrashing.
a) Translation Lookaside Buffer (TLB)
b) Page replacement algorithms
(4 marks)
A TLB is a special, fast cache memory that stores recent translations of virtual page numbers to physical frame numbers. When the CPU generates a virtual address, the TLB is checked first. If the page number is found (TLB hit), the corresponding frame number is retrieved quickly. If not (TLB miss), the page table in main memory is accessed, and the TLB is updated[1, p. 25]. The TLB significantly speeds up address translation and reduces the average memory access time.
Page replacement algorithms determine which memory page to replace when a new page must be loaded and memory is full. Common algorithms include FIFO (first-in, first-out), Optimal (OPT), LRU (least recently used), and approximations of LRU such as the clock (second-chance) algorithm.
These algorithms aim to minimize page faults and optimize system performance[1, p. 29; internet sources].
[1] OS-M5-memory-management.pdf (page numbers as indicated above)
Additional explanations and examples are supplemented from standard operating system literature and online resources as explicitly stated.
In simple paging, all pages of a process must be in memory to run; in virtual memory paging, only the required pages are loaded as needed.
Virtual memory paging supports demand paging and page replacement policies, while simple paging does not.
Virtual memory allows processes larger than physical memory to execute, which is not possible in simple paging.
(Reference: Not found in attached document; answer generated from internet source [2].)
The logical (virtual) address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).
The page number is used as an index into the page table to find the corresponding frame number.
The frame number is then combined with the offset to form the physical address.
Logical Address (p, d)
          |
          v
+---------------------+
|     Page Table      |
+---------------------+
          |
          v
Physical Frame Number + Offset (d)
          |
          v
   Physical Address
For example, if the CPU generates a logical address (p, d), the paging hardware uses the page number (p) to look up the frame number in the page table. The physical address is then constructed by concatenating the frame number with the offset (d).
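A minimal sketch of this translation (the page size and page-table contents are assumed values):

PAGE_SIZE = 4096                       # assumed, so the offset uses 12 bits
page_table = {0: 5, 1: 9, 2: 3}        # page number -> frame number (assumed)

def translate(logical_address):
    p, d = divmod(logical_address, PAGE_SIZE)
    frame = page_table[p]              # a missing entry would mean a page fault
    return frame * PAGE_SIZE + d

print(hex(translate(0x1234)))          # page 1, offset 0x234 -> frame 9 -> 0x9234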
Internal Fragmentation                 | External Fragmentation
Occurs with fixed-sized partitions     | Occurs with variable-sized partitions
Wasted space is inside allocated block | Wasted space is outside allocated blocks
Solution: best-fit block allocation    | Solution: compaction, paging, segmentation
(Reference: Not found in attached document; answer generated from internet source [3].)
A page table is a data structure used by the operating system to store the mapping between virtual page numbers and physical frame numbers. Each process has its own page table, and it is used during address translation to locate the physical address corresponding to a virtual address. The page table contains entries for each page, including the frame number and additional information like valid/invalid bits [4].
An inverted page table has one entry for each physical frame in memory, rather than for each virtual page. It stores information about which virtual page (and which process) is stored in each physical frame. There is only one inverted page table for the entire system, which helps to reduce memory overhead, especially when there are many processes. However, address translation can be slower because the system may need to search the table to find the mapping [4].
Page Table                           | Inverted Page Table
One per process                      | One for the entire system
Maps virtual pages to frames         | Maps frames to virtual pages/processes
Can be large for a big address space | More memory-efficient for large systems
A logical address (also called virtual address) is generated by the CPU during program execution. It is the address used by a process to access memory. The logical address is mapped to a physical address by the memory management unit (MMU) [5].
A physical address is the actual location in main memory (RAM) where data or instructions are stored. The physical address is what the memory hardware uses to access data [5].
A relative address is an address specified relative to a known point, such as the start of a program or a segment. It represents an offset from a base address. The relative address is converted to a logical address by adding it to the base address.
Address Type     | Definition                                  | Who uses it?     | Example
Logical Address  | Generated by CPU, used by processes         | User/program     | 0x00001234
Relative Address | Offset from a known base (e.g., base + 200) | Program/Compiler | base + 200
Physical Address | Actual address in RAM where data is stored  | Memory hardware  | 0xFF001234
88. What is the difference between a page and a frame?
What is the difference between a page and a segment?
(Un)
A page is a fixed-size block of virtual memory used by processes. It is the basic unit of data for memory management in the virtual address space. Pages are created by dividing the logical address space of a process into equal-sized chunks.
A frame is a fixed-size block of physical memory (RAM). Physical memory is divided into frames, each of which can hold one page. The operating system maps pages to frames using a page table, allowing the virtual address space to be mapped onto physical memory.
Pages exist in the virtual (logical) address space, while frames exist in the physical address space.
Aspect     | Page (Virtual)                | Frame (Physical)
Location   | Virtual/Logical memory        | Physical memory (RAM)
Purpose    | Divides process address space | Divides main memory
Managed by | OS (via page tables)          | OS (via frame allocation)
A page is a fixed-size block of memory, both in the logical and physical address space. Paging divides the process address space into equal-sized pages, and main memory into equal-sized frames.
A segment is a variable-sized logical unit of memory, such as a function, array, or data structure. Segmentation divides the process address space into segments of different lengths, which correspond to logical divisions in the program (e.g., code, data, stack).
Pages are fixed in size; segments are variable in size.
Paging is managed purely by the operating system, while segmentation is often influenced by the structure of the program (compiler/user).
Paging can cause internal fragmentation; segmentation can cause external fragmentation.
Aspect         | Page                            | Segment
Size           | Fixed                           | Variable
Division basis | OS divides memory into pages    | Program divides memory into segments
Fragmentation  | Internal fragmentation possible | External fragmentation possible
Address        | Page number + offset            | Segment number + offset
89. Explain with neat diagram hardware architecture of “Inverted Paging”
(Un)
An Inverted Page Table (IPT) is a system-wide data structure used in virtual memory systems to map physical memory frames to virtual pages. Unlike traditional page tables (which have one page table per process and an entry for every virtual page), the IPT has one entry per physical frame and is shared by all processes. Each entry contains:
The virtual page number
The process ID (PID)
Control bits (valid, dirty, etc.)
Chained pointer (for shared memory or hash collisions)
When the CPU generates a virtual address, the MMU (Memory Management Unit) uses the combination of PID and virtual page number to search the IPT. If a match is found, the corresponding physical frame is used; otherwise, a page fault occurs.
+---------------------+        +-------------------------+
|   Virtual Address   |        |   Inverted Page Table   |
|   (PID, Page No.)   | -----> |  PID  |  Page  |  Frame |
+---------------------+        |-------|--------|--------|
                               |  ...  |  ...   |  ...   |
                               +-------------------------+
                                           |
                                           v
                               +---------------------+
                               |  Physical Address   |
                               +---------------------+
Reduces memory overhead for page tables, especially in systems with large address spaces or many processes.
Only one IPT for the entire system.
Lookup can be slower, often requiring associative search or hashing.
More complex to implement shared memory [4].
(Reference: [4])
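A minimal sketch of the lookup described above, using a plain dictionary to stand in for the hardware hash or associative search (the frame count and table contents are assumed values):

NUM_FRAMES = 4
# frame index -> (process ID, virtual page) stored there (assumed contents)
ipt = {0: (1, 7), 1: (2, 0), 2: (1, 3), 3: (2, 5)}
lookup = {entry: frame for frame, entry in ipt.items()}   # stand-in for the hash

def translate(pid, page, offset, page_size=4096):
    frame = lookup.get((pid, page))
    if frame is None:
        raise LookupError("page fault")    # the page is not resident in any frame
    return frame * page_size + offset

print(translate(1, 3, 100))   # -> frame 2, physical address 2*4096 + 100 = 8292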
90. Explain with neat diagram hardware architecture of “Segmented Paging”
(Un)
Logical address: (Segment Number, Page Number, Offset)
Each process has a segment table. Each entry in the segment table points to a page table for that segment.
The page table maps the segment’s pages to physical frames.
1. CPU generates a logical address (segment no., page no., offset).
2. The segment number indexes into the segment table to find the base address of the page table for that segment.
3. The page number indexes into that page table to get the frame number.
4. The frame number combined with the offset gives the physical address.
+----------------------+      +------------------+      +------------------+
|   Logical Address    |      |  Segment Table   |      |    Page Table    |
| (Seg, Page, Offset)  |----->| Seg# | PT Base   |----->| Page# | Frame#   |
+----------------------+      +------------------+      +------------------+
                                                                  |
                                                                  v
                                                        +-------------------+
                                                        |  Physical Address |
                                                        +-------------------+
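A minimal sketch of the two-level translation above (the page size and table contents are assumed values):

PAGE_SIZE = 1024
segment_table = {                      # segment number -> that segment's page table (assumed)
    0: {0: 12, 1: 7},                  # code segment
    1: {0: 3},                         # data segment
}

def translate(seg, page, offset):
    page_table = segment_table[seg]    # step 2: find the segment's page table
    frame = page_table[page]           # step 3: find the frame
    return frame * PAGE_SIZE + offset  # step 4: form the physical address

print(translate(0, 1, 40))             # -> 7 * 1024 + 40 = 7208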
Supports logical program structure (segmentation) and efficient memory use (paging).
Reduces external fragmentation compared to pure segmentation.
More complex hardware and address translation.
Both segment and page tables must be managed [5].
(Reference: [5])
In the context of data storage and management, fields and records are fundamental concepts that define the structure of data organization. A field is the smallest unit of named data that has meaning in a record. Think of it as an attribute or a single piece of information about an item. For example, in a database of students, fields might include "StudentID," "Name," "Major," and "GPA." Each field is designed to hold a specific type of data, such as text, numbers, or dates.
A record , on the other hand, is a collection of related fields that are treated as a single unit. A record represents a complete set of information about a particular entity or item. In our student database example, a record would consist of all the fields (StudentID, Name, Major, GPA) for one particular student. So, a record provides a holistic view of all attributes of a single entity.
In essence, fields are the columns in a table, while records are the rows. The key difference is that a field holds a single piece of information, while a record contains all the related pieces of information about a single instance (Reference: general database knowledge).
A file is a self-contained collection of data, typically stored as a single unit on a storage device. Files are managed by the operating system's file system. They can contain various types of data, such as text, images, or executable code. Files are a basic way to store information, but they often lack advanced organizational features.
A database is a structured collection of data that is organized and managed to allow efficient storage, retrieval, and manipulation of data. Databases are managed by a Database Management System (DBMS), which provides a set of tools and features for defining data structures, enforcing data integrity, and managing access control.
The main differences lie in structure, management, and functionality. Files are simple, unstructured, and managed by the operating system. Databases are structured, managed by a DBMS, and offer advanced features like indexing, querying, and transaction management. Databases are designed for complex data relationships and efficient data handling, while files are best suited for simple data storage needs (Reference: general database knowledge).
A File Management System (FMS) is a software system (or part of an operating system) that manages files and directories on a storage device. It provides a structured way to organize, store, retrieve, and manipulate files. The FMS is responsible for maintaining metadata about files, such as their names, sizes, locations, creation dates, and permissions.
Key elements of a file management system include:
Here's a simple textual representation of a file management system's hierarchical structure:
Root Directory (/)
├── Directory A
│   ├── File 1.txt
│   └── File 2.jpg
├── Directory B
│   ├── Subdirectory C
│   │   └── File 3.pdf
│   └── File 4.docx
└── File 5.exe
The FMS allows users and applications to interact with files and directories in a structured and organized manner, providing essential functions like creating, deleting, renaming, and moving files (Reference: general operating system knowledge).
When selecting a file organization method, several criteria must be considered to ensure efficiency and effectiveness: short access and retrieval time, ease of update, economy of storage space, simple maintenance, and reliability.
The choice of file organization should align with these criteria to optimize performance and resource utilization for the given application (Reference: general database knowledge).
Five common file organizations include:
1. The pile: records are collected in the order they arrive, with no structure; a record can be found only by exhaustive search.
2. The sequential file: fixed-format records stored in key order; efficient for batch processing but poor for random access.
3. The indexed sequential file: a sequential file with an index (and an overflow area) that supports both sequential and random access.
4. The indexed file: one or more indexes give access by different attributes; records themselves may be stored in any order.
5. The direct (hashed) file: a hash of the key gives the record's location, providing very fast access by key.
Each file organization has its strengths and weaknesses, making them suitable for different types of applications and data access patterns (Reference: general database knowledge).
The average search time in an indexed sequential file is generally less than in a sequential file due to the presence of an index. In a sequential file, to find a specific record, you might have to read through a large portion of the file, especially if the record is located towards the end. This results in a linear search time complexity, O(n), where n is the number of records.
In an indexed sequential file, an index is maintained, which maps key values to the corresponding record locations within the file. This index allows the system to quickly locate the approximate position of a record. The search process involves first searching the index to find the location of the desired record, and then directly accessing the record at that location.
The index search time is typically much faster than a linear search through the entire file. Once the index identifies the approximate location, a short sequential search may be needed within a smaller section of the file to locate the exact record. This combination of indexed and sequential access reduces the average search time, making indexed sequential files more efficient for applications that require both sequential and random access patterns (Reference: general database knowledge).
Directories are essential components of a file system, and several operations can be performed on them to manage files and organize data effectively. Typical operations include: searching for a file, creating a file, deleting a file, listing the contents of a directory, renaming a file, and traversing the file system (visiting every directory and file).
These operations provide users with the necessary tools to manage and organize their files and directories effectively within the file system (Reference: general operating system knowledge).
The pathname specifies the location of a file or directory in a file system. A pathname can be either absolute or relative.
An absolute pathname provides the complete path to a file or directory, starting from the root directory of the file system. It uniquely identifies the location of the item, regardless of the current working directory.
A relative pathname specifies the location of a file or directory relative to the current working directory. It is a shorthand way of specifying a path without having to type out the entire absolute path.
The working directory (also known as the current directory) is the directory in which the user or a process is currently operating. It serves as the starting point for resolving relative pathnames. When a relative pathname is used, the operating system interprets it with respect to the working directory to determine the actual location of the file or directory.
The relationship is that the working directory provides the context for interpreting relative pathnames. If the working directory is /home/user/documents , and you specify a relative pathname of report.txt , the operating system will interpret it as /home/user/documents/report.txt . This makes it easier to access files and directories within the current working area without having to specify the full absolute path each time (Reference: general operating system knowledge).
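A small illustration of this resolution using Python's posixpath module (report.txt and the working directory come from the example above; the second relative path is hypothetical):

import posixpath

working_dir = "/home/user/documents"
print(posixpath.join(working_dir, "report.txt"))
# -> /home/user/documents/report.txt
print(posixpath.normpath(posixpath.join(working_dir, "../pictures/a.png")))
# -> /home/user/pictures/a.png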
Typical access rights specify what actions a user can perform on a particular file. These rights are fundamental for security and data protection. The main access rights include: read (view the contents of the file), write (modify the contents), execute (run the file as a program), append (add data without changing existing contents), delete (remove the file), and changing the file's ownership or permissions.
These rights can be granted or denied individually or in combinations. For example, a user might have read and execute permissions but not write or delete, providing them with the ability to use the file but not alter or remove it (Reference: Not found in attached document; generated from internet knowledge).
I/O techniques are methods by which data is transferred between the CPU/memory and external devices. Here are three key techniques:
1. Programmed I/O: the CPU issues an I/O command and then busy-waits, polling the device status until the operation completes.
2. Interrupt-driven I/O: the CPU issues the command and continues with other work; the device raises an interrupt when it finishes.
3. Direct Memory Access (DMA): a DMA controller transfers a whole block of data between the device and memory directly, interrupting the CPU only when the entire block has been transferred.
Logical I/O and Device I/O represent different levels of abstraction in the I/O process: logical I/O deals with files and records as the application sees them (for example, read record n of a file), independent of any particular device, while device I/O converts those requests into sequences of low-level operations (buffering, scheduling, device commands) on the actual hardware.
In essence, logical I/O is what applications see and use, while device I/O is how the operating system interacts with the hardware to fulfill those requests (Reference: Not found in attached document; generated from internet knowledge).
Direct Memory Access (DMA) is a crucial hardware technique for efficient data transfer in computer systems. It allows certain hardware subsystems (e.g., disk controllers, network cards) to access system memory independently of the CPU.
The key idea behind DMA is to offload the task of data transfer from the CPU to a dedicated DMA controller. The process begins with the CPU initiating the transfer by providing the DMA controller with essential information: the source address (where the data is coming from), the destination address (where the data is going to), the amount of data to be transferred, and the direction of the transfer (read or write). Once configured, the DMA controller takes over and directly transfers data between the I/O device and memory, without further CPU intervention.
After the transfer is complete, the DMA controller notifies the CPU via an interrupt signal. This allows the CPU to continue executing other tasks while the data transfer is in progress, significantly improving system performance. DMA is particularly beneficial for high-speed I/O operations involving large amounts of data (Reference: Not found in attached document; generated from internet knowledge).
Block-oriented and stream-oriented devices differ significantly in how they organize and access data: block-oriented devices (such as disks and USB drives) store information in fixed-size blocks, transfer one block at a time, and allow any block to be addressed directly, while stream-oriented devices (such as keyboards, printers, and network interfaces) transfer data as a continuous stream of bytes or characters with no block structure.
The fundamental difference lies in the organization and access method: block-oriented devices use fixed-size blocks and allow random access, while stream-oriented devices use continuous streams and primarily support sequential access (Reference: Not found in attached document; generated from internet knowledge).
Double buffering improves I/O performance by enabling concurrent operation of the CPU and an I/O device: while the device fills (or drains) one buffer, the CPU processes (or fills) the other, and the two buffers then swap roles, so neither side has to sit idle waiting for the other.
Double buffering is effective in reducing the overhead associated with I/O operations, leading to improved performance, especially in scenarios where the CPU and I/O device have different processing speeds (Reference: Not found in attached document; generated from internet knowledge).
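A minimal sketch of double buffering using a bounded queue of two buffers, so the "device" thread can fill one buffer while the consumer processes the other (the block count and sleep times are illustrative assumptions):

import queue, threading, time

buffers = queue.Queue(maxsize=2)       # at most two buffers in flight
NUM_BLOCKS = 5

def device_reader():
    for block in range(NUM_BLOCKS):
        time.sleep(0.01)               # time to read one block from the device
        buffers.put(f"block {block}")
    buffers.put(None)                  # end-of-input marker

def cpu_consumer():
    while True:
        data = buffers.get()
        if data is None:
            break
        time.sleep(0.01)               # time to process one block
        print("processed", data)

t = threading.Thread(target=device_reader)
t.start()
cpu_consumer()
t.join()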
Several delay elements contribute to the overall time taken for a disk read or write operation:
1. Queuing/wait time: time spent waiting for the device (and the I/O channel) to become available.
2. Seek time: time for the disk arm to move the heads to the correct track.
3. Rotational delay (latency): time waiting for the target sector to rotate under the head.
4. Transfer time: time to actually read or write the data as it passes under the head.
Minimizing these delay elements is crucial for improving disk I/O performance. Techniques such as disk scheduling algorithms, caching, and defragmentation aim to reduce seek times, rotational latency, and overall overhead (Reference: Module-6-IO-Management.pdf, page 468).
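A rough worked example of how these delays add up for a single 4 KB read (all figures are assumed values for a typical 7200 RPM disk):

avg_seek_ms     = 5.0                                    # move the head to the track (assumed)
rpm             = 7200
avg_rotation_ms = 0.5 * (60_000 / rpm)                   # half a revolution, about 4.17 ms
transfer_rate   = 100                                    # MB/s sustained (assumed)
transfer_ms     = (4 / 1024) / transfer_rate * 1000      # about 0.04 ms for 4 KB

print(avg_seek_ms + avg_rotation_ms + transfer_ms)       # about 9.2 ms in total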
Q106


