Priority scheduling may indefinitely delay low-priority processes if higher-priority processes keep arriving.
Step-by-step Explanation
In priority scheduling, the CPU is always allocated to the process with the highest priority.
- Low-priority processes may wait forever → starvation
- No circular waiting for resources is involved
Deadlock requires circular wait among processes holding resources. Priority scheduling alone does not introduce circular dependencies.
Hence, starvation is possible, but deadlock is not inherent.
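As a small illustration in Python (the workload is invented), a steady stream of newly arriving high-priority processes keeps a low-priority process off the CPU indefinitely:

```python
import heapq

# Ready queue ordered by priority (lower number = higher priority).
ready = [(0, "high-0")]
heapq.heappush(ready, (9, "low"))

for tick in range(10):
    prio, name = heapq.heappop(ready)   # always run the highest-priority process
    print(f"t={tick}: running {name}")
    if name.startswith("high"):
        # A new high-priority process arrives every tick, so "low" never runs.
        heapq.heappush(ready, (0, f"high-{tick + 1}"))
```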
Preemption allows the OS to interrupt long-running processes and quickly serve interactive tasks.
Step-by-step Explanation
In non-preemptive scheduling, once a process starts executing, it keeps the CPU until it blocks or terminates.
In preemptive scheduling:
- The OS can interrupt a running process
- Higher-priority or interactive processes can run immediately
This significantly improves response time, which is critical for time-sharing and interactive systems.
With a very large time quantum, each process completes its entire CPU burst before the quantum expires, so preemption never occurs.
Step-by-step Explanation
Round Robin scheduling gives each process a fixed time quantum.
- If the quantum is small → frequent context switches
- If the quantum is very large → no preemption
Thus, processes execute in arrival order and finish completely, which is exactly the behavior of FCFS scheduling.
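A minimal simulation (with hypothetical burst times) makes this concrete: with a huge quantum, completion order is simply arrival order, i.e., FCFS:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return process IDs in completion order (FIFO queue, fixed quantum)."""
    queue = deque(enumerate(bursts))
    finished = []
    while queue:
        pid, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(pid)                       # burst fits in the quantum
        else:
            queue.append((pid, remaining - quantum))   # preempted and requeued
    return finished

bursts = [8, 3, 5]
print(round_robin(bursts, quantum=2))     # [1, 2, 0] -- short jobs finish first
print(round_robin(bursts, quantum=100))   # [0, 1, 2] -- arrival order, like FCFS
```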
Users perceive system performance based on how fast the system responds to inputs.
Step-by-step Explanation
Interactive systems (e.g., shells, editors, web servers) require:
- Quick acknowledgment of user requests
- Minimal delay before first output
This is captured by response time, not turnaround or throughput.
SJF selects the process with the smallest next CPU burst, which must be estimated.
Step-by-step Explanation
Shortest Job First minimizes average turnaround time by executing the process with the shortest CPU burst first.
However:
- Future CPU bursts are unknown
- The OS must predict them using historical data
Incorrect prediction directly degrades SJF’s optimality.
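The standard predictor is exponential averaging, tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the most recent measured burst. A small sketch with a made-up burst history:

```python
def predict_next_burst(history, alpha=0.5, initial_guess=10.0):
    """Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = initial_guess
    for t in history:
        tau = alpha * t + (1 - alpha) * tau   # blend new measurement with old estimate
    return tau

print(predict_next_burst([6, 4, 6, 4]))   # 5.0 -- the estimate tracks recent bursts
```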
Preemptive scheduling relies on timer interrupts to forcibly regain CPU control from a running process.
Step-by-step Explanation
In preemptive scheduling, the operating system may interrupt a running process before it voluntarily releases the CPU.
- This interruption must occur automatically
- The OS cannot wait for process cooperation
Therefore, a hardware timer is essential to generate periodic interrupts that allow the kernel to regain control and perform context switching.
Without timer support, safe preemption is impossible.
Load imbalance reflects uneven distribution of runnable processes across CPUs.
Step-by-step Explanation
In a multiprocessor system, the scheduler aims to distribute work evenly across all CPUs.
- Idle CPUs waste processing capacity
- Overloaded CPUs increase waiting and response time
When some processors are idle while others have long run queues, the system suffers from load imbalance, reducing overall efficiency.
Partitioned scheduling binds processes to CPUs, preventing migration.
Step-by-step Explanation
In partitioned scheduling:
- Each CPU has its own ready queue
- Processes are statically assigned to CPUs
This approach:
- Improves cache locality
- Reduces scheduling complexity
However, it may cause load imbalance if process distribution is poor.
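A toy sketch (the process names and round-robin assignment rule are invented) of static partitioning, including how imbalance can arise:

```python
# Static partitioning: each process is permanently bound to one CPU's queue.
NUM_CPUS = 2
queues = {cpu: [] for cpu in range(NUM_CPUS)}

for i, p in enumerate(["P1", "P2", "P3", "P4", "P5"]):
    queues[i % NUM_CPUS].append(p)   # fixed assignment, no migration

print(queues)   # {0: ['P1', 'P3', 'P5'], 1: ['P2', 'P4']}
# If CPU 0's processes are long-running while CPU 1's finish quickly,
# CPU 1 sits idle and CPU 0 stays overloaded: load imbalance.
```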
Multiple CPUs caching shared memory introduce cache consistency challenges.
Step-by-step Explanation
In multiprocessor systems:
- Each CPU may maintain its own cache
- Shared memory locations may exist in multiple caches
When one CPU updates a memory location, other cached copies must be updated or invalidated.
Ensuring this consistency is known as the cache coherence problem, which does not exist in single-processor systems.
Processor affinity exploits cache locality by keeping a process on the same CPU.
Step-by-step Explanation
When a process runs repeatedly on the same processor:
- Its data remains in the local cache
- Cache misses are reduced
This improves execution speed through cache reuse rather than through higher CPU utilization.
Hence, processor affinity primarily enhances cache performance, not fairness or deadlock handling.
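On Linux, affinity can be requested explicitly. A minimal example using Python's standard os module (pid 0 means the calling process; this call is Linux-only):

```python
import os

# Pin the current process to CPU 0 so its working set stays warm
# in that CPU's cache.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))   # {0}
```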
Deadlock requires four specific conditions; context switching is unrelated.
Step-by-step Explanation
A deadlock can occur if and only if the following four conditions hold simultaneously:
- Mutual exclusion – at least one resource is non-shareable
- Hold and wait – a process holds resources while waiting for others
- No preemption – resources cannot be forcibly taken
- Circular wait – processes form a cycle of waiting
Context switching is a CPU management activity and does not contribute to deadlock formation.
If even one deadlock condition never holds, deadlock becomes impossible.
Step-by-step Explanation
Deadlock prevention is a proactive strategy.
The OS ensures that at least one of the four necessary conditions is permanently violated:
- Eliminate mutual exclusion (rarely possible)
- Disallow hold-and-wait
- Allow resource preemption
- Impose a strict ordering to avoid circular wait
By design, this guarantees that deadlock can never occur.
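For instance, circular wait can be ruled out by always acquiring locks in one fixed global order. A minimal sketch with Python threads (the lock names are illustrative):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_in_order(first, second):
    # Sort by a fixed key (object id) so every thread takes locks
    # in the same global order -- no cycle of waiters can form.
    low, high = sorted((first, second), key=id)
    with low:
        with high:
            pass   # critical section using both resources

t1 = threading.Thread(target=acquire_in_order, args=(lock_a, lock_b))
t2 = threading.Thread(target=acquire_in_order, args=(lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
print("finished without deadlock")
```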
Deadlock avoidance makes safe allocation decisions using each process's maximum future resource demand.
Step-by-step Explanation
Deadlock avoidance dynamically checks whether resource allocation keeps the system in a safe state.
To do this, the OS must know:
- Maximum resources each process may request
- Current allocation and availability
Without knowing maximum demand in advance, safety cannot be guaranteed.
The Banker’s algorithm grants requests only if the resulting state is safe.
Step-by-step Explanation
The Banker’s algorithm simulates resource allocation before actually granting it.
- If a safe sequence exists → allocation allowed
- If no safe sequence exists → request denied
This ensures the system never enters an unsafe (potential deadlock) state.
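A compact sketch of the safety check at the heart of the algorithm (the state below is a hypothetical 3-process, 3-resource-type example):

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: return a safe sequence, or None if unsafe."""
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work, finish, sequence = list(available), [False] * n, []
    while len(sequence) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # i finishes, releases its resources
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None   # no process can proceed: unsafe state
    return sequence

print(is_safe(available=[3, 3, 2],
              allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2]],
              maximum=[[7, 4, 3], [3, 2, 2], [4, 2, 2]]))
# -> [1, 2, 0], a safe sequence; a request is granted only if the
#    state that would result still passes this check.
```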
A safe sequence guarantees that all processes can finish without deadlock.
Step-by-step Explanation
A safe state does not by itself mean deadlock can never happen later.
It means:
- There currently exists an order of execution (a safe sequence)
- Such that each process can obtain its needed resources and finish
Deadlock avoidance algorithms aim to keep the system always in a safe state.
For single-instance resources, a cycle in the resource allocation graph is both necessary and sufficient for deadlock.
Step-by-step Explanation
A resource allocation graph (RAG) represents:
- Processes as nodes
- Resources as nodes
- Request and assignment edges
If each resource has only one instance:
- A cycle directly implies circular wait
- Circular wait guarantees deadlock
Hence, cycle detection alone is enough.
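A minimal wait-for-graph cycle detector (the process names are invented) suffices for this single-instance case:

```python
def has_cycle(wait_for):
    """DFS cycle detection in a wait-for graph (single-instance resources)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for[p]:
            if color[q] == GRAY or (color[q] == WHITE and visit(q)):
                return True   # back edge found: circular wait, hence deadlock
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# P1 waits for P2, P2 waits for P1 -> deadlock; P3 waits for nothing.
print(has_cycle({"P1": ["P2"], "P2": ["P1"], "P3": []}))   # True
```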
Cycle detection alone is insufficient when multiple instances of resources exist.
Step-by-step Explanation
For multiple instances:
- A cycle may or may not indicate deadlock
- Availability of extra instances matters
Hence, deadlock detection uses:
- Resource allocation matrix
- Available vector
- Work and Finish vectors (similar to Banker’s safety check)
This algorithm determines whether all processes can eventually finish.
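A sketch of that scan (the state is hypothetical: one resource type with two instances, each held by one of two mutually waiting processes):

```python
def detect_deadlocked(available, allocation, request):
    """Work/Finish scan: return the set of processes that can never finish."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # i finishes, releases everything
                finish[i] = True
                changed = True
    return {i for i in range(n) if not finish[i]}

print(detect_deadlocked(available=[0], allocation=[[1], [1]], request=[[1], [1]]))
# -> {0, 1}: each process holds one instance and waits for the other's
```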
Terminating processes wastes all computation done so far.
Step-by-step Explanation
Deadlock recovery methods include:
- Terminating all deadlocked processes
- Terminating processes one by one
- Resource preemption with rollback
Process termination:
- Causes complete loss of work
- May affect system consistency
Therefore, it is the most disruptive option.
The goal is to minimize overall system loss during recovery.
Step-by-step Explanation
When choosing a victim, the OS considers:
- Priority of the process
- How much work it has completed
- Cost of rollback or restart
Terminating or preempting a low-cost, low-priority process minimizes damage to system performance.
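A toy illustration of victim selection (the cost weights and fields are invented, purely to show the idea):

```python
def rollback_cost(p):
    # Illustrative cost model: prefer killing unimportant processes
    # that have consumed little CPU time.
    return p["importance"] * 10 + p["cpu_time_used"]

deadlocked = [
    {"pid": 1, "importance": 5, "cpu_time_used": 120},
    {"pid": 2, "importance": 1, "cpu_time_used": 3},
]
victim = min(deadlocked, key=rollback_cost)
print(victim["pid"])   # 2 -- unimportant process with little work lost
```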
Starvation does not require cyclic resource dependencies.
Step-by-step Explanation
Deadlock:
- Requires circular wait
- Processes are permanently blocked
Starvation:
- A process waits indefinitely
- Due to scheduling or resource allocation policy
- No circular wait is necessary
For example, low-priority processes in priority scheduling may starve without any deadlock.
A long CPU-bound process can block many short I/O-bound processes.
Step-by-step Explanation
In FCFS scheduling, processes are executed strictly in arrival order.
If a long CPU-bound process arrives first:
- Short I/O-bound processes must wait
- I/O devices remain idle
- Overall system throughput drops
This phenomenon is called the convoy effect.
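A quick calculation with made-up burst times shows the cost:

```python
def avg_waiting_fcfs(bursts):
    """Average waiting time under FCFS, all processes arriving at t = 0."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # each process waits for everything ahead of it
        elapsed += b
    return sum(waits) / len(waits)

print(avg_waiting_fcfs([24, 3, 3]))   # 17.0 -- convoy behind the long job
print(avg_waiting_fcfs([3, 3, 24]))   # 3.0  -- short jobs served first
```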
Aging gradually increases the priority of waiting processes.
Step-by-step Explanation
In priority scheduling, low-priority processes may starve.
Aging solves this by:
- Gradually increasing a process’s priority over time
- Ensuring that every process eventually executes
Thus, aging maintains fairness without sacrificing priority semantics.
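A minimal tick-based sketch (the priority values are invented; lower number = higher priority) of how aging bounds waiting time:

```python
waiting = {"low": 9, "mid": 5}   # priority values of waiting processes

def age(queue, boost=1, top=0):
    """Each tick, every waiting process moves one step closer to top priority."""
    for name in queue:
        queue[name] = max(top, queue[name] - boost)

for tick in range(9):
    age(waiting)
print(waiting)   # {'low': 0, 'mid': 0} -- every waiter eventually reaches the top
```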
SJF is provably optimal for minimizing average turnaround time.
Step-by-step Explanation
By executing shorter jobs first:
- More processes complete earlier
- Average completion time decreases
Mathematically, SJF minimizes the mean turnaround time among all non-preemptive scheduling algorithms when all jobs are available simultaneously.
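A quick check with made-up burst times, all arriving at t = 0:

```python
def avg_turnaround(bursts):
    """Turnaround = completion time when all jobs arrive at t = 0."""
    elapsed, total = 0, 0
    for b in bursts:
        elapsed += b
        total += elapsed
    return total / len(bursts)

bursts = [6, 8, 7, 3]
print(avg_turnaround(sorted(bursts)))   # 13.0  -- SJF order
print(avg_turnaround(bursts))           # 16.25 -- arrival order
```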
Real-time systems require predictable timing guarantees.
Step-by-step Explanation
In real-time systems:
- Meeting deadlines is more important than throughput
- Worst-case execution time must be predictable
Deterministic scheduling ensures that task execution order and timing can be analytically guaranteed.
Avoidance evaluates each request dynamically, while prevention restricts behavior by design.
Step-by-step Explanation
Deadlock prevention:
- Imposes static constraints
- Violates at least one deadlock condition permanently
Deadlock avoidance:
- Checks system safety at runtime
- Grants a request only if a safe state exists
Thus, avoidance is more flexible but computationally expensive.