🚨 Quiz Instructions:
Attempt all questions first.
1. Which CPU scheduling algorithm can cause starvation but does not lead to deadlock?






Correct Answer: C

Priority scheduling may indefinitely delay low-priority processes if higher-priority processes keep arriving.

Step-by-step Explanation

In priority scheduling, the CPU is always allocated to the process with the highest priority.

  • Low-priority processes may wait forever → starvation
  • No circular waiting for resources is involved

Deadlock requires circular wait among processes holding resources. Priority scheduling alone does not introduce circular dependencies.

Hence, starvation is possible, but deadlock is not inherent.

2. A preemptive scheduler is preferred over a non-preemptive scheduler mainly because it:






Correct Answer: C

Preemption allows the OS to interrupt long-running processes and quickly serve interactive tasks.

Step-by-step Explanation

In non-preemptive scheduling, once a process starts executing, it keeps the CPU until it blocks or terminates.

In preemptive scheduling:

  • The OS can interrupt a running process
  • Higher-priority or interactive processes can run immediately

This significantly improves response time, which is critical for time-sharing and interactive systems.

3. If the time quantum in Round Robin scheduling is extremely large, the algorithm behaves like:






Correct Answer: C

With a very large time quantum, each process finishes its CPU burst before the quantum expires, so no preemption ever occurs.

Step-by-step Explanation

Round Robin scheduling gives each process a fixed time quantum.

  • If quantum is small → frequent context switches
  • If quantum is very large → no preemption

Thus, processes execute in arrival order and finish completely, which is exactly the behavior of FCFS scheduling.
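
This behavior is easy to verify with a short simulation. The following is a minimal Python sketch, assuming all processes arrive at time 0 and using hypothetical burst times: once the quantum exceeds every burst, the completion order is simply the arrival (FCFS) order.

    from collections import deque

    def round_robin(bursts, quantum):
        """Return the order in which processes finish (indices into bursts)."""
        queue = deque(enumerate(bursts))        # (pid, remaining burst time)
        finished = []
        while queue:
            pid, remaining = queue.popleft()
            if remaining <= quantum:            # burst fits in one quantum: runs to completion
                finished.append(pid)
            else:                               # quantum expires: preempt and requeue
                queue.append((pid, remaining - quantum))
        return finished

    bursts = [24, 3, 3]                         # hypothetical CPU bursts
    print(round_robin(bursts, quantum=4))       # small quantum: [1, 2, 0]
    print(round_robin(bursts, quantum=100))     # huge quantum:  [0, 1, 2] -- same as FCFS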

4. Which CPU scheduling metric is most critical for interactive systems?






Correct Answer: D

Users perceive system performance based on how fast the system responds to inputs.

Step-by-step Explanation

Interactive systems (e.g., shells, editors, web servers) require:

  • Quick acknowledgment of user requests
  • Minimal delay before first output

This is captured by response time, not turnaround or throughput.
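
The distinction is easiest to see with hypothetical timestamps: response time is the delay until a process first gets the CPU, while turnaround time covers its entire stay in the system. A minimal Python illustration:

    # Hypothetical process: arrives at t=0, first gets the CPU at t=2, finishes at t=50
    arrival, first_run, completion = 0, 2, 50

    response_time   = first_run - arrival     # 2  -- what an interactive user perceives
    turnaround_time = completion - arrival    # 50 -- total time spent in the system
    print(response_time, turnaround_time)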

5. Which scheduling algorithm critically depends on accurate prediction of future CPU burst times?






Correct Answer: B

SJF selects the process with the smallest next CPU burst, which must be estimated.

Step-by-step Explanation

Shortest Job First minimizes average turnaround time by executing the process with the shortest CPU burst first.

However:

  • Future CPU bursts are unknown
  • The OS must predict them using historical data

Incorrect prediction directly degrades SJF’s optimality.
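
A standard prediction technique is exponential averaging, where the next estimate is a weighted mix of the most recent measured burst and the previous estimate: tau_next = alpha * t + (1 - alpha) * tau. A minimal Python sketch with hypothetical burst values:

    def predict_next_burst(measured_bursts, alpha=0.5, initial_guess=10.0):
        """Exponential averaging: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)."""
        tau = initial_guess
        for t in measured_bursts:
            tau = alpha * t + (1 - alpha) * tau
        return tau

    history = [6, 4, 6, 4, 13, 13, 13]           # hypothetical measured bursts (ms)
    print(predict_next_burst(history))           # estimate used to rank the process in SJF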

6. Which statement about preemptive CPU scheduling is TRUE?






Correct Answer: C

Preemptive scheduling relies on timer interrupts to forcibly regain CPU control from a running process.

Step-by-step Explanation

In preemptive scheduling, the operating system may interrupt a running process before it voluntarily releases the CPU.

  • This interruption must occur automatically
  • The OS cannot wait for process cooperation

Therefore, a hardware timer is essential to generate periodic interrupts that allow the kernel to regain control and perform context switching.

Without timer support, the OS cannot reclaim the CPU from a process that never blocks or yields, so reliable preemption is impossible.

7. In multiprocessor scheduling, load imbalance occurs when:






Correct Answer: B

Load imbalance reflects uneven distribution of runnable processes across CPUs.

Step-by-step Explanation

In a multiprocessor system, the scheduler aims to distribute work evenly across all CPUs.

  • Idle CPUs waste processing capacity
  • Overloaded CPUs increase waiting and response time

When some processors are idle while others have long run queues, the system suffers from load imbalance, reducing overall efficiency.

8. Which multiprocessor scheduling approach assigns each process permanently to a specific CPU?






Correct Answer: D

Partitioned scheduling binds processes to CPUs, preventing migration.

Step-by-step Explanation

In partitioned scheduling:

  • Each CPU has its own ready queue
  • Processes are statically assigned to CPUs

This approach:

  • Improves cache locality
  • Reduces scheduling complexity

However, it may cause load imbalance if process distribution is poor.

9. Which of the following problems is unique to multiprocessor systems?






Correct Answer: C

Multiple CPUs caching shared memory introduce cache consistency challenges.

Step-by-step Explanation

In multiprocessor systems:

  • Each CPU may maintain its own cache
  • Shared memory locations may exist in multiple caches

When one CPU updates a memory location, other cached copies must be updated or invalidated.

Ensuring this consistency is known as the cache coherence problem, which does not exist in single-processor systems.

10. Processor affinity in multiprocessor systems mainly improves:






Correct Answer: B

Processor affinity exploits cache locality by keeping a process on the same CPU.

Step-by-step Explanation

When a process runs repeatedly on the same processor:

  • Its data remains in the local cache
  • Cache misses are reduced

This improves execution speed through better cache hit rates rather than by changing CPU utilization.

Hence, processor affinity primarily enhances cache performance, not fairness or deadlock handling.

11. Which of the following is NOT a necessary condition for deadlock to occur?







Correct Answer: E

Deadlock requires four specific conditions; context switching is unrelated.

Step-by-step Explanation

A deadlock can occur if and only if the following four conditions hold simultaneously:

  1. Mutual exclusion – at least one resource is non-shareable
  2. Hold and wait – a process holds resources while waiting for others
  3. No preemption – resources cannot be forcibly taken
  4. Circular wait – processes form a cycle of waiting

Context switching is a CPU management activity and does not contribute to deadlock formation.

12. Deadlock prevention works by:






Correct Answer: C

If even one deadlock condition never holds, deadlock becomes impossible.

Step-by-step Explanation

Deadlock prevention is a proactive strategy.

The OS ensures that at least one of the four necessary conditions is permanently violated:

  • Eliminate mutual exclusion (rarely possible)
  • Disallow hold-and-wait
  • Allow resource preemption
  • Impose a strict ordering to avoid circular wait

By design, this guarantees that deadlock can never occur.
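
As an illustration of the last technique, the sketch below (hypothetical lock names, Python threading) acquires locks in one fixed global order on every code path, so no cycle of waiting threads can ever form.

    import threading

    lock_a = threading.Lock()       # global rule: lock_a is always taken before lock_b
    lock_b = threading.Lock()

    def update_a_then_b():
        with lock_a:
            with lock_b:
                pass                # ... critical section touching both resources ...

    def update_b_then_a():
        # Even though this path "logically" needs B first, it still locks A first,
        # which rules out the circular wait required for deadlock.
        with lock_a:
            with lock_b:
                pass                # ... critical section ...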

13. Which deadlock handling strategy requires a priori knowledge of maximum resource requirements of processes?






Correct Answer: D

Deadlock avoidance decides safely using maximum future resource demands.

Step-by-step Explanation

Deadlock avoidance dynamically checks whether resource allocation keeps the system in a safe state.

To do this, the OS must know:

  • Maximum resources each process may request
  • Current allocation and availability

Without knowing maximum demand in advance, safety cannot be guaranteed.

14. The Banker’s algorithm is used for implementing:






Correct Answer: C

The Banker’s algorithm grants requests only if the resulting state is safe.

Step-by-step Explanation

The Banker’s algorithm simulates resource allocation before actually granting it.

  • If a safe sequence exists → allocation allowed
  • If no safe sequence exists → request denied

This ensures the system never enters an unsafe (potential deadlock) state.
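
A compact sketch of the safety check at the core of the Banker's algorithm (the matrices below are hypothetical): it repeatedly looks for a process whose remaining need fits in the currently available resources, lets it finish, and reclaims its allocation.

    def is_safe(available, allocation, need):
        """Return True if a safe sequence exists (Banker's safety algorithm)."""
        work = list(available)
        finish = [False] * len(allocation)
        while True:
            progressed = False
            for i, (alloc, nd) in enumerate(zip(allocation, need)):
                if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                    work = [w + a for w, a in zip(work, alloc)]   # process i finishes, reclaim
                    finish[i] = True
                    progressed = True
            if not progressed:
                return all(finish)

    # Hypothetical 3-process, 2-resource-type state
    available  = [3, 3]
    allocation = [[0, 1], [2, 0], [3, 0]]
    need       = [[4, 2], [1, 2], [2, 0]]
    print(is_safe(available, allocation, need))   # True: safe sequence P1, P2, P0 exists

A real request is granted only if the state that would result still passes this check.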

15. A system is said to be in a safe state if:






Correct Answer: C

A safe sequence guarantees that all processes can finish without deadlock.

Step-by-step Explanation

Being in a safe state does not by itself guarantee that deadlock can never occur later; that depends on how future requests are granted.

It means:

  • There exists at least one order of execution (a safe sequence)
  • In which every process can obtain its needed resources and finish

Deadlock avoidance algorithms aim to keep the system always in a safe state.

16. In a system where each resource type has a single instance, deadlock can be detected by checking whether:






Correct Answer: A

For single-instance resources, a cycle in the resource allocation graph is both necessary and sufficient for deadlock.

Step-by-step Explanation

A resource allocation graph (RAG) represents:

  • Processes as nodes
  • Resources as nodes
  • Request and assignment edges

If each resource has only one instance:

  • A cycle directly implies circular wait
  • Circular wait guarantees deadlock

Hence, cycle detection alone is enough.
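
With single-instance resources, the RAG collapses into a wait-for graph (an edge Pi -> Pj means Pi is waiting for a resource held by Pj), and detection reduces to plain cycle detection. A minimal Python DFS sketch with a hypothetical graph:

    def has_cycle(wait_for):
        """DFS cycle detection on a wait-for graph {process: [processes it waits on]}."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {p: WHITE for p in wait_for}

        def dfs(p):
            color[p] = GRAY
            for q in wait_for.get(p, []):
                if color.get(q, WHITE) == GRAY:                  # back edge: cycle found
                    return True
                if color.get(q, WHITE) == WHITE and dfs(q):
                    return True
            color[p] = BLACK
            return False

        return any(color[p] == WHITE and dfs(p) for p in wait_for)

    # Hypothetical wait-for graph: P1 waits on P2, P2 on P3, P3 on P1
    print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True -> deadlock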

17. Deadlock detection for systems with multiple instances of resource types primarily uses:






Correct Answer: B

Cycle detection alone is insufficient when multiple instances of resources exist.

Step-by-step Explanation

For multiple instances:

  • A cycle may or may not indicate deadlock
  • Availability of extra instances matters

Hence, deadlock detection uses:

  • Resource allocation matrix
  • Available vector
  • Work and Finish vectors (similar to Banker’s safety check)

This algorithm determines whether all processes can eventually finish.
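
In code, this is essentially the Banker's safety check run against current outstanding requests rather than maximum needs. A minimal Python sketch with hypothetical matrices; any process that can never be satisfied is reported as deadlocked.

    def detect_deadlock(available, allocation, request):
        """Return the set of deadlocked process indices (Work/Finish detection)."""
        work = list(available)
        finish = [all(a == 0 for a in alloc) for alloc in allocation]
        progressed = True
        while progressed:
            progressed = False
            for i, (alloc, req) in enumerate(zip(allocation, request)):
                if not finish[i] and all(r <= w for r, w in zip(req, work)):
                    work = [w + a for w, a in zip(work, alloc)]   # assume i finishes, reclaim
                    finish[i] = True
                    progressed = True
        return {i for i, done in enumerate(finish) if not done}

    # Hypothetical state: P0 and P1 each hold one instance and request the other's
    print(detect_deadlock(available=[0],
                          allocation=[[1], [1]],
                          request=[[1], [1]]))    # {0, 1} -> both deadlocked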

18. Which deadlock recovery technique is considered the most disruptive?






Correct Answer: C

Terminating processes wastes all computation done so far.

Step-by-step Explanation

Deadlock recovery methods include:

  • Terminating all deadlocked processes
  • Terminating processes one by one
  • Resource preemption with rollback

Process termination:

  • Causes complete loss of work
  • May affect system consistency

Therefore, it is the most disruptive option.

19. While selecting a victim process for deadlock recovery, which factor is most relevant?






Correct Answer: B

The goal is to minimize overall system loss during recovery.

Step-by-step Explanation

When choosing a victim, the OS considers:

  • Priority of the process
  • How much work it has completed
  • Cost of rollback or restart

Terminating or preempting a low-cost, low-priority process minimizes damage to system performance.

20. Starvation differs from deadlock because starvation:






Correct Answer: C

Starvation does not require cyclic resource dependencies.

Step-by-step Explanation

Deadlock:

  • Requires circular wait
  • Processes are permanently blocked

Starvation:

  • A process waits indefinitely
  • Due to scheduling or resource allocation policy
  • No circular wait is necessary

For example, low-priority processes in priority scheduling may starve without any deadlock.

21. Which CPU scheduling algorithm is most prone to the convoy effect?






Correct Answer: B

A long CPU-bound process can block many short I/O-bound processes.

Step-by-step Explanation

In FCFS scheduling, processes are executed strictly in arrival order.

If a long CPU-bound process arrives first:

  • Short I/O-bound processes must wait
  • I/O devices remain idle
  • Overall system throughput drops

This phenomenon is called the convoy effect.
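
A quick waiting-time calculation (hypothetical bursts, all arriving together) makes the effect concrete:

    def fcfs_waiting_times(bursts):
        """Waiting time of each process when served strictly in the given order."""
        waits, elapsed = [], 0
        for burst in bursts:
            waits.append(elapsed)
            elapsed += burst
        return waits

    convoy = fcfs_waiting_times([100, 1, 1])   # long CPU-bound job arrives first
    better = fcfs_waiting_times([1, 1, 100])   # short jobs happen to arrive first
    print(convoy, sum(convoy) / 3)             # [0, 100, 101]  average ~67.0
    print(better, sum(better) / 3)             # [0, 1, 2]      average 1.0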

22. Which technique is commonly used to reduce starvation in priority scheduling?






Correct Answer: B

Aging gradually increases the priority of waiting processes.

Step-by-step Explanation

In priority scheduling, low-priority processes may starve.

Aging solves this by:

  • Gradually increasing a process’s priority over time
  • Ensuring that every process eventually executes

Thus, aging maintains fairness without sacrificing priority semantics.
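
A minimal simulation of aging (hypothetical priorities and boost rate): a fresh high-priority job arrives every tick, yet the low-priority process still gets the CPU after a bounded wait, whereas without aging it starves.

    def run_ticks(ticks, aging=True):
        """Each tick a new high-priority job arrives; report when 'low' finally runs."""
        ready = [{"name": "low", "priority": 1}]
        for t in range(ticks):
            ready.append({"name": f"hi{t}", "priority": 5})    # steady high-priority arrivals
            ready.sort(key=lambda p: p["priority"], reverse=True)
            if ready.pop(0)["name"] == "low":                  # dispatch highest priority
                return t
            if aging:
                for p in ready:                                # waiters gain priority each tick
                    p["priority"] += 1
        return None                                            # 'low' never ran: starvation

    print(run_ticks(50, aging=True))    # 4    -- low runs after a few ticks
    print(run_ticks(50, aging=False))   # None -- low starves indefinitely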

23. The Shortest Job First (SJF) scheduling algorithm optimizes which metric?






Correct Answer: B

SJF is provably optimal for minimizing average turnaround time.

Step-by-step Explanation

By executing shorter jobs first:

  • More processes complete earlier
  • Average completion time decreases

Mathematically, SJF yields the minimum mean waiting time, and hence the minimum mean turnaround time, among all non-preemptive scheduling algorithms when all processes are available at the same time.
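
A tiny comparison (hypothetical bursts, all arriving at time 0) shows the effect: running the same jobs shortest-first yields a lower average turnaround than running them longest-first.

    def average_turnaround(bursts):
        """Average completion time when jobs run back-to-back in the given order."""
        finish_times, elapsed = [], 0
        for burst in bursts:
            elapsed += burst
            finish_times.append(elapsed)
        return sum(finish_times) / len(finish_times)

    bursts = [6, 8, 7, 3]
    print(average_turnaround(sorted(bursts)))                  # SJF order:      13.0
    print(average_turnaround(sorted(bursts, reverse=True)))    # longest first:  17.0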

24. Which scheduling policy is most suitable for real-time operating systems?






Correct Answer: C

Real-time systems require predictable timing guarantees.

Step-by-step Explanation

In real-time systems:

  • Meeting deadlines is more important than throughput
  • Worst-case execution time must be predictable

Deterministic scheduling ensures that task execution order and timing can be analytically guaranteed.

25. Which statement best distinguishes deadlock avoidance from deadlock prevention?






Correct Answer: B

Avoidance evaluates each request dynamically, while prevention restricts behavior by design.

Step-by-step Explanation

Deadlock prevention:

  • Imposes static constraints
  • Violates at least one deadlock condition permanently

Deadlock avoidance:

  • Checks system safety at runtime
  • Grants a request only if a safe state exists

Thus, avoidance is more flexible but computationally expensive.