Monday, November 3, 2025

Model Validation in Machine Learning – 10 HOT MCQs with Answers | Cross-Validation, Hold-Out & Nested CV Explained


1. A data scientist performs 10-fold cross-validation and reports 95% accuracy. Later, they find that data preprocessing had been applied to the entire dataset before splitting it into folds. What does this imply?

A. Accuracy is still valid
B. Accuracy may be optimistically biased
C. Folds were too small
D. It prevents data leakage

Answer: B
Explanation: Preprocessing the whole dataset before splitting leaks information from the validation folds into the training process, so the reported accuracy is optimistically biased.

When preprocessing such as scaling, normalization, or feature selection is fit on the entire dataset before it is divided into folds, statistics computed from the validation/test samples influence how the training data are transformed. This leakage inflates the measured performance, so a result like the reported 95% accuracy will be higher than what the model would achieve on truly unseen data. It is one of the most common pitfalls in cross-validation.

Correct procedure of data preprocessing in cross-validation

Proper practice is to split the data first and then fit the preprocessing on each training fold only; a code sketch follows the steps below.

For each fold:

  1. Split → Training and Validation subsets

  2. Fit preprocessing only on training data

  3. Transform both training and validation sets

  4. Train model

  5. Evaluate
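
A minimal sketch of this procedure with scikit-learn (the dataset and estimator below are illustrative assumptions, not part of the question). Wrapping the scaler and the classifier in a Pipeline makes cross_val_score refit the preprocessing on the training portion of every fold:

```python
# Sketch: keep preprocessing inside each CV fold by wrapping it in a Pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),                 # fit on training folds only
    ("clf", LogisticRegression(max_iter=5000)),
])

# cross_val_score refits the whole pipeline on each training split, so no
# statistics from the validation fold leak into the preprocessing step.
scores = cross_val_score(pipe, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```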


2. Which validation strategy most likely overestimates model performance?

A. Nested cross-validation
B. Random train/test split without stratification
C. Cross-validation on dataset used for feature selection
D. Stratified k-fold

Answer: C
Explanation: If feature selection is performed on the entire dataset before cross-validation, the model has already "seen" information from every sample, including those that later serve as validation data. This data leakage makes the cross-validation accuracy look higher than it truly is, so performance is overestimated. Feature selection should therefore be nested inside the cross-validation loop, i.e., refit on each training subset.
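
A short sketch of the leak-free version (scikit-learn assumed; the synthetic data and k=20 are arbitrary illustrative choices). Putting SelectKBest inside a Pipeline means the features are re-selected on each training subset only:

```python
# Sketch: nest feature selection inside the CV loop via a Pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Many noisy features, few informative ones: the classic setting where
# selecting features on the full dataset inflates CV scores.
X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),    # refit per training subset
    ("clf", LogisticRegression(max_iter=2000)),
])

print("Leak-free 5-fold accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```
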
3. After tuning using 5-fold CV, how should you report final accuracy?

A. CV average
B. Retrain on full data and test on held-out test set
C. Best fold score
D. Validation score after tuning

Answer: B
Explanation: The cross-validation scores were already used to choose the hyperparameters, so they no longer give an unbiased estimate. Retrain the tuned model on the full training data and report accuracy on a held-out test set that played no part in tuning.
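
A minimal sketch of that workflow (scikit-learn assumed; the estimator and parameter grid are placeholders). Tune with 5-fold CV on the training portion, let the search refit on all training data, and report accuracy once on the untouched test set:

```python
# Sketch: tune with 5-fold CV, then report accuracy on a held-out test set.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# 5-fold CV on the training data chooses C; with refit=True (the default)
# the best model is then retrained on all of X_train.
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)

# The number to report: accuracy on data never used for tuning.
print("Held-out test accuracy:", search.score(X_test, y_test))
```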

4. Why might Leave-One-Out CV lead to high variance?

A. Too little training data
B. Needs resampling 
C. Fold too large
D. Almost all data used for training

Answer: D
Explanation: In Leave-One-Out CV each training set contains n−1 of the n samples, so the training sets overlap almost completely and every validation "fold" is a single point. The per-split estimates are highly correlated and individually noisy, which raises the variance of the averaged estimate.
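
A small sketch (scikit-learn assumed, toy data) that makes this visible: with LeaveOneOut every validation fold is a single sample, so each per-split score is either 0 or 1 and the training sets differ by only one point.

```python
# Sketch: Leave-One-Out CV has one-sample folds and near-identical training sets.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)

# Each of the 150 splits trains on 149 samples and tests on the remaining one,
# so every individual score is either 0.0 or 1.0.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=LeaveOneOut())
print("Number of splits:", len(scores))
print("LOOCV accuracy:", scores.mean())
```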

5. When should Time Series CV be used?

A. Independent samples
B. Predicting future from past
C. Imbalanced data
D. Faster training

Answer: B

Explanation: Time Series CV preserves the temporal order of the data to avoid lookahead bias. Use it when observations have a natural time ordering and the goal is to predict future outcomes from past patterns without leaking future information.

Time Series Cross-Validation (TSCV) is used when data points are ordered over time — for example, stock prices, weather data, or sensor readings.

  • The order of data matters.
  • Future values depend on past patterns.
  • You must not shuffle the data, or it will leak future information.

Unlike standard k-fold cross-validation, TSCV respects the chronological order and ensures that the model is trained only on past data and evaluated on future data, mimicking real-world forecasting scenarios.
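
A minimal sketch of this splitting scheme using scikit-learn's TimeSeriesSplit (the array here is just a stand-in for a time-ordered series). Every training window ends before its validation window begins:

```python
# Sketch: expanding-window splits that never train on the future.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(12).reshape(-1, 1)   # stand-in for 12 time-ordered observations

tscv = TimeSeriesSplit(n_splits=4)
for fold, (train_idx, test_idx) in enumerate(tscv.split(X)):
    # Training indices always precede the test indices; nothing is shuffled.
    print(f"Fold {fold}: train={train_idx.tolist()} test={test_idx.tolist()}")
```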

6. Performing many random 80/20 splits and averaging accuracy is called:

A. Bootstrapping
B. Leave-p-out
C. Monte Carlo Cross-Validation
D. Nested CV

Answer: C

Explanation: Monte Carlo validation averages performance over multiple random splits.

Monte Carlo Cross-Validation (also known as Repeated Random Subsampling Validation) involves randomly splitting the dataset into training and testing subsets multiple times (e.g., 80% training and 20% testing).

The model is trained and evaluated on these splits repeatedly, and the results (such as accuracy) are averaged to estimate the model's performance.

This differs from k-fold cross-validation because the splits are random and may overlap — some data points might appear in multiple test sets or not appear at all in some iterations.

When is Monte Carlo Cross-Validation useful?

  • You have limited data but want a more reliable performance estimate.
  • You want flexibility in training/test split sizes.
  • The dataset is large, and full k-fold CV is too slow.
  • You don’t need deterministic folds.
  • The data are independent and identically distributed (i.i.d.).
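
In scikit-learn this scheme is available as ShuffleSplit; a minimal sketch (the dataset and model are illustrative) of averaging accuracy over many random 80/20 splits:

```python
# Sketch: Monte Carlo CV as repeated random 80/20 splits (ShuffleSplit).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# 20 independent random splits, 80% train / 20% test each time.
mc_cv = ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=mc_cv)

print(f"Mean accuracy over 20 random splits: {scores.mean():.3f}")
```
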
7. A model performs well in CV but poorly on the test set. Why?

A. Too many folds
B. Overfitting during tuning
C. Underfitted model
D. Large test set

Answer: B
Explanation: Repeatedly tuning hyperparameters against the same cross-validation folds gradually fits the model selection to those folds. The CV score then overstates performance, and the untouched test set reveals the gap.

8. Which gives most reliable generalization estimate with extensive tuning?

A. Single 80/20 split
B. Nested CV
C. Stratified 10-fold
D. Leave-One-Out

Answer: B
Explanation: Nested CV separates tuning and evaluation, avoiding bias. When you perform extensive hyperparameter tuning, use Nested Cross-Validation to get the most reliable, unbiased estimate of true generalization performance.

How does Nested CV handle optimistic bias?

In standard cross-validation, if the same data is used both to tune hyperparameters and to estimate model performance, it can lead to an optimistic bias. That is, the model "sees" the validation data during tuning, which inflates performance estimates but does not truly represent how the model will perform on new unseen data. 
Nested CV solves this by separating the tuning and evaluation processes into two loops: 
  • Inner loop: Used exclusively to tune the model's hyperparameters by cross-validation on the training data. 
  • Outer loop: Used to evaluate the generalized performance of the model with the tuned hyperparameters on a held-out test fold that was never seen during the inner tuning. 
This structure ensures no data leakage between tuning and testing phases, providing a less biased, more honest estimate of how the model will perform in real-world scenarios. 

When to use Nested Cross-Validation?

Nested CV is computationally expensive. It is recommended especially when you do extensive hyperparameter optimization to avoid overfitting in model selection and get a realistic estimate of true model performance.
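
A compact sketch of nested CV with scikit-learn (the estimator and parameter grid are placeholders). GridSearchCV forms the inner tuning loop and cross_val_score wraps it with the outer evaluation loop:

```python
# Sketch: nested CV = inner loop for tuning, outer loop for evaluation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Inner loop: 3-fold CV over the hyperparameter grid.
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=3)

# Outer loop: 5-fold CV; each outer test fold is never seen during inner tuning.
outer_scores = cross_val_score(inner, X, y, cv=5)
print("Nested CV accuracy:", outer_scores.mean())
```
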
9. Major advantage of k-fold CV over simple hold-out?

A. Ensures higher accuracy
B. Eliminates overfitting
C. Uses full dataset efficiently
D. Requires less computation

Answer: C
Explanation: In k-fold CV every sample is used for training in k−1 folds and for validation in exactly one fold, so the whole dataset contributes to both training and evaluation, unlike a single hold-out split.
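
A quick sketch (scikit-learn assumed, toy data) showing this efficient use of the data: across the k folds every sample lands in a validation set exactly once and in a training set k−1 times.

```python
# Sketch: in 5-fold CV every sample is validated exactly once.
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(-1, 1)   # 20 toy samples

kf = KFold(n_splits=5, shuffle=True, random_state=0)
validation_counts = np.zeros(len(X), dtype=int)
for train_idx, val_idx in kf.split(X):
    validation_counts[val_idx] += 1

# Every entry is 1: the whole dataset contributes to evaluation, while each
# training set still contains 80% of the samples.
print(validation_counts)
```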

10. What best describes the purpose of model validation?

A. Improve training accuracy
B. Reduce dataset size
C. Reduce training time
D. Measure generalization to unseen data

Answer: D
Explanation: Validation estimates generalization performance before final testing.
