Model Validation in Machine Learning – 10 HOT MCQs with Answers | Cross-Validation, Hold-Out & Nested CV Explained
1. A data scientist performs 10-fold cross-validation and reports 95% accuracy. Later, they find that data preprocessing was applied to the entire dataset before splitting. What does this imply?
A. Accuracy is still valid
B. Accuracy may be optimistically biased
C. Folds were too small
D. It prevents data leakage
Answer: B
Explanation: Preprocessing fitted on the whole dataset before splitting can leak information from the validation folds into training, inflating accuracy. That is, it can systematically overestimate model performance due to data leakage.
When data preprocessing—such as scaling, normalization, or feature selection—is fitted on the entire dataset before dividing it into folds, information from the validation/test set can inadvertently leak into the training process. This leakage inflates the measured performance, causing results like the reported 95% accuracy to be higher than what the model would achieve on truly unseen data. This is a well-known issue in cross-validation and machine learning validation.
Correct procedure of data preprocessing in cross-validation
Proper practice is to split the data first, then apply preprocessing separately within each fold to avoid biasing results (see the code sketch after the steps below).
For each fold:
Split → Training and Validation subsets
Fit preprocessing only on training data
Transform both training and validation sets
Train model
Evaluate
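The steps above can be expressed compactly with scikit-learn (a minimal sketch; the synthetic dataset, fold count, and models are illustrative assumptions, not part of the question). Putting the scaler and classifier in a Pipeline ensures the scaler is refit on each training split only.

```python
# Minimal sketch: leakage-free preprocessing inside 10-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in data (assumption for illustration only).
X, y = make_classification(n_samples=200, n_features=10, random_state=42)

# The Pipeline refits the scaler on each training split only,
# then applies that fitted transform to the matching validation split.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipe, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
print("Mean 10-fold accuracy:", scores.mean())
```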
2. Which validation strategy most likely overestimates model performance?
A. Nested cross-validation
B. Random train/test split without stratification
C. Cross-validation on dataset used for feature selection
D. Stratified k-fold
Answer: C
Explanation: Feature selection performed on the full dataset before CV leaks validation-data information, inflating scores.
If you perform feature selection on the entire dataset before cross-validation, the model has already “seen” information from all samples (including what should be test data). This causes data leakage, which makes accuracy look higher than it truly is, hence the performance is overestimated.
More explanation:
This happens because when feature selection is carried out on the entire dataset before performing cross-validation, information from test folds leaks into the training process. This makes accuracy estimates unrealistically high and not representative of unseen data.
Feature selection should always be nested inside the cross-validation loop — i.e., done within each training subset.
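As an illustrative sketch (scikit-learn and a synthetic high-dimensional dataset are assumed), nesting SelectKBest inside a Pipeline keeps feature selection within each training fold, while selecting features on the full dataset first tends to report an optimistic score.

```python
# Sketch: nested (correct) vs. leaky feature selection in cross-validation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Many noisy features, few informative ones (synthetic assumption).
X, y = make_classification(n_samples=100, n_features=500, n_informative=5, random_state=0)

# Correct: SelectKBest is refit inside every training fold via the Pipeline,
# so validation samples never influence which features are chosen.
nested = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),
    ("clf", LogisticRegression(max_iter=1000)),
])
print("Nested selection CV accuracy:", cross_val_score(nested, X, y, cv=5).mean())

# Leaky: features chosen using all samples (including future test folds),
# which typically inflates the cross-validated score.
X_leaky = SelectKBest(f_classif, k=10).fit_transform(X, y)
print("Leaky selection CV accuracy:",
      cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5).mean())
```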
3. After tuning using 5-fold CV, how should you report final accuracy?
A. CV average
B. Retrain on full data and test on held-out test set
C. Best fold score
D. Validation score after tuning
Answer: B Explanation: Always test the final tuned model on an unseen test set.
The correct way to report final accuracy after tuning using 5-fold cross-validation (CV) is to retrain the model on the full dataset with the best found hyperparameters and then evaluate it on a held-out test set.
What is 5-fold cross-validation?
5-fold cross-validation is a technique used to estimate model performance and tune hyperparameters during the development phase. It divides the training data into 5 folds, trains the model 5 times (each time using 4 folds for training and 1 fold for validation), and averages the results to get a more robust performance estimate.
Steps
5-fold CV is used to estimate the model's performance and tune hyperparameters without overfitting to the training data.
The accuracy scores obtained from each fold are averaged (CV average) to estimate expected model performance, but these scores are based on training subsets and cannot be considered final.
After selecting the best hyperparameters from CV, you typically retrain the model on the entire training data to leverage all available data.
The final accuracy should then be reported based on evaluation on a separate held-out test set that was not used in any training or validation to provide an unbiased estimate of performance.
Why Retrain on Full Data while using cross-validation?
Training on all available data maximizes the information the model learns from and typically results in better performance than models trained on only 80% of the training data.
Why Use a Held-Out Test Set?
A separate test set ensures you have an unbiased estimate of how the model will perform on truly unseen data. If you report CV scores as final accuracy, you're reporting performance on data that was used (indirectly) in tuning decisions, which can lead to optimistic estimates.
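A rough sketch of this workflow with scikit-learn (the synthetic data and parameter grid are illustrative assumptions): GridSearchCV performs the 5-fold tuning and, by default, refits the best model on the whole training portion; the held-out test set then supplies the reported accuracy.

```python
# Sketch: tune with 5-fold CV, refit on all training data, report test accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 5]},  # hypothetical grid
    cv=5,   # 5-fold CV used only for tuning
)
search.fit(X_train, y_train)   # refit=True retrains the best model on all of X_train

# Final accuracy comes from data never touched during tuning.
print("Best hyperparameters:", search.best_params_)
print("Held-out test accuracy:", search.score(X_test, y_test))
```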
4. Why might Leave-One-Out CV lead to high variance?
A. Too little training data
B. Needs resampling
C. Fold too large
D. Almost all data used for training
Answer: D Explanation: A small change in one sample affects the result → high variance.
What is LOOCV?
In Leave-One-Out Cross-Validation (LOOCV), we use n folds, where n = number of samples.
For each iteration, we train the model on n − 1 samples and test it on the single remaining sample.
This is repeated n times and the results are averaged.
Why is it high variance?
Each training set is almost the same with only one sample changing between folds. That means the model sees nearly all the data each time, so each trained model is very similar, but each test case (the one left-out point) can cause a big swing in the error if the model slightly mispredicts it.
As a result, the estimated performance for each fold fluctuates heavily depending on which single observation is left out. When you average them, the mean may still vary a lot between datasets — hence high variance in the performance estimate.
Example
Suppose you have a dataset of 100 SUVs (Toyota Fortuner, Volkswagen Taigun, Mercedes-Benz GLS, etc.).
You train your model 100 times, each time leaving one SUV out as the test case.
For the first run you train on 99 SUVs and test on 1 (say, a Range Rover Evoque), then repeat for all 100 cars so every car gets to be the “left-out” test case once.
Most SUVs in your dataset might be mid-range models (₹20–30 lakhs), but a few might be luxury SUVs (like a Range Rover at ₹90 lakhs).
Because LOOCV tests on just one car at a time, if that single car happens to be a rare or unusual model (e.g., the only electric SUV), or has outlier features (very high horsepower, unique brand, etc.), the model trained on the other 99 cars may not generalize well to that one.
That single prediction will produce a large error, which strongly affects that fold’s test score. The average of these highly variable fold scores becomes unstable — small changes in the dataset (or presence/absence of a few outliers) can lead to large changes in the reported CV score.
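A minimal sketch of LOOCV with scikit-learn (synthetic regression data assumed): each fold's error comes from a single prediction, and the spread between the best and worst fold illustrates why the estimate fluctuates so much.

```python
# Sketch: Leave-One-Out CV has n folds for n samples; each fold tests on one point.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = make_regression(n_samples=50, n_features=3, noise=10.0, random_state=0)

scores = cross_val_score(LinearRegression(), X, y,
                         cv=LeaveOneOut(),
                         scoring="neg_mean_squared_error")

# Each fold's score is based on one prediction, so individual fold errors swing widely.
print("Smallest per-fold MSE:", -scores.max())
print("Largest per-fold MSE:", -scores.min())
print("Mean MSE:", -scores.mean())
```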
5. When should Time Series CV be used?
A. Independent samples
B. Predicting future from past
C. Imbalanced data
D. Faster training
Answer: B
Explanation: Time Series CV preserves temporal order to avoid lookahead bias. Use Time Series Cross-Validation when the data have a temporal order and you want to predict future outcomes from past patterns without data leakage.
Time Series Cross-Validation (TSCV) is used when data points are ordered over time — for example, stock prices, weather data, or sensor readings.
The order of data matters.
Future values depend on past patterns.
You must not shuffle the data, or it will leak future information.
Unlike standard k-fold cross-validation, TSCV respects the chronological order and ensures that the model is trained only on past data and evaluated on future data, mimicking real-world forecasting scenarios.
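A minimal sketch with scikit-learn's TimeSeriesSplit (a placeholder series is assumed) shows that every split trains only on observations that come before the test window.

```python
# Sketch: expanding-window time series splits; training always precedes testing.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(-1, 1)   # 20 time-ordered observations (placeholder)
tscv = TimeSeriesSplit(n_splits=4)

for fold, (train_idx, test_idx) in enumerate(tscv.split(X)):
    # Training indices always precede test indices: no lookahead leakage.
    print(f"Fold {fold}: train 0..{train_idx.max()}, test {test_idx.min()}..{test_idx.max()}")
```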
6. Performing many random 80/20 splits and averaging accuracy is called:
A. Bootstrapping
B. Leave-p-out
C. Monte Carlo Cross-Validation
D. Nested CV
Answer: C
Explanation: Monte Carlo validation averages performance over multiple random splits.
Monte Carlo Cross-Validation (also known as Repeated Random Subsampling Validation) involves randomly splitting the dataset into training and testing subsets multiple times (e.g., 80% training and 20% testing).
The model is trained and evaluated on these splits repeatedly, and the results (such as accuracy) are averaged to estimate the model's performance.
This differs from k-fold cross-validation because the splits are random and may overlap — some data points might appear in multiple test sets or not appear at all in some iterations.
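A minimal sketch (synthetic data assumed) using scikit-learn's ShuffleSplit, which implements this repeated random subsampling directly:

```python
# Sketch: Monte Carlo CV as many random 80/20 splits with averaged accuracy.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

mc_cv = ShuffleSplit(n_splits=25, test_size=0.2, random_state=0)  # 25 random 80/20 splits
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=mc_cv)

print("Mean accuracy over 25 random splits:", scores.mean())
```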
When is Monte Carlo Cross-Validation useful?
You have limited data but want a more reliable performance estimate.
You want flexibility in training/test split sizes.
The dataset is large, and full k-fold CV is too slow.
You don’t need deterministic folds.
The data are independent and identically distributed (i.i.d.).
7. Model performs well in CV but poorly on test set. Why?
A. Too many folds
B. Overfitting during tuning
C. Underfitted model
D. Large test set
Answer: B
Explanation: Repeated tuning on the same cross-validation folds can cause overfitting to the validation data.
A model that performs well in cross-validation but poorly on the test set is often overfitted to the validation folds: it has learned fold-specific noise or patterns that do not generalize to unseen data.
This typically happens during hyperparameter tuning. In CV, the model and hyperparameters are repeatedly adjusted to optimize performance on the validation folds, which can produce a model that is too closely tailored to the specific folds used in CV, capturing noise or patterns that do not generalize outside those folds.
How can overfitting arise during cross-validation?
We use cross-validation (CV) to choose hyperparameters (e.g., best learning rate, number of trees, etc.).
Each time, we train and validate the model on different CV folds, and we pick the hyperparameters that give the best CV score.
Because many hyperparameter combinations are tried, the final set may end up accidentally tuned to the specific folds used in CV rather than the true data pattern.
When we finally test the model on a completely unseen test set, performance drops — showing that the CV score was over-optimistic.
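A small illustrative sketch (purely synthetic, with random labels so the true accuracy is about 0.5): because GridSearchCV's best_score_ is the maximum over all tried hyperparameter combinations, it tends to sit on the optimistic side of the held-out test accuracy.

```python
# Sketch: heavy tuning on noise; the selected CV score tends to be optimistic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)      # random labels: true accuracy is ~0.5

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [1, 2, 3, 5, None], "n_estimators": [10, 50, 100]},
    cv=5,
)
search.fit(X_train, y_train)

# best_score_ is the maximum CV score over 15 tried combinations,
# so on average it overestimates true generalization performance.
print("Best CV accuracy:", search.best_score_)
print("Held-out test accuracy:", search.score(X_test, y_test))
```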
8. Which gives most reliable generalization estimate with extensive tuning?
A. Single 80/20 split
B. Nested CV
C. Stratified 10-fold
D. Leave-One-Out
Answer: B
Explanation: Nested CV separates tuning and evaluation, avoiding bias. When you perform extensive hyperparameter tuning, use Nested Cross-Validation to get the most reliable, unbiased estimate of true generalization performance.
How does Nested CV handle optimistic bias?
In standard cross-validation, if the same data is used both to tune hyperparameters and to estimate model performance, it can lead to an optimistic bias. That is, the model "sees" the validation data during tuning, which inflates performance estimates but does not truly represent how the model will perform on new unseen data.
Nested CV solves this by separating the tuning and evaluation processes into two loops:
Inner loop: Used exclusively to tune the model's hyperparameters by cross-validation on the training data.
Outer loop: Used to evaluate the generalized performance of the model with the tuned hyperparameters on a held-out test fold that was never seen during the inner tuning.
This structure ensures no data leakage between tuning and testing phases, providing a less biased, more honest estimate of how the model will perform in real-world scenarios.
When to use Nested Cross-Validation?
Nested CV is computationally expensive. It is recommended especially when you do extensive hyperparameter optimization to avoid overfitting in model selection and get a realistic estimate of true model performance.
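A compact sketch of nested CV with scikit-learn (synthetic data and a hypothetical SVM grid assumed): the inner GridSearchCV tunes hyperparameters, while the outer cross_val_score evaluates on folds never seen during tuning.

```python
# Sketch: nested cross-validation (inner loop tunes, outer loop evaluates).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Inner loop: 3-fold grid search over a hypothetical hyperparameter grid.
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=3)

# Outer loop: 5-fold evaluation; each outer test fold is unseen by the inner tuning.
outer_scores = cross_val_score(inner, X, y, cv=5)
print("Nested CV accuracy estimate:", outer_scores.mean())
```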
9. Major advantage of k-fold CV over simple hold-out?
A. Ensures higher accuracy
B. Eliminates overfitting
C. Uses full dataset efficiently
D. Requires less computation
Answer: C Explanation: k-fold cross-validation allows each data sample to serve as both training and validation data, making efficient use of the entire dataset (see the sketch after the list below).
Other advantages include:
Provides a more reliable estimate of model performance.
Reduces variance in model evaluation compared to a single train/test split.
Ensures better use of limited data when datasets are small.
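As a tiny sketch (toy indices only) of the efficiency point above: with scikit-learn's KFold, every sample appears in a validation fold exactly once, whereas a single hold-out split never validates on the training portion at all.

```python
# Sketch: in k-fold CV each sample is validated exactly once across the k folds.
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10).reshape(-1, 1)        # 10 toy samples
counts = np.zeros(len(X), dtype=int)

for _, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    counts[val_idx] += 1                # tally validation appearances per sample

print(counts)   # [1 1 1 1 1 1 1 1 1 1]: every sample validated exactly once
```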
10. What best describes the purpose of model validation?
A. Improve training accuracy
B. Reduce dataset size
C. Reduce training time
D. Measure generalization to unseen data
Answer: D Explanation: Validation estimates generalization performance before final testing.