Wednesday, December 4, 2024

Machine Learning MCQ - effect of small k value in kNN

Multiple choice questions in Machine Learning. Interview questions on machine learning, quiz questions for data scientists with answers explained, exam questions in machine learning, how does the k value affect a kNN model, when is kNN sensitive to outliers, is choosing a high value for k better in the kNN algorithm?

Machine Learning MCQ - Effect of choosing a small k value in the kNN algorithm


1. In the k-Nearest Neighbour (kNN) algorithm, choosing a small value for k will lead to

a) Low bias and high variance

b) Low variance and high bias

c) Balanced bias and variance

d) The k value has no effect on bias and variance

Answer: (a) Low bias and high variance

Choosing a small value for k makes the model more sensitive to individual data points in the training set. The algorithm can then overfit the training data (low bias – the model fits the training data very closely; high variance – predictions on test data vary sharply), producing a very flexible model that captures the finer details and the noise in the data. Hence the predictions closely match the training data, leading to low bias.

Small k – the model is flexible, hence low bias; the model is highly variable (a single data point can determine a prediction), hence high variance.
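To see this concretely, here is a minimal sketch (assuming scikit-learn is available; the synthetic dataset and variable names are illustrative, not part of the question) comparing k = 1 with a larger k on noisy data:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic binary classification data; flip_y adds label noise.
X, y = make_classification(n_samples=1000, n_features=10, flip_y=0.2,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for k in (1, 15):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"k={k:2d}  train acc={model.score(X_train, y_train):.2f}"
          f"  test acc={model.score(X_test, y_test):.2f}")

Typically the k = 1 model memorizes the training set almost perfectly but does noticeably worse on the held-out data, which is exactly the low-bias/high-variance behaviour described above.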

 

What will a large k value do to the model?

With a large k, more data points are taken into account for each prediction, so the noise is averaged out (outliers are less likely to sway the model).

The model produces smoother, more averaged predictions, hence high bias; it is least affected by individual data points, hence low variance.

 

Note: Data with more outliers or noise will likely perform better with higher values of k.
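As a rough continuation of the sketch above (reusing the same X_train/X_test split and the KNeighborsClassifier import), sweeping k shows the trade-off: training accuracy falls as k grows (more bias), while test accuracy first improves (less variance) and then degrades once k gets too large.

for k in (1, 3, 5, 11, 25, 51, 101):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"k={k:3d}  train={model.score(X_train, y_train):.2f}"
          f"  test={model.score(X_test, y_test):.2f}")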



************************

Related links:

Why does choosing a small k value in kNN lead to better results?

What is the effect of choosing a small k value in kNN?

Why does choosing a small value for k lead to low bias and high variance in the kNN algorithm?

Small or large k value, which is better for generalizing the model in the kNN algorithm?

Machine learning solved MCQ

 

Saturday, November 30, 2024

Machine Learning MCQ - Which ML algorithm has the lowest training time

Multiple choice questions in Machine Learning. Interview questions on machine learning, quiz questions for data scientists with answers explained, exam questions in machine learning, time complexities of different ML algorithms, why kNN has no training phase, which ML algorithm has the lowest training time?

Machine Learning MCQ - Which Machine Learning algorithm has the lowest training time for very large datasets?


1. For very large training data sets, which of the following will usually have the lowest training time?

a) Logistic regression

b) Neural nets

c) K-Nearest Neighbors

d) Random forests

e) Linear SVM

Answer: (c) K-Nearest Neighbors

K-Nearest Neighbors (KNN) is often referred to as a "lazy learner" because it does not have a conventional training phase. Instead of learning parameters (like weights in logistic regression or neural networks), KNN stores the entire training dataset during the training phase.

 

Why does KNN have no training phase (and hence the lowest training time)?

Since KNN does not fit a model or optimize any parameters during training, it requires no significant computation or model-building steps before making predictions. In other words, it does not learn or build a model in advance, which is why we say it has effectively no training time.
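The "lazy" behaviour is easy to see in a toy implementation (illustrative only, not scikit-learn's actual code): fit() merely stores the data, while all the distance computation happens at prediction time.

import numpy as np
from collections import Counter

class ToyKNN:
    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # "Training" is just memorizing the dataset; nothing is learned.
        self.X_train = np.asarray(X)
        self.y_train = np.asarray(y)
        return self

    def predict(self, X):
        preds = []
        for x in np.asarray(X):
            # All the real work happens here, at prediction time.
            dists = np.linalg.norm(self.X_train - x, axis=1)
            nearest = self.y_train[np.argsort(dists)[:self.k]]
            preds.append(Counter(nearest).most_common(1)[0][0])
        return np.array(preds)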

 

Why not the other options?

The approximate training time complexities of the other machine learning algorithms are as follows:

 

Linear SVM – O(n*p)

Random Forest – O(n*p*log n)

Neural nets (complexity per iteration) – O(n*p*h)

Logistic regression – O(n*p)

Here, n refers to the number of training samples, p to the number of features, and h to the number of hidden units in a neural net.
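For a concrete (hedged) illustration, the sketch below (assuming scikit-learn; the dataset size and model settings are arbitrary choices for the example) times only the .fit() call of a few models. Exact numbers depend on hardware and hyperparameters, but kNN's fit is typically near-instant because it only stores the data.

import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=50_000, n_features=50, random_state=0)

models = {
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Linear SVM": LinearSVC(),
    "Random forest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X, y)                      # training (fit) time only
    print(f"{name:20s} fit time: {time.perf_counter() - start:.2f} s")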

 


************************

Related links:

What are the training time complexities of various machine learning algorithms?

Which ML algorithm(s) have the lowest training time for very large datasets?

Why does kNN have zero training time complexity?

Why does the norm of the weight vector grow in soft-margin SVM when the regularization parameter C increases?

Machine learning solved MCQ

 
