
Tuesday, December 10, 2024

Machine Learning MCQ - Which of the following is true about dropout in a neural network

Multiple choice questions in machine learning. Interview questions on machine learning, quiz questions for data scientists with answers explained, exam questions in machine learning: what is dropout in a neural network, what is the purpose of dropout in deep learning, and where is dropout applied in a neural network?

Machine Learning MCQ - Application of dropout in a neural network to reduce overfitting


 

1. Which of the following is true about dropout?

a) Dropout leads to sparsity in the trained weights

b) At test time, dropout is applied with inverted keep probability

c) The larger the keep probability of a layer, the stronger the regularization of the weights in that layer

d) Dropout is applied to different layers of a neural network, but not the output layer

 

Answer: (d) Dropout is applied to different layers of a neural network, but not the output layer


  • Dropout is a regularization technique that randomly disables a portion of the neurons in a neural network during training to prevent overfitting.
  • It works by randomly "dropping out" (setting to zero) a fraction of the neurons (units) in a layer during each forward pass of training. This forces the network to become more robust: it cannot rely too heavily on any one neuron, so it is encouraged to learn more diverse features.
  • Dropout can be applied to the input layer (randomly ignoring some input features) and to the hidden layers (randomly dropping intermediate activations) of a neural network, but not to the output layer; see the sketch below.
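
For illustration, here is a minimal sketch (assuming PyTorch; the layer sizes and dropout rate are hypothetical) of a feed-forward classifier with dropout after the hidden layers but not at the output layer:

```python
import torch.nn as nn

# Dropout is placed after the hidden-layer activations only;
# the final (output) layer produces the predictions with no dropout.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # dropout on the first hidden layer
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # dropout on the second hidden layer
    nn.Linear(64, 10),   # output layer: no dropout here
)

model.train()  # dropout active during training
model.eval()   # dropout disabled at test time
```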

 

What is dropout? 

The term “dropout” refers to dropping out nodes (in the input and hidden layers) of a neural network. All forward and backward connections of a dropped node are temporarily removed, creating a new, thinned architecture out of the parent network. Each node is dropped with a dropout probability p (equivalently, retained with keep probability 1 - p).

 

Why is dropout not used in the output layer?

Dropout is typically not used in the output layer of a neural network because the output layer is responsible for making the final predictions, which should be deterministic and stable. Randomly dropping output units would directly corrupt those predictions.


Alternatives to dropout at the output layer?

If regularization is still needed at the output, one could use techniques that do not affect the stability of the predictions, such as L2 weight decay or early stopping.


Why not option (b)?

Keep probability is the probability of retaining a neuron during dropout. Dropout is applied during training but not during the testing phase. In the common inverted-dropout implementation, the surviving activations are scaled up by 1/keep probability during training itself, so no scaling at all is applied at test time; see the sketch below.
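
A minimal NumPy sketch of inverted dropout (assuming that is the variant intended here; the function name and values are illustrative):

```python
import numpy as np

def dropout_forward(a, keep_prob=0.8, training=True):
    """Inverted dropout: survivors are scaled by 1/keep_prob during training."""
    if not training:
        return a  # test time: no masking and no rescaling needed
    mask = np.random.rand(*a.shape) < keep_prob  # keep each unit with prob keep_prob
    return a * mask / keep_prob  # scaling preserves the expected activation

a = np.ones((4, 5))
print(dropout_forward(a, keep_prob=0.8).mean())   # ~1.0 in expectation
print(dropout_forward(a, training=False).mean())  # exactly 1.0
```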


Why not option (c)?

A larger keep probability (say, 95% of neurons are kept during dropout) means fewer neurons are dropped, so the regularization effect is weaker, not stronger. In such cases dropout may not be effective and the network may still overfit.

 

 


 

 

************************

Related links:

What is dropout in a neural network and why is it used?

Where can we use dropout in a neural network?

Why can we not use the dropout technique in the output layer of a neural net?

If you need a technique to overcome overfitting at the output layer, what can you do?

Machine learning solved MCQ

 

Machine Learning MCQ - Comparison of bias of shallow and deep decision trees

Multiple choice questions in machine learning. Interview questions on machine learning, quiz questions for data scientists with answers explained, exam questions in machine learning: how does the depth of a decision tree affect accuracy, and why is the bias of a shallow decision tree higher than that of a deeper tree?

Machine Learning MCQ - Bias of a shallow decision tree is greater than the bias of a deeper tree


 

1. Consider T1, a shallow decision tree of depth 2, and T2, a decision tree that is grown to a maximum depth of 4. Which of the following is/are correct?

a) Bias(T1) < Bias(T2)

b) Bias(T1) > Bias(T2)

c) Variance(T1) > Variance(T2)

d) None of the above


Answer: (b) Bias(T1) > Bias(T2)


A shallow (limited-depth) decision tree like T1 (depth 2) has high bias (and low variance) because it makes very strong assumptions about the underlying data. It assumes the data can be partitioned into very few regions, which is usually too simplistic for many real-world problems. A tree with high bias and low variance will give poor accuracy on both the training and test data.

 

In simpler terms, if the tree is shallow then we are not checking many conditions/constraints, i.e., the decision logic is simple and less complex.

 

High bias means the model consistently makes inaccurate predictions by missing important patterns and relationships in the data, and this behavior leads to underfitting the data.

 

A decision tree that is grown deeper becomes more complex and tends to overfit, with low bias (and high variance).

 

Model complexity

  • Tree T1, a decision tree of depth 2, is relatively simple. A binary tree of depth d has at most 2^d leaves, so T1 can create at most 4 leaf nodes (depending on the data and splits), meaning it can model only a few decision boundaries.
  • Tree T2, a decision tree of depth 4, is more complex than T1. It can create up to 16 leaf nodes and can make more detailed splits than T1.

Option (c) is not correct because the variance of a decision tree with smaller depth is smaller than the variance of a deeper tree. Refer to the bias-variance tradeoff (link1, link2) for more information. The sketch below illustrates the comparison empirically.
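
A minimal sketch (assuming scikit-learn; the synthetic dataset and random seeds are hypothetical) that fits a depth-2 tree and a depth-4 tree and compares training/test accuracy and leaf counts:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data, for illustration only
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, depth in (("T1", 2), ("T2", 4)):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    # The shallower tree typically shows lower training accuracy (higher bias);
    # the deeper tree fits the training data more closely (lower bias).
    print(f"{name} (depth {depth}): "
          f"train={tree.score(X_train, y_train):.2f}, "
          f"test={tree.score(X_test, y_test):.2f}, "
          f"leaves={tree.get_n_leaves()}")
```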

 


 

 

************************

Related links:

How does the depth of a decision tree affect the accuracy of the model?

A decision tree with smaller depth will have higher bias than a deeper tree

Compare decision trees with high bias and low bias

Why is the bias of a shallow tree greater than the bias of a deeper tree?

Machine learning solved MCQ

 

Wednesday, December 4, 2024

Machine Learning MCQ - Effect of a small k value in kNN

Multiple choice questions in machine learning. Interview questions on machine learning, quiz questions for data scientists with answers explained, exam questions in machine learning: how does the k value affect a kNN model, when is kNN sensitive to outliers, and is choosing a high value for k better in the kNN algorithm?

Machine Learning MCQ - Effect of choosing a small k value in the kNN algorithm


 

1. In the k-Nearest Neighbour (kNN) algorithm, choosing a small value for k will lead to

a) Low bias and high variance

b) Low variance and high bias

c) Balanced bias and variance

d) The k value has nothing to do with bias and variance

Answer: (a) Low bias and high variance

Choosing a small value for k makes the model more sensitive to individual data points in the training data. The algorithm can then overfit the training data, producing a model that is very flexible and captures the finer details and noise in the data. The predictions closely match the training data, giving low bias, while small changes in the data can change the prediction of a test point, giving high variance.

Small k: the model is flexible, hence low bias; the prediction can be determined by a single nearby data point, hence high variance.

 

What will a large k value do to the model?

With more data points taken into account (large k), noise is averaged out and outliers are less likely to affect the prediction.

The model produces smoother, more averaged predictions, hence higher bias; it is least affected by individual data points, hence lower variance.

 

Note: Datasets with more outliers or noise will likely perform better with higher values of k. The sketch below illustrates the small-k versus large-k trade-off empirically.
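
A minimal sketch (assuming scikit-learn; the synthetic dataset and the values k = 1 and k = 25 are illustrative) comparing a small and a large k:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# flip_y injects label noise so the variance of small k becomes visible
X, y = make_classification(n_samples=500, n_features=10, flip_y=0.1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (1, 25):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    # Small k: near-perfect training accuracy (low bias) but a larger
    # train/test gap (high variance). Large k: the gap shrinks.
    print(f"k={k:2d}: train={knn.score(X_train, y_train):.2f}, "
          f"test={knn.score(X_test, y_test):.2f}")
```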



 

 

************************

Related links:

Why does choosing a small k value in kNN lead to better results?

What is the effect of choosing a small k value for kNN?

Why does choosing a small value for k lead to low bias and high variance in the kNN algorithm?

Small or large k value: which is better for generalizing the model using the kNN algorithm?

Machine learning solved MCQ

 
