Showing posts with label Machine Learning Quiz. Show all posts

Tuesday, July 26, 2022

Machine Learning MCQ - Types of nodes in a decision tree

Multiple choice questions in machine learning. Interview questions on machine learning, quiz questions for data scientists with answers explained: decision tree, types of nodes, root node, decision node, leaf node, how many incoming edges does an internal node have?

Machine Learning MCQ - Types of nodes in a decision tree


1. What are the different types of nodes in a decision tree?

a) Root node

b) Internal nodes

c) Leaf nodes

d) All of the above

Answer: (d) All of the above

A decision tree has all three types of nodes.

Root node – node with NO incoming edges and ZERO or more outgoing edges. It contains attribute test conditions to separate records.

Internal nodes – nodes with exactly ONE incoming edge and ZERO or more outgoing edges. Internal nodes contain attribute test conditions to separate records.

Leaf (terminal) nodes – nodes with exactly ONE incoming edge with NO outgoing edges. Leaf nodes have class labels.
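As a rough illustration (not part of the original question), the node types can be inspected programmatically. The sketch below assumes scikit-learn is available; it fits a small decision tree on the Iris dataset and counts leaf and internal nodes from the fitted tree structure (in scikit-learn, a node whose children_left entry is -1 has no outgoing edges, i.e. it is a leaf).

# Sketch only: fit a small decision tree and count its node types.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

tree = clf.tree_
is_leaf = tree.children_left == -1            # leaf: no outgoing edges

print("total nodes:", tree.node_count)
print("root node id:", 0)                     # node 0 has no incoming edge
print("leaf nodes:", int(is_leaf.sum()))      # hold the class labels
print("internal nodes:", int((~is_leaf).sum()) - 1)  # hold attribute test conditions (excluding the root)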

 

[Figure: Decision tree nodes - an example]


************************

Related links:

Types of nodes in a decision tree

Which is a decision node?

Can we have labels in internal nodes in a decision tree?


Thursday, July 21, 2022

Machine Learning MCQ - Cost function of logistic regression is convex

Multiple choice questions in machine learning. Interview questions on machine learning, quiz questions for data scientists with answers explained: What is the cost function of logistic regression? Why can't the cost function used in linear regression be used in logistic regression? Why is the cost function of logistic regression convex?

Machine Learning MCQ - Cost function of logistic regression is convex


1. Which of the following statements about logistic regression is correct?

a) Logistic regression uses the squared error as the loss function

b) Logistic regression assumes that each class’s points are generated from a Gaussian distribution

c) The cost function of logistic regression is concave

d) The cost function of logistic regression is convex

Answer: (d) The cost function of logistic regression is convex


Gradient descent is guaranteed to converge to the global minimum only if the cost function is convex. In logistic regression, the cost function is the log loss (binary cross-entropy), which is convex in the model parameters.
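For reference, a minimal sketch of this convex cost function (log loss / binary cross-entropy), written in Python with numpy; the function names here are illustrative, not from the original post.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(w, X, y):
    # J(w) = -(1/m) * sum( y*log(h) + (1-y)*log(1-h) ), where h = sigmoid(Xw)
    h = sigmoid(X @ w)
    m = len(y)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m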

 

Can we use the same cost function used in linear regression for logistic regression?

If we use the cost function of linear regression (mean squared error) in logistic regression, we end up with a non-convex function with many local minima. In this case, it is very difficult for gradient descent to find the global minimum. This happens because logistic regression passes the linear combination of the inputs through the sigmoid function, which is non-linear, so squaring the resulting error no longer yields a convex objective.
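A rough numerical illustration (not a proof, and assuming numpy; the toy data is made up): evaluate both costs along a single weight w for a one-feature dataset and inspect their curvature via second finite differences. Negative values indicate non-convex regions.

import numpy as np

x = np.array([-2.0, -1.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

ws = np.linspace(-10, 10, 401)
mse = [np.mean((sigmoid(w * x) - y) ** 2) for w in ws]                  # squared error on sigmoid outputs
ce = [-np.mean(y * np.log(sigmoid(w * x)) +
               (1 - y) * np.log(1 - sigmoid(w * x))) for w in ws]       # log loss

print("min curvature of MSE cost:     ", np.diff(mse, 2).min())   # negative -> non-convex
print("min curvature of log-loss cost:", np.diff(ce, 2).min())    # non-negative -> convex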

   


************************

Related links:

Why is the cost function of logistic regression convex?

Why can't we use the cost function of linear regression in the case of logistic regression?

Can we use the same cost function of linear regression in logistic regression?

What will happen if you use linear regression's cost function in logistic regression?


Friday, July 15, 2022

Machine Learning MCQ - How to fix the problem of overfitting in a neural network

Multiple choice questions in machine learning. Interview questions on machine learning, quiz questions for data scientists with answers explained: How do you fix the problem of overfitting in a neural network? How does regularization prevent overfitting? How does early stopping prevent overfitting? How does decreasing model complexity prevent overfitting? List various methods to fix overfitting in a neural network.

Machine Learning MCQ - Various methods to fix the overfitting problem in a neural network


1. Suppose you have a neural network that is overfitting to the training data. Which of the following can fix the situation?

a) Regularization

b) Decrease model complexity

c) Train less/early stopping

d) All of the above

Answer: (d) All of the above

Overfitting happens when your model is too complex to generalize to new data. When your model fits the training data perfectly, including its noise, it is unlikely to fit new (test) data well.

 

How does regularization help in fixing the overfitting problem?

Regularization helps to control model complexity so that the model is better at predicting unseen data. Regularization adds a penalty term to the objective function and controls the model complexity through that penalty term. The regularization parameter (lambda) penalizes all the parameters except the intercept, so the model generalizes from the data rather than overfitting it.
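A minimal sketch of the idea (assuming numpy; the function names are illustrative): L2 regularization adds the penalty lambda/(2m) * sum of squared weights to whatever data-fit cost is being minimized, leaving the intercept w[0] unpenalized.

import numpy as np

def regularized_cost(w, X, y, lam, base_cost):
    # base_cost(w, X, y) is any data-fit term, e.g. MSE or log loss.
    m = len(y)
    penalty = (lam / (2 * m)) * np.sum(w[1:] ** 2)   # skip the intercept w[0]
    return base_cost(w, X, y) + penalty

# Larger lam -> stronger penalty on the weights -> simpler, smoother model.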

 

How does “decreasing the model complexity” help in overcoming the overfitting problem?

A model with a high degree of complexity may be able to capture more variations in the data, but it will also be more difficult to train and may be more prone to overfitting. On the other hand, a model with a low degree of complexity may be easier to train but may not be able to capture all the relevant information in the data. [Refer here for more]
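As a rough sketch of what "decreasing model complexity" can look like in practice (assuming TensorFlow/Keras; the layer sizes and input shape are arbitrary), one simply reduces the number of layers and units so the network has less capacity to memorize noise.

from tensorflow import keras
from tensorflow.keras import layers

# Higher-capacity model: more prone to memorizing training noise.
complex_model = keras.Sequential([
    layers.Dense(512, activation="relu", input_shape=(20,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Lower-capacity model: fewer layers and units, less able to fit noise.
simple_model = keras.Sequential([
    layers.Dense(16, activation="relu", input_shape=(20,)),
    layers.Dense(1, activation="sigmoid"),
])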

 

How does “early stopping” prevent overfitting?

Early stopping is used to stop training before the model overfits the training data. When a model starts learning the noise in the training data, the validation loss may begin to increase even as the training loss keeps falling. To prevent this, we simply stop training once the validation loss is no longer decreasing. If we detect that the validation loss is starting to rise again, we can reset the weights back to where the minimum occurred. This ensures that the model won't continue to learn noise and overfit the data. [Refer here for more]
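A minimal sketch of early stopping (assuming Keras; the names model, X_train, and y_train are placeholders): monitor the validation loss, stop after it fails to improve for a few epochs, and restore the weights from the best epoch.

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",         # watch the validation loss
    patience=5,                 # tolerate 5 epochs without improvement
    restore_best_weights=True,  # roll back to the weights at the minimum
)

# model.fit(X_train, y_train, validation_split=0.2,
#           epochs=200, callbacks=[early_stop])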

 

 

  


************************

Related links:

What is overfitting in a neural network?

How to prevent a neural network from overfitting the training data?

How does "early stopping" help in preventing overfitting problem?

How does "decrease in model complexity" help fighting overfitting problme in neural net?

How does "regularization" prevent overfitting in training data?

Why is overfitting considered a serious problem?

