Machine Learning TRUE / FALSE Questions - SET 17
1. The Perceptron Learning Rule is a sound and complete method for a Perceptron to learn to correctly classify any 2-class classification problem.
(a) TRUE (b) FALSE
Answer: FALSE. The Perceptron Learning Rule can only learn linearly separable functions; if the two classes cannot be separated by a hyperplane, the rule never converges, so it is not a complete method for arbitrary 2-class problems.
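This can be illustrated with a small NumPy sketch (the helper `train_perceptron` is our own, not a library function): the perceptron learning rule converges on the linearly separable AND function, but on XOR, which is not linearly separable, it keeps making mistakes forever.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Perceptron learning rule: w += lr * (target - prediction) * x.
    Returns the weights (bias folded in) and whether training converged."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            if pred != target:
                w += lr * (target - pred) * xi
                errors += 1
        if errors == 0:
            return w, True   # converged: every point classified correctly
    return w, False          # never reached zero errors within the budget

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable
y_xor = np.array([0, 1, 1, 0])   # NOT linearly separable

_, ok_and = train_perceptron(X, y_and)
_, ok_xor = train_perceptron(X, y_xor)
print(ok_and, ok_xor)  # True False
```

The perceptron convergence theorem guarantees the first run terminates; no epoch budget, however large, rescues the second.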
2. Selecting the decision tree split (at each node as you move down the tree) that minimizes classification error will guarantee an optimal decision tree.
(a) TRUE (b) FALSE
Answer: FALSE. Choosing, at each node, the split that minimizes classification error is a greedy, local decision and does not guarantee a globally optimal tree. Moreover, classification error does not favor pure nodes, i.e., nodes whose instances all belong to a single class. Impurity measures such as entropy and the Gini index characterize node purity more sensitively, which is why they are preferred for growing decision trees.
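A quick numeric sketch (helper names are our own) shows why classification error is a blunt instrument. Consider a parent node with 4 positives and 4 negatives and two candidate splits: split A gives children (3,1) and (1,3); split B gives (4,2) and (0,2), where the second child is pure. Weighted classification error cannot tell the splits apart, but the Gini index prefers the split with the pure child.

```python
import numpy as np

def class_error(counts):
    """Classification error of a node: 1 - fraction of the majority class."""
    p = np.array(counts) / sum(counts)
    return 1 - p.max()

def gini(counts):
    """Gini impurity of a node: 1 - sum of squared class proportions."""
    p = np.array(counts) / sum(counts)
    return 1 - np.sum(p ** 2)

def weighted(impurity, children):
    """Impurity of a split: child impurities weighted by child size."""
    n = sum(sum(c) for c in children)
    return sum(sum(c) / n * impurity(c) for c in children)

eA = weighted(class_error, [(3, 1), (1, 3)])
eB = weighted(class_error, [(4, 2), (0, 2)])
gA = weighted(gini, [(3, 1), (1, 3)])
gB = weighted(gini, [(4, 2), (0, 2)])

print(round(eA, 3), round(eB, 3))  # 0.25 0.25 -- error ties the two splits
print(round(gA, 3), round(gB, 3))  # Gini is lower for B, the pure-child split
```

Both splits have weighted error 0.25, yet split B creates a pure node that needs no further splitting; Gini (and likewise entropy) rewards that, while raw classification error is blind to it.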
3. Increasing the dimensionality of our data always decreases our misclassification rate.
(a) TRUE (b) FALSE
Answer: FALSE. Increasing the dimensionality can significantly increase the misclassification rate. As dimensionality grows, the classifier's performance improves until an optimal number of features is reached; adding further dimensions without increasing the number of training samples then degrades performance. This is known as the 'curse of dimensionality'.
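One symptom of the curse can be seen directly in a small NumPy experiment (the setup is our own illustration): with a fixed number of points, distances "concentrate" as the dimensionality grows, so a query point's nearest and farthest neighbours become almost equally far away, and distance-based classifiers lose their discriminating power.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed sample size (100 points) while the dimensionality d grows.
# The ratio of nearest to farthest distance climbs towards 1.
ratios = {}
for d in (2, 10, 100, 1000):
    X = rng.random((100, d))               # 100 points in the unit hypercube
    q = rng.random(d)                      # a query point
    dists = np.linalg.norm(X - q, axis=1)
    ratios[d] = dists.min() / dists.max()
    print(d, round(ratios[d], 3))
```

In 2 dimensions the nearest neighbour is much closer than the farthest; in 1000 dimensions the two distances are nearly the same, which is exactly why adding features without adding training samples hurts.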
4. As model complexity increases, bias will decrease while variance will increase.
(a) TRUE (b) FALSE
Answer: TRUE. As more parameters are added to a model, its complexity rises, its variance increases, and its bias decreases. Simple models have low variance and high bias; very complex models have low bias and high variance and are prone to overfitting, which happens when the model captures the noise in the training data along with the underlying pattern.
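The tradeoff can be measured empirically with a minimal NumPy sketch (the target function, noise level, and helper `bias_variance` are our own assumptions for the demo): repeatedly draw noisy training sets, fit polynomials of increasing degree, and decompose the test-point error into bias squared (how far the average fitted curve is from the truth) and variance (how much the fitted curves wobble across training sets).

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(2 * np.pi * x)

def bias_variance(degree, n_train=30, n_sets=200, noise=0.2):
    """Fit `n_sets` polynomials of the given degree to fresh noisy samples
    of true_f, then estimate bias^2 and variance over a test grid."""
    x_test = np.linspace(0.05, 0.95, 50)
    preds = []
    for _ in range(n_sets):
        x = rng.random(n_train)
        y = true_f(x) + rng.normal(0, noise, size=n_train)
        preds.append(np.polyval(np.polyfit(x, y, degree), x_test))
    preds = np.array(preds)
    bias2 = np.mean((preds.mean(axis=0) - true_f(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias2, variance

for degree in (1, 3, 9):
    b, v = bias_variance(degree)
    print(f"degree {degree}: bias^2 {b:.3f}, variance {v:.3f}")
```

As the degree (model complexity) increases, bias squared falls while variance rises, matching the statement in the answer.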
5. In the discriminative approach to solving classification problems, we model the conditional probability of the labels given the observations.
(a) TRUE (b) FALSE
Answer: TRUE. A discriminative model learns the conditional probability distribution p(y|x) directly from the training data: it predicts the probability of the target y given an observation x, effectively modeling the decision boundary between the classes rather than how the data itself is generated. Logistic regression, SVMs, and Conditional Random Fields (CRFs) are popular discriminative methods.
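Logistic regression makes this concrete. A minimal NumPy sketch (the two-Gaussian toy data and helper names are our own): the model parameterizes p(y=1|x) with a sigmoid over a linear function of x and fits the weights by gradient descent on the log-loss, never modeling the distribution of x itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Gradient descent on the log-loss of p(y=1|x) = sigmoid(w.x + b)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)                    # p(y=1 | x) for every sample
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

def predict_proba(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return sigmoid(Xb @ w)

# Toy data: two well-separated Gaussian blobs (an assumption for the demo).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.repeat([0, 1], 50)

w = fit_logistic(X, y)
acc = np.mean((predict_proba(w, X) > 0.5) == y)
print(acc)  # near-perfect separation of the two blobs
```

Note what is absent: nothing here models p(x) or p(x|y). A generative counterpart (e.g. naive Bayes) would estimate those and derive p(y|x) via Bayes' rule; the discriminative model goes after p(y|x) directly.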