# Machine Learning TRUE / FALSE Questions - SET 03

*Machine learning quiz questions TRUE or FALSE with answers, important machine learning interview questions for data science, top 3 machine learning question set*

### **1. MAP estimates are equivalent to the ML estimates when the prior used in the MAP is a uniform prior over the parameter space.**

(a) TRUE (b) FALSE

**View Answer**

Answer: TRUE

The only difference between MLE and MAP is that MAP includes the prior probability P(θ); in simpler terms, the likelihood is weighted by the prior probability to give the MAP estimate. MLE is therefore a special case of MAP in which the prior follows a uniform distribution. A uniform prior assigns equal weight everywhere, like a constant, so it does not change where the maximum lies. Hence, MAP estimates are equivalent to the MLE when the prior used in the MAP is a uniform prior over the parameter space. [Refer for more: MLE Vs MAP]
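As a concrete sketch of this equivalence, consider estimating a coin's bias from `heads` successes in `n` flips. Under a Beta(a, b) prior, the posterior mode (the MAP estimate) has the closed form shown below; the function names here are my own, chosen for illustration.

```python
def mle_bernoulli(heads, n):
    """Maximum-likelihood estimate of a coin's bias: the sample frequency."""
    return heads / n

def map_bernoulli(heads, n, a=1.0, b=1.0):
    """MAP estimate (posterior mode) under a Beta(a, b) prior.
    Beta(1, 1) is the uniform prior on [0, 1]."""
    return (heads + a - 1) / (n + a + b - 2)

# With a uniform prior, the MAP estimate coincides with the MLE:
print(mle_bernoulli(7, 10))        # 0.7
print(map_bernoulli(7, 10, 1, 1))  # 0.7, identical to the MLE
# A non-uniform prior, e.g. Beta(2, 2), pulls the estimate toward 0.5:
print(map_bernoulli(7, 10, 2, 2))  # ~0.667
```

Setting a = b = 1 makes the prior term vanish from the formula, which is exactly the "uniform prior" case in the question.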

### **2. Because decision trees learn to classify discrete-valued outputs instead of real-valued functions it is impossible for them to overfit.**

(a) TRUE (b) FALSE

**View Answer**

Answer: FALSE

Overfitting is possible in decision trees. If a decision tree is fully grown, it may lose some generalization capability; this phenomenon is known as overfitting. Not just decision trees: (almost) every ML algorithm is prone to overfitting.

**What is overfitting?** Overfitting is the phenomenon in which the learning system fits the given training data so tightly that it becomes inaccurate at predicting the outcomes of unseen data. One simple way to detect it is to compare the accuracy of your model on the training set and the test set: if there is a huge difference between them, your model has overfit.

Pruning is a method for reducing overfitting in decision trees.
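The train-versus-test comparison described above can be illustrated with a small standard-library sketch. This is an analogy, not a real decision tree: the "fully grown tree" is stood in for by a classifier that memorizes every training point, and the "pruned tree" by a single split (a depth-1 stump); the dataset is synthetic, with deliberate label noise.

```python
import random

random.seed(0)

def make_data(n):
    """Noisy 1-D data: the true rule is y = (x > 0.5), with 20% label noise."""
    data = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < 0.2:  # flip 20% of the labels
            y = 1 - y
        data.append((x, y))
    return data

train, test = make_data(200), make_data(200)

# "Fully grown tree": memorizes every training point; unseen inputs
# fall back to the majority training class.
lookup = dict(train)
majority = round(sum(y for _, y in train) / len(train))

def grown_tree(x):
    return lookup.get(x, majority)

def pruned_tree(x):
    """A depth-1 stump: one split at x = 0.5."""
    return int(x > 0.5)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print(accuracy(grown_tree, train))   # 1.0: it has memorized the noise
print(accuracy(grown_tree, test))    # near chance on unseen data
print(accuracy(pruned_tree, train))  # lower on train...
print(accuracy(pruned_tree, test))   # ...but much better on test
```

The memorizer scores perfectly on training data precisely because it has fit the noise, and the huge train/test gap is the signature of overfitting that the answer describes.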

### **3. If P(A|B) = P(A) then P(A ∩ B) = P(A)P(B).**

(a) TRUE (b) FALSE

**View Answer**

Answer: TRUE

P(A|B) = P(A) implies that A is independent of B. By the definition of conditional probability, P(A|B) = P(A ∩ B)/P(B), which rearranges into the multiplication rule P(A ∩ B) = P(A|B)P(B). Substituting P(A|B) = P(A) gives P(A ∩ B) = P(A)P(B): the joint probability of A and B separates into the product of their individual probabilities, i.e., the probability of an outcome of A and an outcome of B occurring simultaneously is the product of those individual probabilities.
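The substitution above can be verified numerically on a small finite sample space. As an illustrative example (not from the original question), take two fair dice with A = "first die is even" and B = "second die shows 5 or 6", and compute exact probabilities with `fractions`:

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair dice, 36 equally likely outcomes.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Exact probability of an event (a predicate over outcomes)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def A(o):
    return o[0] % 2 == 0  # first die is even

def B(o):
    return o[1] > 4       # second die shows 5 or 6

p_a = prob(A)                         # 1/2
p_b = prob(B)                         # 1/3
p_ab = prob(lambda o: A(o) and B(o))  # 1/6
p_a_given_b = p_ab / p_b              # conditional probability P(A|B)

print(p_a_given_b == p_a)  # True: A is independent of B
print(p_ab == p_a * p_b)   # True: the joint factors into the product
```

Here P(A|B) = (1/6)/(1/3) = 1/2 = P(A), so P(A ∩ B) = P(A)P(B) holds exactly, as the statement claims.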

**************************

### **Related links:**
