
Monday, October 12, 2020

Data warehousing and mining quiz questions and answers set 01

Data warehousing and Data mining solved quiz questions and answers, multiple choice questions MCQ in data mining, questions and answers explained in data mining concepts, data warehouse exam questions, data mining mcq

Data Warehousing and Data Mining - MCQ Questions and Answers SET 01


1. In a data mining task when it is not clear about what type of patterns could be interesting, the data mining system should:

a) Perform all possible data mining tasks

b) Handle different granularities of data and patterns

c) Perform both descriptive and predictive tasks

d) Allow interaction with the user to guide the mining process

Answer: (d) Allow interaction with the user to guide the mining process  

Users have a good sense of which “direction” of mining may lead to interesting patterns and the “form” of the patterns or rules they want to find. They may also have a sense of “conditions” for the rules, which would eliminate the discovery of certain rules that they know would not be of interest. Thus, a good heuristic is to have the users specify such intuition or expectations as constraints to confine the search space. This strategy is known as constraint-based mining.

 

2. Which of the following data mining tasks should be used to detect fraudulent usage of credit cards?

a) Feature selection

b) Prediction

c) Outlier analysis

d) All of the above

Answer: (c) Outlier analysis

Fraudulent usage of credit cards can be detected using outlier analysis or outlier detection.

Outlier

A data element that stands out from the rest of the data. The values that deviate from other observations on data are called outliers. In data distribution, they are not part of the pattern. Sometimes referred to as abnormalities, anomalies, or deviants, outliers can occur by chance in any given distribution.

Outlier analysis

The analysis used to find unusual patterns in a dataset. Many outlier detection algorithms have been proposed under several broad categories: statistical approaches, distance-based approaches, fuzzy approaches, and kernel-function-based approaches.
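As a minimal sketch of a statistics-based approach (with made-up transaction amounts), Tukey's rule flags values that fall outside 1.5 × IQR of the quartiles:

```python
def iqr_outliers(values):
    """Flag values outside 1.5 * IQR of the quartiles (Tukey's rule)."""
    data = sorted(values)
    n = len(data)
    q1 = data[n // 4]          # rough lower quartile
    q3 = (data[(3 * n) // 4])  # rough upper quartile
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical credit card charges with one abnormal transaction:
amounts = [12, 15, 14, 13, 16, 15, 14, 500]
print(iqr_outliers(amounts))  # [500] - the abnormal charge stands out
```

Real fraud detection systems use far richer features, but the principle is the same: flag observations that deviate strongly from the rest of the distribution.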

 

 

3. In high dimensional spaces, the distance between data points becomes meaningless because:

a) It becomes difficult to distinguish between the nearest and farthest neighbors

b) The nearest neighbor becomes unreachable

c) The data becomes sparse

d) There are many uncorrelated features

Answer: (a) It becomes difficult to distinguish between the nearest and farthest neighbors

Curse of dimensionality

The dimensionality curse phenomenon states that in high dimensional spaces distances between nearest and farthest points from query points become almost equal. Therefore, nearest neighbor calculations cannot discriminate candidate points.

By high-dimensional spaces, we mean hundreds to thousands of dimensions for dense vectors (sparse vectors are a different topic). Once dimensionality is that high, the pairwise distances between points approach a constant.

 

 

4. The difference between supervised learning and unsupervised learning is given by:

a) Unlike unsupervised learning, supervised learning needs labeled data

b) Unlike unsupervised learning, supervised learning can form new classes

c) Unlike unsupervised learning, supervised learning can be used to detect outliers

d) Unlike supervised learning, unsupervised learning can predict the output class from among the known classes

Answer: (a) Unlike unsupervised learning, supervised learning needs labeled data

Supervised learning: Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It is basically a synonym for classification. The supervision in the learning comes from the labeled examples in the training data set.

Unsupervised learning: Unsupervised learning is essentially a synonym for clustering. The learning process is unsupervised since the input examples are not class labeled. Typically, we may use clustering to discover classes within the data. The goal of unsupervised learning is to model the hidden patterns in the given input data in order to learn about the data.

 

 

5. Which of the following is used to find inherent regularities in data?

a) Clustering

b) Frequent pattern analysis

c) Regression analysis

d) Outlier analysis

Answer: (b) Frequent pattern analysis

Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It is an intrinsic and important property of datasets.

Basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis are some of the applications of frequent pattern analysis.
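A minimal basket-data sketch (with hypothetical transactions): count the support of small itemsets and keep only those that occur frequently enough.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count itemsets of size 1 and 2, keep those meeting min_support."""
    counts = {}
    for t in transactions:
        for size in (1, 2):
            for itemset in combinations(sorted(t), size):
                counts[itemset] = counts.get(itemset, 0) + 1
    return {s: c for s, c in counts.items() if c >= min_support}

baskets = [
    {"milk", "bread"},
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"milk", "bread"},
]
print(frequent_itemsets(baskets, min_support=3))
# {('bread',): 4, ('milk',): 3, ('bread', 'milk'): 3}
```

The frequent pair ("bread", "milk") is the kind of inherent regularity a retailer would exploit in cross-marketing or catalog design.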

 

**********************

 


Data warehousing and mining quiz questions and answers set 05


Data Warehousing and Data Mining - MCQ Questions and Answers SET 05


1. Which of the following best describes the sample of data used to provide an unbiased evaluation of a model fit on the training dataset while tuning model hyper-parameters?

a) Training dataset

b) Test dataset

c) Validation dataset

d) Holdout dataset

Answer: (c) Validation dataset

Validation dataset is the sample of data used to provide an unbiased evaluation of a model fit on the training dataset while tuning model hyper-parameters.

It is usually used for parameter selection and to avoid overfitting; it helps in tuning the hyper-parameters of the model. For example, in a neural network, it is used to choose the number of hidden units.

Validation dataset is different from test dataset.

The validation set is also known as the Development set.
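The three-way split can be sketched in a few lines (fractions and seed here are illustrative choices, not fixed conventions):

```python
import random

def three_way_split(data, val_frac=0.2, test_frac=0.2, seed=42):
    """Shuffle and split data into train / validation / test partitions."""
    rows = list(data)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = rows[:n_test]                  # final, unbiased evaluation
    val = rows[n_test:n_test + n_val]     # hyper-parameter tuning
    train = rows[n_test + n_val:]         # model fitting
    return train, val, test

train, val, test = three_way_split(range(100))
print(len(train), len(val), len(test))  # 60 20 20
```

The key discipline is that the test partition is touched only once, after all tuning on the validation partition is finished.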

 

2. In which of the following are data stored, retrieved, and updated?

a) OLAP

b) MOLAP

c) HTTP

d) OLTP

 

Answer: (d) OLTP

Online Transaction Processing (OLTP) is a type of data processing in information systems that facilitates transaction-oriented applications. Supermarket inventory systems, ticket booking systems, and financial transaction systems are examples of OLTP.

OLAP is Online Analytical Processing system used primarily for data warehouse environments.

 

3. A data warehouse deals with which type of data that is never found in the operational environment?

a) Normalized

b) Informal

c) Summarized

d) Denormalized

Answer: (c) Summarized

Data warehouse handles summarized (aggregated) data that are aggregated from OLTP systems.

A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data.

Data warehouses are large databases that are specifically designed for OLAP and business analytics workloads.

As per definition of Ralph Kimball, a data warehouse is “a copy of transaction data specifically structured for query and analysis.”

 

4. Classification is a data mining task that maps the data into _________ .

a) predefined group

b) real valued prediction variable

c) time series

d) clusters

Answer: (a) predefined group

Classification is a data mining function that assigns items in a collection to target categories or classes that are predefined. The goal of classification is to accurately predict the target class for each case in the data. For example, a classification model could be used to identify loan applicants as low, medium, or high credit risks.

k-nearest neighbor (kNN), naïve Bayes, and support vector machine (SVM) are a few of the classification algorithms.
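As an illustration of mapping data into predefined groups, here is a minimal kNN classifier over made-up loan-applicant data (the features and labels are invented for the example):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of ((features...), label) pairs.
    """
    by_dist = sorted(
        train,
        key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], query)),
    )
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Hypothetical applicants: (income, debt) -> predefined risk class
applicants = [
    ((80, 10), "low"), ((75, 15), "low"), ((70, 20), "low"),
    ((30, 60), "high"), ((25, 70), "high"), ((20, 65), "high"),
]
print(knn_predict(applicants, (78, 12)))  # -> low
print(knn_predict(applicants, (22, 68)))  # -> high
```

Note that the classes ("low", "high") exist before training begins; the model only decides which predefined class each new case belongs to.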

 

5. Which of the following clustering techniques start with as many clusters as there are records or observations with each cluster having only one observation at the starting?

a) Agglomerative clustering

b) Fuzzy clustering

c) Divisive clustering

d) Model-based clustering

Answer: (a) Agglomerative clustering

This is a "bottom-up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.

Agglomerative clustering starts with single-object clusters (singletons) and proceeds by progressively merging the most similar clusters until a stopping criterion (such as a predefined number of groups k) is reached. In some cases, the procedure ends only when all the clusters are merged into a single one, for instance when one aims to investigate the overall granularity of the data structure.
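The bottom-up procedure can be sketched in pure Python for 1-D points (single linkage is one possible merge criterion; real implementations support several):

```python
def agglomerative(points, k):
    """Bottom-up clustering: start with singletons, then repeatedly
    merge the closest pair (single linkage) until k clusters remain."""
    clusters = [[p] for p in points]  # every observation starts alone

    def single_link(a, b):
        return min(abs(x - y) for x in a for y in b)

    while len(clusters) > k:
        # find the two closest clusters and merge them
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: single_link(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i].extend(clusters.pop(j))
    return clusters

print(agglomerative([1, 2, 8, 9, 25], k=3))  # [[1, 2], [8, 9], [25]]
```

With five observations the algorithm begins with five singleton clusters, exactly as the question describes, and merges pairs until the requested number of groups remains.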


 

**********************

 


Data warehousing and mining quiz questions and answers set 04


Data Warehousing and Data Mining - MCQ Questions and Answers SET 04

 

1. Minkowski distance is a function used to find the distance between two

a) Binary vectors

b) Boolean-valued vectors

c) Real-valued vectors

d) Categorical vectors

Answer: (c) Real-valued vectors

Minkowski distance finds the distance between two real-valued vectors. It is a generalization of the Euclidean and Manhattan distance measures and adds a parameter, called the “order” or “p“, that allows different distance measures to be calculated.

Minkowski distance of order p between two n-dimensional vectors x and y:

d(x, y) = ( Σ_{i=1}^{n} |x_i − y_i|^p )^(1/p)

If p = 1, this gives L1, the Manhattan distance (substitute p with 1 in the equation above).

If p = 2, this gives L2, the Euclidean distance (substitute p with 2 in the equation above).

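The definition translates directly into a few lines of code:

```python
def minkowski(x, y, p):
    """Minkowski distance of order p between two real-valued vectors."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

x, y = (0.0, 0.0), (3.0, 4.0)
print(minkowski(x, y, 1))  # 7.0 -> Manhattan (L1)
print(minkowski(x, y, 2))  # 5.0 -> Euclidean (L2)
```

The same function covers both special cases simply by changing the order parameter p.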

 

2. Which of the following distance measures is similar to the Simple Matching Coefficient (SMC)?

a) Euclidean distance

b) Hamming distance

c) Jaccard distance

d) Manhattan distance

Answer: (b) Hamming distance

Hamming distance is the number of bits that are different between two binary vectors.

The Hamming distance is related to the SMC: both compare two binary vectors position by position. The Hamming distance counts the positions at which the bits differ, whereas the SMC is the ratio of matching positions over the whole vector. In a nutshell, Hamming distance reveals how many bits differ and SMC reveals what fraction are the same, so each carries the inverse information of the other.

SMC = 1 − (Hamming distance / number of bits)
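A short sketch of both measures on two example binary vectors:

```python
def hamming(a, b):
    """Number of positions where two binary vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def smc(a, b):
    """Simple Matching Coefficient: fraction of positions that agree."""
    return 1 - hamming(a, b) / len(a)

u = [1, 0, 1, 1, 0, 0]
v = [1, 1, 1, 0, 0, 0]
print(hamming(u, v))  # 2 positions differ
print(smc(u, v))      # 4/6 of the positions match
```

The two numbers always sum to 1 once the Hamming distance is normalized by the vector length, which is the "inverse information" relationship described above.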

 

3. The statement “if an itemset is frequent then all of its subsets must also be frequent” describes _________ .

a) Unique item property

b) Downward closure property

c) Apriori property

d) Contrast set learning

Answer: (b) Downward closure property and (c) Apriori property

The Apriori property states that if an itemset is frequent, then all of its subsets must also be frequent.

The Apriori algorithm is a classical data mining algorithm used for mining frequent itemsets and learning relevant association rules over transactional databases.

The Apriori property expresses the monotonic decrease of an evaluation criterion (support) as an itemset or sequential pattern is extended.

The downward closure property and the Apriori property are synonyms.
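The property is what makes Apriori's candidate pruning work: a candidate k-itemset can be discarded as soon as any of its (k−1)-subsets is known to be infrequent. A minimal sketch (the itemsets here are invented for illustration):

```python
from itertools import combinations

def prune(candidates, frequent_smaller):
    """Apriori pruning: drop any candidate with an infrequent subset.

    By downward closure, if any (k-1)-subset of a candidate k-itemset
    is not frequent, the candidate itself cannot be frequent.
    """
    return [
        c for c in candidates
        if all(frozenset(s) in frequent_smaller
               for s in combinations(c, len(c) - 1))
    ]

frequent_2 = {frozenset({"a", "b"}), frozenset({"a", "c"}), frozenset({"b", "c"})}
candidates_3 = [frozenset({"a", "b", "c"}), frozenset({"a", "b", "d"})]
print(prune(candidates_3, frequent_2))  # {a,b,d} pruned: {a,d} is not frequent
```

This pruning step is why Apriori avoids counting support for most of the exponentially many possible itemsets.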

 

4. Prediction differs from classification in which of the following senses?

a) Not requiring a training phase

b) The type of the outcome value

c) Using unlabeled data instead of labeled data

d) Prediction is about determining a class

Answer: (b) The type of the outcome value

The type of outcome values of prediction differs from that of classification.

Predicting class labels is classification, and predicting values (e.g. using regression techniques) is prediction.

Classification is the process of identifying the category or class label to which a new observation belongs. Prediction is the process of estimating a missing or unavailable numerical value for a new observation.
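A tiny regression sketch makes the contrast concrete: instead of a class label, the outcome is a continuous number (the sample data below are made up):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (numeric prediction)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1]
a, b = fit_line(xs, ys)
print(a * 5 + b)  # predicted numeric value for x = 5
```

A classifier answering the same query would instead return one label from a predefined set; the difference lies entirely in the type of the outcome value.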

 

5. The statement “if an itemset is infrequent then its supersets must also be infrequent” denotes _______.

a) Maximal frequent set.

b) Border set.

c) Upward closure property.

d) Downward closure property.

Answer: (c) Upward closure property

Any subset of a frequent itemset must be frequent (downward closure property), and any superset of an infrequent itemset must be infrequent (upward closure property). Both are Apriori properties.

 

**********************

 
