Handling Ambiguities in NLP – HOT MCQs with Instant Answers
Test your understanding of how NLP techniques resolve lexical, syntactic, semantic, and pragmatic ambiguities.
Q1. Which NLP technique best resolves the lexical ambiguity in a sentence like "He went to the bank to deposit money"?
A. Dependency parsing
B. Word Sense Disambiguation using context embeddings
C. Lemmatization
D. Stopword removal
Answer: B. Word Sense Disambiguation using context embeddings
Explanation: Contextual embeddings analyze nearby words to infer the correct sense of a word like “bank” (financial vs. river edge).
What is lexical ambiguity?
Lexical ambiguity occurs when a single word has multiple possible meanings, and its correct sense depends on the surrounding context.
How WSD Resolves Ambiguity
Word Sense Disambiguation (WSD) is the computational process of determining which meaning of a word is activated by its use in a particular context. It directly addresses lexical ambiguity, the fundamental challenge where words have multiple meanings that must be distinguished to ensure accurate language understanding.
WSD operates as a classification task where word senses represent the classes, contextual information provides the evidence, and each word occurrence is assigned to the most appropriate sense. This process involves three main stages: identifying possible senses for a word, analyzing the surrounding context, and selecting the most likely meaning based on that context.
Here’s a clear example illustrating how Word Sense Disambiguation (WSD) resolves ambiguity:
Example:
Consider the sentence “He went to the bank to deposit money.”
- Possible senses of “bank”: (1) financial institution, (2) river edge
- Context words: “deposit” and “money”
- WSD decision: Since “deposit” and “money” relate to finance, the algorithm selects the financial institution sense of “bank.”
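As a concrete illustration, the classic Lesk algorithm treats this as gloss overlap: it picks the WordNet sense whose dictionary definition shares the most words with the context. Below is a minimal sketch using NLTK's simplified Lesk implementation (an illustrative baseline, assuming nltk is installed and the WordNet corpus has been downloaded):

```python
# Sketch: dictionary-based WSD with NLTK's simplified Lesk algorithm.
# Assumes: pip install nltk, then nltk.download("wordnet") once.
from nltk.wsd import lesk

context = "He went to the bank to deposit money".split()

# lesk() scores each WordNet sense of "bank" by the overlap between
# its gloss and the context words, returning the best Synset (or None).
sense = lesk(context, "bank", pos="n")
if sense is not None:
    print(sense.name(), "-", sense.definition())
# Lesk is only a weak overlap baseline; modern WSD uses contextual
# embeddings, but the classify-by-context framing is the same.
```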
Q2. How can NLP systems resolve syntactic ambiguity, as in "The girl saw the man with the telescope"?
A. Using stemming before parsing
B. Using probabilistic or neural parsers (PCFG, dependency parsers)
C. Removing prepositions
D. Applying TF-IDF before parsing
Answer: B. Using probabilistic or neural parsers (PCFG, dependency parsers)
Explanation: Probabilistic and neural parsers rank multiple parse trees by likelihood, selecting the most grammatically probable one.
What is syntactic ambiguity?
Syntactic ambiguity occurs when a sentence can be parsed in more than one way due to its grammatical structure. That is, it occurs when a sentence structure permits multiple grammatical interpretations, each leading to a different meaning.
Example: “The girl saw the man with the telescope.” It is unclear whether the girl used the telescope or the man had the telescope.
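To see how a trained parser commits to one reading, here is a sketch using spaCy's statistical dependency parser (assuming spaCy and its en_core_web_sm model are installed); inspecting where the prepositional phrase attaches reveals which interpretation the model found most probable:

```python
# Sketch: see which PP attachment a trained dependency parser picks.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The girl saw the man with the telescope.")

for token in doc:
    print(f"{token.text:10} {token.dep_:6} head={token.head.text}")
# If "with" attaches to "saw", the parser chose the reading where the
# girl used the telescope; if it attaches to "man", the man has it.
```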
Q3. Which techniques help resolve semantic ambiguity, as in "He gave her a ring"?
A. Named Entity Recognition
B. TF-IDF weighting
C. Semantic role labeling (SRL) and contextual embeddings
D. Lemmatization
Answer: C. Semantic role labeling (SRL) and contextual embeddings
Explanation: SRL determines the roles of entities (who did what to whom), reducing confusion about meaning and intent in sentences.
What is semantic ambiguity?
Semantic ambiguity occurs when an entire sentence or phrase can have multiple interpretations based on context and meaning relationships, rather than from individual word meanings (lexical ambiguity) or sentence structure (syntactic ambiguity).
Example: "He gave her a ring" could mean either presenting a piece of jewelry or making a phone call, depending entirely on context.
Semantic role labeling and contextual embeddings
Semantic Role Labeling (SRL) identifies and labels the semantic roles of words or phrases in a sentence, such as agent, patient, location, time, or manner, by answering "who did what to whom, when, where, and how".
Contextual embeddings (such as those from BERT or GPT models) complement SRL by capturing rich semantic meaning based on surrounding context.
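As a small illustration of the embedding half of this approach, the sketch below extracts BERT's contextual vector for "ring" in two disambiguating contexts and compares them (assuming the transformers and torch packages; bert-base-uncased downloads on first use). The sentences are illustrative, and this shows context-sensitive representations rather than a full SRL pipeline:

```python
# Sketch: contextual embeddings give "ring" different vectors in the
# two readings. Assumes: pip install transformers torch; the
# bert-base-uncased checkpoint downloads on first use.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector of `word` within `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    position = enc.input_ids[0].tolist().index(
        tokenizer.convert_tokens_to_ids(word)
    )
    return hidden[position]

jewelry = embed("He gave her a ring made of gold.", "ring")
call = embed("He gave her a ring on the phone.", "ring")

# Below 1.0: the same word gets context-dependent representations.
print(torch.cosine_similarity(jewelry, call, dim=0).item())
```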
Q4. Which techniques help resolve pragmatic ambiguity, as in "Can you open the door?"
A. Grammar correction models
B. Discourse analysis and dialogue act classification
C. Stemming
D. POS tagging
Answer: B. Discourse analysis and dialogue act classification
Explanation: Dialogue act classification helps NLP models understand intended meaning—here a request, not a literal question.
Pragmatic ambiguity
Pragmatic ambiguity arises when the intended meaning of a sentence depends on context, tone, or speaker intention rather than its literal wording. For example, “Can you open the door?” can be a question about ability or a polite request for action.
Discourse analysis and dialogue act classification help NLP systems interpret such contextual meaning by analyzing the function of utterances within a conversation — distinguishing between a request, command, or query — thereby reducing pragmatic ambiguity.
Another example: “Can you pass the salt?”
- Literal meaning: the speaker is asking whether the listener has the ability to pass the salt.
- Intended meaning: the speaker is politely requesting that the listener pass the salt.
Why is this pragmatic ambiguity?
Because the ambiguity arises from pragmatics — how language is used in context — not from grammar or word meaning. The intended interpretation depends on social and conversational context, not the literal structure of the sentence.
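One lightweight way to approximate dialogue act classification is zero-shot classification with a pretrained NLI model, as sketched below (assuming the transformers package; the candidate labels are illustrative, and a production system would use a classifier trained on a dialogue-act corpus):

```python
# Sketch: approximating dialogue act classification with zero-shot
# classification. Assumes: pip install transformers (plus torch); the
# pipeline's default NLI model downloads on first use.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")

# Candidate dialogue acts are illustrative labels, not a standard tagset.
acts = ["polite request for action", "question about ability"]

result = classifier("Can you pass the salt?", candidate_labels=acts)
print(result["labels"][0])  # usually the request reading ranks first
```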
Q5. Which technique resolves referential ambiguity, such as deciding whom "he" refers to in "John told Mary that he would help her"?
A. Dependency parsing
B. Coreference resolution
C. Tokenization
D. Lemmatization
Answer: B. Coreference resolution
Explanation: Coreference models identify when words like “he,” “she,” or “it” refer to the same entity, resolving pronoun ambiguity.
What is referential ambiguity?
Referential ambiguity occurs when a pronoun or noun phrase in a text could refer to multiple entities, making it unclear which entity is being discussed. For example, in the sentence "John told Mary that he would help her," the pronoun "he" could theoretically refer to either John or another male mentioned earlier in the discourse.
What is coreference resolution?
Coreference resolution is an NLP task that determines when different words or phrases refer to the same real-world entity in a text or conversation. It links together pronouns, noun phrases, and proper names that refer to the same thing, resolving ambiguity about who or what is being talked about.
Coreference resolution directly addresses referential ambiguity by determining what each referring expression (like “he,” “she,” “it,” or “this”) actually points to in context.
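As a hands-on sketch, the snippet below runs an off-the-shelf coreference model over the earlier example using the fastcoref package (an assumption: the package and its default model must be installed, and the exact API may vary across versions):

```python
# Sketch: off-the-shelf coreference resolution with fastcoref.
# Assumes: pip install fastcoref; its default English model downloads
# on first use, and this predict/get_clusters API may vary by version.
from fastcoref import FCoref

model = FCoref()
preds = model.predict(
    texts=["John told Mary that he would help her move next week."]
)
# Clusters group mentions of the same entity, e.g.
# [['John', 'he'], ['Mary', 'her']]
print(preds[0].get_clusters())
```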
Q6. Which technique resolves the scope ambiguity in "Every student read a book"?
A. Syntax-only parsing
B. Stopword removal
C. POS tagging
D. Logical form generation and quantifier scoping
Answer: D. Logical form generation and quantifier scoping
Explanation: Logical form representations clarify whether each student read the same or different books by modeling quantifier scope.
What is scope ambiguity?
Scope ambiguity occurs when a sentence contains quantifiers (like every, some, a) and it’s unclear how far their meaning extends — i.e., which part of the sentence each quantifier governs.
Example: The classic sentence "Every student read a book" has two possible interpretations depending on the scope of the quantifiers.
Understanding the ambiguity of "Every student read a book":
- Interpretation 1 (narrow scope for "a book"): for each student, there exists a (possibly different) book that the student read.
- Interpretation 2 (narrow scope for "every student"): there exists one specific book that all students read.
The grammatical structure alone doesn't disambiguate which reading is intended.
How does NLP resolve scope ambiguity?
To distinguish between these interpretations, NLP systems use:
- Logical form generation: representing sentences in a formal, logical structure.
- Quantifier scoping: determining which quantifier (“every”, “a”, “some”) applies to which parts of the sentence.
This process converts natural language sentences into formal logical representations where the scope of each quantifier is explicitly specified. This helps machines understand the intended meaning beyond surface syntax.
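For the example above, the two scoped logical forms can be written in standard first-order notation (a textbook rendering, independent of any particular parser's output format):

```latex
% Reading 1 ("every" outscopes "a"): each student read some book,
% possibly a different book per student.
\forall x \,\big(\mathit{student}(x) \rightarrow
    \exists y \,(\mathit{book}(y) \wedge \mathit{read}(x, y))\big)

% Reading 2 ("a" outscopes "every"): one particular book was read
% by every student.
\exists y \,\big(\mathit{book}(y) \wedge
    \forall x \,(\mathit{student}(x) \rightarrow \mathit{read}(x, y))\big)
```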
Q7. Which technique handles coordination ambiguity, as in "I like cooking and my family"?
A. Dependency parsing with coordination detection
B. Tokenization
C. Lemmatization
D. Spell correction
Answer: A. Dependency parsing with coordination detection
Explanation: Dependency parsers analyze grammatical links and help decide if “old” or other modifiers apply to one or both elements.
Coordination ambiguity
Coordination ambiguity happens when it is unclear which parts of a sentence are joined by a conjunction such as and, or, or but. Such sentences are ambiguous because the coordination structure can be interpreted in more than one way.
Example:
“I like cooking and my family.”
This can mean:
- the speaker likes two things, cooking and their family, or
- the speaker likes "cooking my family," the absurd reading in which "my family" is the object of "cooking."
How does dependency parsing with coordination detection handle coordination ambiguity?
It analyzes the grammatical relationships in a sentence: it identifies which words are heads (main verbs or nouns) and which words or phrases are coordinated with them through conjunctions. The resulting dependency structure explicitly marks which elements are coordinated and how they relate to the main predicate.
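The sketch below applies spaCy's dependency parser to the example sentence (assuming the en_core_web_sm model is installed) and prints the coordination-related relations, showing which reading the parser selected:

```python
# Sketch: inspect how the parser coordinates "cooking" and "my family".
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I like cooking and my family.")

for token in doc:
    if token.dep_ in ("conj", "cc", "dobj"):
        print(f"{token.text:8} {token.dep_:5} head={token.head.text}")
# If "family" is a conj of "cooking", both are coordinated objects of
# "like" (the intended reading) rather than "family" being cooked.
```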
Q8. Which models help resolve discourse-level ambiguity, where references span multiple sentences?
A. Coreference resolution models
B. Word2Vec embeddings
C. Bag-of-Words models
D. TF-IDF models
Answer: A. Coreference resolution models
Explanation: Discourse-level ambiguity involves understanding how entities and references connect across sentences. Coreference resolution helps identify when different expressions refer to the same entity, improving coherence and meaning at the document level.
Discourse-level ambiguity arises when meaning depends on how different sentences relate to each other — especially when pronouns or phrases refer to earlier entities.
For example: "John met David at the conference. He gave a great talk." Here, it's unclear whether "he" refers to John or David. Coreference resolution models are designed to resolve such ambiguities by identifying which words or phrases refer to the same real-world entity across sentences.
Q9. How does multi-task learning improve ambiguity resolution in NLP?
A. By focusing on one linguistic task
B. By simplifying models with fewer layers
C. By jointly training POS tagging, NER, and parsing for shared understanding
D. By removing ambiguous sentences
Answer: C. By jointly training POS tagging, NER, and parsing for shared understanding
Explanation: Multi-task models learn richer syntactic and semantic relations, improving disambiguation across linguistic levels.
Multi-task learning (MTL) is a machine learning approach where a single model is trained simultaneously on multiple related tasks, allowing it to learn shared representations and linguistic knowledge that benefit all tasks. This contrasts with single-task learning, where models are trained on isolated objectives. For NLP ambiguity resolution, MTL is particularly powerful because different linguistic phenomena are often interdependent.
Example: If a model learns that "book" is typically a noun (POS tagging), this helps it recognize that "book" in "I read a book" is an object, which informs dependency parsing and helps with coreference resolution of pronouns like "it."
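A minimal sketch of this idea is a single shared encoder with separate task heads, as below (assuming PyTorch; the vocabulary size, dimensions, and tag counts are illustrative placeholders, not values from any particular system):

```python
# Sketch: multi-task learning with one shared encoder and two task
# heads (POS tagging and NER). Assumes: pip install torch.
import torch
import torch.nn as nn

class SharedTagger(nn.Module):
    def __init__(self, vocab_size=10_000, dim=128, n_pos=17, n_ner=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)           # shared
        self.encoder = nn.LSTM(dim, dim, batch_first=True)   # shared
        self.pos_head = nn.Linear(dim, n_pos)  # task-specific head
        self.ner_head = nn.Linear(dim, n_ner)  # task-specific head

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.pos_head(states), self.ner_head(states)

model = SharedTagger()
tokens = torch.randint(0, 10_000, (1, 6))  # one 6-token sentence
pos_logits, ner_logits = model(tokens)

# During training, summing the per-task losses lets gradients from
# both tasks shape the shared encoder:
#   loss = ce(pos_logits.transpose(1, 2), pos_gold) \
#        + ce(ner_logits.transpose(1, 2), ner_gold)
print(pos_logits.shape, ner_logits.shape)  # (1, 6, 17) and (1, 6, 9)
```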
Q10. Which techniques address cross-lingual ambiguity in multilingual NLP systems?
A. Cross-lingual embeddings and alignment models
B. Transliteration
C. Token frequency normalization
D. Data augmentation
Answer: A. Cross-lingual embeddings and alignment models
Explanation: Cross-lingual embeddings map similar words across languages into shared space, minimizing translation ambiguity.
Cross-lingual ambiguity
Cross-lingual ambiguity arises when words, phrases, or concepts have different meanings, grammatical properties, or pragmatic functions across languages. Multilingual NLP systems must recognize that ambiguities don't resolve uniformly across language boundaries and that a word's meaning in one language may not translate directly to another, or may be ambiguous in different ways.
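As a sketch, multilingual sentence encoders place translations near each other in one vector space, so the correct Spanish counterpart of each English "bank" usage can be found by cosine similarity (assuming the sentence-transformers package; the model name is a public multilingual checkpoint that downloads on first use):

```python
# Sketch: a shared multilingual embedding space aligns each English
# "bank" usage with its Spanish counterpart. Assumes: pip install
# sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

english = ["He deposited cash at the bank.",
           "They fished on the bank of the river."]
spanish = ["Depositó dinero en el banco.",
           "Pescaron en la orilla del río."]

# Cosine similarities between all English/Spanish pairs; the diagonal
# (bank -> banco, river bank -> orilla) should dominate.
print(util.cos_sim(model.encode(english), model.encode(spanish)))
```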
Overview: NLP Techniques for Handling Different Types of Ambiguities
| Ambiguity Type | Example | NLP Technique |
|---|---|---|
| Lexical ambiguity | "He went to the bank to deposit money" (financial institution vs. river edge) | Word Sense Disambiguation using context embeddings |
| Syntactic ambiguity | "The girl saw the man with the telescope" (who has the telescope?) | Probabilistic or neural parsers (PCFG, dependency parsers) |
| Semantic ambiguity | "He gave her a ring" (jewelry vs. phone call) | Semantic role labeling and contextual embeddings |
| Pragmatic ambiguity | "Can you open the door?" (genuine question vs. polite request) | Discourse analysis and dialogue act classification |
| Referential ambiguity | "John told Mary that he would help her" (who is "he"?) | Coreference resolution |
| Scope ambiguity | "Every student read a book" (does each student read a different book, or do all read the same one?) | Logical form generation and quantifier scoping |
| Coordination ambiguity | "I like cooking and my family" (which elements are conjoined?) | Dependency parsing with coordination detection |
| Discourse-level ambiguity (reference-specific) | "John met Mary at the conference. He gave her a book. It was impressive." (tracking entities across sentences) | Coreference resolution models |
| Cross-lingual ambiguity | English "bank" (financial institution vs. riverbank) maps differently to Spanish "banco" vs. "orilla"; pragmatic functions differ across languages | Cross-lingual embeddings and alignment models |
