lecture 8 Flashcards

1
Q

constituency tree vs dependency tree

A
  • The constituency tree depicts the hierarchical syntactic structure by breaking the sentence down into nested sub-phrases.
    –> Focuses on hierarchical structure and phrase groupings.
  • The dependency tree emphasizes the grammatical relationships between words: it shows which words depend on which others.
    –> Focuses on direct word-to-word relationships and dependencies.
2
Q

dependency grammar

A

directed binary grammatical relations between words

direct encoding of the relationship between predicates (verbs) and their arguments (nouns)

more popular than constituency grammar because of its emphasis on predicate-argument structure

3
Q

predicates

A

functions that take different numbers and kinds of arguments

4
Q

dependency grammar: arcs

A

arcs go from heads to dependents.
–> heads are often predicates, while dependents are often arguments

each token has exactly one incoming edge (its head)

root is the head of the entire structure

especially useful for languages with free word order
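The arc structure above can be sketched as a head array: each token records the index of its head, and the root has none. This is a minimal illustration with a hypothetical parse of the lecture's walrus example, not output from a real parser.

```python
# Toy dependency tree for "the walrus ate a sea cucumber".
# heads[i] is the index of token i's head; the root's head is None.
# Labels are illustrative guesses, not gold annotations.
tokens = ["the", "walrus", "ate", "a", "sea", "cucumber"]
heads  = [1,     2,        None, 5,   5,     2]
labels = ["det", "nsubj",  "root", "det", "compound", "obj"]

def dependents(i):
    """All tokens whose incoming arc comes from token i (the head)."""
    return [j for j, h in enumerate(heads) if h == i]

# Every token has exactly one incoming edge, except the root, which has none.
root = heads.index(None)
assert all(h is not None for j, h in enumerate(heads) if j != root)
```

Here the predicate "ate" is the root and heads both its arguments, matching the head-to-dependent direction of the arcs.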

5
Q

why does dependency parsing matter for meaning

A

resolves attachment ambiguities that can matter for meaning

  1. grammatical structure of a sentence based on the relationships between words
  2. syntactic dependencies can be close to semantic relations
  3. applicable across languages
6
Q

for what types of tasks might dependency parsing be useful

A
  1. information extraction
  2. machine translation
7
Q

syntax generally:

A
  1. allows generalization from specific units to abstract categories that are often cross-linguistically valid
  2. allows for understanding of constituency (how words group together into linguistic units)
  3. shows how constituents relate to each other structurally and functionally
8
Q

what information does syntax give us about meaning

A
  1. abstract categories can tell us something about the meaning of a constituent
    –> e.g., nouns
  2. abstract categories can tell us something about the meaning of a sentence
    –> closed-class/function words

syntactic structure is the skeleton with which we build our mental representations, enabling us both to generalize and to constrain possible meanings

9
Q

is syntax enough to enable a model to understand meaning: the Chinese Room experiment

A
  • suppose AI research has produced a computer that behaves as if it understands Chinese: Chinese input produces coherent Chinese output (it passes the Turing test)
  • does the machine literally understand Chinese (strong AI), or does it merely simulate understanding of Chinese (weak AI)?
10
Q

why is syntax insufficient for understanding language

A
  1. syntax alone does not ground words in actions, objects, etc. in the world
  2. we cannot evaluate the grammaticality, truth, or naturalness of an utterance with only syntax
  3. language is flexible:
    –> often we want to know who did what to whom
    –> the same event and participants can have different syntactic relations
    –> ex: [the walrus ate a sea cucumber] vs [a sea cucumber was eaten by the walrus]
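The active/passive pair above can be made concrete: the semantic roles stay fixed while the syntactic relations shift. The role and relation assignments below are hand-written for illustration, not parser output.

```python
# Same event, same participants, different syntax.
# AGENT/THEME are semantic roles; nsubj/obj/obl are syntactic relations.
active  = {"predicate": "eat", "AGENT": "the walrus", "THEME": "a sea cucumber",
           "nsubj": "the walrus", "obj": "a sea cucumber"}
passive = {"predicate": "eat", "AGENT": "the walrus", "THEME": "a sea cucumber",
           "nsubj": "a sea cucumber", "obl": "the walrus"}

# The semantic roles agree across both sentences...
same_roles = all(active[r] == passive[r] for r in ("AGENT", "THEME"))
# ...but the syntactic subject does not.
same_subject = active["nsubj"] == passive["nsubj"]
```

This is exactly why syntax alone cannot answer "who did what to whom": the subject relation picks out a different participant in each variant.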
11
Q

the task of semantic role labeling (SRL)

A

task of identifying which constituents play which roles in an event

typically framed as a supervised classification task

12
Q

semantic role labeling vs dependency parsing

A

Focus:

  • SRL: Focuses on the roles that words or phrases play in the context of an event. It answers questions like who did what to whom, when, where, and how.
  • Dependency Parsing: Focuses on the grammatical relationships between words, identifying syntactic structures and dependencies.

Output:

  • SRL: Produces semantic labels that describe the roles of words in relation to the main verb or action. For example, identifying the agent, theme, instrument, etc.
  • Dependency Parsing: Produces a dependency tree showing syntactic dependencies between words, such as subject, object, modifier, etc.
13
Q

argument structure

A
  • the lexical representation of items (predicates) that take arguments
  • indicates how many participants (arguments) an item has, what their semantic relation (role) is to the item, and what their syntactic expression (e.g., nsubj, dobj) is.
14
Q

uniformity of theta-assignment hypothesis (UTAH)

A
  • states that identical semantic relations between items are represented by identical structural (syntactical) relationships between items
  • if semantic roles/relations determine structural/syntactic relations, we can use our knowledge about syntax to help determine semantic roles
15
Q

selectional restrictions of semantic roles

A

allow predicates to constrain the semantics of their arguments

16
Q

semantic or thematic roles

A

abstract models of the role an argument plays in the event described by the predicate

17
Q

resources for SRL

A
  1. propbank: verb-oriented –> simpler, more data
  2. framenet: frame-oriented –> richer, less data
18
Q

the proposition bank (propbank)

A
  • predicate-argument lexicon with frame files for each verb
    –> detailing the various senses and roles associated with the verb
  • coarse categorical labels (ARG0, ARG1) that capture some syntactic variation and shallow semantics
  • includes annotations on constituents as found in the Penn Treebank
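A PropBank frame entry can be sketched as a per-sense role inventory. The structure below is a hand-written simplification (real frame files are XML and far richer); the "eat" role glosses are illustrative.

```python
# Minimal sketch of a PropBank-style frame entry for one verb sense.
frame = {
    "lemma": "eat",
    "sense": "eat.01",
    "roles": {
        "ARG0": "eater",          # coarse label: the agent-like argument
        "ARG1": "thing eaten",    # coarse label: the patient/theme-like argument
    },
}

# Labeling the lecture's walrus example with these coarse arguments:
annotation = [("the walrus", "ARG0"), ("ate", "eat.01"), ("a sea cucumber", "ARG1")]
```

The coarse ARG0/ARG1 labels are what makes PropBank "simpler, more data" compared to FrameNet's frame-specific elements.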
19
Q

framenet

A
  • semantic frames: conceptual structures that describe an event and its participants
  • core and non-core frame elements to add rich semantic information
20
Q

SRL traditional pipeline

A
  1. assume or compute syntactic parse and predicate senses
    –> use broad-coverage parser
    –> traverse parse to find all predicates
  2. argument identification
    –> select the predicate’s argument phrases by traversing the parse tree
  3. argument classification
    –> select a role for each argument using supervised classification (with respect to the frame roles for the predicate’s sense)
21
Q

feature-based algorithm for SRL

A
  1. assign syntactic parse to input string
  2. traverse parse to find all predicates
  3. for each predicate, examine each node in the parse tree and use supervised classification to decide the semantic role it plays for the predicate (if any)
22
Q

features for SRL

A

given a labeled training set, a feature vector is extracted for each node

common feature templates:
- governing predicate
- phrase type
- headword POS

this information comes from syntax
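Extraction of the feature templates above can be sketched for a single candidate node. The node and predicate dictionaries are made-up stand-ins for parse-tree objects, purely for illustration.

```python
# Sketch of per-node feature extraction using the three templates above.
def extract_features(node, predicate):
    return {
        "governing_predicate": predicate["lemma"],
        "phrase_type": node["phrase_type"],   # syntactic category, e.g. NP, PP
        "headword_pos": node["head_pos"],     # POS tag of the node's head word
    }

node = {"phrase_type": "NP", "head_pos": "NN", "text": "a sea cucumber"}
predicate = {"lemma": "eat"}
features = extract_features(node, predicate)
# -> {'governing_predicate': 'eat', 'phrase_type': 'NP', 'headword_pos': 'NN'}
```

All three values come from the syntactic parse, which is the point of the card: the classifier's evidence is syntax.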

23
Q

evaluation of SRL

A
  • goal: compute highest probability tag sequence given an input sequence of words
  • evaluation on unseen test sentences
  • each argument label must be assigned to the correct word sequence or parse constituent
  • compute precision, recall, F1
  • common evaluation datasets are CoNLL and OntoNotes
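The metrics above can be computed on labeled argument spans: a prediction counts as correct only if both the span and the role label match the gold annotation exactly. The toy gold/predicted sets below are invented for illustration.

```python
# Span-level SRL evaluation: exact match on (span, role) pairs.
gold = {(("the", "walrus"), "ARG0"), (("a", "sea", "cucumber"), "ARG1")}
pred = {(("the", "walrus"), "ARG0"), (("sea", "cucumber"), "ARG1")}

tp = len(gold & pred)                    # true positives: exact matches
precision = tp / len(pred)
recall = tp / len(gold)
f1 = 2 * precision * recall / (precision + recall)
# The ARG1 span boundary is wrong, so precision = recall = f1 = 0.5.
```

Getting the boundary wrong costs both a false positive and a false negative, which is why boundary errors are punished so heavily under this metric.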
24
Q

example system architecture SRL

A

main idea: treat SRL as neural sequence labeling task, similar to NER

  1. sentence is passed through encoder, which generates contextualized embeddings for each word
  2. concatenation of word embeddings with predicate information
  3. FFN extracts relevant features of each word
  4. decoder: CRF + biLSTM layer
  5. output: distribution over the SRL labels for each word, which indicates the likelihood of each label
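The "SRL as sequence labeling, like NER" framing above amounts to converting argument spans into per-token BIO tags. This toy conversion illustrates the label scheme only, not the neural model itself.

```python
# Convert SRL argument spans to per-token BIO tags, as in NER.
def spans_to_bio(n_tokens, spans):
    """spans: list of (start, end_exclusive, role) over token indices."""
    tags = ["O"] * n_tokens
    for start, end, role in spans:
        tags[start] = f"B-{role}"            # beginning of the argument span
        for i in range(start + 1, end):
            tags[i] = f"I-{role}"            # inside the argument span
    return tags

tokens = ["the", "walrus", "ate", "a", "sea", "cucumber"]
spans = [(0, 2, "ARG0"), (3, 6, "ARG1")]
# -> ['B-ARG0', 'I-ARG0', 'O', 'B-ARG1', 'I-ARG1', 'I-ARG1']
```

The encoder/FFN/CRF stack then predicts exactly one such tag per token, and the CRF discourages invalid transitions like `O` followed by `I-ARG1`.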
25
Q

SRL limitations

A
  1. no universally agreed upon set of roles that suffice for all predicates
    –> predicates can be construed differently within and across languages
  2. items with the same role might not behave the same
    –> [charlie opened the door with a key] vs [charlie ate the sea cucumber with a fork]
  3. need for annotated data
26
Q

what is meaning

A
  1. mapping words onto concepts
  2. transmitting information (between entities)
  3. inference (meaning extends beyond immediate context)
  4. getting things done (language facilitates action and response)
27
Q

types of semantics

A
  1. lexical semantics
    –> aims to represent the meaning of individual words (and their relations)
  2. logical semantics
    –> aims to represent the meaning of entire sentences
    –> requires some sort of meaning representation
28
Q

first order logic

A
  • represents objects, properties, and relationships
  • makes use of selectional restrictions
  • nouns are one-place predicates
  • adjectives are one-place predicates
  • verbs are one-place, two-place, or three-place predicates
  • form: relation(constant1, constant2)
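The `relation(constant1, constant2)` form and selectional restrictions above can be sketched with atoms as tuples. The predicate inventory and the animacy restriction on "eat" are illustrative assumptions, not a full logic.

```python
# FOL atoms as (relation, constant, ...) tuples, with one toy
# selectional restriction: "eat" requires an animate first argument.
ANIMATE = {"charlie", "walrus"}   # assumed animacy list, for illustration

def well_formed(atom):
    relation, *args = atom
    if relation == "eat":                 # verb: two-place predicate
        return len(args) == 2 and args[0] in ANIMATE
    if relation in ("walrus", "tasty"):   # noun / adjective: one-place predicates
        return len(args) == 1
    return False

# eat(walrus, cucumber) satisfies the restriction; eat(cucumber, walrus) does not.
```

This mirrors the card on selectional restrictions: the predicate constrains the semantics of its arguments, here by rejecting an inanimate eater.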
29
Q

event semantics

A

clauses in natural languages have a core that is a description of events

  1. core: at least a main verb, also syntactic subject, objects, etc.
  2. event: spatiotemporal things that help ground language
    –> allow us to treat parts of a clause that are separate syntactically as separate predicates semantically

By using this event-based approach, we can holistically describe the who, what, where, when, and how of the action in a structured and precise manner
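The event-based decomposition above is often written neo-Davidsonian style: an event variable plus separate semantic predicates over it, so syntactically separate parts of the clause become separate conjuncts. The dictionary encoding below is an illustrative sketch of that idea, not a standard format.

```python
# "the walrus ate a sea cucumber"
#   ~ exists e. eating(e) & agent(e, walrus) & theme(e, sea_cucumber) & past(e)
event = {
    "eating": "e1",                      # the event itself
    "agent": ("e1", "walrus"),           # who did it
    "theme": ("e1", "sea_cucumber"),     # what it was done to
    "tense": ("e1", "past"),             # when (grounding in time)
}

def participants(ev):
    """The who/what of the event: all entities related to it (tense aside)."""
    return {v[1] for k, v in ev.items() if isinstance(v, tuple) and k != "tense"}
```

Because each conjunct is independent, extra modifiers (where, how, with what) can be added as further predicates over `e1` without restructuring the rest.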

30
Q

compositional event semantics

A
  • we can build complex expressions from the basic symbols we’ve defined
  • we can make general assertions using quantifiers and other operations, and restrict assertions to individuals using relativization
  • we can use deductive rules to determine which statements are true and which are false
31
Q

purpose of semantic roles in NLP

A

help us map syntactic structures to real world meanings

though there are patterns, they are not trivially inferable from syntax

32
Q

impact of the design of meaning representations

A
  1. impacts how expressive these representations can be
    –> certain types of grammars can express sentence-meaning relations with bounded memory during parsing that others cannot
  2. impacts how we design and evaluate our semantic parsers
    –> shared structure across meaning representations built on an understanding of compositionality allows for more accurate and efficient parsing across these representations
33
Q

syntactic trees vs semantic graphs

A
  • syntactic trees: showcase how individual words and their syntactic categories fit together hierarchically, detailing the relationships between them
    –> structural roles: root, adv, etc.
  • semantic graphs: highlight the relational dynamic between elements, moving beyond hierarchical syntax to include the roles and connections integral to understanding the sentence’s meaning
    –> semantic roles: ARG1, ARG2, BV
34
Q

visually grounded SRL

A

advancing from abstract semantic representations of text to anchoring those semantics in images

aims to map language directly to visual scenes

35
Q

can meaning be derived from (logical) form alone?

conceptual role theory

A

no, simply learning patterns and structures in language data (form) is insufficient for truly understanding meaning.

traditional assumption: the meaning of words and sentences is determined by their reference to things in the world
–> understanding meaning involves mapping linguistic expressions to their corresponding real-world entities and events

conceptual role theory: meaning lies in how a system’s internal representational states relate to each other
–> according to this theory, meaning is not solely about external reference but about the relationships and roles within the system’s internal representations