Chapter 2 Flashcards

1
Q

What is the topic of Chapter 2?

A

Accessing text corpora and lexical resources.

2
Q

What is a text corpus?

A

A text corpus is a large body of text. Many corpora are designed to contain a careful balance of material in one or more genres.

3
Q

What is the Gutenberg Corpus?

A

NLTK includes a small selection of texts from the Project Gutenberg electronic text archive, which contains some 25,000 free electronic books, hosted at http://www.gutenberg.org/

4
Q

Importing words from NLTK using a direct Python import

A

>>> import nltk
>>> nltk.corpus.gutenberg.fileids()
>>> emma = nltk.corpus.gutenberg.words('austen-emma.txt')

(Let's pick out the first of these texts, austen-emma.txt.)

5
Q

Python provides another version of the import statement. What does it look like?

A

>>> from nltk.corpus import gutenberg
>>> gutenberg.fileids()
['austen-emma.txt', 'austen-persuasion.txt', 'austen-sense.txt', ...]
>>> emma = gutenberg.words('austen-emma.txt')

6
Q

Write a program to display other information about each text by looping over all the values of fileid for the Gutenberg corpus and computing statistics.

A

>>> for fileid in gutenberg.fileids():
...     num_chars = len(gutenberg.raw(fileid))
...     num_words = len(gutenberg.words(fileid))
...     num_sents = len(gutenberg.sents(fileid))
...     num_vocab = len(set(w.lower() for w in gutenberg.words(fileid)))
...     print(round(num_chars/num_words), round(num_words/num_sents), round(num_words/num_vocab), fileid)
...
5 25 26 austen-emma.txt
5 26 17 austen-persuasion.txt
5 28 22 austen-sense.txt
4 34 79 bible-kjv.txt
5 19 5 blake-poems.txt
4 19 14 bryant-stories.txt

This program displays three statistics for each text: average word length, average sentence length, and the number of times each vocabulary item appears in the text on average (our lexical diversity score). Observe that average word length appears to be a general property of English, since it has a recurrent value of 4. (In fact, the average word length is really 3, not 4, since the num_chars variable counts space characters.) By contrast, average sentence length and lexical diversity appear to be characteristics of particular authors.

7
Q

What does the raw() function do?

A

The raw() function gives us the contents of the file without any linguistic processing. For example, len(gutenberg.raw('blake-poems.txt')) tells us how many letters occur in the text, including the spaces between words.

8
Q

What is the sents() function?

A

The sents() function divides the text up into its sentences, where each sentence is a list of words:
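A minimal sketch (assuming the Gutenberg corpus data has been downloaded; output elided):

>>> from nltk.corpus import gutenberg
>>> sentences = gutenberg.sents('shakespeare-macbeth.txt')
>>> sentences[0]     # the first sentence, itself a list of words
>>> len(sentences)   # how many sentences the text contains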

9
Q

What is the Brown Corpus?

A

The Brown Corpus was the first million-word electronic corpus of English, created in 1961 at Brown University. This corpus contains text from 500 sources, and the sources have been categorized by genre, such as news, editorial, and so on. We can access the corpus as a list of words, or a list of sentences (where each sentence is itself just a list of words). We can optionally specify particular categories or files to read, as shown below:
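A short sketch of those access patterns (the category names and the 'cg22' fileid follow the NLTK book; output elided):

>>> from nltk.corpus import brown
>>> brown.words(categories='news')
>>> brown.words(fileids=['cg22'])
>>> brown.sents(categories=['news', 'editorial', 'reviews'])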

10
Q

How to find the categories in the Brown Corpus?

A

>>> from nltk.corpus import brown
>>> brown.categories()

11
Q

In which cases is the Brown Corpus important?

A

The Brown Corpus is a convenient resource for studying systematic differences between genres, a kind of linguistic inquiry known as stylistics.

12
Q

Write a program to compare genres in their usage of modal verbs.

A

>>> from nltk.corpus import brown
>>> news_text = brown.words(categories='news')
>>> fdist = nltk.FreqDist(w.lower() for w in news_text)
>>> modals = ['can', 'could', 'may', 'might', 'must', 'will']
>>> for m in modals:
...     print(m + ':', fdist[m], end=' ')
...
can: 94 could: 87 may: 93 might: 38 must: 53 will: 389

We need to include end=' ' in order for the print function to put its output on a single line.

13
Q

If you want to use FreqDist, what do you need to import?

A

from nltk import FreqDist

14
Q

What is the Reuters Corpus?

A

The Reuters Corpus contains 10,788 news documents totaling 1.3 million words. The documents have been classified into 90 topics, and grouped into two sets, called "training" and "test"; thus, the text with fileid 'test/14826' is a document drawn from the test set.
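A minimal sketch for inspecting it (standard NLTK corpus reader methods; output elided):

>>> from nltk.corpus import reuters
>>> reuters.fileids()     # includes 'test/14826' among others
>>> reuters.categories()  # the 90 topics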

15
Q

Inaugural Address Corpus

A

We looked at the Inaugural Address Corpus earlier, but treated it as a single text. However, the corpus is actually a collection of 55 texts, one for each presidential address. An interesting property of this collection is its time dimension.

16
Q

Example Inaugural corpus

A

>>> from nltk.corpus import inaugural
>>> inaugural.fileids()
['1789-Washington.txt', '1793-Washington.txt', '1797-Adams.txt', ...]
>>> [fileid[:4] for fileid in inaugural.fileids()]
['1789', '1793', '1797', '1801', '1805', '1809', '1813', '1817', '1821', ...]

17
Q

Annotated Text Corpora for teaching and learning

A

Many text corpora contain linguistic annotations, representing POS tags, named entities, syntactic structures, semantic roles, and so forth. NLTK provides convenient ways to access several of these corpora, and has data packages containing corpora and corpus samples, freely downloadable for use in teaching and research.

18
Q

Some of the corpora

A
1. Movie Reviews (Pang, Lee): 2k movie reviews with sentiment polarity classification
2. SentiWordNet (Esuli, Sebastiani): sentiment scores for 145k WordNet synonym sets
3. Wordlist Corpus (OpenOffice.org et al.): 960k words and 20k affixes for 8 languages
4. WordNet 3.0 (English) (Miller, Fellbaum): 145k synonym sets
19
Q

Loading your own Corpus

A

If you have your own collection of text files that you would like to access using the above methods, you can easily load them with the help of NLTK's PlaintextCorpusReader.

1. The second parameter of the PlaintextCorpusReader initializer can be a list of fileids, like ['a.txt', 'test/b.txt'], or a pattern that matches all fileids, like '[abc]/.*.txt'
20
Q

Example of loading your own corpus files

A

>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = '/usr/share/dict'
>>> wordlists = PlaintextCorpusReader(corpus_root, '.*')
>>> wordlists.fileids()
['README', 'connectives', 'propernames', 'web2', 'web2a', 'words']
>>> wordlists.words('connectives')
['the', 'of', 'and', 'to', 'a', 'in', 'that', 'is', ...]

21
Q

Generating random text with bigrams example?

A

The bigrams() function takes a list of words and builds a list of consecutive word pairs. Remember that, in order to see the result and not a cryptic "generator object", we need to use the list() function:

>>> import nltk
>>> sent = ['In', 'the', 'beginning', 'God', 'created', 'the', 'heaven']
>>> list(nltk.bigrams(sent))
[('In', 'the'), ('the', 'beginning'), ('beginning', 'God'), ('God', 'created'), ('created', 'the'), ('the', 'heaven')]
>>>

22
Q

Advice on testing code with an editor?

A

• It is often convenient to test your ideas using the interpreter, revising a line of code until it does what you expect.
Once you're ready, you can paste the code (minus any >>> or ... prompts) into the text editor, continue to expand it, and finally save the program in a file so that you don't have to type it in again later.

23
Q

How to name a file in Python?

A

Give the file a short but descriptive name, using all lowercase letters and separating words with underscores.

24
Q

Functions

A

A function is just a named block of code that performs some well-defined task.

• We define a function using the keyword def followed by the function name and any input parameters, followed by the body of the function.

25
Q

A simple plural() function?

A

def plural(word):
    if word.endswith('y'):
        return word[:-1] + 'ies'
    elif word[-1] in 'sx' or word[-2:] in ['sh', 'ch']:
        return word + 'es'
    elif word.endswith('an'):
        return word[:-2] + 'en'
    else:
        return word + 's'

>>> plural('fairy')
'fairies'
>>> plural('woman')
'women'

26
Q

endswith() method or function

A

• The endswith() function is always associated with a string object (e.g., word in the plural() example above). To call such functions, we give the name of the object, a period, and then the name of the function. These functions are usually known as methods.

27
Q

What is a module?

A

• A collection of variable and function definitions in a file is called a Python module.

28
Q

What is a package?

A

• A collection of related modules is called a package.
NLTK’s code for processing the Brown Corpus is an example of a module, and its collection of code for processing all the different corpora is an example of a package. NLTK itself is a set of packages, sometimes called a library.

29
Q

What is a lexicon?

A

A lexicon, or lexical resource, is a collection of words and/or phrases along with associated information such as part of speech and sense definitions.

30
Q

What is a lexical resource?

A

Lexical resources are secondary to texts, and are usually created and enriched with the help of texts. For example, if we have defined a text my_text, then vocab = sorted(set(my_text)) builds the vocabulary of my_text, while word_freq = FreqDist(my_text) counts the frequency of each word in the text. Both vocab and word_freq are simple lexical resources.

31
Q

What is a lexical entry?

A

A lexical entry consists of a headword (also known as a lemma) along with additional information such as the part of speech and the sense definition. Two distinct words having the same spelling are called homonyms.

33
Q

What is a lexical entry? Explain diagrammatically.

A
34
Q

Wordlist Corpora

A

NLTK includes some corpora that are nothing more than wordlists. The Words Corpus is the /usr/share/dict/words file from Unix, used by some spell checkers. We can use it to find unusual or mis-spelt words in a text corpus, as in the sketch below.
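A minimal sketch along the lines of the NLTK book's unusual_words function (assumes the words corpus is downloaded):

import nltk

def unusual_words(text):
    # vocabulary of the text, letters only, lowercased
    text_vocab = set(w.lower() for w in text if w.isalpha())
    # the standard English wordlist, lowercased
    english_vocab = set(w.lower() for w in nltk.corpus.words.words())
    # words of the text that the wordlist does not know about
    return sorted(text_vocab - english_vocab)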

35
Q

Stop word corpus

A

There is also a corpus of stopwords, that is, high-frequency words like the, to and also that we sometimes want to filter out of a document before further processing.

Stopwords usually have little lexical content, and their presence in a text fails to distinguish it from other texts.

Example:

>>> from nltk.corpus import stopwords
>>> stopwords.words('english')
['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', 'should', 'now']

36
Q

Define a function to compute what fraction of words in a text are not in the stopwords list.

A

>>> def content_fraction(text):
...     stopwords = nltk.corpus.stopwords.words('english')
...     content = [w for w in text if w.lower() not in stopwords]
...     return len(content) / len(text)
...
>>> content_fraction(nltk.corpus.reuters.words())
0.7364374824583169

37
Q

Names Corpus

A

One more wordlist corpus is the Names Corpus, containing 8,000 first names categorized by gender. The male and female names are stored in separate files.

38
Q

Let's find names which appear in both files, i.e., names that are ambiguous for gender.

A

>>> names = nltk.corpus.names
>>> names.fileids()
['female.txt', 'male.txt']
>>> male_names = names.words('male.txt')
>>> female_names = names.words('female.txt')
>>> [w for w in male_names if w in female_names]
['Abbey', 'Abbie', 'Abby', 'Addie', 'Adrian', 'Adrien', 'Ajay', 'Alex', 'Alexis', 'Alfie', 'Ali', 'Alix', 'Allie', 'Allyn', 'Andie', 'Andrea', 'Andy', 'Angel', 'Angie', 'Ariel', 'Ashley', 'Aubrey', 'Augustine', 'Austin', 'Averil', ...]

It is well known that names ending in the letter a are almost always female. Remember that name[-1] is the last letter of name, as used in the sketch below.
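The NLTK book visualizes this with a conditional frequency distribution over the final letters of names; a sketch (plotting assumes matplotlib is available):

>>> import nltk
>>> names = nltk.corpus.names
>>> cfd = nltk.ConditionalFreqDist(
...     (fileid, name[-1])
...     for fileid in names.fileids()
...     for name in names.words(fileid))
>>> cfd.plot()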

39
Q

Comparative Wordlists

A

• NLTK includes so-called Swadesh wordlists, lists of about 200 common words in several languages. The languages are identified using an ISO 639 two-letter code.
Example:

>>> from nltk.corpus import swadesh
>>> swadesh.fileids()
['be', 'bg', 'bs', 'ca', 'cs', 'cu', 'de', 'en', 'es', 'fr', 'hr', 'it', 'la', 'mk', 'nl', 'pl', 'pt', 'ro', 'ru', 'sk', 'sl', 'sr', 'sw', 'uk']
>>> swadesh.words('en')
['I', 'you (singular), thou', 'he', 'we', 'you (plural)', 'they', 'this', 'that', 'here', 'there', 'who', 'what', 'where', 'when', 'how', 'not', 'all', 'many', 'some', 'few', 'other', 'one', 'two', 'three', 'four', 'five', 'big', 'long', 'wide', ...]

40
Q

What are Shoebox and Toolbox lexicons?

A

Perhaps the single most popular tool used by linguists for managing data is Toolbox, previously known as Shoebox since it replaces the field linguist's traditional shoebox full of file cards. Toolbox is freely downloadable from http://www.sil.org/computing/toolbox/.

41
Q

WordNet

A

WordNet is a semantically-oriented dictionary of English, similar to a traditional thesaurus but with a richer structure. NLTK includes the English WordNet, with 155,287 words and 117,659 synonym sets. We'll begin by looking at synonyms and how they are accessed in WordNet.
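A first taste, using the motorcar example from the NLTK book (assumes the wordnet data is downloaded):

>>> from nltk.corpus import wordnet as wn
>>> wn.synsets('motorcar')
[Synset('car.n.01')]
>>> wn.synset('car.n.01').lemma_names()
['car', 'auto', 'automobile', 'machine', 'motorcar']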

42
Q

Summary

A

• A text corpus is a large, structured collection of texts. NLTK comes with many corpora, e.g., the Brown Corpus, nltk.corpus.brown.
• Some text corpora are categorized, e.g., by genre or topic; sometimes the categories of a corpus overlap each other.
• A conditional frequency distribution is a collection of frequency distributions, each one for a different condition. They can be used for counting word frequencies, given a context or a genre.
• Python programs more than a few lines long should be entered using a text editor, saved to a file with a .py extension, and accessed using an import statement.
• Python functions permit you to associate a name with a particular block of code, and re-use that code as often as necessary.
• Some functions, known as “methods”, are associated with an object and we give the object name followed by a period followed by the function, like this: x.funct(y), e.g., word.isalpha().
• To find out about some variable v, type help(v) in the Python interactive interpreter to read the help entry for this kind of object.
• WordNet is a semantically-oriented dictionary of English, consisting of synonym sets — or synsets — and organized into a network.
• Some functions are not available by default, but must be accessed using Python's import statement.

43
Q

What is set() used for in Python?

A

To find the unique words in a text.
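A tiny illustration (hypothetical word list):

>>> words = ['All', 'the', 'all']
>>> sorted(set(w.lower() for w in words))
['all', 'the']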

44
Q

Why do we normalize text and how do we do it?

A

Always normalize text using w.lower(), because otherwise All and all would be counted as different words in text processing.

45
Q

Can you combine functions with an object using '.'? How?

A

len(set(a.lower().split(' ')))

46
Q

What are the string operations?

A

s.lower(), s.upper(), s.title()
s.split(t), s.join(t), s.strip(t), s.rstrip(), s.find(t), s.rfind(t), s.replace(u, v)
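A quick demonstration of a few of these (illustrative strings):

>>> s = 'Hello World'
>>> s.lower(), s.upper(), s.title()
('hello world', 'HELLO WORLD', 'Hello World')
>>> s.replace('World', 'Python')
'Hello Python'
>>> 'x-y-z'.split('-')
['x', 'y', 'z']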

47
Q

Two ways to get the characters from a word?

A
1. using the list() function: list(word)
2. using a list comprehension: [c for c in word]
48
Q

If you have text with spaces and tabs, how can you get the exact list of words in the text without those tabs and spaces?

A

Use the strip() function first to remove the spaces and tabs, then use the split() function to split the text on spaces.

The strip() method returns a copy of the string with both leading and trailing characters removed (based on the string argument passed).

The strip() removes characters from both left and right based on the argument (a string specifying the set of characters to be removed).

The syntax of strip() is:

string.strip([chars])

strip() Parameters
chars (optional) - a string specifying the set of characters to be removed.
If the chars argument is not provided, all leading and trailing whitespace is removed from the string.
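A quick illustration (hypothetical string); note that split() with no argument already splits on any run of whitespace:

>>> text = '  hello\tworld  '
>>> text.strip()
'hello\tworld'
>>> text.strip().split()
['hello', 'world']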

49
Q

w.find(), w.rfind(), w.replace(old, new)

A

These find a substring, find a substring searching from the end of the string, and find-and-replace a substring, respectively.

50
Q

File operations

A

f = open(filename, mode)

f.readline(), f.read(), f.read(n)

for line in f: do_something(line)

f.seek(offset)
f.write(data)
f.close()
f.closed: checks whether a particular file is really closed.
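A minimal sketch tying these together (hypothetical filename):

f = open('example.txt', 'r')   # open for reading
first = f.readline()           # read a single line
f.seek(0)                      # jump back to the start of the file
for line in f:                 # iterate over the lines
    print(line.strip())
f.close()
print(f.closed)                # True: the file really is closed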

51
Q

What is one problem with reading text? How do we deal with it?

A
52
Q

How to create multi-line strings in Python?

A
1. Triple-quoted string:

   s = """this is a very
   long string if I had the
   energy to type more and more ..."""

   but this will keep the newlines (\n) inside: 'this is a very\nlong string if I had the\nenergy to type more and more ...'

2. Adjacent string literals inside parentheses:

   s = ("this is a very"
        "long string too"
        "for sure ...")

   This will not include any extra whitespace.

3. Breaking lines with a backslash:

   longStr = "This is a very long string " \
             "that I wrote to help somebody " \
             "who had a question about " \
             "writing long strings in Python"

4. A triple-quoted string with the newlines replaced afterwards:

   string = """This is a
   very long string,
   containing commas,
   that I split up
   for readability""".replace('\n', ' ')

53
Q

Find the words that start with # in a sentence: tweet = "@nltk Text analysis is awesome! #regex #pandas #python"

A

print([word for word in tweet.split() if word.startswith('#')])

54
Q

Advice on using text

A

Advice on using text: always split your text into words before working on each word.

55
Q

How to find a specific word in Python?

A

[w for w in known_text if condition(w)]

56
Q

In regular expressions, what is the function of [], [^abc], a|b, \d, \D?

A

[] matches one of the characters inside it.

[^abc] matches any character that is not a, b, or c.

a|b matches a or b.

\d means any digit: [0-9]

\D means any non-digit: [^0-9]

57
Q

What are \s, \S, \w, \W?

A

\s means any whitespace character.

\S means any non-whitespace character.

\w means any alphanumeric character.

\W means any non-alphanumeric character.

58
Q

Metacharacter repetitions?

A

* : matches zero or more occurrences

+ : matches one or more occurrences

? : matches zero or one occurrence

{n} : exactly n repetitions

{n,} : at least n repetitions

{,n} : at most n repetitions

{m,n} : at least m and at most n repetitions
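A small illustration of bounded repetition (illustrative string):

>>> import re
>>> re.findall(r'o{1,2}', 'look took to')
['oo', 'oo', 'o']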

60
Q

Example of searching a string

A

>>> import re
>>> b = "usman @gwammaja @bayan masqa akwai spa"
>>> [w for w in b.split() if re.search('@[A-Za-z0-9]+', w)]
['@gwammaja', '@bayan']

Using the alphanumeric character class \w instead:

>>> [w for w in b.split() if re.search('@\w+', w)]
['@gwammaja', '@bayan']

61
Q

What is the r in front of a regular expression?

A

The 'r' in front tells Python the expression is a raw string. In a raw string, escape sequences are not parsed. For example, '\n' is a single newline character. But r'\n' would be two characters: a backslash and an 'n'.

62
Q

re.search(), re.match(), re.findall()?

A

re.match(), re.search(), re.findall(), with examples:
https://www.guru99.com/python-regular-expressions-complete-tutorial.html
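A brief sketch of the difference between the three (illustrative strings):

>>> import re
>>> re.match('thon', 'python')      # None: match() anchors at the start
>>> re.search('thon', 'python')     # finds 'thon' anywhere; returns a match object
>>> re.findall(r'\d+', 'a1b22c333') # every non-overlapping match
['1', '22', '333']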

63
Q

Live as if you were to die tomorrow. Learn……

A

"Live as if you were to die tomorrow. Learn as if you were to live forever," as Mahatma Gandhi said.

64
Q

How many text corpora do we have in NLTK?

How do we access the sentences in a particular text? For example, sentence number 13?

A

8; sent2, sent13

65
Q

How to get the first ten words in a sentence?

A

sent7[:10]

66
Q

Why do we call lists a sequence type?

A

Since lists in Python store ordered collections of items or objects, we can say that they are sequence types, exactly because they behave like a sequence. Other types that are also considered to be sequence types are strings and tuples.

67
Q

What is the relationship between sequence type and iterable?

A

You might wonder what’s so special about sequence types. Well, in simple words, it means that the program can iterate over them! This is why lists, strings, tuples, and sets are called “iterables”.

68
Q

Lists versus tuples? What is the difference, and when should you use one?

A

Tuples are used to collect an immutable ordered list of elements. This means that:

You can't add elements to a tuple; there's no append() or extend() method for tuples.
You can't remove elements from a tuple; tuples have no remove() or pop() method.
You can find elements in a tuple since this doesn’t change the tuple.
You can also use the in operator to check if an element exists in the tuple.

So, if you’re defining a constant set of values and all you’re going to do with it is iterate through it, use a tuple instead of a list. It will be faster than working with lists and also safer, as the tuples contain “write-protect” data.
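A minimal illustration of these properties (hypothetical tuple):

>>> t = ('a', 'b', 'c')
>>> 'a' in t        # membership tests work
True
>>> t.index('b')    # finding elements works
1
>>> # t.append('d') would raise AttributeError: tuples are immutable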

69
Q

List vs Dictionary?

A

A list stores an ordered collection of items, so it keeps some order. Dictionaries don't guarantee any order (although, since Python 3.7, they do preserve insertion order).
Dictionaries are known to associate each key with a value, while lists just contain values.

Use a dictionary when you have an unordered set of unique keys that map to values.

Note that, because you have keys and values that link to each other, the performance will be better than lists in cases where you’re checking membership of an element.
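A small illustration (hypothetical data):

>>> ages = {'ada': 36, 'bob': 41}
>>> 'ada' in ages   # fast key-membership test
True
>>> ages['bob']     # look up a value by its key
41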

70
Q

Lists Versus Sets

A

Just like dictionaries, sets have no order in their collection of items, unlike lists.
Sets require the items contained in them to be hashable, while lists can also store non-hashable items.
Sets require your items to be unique and immutable. Duplicates are not allowed in sets, while lists allow for duplicates and are mutable.

You should make use of sets when you have an unordered collection of unique, immutable values that are hashable.

Not sure which values are hashable? Built-in immutable types (numbers, strings, tuples of immutable items) are hashable; mutable containers such as lists, dicts, and sets are not.
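A quick illustration (hypothetical values):

>>> s = {'a', 'b', 'a'}
>>> len(s)          # the duplicate 'a' collapses
2
>>> # {['a']} would raise TypeError: unhashable type: 'list'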

71
Q

What does this method do in Python? dict.keys()

A

This method returns a view of all the available keys in the dictionary (a list in Python 2).

73
Q

How to import help for the NLTK tagset explaining the parts of speech?

A

nltk.help.upenn_tagset()

74
Q

Hypernym and hyponym?

A

In simpler terms, a hyponym is in a type-of relationship with its hypernym. For example, pigeon, crow, eagle, and seagull are all hyponyms of bird (their hypernym), which, in turn, is a hyponym of animal.

Hyponymy shows the relationship between a generic term (hypernym) and a specific instance of it (hyponym). A hyponym is a word or phrase whose semantic field is more specific than its hypernym. The semantic field of a hypernym, also known as a superordinate, is broader than that of a hyponym.
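In WordNet terms, using the car example from the NLTK book (wordnet data assumed downloaded):

>>> from nltk.corpus import wordnet as wn
>>> motorcar = wn.synset('car.n.01')
>>> motorcar.hypernyms()   # the more general concept
[Synset('motor_vehicle.n.01')]
>>> motorcar.hyponyms()    # more specific concepts such as ambulance, cab, ... (output elided)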

75
Q

What is the etymology of the word computer?

A

The Online Etymology Dictionary states that the use of the term to mean "calculating machine" (of any type) is from 1897.

76
Q

List comprehension using not in

A

from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

para = "This is my message to you shamsudd I will come to your house tomorrow"

words = word_tokenize(para)
print(words)
useful_words = [word for word in words if word not in stopwords.words('english')]
print(useful_words)
