Additional Mathematical and Computational Concepts Flashcards

1
Q

What is dynamic programming?

A

Dynamic programming is a method for solving complex problems by breaking them down into smaller subproblems. It applies when the subproblems overlap and when an optimal solution can be assembled from optimal solutions to those subproblems: by storing the result of each subproblem and reusing it, the algorithm never solves the same subproblem twice, which makes the overall computation much faster. Dynamic programming is widely used in computer science and operations research.
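
A minimal sketch of the idea in Python: the naive recursion for Fibonacci numbers re-solves the same subproblems exponentially many times, while memoizing each result makes the computation linear.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Each subproblem fib(k) is computed once, cached, and reused;
        # without the cache this recursion would take exponential time.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(50))  # 12586269025, computed near-instantly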

2
Q

What is Zipf’s Law?

A

Zipf’s law is a statistical observation that the frequency of a word in a given language is inversely proportional to its rank in the frequency table. In other words, the second most common word in a language occurs about half as often as the most common word, the third most common word about one-third as often, and so on. The law is named after the linguist George Kingsley Zipf, who popularized it in the 1930s. It has been found to hold approximately for a wide variety of languages, although there are exceptions.
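
Numerically, the law says f(r) ≈ f(1) / r^s with the exponent s close to 1. A quick sketch in Python with a made-up top count (the numbers are illustrative, not from a real corpus):

    # Frequencies predicted by Zipf's law with s = 1, assuming
    # (hypothetically) the most common word occurs 1,000,000 times.
    f1 = 1_000_000
    for rank in range(1, 6):
        print(rank, round(f1 / rank))
    # 1 1000000
    # 2 500000
    # 3 333333
    # 4 250000
    # 5 200000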

3
Q

Search space in parsing: What is one searching for and what is one searching through?

A

In parsing, the search space refers to the set of all possible parses that a parser can consider when trying to determine the syntactic structure of a given sentence. The parser is searching for a parse that correctly represents the syntactic structure of the sentence, and it is searching through this space of candidate parses licensed by the grammar.

The size of the search space in parsing can vary depending on the complexity of the sentence and the grammar used by the parser. In general, the search space grows exponentially with the length of the sentence, making it computationally challenging to search through all possible parses in a reasonable amount of time. This is why many parsing algorithms use various techniques, such as pruning and heuristics, to reduce the size of the search space and make the search more efficient.
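
To make the exponential growth concrete: under a purely binary-branching grammar, the number of distinct binary trees over n words is the Catalan number C(n - 1), which grows roughly like 4^n. A short Python check:

    from math import comb

    def binary_trees(n_words):
        # Catalan number C(k) = (2k choose k) / (k + 1), with k = n_words - 1.
        k = n_words - 1
        return comb(2 * k, k) // (k + 1)

    for n in (2, 5, 10, 20):
        print(n, binary_trees(n))
    # 2 1
    # 5 14
    # 10 4862
    # 20 1767263190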

4
Q

Compare breadth-first search and depth-first search: what are the differences between them?

A

Breadth-first search and depth-first search are two common algorithms for traversing and searching a tree or graph. Both start from a designated start node (the root, in a tree) and explore neighbor nodes, but they differ in the order in which nodes are visited.

Breadth-first search (BFS) explores all neighbors of the current node before moving deeper: it visits every node at one depth before moving on to the next level. In an unweighted graph, BFS finds the shortest path between two nodes, because it always explores paths with fewer edges first. Its cost is memory: the frontier it keeps can grow as large as the widest level of the tree or graph.

Depth-first search (DFS) explores the nodes in a depth-ward motion, following one branch as deep as possible before backtracking to explore the next branch. DFS does not guarantee shortest paths, but it is memory-efficient, since it only needs to store the current path rather than a whole frontier, which makes it practical for searching very large or deep trees and graphs.
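
A minimal sketch of both traversals on an adjacency-list graph; structurally the two algorithms differ only in using a FIFO queue versus a LIFO stack:

    from collections import deque

    def bfs(graph, start):
        # FIFO queue: visits every node at depth d before any node at depth d+1.
        seen, order, queue = {start}, [], deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for nbr in graph[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        return order

    def dfs(graph, start):
        # LIFO stack: follows one branch as deep as possible, then backtracks.
        seen, order, stack = set(), [], [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            order.append(node)
            stack.extend(reversed(graph[node]))
        return order

    g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(bfs(g, "A"))  # ['A', 'B', 'C', 'D']
    print(dfs(g, "A"))  # ['A', 'B', 'D', 'C']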

5
Q

Compare top-down parsing and bottom-up parsing

A

Top-down parsing and bottom-up parsing are two strategies for analyzing a string of tokens in order to determine its syntactic structure. Both are used in natural language processing and computer science to determine the grammatical structure of a sentence.

Top-down parsing, of which recursive-descent parsing is a common example, starts at the root of the parse tree and works its way down towards the leaves: it begins with the highest-level syntactic category (such as the sentence symbol S) and repeatedly expands it into smaller and smaller constituents until it reaches the individual words. Its characteristic weakness is that it may hypothesize structure that the actual input words cannot support.

Bottom-up parsing, of which shift-reduce parsing is a common example, starts with the individual words of the sentence and works its way up towards the root: it finds the smallest units first and combines them into larger and larger constituents until, if successful, it derives the whole sentence. Its weakness is the mirror image: it may build constituents out of the input that can never combine into a complete sentence.
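
A toy illustration of the top-down direction, using the artificial grammar S -> 'a' S 'b' | 'ab' (chosen for brevity rather than linguistic realism): the parser starts from S and expands it toward the input; a bottom-up parser would instead shift words onto a stack and reduce them to S.

    def parse_S(s, i=0):
        # Grammar: S -> 'a' S 'b' | 'ab'   (the language a^n b^n, n >= 1)
        # Top-down: expand S, then check the expansion against the input.
        if s[i:i + 2] == "ab":
            return i + 2                      # base case matched
        if i < len(s) and s[i] == "a":
            j = parse_S(s, i + 1)             # recursively expand the inner S
            if j is not None and j < len(s) and s[j] == "b":
                return j + 1
        return None

    def accepts(s):
        return parse_S(s) == len(s)

    print(accepts("aaabbb"))  # True
    print(accepts("aabbb"))   # False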

6
Q

What is best-first probabilistic parsing?

A

Best-first probabilistic parsing is a type of parsing algorithm that uses probabilistic models to guide the search for the most likely parse of a sentence. It combines the principles of best-first search, which explores the most promising nodes in a search space first, with probabilistic models, which estimate the likelihood of different parses based on the observed data.

In best-first probabilistic parsing, the parser uses a probabilistic model to estimate the likelihood of different parses at each step of the search. This allows the parser to focus its search on the most promising parses, rather than exploring the entire search space. The probabilistic model can be trained on a large corpus of annotated text, allowing it to learn the statistical regularities of the language and make more accurate predictions about the likelihood of different parses.
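
The search skeleton might look like the following sketch, where expand, is_goal, and score are placeholder functions standing in for the grammar and the trained probabilistic model:

    import heapq

    def best_first_parse(initial, expand, is_goal, score):
        # Priority queue ordered by the model's probability estimate.
        # heapq is a min-heap, so scores are negated to pop the most
        # promising partial parse first; the counter breaks ties so
        # states themselves never have to be compared.
        counter = 0
        heap = [(-score(initial), counter, initial)]
        while heap:
            neg_p, _, state = heapq.heappop(heap)
            if is_goal(state):
                return state, -neg_p
            for nxt in expand(state):
                counter += 1
                heapq.heappush(heap, (-score(nxt), counter, nxt))
        return None, 0.0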

7
Q

What is the difference between recognition and parsing?

A

Recognition and parsing are two related but distinct tasks. Recognition asks only a yes/no question: can this string be generated by the grammar at all? Parsing answers the further question of how: it recovers the syntactic structure that the grammar assigns to the string.

A recognizer therefore returns a boolean: the sentence is either accepted or rejected by the grammar, with no record of how it was derived.

A parser does everything a recognizer does, but it additionally builds and returns the structure: which words group into which phrases, and how those phrases combine. Because a sentence may be ambiguous, a parser may return several trees, or, in probabilistic parsing, the most likely one.

Overall, the key difference between recognition and parsing is the output: recognition decides membership in the language defined by the grammar, while parsing also produces the syntactic analysis, so parsing is at least as hard as recognition.
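
In code terms, the difference is the return type. Reusing the toy grammar S -> 'a' S 'b' | 'ab' (illustrative only), a parser returns a tree while the recognizer derived from it returns a boolean:

    def parse(s, i=0):
        # Toy grammar: S -> 'a' S 'b' | 'ab'
        # Returns (tree, next_position) on success, None on failure.
        if s[i:i + 2] == "ab":
            return ("S", "a", "b"), i + 2
        if i < len(s) and s[i] == "a":
            result = parse(s, i + 1)
            if result is not None:
                subtree, j = result
                if j < len(s) and s[j] == "b":
                    return ("S", "a", subtree, "b"), j + 1
        return None

    def recognize(s):
        # Recognition only asks whether some parse covers the whole string.
        result = parse(s)
        return result is not None and result[1] == len(s)

    print(recognize("aabb"))  # True  (a yes/no answer)
    print(parse("aabb")[0])   # ('S', 'a', ('S', 'a', 'b'), 'b')  (the structure)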

8
Q

Give examples of vector-based similarity measures, as well as their pros and cons

A

Vector-based similarity measures compare two vectors and estimate their similarity from their geometry, such as the angle between them or the distance separating them. Some examples:

Cosine similarity: calculates the cosine of the angle between two vectors, with 1 indicating the same direction, 0 indicating orthogonality, and -1 indicating opposite directions. It is easy to compute and widely used in text applications, and it is insensitive to vector magnitude, which is usually desirable when comparing documents of different lengths; the flip side is that it ignores magnitude entirely, so vectors of very different scale can still score as identical.

Euclidean distance: calculates the straight-line distance between the points that the two vectors define, with 0 indicating identical vectors and larger values indicating greater dissimilarity. It is easy to interpret and common in data mining and machine learning, but it is sensitive to the scale of the vectors, so features usually need to be normalized first.

Dot product: multiplies the two vectors component-wise and sums the results. It is the cheapest to compute and reflects both direction and magnitude, but because it grows with vector length it can favor long vectors (for example, those of frequent words) unless they are normalized, in which case it reduces to cosine similarity.

Overall, vector-based similarity measures are useful tools for comparing two vectors, but their accuracy and usefulness depend on the specific application and on the properties of the vectors being compared, such as their scale, sparsity, and normalization.
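
For concreteness, the two main measures in a few lines of NumPy (the vectors are made up for illustration):

    import numpy as np

    def cosine_similarity(a, b):
        # Angle-based: scaling either vector does not change the result.
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def euclidean_distance(a, b):
        # Magnitude-based: 0 means the vectors are identical.
        return np.linalg.norm(a - b)

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([2.0, 4.0, 6.0])    # same direction, twice the length
    print(cosine_similarity(a, b))   # 1.0   -- identical direction
    print(euclidean_distance(a, b))  # ~3.74 -- yet far apart in space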

9
Q

Explain why we might use vector representation of words/word embedding

A

Vector representation of words, also known as word embedding, is a method for representing words in a vector space. This allows us to perform mathematical operations on words, such as addition, subtraction, and comparison, and to use them as inputs to machine learning algorithms.

There are several reasons why we might use vector representation of words in natural language processing and other applications.

First, vector representation of words allows us to capture the semantic relationships between words. For example, we can represent the words “king” and “queen” in a vector space such that they are close together, indicating that they have similar meanings. This can be useful for tasks such as word similarity and analogy, where we need to capture the relationships between words.

Second, vector representation of words allows us to efficiently store and process large amounts of text data. Compared with sparse one-hot encodings, whose dimensionality equals the size of the vocabulary, dense embeddings of a few hundred dimensions are far more compact, and they can be processed with fast vector and matrix operations.

Third, vector representation of words can improve the performance of machine learning algorithms. By representing words as vectors, we can use them as input features to train machine learning models, which can then learn to predict the output based on the relationships between the words. This can be useful for tasks such as sentiment analysis and text classification, where the relationships between words are important for making accurate predictions.
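
The classic demonstration is analogy by vector arithmetic. A hedged sketch with tiny made-up vectors (real embeddings such as word2vec or GloVe are learned from corpora and have hundreds of dimensions):

    import numpy as np

    # Illustrative 4-d embeddings only; real ones are learned from text.
    emb = {
        "king":  np.array([0.9, 0.8, 0.1, 0.0]),
        "queen": np.array([0.9, 0.1, 0.8, 0.0]),
        "man":   np.array([0.1, 0.9, 0.1, 0.0]),
        "woman": np.array([0.1, 0.1, 0.9, 0.0]),
    }

    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # king - man + woman should land nearest to queen.
    target = emb["king"] - emb["man"] + emb["woman"]
    print(max(emb, key=lambda w: cos(emb[w], target)))  # queen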

10
Q

Explain the limitations of vector representation of words/word embedding

A

While vector representation of words, or word embedding, has many useful applications in natural language processing and other fields, there are also some limitations to this approach.

First, vector representation of words is based on a mathematical model, which may not always accurately capture the semantic relationships between words. In particular, a static embedding (such as word2vec or GloVe) assigns a single vector to each word type, so it cannot capture context-dependent meaning: a polysemous word like “bank” receives one vector that blends its financial and riverside senses. The model may likewise miss subtle differences between near-synonyms. This can lead to errors in tasks such as word similarity and analogy, where the relationships between words are important.

Second, vector representation of words can be computationally intensive, especially for large vocabularies or large amounts of text data. This can make it challenging to train word embedding models and to use them for real-time applications.

Third, vector representation of words may not be able to capture certain aspects of language that are important for understanding and processing text. For example, it may not be able to capture the syntactic structure of a sentence, or the relationships between words in a sentence. This can limit the usefulness of word embedding models for certain tasks, such as parsing and translation.

Overall, while vector representation of words has many useful applications, it is not a perfect representation of language and has some limitations that should be considered when using it for natural language processing and other tasks.

11
Q

What are examples of Dynamic Programming algorithms?

A

Sequence Alignment (Levenshtein Distance).

Forward-Backward algorithm.

Viterbi algorithm.
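
As a sketch of the first of these, the Levenshtein (edit) distance fills a table in which each cell reuses three previously solved subproblems:

    def levenshtein(a, b):
        # dp[i][j] = edit distance between a[:i] and b[:j].
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i                # i deletions
        for j in range(n + 1):
            dp[0][j] = j                # j insertions
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # delete
                               dp[i][j - 1] + 1,         # insert
                               dp[i - 1][j - 1] + cost)  # substitute/match
        return dp[m][n]

    print(levenshtein("kitten", "sitting"))  # 3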

12
Q

Why might dynamic programming algorithms be better than other algorithms?

A

First, dynamic programming algorithms can provide a provably optimal solution to a problem, whereas other approaches, such as greedy algorithms, may only find a suboptimal one. This is because dynamic programming considers every way of decomposing the problem into subproblems, rather than committing early to a single local choice.

Second, dynamic programming algorithms can be very efficient in terms of time and space complexity. This is because they store the results of smaller subproblems and reuse them to solve larger ones, rather than recalculating the same subproblems multiple times.

Third, dynamic programming algorithms can be very versatile and can be applied to a wide range of problems. This makes them a useful tool for solving complex problems in a variety of fields, such as computer science and operations research.

13
Q

What is parsing?

A

Parsing is the process of analyzing a string of tokens, such as the words of a sentence, to determine its syntactic structure according to a grammar. The output is typically a parse tree that shows how the words group into phrases and how those phrases combine to form the sentence. Parsing is a core task in natural language processing, and the same idea is used in compilers to turn source code into a syntax tree.
14
Q

What is a Context vector and what is it used for?

A

A context vector is a vector representation of the context in which a word appears. It is typically used in natural language processing to capture the meaning of a word in a given context.

In natural language processing, a context vector is typically constructed by combining, for example by averaging or summing, the vectors of the words that appear around the target word. For example, to determine the meaning of the word “bank” in the sentence “I deposited money in the bank”, the context vector might be built from the vectors of “deposited” and “money” (function words such as “I”, “in”, and “the” are often dropped first).

The context vector is then used to disambiguate the meaning of the target word. For example, if the word “bank” can refer to a financial institution or to the edge of a river, the context vector can be used to determine which of these meanings is intended in a given sentence. This can be done using a machine learning model that has been trained to predict the meaning of a word based on its context vector.

Overall, context vectors are a useful tool for natural language processing, as they allow us to capture the meaning of a word in a given context and to disambiguate words with multiple meanings.
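
A hedged sketch with made-up 3-dimensional vectors (a real system would use learned, high-dimensional embeddings): averaging the content-word embeddings around “bank” yields context vectors that point toward different regions for the two senses.

    import numpy as np

    # Illustrative embeddings only; real systems use learned vectors.
    emb = {
        "deposited": np.array([0.8, 0.1, 0.0]),
        "money":     np.array([0.9, 0.2, 0.0]),
        "river":     np.array([0.0, 0.1, 0.9]),
        "edge":      np.array([0.1, 0.0, 0.8]),
    }

    def context_vector(context_words):
        # One common construction: the average of the context embeddings.
        return np.mean([emb[w] for w in context_words], axis=0)

    print(context_vector(["deposited", "money"]))  # [0.85 0.15 0.  ] -- finance sense
    print(context_vector(["river", "edge"]))       # [0.05 0.05 0.85] -- riverside sense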

15
Q

The noisy channel model

A

The noisy channel model is a mathematical framework, borrowed from information theory, that is used in natural language processing to describe communication between a sender and a receiver. The sender encodes an intended message, the message passes through a channel that corrupts or distorts it in some way, and the receiver must recover the original message from the noisy output it observes.

The noisy channel model is typically used to model the process of machine translation, where the sender encodes a message in one language and the receiver decodes it in another language. In this scenario, the noisy channel represents the difficulty of translating the message accurately, due to factors such as the complexity of the language and the ambiguity of the words and phrases in the sentence.

The noisy channel model can be used to evaluate the performance of machine translation algorithms, as well as to design new algorithms that can better handle the noise in the channel. It is also used in other natural language processing tasks, such as speech recognition and text summarization, where the process of extracting the meaning from a sentence is subject to noise and uncertainty.

Overall, the noisy channel model is a useful tool for understanding and modeling the process of communication in natural language processing, and for designing algorithms that can accurately extract the meaning from a sentence despite the presence of noise and uncertainty.
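
The model is usually made precise with Bayes’ rule. Given an observed (noisy) output o, the decoder recovers the most probable intended message m by combining a source model P(m), which says how plausible each message is a priori, with a channel model P(o | m), which says how likely the channel is to distort m into o:

    m* = argmax_m P(m | o) = argmax_m P(o | m) · P(m)

For spelling correction, for instance, P(m) is a language model over intended words and P(o | m) models typing errors; the same decomposition underlies classic statistical speech recognition and machine translation.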
