deck_15613364 Flashcards

1
Q

FOL

A

First-Order Logic

2
Q

In terms of FOL, give examples of each of the following:

Objects:

Relations:

Functions:

A
  • Objects: people, houses, numbers, theories, Ronald McDonald, colors,
    baseball games, wars, centuries . . .
  • Relations: red, round, bogus, prime, multistoried . . .,
    brother of, bigger than, inside, part of, has color, occurred after, owns,
    comes between, . . .
  • Functions: father of, best friend, third inning of, one more than, end of, . . .
3
Q

Ontological Engineering

A

Creating general and flexible representations for complex domains.

4
Q

Upper ontology:

A

The general framework of concepts

5
Q

Categories and Objects

A

Stuff: a significant portion of reality that seems to defy any obvious
individuation—division into distinct objects

Intrinsic: they belong to the very substance of the object, rather than to the
object as a whole.

Extrinsic: properties such as weight, length, and shape, which are not retained
under subdivision.

Substance: a category of objects that includes in its definition only intrinsic
properties (mass noun).

Count noun: a class that includes any extrinsic properties in its definition.

6
Q

Mental Objects

A

Mental objects are items of knowledge in someone’s head (or in a knowledge base).
Propositional attitudes are attitudes that an agent can have toward mental objects,
e.g., Believes, Knows, Wants, and Informs.
Lois knows that Superman can fly:
Knows(Lois, CanFly(Superman))

7
Q

Modal Logic

A

Representing sentences about knowledge and belief in ordinary logic can be
verbose and clumsy. Regular logic is concerned with a single modality, the
modality of truth.
Modal logic addresses this with special modal operators that take sentences
(rather than terms) as arguments.

8
Q

Semantic networks

A
  • convenient for performing inheritance reasoning (see the sketch below)
  • Eg: Mary inherits the property of having two legs. To find out how many
    legs Mary has, the inheritance algorithm follows the MemberOf link from Mary
    to the category she belongs to and then follows SubsetOf links up the hierarchy
    until it finds a category for which there is a boxed Legs link
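A minimal Python sketch of this inheritance algorithm. The tiny network below
(the dictionaries and category names) is an illustrative assumption, not from
the slides:

    # Hypothetical toy semantic network: MemberOf and SubsetOf links,
    # plus "boxed" Legs links attached to some categories.
    member_of = {"Mary": "FemalePersons"}
    subset_of = {"FemalePersons": "Persons", "Persons": "Animals"}
    legs = {"Persons": 2, "Animals": 4}

    def count_legs(obj):
        """Follow MemberOf once, then SubsetOf links up the hierarchy
        until a category with a boxed Legs link is found."""
        category = member_of[obj]
        while category is not None:
            if category in legs:
                return legs[category]
            category = subset_of.get(category)
        return None

    print(count_legs("Mary"))  # 2: inherited from the Persons category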
9
Q

Description logics

A
  • notations that are designed to make it easier to describe definitions and
    properties of categories
  • evolved from semantic networks in response to pressure to formalize what the
    networks mean while retaining the emphasis on taxonomic structure as an
    organizing principle
  • Principal inference tasks:
  • Subsumption: checking if one category is a subset of another by
    comparing their definitions
  • Classification: checking whether an object belongs to a category
  • The CLASSIC language (Borgida et al., 1989) is a typical description logic
  • Eg: bachelors are unmarried adult males
  • Bachelor = And(Unmarried, Adult, Male)
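A toy sketch of the subsumption check. This is an illustrative set-based
approximation, not the actual CLASSIC language: a category defined as a
conjunction of primitive concepts subsumes another if its conjuncts are a
subset of the other’s.

    # Categories as conjunctions of primitive concepts (illustrative only).
    Male = frozenset({"Male"})
    Bachelor = frozenset({"Unmarried", "Adult", "Male"})

    def subsumes(general, specific):
        """Subsumption: every conjunct of the general category
        appears in the more specific one."""
        return general <= specific

    print(subsumes(Male, Bachelor))   # True: every Bachelor is a Male
    print(subsumes(Bachelor, Male))   # False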
10
Q

Belief revision:

A

Inferred facts may turn out to be wrong and will have to be retracted in the
face of new information.
11
Q

Truth maintenance systems

A

or TMSs, are designed to handle these complications: retracting a wrong
sentence also requires retracting any additional sentences that were inferred
from it.

12
Q

Justification-based truth maintenance system (JTMS)

A
  • Each sentence in the knowledge base is annotated with a justification
    consisting of the set of sentences from which it was inferred
  • Justifications make retraction efficient (see the sketch below)
  • Assumes that sentences that are considered once will probably be
    considered again
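A minimal sketch of the justification bookkeeping. This is a simplified
illustration, not a full JTMS: each sentence records its supporting set, and
retracting a sentence propagates to everything inferred from it.

    justifications = {}  # sentence -> set of sentences it was inferred from

    def add(sentence, support=frozenset()):
        justifications[sentence] = set(support)

    def retract(sentence):
        """Retract a sentence and, recursively, everything justified by it."""
        justifications.pop(sentence, None)
        dependents = [s for s, sup in justifications.items() if sentence in sup]
        for s in dependents:
            retract(s)

    add("P")
    add("Q", {"P"})        # Q was inferred from P
    add("R", {"Q"})        # R was inferred from Q
    retract("P")           # Q and R are retracted too
    print(justifications)  # {}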
13
Q

FOL basic elements

A
14
Q

Events

A
15
Q

NLP

A

Natural Language Processing

16
Q

N-GRAMS

A
17
Q

Smoothing n-gram models

A
18
Q

Atomic Model in N-grams

A
19
Q

Part-of-speech (POS) tagging

A

* a way to categorize words (lexical category/tag)
* POS tags allow language models to capture generalizations such as
“adjectives generally come before nouns in English”
* a useful first step in many other NLP tasks, such as
question answering or translation
The big Python libraries for this are NLTK and spaCy. Libraries exist in other
languages, but Python is generally regarded as the main NLP language these
days (see the sketch below).
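A quick NLTK sketch, assuming NLTK is installed (the downloadable tagger model
names can vary slightly between NLTK versions):

    import nltk

    # One-time model downloads; names may differ in newer NLTK releases.
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    tokens = nltk.word_tokenize("Adjectives generally come before nouns in English")
    print(nltk.pos_tag(tokens))
    # e.g. [('Adjectives', 'NNS'), ('generally', 'RB'), ('come', 'VBP'), ...]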

20
Q

Grammar

A

A grammar is a set of rules that defines the tree structure of allowable phrases
* A language is the set of sentences that follow those rules.
* Syntactic categories such as noun phrase or verb phrase help to constrain the
probable words at each point within a sentence
* The phrase structure provides a framework for the meaning or semantics of the
sentence
Probabilistic context-free grammar (PCFG; see the sketch below)
* A probabilistic grammar assigns a probability to each string
* “Context-free” means that any rule can be used in any context
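A small NLTK sketch of a PCFG. The grammar and its probabilities are invented
for illustration; note that the probabilities for each left-hand side sum to 1.

    import nltk

    grammar = nltk.PCFG.fromstring("""
        S   -> NP VP    [1.0]
        NP  -> Det N    [0.6] | 'John' [0.4]
        VP  -> V NP     [1.0]
        Det -> 'the'    [1.0]
        N   -> 'dog'    [1.0]
        V   -> 'saw'    [1.0]
    """)

    parser = nltk.ViterbiParser(grammar)       # finds the most probable parse
    for tree in parser.parse("John saw the dog".split()):
        print(tree)                            # parse tree with its probability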

21
Q

Parsing

A

Parsing is the process of analyzing a string of words to uncover its phrase
structure, according to the rules of a grammar.
* Search for a valid parse tree whose leaves are the words of the string
* We can start with the S symbol and search top down, or start with the
words and search bottom up
* Inefficiency: if the algorithm guesses wrong, it will have to backtrack all
the way to the first word and reanalyze the whole sentence under the
other interpretation
* Fix: every time we analyze a substring, store the result so we won’t have to
reanalyze it later (dynamic programming over a chart; see the sketch below)
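A minimal sketch of that memoization idea: a memoized recursive recognizer over
a toy grammar in binary-rule form. The grammar and lexicon are invented.

    from functools import lru_cache

    GRAMMAR = {                   # binary rules: symbol -> list of (left, right)
        "S":  [("NP", "VP")],
        "NP": [("Det", "N")],
        "VP": [("V", "NP")],
    }
    LEXICON = {"the": "Det", "dog": "N", "cat": "N", "saw": "V"}

    def recognize(words):
        words = tuple(words)

        @lru_cache(maxsize=None)  # the stored chart: each span analyzed once
        def derives(symbol, i, j):
            if j - i == 1:
                return LEXICON.get(words[i]) == symbol
            return any(
                derives(left, i, k) and derives(right, k, j)
                for left, right in GRAMMAR.get(symbol, ())
                for k in range(i + 1, j)
            )

        return derives("S", 0, len(words))

    print(recognize("the dog saw the cat".split()))  # True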

22
Q

Parsing Tree

A
23
Q

dependency grammar:

A
  • assumes that syntactic structure is formed by binary relations between lexical
    items, without a need for syntactic constituents
  • if a phrase structure tree is annotated with the head of each phrase, the
    corresponding dependency tree can be recovered
  • conversely, a dependency tree can be converted to a phrase structure tree by
    introducing arbitrary categories (see the spaCy sketch below)
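A quick spaCy sketch of the head/dependent relations, assuming the small
English model has been installed with `python -m spacy download en_core_web_sm`:

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Lois knows that Superman can fly")
    for token in doc:
        # each word points at its head via a labeled binary relation
        print(f"{token.text:10} --{token.dep_}--> {token.head.text}")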
24
Q

Learn Parser From Examples

A
25
Q

Learning semantic grammars

A
26
Q

Pragmatics:

A
  • resolving the meaning of indexicals, which are phrases
    that refer directly to the current situation
  • Example sentence: “I am in Boston today,” both “I” and
    “today” are indexicals. The word “I” would be represented
    by Speaker, a fluent that refers to different objects at
    different times
  • interpreting the speaker’s intent
  • The speaker’s utterance is considered a speech act, and it
    is up to the hearer to decipher what type of action it is
    (a question, a statement, a promise, a warning, a command,
    etc.)
27
Q

Time and tense:

A
28
Q

Ambiguity:

A
29
Q

Disambiguation

A
30
Q

Deep Learning

A
31
Q

Feedforward network

A
  • connections only in one direction (input to output)
  • directed acyclic graph with designated input and output nodes
  • No loops (a minimal sketch follows)
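A minimal PyTorch sketch of such a network; the layer sizes are arbitrary
choices for illustration.

    import torch.nn as nn

    # Input flows strictly forward: input -> hidden -> output, no loops.
    model = nn.Sequential(
        nn.Linear(4, 16),   # input layer -> hidden layer
        nn.ReLU(),
        nn.Linear(16, 1),   # hidden layer -> output node
    )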
32
Q

Recurrent network

A
  • feeds its intermediate or final outputs back into its own inputs
  • signal values within the network form a dynamical system that has internal
    state or memory
33
Q

Networks as complex functions

A
34
Q

Different activation functions

A
35
Q

Input encoding

A
36
Q

Input Encoding For Image

A
37
Q

back-propagation

A

back-propagation: the way that the error at the output is passed back
through the network (we won’t get into this in much detail today).
In practice, when we train a neural network, we see that a loss function
is minimized (a minimal training-loop sketch follows).
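A minimal PyTorch sketch of this in practice, with toy data invented for
illustration: the loss drops as back-propagation computes gradients and the
optimizer updates the weights.

    import torch
    import torch.nn as nn

    model = nn.Linear(1, 1)                   # tiny one-weight network
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.tensor([[0.0], [1.0], [2.0]])
    y = torch.tensor([[1.0], [3.0], [5.0]])   # y = 2x + 1, made-up data

    for step in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)           # the loss function we minimize
        loss.backward()                       # back-propagate the output error
        optimizer.step()                      # update the weights
    print(loss.item())                        # close to 0 after training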

38
Q

Recurrent neural networks (RNNs)

A

Recurrent neural networks (RNNs) allow cycles in the computation graph;
each cycle has a delay
* units may take as input a value computed from their own output at an
earlier step
* an RNN has an internal state/memory
* RNNs add expressive power compared to feedforward networks
Training a basic RNN (see the sketch below):
* an input layer x, a hidden layer z with recurrent connections, and an output
layer y
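A minimal PyTorch sketch of that x -> z -> y structure; the sizes are
arbitrary.

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
    readout = nn.Linear(16, 4)

    x = torch.randn(1, 10, 8)   # one sequence of 10 steps, 8 features each
    z, _ = rnn(x)               # hidden layer z: state carried across steps
    y = readout(z[:, -1, :])    # output y read from the final hidden state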

39
Q

Generalization

A

Choosing a network architecture:
Some neural network architectures are explicitly designed to generalize well on
particular types of data.
When comparing two networks with similar numbers of weights, the deeper network
usually gives better generalization performance.
Deep learning systems perform better than any other pure machine learning
approaches for high-dimensional inputs (images, video, speech signals, etc.).
Deep learning models lack compositional and quantificational expressive power.
They may also produce unintuitive errors, tending to produce input–output
mappings that are discontinuous.

40
Q

Neural architecture search

A

Use neural architecture search to explore the state space of possible network
architectures.
Some options for doing this:
* Evolutionary algorithms: recombination (joining parts of two networks
together) and mutation (adding or removing a layer or changing a
parameter value)
* Train one big network and search for subgraphs of the network that perform
better

41
Q

Transfer learning and multitask learning

A

In transfer learning, experience with one learning task helps an agent learn
better on another task:
* freeze the first few layers of the pretrained model, which serve as feature
detectors
* modify the parameters of the higher levels only, which learn the
problem-specific features and do classification (see the sketch below)
It is common to start with a pretrained model such as RoBERTa, followed by
fine-tuning the model in two ways:
* giving it examples of the specialized vocabulary used in the desired domain
* training the model on the task it is to perform
Multitask learning is a form of transfer learning in which we simultaneously train
a model on multiple objectives
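A minimal PyTorch sketch of the freeze-and-fine-tune recipe, using a
torchvision ResNet as a stand-in backbone (assumes a recent torchvision;
fine-tuning RoBERTa follows the same pattern):

    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained feature detectors
    for param in model.parameters():
        param.requires_grad = False                   # freeze the early layers

    # Replace the head: only these parameters are trained on the new task.
    model.fc = nn.Linear(model.fc.in_features, 2)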

42
Q

Vision – CNNs (didn’t cover these today)

A
  • the success of the AlexNet deep learning system in the 2012 ImageNet competition
    propelled deep learning into the limelight
  • a supervised learning task with 1,200,000 images in 1,000 different categories
  • the top-5 error rate has been reduced to less than 2%, below the error rate of a
    trained human (5%)
43
Q

Natural language processing - RNNs

A
  • machine translation and speech recognition
  • end-to-end learning, the automatic generation of internal representations for the
    meanings of words, and the interchangeability of learned encoders and decoders
  • end-to-end learning outperforms classical pipelines
  • re-representing individual words as vectors in a high-dimensional space—so-called
    word embeddings
44
Q

What is an NLM?

A

Neural Language Model

45
Q

What is a Corpus

A
46
Q

What is a Token

A
47
Q

Vocabulary

A
48
Q

AI and the Cloud

A
  • Need video memory
  • Need time
  • Difficult to run computers for extended
    periods (power supply, hardware)
  • Training of big neural networks has moved to
    the cloud
49
Q

Rant on GPUs

A
  • Deep learning can be done on the CPU, but it is slow
  • You generally need a GPU (e.g., a 4060 Ti, 4080, 3090, or 4090)
  • Memory is the biggest factor for training models
  • 16 GB+
50
Q

FLAN-T5:

A

works if you ask it questions
directly

51
Q

What is a hidden layer in the computation graph in deep learning

A

Intermediate computations before producing the output y.
Different representation for the input x.
Each layer transforms the representation produced by the preceding layer to produce
a new representation
In the process of forming all these internal transformations, deep networks often
discover meaningful intermediate representations of the data
The hidden layers of neural networks are typically less diverse than the output layers.

52
Q

Back Propagation

A

back-propagation: the way that the error at the output is passed back
through the network (we won’t get into this in much detail today).
In practice, when we train a neural network, we see that a loss function
is minimized.

53
Q

Neural network inputs

A
54
Q

Properties of Neural Networks

A
55
Q

Activation Function

A
56
Q

Tensor operations in CNNs

A

Tensors (in deep learning terminology) are simply multidimensional arrays of
any dimension.
Vectors and matrices are the one-dimensional and two-dimensional special
cases of tensors.
Computational efficiency of tensor operations: given a description of a
network as a sequence of tensor operations, a deep learning software package
can generate compiled code that is highly optimized.
Tensor operations are run on GPUs (graphics processing units) or TPUs (tensor
processing units), which make available a high degree of parallelism (see the
sketch below).
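A short PyTorch sketch of tensors in action; the shapes are invented for
illustration.

    import torch
    import torch.nn.functional as F

    images = torch.randn(32, 3, 64, 64)  # 4-D tensor: batch, channels, height, width
    kernel = torch.randn(8, 3, 3, 3)     # 8 convolution filters over 3 channels
    out = F.conv2d(images, kernel, padding=1)
    print(out.shape)                     # torch.Size([32, 8, 64, 64])
    # The same operations run on a GPU by moving tensors there, e.g. images.to("cuda")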

57
Q

Parameters (Weights)

A

tunable values learned during training
process

58
Q

Epoch

A

full pass of your training data

59
Q

Batch Size

A

N samples from your training data

60
Q

Iteration

A

updating your model every batch
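Worked example tying the three terms together: with 10,000 training samples and
a batch size of 100, one epoch consists of 10,000 / 100 = 100 iterations (one
model update per batch).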

61
Q

Hidden Layer

A

intermediate layer

62
Q

Loss function

A

the quantity we want to minimize (ideally toward zero) during training

63
Q

Frameworks

A

Keras, PyTorch

64
Q

Image classification with convolutional neural networks

A

With enough training data and enough training ingenuity,
CNNs produce very successful classification systems.
Images can have small alterations without changing their identity, and local
patterns can be quite informative.
Convolution followed by a ReLU activation function acts as a
local pattern detector.
Composite patterns can be detected by applying another
layer to the output of the first layer (see the sketch below).
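A minimal PyTorch sketch of that pattern-detector stack; the channel counts
are arbitrary.

    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolution over local patches
        nn.ReLU(),                                    # together: local pattern detector
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # second layer: composite patterns
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 10),                            # 10-way classifier head
    )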

65
Q

Convolution Filter

A
66
Q

Data set augmentation

A

Training examples are copied and modified slightly.
Images can have small alterations without changing the identity:
* randomly shift, rotate, or stretch an image by a small amount, or randomly shift
the hue of the pixels by a small amount (see the sketch below)
* CNN-based classifiers are good at ignoring patterns that aren’t discriminative
* Context: patterns that lie off the object might be discriminative
* e.g., a cat toy, a collar with a little bell, or a dish of cat food might actually
help tell that we are looking at a cat
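A short torchvision sketch of such augmentations; the magnitudes are arbitrary
small values.

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomAffine(degrees=10, translate=(0.1, 0.1)),  # small shift/rotate
        transforms.ColorJitter(hue=0.05),                           # small hue shift
        transforms.ToTensor(),
    ])
    # Applied to each image as it is loaded, so every epoch sees slightly
    # different copies of the training examples.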

67
Q

Detecting Objects

A

Object detectors find multiple objects in an image, report what class each object
is and also report where each object is by giving a bounding box around the
object
Building an object detector:
* looking at a small sliding window onto the larger image—a rectangle.
* At each spot, we classify what we see in the window, using a CNN classifier
Details (a sliding-window sketch follows this list):
* Decide on a window shape
* Build a classifier for windows
* Decide which windows to look at
* Choose which windows to report
* Report precise locations of objects using these windows
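A plain-Python sketch of the sliding-window loop, assuming a NumPy-style image
array; `classify_window` is a hypothetical stand-in for a trained CNN window
classifier.

    def detect(image, classify_window, window=64, stride=32, threshold=0.9):
        """Slide a window over the image, classify each crop, report boxes."""
        boxes = []
        height, width = image.shape[:2]
        for top in range(0, height - window + 1, stride):
            for left in range(0, width - window + 1, stride):
                crop = image[top:top + window, left:left + window]
                label, score = classify_window(crop)  # hypothetical CNN classifier
                if score > threshold:
                    boxes.append((label, (left, top, window, window)))
        return boxes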

68
Q

Robot Overview

A
  • physical agents that perform tasks by manipulating the physical
    world
  • equipped with effectors such as legs, wheels, joints, and grippers
  • equipped with sensors, which enable them to perceive their
    environment
  • Maximizing expected utility for a robot means choosing how to
    actuate its effectors to assert the right physical forces
  • Robotic learning is constrained because the real world operates in
    real time
69
Q

Applications of Robots

A

Home care: Robots have started to enter the home to care for older adults and
people with motor impairments
Health care: Robots assist and augment surgeons, enabling more precise,
minimally invasive, safer procedures with better patient outcomes
Services: Mobile robots help out in office buildings, hotels, and hospitals
Autonomous cars
Entertainment: Disney has been using robots (under the name animatronics) in
their parks since 1963.
Exploration and hazardous environments: Robots have gone where no human
has gone before, including the surface of Mars.
Industry: The majority of robots today are deployed in factories, automating
tasks that are difficult, dangerous, or dull for humans.

70
Q

Deployment of AI: Considerations

A
71
Q

Deploying AI: low stakes

A
72
Q

Deploying AI: high stakes

A
73
Q

Digital Content

A
74
Q

No Free Lunch Theorem

A
75
Q

AGI ethics

A
76
Q

Transparency

A
77
Q

Weak AI

A
  • weak AI: the idea that machines could act as if they were intelligent
78
Q

Strong AI

A

The assertion that machines that act intelligently are actually consciously
thinking (not just simulating thinking)

79
Q

Good Old-Fashioned AI (GOFAI)

A
  • simplest logical agent design
  • qualification problem: difficult to capture every contingency of appropriate
    behavior in a set of necessary and sufficient logical rules
80
Q

The argument from disability

A
81
Q

Measuring AI

A
82
Q

Representing the state of the world

A
83
Q

The mathematical objection

A
84
Q

Deciding what we want

A
85
Q

Resources

A
86
Q

AI engineering

A

* The AI industry has not yet reached the level of maturity seen in other
branches of engineering.
* We do have a variety of powerful tools and frameworks, such as
TensorFlow, Keras, PyTorch, Caffe, scikit-learn, and SciPy.
* But many of the most promising approaches, such as GANs and deep
reinforcement learning, have proven to be difficult to work with—they
require experience and a degree of fiddling to get them to train properly in
a new domain.
* One possible future: start with a single huge system and, for each new task,
extract from it the parts that are relevant to the task.

87
Q

The future

A