AI Flashcards

1
Q

LLMs

A

Large Language Models

2
Q

AGI

A

Artificial General Intelligence

3
Q

AI winter: thoughts?

A

A dry spell: a period of reduced funding and interest in AI research

4
Q

Name 3 reasons why AI probably won't replace everything

A

AI is still not suitable to replace humans for a number of tasks,
primarily because trust is a major issue with current AI systems and those
of the foreseeable future.
Science fiction widely argues AI is dangerous, although there are some
examples of good AI.
Can AI systems actually be fair, just, and ethical, or will they simply
appear to be?
- Definitions of those terms vary from person to person
- Still an open problem

5
Q

AI vs. Game AI

A

AI in general has more to do with knowledge
representation and taking reasonable actions based on
available data. In AI, “Available Data” tends to be limited
to sensory input and previously learned or experienced
examples.
In Game AI, we can break a lot of rules regarding what
“intelligence” is and give lots more information about the
world to agents than they normally would have based on
sensory inputs alone.
Still, there can be lots of overlap between the two.

6
Q

Is AI a subset of machine learning?

A

No, machine learning is a subset of AI!

7
Q

Define Machine Learning

A

In Machine Learning the general goal is to find ways to get values that
separate different classes of data or produce an accurate prediction
based on some data

In Deep Learning we use massive datasets to train complex neural
networks to output text or recognize objects

8
Q

The Turing test

A

A test to see if an AI can successfully pretend to be a human

9
Q

Rational Agents

A

Artificial intelligence is the synthesis and analysis of
computational agents that act intelligently.
An agent is something that acts in an environment.

An agent acts intelligently if:
- its actions are appropriate for its goals and circumstances
- it is flexible to changing environments and goals
- it learns from experience
- it makes appropriate choices given perceptual and computational limitations

10
Q

Provide some examples of rational Agents

A

Organizations: Microsoft, the European Union, Real Madrid FC,
an ant colony, …

People: teacher, physician, stock trader, engineer, researcher,
travel agent, farmer, waiter, …

Computers/devices: thermostat, user interface, airplane
controller, network controller, game, advising system, tutoring
system, diagnostic assistant, robot, Google car, Mars rover, …

Animals: dog, mouse, bird, insect, worm, bacterium, …
book(?), sentence(?), word(?), letter(?)

Can a book or article do things?
Convince? Argue? Inspire? Cause people to act differently?

11
Q

List the scientific and engineering goals behind rational agents

A

Scientific goal: to understand the principles that make
intelligent behavior possible in natural or artificial systems.
- analyze natural and artificial agents
- formulate and test hypotheses about what it takes to construct
intelligent agents
- design, build, and experiment with computational systems that
perform tasks that require intelligence

Engineering goal: design useful, intelligent artifacts.
- Analogy between studying flying machines and thinking machines.

12
Q

What are the inputs of an agent?

What are its outputs?

A

Inputs: abilities, goals/preferences, prior knowledge, stimuli (percepts),
past experiences.

Outputs: actions.

13
Q

Break down the following agent, Self driving car:

Abilities:
Goals:
Prior Knowledge:
Stimuli:
Experiences:

A

abilities: steer, accelerate, brake

goals/preferences: safety, get to destination,
timeliness, …

prior knowledge: street maps, what signs mean,
what to stop for, …

stimuli: vision, laser, GPS, voice commands, …

past experiences: how braking and steering affect
direction and speed, …

14
Q

Risks of AI

A
  • Lethal autonomous weapons
  • Surveillance and persuasion
  • Biased decision making
  • Impact on employment
  • Safety-critical applications
  • Cybersecurity threats
15
Q

Benefits of AI

A
  • Decrease repetitive work
  • Increase production of goods and services
  • Accelerate scientific research (disease cures, climate change and
    resource shortages solutions)
16
Q

Define the environment in terms of agents

A

The environment could be everything (the entire universe!).
In practice, it is just that part of the universe whose
state we care about when designing this agent: the
part that affects what the agent perceives and that is
affected by the agent's actions.

17
Q

percept

A

The content an agent's sensors are perceiving

18
Q

percept sequence

A

The percept sequence is the complete history of everything the agent
has ever perceived.
- The agent function maps this to an action

19
Q

Agent percepts

A

Information provided by the environment to the agent

20
Q

Actuators

A

Act for the agent, performing actions on the environment

21
Q

Rationality

A

Humans have preferences; rationality has to do with success in choosing
actions that result in a positive environment state.
- Point of view
Machines don't have preferences or aspirations by default:
- the performance measure is up to the designer
- goals can be explicit and understood
- but sometimes perhaps not
Sometimes a performance measure is unclear.
Consider aspects of a vacuum cleaner agent:
- a mediocre job always, or super clean but a big charge time?

22
Q

What is rational depends on four things:

A

The performance measure that defines the criterion of success.
The agent’s prior knowledge of the environment.
The actions that the agent can perform.
The agent’s percept sequence to date.

23
Q

Performance Measure:

A

A fixed performance measure evaluates the environment:
– one point per square cleaned up in time T?
– one point per clean square per time step, minus one per move?
– penalize for > k dirty squares?
A rational agent chooses whichever action maximizes the expected value of
the performance measure given the percept sequence to date.
Rational ≠ omniscient
– percepts may not supply all relevant information
Rational ≠ clairvoyant
– action outcomes may not be as expected
Hence, rational ≠ successful
Rational ⇒ exploration, learning, autonomy

24
Q

Rationality & Omniscience

A

Game AI tends to be more omniscient than the realistic take on AI.
Game AI often knows the outcome of its actions and potentially how they
map onto environment states.
The reality with AI and Game AI is that sometimes you don't know if
something is bad, or you don't know if a bad event might occur.
Book: walk across a clear street to a friend, and a door falls on you from a plane
- you didn't make a bad decision here, but it was unfortunate
Inverse (GTA): take a taxi off of a tower
- the taxi AI just has no clue driving off a tower is dangerous
The result, getting closer to the destination, is rational, at least from
the limited view of the environment.
Trade-off between actual and expected performance.

25
Q

Rationality & Omniscience

A

Book: walk across a clear street to a friend, and a door falls on you from a plane
- you didn't make a bad decision here, but it was unfortunate
- actual performance: look
Inverse (GTA): take a taxi off of a tower
- the taxi AI just has no clue driving off a tower is dangerous
- actual performance: the NPC should check to see if it is high up
This process is called information gathering.
- Modifies percepts
- Gather information and learn when possible

26
Q

What is PEAS?

A

PEAS = Performance measure, Environment, Actuators, Sensors.
For an automated taxi:
Performance measure? safety, destination, profits, legality, comfort, …
Environment? US streets/freeways, traffic, pedestrians, weather, …
Actuators? steering, accelerator, brake, horn, speaker/display, …
Sensors? video, accelerometers, gauges, engine sensors, keyboard, GPS, …

27
Q

Observable

A

Fully observable: sensors give the agent access to the
complete state of the environment.
Partially observable: sensors give it access to only
some of the environment state.
If the agent has no sensors, the environment is
unobservable (not hopeless though).

28
Q

Deterministic?

A

If the next state of the environment is completely
determined by the current state and the action
executed by the agent, it is deterministic.
Otherwise, it is non-deterministic.
Most real situations are so complex that it is not possible to
keep track of unobserved aspects, so treat them as non-deterministic.

29
Q

Episodic?

A

In an episodic task environment, the agent's
experience is divided into atomic episodes. In each
episode the agent receives a percept and then
performs a single action -> Robots!
- the next episode does not depend on the actions
taken in previous episodes
Sequential:
- the current decision could affect all future decisions

30
Q

Static vs. dynamic?

A

If environment can change while agent
deliberates, environment is dynamic, otherwise, it
is static

31
Q

Discrete vs. Continuous?

A

Chess has finite number of states/percepts/actions
It is discrete
Taxi driving is continuous -> continuous values

32
Q

Single-agent vs. Multi-agent?

A

Solving a crossword puzzle by itself is clearly a
single-agent environment, whereas an agent playing
chess is in a two-agent environment.
We have described how an entity may be viewed as an
agent, but we have not explained which entities
must be viewed as agents.

33
Q

Four basic types in order of increasing generality

A

– simple reflex agents
– reflex agents with state
– goal-based agents
– utility-based agents

34
Q

Reflex-based agents

A

Can implement with a Finite State Machine.
Algorithm:

    Set an initial state (Idle is common)
    If percept1 then:
        SetState(Reaction1)
    Else if percept2 then:
        SetState(Reaction2)
    Else if percept3 then:
        SetState(Reaction3)

Easy to implement and generally gets pretty good results.
Can still make fairly realistic agents as long as perception is reasonable.
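
Below is a minimal Python sketch of this kind of reflex FSM. The percept
names and reaction states (enemy_visible, Attack, and so on) are
hypothetical, not from the card:

    # Hypothetical simple reflex agent as a finite state machine.
    # Percepts and reaction states are illustrative only.
    class ReflexAgent:
        def __init__(self):
            self.state = "Idle"  # initial state, as in the algorithm above

        def set_state(self, state):
            self.state = state

        def update(self, percepts):
            # The first matching percept wins, mirroring the if/else chain.
            if "enemy_visible" in percepts:
                self.set_state("Attack")
            elif "noise_heard" in percepts:
                self.set_state("Investigate")
            elif "low_health" in percepts:
                self.set_state("Flee")
            else:
                self.set_state("Idle")

    agent = ReflexAgent()
    agent.update({"noise_heard"})
    print(agent.state)  # Investigate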

35
Q

Reflex agents with state

A

Can again implement with a Finite State Machine.
One (possible) algorithm:

    Set an initial state (Idle is common)
    worldState = PerceiveWorldState()
    If percept1 && worldState.reaction1Benefit then:
        SetState(Reaction1)
    Else if percept2 && worldState.reaction2Benefit then:
        SetState(Reaction2)
    Else if percept3 && worldState.reaction3Benefit then:
        SetState(Reaction3)

Could loop over world state action benefits and take the maximum.
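
A minimal Python sketch of the stateful variant, including the loop over
action benefits mentioned above; all names and benefit values are
hypothetical:

    # Hypothetical reflex agent with state: reactions are gated by percepts
    # AND judged by their benefit in the perceived world state.
    def perceive_world_state():
        # Illustrative stand-in for real perception: estimated benefit of
        # each reaction in the current world state.
        return {"Attack": 0.2, "Investigate": 0.7, "Flee": 0.1}

    class StatefulReflexAgent:
        TRIGGERS = {"Attack": "enemy_visible",
                    "Investigate": "noise_heard",
                    "Flee": "low_health"}

        def __init__(self):
            self.state = "Idle"

        def update(self, percepts):
            world_state = perceive_world_state()
            # Keep only reactions whose triggering percept is present...
            candidates = {r: benefit for r, benefit in world_state.items()
                          if self.TRIGGERS[r] in percepts}
            # ...then loop over the benefits and take the maximum.
            self.state = max(candidates, key=candidates.get) if candidates else "Idle"

    agent = StatefulReflexAgent()
    agent.update({"noise_heard", "low_health"})
    print(agent.state)  # Investigate (benefit 0.7 beats Flee's 0.1)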

36
Q

Utility-based agents

A

Uses a utility function that maps a state (or sequence of states) onto a
real number describing how desirable that state is; the agent chooses the
action that maximizes expected utility. This allows rational decisions
when goals conflict or success is uncertain.
37
Q

Goal-based agents

A

Keeps track of the world state as well as a set of goals it is trying to
achieve, and chooses actions that will (eventually) lead to the
achievement of its goals. More flexible than reflex agents because the
knowledge supporting its decisions is represented explicitly.
38
Q

Agent environments: command-prompt

A

Command Prompt Agent
(basically the same as a grid)
Environment:
- real-time
- turn-based?
- steps?
Sensors:
- limited view of the characters
- object representation
- can you hear other agents?
Actuators: very basic

39
Q

Agent Environments: 2D space

A

2D Agent
(pixels now instead of cells)
Environment:
- turn-based?
- steps?
Sensors:
- limited view of the characters
- can you hear other agents?
Actuators: pretty basic

40
Q

Agent Environments: 3D Space

A

3D Agent
(free-form movement)
Environment:
- turn-based?
- steps?
Sensors:
- limited view of the characters
- can you hear other agents?
- More options given world
Actuators: more complex

41
Q

LLM

A

Large Language Model

42
Q

Belief States

A

An agent doesn’t have access to its entire history. It only has
access to what it has remembered.
The memory or belief state of an agent at time t encodes all
of the agent’s history that it has access to.
The belief state of an agent encapsulates the information
about its past that it can use for current and future actions.
At every time a controller has to decide on:
What should it do?
What should it remember?
(How should it update its memory?)
— as a function of its percepts and its memory
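
A tiny Python sketch of this loop, with hypothetical percepts and memory
contents; the controller maps (memory, percept) to (action, new memory):

    # Hypothetical controller with a belief state (memory).
    # At each time step it decides what to do AND what to remember.
    def controller(memory, percept):
        # Update memory: remember how often an obstacle has been seen.
        seen = memory.get("seen_obstacles", 0)
        if percept == "obstacle":
            seen += 1
        # Act as a function of the percept and the memory, not percepts alone.
        action = "turn" if percept == "obstacle" else "forward"
        return action, {"seen_obstacles": seen}

    memory = {}
    for percept in ["clear", "obstacle", "clear"]:
        action, memory = controller(memory, percept)
        print(percept, "->", action, memory)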

43
Q

A purely reactive agent:

A

A purely reactive agent doesn’t have a belief state.

44
Q

A dead reckoning agent:

A

Doesn't perceive the world.
Neither a purely reactive agent nor a dead reckoning agent works very well
in complicated domains.

45
Q

Hierarchy of controllers

A

A better architecture is a hierarchy of controllers.
Each controller sees the controllers below it as a virtual body, from
which it gets percepts and to which it sends commands.

The lower-level controllers can:
- run much faster, and react to the world more quickly
- deliver a simpler view of the world to the higher-level controllers

46
Q

Problem Types

A
47
Q

4 aspects in solving by searching

A

States
Actions
Goal Test
Path cost

48
Q

Search Problems

A

A search problem can be defined as follows:
The initial state that the agent starts in.
A set of one or more goal states.
The actions available to the agent. Given a state s, ACTIONS(s)
returns a finite set of actions that can be executed in s.
A transition model, which describes what each action does.
A sequence of actions forms a path, and a solution is a path from
the initial state to a goal state.
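
A minimal Python sketch of this definition, with a hypothetical toy
corridor problem standing in for a real state space:

    # A search problem as data: initial state, goal states, actions, and a
    # transition model. The corridor world below is a made-up toy example.
    class SearchProblem:
        def __init__(self, initial, goals, actions, result):
            self.initial = initial    # the state the agent starts in
            self.goals = goals        # the set of goal states
            self.actions = actions    # actions(s): finite set of actions in s
            self.result = result      # result(s, a): the transition model

        def is_goal(self, state):
            return state in self.goals

    # Toy problem: walk right along a corridor of cells 0..3.
    corridor = SearchProblem(
        initial=0,
        goals={3},
        actions=lambda s: {"right"} if s < 3 else set(),
        result=lambda s, a: s + 1 if a == "right" else s,
    )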

49
Q

What does a search algorithm take as input, and what does it return?

A

A search algorithm takes a search problem as input and returns a
solution, or an indication of failure.
The state space describes the (possibly infinite) set of states in the
world, and the actions that allow transitions from one state to
another. The search tree describes paths between these states,
reaching towards the goal.
We can expand a node by considering the available ACTIONS for
that state, using the RESULT function to see where those actions lead,
and generating a new node.

50
Q

Tree search example

A

Three kinds of queues are used in search algorithms:
A priority queue first pops the node with the minimum cost
according to some evaluation function; it is used in best-first search.
A FIFO queue or first-in-first-out queue first pops the node that was
added to the queue first; we shall see it is used in breadth-first
search.
A LIFO queue or last-in-first-out queue (also known as a stack) pops
first the most recently added node; we shall see it is used in
depth-first search.
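
For reference, a sketch of how the three queue types map onto Python's
standard library:

    from collections import deque
    import heapq

    # FIFO queue (breadth-first search): pop the node added first.
    fifo = deque()
    fifo.append("a"); fifo.append("b")
    assert fifo.popleft() == "a"

    # LIFO queue / stack (depth-first search): pop the most recently added node.
    lifo = []
    lifo.append("a"); lifo.append("b")
    assert lifo.pop() == "b"

    # Priority queue (best-first search): pop the minimum-cost node.
    pq = []
    heapq.heappush(pq, (5, "b")); heapq.heappush(pq, (1, "a"))
    assert heapq.heappop(pq) == (1, "a")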

51
Q

Uninformed search strategies:

A

Uninformed strategies use only the information available
in the problem definition
Breadth-first search
Depth-first search
Uniform-cost search
Depth-limited search
Iterative deepening search
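
As a concrete instance, a breadth-first search sketch written against the
SearchProblem shape from the earlier card (helper names are assumptions):

    from collections import deque

    # Breadth-first search: a FIFO frontier of paths, expanding shallowest
    # nodes first. Returns a path from initial to goal, or None on failure.
    def breadth_first_search(problem):
        frontier = deque([[problem.initial]])   # paths, not bare states
        reached = {problem.initial}
        while frontier:
            path = frontier.popleft()
            state = path[-1]
            if problem.is_goal(state):
                return path                      # solution: initial -> goal
            for action in problem.actions(state):
                child = problem.result(state, action)
                if child not in reached:
                    reached.add(child)
                    frontier.append(path + [child])
        return None                              # indication of failure

    # print(breadth_first_search(corridor))  # [0, 1, 2, 3] for the toy corridor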

52
Q

Defining Constraint Satisfaction Problems

A
53
Q

Constraint satisfaction problems (CSPs)

A

Standard search problem:
- state is a "black box": any old data structure
that supports goal test, eval, successor
CSP:
- state is defined by variables Xi with values from domain Di
- goal test is a set of constraints specifying
allowable combinations of values for subsets of variables
Simple example of a formal representation language.
Allows useful general-purpose algorithms with more power
than standard search algorithms.
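
A small Python sketch of a CSP in exactly this form, using the classic
map-coloring example and a naive backtracking solver; the variables and
constraints are illustrative:

    # Map-coloring CSP: variables with domains and binary "different color"
    # constraints, solved with naive backtracking.
    variables = ["WA", "NT", "SA", "Q"]
    domains = {v: ["red", "green", "blue"] for v in variables}
    neighbors = {("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q")}

    def consistent(var, value, assignment):
        # Allowable combination: no assigned neighbor shares the same color.
        for a, b in neighbors:
            other = b if a == var else a if b == var else None
            if other is not None and assignment.get(other) == value:
                return False
        return True

    def backtrack(assignment):
        if len(assignment) == len(variables):
            return assignment                    # goal test: all assigned
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value, assignment):
                result = backtrack({**assignment, var: value})
                if result is not None:
                    return result
        return None                              # dead end: backtrack

    print(backtrack({}))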

54
Q

CSPs: Variants

A
  • determine whether or not a solution exists
  • find a solution
  • find all solutions
  • count the number of solutions
  • find the best solution given some solution quality
  • soft constraints specify preferences
  • determine whether some property holds in all of the solutions
55
Q

Binary CSP:
Constraint graph:

A

Binary CSP: each constraint relates at most two variables
Constraint graph: nodes are variables, arcs show constraints

56
Q

Discrete variables

A

Discrete variables:
- finite domains; complete assignments
  ♦ e.g., Boolean CSPs, incl. Boolean satisfiability (NP-complete)
- infinite domains (integers, strings, etc.)
  ♦ e.g., job scheduling, variables are start/end days for each job
  ♦ need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3
  ♦ linear constraints solvable, nonlinear undecidable
Continuous variables:
- e.g., start/end times for Hubble Telescope observations
- linear constraints solvable in poly time by LP methods

57
Q

Satisfiability problems:

Optimization problems:

A

Satisfiability problems: find an assignment of values to the variables
that satisfies all of the constraints.

Optimization problems: assignments have an associated quality or cost
(e.g., from soft constraints); find a satisfying assignment that
optimizes that measure.
58
Q

Standard search formulation (incremental)

A

States: partial assignments of values to variables.
Initial state: the empty assignment { }.
Actions: assign a value to an unassigned variable that does not conflict
with the current assignment.
Goal test: the assignment is complete.
59
Q

CSP As Graph Searching

A
60
Q

Consistency Algorithms

A
61
Q

Components of a learning problem

A
62
Q

Supervised Learning basics

A
  • agent observes input-output pairs
  • learns a function that maps from input to output
63
Q

Unsupervised Learning basics

A

agent learns patterns in the input without any explicit feedback
* clustering
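
A minimal clustering example, assuming scikit-learn is available; the data
points are made up:

    import numpy as np
    from sklearn.cluster import KMeans

    # Unlabeled 2-D points forming two loose groups; no explicit feedback.
    X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                  [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])

    # The learner finds patterns (two cluster centers) on its own.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)           # cluster index assigned to each point
    print(kmeans.cluster_centers_)  # the learned centers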

64
Q

Reinforcement Learning basics

A

agent learns from a series of reinforcements: rewards &
punishments

65
Q

Bias and variance
A
  • Use bias to analyze the hypothesis space
  • Bias: the tendency of a predictive hypothesis to deviate from the expected
    value when averaged over different training sets
  • Underfitting: fails to find a pattern in the data
  • Variance: the amount of change in the hypothesis due to fluctuation in the
    training data.
  • Overfitting: when it pays too much attention to the particular data set it is
    trained on, causing it to perform poorly on unseen data.
  • Bias–variance tradeoff: a choice between more complex, low-bias
    hypotheses that fit the training data well and simpler, low-variance
    hypotheses that may generalize better.
66
Q

Supervised Learning

A

Example problem: Restaurant waiting
* the problem of deciding whether to wait for a table at a restaurant.
* For this problem the output, y, is a Boolean variable that we will call
WillWait.
* The input, x, is a vector of ten attribute values, each of which has discrete
values:
1. Alternate: whether there is a suitable alternative restaurant nearby.
2. Bar: whether the restaurant has a comfortable bar area to wait in.
3. Fri/Sat: true on Fridays and Saturdays.
4. Hungry: whether we are hungry right now.
5. Patrons: how many people are in the restaurant (values are None, Some, and
Full).
6. Price: the restaurant’s price range ($, $$, $$$).
7. Raining: whether it is raining outside.
8. Reservation: whether we made a reservation.
9. Type: the kind of restaurant (French, Italian, Thai, or burger).
10. WaitEstimate: host's wait estimate: 0–10, 10–30, 30–60, or >60 minutes
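
One way to picture a single (x, y) example for this problem, as a
hypothetical Python dict:

    # One labeled example: ten attribute values (x) and the WillWait label (y).
    x = {"Alternate": True, "Bar": False, "Fri/Sat": False, "Hungry": True,
         "Patrons": "Some", "Price": "$$", "Raining": False,
         "Reservation": True, "Type": "Thai", "WaitEstimate": "0-10"}
    y = True  # WillWait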

67
Q

Model Selection and Optimization

A
  • Task of finding a good hypothesis has two subtasks:
  • Model selection: chooses a good hypothesis space
  • Optimization (training): finds the best hypothesis within that space.
    A training set to create the hypothesis, and a test set to evaluate it.
    Error rate: the proportion of times that h(x) ≠ y for an (x, y) example.
    Three data sets are needed:
    1. A training set to train candidate models.
    2. A validation set, also known as a development set or dev set, to evaluate the
    candidate models and choose the best one.
    3. A test set to do a final unbiased evaluation of the best model.
    When there is an insufficient amount of data to create three sets: k-fold cross-validation
  • split the data into k equal subsets
  • perform k rounds of learning
  • on each round, 1/k of the data is held out as a validation set and the remaining
    examples are used as the training set
  • Popular values for k are 5 and 10
  • leave-one-out cross-validation (LOOCV): k = n, the number of examples
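
Assuming scikit-learn, k-fold cross-validation can be sketched like this
(the dataset and model are placeholders):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    # 5-fold cross-validation: each of the 5 rounds holds out 1/5 of the
    # data for validation and trains on the remaining 4/5.
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
    print(scores.mean())  # average validation accuracy over the folds
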
68
Q

Linear Regression and Classification

A
69
Q

Nonparametric Models

A

Parametric model: learning model that summarizes data with a set of
parameters of fixed size (independent of the number of training examples)
Nonparametric model: model that cannot be characterized by a bounded
set of parameters
One example: a piecewise linear function that retains all the data points as
part of the model (instance-based learning or memory-based learning).
Simplest instance-based learning method: table lookup
* take all the training examples, put them in a lookup table, and then when
asked for h(x), see if x is in the table; if it is, return the corresponding y.
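
A sketch of table lookup as the simplest instance-based method, with
made-up toy data:

    # Table lookup: the "model" is just the stored training data.
    table = {(1, 2): "A", (3, 4): "B"}   # made-up (x, y) training examples

    def h(x):
        # If x was seen in training, return the corresponding y.
        return table.get(x)  # None for unseen x: it cannot generalize

    print(h((1, 2)))  # "A"
    print(h((9, 9)))  # None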

70
Q

Ensemble Learning

A

The idea of ensemble learning is to select a collection, or ensemble,
of hypotheses h1, h2, . . . , hn, and combine their predictions by
averaging, voting, or by another level of machine learning.
* individual hypotheses: base models
* combination of base models: ensemble model
* Reasons to do ensemble learning:
* Reduce bias: an ensemble can be more expressive, thus less biased
than the base models
* Reduce variance: it is hoped that it is less likely multiple
classifiers will misclassify

71
Q

Bagging

A
  • generate K distinct training sets by sampling with replacement from
    the original training set.
  • randomly pick N examples from the training set, but each of those
    picks might be an example picked before.
  • run our machine learning algorithm on the N examples to get a
    hypothesis
  • repeat this process K times, getting K different hypotheses
  • aggregate the predictions from all K hypotheses.
  • for classification problems, that means taking the plurality vote (the
    majority vote for binary classification).
  • for regression problems, the final output is the average of the K
    hypotheses: h(x) = (1/K) Σ hi(x)
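
A compact Python sketch of bagging for classification; the base learner
and data are deliberately toy-sized:

    import random
    from collections import Counter

    # Bagging: K bootstrap samples -> K hypotheses -> plurality vote.
    def bag(train, learn, K):
        hypotheses = []
        for _ in range(K):
            # N picks WITH replacement: some examples repeat, some are left out.
            sample = [random.choice(train) for _ in range(len(train))]
            hypotheses.append(learn(sample))
        return hypotheses

    def predict(hypotheses, x):
        votes = Counter(h(x) for h in hypotheses)
        return votes.most_common(1)[0][0]        # plurality vote

    # Toy base learner: always predicts the majority class of its sample.
    def learn(sample):
        majority = Counter(y for _, y in sample).most_common(1)[0][0]
        return lambda x: majority

    train = [(0, "no"), (1, "yes"), (2, "yes"), (3, "no"), (4, "yes")]
    print(predict(bag(train, learn, K=7), x=2))
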
72
Q

Random forests

A

Random forests:
* a form of decision tree bagging
* randomly vary the attribute choices
* At each split point in constructing the tree, we select a random
sampling of attributes, and then compute which of those gives the
highest information gain
* Given n attributes, √n is a common number of attributes randomly
picked at each split for classification, and n/3 for regression problems
* Extremely randomized trees (ExtraTrees):
* for each selected attribute, randomly sample several candidate
values from a uniform distribution over the attribute's range
* select the value that has the highest information gain
* Pruning prevents overfitting
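
Assuming scikit-learn, a random forest along these lines might look like:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)

    # 100 bagged trees; max_features="sqrt" samples about sqrt(n) attributes
    # at each split, the common choice for classification noted above.
    forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                    random_state=0).fit(X, y)
    print(forest.predict(X[:3]))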

73
Q

Stacking

A

Stacking
* combines multiple base models from different model classes
trained on the same data
* approach:
* use the same training data to train each of the base models,
* use the held-out validation data (plus predictions) to train the
ensemble model.
* Also possible to use cross-validation if desired.
* can be thought of as a layer of base models with an ensemble
model stacked above it, operating on the output of the base models
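
Assuming scikit-learn, stacking can be sketched with its StackingClassifier,
which trains an ensemble model on cross-validated base-model predictions:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    # Base models from different model classes, trained on the same data; a
    # logistic-regression ensemble model is stacked above their predictions.
    stack = StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                    ("knn", KNeighborsClassifier())],
        final_estimator=LogisticRegression(),
        cv=5,  # held-out (cross-validated) predictions train the ensemble model
    ).fit(X, y)
    print(stack.predict(X[:3]))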

74
Q

Boosting

A

Boosting:
* weighted training set: each example has an associated weight wj ≥ 0
that describes how much the example should count during training
* Start with the first hypothesis h1.
* Increase the weights of the misclassified examples while decreasing
the weights of the correctly classified examples.
* The process continues in this way until we have generated K hypotheses,
where K is an input to the boosting algorithm.
* Similar to a greedy algorithm in the sense that it does not
backtrack; once it has chosen a hypothesis hi, it will never undo that
choice; rather it will add new hypotheses.
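
Assuming scikit-learn, boosting in this weighted-example style is
available as AdaBoost; a minimal sketch:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import AdaBoostClassifier

    X, y = load_iris(return_X_y=True)

    # K = n_estimators hypotheses; each round reweights the training examples,
    # raising the weights of misclassified ones, and never undoes past choices.
    boost = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(boost.score(X, y))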