Artificial intelligence and agents Flashcards

1
Q

What is an agent?

A

Something that acts upon the world.

2
Q

What is an environment?

A

A given world; for example, a room.

3
Q

What elements does the agent rely on?

A
  • Prior knowledge - about the agent and the environment
  • History - of interaction with the environment, which is composed of:
      • Stimuli - received from the current environment, which can include observations about the environment, as well as actions that the environment imposes on the agent
      • Past experiences - of previous actions and stimuli, or other data, from which it can learn
  • Goals - that it must try to achieve, or preferences over states of the world
  • Abilities - the primitive actions the agent is capable of carrying out.
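These elements can be grouped as a data structure. A minimal sketch, assuming nothing beyond the list above; every class and field name here is hypothetical, not from the text:

```python
from dataclasses import dataclass, field

# Illustrative grouping of the elements above; all names are hypothetical.
@dataclass
class History:
    stimuli: list = field(default_factory=list)           # observations and imposed actions
    past_experiences: list = field(default_factory=list)  # earlier actions/stimuli to learn from

@dataclass
class Agent:
    prior_knowledge: dict = field(default_factory=dict)   # about the agent and the environment
    history: History = field(default_factory=History)     # interaction with the environment
    goals: list = field(default_factory=list)             # goals or preferences
    abilities: list = field(default_factory=list)         # primitive actions it can carry out

robot = Agent(goals=["deliver coffee"], abilities=["move", "pick_up"])
robot.history.stimuli.append("saw cup on table")
```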
4
Q

What is design-time computation?

A

Design-time computation is the computation carried out to design the agent. It is carried out by the designer of the agent, not the agent itself.

5
Q

What is offline computation?

A

Offline computation is the computation done by the agent before it has to act. It can include compilation and learning. Offline, an agent can take background knowledge and data and compile them into a usable form called a knowledge base. Background knowledge can be given either at design time or offline.

6
Q

What is online computation?

A

Online computation is the computation done by the agent between observing the environment and acting in the environment. A piece of information obtained online is called an observation. An agent typically must use its knowledge base, its beliefs, and its observations to determine what to do next.
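A minimal sketch of online computation: between observing and acting, the agent combines its knowledge base, its beliefs, and the new observation to choose an action. The policy and every name here are illustrative, not a standard API:

```python
# Between observing and acting, combine the knowledge base (compiled offline),
# the beliefs, and the new observation to choose the next action.
def choose_action(knowledge_base, beliefs, observation):
    beliefs.append(observation)                     # record what was observed online
    return knowledge_base.get(observation, "wait")  # fall back to a default action

kb = {"dirty_floor": "vacuum", "doorbell": "open_door"}  # compiled offline
beliefs = []
action = choose_action(kb, beliefs, "doorbell")
```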

7
Q

What two strategies are there to building an agent?

A

Simplify the environment

The first is to simplify environments and build complex reasoning systems for these simple environments. For example, factory robots can do sophisticated tasks in the engineered environment of a factory, but they may be hopeless in a natural environment. Much of the complexity of the task can be reduced by simplifying the environment. This is also important for building practical systems because many environments can be engineered to make them simpler for agents.

Simplify the agent

The second strategy is to build simple agents in natural environments. This is inspired by seeing how insects can survive in complex environments even though they have very limited reasoning abilities. Researchers then give the agents more reasoning abilities as their tasks become more complicated.

8
Q

What is a general difference between a computer program and AI?

A

Computer program: procedural
AI: declarative

One way that AI representations differ from computer programs in traditional languages is that an AI representation typically specifies what needs to be computed, not how it is to be computed.
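A small illustration of the distinction, using sorting as a made-up example: a procedural program spells out how to compute the result, while a declarative specification only states what a solution is.

```python
# Procedural: step-by-step instructions for HOW to compute the result.
def insertion_sort(xs):
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

# Declarative: a specification of WHAT counts as a solution, with no recipe.
def is_solution(xs, ys):
    in_order = all(a <= b for a, b in zip(ys, ys[1:]))
    same_elements = sorted(xs) == sorted(ys)
    return in_order and same_elements
```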

9
Q

What does the general framework of task solving consist of?

A
  1. Determine what constitutes a solution (Solve)
  2. Represent the task in a way a computer can reason about (Represent)
  3. Use the computer to compute an output, which is answers presented to a user or actions to be carried out in the environment (Compute)
  4. Interpret the output as a solution to the task. (Interpret)
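A hypothetical walk-through of the four steps on a tiny task ("find an integer whose square is 49"); the task and all names are invented for illustration:

```python
# 1. Solve: decide what constitutes a solution.
def is_solution(x):
    return x * x == 49

# 2. Represent: encode the task as a search over candidate integers.
candidates = range(-100, 101)

# 3. Compute: use the computer to produce an output.
output = next(x for x in candidates if is_solution(x))

# 4. Interpret: read the output back as an answer to the task.
answer = f"{output} is a number whose square is 49"
```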
10
Q

What is knowledge?

A

Knowledge is the information about a domain that can be used to solve tasks in that domain.

11
Q

What is a representation language?

A

A representation language is used to express the knowledge that is used in an agent.

12
Q

What makes a good representation language?

A

  • Rich enough to express the knowledge needed to solve the task.
  • As close to a natural specification of the task as possible: it should be compact, natural, and maintainable. It should be easy to see the relationship between the representation and the domain being represented, so that it is easy to determine whether the knowledge represented is correct. A small change in the task should result in a small change in the representation of the task.
  • Amenable to efficient computation, or tractable, which means that the agent can act quickly enough. To ensure this, representations exploit features of the task for computational gain and trade off accuracy and computation time.
  • Able to be acquired from people, data, and past experiences.

13
Q

What is an optimal solution?

A

An optimal solution to a task is one that is the best solution according to some measure of solution quality.

For example, a robot may need to take out as much trash as possible; the more trash it can take out, the better.

14
Q

What is a satisficing solution?

A

A satisficing solution is one that is good enough according to some description of which solutions are adequate.

15
Q

What is an approximately optimal solution?

A

An approximately optimal solution is one whose measure of quality is close to the best that could theoretically be obtained.

Typically, agents do not need optimal solutions to tasks; they only need to get close enough. For example, the robot may not need to travel the optimal distance to take out the trash but may only need to be within, say, 10% of the optimal distance.
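The three solution qualities (optimal, approximately optimal, satisficing) can be sketched on the trash-robot example; the route names, distances, and thresholds below are made up:

```python
# Hypothetical route distances for the trash robot; smaller is better.
routes = {"A": 100.0, "B": 104.0, "C": 150.0}

optimal = min(routes.values())  # the best measure of quality: 100.0

# Approximately optimal: within 10% of the optimal distance.
approx_optimal = [r for r, d in routes.items() if d <= 1.10 * optimal]

# Satisficing: any route under a fixed "good enough" threshold.
satisficing = [r for r, d in routes.items() if d <= 120.0]
```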

16
Q

What is a probable solution?

A

A probable solution is one that, even though it may not actually be a solution to the task, is likely to be a solution.

This is one way to approximate, in a precise manner, a satisficing solution. For example, in the case where the delivery robot could drop the trash or fail to pick it up when it attempts to, you may need the robot to be 80% sure that it has picked up three items of trash.

17
Q

What is a symbol?

A

A meaningful pattern that can be manipulated.

18
Q

What is a symbol system?

A

A symbol system creates, copies, modifies, and destroys symbols.

19
Q

What is a model?

A

A model of a world is a representation of an agent’s beliefs about what is true in the world or how the world changes.

20
Q

What is an abstraction?

A

All models are abstractions: they represent only part of the world and leave out many of the details.

An agent can have a very simplistic model of the world, or it can have a very detailed model of the world.

21
Q

What is a design space?

A

The dimensions of complexity define a design space for AI; different points in this space are obtained by varying the values on each dimension.

22
Q

What is the modularity dimension?

A

Modularity is the extent to which a system can be decomposed into interacting modules that can be understood separately.

23
Q

What is the planning dimension?

A

The planning dimension is how far ahead in time the agent plans.

For example, consider a dog as an agent. When a dog is called to come, it should turn around to start running in order to get a reward in the future. It does not act only to get an immediate reward. Plausibly, a dog does not act for goals arbitrarily far in the future (e.g., in a few months), whereas people do (e.g., working hard now to get a holiday next year).

24
Q

What is the representation dimension?

A

The representation dimension concerns how the world is described.

The different ways the world could be are called states. A state of the world specifies the agent’s internal state (its belief state) and the environment state.

25
Q

What is the computational limits dimension?

A

Sometimes an agent can decide on its best action quickly enough for it to act. Often there are computational resource limits that prevent an agent from carrying out the best action.

The computational limits dimension determines whether an agent has:

  • perfect rationality, where an agent reasons about the best action without taking into account its limited computational resources, or
  • bounded rationality, where an agent decides on the best action that it can find given its computational limitations.
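One common way to realize bounded rationality is to keep the best action found so far and stop when the computational budget runs out. A sketch under invented assumptions (the budget is a number of evaluations; the actions and values are made up):

```python
def bounded_choice(candidates, value, budget):
    # Evaluate candidates until the budget (number of evaluations the agent
    # can afford) is exhausted; return the best action found so far.
    best, best_value = None, float("-inf")
    for action in candidates[:budget]:
        v = value(action)
        if v > best_value:
            best, best_value = action, v
    return best

# With a budget covering all candidates this behaves like perfect rationality
# over the list; with a smaller budget it is bounded rationality.
actions = ["wait", "recharge", "deliver"]
values = {"wait": 0, "recharge": 3, "deliver": 9}
```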
26
Q

What is the learning dimension?

A

The learning dimension determines whether:

  • knowledge is given, or
  • knowledge is learned (from data or past experience).
27
Q

What is the uncertainty dimension?

A

An agent could assume there is no uncertainty, or it could take uncertainty in the domain into consideration.

Uncertainty is divided into two dimensions: one for uncertainty from sensing and one for uncertainty about the effects of actions.

The sensing uncertainty dimension concerns:
• Fully observable
• Partially observable

The dynamics in the effect uncertainty dimension can be:
• Deterministic - when the state resulting from an action is determined by the action and the prior state
• Stochastic - when there is only a probability distribution over the resulting states
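The two kinds of dynamics can be sketched as transition functions over a toy state space; the actions and probabilities below are made up:

```python
import random

# Deterministic: the resulting state is fixed by the action and prior state.
def step_deterministic(state, action):
    return state + 1 if action == "forward" else state

# Stochastic: only a probability distribution over resulting states.
def step_stochastic(state, action, rng):
    if action == "forward":
        # Made-up numbers: 80% chance the move succeeds, 20% the robot slips.
        return state + 1 if rng.random() < 0.8 else state
    return state

rng = random.Random(0)
next_state = step_stochastic(0, "forward", rng)
```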

28
Q

What is the preference dimension?

A

The preference dimension considers whether the agent has goals or richer preferences:

• A goal is either an achievement goal, which is a proposition to be true in some final state, or a maintenance goal, a proposition that must be true in all visited states.

For example, the goals for a robot may be to deliver a cup of coffee and a banana to Sam, and not to make a mess or hurt anyone.

• Complex preferences involve trade-offs among the desirability of various outcomes, perhaps at different times. An ordinal preference is where only the ordering of the preferences is important. A cardinal preference is where the magnitude of the values matters.

For example, an ordinal preference may be that Sam prefers cappuccino over black coffee and prefers black coffee over tea. A cardinal preference may give a trade-off between the wait time and the type of beverage, and a mess versus taste trade-off, where Sam is prepared to put up with more mess in the preparation of the coffee if the taste of the coffee is exceptionally good.
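An illustrative encoding of the two preference kinds; the ranking, weights, and utility function are invented, not from the text:

```python
# Ordinal: only the ordering of outcomes matters (Sam's beverage ranking).
ranking = ["cappuccino", "black coffee", "tea"]  # most to least preferred

def prefers(a, b):
    return ranking.index(a) < ranking.index(b)

# Cardinal: magnitudes matter, so trade-offs can be computed.
# Made-up weights for a taste-versus-mess trade-off.
def utility(taste, mess):
    return 2.0 * taste - 1.0 * mess
```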

29
Q

What is the number of agents dimension?

A

Taking the point of view of a single agent, the number of agents dimension considers whether the agent explicitly considers other agents:

  • Single agent reasoning means the agent assumes that there are no other agents in the environment or that all other agents are part of nature, and so are non-purposive. This is a reasonable assumption if there are no other agents or if the other agents are not going to change what they do based on the agent’s action.
  • Multiple agent reasoning (or multiagent reasoning) means the agent takes the reasoning of other agents into account. This occurs when there are other intelligent agents whose goals or preferences depend, in part, on what the agent does or if the agent must communicate with other agents.
30
Q

What is the interaction dimension?

A

The interaction dimension considers whether the agent does:

  • Offline reasoning where the agent determines what to do before interacting with the environment, or:
  • Online reasoning where the agent must determine what action to do while interacting in the environment, and needs to make timely decisions.
31
Q

Which structures can the agent have in modularity dimension?

A

Flat – there is no organizational structure

Modular – the system is decomposed into interacting modules that can be understood on their own

Hierarchical – the system is modular, and the modules themselves are decomposed into simpler modules, each of which is a hierarchical system or a simple component
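The three structures can be encoded as toy data; all module names below are invented for illustration:

```python
# Toy encodings of the three structures; every module name is hypothetical.
flat = ["sense_and_act"]                          # no organizational structure

modular = ["perception", "planning", "control"]   # interacting modules

hierarchical = {                                  # modules decomposed further
    "perception": {"vision": {}, "touch": {}},
    "planning": {"route_planner": {}, "scheduler": {}},
    "control": {},
}

def depth(modules):
    # A simple component has depth 1; a hierarchical system nests deeper.
    if not modules:
        return 1
    return 1 + max(depth(sub) for sub in modules.values())
```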