AI Flashcards
LLMs
Large Language Models
AGI
Artificial General Intelligence
AI winter: Thoughts?
A prolonged period of reduced funding and interest in AI research (a "dry spell")
Name reasons why AI probably won't replace everything
AI is still not suitable to replace humans for a number of tasks,
primarily because trust is a major issue with current AI systems and
those of the foreseeable future
Science fiction widely argues that AI is dangerous, although there are
some examples of benevolent AI
Can AI systems actually be fair, just, and ethical, or will they simply
appear to be?
- Definitions of those terms vary from person to person
Still an open problem
AI vs Game AI
AI in general has more to do with knowledge
representation and taking reasonable actions based on
available data. In AI, “Available Data” tends to be limited
to sensory input and previously learned or experienced
examples.
In Game AI, we can break a lot of rules regarding what
“intelligence” is and give lots more information about the
world to agents than they normally would have based on
sensory inputs alone.
Still, there can be lots of overlap between the two.
Is AI a subset of machine learning?
No, machine learning is a subset of AI!
Define Machine Learning
In machine learning the general goal is to find values that
separate different classes of data or produce an accurate prediction
based on some data
In Deep Learning we use massive datasets to train complex neural
networks to output text or recognize objects
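The "find values that separate different classes of data" idea can be sketched in a few lines; the numbers and the midpoint-threshold rule below are invented purely for illustration, not a real training algorithm:

```python
# Minimal sketch of the machine-learning goal described above:
# find a single value (a threshold) that separates two classes
# of 1-D data. All numbers are made-up illustrative data.

def fit_threshold(class_a, class_b):
    """Place a decision boundary midway between the two class means."""
    mean_a = sum(class_a) / len(class_a)
    mean_b = sum(class_b) / len(class_b)
    return (mean_a + mean_b) / 2

def predict(x, threshold):
    """Classify a point as 'A' or 'B' using the learned threshold."""
    return "A" if x < threshold else "B"

# Example: class A clusters near 1.0, class B near 5.0
threshold = fit_threshold([0.8, 1.1, 1.3], [4.7, 5.0, 5.4])
print(predict(2.0, threshold))  # A
print(predict(4.0, threshold))  # B
```

Deep learning follows the same goal, just with millions of learned values (weights) instead of one threshold.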
The Turing test
A test to see if an AI can successfully pretend to be a human
Rational Agents
Artificial intelligence is the synthesis and analysis of
computational agents that act intelligently.
An agent is something that acts in an environment.
An agent acts intelligently if:
its actions are appropriate for its goals and
circumstances
it is flexible to changing environments and goals
it learns from experience
it makes appropriate choices given perceptual and
computational limitations
Provide some examples of rational Agents
Organizations: Microsoft, European Union, Real Madrid FC,
an ant colony, …
People: teacher, physician, stock trader, engineer, researcher,
travel agent, farmer, waiter, …
Computers/devices: thermostat, user interface, airplane
controller, network controller, game, advising system, tutoring
system, diagnostic assistant, robot, Google car, Mars rover, …
Animals: dog, mouse, bird, insect, worm, bacterium, …
book(?), sentence(?), word(?), letter(?)
Can a book or article do things?
Convince? Argue? Inspire? Cause people to act differently?
List the scientific and engineering goals behind rational agents
Scientific goal: to understand the principles that make
intelligent behavior possible in natural or artificial systems.
analyze natural and artificial agents
formulate and test hypotheses about what it takes to construct
intelligent agents
design, build, and experiment with computational systems that
perform tasks that require intelligence
Engineering goal: design useful, intelligent artifacts.
Analogy between studying flying machines and thinking
machines.
What are the inputs of an agent?
Goals/preferences, prior knowledge, stimuli, and past experiences
What are its outputs?
Actions, performed through its abilities
Break down the following agent, Self driving car:
Abilities:
Goals:
Prior Knowledge:
Stimuli:
Experiences:
abilities: steer, accelerate, brake
goals/preferences: safety, get to destination,
timeliness, …
prior knowledge: street maps, what signs mean,
what to stop for . . .
stimuli: vision, laser, GPS, voice commands. . .
past experiences: how braking and steering affects
direction and speed. . .
Risks of AI
- Lethal autonomous weapons
- Surveillance and persuasion
- Biased decision making
- Impact on employment
- Safety-critical applications
- Cybersecurity threats
Benefits of AI
- Decrease repetitive work
- Increase production of goods and services
- Accelerate scientific research (cures for diseases, solutions to
climate change and resource shortages)
Define the environment in terms of agents
The environment could be everything
(the entire universe!)
In practice, it is just that part of the universe whose
state we care about when designing this agent—the
part that affects what the agent perceives and that is
affected by the agent’s actions
percept
The content an agent's sensors are perceiving at a given moment
percept sequence
percept sequence is the complete history of everything the agent
has ever perceived
- The agent function maps this sequence to an action
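The agent-function idea can be sketched with the classic two-square vacuum world (a standard textbook example, squares "A" and "B"). A full agent function maps the entire percept sequence to an action; this simple reflex version looks only at the latest percept:

```python
# Simple reflex agent for the two-square vacuum world.
# A percept is a (location, dirty) pair; the agent function
# returns one of three actions: "Suck", "Right", or "Left".

def reflex_vacuum_agent(percept):
    location, dirty = percept
    if dirty:
        return "Suck"          # clean the current square first
    # otherwise move to the other square to check it
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", True)))   # Suck
print(reflex_vacuum_agent(("A", False)))  # Right
print(reflex_vacuum_agent(("B", False)))  # Left
```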
Agent percepts
Information provided by the environment to the agent
Actuators
Act for the agent, to perform actions on the environment
Rationality
Humans have preferences; rationality has to do with success in choosing
actions that result in a desirable environment state
- Point of view
Machines don’t have preferences or aspirations by default
- performance measure is up to the designer
- goals can be explicit and understood
- but sometimes perhaps not
Sometimes a performance measure is unclear
Consider aspects of a vacuum cleaner agent:
- do a mediocre job continuously, or clean thoroughly but need long charge times?
What is rational depends on four things:
The performance measure that defines the criterion of success.
The agent’s prior knowledge of the environment.
The actions that the agent can perform.
The agent’s percept sequence to date.
Performance Measure:
Fixed performance measure evaluates the environment
– one point per square cleaned up in time T?
– one point per clean square per time step, minus one per move?
– penalize for > k dirty squares?
A rational agent chooses whichever action maximizes the expected value of
the performance measure given the percept sequence to date
Rational ≠ omniscient
– percepts may not supply all relevant information
Rational ≠ clairvoyant
– action outcomes may not be as expected
Hence, rational ≠ successful
Rational ⇒ exploration, learning, autonomy
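Choosing the action that maximizes the expected value of the performance measure can be sketched directly; the actions, probabilities, and scores below are made up for illustration:

```python
# Sketch of rational action selection: pick the action whose
# expected performance score is highest, given what the agent
# believes about outcome probabilities.

def expected_value(outcomes):
    """outcomes: list of (probability, performance_score) pairs."""
    return sum(p * score for p, score in outcomes)

def rational_choice(actions):
    """actions: dict mapping action name -> list of (prob, score)."""
    return max(actions, key=lambda a: expected_value(actions[a]))

actions = {
    # Moving might find dirt (score 10) or waste a move (score -1)
    "Move": [(0.4, 10), (0.6, -1)],
    # Sucking cleans for sure, but the square may already be clean
    "Suck": [(0.7, 5), (0.3, 0)],
    "NoOp": [(1.0, 0)],
}
print(rational_choice(actions))  # Suck  (EV 3.5 beats Move's 3.4)
```

Note the agent maximizes *expected* performance, not actual performance, which is exactly the rational-but-not-successful distinction above.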
Rationality & Omniscience
Game AI tends to be more omniscient than the realistic take on AI
Game AI often knows the outcome of its actions and potentially how they
map onto environment states
The reality with AI and Game AI is that sometimes you don't know whether
something is bad, or whether a bad event might occur
Book example: you walk across a clear street to meet a friend, and a door falls on you from a passing airplane
- you didn't make a bad decision here; it was just unfortunate
The inverse, from GTA: a taxi drives off a tower
- the taxi AI simply has no clue that driving off a tower is dangerous
The result, getting closer to the destination, is rational, at least from the
limited view of the environment
Trade off between actual and expected performance
To improve actual performance, the agent in the book example should look
up before crossing, and the GTA taxi NPC should check whether it is high
up before driving
This process is called information gathering
- doing actions in order to modify future percepts
Gather information and learn when possible
What is PEAS?
Performance measure, Environment, Actuators, Sensors: a way to specify
the task environment, illustrated here for an automated taxi
Performance measure?? safety, destination, profits, legality, comfort, …
Environment?? US streets/freeways, traffic, pedestrians, weather, …
Actuators?? steering, accelerator, brake, horn, speaker/display, …
Sensors?? video, accelerometers, gauges, engine sensors, keyboard, GPS, …
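The PEAS description can be recorded as plain structured data; the sketch below just restates the automated-taxi example from the card above:

```python
# PEAS as a small data structure: one field per component of the
# task-environment specification, filled in for the automated taxi.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance_measure=["safety", "destination", "profits", "legality", "comfort"],
    environment=["US streets/freeways", "traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "accelerometers", "gauges", "engine sensors", "GPS"],
)
print(taxi.sensors[-1])  # GPS
```

Writing the four lists out like this is a useful first step when designing any agent, since the rest of the design (observability, determinism, etc.) follows from the environment entry.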
Observable
Fully-observable – sensors give it access to
complete state of environment
Partially-observable – sensors give it access to
some of the environment state
If the agent has no sensors, the environment is
unobservable (not hopeless though)
Deterministic??
If the next state of the environment is completely
determined by the current state and the action
executed by the agent, it is deterministic
Otherwise, non-deterministic
Most real situations are so complex that it is not possible
to keep track of unobserved aspects, so we treat them as non-deterministic
Episodic??
In an episodic task environment, the agent's
experience is divided into atomic episodes. In each
episode the agent receives a percept and then
performs a single action (e.g., a robot spotting
defective parts on an assembly line)
- the next episode does not depend on the actions
taken in previous episodes
Sequential:
current decision could affect all future decisions