Final Exam Flashcards
What makes a robot?
The interactive aspect is important. Sense, think, act, and communicate.
Which kinds of robots are there?
Physical manipulators and Social manipulators.
Social manipulator
Social manipulators manipulate the social world. They communicate using the same interaction modalities used between people, and have limited mobility and manipulation capabilities.
Physical manipulator
Physical manipulators manipulate the physical world. They have good manipulation abilities for interacting with the world; they do not always have to be mobile.
Autonomy for living agents
The degree to which the agent determines its own goals.
Autonomy for robots
The degree to which there is no direct user control. Goals are pre-determined by programming.
Cognitive architecture
Embodiments of scientific hypotheses about aspects of human cognition that are relatively constant over time and independent of task.
Agents
A system that is situated in some environment and capable of autonomous action in that environment in order to meet its goals.
Cognitive model
Cognitive architecture + knowledge
Cycle in agents
perceive environment -> think -> act
Natural motivations in humans
Beliefs, Goals, and Intentions
Asimov’s 3 laws of robotics
- A robot may not injure a human or, through inaction, allow a human to come to harm.
- A robot must obey orders, except when they conflict with the 1st law.
- A robot must protect itself, as long as this doesn't conflict with the 1st and 2nd laws.
Intelligence
- autonomy = operate without human intervention and have some control over actions and internal state
- social ability = interact with other agents
- reactivity = perceive and respond to environment
- pro-activeness = exhibit goal-directed behavior
Intentional system (1st and 2nd order)
Behavior can be predicted by attribution of intentional notions/mental states.
A 1st-order intentional system has beliefs, desires, and rational acumen; a 2nd-order system additionally has beliefs and desires about the beliefs and desires of itself and other agents.
Symbolic reasoning agents
knowledge-based system, symbolic representation of world, decision via symbolic reasoning, behavior according to rules
Abstract architecture
For symbolic agents. Abstract representations of knowledge such as symbols, predicates or logical formulas. For example environments, actions and runs (alternating sequences of states and actions)
Planning
Reason about sequences of actions and possible outcomes.
Deductive agent
An agent that acts by deducing the appropriate action from logical formulas describing the current state and a set of rules.
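A minimal Python sketch of how such deduction could look (the facts, rules, and action names are made up for illustration): the first rule whose condition holds in the current beliefs supplies the action.

```python
# Sketch of a deductive agent: beliefs are a set of ground facts, and each
# rule maps a condition (a set of facts) to an action. Names are illustrative.

def deduce_action(beliefs, rules):
    """Return the action of the first rule whose condition holds in `beliefs`."""
    for condition, action in rules:
        if condition <= beliefs:          # condition is a subset of the current facts
            return action
    return "noop"

beliefs = {("dirt", "room_a"), ("robot_in", "room_a")}
rules = [
    ({("dirt", "room_a"), ("robot_in", "room_a")}, "vacuum"),
    ({("robot_in", "room_a")}, "move_to_room_b"),
]

print(deduce_action(beliefs, rules))      # -> "vacuum"
```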
3 problems with symbolic reasoning agents
Frame problem, Transduction problem and Representation/reasoning problem.
Frame problem
Figuring out which statements are necessary and sufficient to describe the environment for a symbolic agent.
Representation/reasoning problem
Figuring out how to symbolically represent info about complex world and processes (symbolic framework) and how to reason with it.
Transduction problem
Figuring out how to translate the real world into a symbolic description that is accurate and adequate.
Practical reasoning in agents
Process of figuring out what action to do
Rational agents
Committed to doing what they intend/plan, as long as it is feasible.
BDI architectures
Beliefs, Desires, Intentions controller. Associated with symbolic agents. Agents are modeled based on beliefs, desires, and intentions. Takes into account that everything has a time cost.
Deliberation
Deciding what state of affairs we want to achieve, which becomes intentions.
Two components of deliberation
Option generation based on current beliefs and intentions, and filtering to decide which options to commit to (these become the new intentions).
Means-ends reasoning
Deciding how to achieve intentions, and re-planning when needed.
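A minimal sketch of the BDI practical-reasoning loop built from these pieces; every function passed in is a placeholder for a component a concrete agent would supply.

```python
# Sketch of a BDI practical-reasoning loop: revise beliefs, deliberate
# (generate and filter options), do means-ends reasoning (plan), and execute
# while the plan still makes sense. All callables are placeholders.

def bdi_loop(beliefs, perceive, brf, options, filter_fn, plan, sound, execute):
    intentions = set()
    while True:
        beliefs = brf(beliefs, perceive())                    # belief revision
        desires = options(beliefs, intentions)                # option generation
        intentions = filter_fn(beliefs, desires, intentions)  # deliberation
        current_plan = plan(beliefs, intentions)              # means-ends reasoning
        while current_plan and sound(current_plan, intentions, beliefs):
            execute(current_plan.pop(0))
            beliefs = brf(beliefs, perceive())                # re-plan when no longer sound
```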
Desires
like goals (reason for doing things) and/or options for the agent.
Intentions
Desires to which the agent is committed.
What are the roles of intentions?
Drive means-ends reasoning, persist, constrain future deliberation, and influence beliefs.
Beliefs
Assumptions; the current state of the world according to the agent.
Intention-belief inconsistency
Having an intention that you believe will not be achieved.
Intention-belief incompleteness
Having an intention without believing that the necessary prerequisites will happen.
Blind commitment (intentions)
Continue to maintain an intention until it has been achieved.
Single-minded commitment (intentions)
Maintain intention until agent believes that either intention has been achieved or is no longer possible to achieve.
Reactive robots
Intelligent behavior emerges from the interaction of simpler behavior systems. Perception is critical; actions are decided very quickly based on percepts. No symbolic representation or reasoning.
Open-minded commitment (intentions)
Maintain an intention as long as it is still believed to be optimal.
Affordances
The ability to directly perceive the action possibilities that objects offer.
Ecological niche
Goals, world and sensorimotor possibilities
Reflexive behavior
Relevant in reactive robots. Reflexes, taxes, fixed-action patterns, and sequencing of innate behaviors.
Reflexes
Simple involuntary response to a specific event/stimulus, proportional to its duration and intensity (hardwired).
Taxes
Movement in relation to a stimulus at a particular orientation; navigation based on taxes.
Fixed-action patterns
Action sequence in a rigid order that continues until completion. Not the result of prior learning.
Sequencing of innate behaviors
Behavior coordination mechanisms through external environmental stimuli.
Concurrent behaviors + types
Multiple behaviors are active concurrently. Equilibrium, dominance or cancellation.
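A minimal sketch of concurrent behaviors combined by weighted vector summation; depending on the weights and outputs this yields dominance, equilibrium, or cancellation. Behavior names, weights, and percept keys are illustrative.

```python
# Sketch: each behavior proposes an (x, y) motion vector; the combined motion
# is the weighted sum. Opposing outputs cancel, a heavily weighted behavior
# dominates, and compatible outputs settle into an equilibrium.

def combine(behaviors, percept):
    vx = vy = 0.0
    for behavior, weight in behaviors:
        bx, by = behavior(percept)
        vx += weight * bx
        vy += weight * by
    return vx, vy

avoid_obstacle = lambda p: (-1.0, 0.0) if p["obstacle_ahead"] else (0.0, 0.0)
seek_goal      = lambda p: (1.0, 0.5)

print(combine([(avoid_obstacle, 2.0), (seek_goal, 1.0)],
              {"obstacle_ahead": True}))   # avoidance dominates: (-1.0, 0.5)
```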
Brooks’ key propositions for reactive robots
- Intelligence is emergent and can be generated.
- The world is its own best model; you have to sense it appropriately and often enough.
Subsumption architecture
Paradigm by Brooks for reactive robots. Layered control structure in which simpler behaviors have precedence over more complex ones (hierarchy).
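A minimal sketch of a subsumption-style controller with three made-up layers: layers are checked in priority order, and the first one that fires subsumes everything below it.

```python
# Sketch of subsumption: layers go from highest to lowest priority; the first
# layer whose trigger fires supplies the action. Layer names and percept keys
# are made up for illustration.

def subsumption_step(percept, layers):
    for triggered, action in layers:
        if triggered(percept):
            return action
    return "wander"                                   # default low-level behavior

layers = [
    (lambda p: p["bumper_pressed"], "reverse"),       # avoid collisions
    (lambda p: p["obstacle_near"], "turn_away"),      # avoid obstacles
    (lambda p: p["goal_visible"], "move_to_goal"),    # pursue goal
]

print(subsumption_step({"bumper_pressed": False,
                        "obstacle_near": True,
                        "goal_visible": True}, layers))   # -> "turn_away"
```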
Advantages of reactive agents
- iterative construction
- behaviors from reactive to pro-active
- simple rule-like behaviors
- only few hard-coded assumptions, good in dynamic environments
Disadvantages of reactive agents
- hard to engineer overall behavior that needs to emerge from simple behaviors
- they avoid internal model/symbolic representation (which are sometimes needed)
Hybrid agents
Combining symbolic and reactive agents with layered architecture.
2 systems of hybrid agents
- deliberative system = symbolic world model, develop plans and decisions
- reactive system = reacting to events without complex reasoning (has precedence)
PID controller
Proportional, Integral, Derivative. Tries to keep a certain variable at a set value.
Proportional control
K_P * e(t), where e(t) is the error signal.
Derivative control
Rate of change of the error: K_D * de(t)/dt.
Critical damping
Decrease error quickly, and correct it to the set point. Never oscillates.
Integral control
History of the error: K_I * ∫ e(t) dt, the integral of the error over time.
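A minimal sketch of a discrete PID controller combining the P, I, and D terms above; the gains, set point, and time step are illustrative.

```python
# Sketch of a discrete PID controller: output = K_P*e + K_I*∫e dt + K_D*de/dt.

class PID:
    def __init__(self, kp, ki, kd, set_point, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.set_point = set_point
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.set_point - measurement               # e(t)
        self.integral += error * self.dt                   # approximates ∫ e(t) dt
        derivative = (error - self.prev_error) / self.dt   # approximates de(t)/dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=1.0, ki=0.1, kd=0.05, set_point=10.0, dt=0.1)
print(pid.update(8.0))   # positive control signal, pushing the variable toward 10.0
```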
Kalman filter
Estimates the state of the system as precisely as possible.
Why is a Kalman filter needed?
It is a good way to obtain a reasonably good guess about the actual state of the system when all the information we have is noisy (as with the measurements used by a PID controller).
What assumption do we have while using a Kalman filter?
That the noise is Gaussian (normally) distributed.
How does Kalman filter work?
Combine the predicted (computed) state estimate with the measured state by multiplying their Gaussian distributions; the result is the new estimate, whose mean is a combination of the means of both estimates (weighted by their uncertainties).
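A minimal sketch of a 1-D Kalman filter, assuming Gaussian noise and a trivial motion model; the numbers are illustrative.

```python
# Sketch of a 1-D Kalman filter: predict with a simple motion model, then fuse
# the prediction with a noisy measurement by multiplying the two Gaussians.

def kalman_predict(mean, var, control, process_var):
    return mean + control, var + process_var

def kalman_update(pred_mean, pred_var, meas_mean, meas_var):
    k = pred_var / (pred_var + meas_var)            # Kalman gain
    new_mean = pred_mean + k * (meas_mean - pred_mean)
    new_var = (1 - k) * pred_var                    # fused estimate is less uncertain
    return new_mean, new_var

mean, var = 0.0, 1.0                                # initial estimate
mean, var = kalman_predict(mean, var, control=1.0, process_var=0.5)
mean, var = kalman_update(mean, var, meas_mean=1.2, meas_var=0.4)
print(mean, var)                                    # mean lies between prediction and measurement
```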
Kinematics
The study of how things move.
Forward kinematics
From control signal to position of the end effector.
From robot configuration in joint space to location in task space.
Inverse kinematics
From position of end effector to control signal.
From location in task space to configuration in joint space.
Task space
Frame of reference in the world, typically with Cartesian x, y, z coordinates.
Joint space
The state of a robot's joints, each expressed as an angle with respect to its own frame of reference.
Why is inverse kinematics much more complex than forward kinematics?
- Could be none or multiple solutions for the control signal needed
- non-linear, inverse trigonometry needed
- not all joints work fully in all configurations
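A minimal sketch for a 2-link planar arm (illustrative link lengths): forward kinematics maps joint angles to the end-effector position, and the inverse solution below uses inverse trigonometry and picks only one of the two possible elbow configurations.

```python
# Sketch of forward/inverse kinematics for a 2-link planar arm.
import math

L1, L2 = 1.0, 0.8   # link lengths (made up)

def forward(theta1, theta2):
    """Joint space -> task space: (x, y) of the end effector."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Task space -> joint space (one of the two solutions; may not exist)."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    theta2 = math.acos(c2)                          # ValueError if the target is unreachable
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

print(forward(*inverse(1.2, 0.6)))                  # ~ (1.2, 0.6)
```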
When forward kinematics is also challenging (non-rigid bodies), how do we learn the kinematics?
We learn them through motor babbling, demonstration or prediction.
Motor babbling
Learn a mapping of which commands cause which actions, by keeping track of the sensory consequences of the motor commands sent.
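A minimal sketch of motor babbling with a made-up one-dimensional "plant": issue random commands, record their sensory consequences, and later invert the mapping by nearest-neighbour lookup.

```python
# Sketch of motor babbling: build a table of (command, sensory consequence)
# pairs, then pick the command whose recorded outcome is closest to the goal.
import random

def plant(command):
    """Stand-in for the robot's unknown command -> outcome mapping."""
    return 2.0 * command + 0.3

experience = []
for _ in range(100):                        # babbling phase
    cmd = random.uniform(-1.0, 1.0)
    experience.append((cmd, plant(cmd)))

def command_for(desired_outcome):
    """Nearest-neighbour inverse lookup over the babbled experience."""
    return min(experience, key=lambda e: abs(e[1] - desired_outcome))[0]

print(command_for(1.0))                     # ~ 0.35, since plant(0.35) == 1.0
```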
Learning from demonstration
Demonstrate desirable movement to robot.
Learning to predict consequences of actions
Online/offline prediction: already knowing the response of an action given a percept.
Forward model
Allow you to predict outcomes of possible actions without carrying them out. (Good for planning)
Inverse model
Allow you to determine what actions will achieve a specific goal.
HAMMER
Hierarchical Attentive Multiple Models for Execution and Recognition: a robot control architecture with forward and inverse models.
Morphological computation
Computations performed by the body itself.
Artificial agents
Agents that do what they are programmed to do, but without constant remote control.
What are the benefits in adopting the intentional stance to artificial systems?
- Humans naturally attribute mental states to systems anyway
- Low level explanations are not enough for complex artificial agents acting in complex environments
- Makes sense for programmers to think of these notions to capture what intended behavior is
Physical stance
Explain behavior through laws of physics.
Design stance
Explain behavior through knowledge of purpose of the system.
Intentional stance
Explain behavior through terms of mental properties.
Environment in abstract architecture
Triplet: Env = (E, e0, t)
E = set of possible states
e0 = initial state
t = state transformer, maps a run ending in action to possible next states
Implementation of abstract architecture (5 steps)
- start in initial internal state
- observe environment -> percepts
- update internal state
- select appropriate action
- repeat
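A minimal sketch of those five steps as a loop; see/next_state/choose_action/act are placeholders for the functions a concrete agent would define.

```python
# Sketch of the abstract agent loop: observe, update internal state, act.

def run_agent(initial_state, see, next_state, choose_action, act, steps=10):
    state = initial_state                    # 1. start in the initial internal state
    for _ in range(steps):
        percept = see()                      # 2. observe the environment -> percept
        state = next_state(state, percept)   # 3. update the internal state
        action = choose_action(state)        # 4. select an appropriate action
        act(action)                          # 5. perform it, then repeat
```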
Synthesis problem
Given a task environment, automatically find an agent that can solve it. We want agents that perform well in a given environment.
Core problem symbolic agents
Rely on complete description of the world in some formal language.
Practical reasoning for humans
Deliberation and means-ends reasoning (= planning)
Strategies in intention reconsideration
Bold agents (never reconsider) and cautious agents (reconsider after every action).
Embodied cognition
Bodily interaction with the environment is primary to cognition.
IRM
Innate Releasing Mechanism = releaser of control signal that can be triggered.
Brooks’ reactive robots
Basic behaviors interact through inhibition and suppression. Reactive paradigm (sense -> act). No representational model of the world.
What is needed for a PID controller?
- set point: goal state of variable
- way of measuring the error
- way of reducing the error
D-term
Based on the D-term in a PID controller, you can predict what future values of the error might be if it keeps changing at its current rate.
Damping
Combine P-term with D-term to dampen influence of P-term.
Pro and con of P-gain
Pro: can make system more accurate and respond more rapidly
Con: can lead to oscillatory movement
Pro and con of D-gain
Pro: can reduce oscillation in system
Con: can slow response down
Pro and con I-gain
Pro: can eliminate constant errors
Con: will most likely destabilize the system
How does HAMMER work?
- inverse model receives information about current state + target goal
- outputs motor commands needed to achieve target goal
- forward model provides estimate of upcoming states after those commands
- the prediction error is fed back to the inverse model
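A minimal sketch of one such step; the inverse and forward models and the numeric state representation are placeholders.

```python
# Sketch of a HAMMER-style step: the inverse model proposes commands for the
# goal, the forward model predicts their outcome, and the prediction error is
# fed back.

def hammer_step(state, goal, inverse_model, forward_model, execute):
    command = inverse_model(state, goal)        # motor commands expected to reach the goal
    predicted = forward_model(state, command)   # estimate of the upcoming state
    actual = execute(command)                   # act and observe the resulting state
    prediction_error = actual - predicted       # returned to the inverse model
    return actual, prediction_error
```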