Introduction to Artificial Intelligence Flashcards
Thinking humanly
Cognitive science approach: build systems that think the way humans do, by modeling human cognition
Thinking rationally
Laws-of-thought approach: reasoning via formal logic and algorithms
Acting humanly
Passing the Turing test
Definition of intelligence
- Carry out abstract thinking
- Think rationally
- Play chess
- Score high on an IQ test
- Learn from past experiences
Reductionism
- Break down the problem into manageable pieces
- Study each piece separately
- Put the pieces together again to understand the whole
- TOP-DOWN perspective
Holism
Argues that reductionism isn’t possible (i.e. breaking the problem down) because the whole has important properties not present in any single part. These are called emergent properties.
Studies simple but complete systems. BOTTOM-UP perspective.
Analytical approach
- Reductionistic approach
- Primary method for empirical analysis
- Hypothesis
- Test
- Analyze results
Synthetic approach
Understand by building a model of the problem or structure. AI and cognitive science are mostly synthetic. Synthetic and analytical approaches are complementary, not contradictory.
Microworlds
A subset of the world for which it is easy to create a domain ontology
Agents
Perceive things and act. Humans use eyes, ears, touch, and more to perceive their environment and then act on it.
Percept sequence
Everything an agent has ever perceived.
Percept
What an agent perceives at a given moment
Rational agent
An agent that at all times performs the action with the highest expected outcome value, i.e. maximizes its performance measure; rationality thus depends on how that measure is defined, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
The rationality of an agent depends on:
- Possible actions
- Performance measure
- Percept sequence
- Built-in knowledge
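The four factors above can be made concrete with a minimal sketch of a reflex agent in the classic two-square vacuum-cleaner microworld. The function name, world layout, and scoring rule (+1 per clean square per time step) are my own illustrative assumptions, not a fixed definition:

```python
def reflex_vacuum_agent(percept):
    """Choose the action with the best expected outcome for this percept.

    percept is a (location, status) pair, e.g. ('A', 'Dirty').
    """
    location, status = percept
    if status == 'Dirty':
        return 'Suck'  # cleaning always raises the performance measure
    return 'Right' if location == 'A' else 'Left'

# Assumed performance measure: +1 point per clean square per time step.
world = {'A': 'Dirty', 'B': 'Dirty'}
location, score, percept_sequence = 'A', 0, []

for _ in range(4):
    percept = (location, world[location])
    percept_sequence.append(percept)  # everything perceived so far
    action = reflex_vacuum_agent(percept)
    if action == 'Suck':
        world[location] = 'Clean'
    elif action == 'Right':
        location = 'B'
    else:
        location = 'A'
    score += sum(1 for s in world.values() if s == 'Clean')

print(score, world)  # 6 {'A': 'Clean', 'B': 'Clean'}
```

Note how all four ingredients appear: the possible actions (Suck, Left, Right), the performance measure (the score), the percept sequence (the list of percepts), and the built-in knowledge (the agent's hard-coded rule).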
What is the Task environment?
PEAS (Performance, Environment, Actuators, Sensors)
Types of Task environments
- Partially vs. fully observable
- Static vs. dynamic
- Discrete vs. continuous
- Episodic vs. sequential
- Deterministic vs. stochastic
- Single vs. multi agent
- Known vs. unknown
Differences between fully and partially observable environments
Fully observable: The sensors detect all aspects relevant to the choice of action. Whether this holds depends on the sensors, the environment, and the performance measure.
Partially observable: Most real-life tasks
Unobservable: No sensors
Differences between static and dynamic environment
Dynamic: Can change as the agent makes a choice. Ex: cooking agent can burn the food while it cooks.
Semi-dynamic: The environment doesn’t change but the score does.
Static: No change in environment no matter the time it takes to complete the task. Ex: Sudoku-solver.
Discrete vs continuous environment
Discrete: Can be divided into a finite number of states.
Continuous: Cannot. There are several reasons for this:
1. The environment can change continuously (weather)
2. The agent can take input continuously (infra-red sensor), and/or
3. The agent acts in a continuous manner (driving)
Note: One can often model discrete states as continuous and vice versa.
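The note above can be illustrated with a tiny sketch that models a continuous quantity (temperature) as a finite set of discrete states by binning; the thresholds and state names are arbitrary assumptions chosen for illustration:

```python
def discretize(temp_celsius):
    """Map a continuous temperature reading onto a finite set of states.

    Thresholds are illustrative assumptions, not a standard.
    """
    if temp_celsius < 0:
        return 'freezing'
    elif temp_celsius < 15:
        return 'cold'
    elif temp_celsius < 25:
        return 'mild'
    return 'hot'

readings = [-3.2, 7.5, 18.0, 30.1]      # continuous sensor input
states = [discretize(t) for t in readings]  # discrete model of it
print(states)  # ['freezing', 'cold', 'mild', 'hot']
```

Going the other way (treating discrete states as continuous) is equally common, e.g. interpolating between discrete grid cells.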