AI and intelligent systems Flashcards
Different types of AI -
type 1: capabilities (narrow AI, general AI, strong AI)
type 2: functionality (reactive AI, limited memory, theory of mind, self-aware AI)
Describe type 1 capabilities -
narrow AI → the most common and available AI today; performs dedicated tasks (Netflix and Spotify recommendation systems, a virtual assistant such as Alexa).
General AI → could perform any intellectual task as efficiently as a human; not available yet (e.g. a personal assistant that solves complex tasks and adapts without retraining).
Strong AI → could act better than a human and perform any task: think, reason, judge. A hypothetical concept; developing such a system is a world-changing task (it could solve tasks humans cannot solve).
Describe type 2 functionality -
reactive AI → the most basic AI system; it only reacts, does not store past experiences for the future, and focuses only on the current situation (Netflix recommendation system).
Limited memory → can store past experiences for a limited period (self-driving cars observe other cars and their speeds).
Theory of mind → AI that can understand emotions, people and beliefs, and interact socially; not developed yet.
Self-aware AI → the future of AI: superintelligent, with consciousness, sentiments and awareness; smarter than humans; does not exist yet.
What is AI today -
the ability of a machine to show human-like capabilities such as reasoning, learning, planning and creativity.
Sense → acquire, recognize, analyze data.
Comprehend → understand and interpret information into insights.
Action → finish a task based on insights derived.
Learning → be able to learn, act and adapt.
What is learning -
knowledge acquisition through studying, memorizing, learning facts through observation, experience and exploration, development of skills through practice.
Two types of learning/reasoning -
Deductive (deduce new/interesting rules/facts from already known rules/facts, general rule → specific example).
Inductive (learn new rules/facts from experience, specific example → general rule).
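The two directions can be contrasted with a toy sketch (entirely illustrative; the names and data are made up, not from the source). Deduction applies a known general rule to a specific case; induction guesses a general rule from specific examples:

```python
# Deductive: apply a known general rule ("all humans are mortal") to a case.
def is_mortal(entity, known_humans):
    return entity in known_humans  # human -> mortal

# Inductive: infer a general rule from specific labelled examples.
def induce_threshold(examples):
    # examples: (value, label) pairs; guess the rule "label is True above some threshold"
    positives = [v for v, label in examples if label]
    return min(positives) if positives else float("inf")

known_humans = {"socrates", "plato"}
print(is_mortal("socrates", known_humans))  # deduction: True

data = [(1, False), (2, False), (5, True), (7, True)]
t = induce_threshold(data)
print(t)  # induced rule: "True when value >= 5"
```

Note that the induced rule is only a guess from the examples seen so far, while the deduced fact follows necessarily from the rule.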
PAC learning -
If a hypothesis works well on a large number of training and test examples, it is probably close to the truth. This idea is called Probably Approximately Correct (PAC) learning. It means that if a model performs well on enough data, it is unlikely to be very wrong.
Trends in computing -
- Ubiquity → Lower costs make processors common in many devices.
- Interconnection → Most computers are now networked, often via the internet.
- Intelligence → Computers handle increasingly complex tasks.
- Delegation → More control is given to automated systems.
- Human Orientation → Shift from machine-focused to user-friendly design.
Properties of an intelligent agent -
reactivity (event-driven) → the ability to perceive the environment and react to changes in a timely fashion.
Proactive (autonomous) → goal-directed behaviour: taking the initiative to act to achieve its design goals.
Social ability → communicate with others, e.g. to cooperate or reach agreements.
Why is a balance between a reactive and proactive system important -
building a purely goal-directed or purely reactive system is typically not hard, but building a system that exhibits an effective balance between goal-directed and reactive behaviour can be difficult. We want systems that react to changes in the environment while working systematically towards long-term goals.
What is meant by social ability in computer systems? -
It is about performing meaningful interaction with other systems (and humans) via some communication language.
What is an agent? -
An agent is an intelligent autonomous system: a computer system that can act on its own without constant instructions.
• It can adapt to different situations.
• It operates in unpredictable environments with other agents (systems or people).
• Instead of being told exactly what to do, it figures out the best actions to reach its goal.
• A multi-agent system is a group of these intelligent systems working together and interacting.
Example of agents -
shopping and pricing comparison, game bots, robots, self-driving cars.
What does environment mean in terms of agents? -
An agent is situated in some environment. Environments can be accessible or inaccessible, deterministic or non-deterministic, static or dynamic, discrete or continuous, episodic or sequential. The decision taken by an agent is typically based on incomplete information.
Agent capabilities -
- operate autonomously,
- react to changes in the environment,
- communicate with other agents,
- learn, adapt to changes in the environment,
- construct plans, reason, move between computers.
What is an object -
encapsulates some state, communicates via message passing, and has methods corresponding to operations that may be performed on this state.
Agents vs objects -
Agents are autonomous, meaning they act on their own. Unlike objects, they decide for themselves whether to follow a request from another agent.
Agents are smart: capable of flexible (reactive, proactive, social) behaviour, which is not included in the standard object model.
Agents are active, meaning they work on their own without waiting for instructions. In a multi-agent system, each agent runs independently, like having its own task or process (similar to multi-threading).
Rational agent -
A rational agent is something (like a robot, software, or even a person) that observes its environment and makes decisions to take actions.
• These actions change the environment, creating a chain of events.
• A performance measure checks how good or bad the results are.
• The agent tries to maximize success based on:
  • Its goal (what it wants to achieve)
  • What it already knows about the environment
  • The actions it can take
  • What it has observed so far
In simple terms, a rational agent always picks the best action it can, using the information it has, to achieve its goal.
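The "pick the best action given what it knows" idea can be sketched in a few lines (a hypothetical toy example; the thermostat actions and their values are invented for illustration):

```python
# A rational agent selects the action with the highest expected performance.
def choose_action(actions, expected_value):
    # expected_value(action) encodes the goal, prior knowledge, and observations
    return max(actions, key=expected_value)

# Toy example: a thermostat agent scoring its available actions.
values = {"heat": 0.2, "cool": 0.9, "idle": 0.5}
best = choose_action(values.keys(), values.get)
print(best)  # cool
```

Everything that makes the agent "rational" is hidden inside the expected-value function; the selection rule itself is just an argmax.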
Task environment -
the “problems” to which rational agents are the “solutions.”
A task environment can be described using the PEAS acronym:
- Performance
- Environment
- Actuators
- Sensors
When building an agent, it is useful to specify the task environment in as much detail as possible.
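As a concrete illustration, here is a PEAS description for the textbook self-driving-taxi example, written as a small data structure (the specific entries are illustrative choices, not from the source):

```python
# PEAS specification for a self-driving taxi (illustrative entries).
peas_taxi = {
    "Performance": ["safety", "speed", "legality", "passenger comfort"],
    "Environment": ["roads", "traffic", "pedestrians", "weather"],
    "Actuators": ["steering", "accelerator", "brake", "horn", "display"],
    "Sensors": ["cameras", "lidar", "speedometer", "GPS"],
}

for key, items in peas_taxi.items():
    print(f"{key}: {', '.join(items)}")
```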
When to use agents -
Agents are useful when:
• The environment is unpredictable or complex.
• Systems need to act independently and adapt.
• Problems involve multiple interacting parties.
• Data and expertise are decentralized.
Types of agents -
- Simple Reflex Agent: Acts based on IF-THEN rules (e.g., “If the traffic light is red, stop”).
- Needs a fully observable environment (sees everything).
- Doesn’t learn or adapt—just reacts.
- Model-Based Reflex Agent: Works when the environment is partially observable (doesn’t see everything).
- Uses a mental model to keep track of missing information.
- Maintains an internal state to represent the current situation based on past perceptions.
- More flexible than simple reflex agents.
- Goal-Based Agent: Builds on model-based agents but focuses on reaching a specific goal.
- Uses planning and searching to find the best path to success.
- Thinks ahead instead of just reacting.
- Utility-Based Agent: Similar to goal-based agents but doesn’t just aim for success—it aims for the best outcome.
- Uses a utility function to measure how good each action is.
- Helps when there are multiple choices and some are better than others.
- Learning Agent: Learns from past experiences and adapts over time. Starts with basic knowledge and improves automatically. Has four key parts:
- Learning Element: Learns from experiences.
- Critic: Evaluates performance and gives feedback.
- Performance Element: Chooses actions.
- Problem Generator: Suggests new actions to try.
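The first two agent types above can be sketched with the classic two-square vacuum-world example (a minimal illustration; the percept format and rules are assumptions for this sketch, not from the source):

```python
# Simple reflex agent: pure IF-THEN condition-action rules, no memory.
def simple_reflex_agent(percept):
    location, status = percept  # e.g. ("A", "dirty")
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

# Model-based reflex agent: keeps an internal state (which squares are clean).
class ModelBasedReflexAgent:
    def __init__(self):
        self.model = {}  # internal state built from past percepts

    def act(self, percept):
        location, status = percept
        self.model[location] = status  # update the model from the new percept
        if status == "dirty":
            return "suck"
        if len(self.model) == 2 and all(v == "clean" for v in self.model.values()):
            return "noop"  # the model says both squares are clean
        return "right" if location == "A" else "left"

print(simple_reflex_agent(("A", "dirty")))  # suck
agent = ModelBasedReflexAgent()
print(agent.act(("A", "clean")))  # right
print(agent.act(("B", "clean")))  # noop: both squares known clean
```

The difference shows in the last call: the simple reflex agent would keep bouncing between squares forever, while the model-based agent can stop because its internal state records information the current percept alone does not provide.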
Search algorithm -
is an algorithm that takes a problem as input and returns a solution for solving that problem. In AI, search techniques are universal problem-solving methods.
Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to:
1. Solve a specific problem
2. Provide the best result.
What three main factors can a search problem have? -
Searching is a step-by-step procedure to solve a search-problem in a given search space.
1) Search Space: It represents a set of possible solutions, which a system may have.
2) Start State: It is a state from where the agent begins the search.
3) Goal Test: It is a function which observes the current state and returns whether the goal state is achieved or not.
Explain transition, search tree, action and path cost in search algorithms -
Transition: the act of moving from one state to another.
Search Tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node which corresponds to the initial state.
Action: It describes all the available actions to the agent.
Path cost: It is a function which assigns a numeric cost to each path.
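These pieces fit together in any concrete search algorithm. A minimal sketch, assuming a made-up weighted graph, using breadth-first search (states, transitions with step costs, a goal test, and an accumulated path cost):

```python
from collections import deque

# Illustrative search space: state -> list of (next_state, step_cost).
graph = {
    "S": [("A", 2), ("B", 1)],
    "A": [("G", 4)],
    "B": [("C", 1)],
    "C": [("G", 1)],
    "G": [],
}

def bfs(start, goal):
    # frontier holds (state, path so far, path cost so far)
    frontier = deque([(start, [start], 0)])
    visited = {start}
    while frontier:
        state, path, cost = frontier.popleft()
        if state == goal:  # goal test
            return path, cost
        for nxt, step_cost in graph[state]:  # available actions / transitions
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt], cost + step_cost))
    return None, float("inf")

path, cost = bfs("S", "G")
print(path, cost)  # ['S', 'A', 'G'] 6
```

Note that BFS finds the path with the fewest steps, not the cheapest one: here S → B → C → G would cost only 3, which is why cost-sensitive problems use algorithms such as uniform-cost search instead.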