AI and intelligent systems Flashcards

1
Q

Different types of AI -

A

type 1: capabilities (narrow AI, general AI, strong AI)

type 2: functionality (reactive AI, limited memory, theory of mind, self-aware AI)

2
Q

Describe type 1 capabilities -

A

narrow AI → the most common and widely available AI; performs one dedicated task (Netflix and Spotify recommendation systems, a virtual assistant such as Alexa).

General AI → could perform any intellectual task as efficiently as a human; not available yet (e.g. a personal assistant that solves complex tasks and adapts without retraining).

Strong AI → could act better than a human: perform any task, think, reason and judge. It is a hypothetical concept; developing such a system would be a world-changing achievement (it could solve tasks humans cannot).

3
Q

Describe type 2 functionality -

A

reactive AI → the most basic AI system: it only reacts, does not store past experiences for future use, and focuses only on the current situation (e.g. the Netflix recommendation system).

Limited memory → can store past experiences for a limited period (self-driving cars observe the speed of other cars).

Theory of mind → AI that can understand emotions, people and beliefs and interact socially; not developed yet.

Self-aware AI → the future of AI: super-intelligent, with consciousness, sentiments and awareness; smarter than humans; does not exist yet.

4
Q

What is AI today -

A

the ability of a machine to show human-like capabilities such as reasoning, learning, planning and creativity.
Sense → acquire, recognize and analyze data.
Comprehend → understand information and turn it into insights.
Act → complete a task based on the insights derived.
Learn → be able to learn, act and adapt.

5
Q

What is learning -

A

knowledge acquisition through studying and memorizing; learning facts through observation, experience and exploration; and the development of skills through practice.

6
Q

Two types of learning/reasoning -

A

Deductive (deduce new/interesting rules/facts from already known rules/facts, general rule → specific example).

Inductive (learn new rules/facts from experience, specific example → general rule).

7
Q

PAC learning -

A

If a hypothesis works well on a large number of training and test examples, it is probably close to the truth. This idea is called Probably Approximately Correct (PAC) learning. It means that if a model performs well on enough data, it is unlikely to be very wrong.
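To make this concrete, here is the standard sample-complexity bound for a finite hypothesis class H (a textbook result added for illustration, not part of the original card): with probability at least 1 - δ, any hypothesis consistent with m training examples has error at most ε, provided

```latex
m \geq \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
```

A larger m tightens both the "approximately" (ε) and the "probably" (1 - δ) parts of the guarantee.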

8
Q

Trends in computing -

A
  • Ubiquity → Lower costs make processors common in many devices.
  • Interconnection → Most computers are now networked, often via the internet.
  • Intelligence → Computers handle increasingly complex tasks.
  • Delegation → More control is given to automated systems.
  • Human Orientation → Shift from machine-focused to user-friendly design.
9
Q

Properties of an intelligent agent -

A

reactivity (event-driven) → the ability to perceive the environment and react to changes in a timely fashion.

Proactive (autonomous) → goal-directed behavior; taking the initiative to act in order to achieve its design goals.

Social ability → communicate with others, e.g. to cooperate or reach agreements.

10
Q

Why is a balance between a reactive and proactive system important -

A

building a purely goal-directed or purely reactive system is typically not hard, but building a system that exhibits an effective balance between goal-directed and reactive behaviour can be difficult. We want to build systems that react to changes in the environment while still working systematically towards long-term goals.

11
Q

What is meant by social ability in computer systems? -

A

It is about performing meaningful interaction with other systems (and humans) via some communication language.

12
Q

What is an agent? -

A

An intelligent autonomous system is a computer system that can act on its own without constant instructions.
• It can adapt to different situations.
• It operates in unpredictable environments with other agents (systems or people).
• Instead of being told exactly what to do, it figures out the best actions to reach its goal.
• A multi-agent system is a group of these intelligent systems working together and interacting.

13
Q

Example of agents -

A

shopping and pricing comparison, game bots, robots, self-driving cars.

14
Q

What does environment mean in terms of agents? -

A

An agent is situated in some environment. Environments can be accessible or inaccessible, deterministic or non-deterministic, static or dynamic, discrete or continuous, and episodic or sequential. The decisions taken by an agent are typically based on incomplete information.

15
Q

Agent capabilities -

A
  • operate autonomously,
  • react to changes in the environment,
  • communicate with other agents,
  • learn, adapt to changes in the environment,
  • construct plans, reason, move between computers.
16
Q

What is an object -

A

encapsulates some state, communicates via message passing, and has methods corresponding to operations that may be performed on this state.

17
Q

Agents vs objects -

A

Agents are autonomous, meaning they act on their own. Unlike objects, they decide for themselves whether to follow a request from another agent.

Agents are smart: capable of flexible (reactive, proactive, social) behavior, which is not part of the standard object model.

Agents are active, meaning they work on their own without waiting for instructions. In a multi-agent system, each agent runs independently, like having its own task or process (similar to multi-threading).

18
Q

Rational agent -

A

A rational agent is something (like a robot, software, or even a person) that observes its environment and makes decisions to take actions.
• These actions change the environment, creating a chain of events.
• A performance measure checks how good or bad the results are.
• The agent tries to maximize success based on:
• Its goal (what it wants to achieve)
• What it already knows about the environment
• The actions it can take
• What it has observed so far

In simple terms, a rational agent always picks the best action it can, using the information it has, to achieve its goal.

19
Q

Task environment -

A

the “problems” to which rational agents are the “solutions.”

A task environment can be described using the PEAS abbreviation
- Performance
- Environment
- Actuators
- Sensors

When building an agent, it is useful to specify the task environment in as much detail as possible.

20
Q

When to use agents -

A

Agents are useful when:
• The environment is unpredictable or complex.
• Systems need to act independently and adapt.
• Problems involve multiple interacting parties.
• Data and expertise are decentralized.

21
Q

Types of agents -

A
  • Simple Reflex Agent: Acts based on IF-THEN rules (e.g., “If the traffic light is red, stop”).
    • Needs a fully observable environment (sees everything).
    • Doesn’t learn or adapt—just reacts.
  • Model-Based Reflex Agent: Works when the environment is partially observable (doesn’t see everything).
    • Uses a mental model to keep track of missing information.
      • Maintains an internal state to represent the current situation based on past perceptions.
      • More flexible than simple reflex agents.
  • Goal-Based Agent: Builds on model-based agents but focuses on reaching a specific goal.
    • Uses planning and searching to find the best path to success.
    • Thinks ahead instead of just reacting.
  • Utility-Based Agent: Similar to goal-based agents but doesn’t just aim for success—it aims for the best outcome.
    • Uses a utility function to measure how good each action is.
    • Helps when there are multiple choices and some are better than others.
  • Learning Agent: Learns from past experiences and adapts over time. Starts with basic knowledge and improves automatically. Has four key parts:
    • Learning Element: Learns from experiences.
    • Critic: Evaluates performance and gives feedback.
    • Performance Element: Chooses actions.
    • Problem Generator: Suggests new actions to try.
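The first entry above (simple reflex agent) can be sketched as a condition-action lookup; the traffic-light percepts, actions, and the default "wait" are made-up examples:

```python
# Sketch of a simple reflex agent: condition-action (IF-THEN) rules
# that map the current percept directly to an action. No memory,
# no learning; it only reacts to what it perceives right now.
RULES = {
    "red": "stop",
    "yellow": "slow down",
    "green": "go",
}

def simple_reflex_agent(percept):
    # Unknown percepts fall back to a safe default action.
    return RULES.get(percept, "wait")

print(simple_reflex_agent("red"))    # stop
print(simple_reflex_agent("blue"))   # wait
```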
22
Q

Search algorithm -

A

is an algorithm that takes a problem as input and returns a solution for solving that problem. In AI, search techniques are universal problem-solving methods.

Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to:
1. Solve a specific problem
2. Provide the best result.

23
Q

What three main factors can a search problem have? -

A

Searching is a step-by-step procedure for solving a search problem in a given search space.
1) Search Space: the set of possible solutions a system may have.
2) Start State: the state from which the agent begins the search.
3) Goal Test: a function which observes the current state and returns whether the goal state has been reached.

24
Q

Explain transition, search tree, action and path cost in search algorithms -

A

Transition: the act of moving between different states.

Search Tree: a tree representation of a search problem. The root of the search tree is the root node, which corresponds to the initial state.

Action: describes all the actions available to the agent.

Path cost: a function which assigns a numeric cost to each path.

25
Q

Properties of search algorithms -

A

Completeness: a search algorithm is complete if it guarantees to return a solution whenever at least one solution exists for any input.

Optimality: if the solution found is guaranteed to be the best (lowest path cost) among all solutions, it is called an optimal solution.

Time complexity: a measure of how long an algorithm takes to complete its task.

Space complexity: the maximum storage space required at any point during the search.
26
Q

Types of search algorithms -

A

Informed (heuristic) search → greedy search, A* search, graph search.

Uninformed (blind) search → depth-first search, breadth-first search, uniform cost search.
27
Q

Uninformed algorithms -

A

Use no extra goal information beyond the problem definition; they differ in the order and length of the actions tried, not in strategy, and can only generate successors and check whether a state is the goal.
Problem graph: contains the start node (S) and the goal node (G).
Strategy: defines how the algorithm decides which node to explore next.
Fringe: stores the possible next states.
Tree: formed while searching for the goal.
Solution plan: the path from S to G.
28
Q

Depth-first search -

A

Used for: searching trees or graphs.
How it works: starts at the root, explores one branch as far as possible, then backtracks.
Strategy: Last In, First Out (LIFO) → uses a stack.
Applications: 1. maze solving. 2. topological sorting. 3. finding connected components.
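A minimal DFS sketch with an explicit LIFO stack; the adjacency-dict graph and node names are made-up examples:

```python
# Depth-first search on a graph given as an adjacency dict.
def dfs(graph, start, goal):
    stack = [(start, [start])]          # LIFO stack of (node, path)
    visited = set()
    while stack:
        node, path = stack.pop()        # expand the most recently added node
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            stack.append((neighbor, path + [neighbor]))
    return None                         # goal not reachable

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"]}
print(dfs(graph, "S", "G"))
```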
29
Q

Breadth-first search -

A

Used for: searching trees or graphs.
How it works: explores all neighbors first before going deeper.
Strategy: First In, First Out (FIFO) → uses a queue.
Applications: 1. web crawling. 2. social network connectivity. 3. puzzle solving. 4. robotics.
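A minimal BFS sketch with a FIFO queue; the example graph is made up. Because BFS expands level by level, the returned path has the fewest edges:

```python
from collections import deque

# Breadth-first search on a graph given as an adjacency dict.
def bfs(graph, start, goal):
    queue = deque([(start, [start])])   # FIFO queue of (node, path)
    visited = {start}
    while queue:
        node, path = queue.popleft()    # expand the oldest node first
        if node == goal:
            return path                 # shortest path by edge count
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"]}
print(bfs(graph, "S", "G"))  # ['S', 'A', 'G']
```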
30
Q

Uniform cost search -

A

Unlike DFS and BFS, UCS considers a cost for each path.
Goal: find the path with the lowest total cost.
Cost of a node = the sum of all edge costs from the start node; the cost of the root node is 0 (starting point).
Uses a priority queue, always expanding the lowest-cost node first.
Applications: 1. supply chain optimization. 2. game playing.
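A UCS sketch using a priority queue (Python's heapq); the weighted example graph is made up:

```python
import heapq

# Uniform cost search; graph maps node -> [(neighbor, edge_cost)].
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]    # priority queue ordered by path cost
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # cheapest path so far
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            heapq.heappush(frontier,
                           (cost + edge_cost, neighbor, path + [neighbor]))
    return None

graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 1)], "B": [("G", 1)]}
print(ucs(graph, "S", "G"))  # (3, ['S', 'A', 'B', 'G'])
```

Note how UCS finds S → A → B → G (cost 3) even though the direct edge S → B exists, because it always expands the cheapest frontier entry first.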
31
Informed search (heuristic search) algorithms -
Uses heuristics to estimate closeness to the goal. More efficient than uninformed search. Helps solve complex problems faster. Heuristic Function – Estimates distance to the goal (e.g., Manhattan or Euclidean distance). (Lower value = Closer to goal).
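The two heuristics mentioned can be sketched as follows, assuming points are (x, y) grid coordinates:

```python
import math

# Manhattan distance: grid moves only (no diagonals).
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Euclidean distance: straight-line distance between the points.
def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```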
32
Greedy search -
Expands (pick) the node closest to the goal node, and the closeness is estimated by a heuristic. Heuristic (h(x)) estimates the distance from node x to the goal. Lower h(x) = Closer to the goal. Strategy: Always expand the node with the lowest h(x) first. Application: reach the destination in the shortest time.
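A greedy best-first search sketch: it orders the frontier purely by h(x), ignoring path cost. The graph and heuristic table h are made-up examples:

```python
import heapq

# Greedy best-first search: expand the node with the lowest h(x).
def greedy_search(graph, h, start, goal):
    frontier = [(h[start], start, [start])]   # ordered by heuristic only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 3, "A": 1, "B": 2, "G": 0}
print(greedy_search(graph, h, "S", "G"))  # ['S', 'A', 'G']
```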
33
A* Tree Search -
A* combines Uniform Cost Search (UCS) and Greedy Search. It uses two costs: g(x): The actual cost from the start node to the current node. h(x): The estimated distance from the current node to the goal (heuristic). f(x) = g(x) + h(x): Total estimated cost to reach the goal through the current node. Is optimal only when for all nodes, the forward cost underestimates the actual cost to reach the goal. Strategy: choose the node with the lowest f(x) value. Admissibility: property of A* heuristic. Must always be less than or equal to the real cost. Application: 1. video games.
34
A* graph -
A* tree rmight e-explored the same nodes multiple times, wasting time. A*graph fixes this by not expanding the same node twice. It ensures more efficient searching by avoiding repeated work. Consistency in graph search means the estimated cost from node A to node B (h(A) - h(B)) should not be greater than the actual cost to move between them (g(A -> B)). Formula: h(A) - h(B) ≤ g(A -> B). Consistency ensures a efficiently find the best path without unnecessary re-exploring nodes.
35
A*Tree vs A*Graph -
A* Tree Search: * It may revisit nodes like B or E from different paths. * Since it does not remember explored nodes, it expands paths unnecessarily. A* Graph Search: * Stores visited nodes so it doesn’t expand the same node again. * More efficient because it avoids redundant calculations.
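The A* cards above can be sketched as graph search, ordering the frontier by f(x) = g(x) + h(x) and keeping a set of expanded nodes so none is expanded twice. The example graph and (admissible) heuristic table are made up:

```python
import heapq

# A* graph search; graph maps node -> [(neighbor, edge_cost)].
def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]    # (f, g, node, path)
    expanded = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in expanded:
            continue                              # graph search: skip repeats
        expanded.add(node)
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            heapq.heappush(frontier,
                           (new_g + h[neighbor], new_g, neighbor,
                            path + [neighbor]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # (4, ['S', 'A', 'B', 'G'])
```

Here node B is reachable via both S → B and S → A → B; the expanded set ensures it is only expanded once, via the cheaper route.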
36
Q

Difference between informed and uninformed algorithms -

A

Informed: uses domain knowledge to guide the search; finds solutions more quickly; lower cost; takes less time.
Uninformed: does not use knowledge about the search space; finds solutions slowly; less efficient; higher cost; consumes more time.