325 Flashcards
What are Agents?
An agent is a system or entity that perceives its environment and takes actions to achieve specific goals. An agent can be a robot, software, or any other entity that is capable of receiving inputs from its environment through sensors, processing data, and producing an output.
An agent typically operates in a dynamic environment, able to process its environment internally and make decisions or take actions based on its goals or objectives. These actions may involve modifying the environment, interacting with other agents, or performing specific tasks.
Agents include humans, robots, softbots, thermostats, etc. The agent function maps from percept histories to actions: f : P* → A, where P* is the set of percept sequences and A is the set of actions.
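As a minimal sketch of the agent function, assuming string-valued percepts and actions (an illustrative choice, not part of the definition), a table-driven agent maps each percept history to an action:

```python
# A minimal sketch of the agent function f: P* -> A. Percept and action
# values here are hypothetical, chosen only for illustration.
from typing import Callable, List

Percept = str
Action = str
AgentFunction = Callable[[List[Percept]], Action]

def table_driven_agent(percept_history: List[Percept]) -> Action:
    # Map the full percept history to an action; unknown histories get a default.
    table = {
        ("dirty",): "suck",
        ("clean",): "move_right",
    }
    return table.get(tuple(percept_history), "no_op")

print(table_driven_agent(["dirty"]))  # -> suck
```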
What is a Reflex Agent?
Simple reflex agents make decisions based only on the information they receive from their sensors at any given moment. The agent does not consider the history of past percepts or the potential future consequences of its actions.
- The agent has sensors to perceive its environment. These sensors can be physical devices or inputs from a computer system.
- The agent follows a set of condition-action rules, known as production rules or IF-THEN rules. Each rule specifies a condition on the current percept; if the condition is met, the corresponding action is taken.
- The agent has actuators to perform actions in the environment based on the conditions met. Actuators can be physical devices such as motors, or software components that interact with the system.
They essentially react only to the current stimulus, without any consideration of previous stimuli.
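A minimal sketch of a simple reflex agent, assuming vacuum-world-style percepts and actions (hypothetical names for illustration):

```python
# A simple reflex agent: condition-action rules applied to the current
# percept only, with no memory of past percepts.
def simple_reflex_agent(percept: str) -> str:
    # Each rule: IF condition on the current percept THEN action.
    rules = [
        (lambda p: p == "dirty", "suck"),
        (lambda p: p == "clean", "move"),
    ]
    for condition, action in rules:
        if condition(percept):
            return action
    return "no_op"  # no rule matched

print(simple_reflex_agent("dirty"))  # -> suck
```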
What is a Model-Based Reflex Agent?
Model-based reflex agents consider the current percept as well as the internal state, which they update based on the history of percepts and actions. They maintain an internal model of the environment to make more informed decisions.
- Like simple reflex agents, model-based reflex agents make use of sensors to receive inputs from their environment, condition-action rules to determine what action to take, and actuators to perform actions in the environment.
- Additionally, a model-based reflex agent maintains an internal model of the environment. This model is an abstract representation of the world that captures relevant aspects, relationships and dynamics. This allows the agent to simulate or predict the consequences of its actions on the environment.
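A minimal sketch of the idea, assuming a toy internal state and illustrative percept names:

```python
# A model-based reflex agent: rules consult an internal state that is
# updated from the percept history, not just the raw current percept.
class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {"last_seen": None}  # internal model of the world

    def update_state(self, percept: str) -> None:
        # Fold the new percept into the internal model.
        self.state["last_seen"] = percept

    def act(self, percept: str) -> str:
        self.update_state(percept)
        # Rules may now consult the model as well as the percept.
        if self.state["last_seen"] == "obstacle":
            return "turn"
        return "forward"

agent = ModelBasedReflexAgent()
print(agent.act("obstacle"))  # -> turn
```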
What is a Goal-Based Agent?
Goal-based agents have explicit goals or objectives and take actions based on their current state and the desired goal state. They make decisions by considering the available actions and the expected outcome or utility of those actions.
- Like simple reflex agents, goal-based agents make use of sensors to receive inputs from their environment, condition-action rules to determine what action to take, and actuators to perform actions in the environment.
- The agent has explicit goals that define the desired state it aims to achieve.
- The agent possesses knowledge about its environment, available actions, and the potential consequences of its actions. It uses reasoning mechanisms, such as logical or probabilistic reasoning, to evaluate different actions and their likelihood of achieving the desired goals.
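A minimal sketch of goal-directed action selection, assuming a toy numeric state space and a hypothetical transition model:

```python
# A goal-based agent: predict the outcome of each available action and
# pick the one that moves closest to the goal state. The transition model
# here is an illustrative assumption.
def goal_based_agent(state: int, goal: int) -> str:
    # Predicted successor state for each available action.
    transitions = {"inc": state + 1, "dec": state - 1, "stay": state}
    # Choose the action whose predicted outcome is closest to the goal.
    return min(transitions, key=lambda a: abs(transitions[a] - goal))

print(goal_based_agent(state=3, goal=7))  # -> inc
```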
What are BDI Agents?
Belief-Desire-Intention agents model human-like reasoning and decision-making processes. BDI agents aim to capture the cognitive aspects of human behaviour by incorporating beliefs, desires and intentions as fundamental concepts.
- Beliefs represent the agent’s knowledge about the world. These beliefs can include facts about the environment, the agent’s internal state, the states of other agents, and other relevant information. Beliefs are typically represented as a set of propositions or statements.
- Desires reflect the agent’s goals; they represent what the agent wants to achieve or the states of the world it finds desirable. Desires can range from simple goals to complex preferences and can be hierarchical, with desires having subgoals and dependencies.
- Intentions represent the agent’s selected course of action to achieve its goals. Intentions are formed based on the agent’s beliefs and desires. An intention is a commitment to perform a specific action or set of actions and is influenced by the agent’s beliefs about the current state of its environment.
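A minimal sketch of one BDI deliberation step, assuming beliefs as a set of propositions and desires as dicts with preconditions, priorities, and plans (all names are illustrative, not a standard BDI API):

```python
# One BDI deliberation step: filter desires against current beliefs,
# then commit to the highest-priority achievable desire as an intention.
def bdi_step(beliefs: set, desires: list):
    # Deliberate: keep only desires whose preconditions hold in the beliefs.
    options = [d for d in desires if d["precondition"] <= beliefs]
    if not options:
        return None
    # Commit: the chosen desire's plan becomes the agent's intention.
    intention = max(options, key=lambda d: d["priority"])
    return intention["plan"]

beliefs = {"door_open", "battery_ok"}
desires = [
    {"precondition": {"door_open"}, "priority": 2, "plan": ["go_through_door"]},
    {"precondition": {"has_key"}, "priority": 5, "plan": ["unlock", "enter"]},
]
print(bdi_step(beliefs, desires))  # -> ['go_through_door']
```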
What are Utility Based Agents?
Utility-based agents assign utilities or values to different states and actions, enabling them to make decisions based on maximising expected utility. They consider not only the goal but also the potential outcomes and their desirability.
- Like simple reflex agents, utility-based agents make use of sensors to receive inputs from their environment, condition-action rules to determine what action to take, and actuators to perform actions in the environment.
- The agent has a utility function that quantifies the desirability associated with different states of the world. The utility function maps each state to a numerical value representing its utility.
- A utility-based agent may incorporate learning mechanisms to improve its decision-making over time. It can learn from the outcomes of its actions and adjust its utility function.
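A minimal sketch of maximising expected utility, with illustrative outcome distributions and utility values:

```python
# Utility-based action selection: compute the expected utility of each
# action over its possible outcomes, and pick the maximiser. The numbers
# below are illustrative assumptions.
def expected_utility(action, outcomes, utility):
    # outcomes[action] is a list of (state, probability) pairs.
    return sum(p * utility[s] for s, p in outcomes[action])

utility = {"safe": 10, "risky_win": 50, "risky_loss": -100}
outcomes = {
    "cautious": [("safe", 1.0)],
    "gamble": [("risky_win", 0.4), ("risky_loss", 0.6)],
}
best = max(outcomes, key=lambda a: expected_utility(a, outcomes, utility))
print(best)  # -> cautious (EU 10 vs 0.4*50 + 0.6*(-100) = -40)
```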
What are Learning Agents?
Learning agents have the ability to learn from their interactions with the environment. They can acquire knowledge, update their internal models, and improve their decision-making abilities over time.
- The critic component of the agent evaluates the performance of the agent by providing feedback on how well the agent is doing. The feedback is used by the learning element to update its knowledge and improve future decision-making.
- The learning element is responsible for acquiring new knowledge or skills based on the available feedback. It uses learning algorithms and techniques to analyse the data and update its internal representation or model of the environment.
- The performance element interacts with the environment through the actuators, takes actions, and makes decisions based on the acquired knowledge. The performance element can be guided by the learned knowledge to achieve its goals.
- The agent balances exploration and exploitation: gathering new information to improve its learned knowledge versus using that knowledge to achieve its goals more effectively.
- Learning agents typically learn through reinforcement learning, which involves receiving feedback based on the outcomes of their actions. The agent seeks to maximise cumulative feedback over time by adjusting its behaviour.
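A minimal sketch of the reinforcement-learning idea above, using a tabular Q-learning update (the states, actions, and parameter values are illustrative assumptions):

```python
# Tabular Q-learning update: feedback (reward) nudges the value the agent
# assigns to a state-action pair toward reward + discounted future value.
from collections import defaultdict

Q = defaultdict(float)          # Q[(state, action)] -> estimated value
alpha, gamma = 0.1, 0.9         # learning rate, discount factor

def q_update(state, action, reward, next_state, actions):
    # Move Q(s, a) toward reward + discounted best future value.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

q_update("s0", "right", reward=1.0, next_state="s1", actions=["left", "right"])
print(Q[("s0", "right")])  # -> 0.1
```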
What is an Expert System?
Expert systems are a type of artificial intelligence system that aims to emulate the knowledge and reasoning capabilities of human experts in a specific domain. They are designed to provide specialised knowledge in a particular field, allowing them to solve complex problems and make informed decisions.
What is a Knowledge Base in the context of an Expert System?
The knowledge base is a repository that stores domain-specific knowledge. It contains facts, rules, heuristics, and relationships relevant to the problem domain.
The knowledge base can be represented as a series of IF-THEN rules: IF condition A THEN conclusion B.
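A minimal sketch of a knowledge base as a list of IF-THEN rules, each a (conditions, conclusion) pair; the medical-style rule contents are illustrative:

```python
# A rule-based knowledge base: each rule pairs a set of conditions with a
# conclusion. The rules themselves are hypothetical examples.
knowledge_base = [
    ({"fever", "cough"}, "flu_suspected"),           # IF fever AND cough THEN flu_suspected
    ({"flu_suspected", "high_risk"}, "refer_to_gp"), # IF flu_suspected AND high_risk THEN refer_to_gp
]
for conditions, conclusion in knowledge_base:
    print(f"IF {' AND '.join(sorted(conditions))} THEN {conclusion}")
```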
What is an Inference Engine in the context of an Expert System?
The inference engine is the reasoning component of the expert system. It utilises the knowledge stored in the knowledge base to draw conclusions, make inferences, and answer queries or solve problems. It makes use of reasoning mechanisms, such as rule-based reasoning, logical reasoning, or probabilistic reasoning to process the available knowledge.
What is a User Interface in the context of an Expert System?
The user interface allows users to interact with the expert system, posing questions, providing input, and receiving responses. The interface can be text-based, graphical, or even voice-based, depending on the design and purpose of the system.
What are the features of Rules in an Expert System?
Modularity: each rule defines a relatively independent piece of knowledge
Incrementality: new rules added relatively independently of other rules
Modifiable and Transparent: can see what goes wrong or needs addition, has explicit rules
Can represent uncertainty
Chaining of rules models extensive reasoning
What is Forward Chaining in the context of an Expert System?
Forward chaining is a bottom-up approach where the inference engine starts with the available facts and applies rules to derive new conclusions. It works by matching the conditions of the rules with the known facts and inferring new facts and conclusions. This process continues until no more rules can be applied or until the goal is reached.
Example:
- Rule 1: IF A and B THEN C
- Rule 2: IF C THEN D
- Facts: A is true, B is true
1. The inference engine matches Rule 1’s conditions (A and B) with the known facts that A and B are true, and infers C as a new fact.
2. The inference engine matches Rule 2’s condition (C) with the newly derived fact that C is true and infers D as a new fact.
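A minimal sketch of forward chaining on the example above, with rules represented as (conditions, conclusion) pairs:

```python
# Forward chaining: repeatedly fire any rule whose conditions are all
# known facts, adding its conclusion, until nothing new can be derived.
def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires: infer a new fact
                changed = True
    return facts

rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(forward_chain(rules, {"A", "B"}))  # -> {'A', 'B', 'C', 'D'} (set order may vary)
```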
What is Backward Chaining in the context of an Expert System?
Backward chaining is a top-down approach where the inference engine starts with a goal and works backwards, finding rules and facts that support the goal. It recursively applies rules by matching the conclusions of the rules with the goal until it reaches a set of known facts.
Example:
- Rule 1: IF A and B THEN C
- Rule 2: IF C THEN D
- Facts: A is true, B is true
- Goal: D
1. The inference engine starts with goal D and searches for rules that have D as a conclusion.
2. It finds Rule 2 that has D as a conclusion, and checks if its conditions (C) can be satisfied.
3. It finds Rule 1 that has C as a conclusion, and checks if its conditions (A and B) can be satisfied.
4. It verifies whether the known facts satisfy the conditions of Rule 1.
5. If the conditions are satisfied, the inference engine concludes that D is true.
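A minimal sketch of backward chaining on the same rules: to prove a goal, find a rule concluding it and recursively prove that rule's conditions (this simple version assumes the rule set has no cycles):

```python
# Backward chaining: a goal is proved if it is a known fact, or if some
# rule concludes it and all of that rule's conditions can be proved.
def backward_chain(rules, facts, goal):
    if goal in facts:
        return True
    for conditions, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(rules, facts, c) for c in conditions
        ):
            return True
    return False

rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(backward_chain(rules, {"A", "B"}, "D"))  # -> True
```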
What are the different problem types?
- Deterministic, Fully Observable → single-state problem
  - Agent knows exactly which state it will be in; solution is a sequence
- Non-Observable → conformant problem
  - Agent may have no idea where it is; solution (if any) is a sequence
- Nondeterministic and/or Partially Observable → contingency problem
  - Percepts provide new information about the current state
  - Solution is a contingent plan or policy
  - Often interleaves search and execution
- Unknown State Space → exploration problem (“online”)
What is Graph Traversal?
Graph and tree search algorithms are fundamental techniques that problem-solving agents use to explore and navigate problem spaces to find solutions. Both systematically explore the search space by expanding nodes and traversing edges: the nodes represent states or configurations, and the edges represent actions or transitions between states.
What are 3 graph search algorithms?
Graph search algorithms operate on a graph structure. They move between states while maintaining a data structure of visited nodes, allowing them to handle repeated states and loops effectively. Common graph search algorithms include:
- A* Search
- Dijkstra’s Algorithm
- Greedy Best-First Search
A* considers both the cost to reach a state and an estimated heuristic value to guide the search towards the goal; Dijkstra’s algorithm uses only the accumulated path cost, while greedy best-first search uses only the heuristic.
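A minimal sketch of A* graph search, assuming a dict-of-dicts weighted graph and a heuristic function (both illustrative); the visited set handles repeated states as described above:

```python
# A* graph search: expand the node with the lowest f = g + h, where g is
# the cost so far and h is the heuristic estimate to the goal.
import heapq

def a_star(graph, start, goal, h):
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)  # closed set: skip repeated states
        for nbr, cost in graph.get(node, {}).items():
            if nbr not in visited:
                heapq.heappush(frontier, (g + cost + h(nbr), g + cost, nbr, path + [nbr]))
    return None, float("inf")

graph = {"S": {"A": 1, "B": 4}, "A": {"G": 5}, "B": {"G": 1}}
print(a_star(graph, "S", "G", h=lambda n: 0))  # -> (['S', 'B', 'G'], 5)
```

With h = 0, as in this toy call, A* reduces to Dijkstra’s algorithm; a nonzero admissible heuristic simply focuses the search toward the goal.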
What are 3 tree search algorithms?
Tree search algorithms operate on tree structures. They explore the search space by expanding child nodes and systematically searching the tree until a goal state is reached.
Common tree search algorithms include:
- Depth-First Search (DFS)
- Breadth-First Search (BFS)
- Uniform-Cost Search (UCS)
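A minimal sketch of breadth-first tree search on an explicit tree, with children stored in a dict (names illustrative): nodes are expanded level by level until the goal is found:

```python
# Breadth-first tree search: a FIFO queue of paths ensures shallower
# nodes are expanded before deeper ones.
from collections import deque

def bfs(tree, start, goal):
    frontier = deque([[start]])  # queue of paths from the root
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in tree.get(node, []):
            frontier.append(path + [child])
    return None

tree = {"root": ["a", "b"], "a": ["c"], "b": ["d", "goal"]}
print(bfs(tree, "root", "goal"))  # -> ['root', 'b', 'goal']
```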
How do we measure problem solving performance?
- Completeness: Is the algorithm guaranteed to find a solution?
- Optimality: Does the strategy find the optimal solution?
- Time Complexity: How long does it take to find a solution?
- Space Complexity: How much memory is needed to perform the search?
Time and space complexity are measured in terms of:
- b – maximum branching factor of the search tree
- d – depth of the least-cost solution
- m – maximum depth of the state space (can be infinite)
Measure Breadth-First Search
Completeness - Yes (if b is finite)
Optimality - Yes (if cost = 1 per step); not optimal in general
Time Complexity - O(b^(d+1)), exponential in d
Space Complexity - O(b^(d+1)), keeps every node in memory