Exam May 2023 Flashcards

1
Q

What is the difference between imperative algorithms and a declarative approach?

A

Imperative Algorithms: achieve a result by specifying, step by step, how to get there.

Declarative Approach: describe the desired result without specifying the steps for getting there.
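
As a small illustration of the difference (my own Python sketch, not part of the card):

# Imperative: spell out HOW to compute the sum, step by step.
numbers = [1, 2, 3, 4]
total = 0
for n in numbers:
    total += n

# Declarative: state WHAT result is wanted and let the language handle the steps.
total = sum(numbers)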

2
Q

Give two reasons why the development of neural networks and deep learning has gained momentum in the last 10 years.

A
  1. The development of stronger CPUs - previously processors did not have enough power to process the amounts of data that deep learning and neural networks require.
  2. Access to Big Data - big data is now much easier to obtain, which makes it far easier to apply DL and NNs than it was before.
3
Q

Give a reason why the development of the perceptron stopped for several decades.

A

There weren’t enough uses for the perceptron; other types of AI were thought to be better suited to the data being handled at the time. In the early perceptrons you also could not change the weights on the nodes, which made them less useful.
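
For reference, a minimal Python sketch of a perceptron whose weights are adjusted by the classic update rule (my own illustration; adjustable weights are exactly what the answer above says the early versions lacked):

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 0, 0, 1])                        # AND is linearly separable
w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(20):                               # a few training epochs
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # The perceptron rule moves the weights; with fixed weights
        # the model could never improve on its mistakes.
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])   # [0, 0, 0, 1]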

4
Q

Give two examples of desirable properties of a knowledge representation system.

A
  1. Accessibility, so that the knowledge representation system is as useful as possible.
  2. Being well trained in its domain, so that it can categorize the information it holds as well as possible.
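
A toy Python sketch of what such a system might look like (my own example; the facts and categories are made up purely to illustrate accessibility and categorization):

# Facts grouped by category; a single lookup call keeps them accessible.
facts = {
    "bird":   {"canary", "penguin"},
    "mammal": {"dog", "whale"},
}

def category_of(entity):
    for category, members in facts.items():
        if entity in members:
            return category
    return None

print(category_of("whale"))   # mammal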
5
Q

Give a couple of reasons why AI research has experienced different “winters” historically. Do you think it will
happen again? Why/why not?

A

AI winters have historically occurred largely because of promises that could not be kept: people claimed and promised very big breakthroughs that did not materialize. Financial constraints and technical obstacles have also stopped development.

I do not think it will happen again any time soon, as development is moving at a very fast pace and we will quickly find new solutions to the problems that arise.

6
Q

Write down STRIPS actions required to solve the following problem - from initial state to goal state. In other words, write in STRIPS language the following: Initial, Goal, Actions, Path.

A

Initial: on(B,A), on(A,ground), CraneEmpty
Goal: on(A,ground), on(B,ground), CraneEmpty
Actions: move(crane), grasp(crane,X), loose(crane,X)
Path: move(crane), grasp(crane,B), move(crane), loose(crane,B)
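
A minimal Python sketch of the same plan written as STRIPS-style operators with preconditions, add lists and delete lists (my own encoding; the grasp/loose effects shown here are assumptions, only the predicate and action names come from the answer above):

# State is a set of ground facts.
initial = {"on(B,A)", "on(A,ground)", "CraneEmpty"}
goal    = {"on(A,ground)", "on(B,ground)", "CraneEmpty"}

# Each action maps to (preconditions, add list, delete list).
actions = {
    "move(crane)":    (set(), set(), set()),     # moving changes no predicates here
    "grasp(crane,B)": ({"CraneEmpty", "on(B,A)"},
                       {"holding(crane,B)"},
                       {"CraneEmpty", "on(B,A)"}),
    "loose(crane,B)": ({"holding(crane,B)"},
                       {"on(B,ground)", "CraneEmpty"},
                       {"holding(crane,B)"}),
}

def apply(state, name):
    pre, add, delete = actions[name]
    assert pre <= state, f"precondition failed for {name}"
    return (state - delete) | add

state = initial
for step in ["move(crane)", "grasp(crane,B)", "move(crane)", "loose(crane,B)"]:
    state = apply(state, step)

print(goal <= state)   # True: the path reaches the goal state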

7
Q

Explain the significance of initial weights and the number of nodes in hidden layers in the process of training Neural Networks (NNs).

A

The exact initial weights are not critically important; what matters is that they are kept within a specified range and can be adjusted, together with the biases, as training moves through the epochs. The number of nodes in the hidden layers matters for how much data the network can process, since parallel processing and backpropagation are used to locate where the "error" lies. With more nodes the network can handle more data and also transfer knowledge.
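
A small numpy sketch of the two points (my own example; the range [-0.1, 0.1] and the layer sizes are arbitrary choices, not from the card):

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 16, 1      # n_hidden sets how many hidden nodes the net has

# Small random initial weights in a fixed range, plus zero biases;
# backpropagation then adjusts both over the training epochs.
W1 = rng.uniform(-0.1, 0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.uniform(-0.1, 0.1, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(x @ W1 + b1)          # more hidden nodes -> more capacity
    return h @ W2 + b2

print(forward(np.ones(n_in)).shape)   # (1,)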

8
Q

Imagine you are tasked with designing an RL model for a robotic vacuum cleaner to clean a
room.
a. Define the reward structure you would design for the robot, along with the penalties to
discourage undesirable behavior.
b. Identify the relevant states for the RL model.
c. Discuss the possible actions that the robot can take to efficiently clean the room.

A

First of all, the robotic vacuum cleaner needs to learn about the environment it is in. In this case it is just one room.
a) The reward structure should be based on desirable behaviour: the robot needs to learn how big the room is and which areas of the room need to be cleaned. For example, when the robot has finished cleaning a specific part of the room it should move on to the next part; if that is done without problems, it can be rewarded for doing the same thing faster the next time, and it can also be rewarded by the user via a UI indicating whether the room was cleaned well or not. If the robot at any point crashes or moves into an object that blocks it, it should move back to the initial state in the room and redo the cleaning.
b) The relevant states should let the model evaluate every state it is in so that it can maximize the speed of cleaning the room. It needs to know where it is in the environment in a given state, so that it does not return to areas it has already covered, and where to move next.
c) Moving in all directions (left, right, forward, backward), and vacuuming when dirt is identified; a rough Q-learning sketch of this setup follows below.
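
A rough tabular Q-learning sketch for parts a-c (my own illustration; all reward values, the state encoding and the action names are assumptions, and the environment step loop is not shown):

import random
from collections import defaultdict

ACTIONS = ["left", "right", "forward", "backward", "vacuum"]
REWARDS = {"cleaned_dirt": +10, "collision": -5, "step": -1, "room_clean": +50}

Q = defaultdict(float)                 # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard tabular Q-learning update.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# A state could be (robot_position, frozenset_of_remaining_dirty_cells).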
