Robotics AI Techniques Flashcards
Master AI techniques used in robotics: PID, SLAM, Kalman filters, etc.
When calculating the posterior probability after sensing in the Histogram Filter, do you take a product or a convolution?
To get the posterior probability after the robot senses, we take a product. We incorporate the measurement by multiplying each cell's prior belief by a likelihood weight, assigning higher weight to cells that agree with the actual measurement, and then normalizing.
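A minimal sketch of this sense (measurement) update over a 1-D grid of colored cells; the world, the measurement, and the p_hit/p_miss sensor weights are illustrative assumptions, not values from the card:

```python
def sense(prior, world, measurement, p_hit=0.6, p_miss=0.2):
    """Measurement update: multiply each prior cell by how well it matches
    the measurement (a product, not a convolution), then normalize."""
    posterior = [belief * (p_hit if cell == measurement else p_miss)
                 for belief, cell in zip(prior, world)]
    total = sum(posterior)
    return [p / total for p in posterior]

world = ['green', 'red', 'red', 'green', 'green']
prior = [0.2] * 5                      # uniform prior belief
print(sense(prior, world, 'red'))      # probability mass concentrates on the red cells
```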
When calculating the probabilities after moving in the Histogram Filter, do you take a product or a convolution?
We take a convolution, which is a weighted sum: the new probability of each cell is the sum of the prior probabilities of the cells the robot could have moved from, weighted by the motion probabilities.
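A sketch of this motion update as a convolution over a cyclic 1-D grid; the exact/undershoot/overshoot motion probabilities are illustrative assumptions:

```python
def move(belief, step, p_exact=0.8, p_undershoot=0.1, p_overshoot=0.1):
    """Motion update: each new cell is a weighted sum (convolution) of the
    prior cells the robot could have come from."""
    n = len(belief)
    new_belief = []
    for i in range(n):
        prob = p_exact * belief[(i - step) % n]
        prob += p_overshoot * belief[(i - step - 1) % n]
        prob += p_undershoot * belief[(i - step + 1) % n]
        new_belief.append(prob)
    return new_belief

print(move([0, 1.0, 0, 0, 0], 1))   # belief spreads out: motion adds uncertainty
```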
Briefly explain what happens to the information about the world when a robot moves and when it senses
When a robot moves, it loses information, because robot motion is inaccurate. When it senses, it gains information, because its beliefs are updated with the measurements it takes from the world.
How do motion and sensing affect entropy?
Motion makes the entropy go up and sensing makes it go down.
What is entropy?
It is a measure of the randomness or uncertainty in the robot's belief about the world, given as the expected value of the negative log probability:
-Σ p(x) log p(x)
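A small sketch computing this entropy, illustrating that a spread-out belief (as after motion) has higher entropy than a peaked belief (as after sensing); the example distributions are assumptions for illustration:

```python
from math import log

def entropy(belief):
    """Entropy H = -sum over x of p(x) * log p(x); higher means more uncertainty."""
    return -sum(p * log(p) for p in belief if p > 0)

print(entropy([0.2, 0.2, 0.2, 0.2, 0.2]))       # uniform belief: maximum entropy
print(entropy([0.05, 0.05, 0.8, 0.05, 0.05]))   # peaked belief after sensing: lower entropy
```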
What is the Kalman Filter and how does it differ from Monte Carlo Localization?
The Kalman Filter is a state estimation technique that estimates a continuous state, while Monte Carlo Localization estimates a discrete state.
The Kalman Filter is unimodal: it can only track a single state or belief about the world at a time.
Monte Carlo Localization represents a multimodal distribution, so it can track several hypotheses at once.
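A minimal 1-D Kalman filter sketch showing the unimodal belief (a single mean and variance), in contrast to the many samples Monte Carlo Localization would keep; the measurements and noise values below are illustrative assumptions:

```python
def kf_update(mean, var, z, r):
    """Measurement update: fuse the Gaussian belief with measurement z (variance r)."""
    new_mean = (var * z + r * mean) / (var + r)
    new_var = 1.0 / (1.0 / var + 1.0 / r)
    return new_mean, new_var

def kf_predict(mean, var, u, q):
    """Motion update: shift the belief by control u and add process noise q."""
    return mean + u, var + q

mean, var = 0.0, 1000.0          # very uncertain initial belief
for z in [5.0, 6.0, 7.0]:        # sequence of measurements
    mean, var = kf_update(mean, var, z, r=4.0)
    mean, var = kf_predict(mean, var, u=1.0, q=2.0)
print(mean, var)                 # belief is always a single Gaussian (unimodal)
```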
What is the intuition behind the values of the Q and R matrices in the Kalman filter
The Q matrix is the process (system) noise covariance. It reflects how confident we are in the state-transition (dynamics) model of the system. Higher values of Q mean we are less certain about the model, so the filter puts more weight on the measurements.
The R matrix is the measurement noise covariance; it represents how confident we are in our measurements. Low values mean we are more certain about the measurements, so the filter trusts them more and converges faster. High values lead to smoother estimates but make the filter converge more slowly.
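A sketch of where Q and R enter the standard predict/update equations, written with NumPy; the specific model below (a 1-D constant-velocity system with position-only measurements) is an illustrative assumption:

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])     # state transition (position, velocity)
H = np.array([[1, 0]])              # we only measure position
Q = np.eye(2) * 0.1                 # process noise: larger -> trust the model less
R = np.array([[2.0]])               # measurement noise: larger -> trust measurements less

x = np.array([[0.0], [1.0]])        # initial state estimate
P = np.eye(2) * 500.0               # initial state covariance

for z in [1.2, 2.1, 2.9]:           # position measurements
    # Predict: Q inflates the covariance, reflecting model uncertainty
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: R scales how strongly the measurement pulls the estimate
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())                    # estimated position and velocity
```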