Lecture 27 Flashcards
How can we reduce the errors in laser-sensor readings caused by occlusion and distance?
Identify different scanned versions of the environment, find the parts the scans have in common, and merge them. A single scan always suffers from occlusion, so merging several scans resolves the occluded regions.
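The merging idea above can be sketched very simply: if two scans of the same scene are already aligned, a cell occluded in one scan may still be observed in the other. This is a minimal illustration with hypothetical data (1 = obstacle, 0 = free, None = occluded), not a real scan-matching pipeline.

```python
def merge_scans(scan_a, scan_b):
    """Merge two aligned scans; fill each scan's occluded cells from the other."""
    merged = []
    for a, b in zip(scan_a, scan_b):
        if a is None:          # occluded in scan A -> trust scan B
            merged.append(b)
        elif b is None:        # occluded in scan B -> trust scan A
            merged.append(a)
        else:                  # observed in both -> keep obstacle if either saw one
            merged.append(max(a, b))
    return merged

scan_a = [1, 0, None, None, 0]   # occlusion in the middle
scan_b = [1, None, 0, 1, 0]      # occlusion elsewhere
print(merge_scans(scan_a, scan_b))  # -> [1, 0, 0, 1, 0]
```

Real systems first align the scans (e.g. with scan matching) and fuse them probabilistically, which is exactly what the next card is about.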
What are probabilistic techniques and probabilistic robotics?
We don't have to interpret a single scan; we can merge several perceptions. This unified perception is built with probabilistic techniques, and robotics based on them is called probabilistic robotics.
Example of a laser range finder?
SICK: excellent accuracy and repeatability, good reliability. Laser range finders are found on many robots and measure distances and angles with high accuracy.
What is a rotating laser scanner, and how does it differ from a laser range finder?
There is no rotating mirror anymore; the infrared emitter and receiver themselves spin (this is the sensor used in robot vacuum cleaners). Like a 3D camera, it merges all the scans together.
LIDAR vs LRF
LRF (Laser Range Finder): refers to 1D scanning.
LIDAR (Light Detection and Ranging): refers to 3D laser scanning.
You don't have one point sweeping around; you have laser lines rotating and sweeping in 3D.
3D laser sensors are used for map-based navigation (typically on autonomous cars).
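The sweeping described above produces polar measurements (angle, range). A minimal sketch, with assumed scan geometry, of turning one rotating-laser sweep into Cartesian points:

```python
import math

def scan_to_points(ranges, angle_min, angle_step):
    """Reading i was taken at angle_min + i * angle_step (radians)."""
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_step
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Four hypothetical readings sweeping 0..270 degrees in 90-degree steps
pts = scan_to_points([1.0, 2.0, 1.0, 2.0], 0.0, math.pi / 2)
# pts[1] is the reading at 90 degrees: roughly (0.0, 2.0)
```

A 3D scanner does the same with an extra elevation angle per laser line.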
What are time of flight cameras?
Something in between a 3D LIDAR and a digital camera. The idea is not to have one or several laser lines sweeping around, but a CCD or CMOS sensor that collects the energy reflected from the environment.
What is CMOS?
It collects the energy reflected from the environment and measures the distance to an object at every point: each pixel of the 3D camera stores a distance measurement.
What does a 3D camera do?
It measures, in real time, the distance to objects in every direction. It is like a normal camera, but instead of storing RGB values, each pixel stores the distance measured along that pixel's viewing ray.
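The per-pixel distance can be turned back into a 3D point with the pinhole camera model. A minimal sketch with hypothetical intrinsics (focal lengths `fx`, `fy` in pixels, principal point `(cx, cy)`), all assumed values:

```python
def pixel_to_point(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth z into a 3D point."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Hypothetical intrinsics: focal length 500 px, principal point (320, 240)
p = pixel_to_point(420, 240, 2.0, 500.0, 500.0, 320.0, 240.0)
# -> (0.4, 0.0, 2.0): 100 pixels right of centre at 2 m depth
```

Applying this to every pixel of the depth image yields a point cloud, which is why time-of-flight cameras sit between LIDARs and ordinary cameras.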
Where is LIDAR used?
3D scans of objects and environments. Augmented reality applications, phones and tablets
What are ground beacons?
One approach to localization is to use active or passive beacons, exploiting the interaction between environmental beacons and the robot's sensors. GPS is based on this idea.
What is trilateration?
If you can measure your distance to at least three reference objects whose positions are known, you can compute your own position. From each object you measure only a distance, not a direction, so one measurement leaves you anywhere on a circle around that object; intersecting the circles from three objects pins down your location. With a well-calibrated sensor, the distances determine exactly where you are.
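The circle intersection above can be solved in closed form: subtracting the circle equations pairwise cancels the quadratic terms and leaves two linear equations in (x, y). A minimal 2D sketch with hypothetical beacon positions, assuming the beacons are not collinear:

```python
import math

def trilaterate(b1, d1, b2, d2, b3, d3):
    """Position from three known beacons b_i and measured distances d_i."""
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    # Linear system A @ [x, y] = c from (circle1 - circle2) and (circle1 - circle3)
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # nonzero iff beacons not collinear
    x = (c1 * a22 - c2 * a12) / det
    y = (a11 * c2 - a21 * c1) / det
    return (x, y)

# Robot actually at (1, 1); distances to beacons at (0,0), (4,0), (0,4)
pos = trilaterate((0, 0), math.sqrt(2), (4, 0), math.sqrt(10), (0, 4), math.sqrt(10))
# -> approximately (1.0, 1.0)
```

With noisy distances the circles no longer intersect exactly, which is where least-squares and the probabilistic techniques from the earlier cards come in.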
Global Positioning Systems (GPS)
The location of a GPS receiver is determined through time-of-flight measurements: we estimate the distance to each visible satellite. With at least four satellites (three for position, one more to resolve the receiver's clock error) we can compute our position on Earth in 3D, not just in 2D.
The error of raw GPS is high, on the order of meters. Mobile phones do not use the raw GPS signal; it is corrected by software that projects the position onto map information.
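The core time-of-flight step is just distance = c × Δt; an error as small as a microsecond in the timing already corresponds to hundreds of metres, which is why the receiver's clock bias must be solved for. A deliberately idealized sketch (no clock bias, assumed numbers):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_tof(dt_seconds):
    """Idealized satellite range from signal travel time (no clock bias)."""
    return C * dt_seconds

# A signal that travelled ~67 ms came from roughly 20,000 km away,
# about the altitude of a GPS satellite orbit
r = range_from_tof(0.067)  # -> about 2.0e7 metres
```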
Problems with GPS?
Our robots are generally indoor robots, so GPS is not very useful there. Outdoors:
1) Satellite coverage: in urban canyons with very high buildings you cannot see enough satellites to compute your position (trilateration).
2) Multipath problem: the receiver suffers from satellite signals reflecting off surfaces before arriving.
Stereo Range Maps?
They use two cameras; they are passive sensors and sensitive to calibration.
Depth From X?
To recover depth information without phase-shift or time-of-flight measurements, we can use other techniques.
For example: stereo vision, matching patterns between the two images,
or recovering depth from motion in the picture, from focus, or from texture.
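For the stereo case, once a pattern is matched in both images, depth follows from the disparity: with focal length f (in pixels) and camera baseline B (in metres), a feature seen with horizontal disparity d pixels lies at depth Z = f·B/d. A minimal sketch with hypothetical calibration values:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a matched stereo feature: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Hypothetical rig: f = 700 px, baseline = 0.12 m, matched disparity = 14 px
z = depth_from_disparity(700.0, 0.12, 14.0)  # -> 6.0 metres
```

Note how depth grows as disparity shrinks: distant points have near-zero disparity, which is why stereo is so sensitive to calibration.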