Lecture 9 - Complex Sensors Flashcards

1
Q

Complex sensors

  1. What do they provide and require?
  2. Examples
A
  1. Provide much more information & require sophisticated processing
  2. ultrasound, laser, vision, radar
2
Q

Sonar

  1. Subtype of..
  2. What does it use?
  3. What kind of sensor?
A
  1. Subtype of ultrasound sensors
  2. Localizes with echolocation (like bats)
  3. Active sensor (transmits sound, waits for it to bounce back)
3
Q

Exact sonar method (3)

-> distance up to..

A
  1. Measure the time between emission and detection of the sound
  2. Multiply by the speed of sound
  3. Divide by 2

-> distances up to 10 meters & a 30-degree sound cone
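The three steps above amount to a one-line calculation; a minimal sketch (the 343 m/s speed of sound in air at room temperature is an assumed value):

```python
# Sonar time-of-flight ranging: a minimal sketch.
# Assumes sound travels at ~343 m/s in air at ~20 degrees C.
SPEED_OF_SOUND = 343.0  # m/s

def sonar_distance(round_trip_time_s: float) -> float:
    # 1. measure the time between emission and detection (the input)
    # 2. multiply by the speed of sound
    # 3. divide by 2 (the sound travels out and back)
    return round_trip_time_s * SPEED_OF_SOUND / 2.0

# An echo after ~58 ms corresponds to roughly 10 m, the typical sonar range limit.
print(sonar_distance(0.0583))
```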

4
Q

Sonar

Problem

A

-Specular reflection

Sound waves from the emitter bounce off the object surface before returning to the detector

5
Q

Sonar
When is specular reflection likely?
What is a solution to it?

A

Smooth surfaces, small angles, reflective surfaces

Solution: Multiple readings, active sensing

6
Q

Laser

  1. How does a laser range sensor work?
  2. Resolution compared to sonar?
  3. Frequency compared to sonar?
A
  1. Same principle as sonar, but faster (phase-shift measurement)
  2. Light is emitted in a beam, not a cone
  3. Light is faster -> more measurement possibilities
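A minimal sketch of the phase-shift idea (the modulation frequency and phase value below are made-up examples): the distance follows from how far the modulated light's phase has shifted over the round trip.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_shift_distance(delta_phi_rad: float, mod_freq_hz: float) -> float:
    # Out-and-back travel gives d = c * delta_phi / (4 * pi * f);
    # measuring a phase shift is much faster than timing a sound echo.
    return C * delta_phi_rad / (4.0 * math.pi * mod_freq_hz)

# e.g. a pi/2 phase shift with 5 MHz amplitude modulation -> ~7.5 m
print(phase_shift_distance(math.pi / 2, 5e6))
```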
7
Q

Vision

  1. Motion vision
  2. Stereo/binocular vision
A
  1. Difference between sequential frames (many tasks)
  2. Find matches between stereo images (see depth)
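Motion vision (point 1 above) can be sketched as simple frame differencing; the toy frames and the threshold are made up:

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                threshold: float = 25.0) -> np.ndarray:
    # Pixels whose brightness changed by more than the threshold
    # between two sequential grayscale frames are flagged as motion.
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return diff > threshold

# Toy 4x4 frames in which a single pixel brightens
prev = np.zeros((4, 4))
curr = np.zeros((4, 4))
curr[1, 2] = 200.0
print(motion_mask(prev, curr).sum())  # 1 pixel flagged as moving
```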

8
Q

Vision

What is needed for tracking a ball (2)?

A
  1. Unique color detection
  2. Movement detection

9
Q

Vision

  1. Sensor fusion
  2. How to simplify processing?
A
  1. Use vision in combination with other sensors
  2. Use knowledge about the environment

10
Q
Vision
Edge detection
1. How are edges detected?
2. Which mathematical method is used? General and precise?
3. What is its purpose?
A
  1. An edge (image feature) is where pixels change their brightness
  2. The second derivative; precisely, the Laplacian (applied by convolution)
  3. Segmentation of the image
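A minimal numpy sketch of the idea: convolving with a 3x3 Laplacian kernel (the discrete second derivative) gives a nonzero response only where brightness changes.

```python
import numpy as np

# 3x3 Laplacian kernel: discrete second derivative in both directions
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def convolve2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    # Naive "valid" 2-D convolution (no padding); the kernel is symmetric,
    # so convolution and correlation coincide here.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical brightness step: the response is nonzero only at the edge
img = np.zeros((5, 6))
img[:, 3:] = 100.0   # bright region on the right
edges = convolve2d(img, LAPLACIAN)
print(edges)
```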
11
Q

Vision
Edge detection
What is a problem?
Solution?

A
  1. Noise
  2. Gaussian smoothing (blur the image before edge detection)
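Smoothing spreads an isolated noise spike out so it no longer triggers the edge detector; a minimal sketch using a separable 1-D Gaussian (sigma and radius are arbitrary choices):

```python
import numpy as np

def gaussian_kernel_1d(sigma: float, radius: int) -> np.ndarray:
    # Normalized 1-D Gaussian; full 2-D smoothing is two passes of this
    # (the Gaussian is separable).
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth_rows(img: np.ndarray, sigma: float = 1.0, radius: int = 2) -> np.ndarray:
    # Smooth each row; apply again on the transpose for full 2-D smoothing.
    k = gaussian_kernel_1d(sigma, radius)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

# A single noisy spike is flattened instead of looking like an edge
noisy = np.zeros((3, 9))
noisy[1, 4] = 100.0
print(smooth_rows(noisy)[1])
```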

12
Q
Image feature
SIFT
1. What does it find?
2. Which method is used?
-> what should not be where?
A
  1. Image points with high contrast
  2. Difference of Gaussians (DoG) filter
    -> Extrema in the DoG image should not lie on a line
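A minimal 1-D illustration of the Difference-of-Gaussians idea (the signal and the two sigmas are made up): blur at two nearby scales and subtract; extrema of the difference mark the high-contrast points.

```python
import numpy as np

def gauss1d(sigma: float, radius: int) -> np.ndarray:
    # Normalized 1-D Gaussian kernel
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

signal = np.zeros(21)
signal[10] = 1.0  # an isolated high-contrast point

# Blur at two nearby scales and subtract: the DoG response
blur_small = np.convolve(signal, gauss1d(1.0, 4), mode="same")
blur_large = np.convolve(signal, gauss1d(1.6, 4), mode="same")
dog = blur_small - blur_large

# The DoG extremum sits exactly at the high-contrast point
print(int(np.argmax(np.abs(dog))))  # 10
```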
13
Q

Image feature
SIFT
  1. Descriptor (what to do with the extrema?)
-> save what where?

A
  1. Look at the pixels around each extremum in the DoG image and compute their gradients (this describes the keypoint)
    -> save the gradient histograms in a database
14
Q
Image feature
SIFT
Matching
1. What to compare?
2. When does it recognize location/object?
A
  1. Compare each new keypoint descriptor with the descriptors in the database (compute the distance; if small -> recognition)
  2. When enough keypoints associated with the location are recognized
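Matching can be sketched as a nearest-neighbour search over descriptor distances; the 4-bin histograms and the distance threshold below are made up (real SIFT descriptors have 128 dimensions):

```python
import numpy as np

def match_keypoint(descriptor: np.ndarray, database: list, max_dist: float = 0.4):
    # Return the index of the closest database descriptor,
    # or None if even the closest one is too far away.
    dists = [float(np.linalg.norm(descriptor - d)) for d in database]
    best = int(np.argmin(dists))
    return best if dists[best] < max_dist else None

# Hypothetical gradient histograms stored in the database
db = [np.array([0.9, 0.1, 0.0, 0.0]),   # keypoint of location A
      np.array([0.0, 0.2, 0.7, 0.1])]   # keypoint of location B

query = np.array([0.85, 0.15, 0.0, 0.0])
print(match_keypoint(query, db))  # 0 -> matched to location A's keypoint
```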
15
Q

Motion Detection

  1. Motion field
  2. Optical flow
A
  1. Motion of the actual object
  2. Motion as it optically appears (could just be a reflection)

16
Q

Color Detection

  1. What does it detect and subsequently compare?
  2. When is it useful?
A
  1. Color-blobs, the positions of which are then compared in subsequent pictures
  2. Useful in environments with low illumination variance
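Color-blob detection can be sketched as thresholding on a target color and tracking the blob's centroid across frames; the toy image, target color, and tolerance are made up:

```python
import numpy as np

def blob_center(img: np.ndarray, target: np.ndarray, tol: float = 30.0):
    # Centroid of all pixels whose color is close to the target;
    # comparing centroids across subsequent frames tracks the blob.
    mask = np.all(np.abs(img.astype(float) - target) < tol, axis=-1)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

# Toy 5x5 RGB image with a red 2x2 blob
frame = np.zeros((5, 5, 3))
frame[1:3, 2:4] = [255, 0, 0]
print(blob_center(frame, np.array([255.0, 0.0, 0.0])))  # (1.5, 2.5)
```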
17
Q

Perceptual pipeline

  1. What does it describe?
  2. (4) steps
A
  1. Pathway from sensor readings to knowledge models
  2. Sensing -> Signal Treatment -> Feature Extraction -> Scene Interpretation

18
Q

Image Processing

  1. Feature extraction
  2. Range data: what is it used for?
A
  1. High-level features, abstractions of raw data
  2. Range data: sonar and laser can be used for line extraction (recognition) and particular corners like 'doorway'

19
Q

Feature extraction
Visual Appearance
1. What does it need?
2. (2) main feature types

A
  1. Needs image preprocessing
  2. (1) Spatially localized features and (2) whole-image features

20
Q

Feature extraction
Spatially localized features
(3) examples + (1) complex

A
  1. Edge detection
  2. Straight edge detection
  3. Interest points -> SIFT
  4. Floor plane extraction
21
Q
Feature extraction
Whole-image features
1. What information is used?
2. What does it give a representation of?
3. What should small changes do?
A
  1. Information captured by the entire image
  2. Gives a representation of the entire local region around the robot
  3. Small changes should NOT change the whole-image feature completely
22
Q

Transducer

A

Device that transforms one form of energy (e.g. mechanical) into another (e.g. sound)
-> used in sonar to create sound

23
Q

Device that transforms one form of energy (e.g. mechanical) into another (e.g. sound)

A

Transducer

24
Q
  1. Motion of the actual object
  2. Motion as it optically appears (could just be a reflection)

A

Motion Detection

  1. Motion field
  2. Optical flow