Segmentation and Clustering p1 - Lecture 7 - Week 3 Flashcards

1
Q

What is grouping in computer vision?

A

To gather features that belong together

Goals:
- Gather features that belong together
- Obtain an intermediate representation that compactly describes key image (video) parts

2
Q

What is segmentation useful for?

A

Compactly provides information about key parts of the image
Can use the results in other algorithms
Can extract “tiger” from images and then make a model of its appearance

3
Q

What constitutes a group?

A

Gestalt school shows grouping is key to visual perception
Elements in a collection can have properties that result from relationships
“The whole is greater than the sum of its parts”

Relationships among parts can yield new properties/features

4
Q

What are the gestalt factors?

A

Proximity
Similarity
Common Fate
Common Region

Parallelism
Symmetry
Continuity
Closure

Can apply algorithms based on common fate by finding the gradient of the image and grouping based on that
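The gradient-based grouping mentioned above can be sketched roughly as follows (a minimal NumPy sketch under my own assumptions, not the lecture's algorithm; `group_by_gradient` and the bin count are illustrative — pixels are simply binned by gradient orientation):

```python
import numpy as np

def group_by_gradient(img, n_bins=4):
    """Bin pixels by gradient orientation -- a crude grouping cue.

    img: 2-D float array (grayscale image).
    Returns an integer label map of the same shape.
    """
    gy, gx = np.gradient(img.astype(float))   # per-pixel gradients
    theta = np.arctan2(gy, gx)                # orientation in [-pi, pi]
    # Quantise orientation into n_bins equal sectors
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    return bins

# Toy image: a horizontal ramp -- every pixel shares the same gradient,
# so all pixels fall into one group
ramp = np.tile(np.arange(8, dtype=float), (8, 1))
labels = group_by_gradient(ramp)
```

A real common-fate cue would use motion (e.g. optic flow) rather than a single image's gradient, but the grouping step has the same shape.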

5
Q

What is top-down and bottom-up segmentation?

A

Top down: pixels belong together because they are from the same object

Bottom Up: pixels belong together because they look ‘similar’

6
Q

Why is it hard to measure segmentation success?

A

What is interesting depends on the application

7
Q

How is the ground truth found in the Berkeley Segmentation dataset?

A

It is an aggregation of all the human annotations, weighted by how consistently each boundary was marked.

This allows algorithms to be compared fairly

8
Q

What are superpixels and what is the idea behind them?

A

Groupings of pixels: an over-segmentation in which each region is very likely to be uniform

9
Q

What graph is useful for clustering if we don’t know how many clusters there are?

A

A histogram of pixel count versus intensity: its peaks (modes) suggest how many clusters are present
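Such a histogram can be built and its modes counted as follows (a minimal NumPy sketch; `intensity_histogram`, `count_modes`, the bin count, and the toy image are my own illustrative choices, not from the lecture):

```python
import numpy as np

def intensity_histogram(img, n_bins=32):
    """Pixel count versus intensity; its modes suggest a cluster count."""
    counts, edges = np.histogram(img.ravel(), bins=n_bins, range=(0, 256))
    return counts, edges

def count_modes(counts):
    """Count strict local maxima of the histogram (candidate clusters)."""
    interior = counts[1:-1]
    return int(np.sum((interior > counts[:-2]) & (interior > counts[2:])))

# Toy image with two intensity populations: dark (40) and bright (200)
img = np.array([40.0] * 500 + [200.0] * 500)
counts, _ = intensity_histogram(img)
```

On this toy data the histogram has two peaks, matching the two intensity populations; real images would need smoothing before peak counting.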

10
Q

Where are the best intensity cluster centre points?

A

The centres that minimise the sum squared distance (SSD) between all points and their nearest cluster centre
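This objective can be written down directly (a minimal NumPy sketch; the function name `ssd` and the toy points are illustrative):

```python
import numpy as np

def ssd(points, centres):
    """Sum of squared distances from each point to its nearest centre."""
    # Pairwise squared distances, shape (n_points, n_centres)
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

points = np.array([[0.0], [1.0], [10.0], [11.0]])   # 1-D intensities
good = np.array([[0.5], [10.5]])   # centres at the two cluster means
bad = np.array([[5.0], [6.0]])     # centres between the clusters
# Placing centres at the cluster means gives a much lower SSD
```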

11
Q

How does K-means clustering work?

A

Randomly initialise the k cluster centres and iterate between the two steps of:
Allocating intensities to clusters based on the centres
Allocating centres based on the allocated intensity clusters

  1. Randomly initialise the k cluster centres, c1, …, ck
  2. Given the cluster centres, determine the points in each cluster
    - For each point p, find the closest centre ci; put p into cluster i
  3. Given the points in each cluster, solve for the centres
    - Set ci to be the mean of the points in cluster i
  4. If any ci has changed, repeat from Step 2

Properties:
- Will always converge to some solution
- Can be a “local minimum”
- Does not always find the global minimum of objective function

Since K-means is not guaranteed to find the global minimum, in practice we run it several times from different random initialisations and keep the solution with the lowest objective value
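The four steps above, plus the multi-restart trick, can be sketched like this (a minimal NumPy sketch, not the lecture's reference implementation; `kmeans`, `kmeans_restarts`, and the toy data are illustrative):

```python
import numpy as np

def kmeans(points, k, n_iters=100, seed=None):
    """One run of k-means: random centres, then alternate the two steps."""
    rng = np.random.default_rng(seed)
    # 1. Randomly initialise the k centres from the data points
    centres = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iters):
        # 2. Assign each point to its nearest centre
        d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # 3. Move each centre to the mean of its assigned points
        new = np.array([points[labels == i].mean(axis=0)
                        if np.any(labels == i) else centres[i]
                        for i in range(k)])
        # 4. Stop once the centres stop moving
        if np.allclose(new, centres):
            break
        centres = new
    cost = d2.min(axis=1).sum()   # within-cluster squared error
    return centres, labels, cost

def kmeans_restarts(points, k, n_restarts=10):
    """Run k-means several times; keep the lowest-cost solution."""
    runs = [kmeans(points, k, seed=s) for s in range(n_restarts)]
    return min(runs, key=lambda r: r[2])

data = np.array([[0.0], [1.0], [10.0], [11.0]])
centres, labels, cost = kmeans_restarts(data, k=2)
```

Each restart can converge to a different local minimum; keeping the lowest-cost run only makes finding a good solution more likely, it still does not guarantee the global optimum.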

12
Q

What is a feature space?

A

The property we cluster images based on, e.g:
Intensity
Colour similarity
These properties don’t imply spatial coherence, so we can cluster (r,g,b,x,y) to encode similarity and proximity
RGB is arguably a poor colour space to cluster in, but that is not really the point here
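Building the (r,g,b,x,y) feature space is a simple stacking operation (a minimal NumPy sketch; `rgbxy_features` and the `spatial_weight` knob are my own illustrative additions — the weight trades colour similarity against spatial proximity):

```python
import numpy as np

def rgbxy_features(img, spatial_weight=1.0):
    """Stack each pixel's colour and position into a 5-D feature vector.

    img: (H, W, 3) array.
    Returns an (H*W, 5) array of (r, g, b, x, y) rows, ready for clustering.
    """
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]   # per-pixel row/column coordinates
    feats = np.concatenate(
        [img.reshape(-1, 3).astype(float),
         spatial_weight * np.stack([xs.ravel(), ys.ravel()], axis=1)],
        axis=1)
    return feats

img = np.zeros((4, 6, 3), dtype=np.uint8)   # toy 4x6 black image
feats = rgbxy_features(img)
```

Clustering these rows (e.g. with k-means) then groups pixels that are both similar in colour and close in the image.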

13
Q

What are the pros and cons of K-means clustering?

A

Pros
- Simple, fast to compute
- Converges to local minimum of within-cluster squared error

Cons/Issues
- Setting k?
- Sensitive to initial centres
- Sensitive to outliers
- Detects spherical clusters only
- Gives a hard assignment; a soft assignment (e.g. 70% in cluster 1 and 30% in cluster 2) would often be more informative

Still very widely used: it is simple, fast to compute, and gives a quick answer
