Week 3 - unsupervised learning Flashcards

1
Q

Unsupervised learning
- 2 characteristics
- 2 common tasks
- example

A

No Labeled Guidance*

Exploration and Patterns*

Common Tasks:

Clustering*

Dimensionality Reduction*

Examples:

Imagine you have a collection of articles, and you want the algorithm to group them into topics without telling it what the topics are. Unsupervised learning can be used to discover natural themes or clusters within the articles.

In image analysis, unsupervised learning might be applied to identify common patterns or features without specifying what to look for.
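
To make the articles example concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the article strings and the choice of two topics (k=2) are made up for illustration and are not part of the card.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Placeholder articles: two roughly finance-themed, two roughly sports-themed.
articles = [
    "stock markets rallied as interest rates fell",
    "the central bank raised interest rates again",
    "the team won the championship after a late goal",
    "the striker scored twice in the final match",
]

# Turn raw text into numeric features (TF-IDF), then cluster without any labels.
features = TfidfVectorizer().fit_transform(articles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # e.g. [0 0 1 1]: the algorithm groups the articles by theme on its own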

2
Q

Unsupervised learning:

K-means clustering

4 steps

A

Initialization*

Assignment*

Update Centroids*

Repeat*
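
The four steps map onto a short loop. Below is a minimal sketch assuming NumPy; X is any (n_samples, n_features) array and k is the chosen number of clusters, and the sketch assumes no cluster ends up empty.

import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)

    # 1. Initialization: randomly pick k data points as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]

    for _ in range(n_iters):
        # 2. Assignment: label each point with its nearest centroid (Euclidean distance).
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)

        # 3. Update centroids: mean of the points assigned to each cluster
        #    (assumes every cluster keeps at least one point).
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])

        # 4. Repeat: stop when the centroids no longer move (or n_iters is reached).
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids

    return labels, centroids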

3
Q

Example: Let’s say you have data representing the heights and weights of people. You want to group them into clusters based on their physical characteristics.

Initialization:
- example

A

Randomly pick k (height, weight) points from the data and use them as the initial cluster centroids.
4
Q

Example: Let’s say you have data representing the heights and weights of people. You want to group them into clusters based on their physical characteristics.

Assignment:
- example

A

Assign each person to the cluster whose centroid is closest to their (height, weight) point, for example using Euclidean distance.
5
Q

Example: Let’s say you have data representing the heights and weights of people. You want to group them into clusters based on their physical characteristics.

Update centroids:
- example

A

Recalculate each centroid as the mean height and mean weight of the people currently assigned to that cluster.
6
Q

Example: Let’s say you have data representing the heights and weights of people. You want to group them into clusters based on their physical characteristics.

Repeat:
- example

A

Keep reassigning people and recomputing the centroids until the cluster assignments no longer change (or a maximum number of iterations is reached).
7
Q

Example: Let’s say you have data representing the heights and weights of people. You want to group them into clusters based on their physical characteristics.

- result
A

After the algorithm converges, you might find clusters where people with similar heights and weights are grouped together.
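
As a rough sketch of what this looks like in code, assuming scikit-learn; the (height, weight) values below are invented for illustration.

import numpy as np
from sklearn.cluster import KMeans

# Made-up (height in cm, weight in kg) pairs forming two loose groups of people.
people = np.array([
    [160, 55], [162, 58], [158, 52], [165, 60],
    [183, 85], [180, 82], [186, 90], [178, 80],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(people)
print(model.labels_)           # e.g. [0 0 0 0 1 1 1 1]: shorter/lighter vs taller/heavier
print(model.cluster_centers_)  # average height and weight of each cluster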

8
Q

Exploration and Patterns

A

The algorithm explores the data to identify inherent patterns, similarities, or structures on its own.

It tries to understand the natural organization or grouping within the data.

9
Q

Clustering

A

Grouping similar data points together

10
Q

Dimensionality Reduction

A

Simplifying the data while retaining its essential features.
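
One common way to do this is Principal Component Analysis (PCA), which is not named on this card but illustrates the idea. A minimal sketch assuming scikit-learn, with made-up data in which one of the three features is nearly redundant:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
# Three measured features, but the third is almost a copy of the first,
# so most of the variation really lives in two dimensions.
X = np.column_stack([base[:, 0], base[:, 1], base[:, 0] + 0.01 * rng.normal(size=100)])

# Keep 2 components: a simpler representation that retains most of the variance.
X_reduced = PCA(n_components=2).fit_transform(X)
print(X.shape, "->", X_reduced.shape)  # (100, 3) -> (100, 2)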

11
Q

K-means clustering:

Clusters

A

A cluster is a group of data points that are similar to each other.

The idea is to find natural groupings in the data without knowing beforehand what those groups are.

12
Q

K-means clustering:

K

A

“k” is the number of clusters you want to identify in the data.

You need to decide or specify the value of k before running the algorithm.

13
Q

K-means clustering:

Initialization

A

Randomly select k points as the initial cluster centroids (centers).

14
Q

K-means clustering:

Assignment

A

Assign each data point to the cluster whose centroid is the closest (based on distance metrics like Euclidean distance).

15
Q

K-means clustering:

Update Centroids

A

Recalculate the centroids of each cluster based on the mean of the data points in that cluster.

16
Q

K-means clustering:

Repeat

A

Repeat the assignment and centroid update steps until the clusters stabilize or a certain number of iterations is reached.

17
Q

No Free Lunch Theorem

A

The “No Free Lunch Theorem” is a concept in machine learning and optimization that essentially states that there is no one-size-fits-all algorithm that performs best for every problem.
