G-Prots Flashcards

1
Q

Define specificity.

A
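
In the classification-metric sense assumed here, specificity is the true-negative rate: the proportion of actual negatives that the model correctly labels as negative,

  Specificity = TN / (TN + FP)

A highly specific classifier rarely produces false positives.
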
2
Q

PCA removes irrelevant features.
Is that correct?

A

PCA itself doesn’t explicitly remove irrelevant features; rather, it projects the data onto a lower-dimensional space whose axes (the principal components) are linear combinations of all original features. Inspecting the component loadings can, however, highlight which features contribute most to the variance in the data.
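
A minimal sketch of that behaviour, assuming scikit-learn and a made-up data matrix X (both are illustrative, not part of the original card):

  import numpy as np
  from sklearn.decomposition import PCA

  X = np.random.rand(100, 5)            # 100 samples, 5 original features
  pca = PCA(n_components=2)             # keep the 2 directions of largest variance
  Z = pca.fit_transform(X)              # shape (100, 2): new components, not a subset of X's columns
  print(pca.components_.shape)          # (2, 5): each component mixes all 5 original features
  print(pca.explained_variance_ratio_)  # how much variance each component explains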

3
Q

Does ChiMerge need labels?

A

ChiMerge is used for discretizing continuous features into intervals, but it relies on a target (label) variable to compute the Chi-square statistic that decides which adjacent intervals to merge. ChiMerge therefore does use labels; it is a supervised discretization method.
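
A minimal illustration (assuming SciPy; the interval counts are invented) of where the labels enter: the Chi-square value for a pair of adjacent intervals comes from a contingency table of interval membership versus class label.

  import numpy as np
  from scipy.stats import chi2_contingency

  # Rows = two adjacent intervals, columns = class-label counts inside each interval.
  adjacent = np.array([[10, 2],
                       [ 9, 3]])
  chi2, p, dof, expected = chi2_contingency(adjacent, correction=False)
  print(chi2)  # low value -> intervals look alike w.r.t. the label -> candidates for merging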

4
Q

Chi-Square

H0: There is no association between the variables.
How do you compute the expected values?

A
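
For a contingency table, the expected count of cell (i, j) under H0 is

  E_ij = (row total_i × column total_j) / N

where N is the total number of observations. For example, a cell in a row with total 20 and a column with total 30, out of N = 100 observations, has an expected count of 20 × 30 / 100 = 6.
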
5
Q

Name reasons for doing feature discretization.

A

Feature discretization, also known as binning, involves converting continuous data into discrete bins or intervals.
1. Simplification: Discretization can simplify the model and make it easier to interpret, especially when dealing with continuous variables that have a large range of values.
2. Reduction of Noise: Aggregating continuous data into bins can reduce the noise, particularly if the continuous measurements are prone to errors or fluctuations that are not relevant for the analysis.
3. Improving Model Performance: Some machine learning algorithms, such as decision trees, might perform better with categorical data. Discretizing features can thus potentially improve the performance of these models.
4. Handling Outliers: Discretization can mitigate the impact of outliers by pooling them into the same bins as less extreme values, thus reducing their influence on the model.
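
A short binning sketch, assuming pandas and an invented ages series (not part of the original card), showing equal-width and equal-frequency discretization:

  import pandas as pd

  ages = pd.Series([18, 22, 25, 31, 47, 52, 63, 70])
  equal_width = pd.cut(ages, bins=3)   # 3 intervals of equal width
  equal_freq = pd.qcut(ages, q=4)      # 4 intervals with (roughly) equal counts
  print(equal_width.value_counts())
  print(equal_freq.value_counts())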

6
Q

Outline the steps of the ChiMerge Algorithm.

A
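
In outline (following the usual description of the algorithm):
1. Sort the values of the feature and start with every distinct value (or an initial fine binning) as its own interval.
2. For each pair of adjacent intervals, build a contingency table of interval membership versus class label and compute its Chi-square value.
3. Merge the adjacent pair with the lowest Chi-square value, i.e. the pair whose class distributions are most similar.
4. Repeat steps 2-3 until every remaining Chi-square value exceeds a threshold derived from a chosen significance level, or until a minimum/maximum number of intervals is reached.
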
7
Q

Name one advantage of regular ReLU over the Sigmoid function.

A
  • ReLU helps to mitigate the vanishing gradient problem, since its gradient does not saturate for positive inputs
  • ReLU is computationally simpler and therefore faster to evaluate than the Sigmoid function
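
A small NumPy comparison (illustrative, not part of the original card) of both points:

  import numpy as np

  x = np.array([-5.0, 0.0, 5.0])
  relu = np.maximum(0.0, x)                 # one comparison per element
  sigmoid = 1.0 / (1.0 + np.exp(-x))        # needs an exponential per element
  relu_grad = (x > 0).astype(float)         # 1 for positive inputs, never saturates there
  sigmoid_grad = sigmoid * (1.0 - sigmoid)  # at most 0.25, ~0.007 at |x| = 5
  print(relu_grad, sigmoid_grad)
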
8
Q

What is Leaky ReLU?

A
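
Leaky ReLU is a variant of ReLU that does not output exactly zero for negative inputs but lets a small fraction of them through:

  f(x) = x        if x > 0
  f(x) = α · x    if x ≤ 0

where α is a small constant slope (commonly around 0.01).
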
9
Q

What is the advantage of Leaky ReLU?

A
  1. It reduces the "dead neurons" problem, because neurons can remain active (and receive a gradient) even for negative input values.
  2. Unlike ReLU, its gradient is non-zero for negative inputs, which keeps weight updates flowing and stabilizes the learning process.
10
Q

What are the disadvantages of Leaky ReLU?

A
  1. The optimal value of α often has to be determined by experimentation or cross-validation.
  2. In certain situations the small negative slope can have undesirable effects, e.g. it may not be the best choice for a specific dataset or model architecture.
11
Q

Name:

A