Oldie Flashcards

1
Q

Shortcomings of Information Gain?

A
  • IG tends to prefer highly branching attributes
  • Subsets are more likely to be pure if an attribute has a large number of values
    • May result in overfitting

Solve with Gain Ratio

2
Q

What aspect of Information Gain is “Gain Ratio” intended to remedy? Explain with equation how to achieve this.

A
  • Fixes IG’s shortcoming of preferring highly branching attributes (e.g. an ID attribute: many branches, each pure/low-entropy, so very high IG)
  • GR reduces IG’s bias toward highly branching attributes by normalising relative to the Split Information (SI).

GR(Ra | R) = IG(Ra | R) / SI(Ra | R)
           = [H(R) - Σi P(xi) H(xi)] / [-Σi P(xi) log P(xi)]

(where the xi are the subsets produced by splitting R on attribute Ra)
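
A minimal sketch of these quantities in Python (all names and the toy data are illustrative), treating one candidate attribute as a list of (attribute value, class label) pairs:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H = -sum p log2 p over the class distribution."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(pairs):
    """pairs: list of (attribute_value, class_label) for one candidate attribute."""
    n = len(pairs)
    labels = [c for _, c in pairs]
    subsets = {}                       # group class labels by attribute value
    for v, c in pairs:
        subsets.setdefault(v, []).append(c)
    props = [len(s) / n for s in subsets.values()]
    ig = entropy(labels) - sum(p * entropy(s) for p, s in zip(props, subsets.values()))
    si = -sum(p * log2(p) for p in props)   # split information
    return ig / si if si > 0 else 0.0

# toy outlook data: GR divides away the advantage of a many-valued split
data = [("sunny", "no"), ("sunny", "no"), ("sunny", "yes"),
        ("rainy", "yes"), ("rainy", "no"), ("overcast", "yes")]
print(gain_ratio(data))  # ≈ 0.14
```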

3
Q

What’s the difference between classification and regression?

A
Classification = Predicting which class a data point is part of.
- The dependent variables are categorical

Regression = Predicting continuous values
- The dependent variables are numerical

4
Q

Outline the nature of “hill climbing” & provide example of hill climbing algorithm.

A
  • Iteratively climbs toward a maximum, only ever taking steps that improve the objective - sometimes gets stuck in a local maximum rather than the global one (the two coincide only for suitably well-behaved, e.g. convex, objectives)
    e.g. the EM algorithm - guaranteed positive hill climb (each iteration never decreases the objective)
5
Q

What basic approach does hill climbing contrast with?

A

Exhaustive (brute-force) search of the whole hypothesis space: hill climbing only ever evaluates neighbours of the current state, rather than enumerating every candidate.

6
Q

Advantages of “bagging”

A
  • Decrease variance
    • Highly effective over noisy data
  • Performance is generally better & never substantially worse
  • Possibility to parallelise computation of individual base classifiers
7
Q

What is sum of squared errors? Name one thing it’s applied to.

A

Method to evaluate the quality of clusters.

Applied to K-Means
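
A tiny sketch, assuming clusters are given as lists of points alongside their centroids (names illustrative):

```python
def sse(clusters, centroids):
    """Sum of squared Euclidean distances from each point to its cluster centroid."""
    return sum(
        sum((xi - ci) ** 2 for xi, ci in zip(point, centroid))
        for cluster, centroid in zip(clusters, centroids)
        for point in cluster
    )

# two 2-D clusters with their centroids; lower SSE = more compact clusters
print(sse([[(1, 1), (2, 2)], [(8, 8)]], [(1.5, 1.5), (8, 8)]))  # 1.0
```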

8
Q

How does stratified cross validation contrast with non-stratified cross validation?

A

Stratified CV is generally better than non-stratified CV.
- Both in terms of bias and variance

Achieves this by rearranging the data to ensure each fold is a good representation of the whole data set.

9
Q

Outline the difference between supervised and unsupervised ML methods

A

Unsupervised:

  • No knowledge of labelled data
  • Needs to find clusters, patterns, etc by itself

Supervised:

  • Has knowledge of labelled data
  • Learning involves comparing its current predicted output with the correct outputs
    • Able to know what the error is
10
Q

Define “discretisation” with an example

A

Process of converting continuous attributes to nominal or ordinal attributes, e.g. binning a continuous "age" attribute into {young, middle-aged, old}.

Some learners, such as Decision Trees, generally work better with nominal attributes.

11
Q

Briefly describe what “overfitting” is

A

Overfitting is when the model picks up errors and/or noise in the training data (often made worse by a lack of training data)

  • High variance in classifier
  • Causes the training accuracy and test accuracy to not be similar - big gap between test and training curves
  • Occurs when the model is excessively complex
  • Struggles to generalise
  • Overreacts to minor fluctuations in the training data
12
Q

Briefly outline the “inductive learning hypothesis”

A

Any hypothesis found to approx. the target function well, over a sufficiently large set of training examples, will also approximate the target function well, over any other unobserved examples

13
Q

Categorise each as either “parametric” or “non-parametric”

i) SVM
ii) Max Entropy Model
iii) C4.5 -> Style DT

A

SVM - Depends on kernel
Max Entropy - Parametric (it learns a fixed-size weight vector)
DT - Non-parametric

14
Q

What is a “consistent” classifier?

A

A classifier which is able to flawlessly predict the class of all TRAINING instances

15
Q

Outline the difference between “Lazy” and “eager” classifiers

A

Lazy (instance-based) -> KNN
- Stores data and waits until a query is made to do anything

Eager (decision tree, SVM)
- Constructs a classification model (generalise the training data) before receiving queries (new data to classify)

16
Q

What is Zero-R?

A

Classifies all instances according to the most frequent class in the training data.

17
Q

Briefly outline the nature of conditional independence assumption in the context of NB

A

The attribute values are assumed to be conditionally independent of one another given the class:

P(x1, ..., xn | c) = P(x1 | c) × P(x2 | c) × ... × P(xn | c)

Rarely true in practice, but it makes the probabilities feasible to estimate and often works well regardless.

18
Q

What is “hidden” in a HMM?

A

The sequence of states the model moves through - the current state is not observable; only the outputs are (see card 70).

19
Q

What is the role of “log likelihood” in EM algorithm?

A

The log likelihood is the objective EM climbs: the E-step computes the expected log likelihood under the current parameter estimates, the M-step picks new parameters that maximise it, and convergence is declared when the log likelihood stops improving (see cards 87-88).

20
Q

“Smoothing” is considered important when dealing with probability

i) why?
ii) name one smoothing method - explain how it works

A

i) Without smoothing, any event unseen in training gets probability 0, which zeroes out an entire product of probabilities (e.g. in Naïve Bayes).

ii) Laplace (add-one) smoothing: add 1 to every count, so P(x | c) = (count(x, c) + 1) / (count(c) + V), where V is the number of possible values. No event ends up with probability 0.

21
Q

What are 2 parameters required to generate a Gaussian probability density function?

A

The mean μ and the variance σ² (equivalently, the standard deviation σ).

22
Q

How are missing values in test instances dealt with in NB?

A

If in a training instance -> don’t include the missing value in the counts used to build the classifier
If in a test instance -> leave that attribute out of the likelihood product
In short: just drop the missing values

23
Q

Briefly explain how to generate a training & test learning curve

A

Train the classifier on increasing amounts of the training data; at each size, record accuracy on the training sample and on a held-out test set, then plot both accuracies against training-set size.

24
Q

Explain with equations what PRECISION and RECALL are

A

Use Precision and Recall when we want to know how well the positives are identified, not just what proportion of everything was classified correctly (accuracy).

Precision = positive predictive value - the proportion of positive predictions that are correct
= TP / (TP + FP)

Recall = sensitivity - accuracy with respect to positive cases
= TP / (TP + FN)
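
A small sketch computing both from parallel lists of gold and predicted labels (names illustrative):

```python
def precision_recall(gold, predicted, positive="yes"):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN), counted for the positive class."""
    tp = sum(1 for g, p in zip(gold, predicted) if p == positive and g == positive)
    fp = sum(1 for g, p in zip(gold, predicted) if p == positive and g != positive)
    fn = sum(1 for g, p in zip(gold, predicted) if p != positive and g == positive)
    return tp / (tp + fp), tp / (tp + fn)

p, r = precision_recall(["yes", "yes", "no", "no"], ["yes", "no", "yes", "no"])
print(p, r)  # 0.5 0.5
```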

25
Q

What sort of task is “root relative squared error” used to evaluate

A

Don’t think this is part of the course

26
Q

Describe with equation how the “overlap” metric is used to calculate distance between 2 continuous feature values in instance based learning

A

Don’t think this is part of the course

27
Q

Advantage of “overlap” metric over Euclidean Distance?

A

??? don’t think relevant

28
Q

Describe how instance based learning combines distances for discrete and continuous features

A

??? don’t think relevant

29
Q

Describe with use of an equation how “inverse distance” is used in weighting voting for instance based learning

A

??? don’t think relevant

30
Q

Outline each step you would go through in calculating classification accuracy based on “10-fold stratified cross-validation” for a given dataset and supervised algo.

A

1) Randomly partition the dataset into 10 folds, stratifying so each fold preserves the class distribution of the full dataset
2) For each fold: train the algorithm on the other 9 folds, classify the held-out fold, and record the accuracy
3) Average the 10 per-fold accuracies to get the final estimate

31
Q

Why is stratified cross-validation superior to “holdout” method

A

Every instance is used for both training and testing, so the estimate doesn’t hinge on a single (possibly unrepresentative) split; averaging over 10 folds reduces the variance of the estimate, and stratification keeps each fold representative.

32
Q

Outline steps involved in the “bagging” algorithm

A

1) Generate k bootstrap samples of the training set (random sampling with replacement, each the same size as the original)
2) Train a base classifier over each sample
3) Classify test instances by unweighted voting across the k base classifiers

(See card 104.)

33
Q

Give an example of an “unstable” algorithm

A

Decision trees - small changes in the training data can produce very different trees (which is why bagging them, as in random forests, works well).

34
Q

Classify as either supervised or unsupervised

i) C4.5 DT
ii) K-means
iii) boosting
iv) zero-R

A

i) Supervised
ii) Unsupervised
iii) Supervised (boosting reweights labelled training instances)
iv) Supervised

35
Q

What is an “incremental” algo? When is this a desirable property?

A

One that can update its model with new training instances on the fly, without retraining from scratch (e.g. nearest neighbour - just store the new instance). Desirable when data arrives as a stream, or when the dataset grows over time.

36
Q

What is the relationship between similarity and distance?

A

They are inverses of one another: the more similar two points, the smaller the distance between them. A nearest neighbour can equivalently be defined by maximum similarity or minimum distance (see card 37).

37
Q

What is a nearest neighbour?

A

Closest point: maximum similarity or minimum distance

x is a nearest neighbour of y if d(x, y) = min{ d(z, y) : z ∈ C }

38
Q

Big O of brute force nearest neighbour?

A

N samples, D dimensions
- O(DN^2)

Good for small datasets, but quickly becomes infeasible as N grows

39
Q

PROS and CONS of Nearest Neighbours

A

PROS

  • Simple
  • Can handle arbitrarily many classes
  • Incremental (can add extra data to the classifier on the fly)

CONS

  • Prone to bias
  • Arbitrary k-value
  • Need a useful distance function
  • Need an averaging function
  • Expensive
  • Lazy learner
40
Q

What is the specificity equation

A

Accuracy with respect to negative cases

Specificity = TN / (TN + FP)

41
Q

What is the (training) bias of a classifier?

A

The difference between correct and predicted value, averaged over training sets

  • Bias is large if the learning method produces classifiers that are consistently wrong (underfitting)
  • Bias is small if consistently correct, or if different training sets cause errors on different test instances
42
Q

What is the (test) variance of a classifier?

A

The variation in the predictions of learned classifiers across training sets - measures inconsistency, not accuracy

  • Variance is large if different training sets give rise to very different classifiers (overfitting)
  • Variance is small if different training sets have a minor effect on the classification decisions made, don’t care if correct or incorrect
43
Q

Pros and Cons of Holdout Strategy?

A

Pros

  • Simple to work with
  • High reproducibility

Cons

  • Tradeoff between more training and more test data (variance vs bias)
  • Representativeness of the training and test data
44
Q

Pros and Cons of Random Sampling?

A

PROS
- Reduction in variance and bias over “holdout” strategy

Cons
- Lack of reproducibility

45
Q

Pros and Cons of leave-one-out?

A

Pros

  • No sampling bias in evaluating the system and results will be unique and repeatable
  • Generally gives high accuracy

Cons
- Very expensive if working with a large dataset

46
Q

Pros and Cons of M-Fold Cross Validation

A

Pros

  • Need to only train the model M times, rather than N times as in leave-one-out (M = number of partitions, N = number of instances)
  • Can measure stability of the system across different training/test combinations

Cons

  • How data is distributed among the M partitions due to sampling can lead to bias (training data differs from test)
  • The results will not be unique unless we always partition the data identically
  • Slightly lower accuracy because (M-1) / M of data is used for training
47
Q

What is M-Fold CV similar to?

A

Similar to leave-one-out, instead of doing over all N instances, you partition into M and only do over M

48
Q

What is a baseline and what is a benchmark?

A

Baseline
- Naïve method which we would expect any reasonably well-developed classifier to outperform (the bare minimum)

Benchmark
- Established rival technique which we are pitching our method against (goal)

49
Q

Discuss the importance of baselines, give examples

A
  • Establishing whether any proposed method is doing better than “dumb and simple”
  • Valuable in getting a sense for the intrinsic difficulty of a given task
    e.g. Random Baseline, Zero-R, One-R
50
Q

What is One-R? How does it work?

A

Baseline method

  • Creates one rule for each attribute in the training data
  • Then selects the rule with the smallest error rate as its “one rule”

How it works

  • Create a decision stump for each attribute with branches for each value (attribute)
  • Populate leaf with the majority class at that leaf (i.e. make all instances the majority class in this leaf - if majority is YES, make all instances YES)
  • Select decision stump which leads to lowest error rate over training

Weather (Outlook): 9 Yes / 5 No

  • Sunny: 2 Yes / 3 No → label all No (2 errors)
  • Overcast: 4 Yes / 0 No → label all Yes (0 errors)
  • Rainy: 3 Yes / 2 No → label all Yes (2 errors)

Total errors = 4/14
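
A sketch of the procedure, with instances as dicts of attribute values (names and data illustrative):

```python
from collections import Counter, defaultdict

def one_r(instances, labels):
    """Pick the single-attribute decision stump with the lowest training error."""
    best_attr, best_rule, best_err = None, None, float("inf")
    for attr in instances[0]:
        by_value = defaultdict(Counter)          # class counts per attribute value
        for inst, label in zip(instances, labels):
            by_value[inst[attr]][label] += 1
        # each leaf predicts its majority class; errors = non-majority instances
        rule = {v: counts.most_common(1)[0][0] for v, counts in by_value.items()}
        errors = sum(sum(c.values()) - max(c.values()) for c in by_value.values())
        if errors < best_err:
            best_attr, best_rule, best_err = attr, rule, errors
    return best_attr, best_rule, best_err

# tiny slice of the weather data above
X = [{"outlook": "sunny"}, {"outlook": "sunny"},
     {"outlook": "overcast"}, {"outlook": "rainy"}]
y = ["no", "no", "yes", "yes"]
print(one_r(X, y))  # ('outlook', {'sunny': 'no', 'overcast': 'yes', 'rainy': 'yes'}, 0)
```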

51
Q

What to be aware of / sensitive of when formulating a baseline?

A

Sensitive to the relative importance of positives and negatives in the classification task.

52
Q

Pros & cons of One-R?

A

Pros

  • Simple to understand and implement
  • Simple to comprehend results
  • Good results

Cons

  • Unable to capture attribute interactions
  • Bias towards high-arity attributes
53
Q

What is the “convergence” criterion in k-means?

A

No change over the past 2 iterations, or the difference between iterations falls below a specified threshold.
=> A stable state

54
Q

What is the name of the process of converting nominal -> continuous values? explain how this works with e.g.

A

Similarities / Hamming Distance

55
Q

What is “meta-classification”?

A

Training a classifier over the outputs of the base classifiers - see stacking (cards 110-111).

56
Q

What is the range in entropy values?

A

0 ≤ Entropy ≤ log(n)

n = number of outcomes

57
Q

Formula for Bayes Rule? How to apply? Give example

A

P(A | B) = P(B | A) P(A) / P(B)

Apply it to invert conditional probabilities: posterior ∝ likelihood × prior.

e.g. in classification, P(c | x) = P(x | c) P(c) / P(x); since P(x) is constant across classes, pick the class maximising P(x | c) P(c) - the basis of Naïve Bayes (card 59).

58
Q

What is F-Score? How to apply? Give example

A

The harmonic mean of precision and recall (card 24), giving a single figure that rewards a balance of the two:

F1 = 2PR / (P + R)

e.g. P = 0.5, R = 1.0 → F1 = 2(0.5)(1.0) / 1.5 ≈ 0.67

59
Q

Derive Naïve Bayes

A

Start from Bayes rule (card 57):

P(c | x1, ..., xn) = P(x1, ..., xn | c) P(c) / P(x1, ..., xn)

The denominator is constant across classes, so it can be dropped from the argmax. Applying the conditional independence assumption (card 17) to the likelihood:

c* = argmax_c P(c) ∏i P(xi | c)

60
Q

K-Means algorithm?

A

The k-means algorithm - see card 82 for the steps.

61
Q

How to calculate base probabilities in Naïve Bayes

A

From counts over the training data:

  • Priors: P(c) = count(c) / N
  • Conditionals: P(xi | c) = count(xi, c) / count(c)

(with smoothing applied to the counts - see card 20)

62
Q

Nearest Neighbour algorithm?

A

Compute the distance from the test instance to every training instance, take the k nearest, and predict the majority class among them (optionally weighting votes by inverse distance).

63
Q

Nearest Prototype Algorithm?

A

Compute a prototype (centroid) for each class as the mean of its training instances, then assign each test instance the class of its nearest prototype (see card 136).

64
Q

What is semi-supervised learning and when is it desirable?

A

TL;DR do both supervised and unsupervised

When we have a small number of labelled instances, large number of unlabelled instances.

labelled ≪ unlabelled (far fewer labelled than unlabelled)

This is not enough data to train a RELIABLE classifier (purely supervised), but we can potentially leverage the labelled instances to build a better classifier than a purely unsupervised method.

65
Q

What is self training?

A

Self Training

  • Train the learner on the currently labelled instances
  • Use the learner to predict the labels of unlabelled instances
  • Where the learner is very confident, add newly labelled instances to the training set
  • Repeat until all instances are labelled, or no new instances can be labelled confidently
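
A schematic sketch of this loop; it assumes a model exposing sklearn-style fit / predict_proba / classes_, which is an assumption rather than anything fixed by the card:

```python
def self_train(model, X_lab, y_lab, X_unlab, threshold=0.95):
    """Grow the labelled set with the model's own confident predictions."""
    X_lab, y_lab, X_unlab = list(X_lab), list(y_lab), list(X_unlab)
    while X_unlab:
        model.fit(X_lab, y_lab)                        # train on current labelled set
        probs = model.predict_proba(X_unlab)           # predict the unlabelled pool
        confident = [(i, p.argmax()) for i, p in enumerate(probs)
                     if p.max() >= threshold]          # keep only confident predictions
        if not confident:
            break                                      # nothing left we trust enough
        for i, ci in sorted(confident, reverse=True):  # pop from the end: indices stay valid
            X_lab.append(X_unlab.pop(i))
            y_lab.append(model.classes_[ci])
    return model
```
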
66
Q

What is co-training?

A

More or less the same as self-training, but with two learners operating (hopefully) independently.

67
Q

What is active learning? And what are some methods to choose instances?

A

In active learning, the learner is allowed to choose a small number of instances to be labelled by a human judge (an oracle).

The idea is that many instances are easy to classify, while a small number are difficult to classify BUT would be easy with more training data (a luxury we don’t have).

Some methods to choose instances include;

  • The learner generating its own difficult instances
  • Instances are selected as the ones which are most difficult to classify in a fixed, unlabelled set
68
Q

What is a structured classification?

A

Supervised Machine Learning technique that involves predicting structured objects, rather than discrete or real values

  • Sequential structure
  • Hierarchical structure
  • Graph structure
69
Q

Give an example of a structured classification

A

Translating a natural language sentence into a syntactic representation of a parse tree

70
Q

What is a Hidden Markov Model?

A

Variant of a finite state machine having a set of hidden states

Current state is not observable (hidden)

Each state produces an output with a certain probability

71
Q

What are the 3 canonical problems to solve with HMM?

A

1) Evaluation: Compute the PROBABILITY of a particular output sequence - forward & backward algo
2) Decode: Find the MOST LIKELY SEQUENCE OF STATES which could have generated a given output sequence - Viterbi & posterior decoding
3) Train: Given an output sequence, find the most likely set of state transition and output probabilities - Baum-Welch

72
Q

How to evaluate HMM?

A

1) Likelihood of test data
- Keep some test data and compute the likelihood of the test sequences by using the forward algo

2) Predicting parts of the data given other parts

73
Q

How to decode HMM?

A

1) Enumerate all the hidden state sequences, brute force & sort
2) Viterbi
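
A compact sketch of option 2, with HMM parameters as plain dicts (the numbers are illustrative only); real implementations work in log space to avoid the underflow mentioned in card 75:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for `obs` under the given HMM."""
    # V[t][s] = (prob of the best path ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({s: max((V[-1][prev][0] * trans_p[prev][s] * emit_p[s][o], prev)
                         for prev in states)
                  for s in states})
    # backtrace from the most probable final state
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ("rainy", "sunny")
print(viterbi(["walk", "shop"], states,
              {"rainy": 0.6, "sunny": 0.4},
              {"rainy": {"rainy": 0.7, "sunny": 0.3},
               "sunny": {"rainy": 0.4, "sunny": 0.6}},
              {"rainy": {"walk": 0.1, "shop": 0.4},
               "sunny": {"walk": 0.6, "shop": 0.3}}))  # -> ['sunny', 'sunny']
```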

74
Q

How to train HMM (unsupervised)?

A

Baum-Welch

75
Q

Limitations of HMM?

A

Similar to Naïve Bayes, suffers from floating point underflow.

As with generative models, hard to add ad-hoc features

76
Q

Difference between max entropy markov model and HMM?

A

Max Entropy Markov Model is able to add extra features indiscriminately, as well as capturing unidirectional tag interactions

77
Q

What are some alternatives to HMM for structured classification?

A

Maximum Entropy Markov Models - just logistic regression model where we condition on the tag for the preceding instance

Conditional Random Fields

78
Q

What is clustering?

A

Method for unsupervised learning.

Grouping a set of objects in such a way that the objects in the same group (called a cluster) are more similar to each other than to those in other groups (clusters)

79
Q

The 2 types of clustering?

A

Hard clustering
- Each data point either belongs to a cluster or not

Soft clustering
- Probability or likelihood of that data point belonging to that cluster is assigned

80
Q

Types of clustering models?

A
Connectivity models
- Hierarchical
Centroid models
- K-means & soft k-means
Distribution models
- Expectation Maximisation (EM)
Density models
- DBSCAN
81
Q

Incremental vs Batch

A

Incremental: the model is updated one instance (or small batch) at a time, as the data arrives.

Batch: the model is (re)trained over the entire dataset at once.

82
Q

What is k-means? How does it work?

A

Iterative clustering algorithm - each iteration improves the clustering, converging to a local optimum

1) Specify k (the number of centroids), if not given
2) Randomly assign each data point to a cluster
3) Compute the centroid point of each cluster
4) Reassign each data point to the closest cluster centroid
5) Recompute the cluster centroids
6) Repeat 4 and 5 until no more updates can be made -> convergence criterion reached (see the sketch below)
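
A toy sketch of the loop (it initialises centroids directly from sampled points rather than via random assignment - one common variant of steps 1-3):

```python
import random

def k_means(points, k, max_iters=100):
    """Hard k-means over a list of numeric tuples; illustrative only."""
    centroids = random.sample(points, k)                 # initialise k centroids
    for _ in range(max_iters):
        clusters = [[] for _ in range(k)]
        for p in points:                                 # step 4: nearest centroid
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[i].append(p)
        new = [tuple(sum(d) / len(c) for d in zip(*c)) if c else centroids[i]
               for i, c in enumerate(clusters)]          # step 5: recompute centroids
        if new == centroids:                             # step 6: no change -> stop
            break
        centroids = new
    return clusters, centroids

print(k_means([(1, 1), (1, 2), (8, 8), (9, 8)], k=2))
```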

83
Q

What is soft k-means? How does it work?

A

K-means with a softmax assignment function

1) Set t = 0, randomly initialise the centroids
2) Soft-assign each instance Xj to every cluster, i.e. a probability per cluster (see the sketch below)
3) Update each centroid as the responsibility-weighted mean of the instances
4) Set t = t + 1, go back to step 2, until the centroids stabilise
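
A minimal sketch of the soft assignment in step 2 (beta, the "stiffness" hyperparameter, and all names are illustrative):

```python
import math

def soft_assign(point, centroids, beta=1.0):
    """Soft k-means E-step: softmax over negative squared distances, giving the
    responsibility of each centroid for this point."""
    d2 = [sum((a - b) ** 2 for a, b in zip(point, c)) for c in centroids]
    exps = [math.exp(-beta * d) for d in d2]
    z = sum(exps)
    return [e / z for e in exps]

print(soft_assign((1.0, 1.0), [(0.0, 0.0), (3.0, 3.0)]))  # ≈ [0.998, 0.002]
```

Step 3 then recomputes each centroid as the mean of all points, weighted by these responsibilities.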

84
Q

Overlapping, probabilistic, partitioning, batch clustering. Which k-means?

A

Soft k-means

85
Q

Exclusive, deterministic, partitioning, batch clustering. Which k-means?

A

K-means

86
Q

What is EM clustering?

A

Expectation Maximisation.
- Quasi-Newton parameter estimation method with guaranteed positive hill-climbing behaviour on the gradient of the log likelihood

  • used to estimate hidden parameter values or cluster membership
87
Q

How does EM work?

A

Iteratively alternates between performing 2 steps

Step E (expectation)
- Calculate the expected log-likelihood based on the current estimates of the parameters
Step M (maximisation)
- Compute new parameter distribution, maximising the log-likelihood found in step E.
88
Q

How to measure convergence?

A

Relative difference in log likelihood from one iteration to the next, once this falls below a certain predefined level, can consider that the estimate to have converged.
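
As a small sketch of that check (tol is an illustrative tolerance):

```python
def converged(log_likelihoods, tol=1e-4):
    """True once the relative improvement in log likelihood falls below tol."""
    if len(log_likelihoods) < 2:
        return False
    prev, curr = log_likelihoods[-2], log_likelihoods[-1]
    return abs(curr - prev) / abs(prev) < tol

print(converged([-1050.0, -1000.2, -1000.1]))  # True: relative change ≈ 1e-4
```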

89
Q

WTF Is Baum Welch?

A

Application of EM to HMM (unsupervised)

90
Q

PROS and CONS of EM?

A

PROS

  • Guaranteed positive hill climbing
  • Fast to converge
  • Results in probabilistic cluster assignment
  • Relatively simple but powerful

CONS

  • Can get stuck in a local maximum
  • Relies on arbitrary k
  • Tends to overfit
91
Q

What is the output of hard/soft k-means and EM sensitive to?

A
  • The initial random seed centroids
  • The initial class assignments

92
Q

What are the two types of cluster evaluation measures?

A

Unsupervised:
- Cohesion and Separation

Supervised:
- Purity and entropy

93
Q

Discuss Unsupervised Cluster Evaluation

A

A good cluster analysis should have one or both:

  • High cluster cohesion
  • High cluster separation
94
Q

What will the implementation detail for unsupervised cluster evaluation depend on?

A

Whether clustering method is:

  • prototype or graph-based
  • deterministic or probabilistic
  • etc.
95
Q

WTF is cluster compactness (squared errors)?

A

Sum of Squared Errors (SSE) is a method to evaluate the quality of clusters (especially k-means)

96
Q

WTF is silhouette coefficient?

A

Combines cohesion and separation to compute a figure of merit for the overall clustering output, individual clusters and individual points.

97
Q

Discuss Supervised cluster evaluation

A
Measures the degree to which predicted class labels MATCH the actual class labels
  • purity and entropy
98
Q

What assumptions is co-training based on?

A

That there is a feature split which leads to independent classifiers.

If given this, can lead to significant improvement in classifier accuracy.

99
Q

What are the main sampling strategies in active learning?

A

1) membership query synthesis
- Synthesises queries for labelling
2) Stream-based selective sampling
- Determines, for each instance in a stream, whether to have it labelled or not
3) Pool based sampling
- Selects from a fixed set of instances what it wants to have labelled

100
Q

Outline a selection of query strategies in active learning

A

1) query those that the classifier is least confident on
2) perform margin sampling
3) query-by-committee

101
Q

What is classifier combination?

A

Constructs a set of base classifiers from a given set of training data and aggregates the outputs into a single meta-classifier

102
Q

What are the motivations behind classifier combination?

A

1) the sum of lots of weak classifiers can be at least as good as one strong classifier
2) the sum of a selection of strong classifiers is usually at least as good as the best of the base classifiers

103
Q

What is bagging?

A

Simple classifier combination method based on sampling and voting

  • Can parallelise computation of individual base classifiers
  • Highly effective over noisy data
  • Performance is generally significantly better and never substantially worse
  • Decreases variance
104
Q

Thinking behind Bagging? How does it work?

A
  • Construct a novel dataset through random sampling with replacement
  • Generate k permutations of the training set, build classifier for each
  • Combine classifiers via voting
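
A sketch of those three steps; `train` here is an assumed stand-in for any base learner, not a fixed API:

```python
import random
from collections import Counter

def bagging(train, X, y, k=10):
    """train(X, y) must return a callable classifier h with h(x) -> label
    (an assumed interface). Returns a voting ensemble of k classifiers."""
    n = len(X)
    models = []
    for _ in range(k):
        idx = [random.randrange(n) for _ in range(n)]  # bootstrap: sample WITH replacement
        models.append(train([X[i] for i in idx], [y[i] for i in idx]))
    # combine the base classifiers by unweighted voting
    return lambda x: Counter(m(x) for m in models).most_common(1)[0][0]
```
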
105
Q

What is boosting? How does it work?

A

Tunes classifiers to focus on the HARD TO CLASSIFY instances.

  • Mathematically complicated, but computationally cheap per iteration
  • Overall more computationally expensive than bagging (iterations are sequential)
  • Tendency to overfit

How it works?
Iteratively change the distribution and weights of training instances to reflect performance of classifier in previous iteration
1) Each instance has probability of 1/N of being included in sample
2) Over T iterations, train classifier and update the weight of each instance according to whether it is correctly classified
3) Combine base classifiers via WEIGHTED voting
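
An AdaBoost-style sketch of these steps, assuming labels in {-1, +1} and an assumed `train(X, y, w)` interface that fits a base classifier to weighted instances:

```python
import math

def adaboost(train, X, y, T=10):
    """Boosting sketch: train(X, y, w) -> h, where h(x) -> label in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n                               # 1) each instance starts at 1/N
    ensemble = []
    for _ in range(T):
        h = train(X, y, w)
        err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        if err >= 0.5:                              # no better than chance: stop
            break
        err = max(err, 1e-10)                       # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)     # this classifier's vote weight
        # 2) up-weight misclassified (hard) instances, down-weight the rest
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
        ensemble.append((alpha, h))
    # 3) combine base classifiers via WEIGHTED voting
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```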

106
Q

WTF is random forest?

A

Bagging of Decision Trees

- An extension of DTs: each tree is trained on a bootstrap sample, typically with only a random subset of features considered at each split

107
Q

Is bagging prone to overfitting?

A

No

108
Q

What does Bagging/RF minimise?

A

Variance

109
Q

What does Boosting minimise?

A

Bias

110
Q

What is stacking?

A

Smooths errors over a range of algorithms with different biases

Method 1) Simple voting - assumes the classifiers have equal performance
Method 2) Train a classifier over the outputs of the base classifiers (meta classification)

111
Q

What is a meta classifier?

A

A classifier trained over the outputs of the base classifiers, aggregating them into one single classifier - A META CLASSIFIER

112
Q

What is error analysis?

A

Legit just analysing the errors - figuring out whether the issue is the quantity of data or something else

  • Identifying different classes of errors that the system makes
  • Hypothesising what caused the errors
  • Feed hypothesis back into feature/model engineering
113
Q

How to carry out error analysis?

A

1) Confusion matrix
2) Random subsample of misclassified instances
3) Come up with hypothesis for error
4) Test hypothesis against data
5) Where possible, use the model to guide the error analysis

114
Q

What are a model’s HYPERPARAMETERS and PARAMETERS?

A

Hyperparameter:
- Parameters which define/bias/constrain the learning process

Parameters:
- What is learnt when a given learner with a given set of hyperparameters is applied to a particular training set and is then used to classify test instances

115
Q

What can a model trained with a given set of hyperparameters be interpreted relative to?

A

Interpreted relative to the parameters associated with a given test instance

116
Q

Hyperparams for Nearest Neighbour?

A
  • K (neighbourhood size)
  • Distance / Similarity metric
  • Feature weighting / selection
117
Q

Parameters for Nearest Neighbour?

A

NONE

  • because it’s a lazy learner
  • doesn’t abstract away from the training instances in any way
118
Q

How can Nearest Neighbour model be interpreted?

A

Relative to the training instances that give rise to a given classification and their geometric distribution

119
Q

Hyperparameters for Nearest Prototype?

A
  • Distance / similarity metric
  • Feature weighting / selection

120
Q

Parameters for Nearest Prototype?

A
  • Prototype for each class
  • Size = O(|C| |F|)
    c = set of classes
    f = set of features
121
Q

How can Nearest Prototype be interpreted?

A

relative to the geometric distribution of the prototypes and distance to each for a given test instance

122
Q

Hyperparameters for Naïve Bayes?

A
  • Choice of smoothing method
  • Optionally, the choice of distribution used to model the features - binomial or multinomial

123
Q

Parameters for Naïve Bayes

A
  • Class priors and conditional probability for each feature-value-class combination
  • Size = O(|C| + |C||FV|)
    c = set of classes
    fv = set of feature-value pairs
124
Q

How can Naïve Bayes be interpreted?

A

Usually based on the most positively weighted features associated with a given instance

125
Q

Hyperparameters for Decision Trees?

A
  • Choice of function used for attribute selection - Information Gain or Gain Ratio
  • Convergence Criterion
126
Q

Parameters for Decision Trees?

A
  • The decision tree itself
  • Worst case size O(V^|Tr|)
  • Average case size O(|FV|^2|Tr|)
V = average branching factor
tr = set of training instances
fv = set of feature-value pairs
127
Q

How can decision trees be interpreted?

A

Based directly on the path through the decision tree

128
Q

Hyperparameters for SVMs?

A
  • Penalty term for soft-margin SVM
  • Feature value scaling
  • Choice of kernel (and any hyperparameters associated with it)
129
Q

Parameters for SVM?

A
  • Vector of feature weights + bias term
  • Size = O(|C|^2 |F|) - assuming one-vs-one (pairwise) classification

130
Q

Hyperparameter for Logistic Regression?

A

Choice of weight of regulariser

131
Q

Parameters of logistic regression?

A
  • Weight associated with each feature function and the bias term
  • Size = O(|C||FV|)
    c = set of classes
    fv = feature-value pairs
132
Q

How can logistic regression be interpreted?

A

based on the absolute-weighted features associated with a given instance

  • high positive = correlation
  • high negative = anti-correlation
133
Q

What is dimensionality reduction?

A

Reducing the number of features or attributes, e.g. so the data can be graphed or visualised more easily

134
Q

What is the caveat to dimensionality reduction?

A

Going to be lossy

- Not possible to reproduce the original data FROM the reduced version

135
Q

WTF is PCA? How does PCA work?

A

Principal Component Analysis - a method of dimensionality reduction

Transforming to a new set of variables, the principal components, which are uncorrelated, and which are ordered so that the first few retain most of the variation present in all of the original variables
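
A minimal sketch via the SVD of the centred data matrix (numpy only; the toy matrix is illustrative):

```python
import numpy as np

def pca(X, n_components):
    """Project the rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)                    # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]             # directions of maximal variance
    return Xc @ components.T                   # the (lossy) reduced representation

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2]])
print(pca(X, 1))                               # one value per instance
```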

136
Q

Wtf is nearest prototype?

A

Also known as nearest centroid classifier, assigns to instances the label of the class of training samples whose mean (centroid) is closest to the observation.