Algorithms and Theory Flashcards
What’s the trade-off between bias and variance?
Bias is error due to erroneous (overly simplistic) assumptions in the learning algorithm. High bias leads to the model underfitting the data, making it hard to achieve high predictive accuracy and to generalise knowledge from the training set to the test set.
Variance is error due to too much complexity in the learning algorithm. High variance makes the algorithm highly sensitive to small variations in the training data, which can lead the model to overfit: it carries too much noise from the training data to be useful on the test data.
The learning error of any algorithm decomposes into the bias, the variance, and irreducible error due to noise in the underlying dataset. If you make the model more complex and add more variables, you'll reduce bias but gain some variance; to get the optimally reduced amount of error, you have to trade off bias and variance. You don't want either high bias or high variance in your model.
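A small sketch of the trade-off, assuming numpy and scikit-learn (neither is mentioned in the card): polynomials of increasing degree fit to noisy data move from high bias (underfit) to high variance (overfit).

```python
# Sketch: bias-variance trade-off via polynomial degree (assumes numpy + scikit-learn).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, 60)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.3, 60)   # true signal + irreducible noise
X_train, X_test = X[::2], X[1::2]
y_train, y_test = y[::2], y[1::2]

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(degree,
          mean_squared_error(y_train, model.predict(X_train)),  # falls as degree grows
          mean_squared_error(y_test, model.predict(X_test)))    # often U-shaped: bias, then variance
```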
What are the different types of machine learning?
Supervised learning requires labeled training data and predicts an outcome variable.
Unsupervised learning requires unlabelled training data and uncovers hidden structure, e.g. finding groups of photos containing similar cars.
Reinforcement learning involves the model learning from the rewards it receives for its previous actions.
How is KNN different from k-means clustering?
K-Nearest Neighbors is a supervised classification algorithm, while k-means clustering is an unsupervised clustering algorithm. For K-Nearest Neighbors to work, you need labeled data into which you want to classify an unlabeled point. K-means clustering requires only a set of unlabeled points and a chosen number of clusters k: the algorithm takes the unlabeled points and gradually learns to cluster them into groups by repeatedly assigning each point to the nearest cluster mean and recomputing those means.
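A short sketch of the contrast, assuming scikit-learn and its iris dataset (neither is mentioned in the card):

```python
# Sketch: KNN (supervised, needs labels) vs. k-means (unsupervised, needs only k).
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)          # uses the labels y
print(knn.predict(X[:3]))                                    # predicts one of the known classes

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)  # never sees y
print(km.labels_[:3])                                        # cluster ids, not class labels
```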
Explain how a ROC curve works
The ROC curve is a graphical representation of the contrast between the true positive rate and the false positive rate at various classification thresholds. It's often used as a proxy for the trade-off between the sensitivity of the model (true positives) and the fall-out, or the probability that it will trigger a false alarm (false positives).
A curve that bows towards the top-left corner is good; a curve along the diagonal y = x is no better than random guessing.
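A hedged sketch of how the curve is built in practice, assuming scikit-learn (the data and model here are arbitrary):

```python
# Sketch: ROC curve from predicted scores (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, scores)   # one (FPR, TPR) point per threshold
print("AUC:", roc_auc_score(y_te, scores))       # 1.0 = top-left corner, 0.5 = the x=y line
```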
Define precision and recall.
Recall is the true positive rate: the number of positives your model claims relative to the actual number of positives in the data, TP / (TP + FN).
Precision is the positive predictive value: the number of accurate positives your model claims relative to the total number of positives it claims, TP / (TP + FP).
Think of a case where you've predicted that there were 10 apples and 5 oranges in a basket of only 10 apples. You'd have perfect recall (there are actually 10 apples, and you predicted there would be 10) but 66.7% precision, because out of the 15 events you predicted, only 10 (the apples) are correct.
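The same example, checked directly in a few lines of Python:

```python
# Sketch: the apples/oranges example above, computed directly.
tp = 10   # apples predicted as apples
fp = 5    # oranges (wrongly) predicted as apples
fn = 0    # apples we missed

precision = tp / (tp + fp)   # 10 / 15 = 0.667
recall    = tp / (tp + fn)   # 10 / 10 = 1.0
print(precision, recall)
```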
What is Bayes’ Theorem? How is it useful in a machine learning context?
Bayes’ Theorem gives you the posterior probability of an event given prior knowledge.
Mathematically, it's expressed as the true positive rate of a condition sample divided by the sum of the false positive rate of the population and the true positive rate of the condition sample: P(A|B) = P(B|A) P(A) / [P(B|A) P(A) + P(B|not A) P(not A)].
Bayes’ Theorem is the basis behind a branch of machine learning that most notably includes the Naive Bayes classifier.
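As a quick illustration of why this matters in practice (the numbers below are assumed, not from the card): a 1%-prevalence condition, a 90% true positive rate, and a 5% false positive rate give a surprisingly low posterior.

```python
# Sketch: Bayes' Theorem with assumed numbers for a diagnostic test.
p_disease = 0.01              # prior P(A)
p_pos_given_disease = 0.90    # true positive rate P(B|A)
p_pos_given_healthy = 0.05    # false positive rate P(B|not A)

p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(p_disease_given_pos)    # ~0.15: a positive test is far from certain
```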
Why is “Naive” Bayes naive?
Naive Bayes is naive because it assumes absolute independence of features when calculating the conditional probability as the pure product of the individual probabilities of components, a condition probably never met in real life.
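A small sketch of that assumption in use, assuming scikit-learn is available (GaussianNB models each feature's class-conditional distribution independently):

```python
# Sketch: the "naive" independence assumption in practice (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
clf = GaussianNB().fit(X, y)
# Internally the class-conditional likelihood is treated as a product of
# per-feature likelihoods: P(x | class) = prod_i P(x_i | class),
# even though the iris features are clearly correlated.
print(clf.predict(X[:3]), clf.predict_proba(X[:1]))
```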
Explain the difference between L1 and L2 regularization.
L2 regularization tends to spread the error among all the terms (weights shrink towards zero but rarely reach exactly zero), while L1 is more binary/sparse, driving many weights to exactly zero so that each variable is effectively kept or dropped. L1 corresponds to placing a Laplace prior on the terms, while L2 corresponds to a Gaussian prior.
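A quick sketch of the sparsity difference, assuming scikit-learn (Lasso uses an L1 penalty, Ridge an L2 penalty); the dataset and alpha value are arbitrary:

```python
# Sketch: sparsity of L1 vs. L2 regularization (assumes scikit-learn).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)
print("L1 zero weights:", np.sum(lasso.coef_ == 0))   # many coefficients driven exactly to 0
print("L2 zero weights:", np.sum(ridge.coef_ == 0))   # typically none; weights just shrink
```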
What is regularisation?
A technique that penalises model complexity, typically by shrinking the coefficient estimates towards zero. It discourages learning an overly complex model and so reduces the risk of overfitting.
What is regression?
Estimating the relationship between known x variables and an observed y variable: a single continuous output value is produced for each input by fitting a model to the training data (it FITS THE DATA).
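A minimal sketch with made-up numbers, assuming scikit-learn's LinearRegression:

```python
# Sketch: fitting a simple regression (assumes numpy + scikit-learn).
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # known x values
y = np.array([2.1, 3.9, 6.2, 7.8])           # observed y values

model = LinearRegression().fit(X, y)          # fits the training data
print(model.predict([[5.0]]))                 # single continuous output for a new x
```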
What’s your favorite algorithm, and can you explain it to me in less than a minute?
Neural networks
Neural networks are designed to replicate the way human brains learn. They consist of layers of interconnected nodes: first an input layer, followed by any number of hidden layers, and finally an output layer. The input layer takes in the values of the features in the training set, and the output layer produces the final predicted output.
Each node computes a single output from a weighted combination of all of its inputs: the weighted sum is passed through the node's activation function (and possibly thresholded against some value). The inputs are values from the features of the data or from previous layers, and the output is a single value that is passed to the next layer, or is the final output if the node is in the output layer.
Neural networks learn by continually updating the weights to minimize error. The idea of backpropagation is that the errors at the output "flow back" from the output layer to update the weights throughout the network (a minimal sketch follows at the end of this card). If the output of the network matches the label on the data, the weights are not updated. Two different methods of updating the weights are the perceptron rule and the delta rule (gradient descent).
Different activation functions can be used at each layer. Common activation functions are:
Rectified Linear Unit (ReLU) – outputs max(0, x), i.e. thresholded at 0
Perceptron (step function) – discrete -1 or 1 output; the perceptron rule will find a separating boundary for anything that is linearly separable
Sigmoid – smooth, continuous output from 0 to 1; differentiable, so gradient descent can be used
Advantages
Hidden layers can invent new features and therefore create a better representation of the problem
Good at handling large data sets
Disadvantages
Hard to interpret output
The more complex/bigger the network, the more likely it is to overfit
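A minimal backpropagation sketch of the ideas above (the card itself contains no code; this assumes only numpy, uses sigmoid activations, and trains a tiny 2-layer network on XOR):

```python
# Sketch: a tiny 2-layer network trained on XOR with plain backpropagation (numpy only).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights
lr = 1.0

for _ in range(5000):
    # Forward pass: each node applies its activation to a weighted sum of its inputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: output errors "flow back" to update every weight (delta rule).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]
```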
What’s the difference between Type I and Type II error?
Type I error is a false positive, while Type II error is a false negative. Briefly stated, Type I error means claiming something has happened when it hasn’t, while Type II error means that you claim nothing is happening when in fact something is.
What’s a Fourier transform?
A Fourier transform is a method to decompose generic functions into a superposition of symmetric functions. The Fourier transform finds the set of cycle speeds, amplitudes, and phases to match any time signal. A Fourier transform converts a signal from time to frequency domain.
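A minimal sketch using numpy's FFT (an assumption; the card names no library): a signal built from two sine waves is converted from the time domain to the frequency domain, where its component frequencies show up as peaks.

```python
# Sketch: recovering the frequencies in a signal with numpy's FFT.
import numpy as np

fs = 100                          # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)       # 1 second of samples
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
print(freqs[spectrum.argsort()[-2:]])   # the two dominant frequencies: ~5 Hz and ~20 Hz
```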
What’s the difference between probability and likelihood?
Probability is the chance of an outcome given a fixed distribution (fixed parameters), while likelihood measures how well different parameter values (distributions) explain the data you have actually observed.
Given fixed parameters, what is the probability of different outcomes?
vs
Given fixed outcomes, what is the likelihood of different parameter values? (Likelihoods are proportional to probabilities, but are not probabilities themselves because they don't sum to 1 over the parameter values.)
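A small sketch of the contrast, assuming scipy's binomial distribution:

```python
# Sketch: probability vs. likelihood for a coin (assumes scipy for the binomial pmf).
from scipy.stats import binom

# Probability: parameter fixed (fair coin, p = 0.5), outcomes vary.
print(sum(binom.pmf(k, 10, 0.5) for k in range(11)))      # sums to 1 over outcomes

# Likelihood: outcome fixed (7 heads in 10 flips), parameter p varies.
likelihood = [binom.pmf(7, 10, p) for p in (0.3, 0.5, 0.7)]
print(likelihood)   # does not sum to 1 over p; p = 0.7 explains the data best
```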
What is deep learning, and how does it contrast with other machine learning algorithms?
Deep learning is a subset of machine learning that is concerned with neural networks: how to use backpropagation and certain principles from neuroscience to more accurately model large sets of unlabelled or semi-structured data.
Unlike classical algorithms that rely on hand-crafted features, deep learning learns its own representations of the data through multi-layer neural nets, and it is applied across supervised, unsupervised, and reinforcement learning.