GLM Flashcards

1
Q

This is the most common link function for count data

A

Log (the square root is sometimes used as an alternative)

2
Q

These are the most common link functions for binomial data

A

Logit, probit, complementary log-log (cloglog)

3
Q

Types of distributions

A

Normal (Gaussian), Binomial, Poisson, Gamma, Weibull, Exponential, Beta, Inverse Gaussian, Quasi-Poisson, Quasi-binomial

4
Q

The three elements of a GLM

A

A Y distribution from the exponential family: Gaussian, Binomial, Exponential, Poisson, Gamma.

Linear predictor model

A (possibly non-linear) link function that can act on either side of the equation: the inverse link applied to the linear predictor, or the link function applied to the predicted mean

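The three elements above can be sketched in a few lines of Python. This is a minimal illustration for a Poisson model of count data; the coefficients and the x value are made-up numbers, not estimates from real data.

```python
import math

# 1) Linear predictor: eta = b0 + b1 * x (always linear in the parameters)
b0, b1 = 0.5, 0.3           # hypothetical coefficients
x = 2.0
eta = b0 + b1 * x           # eta = 1.1

# 2) Link function g: maps the mean of Y onto the linear predictor's scale.
#    For a Poisson distribution the canonical link is the log: log(mu) = eta.
# 3) Inverse link: maps the linear predictor back onto the scale of Y.
mu = math.exp(eta)          # inverse of the log link

# Round trip: applying the link to the predicted mean recovers eta.
assert abs(math.log(mu) - eta) < 1e-9
print(mu)                   # predicted count, about 3.004
```

The same pattern holds for every GLM: only the distribution and the link/inverse-link pair change.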
5
Q

MLE

A

Maximum Likelihood Estimation. Chooses parameters by minimizing a loss function (the negative log likelihood) for the assumed distribution

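MLE can be made concrete with a small sketch: for made-up count data, search over candidate Poisson means and keep the one with the smallest negative log likelihood. For a Poisson this is known to be the sample mean, which the grid search recovers.

```python
import math

# Toy count data (made up for illustration).
counts = [0, 1, 2, 1, 3, 0, 2]

def poisson_nll(mu, data):
    """Negative log likelihood of a Poisson mean mu for the data."""
    # log P(k; mu) = k*log(mu) - mu - log(k!)
    return -sum(k * math.log(mu) - mu - math.lgamma(k + 1) for k in data)

# Grid search over candidate values of mu: MLE picks the one
# with the smallest negative log likelihood.
candidates = [i / 100 for i in range(50, 300)]
mle = min(candidates, key=lambda mu: poisson_nll(mu, counts))

# For a Poisson, the MLE of the mean is the sample mean.
print(mle, sum(counts) / len(counts))  # both close to 1.29
```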
6
Q

Under this circumstance SSE and MLE are the same

A

When the distribution is normal/Gaussian

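A quick sketch of why this holds: for fixed sigma, the Gaussian negative log likelihood is just SSE rescaled plus a constant, so minimizing either one picks the same parameter. The data here are invented for illustration.

```python
import math

# Made-up data for a single-mean model.
y = [2.1, 1.9, 2.4, 2.0, 2.6]

def sse(mu):
    return sum((yi - mu) ** 2 for yi in y)

def gaussian_nll(mu, sigma=1.0):
    # NLL = n*log(sigma*sqrt(2*pi)) + SSE/(2*sigma^2): for fixed sigma it
    # differs from SSE only by a constant and a scale, so the same mu
    # minimizes both.
    n = len(y)
    return n * math.log(sigma * math.sqrt(2 * math.pi)) + sse(mu) / (2 * sigma ** 2)

candidates = [i / 100 for i in range(100, 300)]
best_sse = min(candidates, key=sse)
best_mle = min(candidates, key=gaussian_nll)
assert best_sse == best_mle  # both land on the sample mean, 2.2
```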
7
Q

Difference between Generalized Linear Model and General Linear Model

A

The Generalized Linear Model chooses parameters so that the observed data are the most likely to have been generated by the model (maximum likelihood).

The General Linear Model chooses the parameters that minimize squared error

8
Q

Loss function

A

Quantifies the mismatch between a model’s predicted values and the actual values; MLE works by minimizing it

9
Q

-LL

A

Negative log likelihood. Represents the lack of fit in a GLM; you want this as small as possible (equivalently, the log likelihood as large as possible)

10
Q

χ² (chi-square)

A

Indicates whether the model is significant when compared to the null model (a model with no predictors)

11
Q

-2LL

A

The deviance. It follows a chi-square distribution

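The chi-square comparison can be sketched as a likelihood ratio test. The log likelihoods below are hypothetical numbers invented for illustration; the critical value 5.99 is the standard chi-square cutoff for df = 2 at alpha = .05.

```python
# Hypothetical log likelihoods from a fitted model and a null model
# (a model with no predictors).
ll_null = -240.0
ll_model = -231.5

# Difference in -2LL (the deviance reduction / likelihood ratio statistic).
lr_stat = -2 * (ll_null - ll_model)   # 17.0

# Under the null hypothesis this statistic follows a chi-square
# distribution with df = number of extra parameters in the model.
df = 2
critical_95 = 5.99   # chi-square critical value for df=2, alpha=.05
print(lr_stat > critical_95)  # True: the model beats the null model
```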
12
Q

Name the link function:

Gaussian

A

Identity

13
Q

Name the link function:

Binomial

A

Logit

14
Q

Name the link function:

Poisson

A

logarithmic / log

15
Q

Name the link function:

Gamma

A

Inverse

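The family-to-link pairings in the last four cards can be collected into one small lookup table:

```python
# Canonical link functions for common GLM families
canonical_links = {
    "gaussian": "identity",
    "binomial": "logit",
    "poisson": "log",
    "gamma": "inverse",
}
print(canonical_links["poisson"])  # log
```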
16
Q

Name the link function:

Quasi-binomial

A

Logit (the log of the odds: the probability of something occurring divided by the probability of it not occurring)

17
Q

Name the link function:

Quasi-Poisson

A

Log (the same as Poisson)

18
Q

Inverse link function:

logit

A

Logistic (converts the log odds back to a probability)
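
The logit/logistic pair can be sketched directly, with a round-trip check that the inverse link really undoes the link:

```python
import math

def logit(p):
    """Link: probability -> log odds."""
    return math.log(p / (1 - p))

def logistic(x):
    """Inverse link: log odds -> probability (unlogging the log odds)."""
    return 1 / (1 + math.exp(-x))

p = 0.75
assert abs(logistic(logit(p)) - p) < 1e-9  # round trip recovers p
print(logit(p))  # log odds, about 1.0986 (= log 3)
```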

19
Q

Inverse link function:

log

A

Exponential (e raised to the power of the linear predictor)

20
Q

Inverse link function:

Inverse

A

Inverse (1/x is its own inverse)

21
Q

Inverse link function:

Square root

A

Squaring (raising to the power of 2)

22
Q

A binomial distribution is common for what kind of data?

A

Binomial outcome variable. Data that has a floor and a ceiling (proportions/percentages)

23
Q

A poisson distribution is common for what kind of data?

A

Count data: discrete non-negative values, often with lots of 0s

24
Q

A gamma distribution is common for what kind of data?

A

Positive, right-skewed data such as salaries, latencies, or reaction times

25
Q

How does link and inverse work?

A

The link function is applied to the Y side (the predicted mean); the inverse link is applied to the X side (the linear predictor)

26
Q

MLE

A

Uses a type of loss function

27
Q

Loss function

A

Technically SSE is a loss function for normally distributed errors

28
Q

Why MLE?

A

So that the parameters are chosen to make the observed data as likely as possible under your model

29
Q

Why do you get Zs and not Ts anymore?

A

Because of the relationship between the two: the t distribution converges to the standard normal (Z) as sample size grows, and GLM inference relies on large-sample theory

30
Q

You have multiple continuous predictors and a single outcome variable… This is

A

A multiple regression

31
Q

You have continuous predictors and categorical predictors… This is….