Concepts chapter 2 Flashcards

1
Q

α level

A

The probability of making a Type I error (this value is usually 0.05).

2
Q

Alternative hypothesis

A

The prediction that there will be an effect (i.e., that the experimental manipulation will have some effect, or that certain variables will relate to one another).

3
Q

β level

A

The probability of making a Type II error (Cohen, 1992, suggests a maximum value of 0.2).

4
Q

Bonferroni correction

A

A correction applied to the α level to control the overall Type I error rate when multiple significance tests are carried out. Each test conducted should use a significance criterion of the α level (normally 0.05) divided by the number of tests conducted. This is a simple but effective correction, although it tends to be too strict when many tests are performed.
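A minimal sketch of the calculation, with the α level and the number of tests chosen purely for illustration:

```python
# Bonferroni correction: divide the overall alpha level by the number of tests.
# The values below (alpha = 0.05, four tests) are illustrative assumptions.
alpha = 0.05        # overall Type I error rate to be maintained
num_tests = 4       # number of significance tests being conducted

corrected_alpha = alpha / num_tests
print(corrected_alpha)   # 0.0125: each individual p-value must fall below this
```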

5
Q

Central limit theorem

A

States that when samples are large (above about 30), the sampling distribution will take the shape of a normal distribution regardless of the shape of the population from which the sample was drawn. For small samples the t-distribution better approximates the shape of the sampling distribution. We also know from this theorem that the standard deviation of the sampling distribution (i.e., the standard error of the sample mean) will be equal to the standard deviation of the sample(s) divided by the square root of the sample size (N).
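A small simulation sketch of this idea, assuming a skewed (exponential) population just for illustration: with samples of N = 50, the standard deviation of the sample means comes out close to the population standard deviation divided by √N.

```python
# Central limit theorem sketch: sample means from a skewed population.
import numpy as np

rng = np.random.default_rng(42)
population_sd = 1.0      # an exponential(1) population has standard deviation 1
n = 50                   # sample size (> 30, so the theorem should apply)

# Draw many samples and record each sample's mean.
sample_means = [rng.exponential(scale=1.0, size=n).mean() for _ in range(10_000)]

print(np.std(sample_means))        # empirical SD of the sampling distribution
print(population_sd / np.sqrt(n))  # standard error predicted by the theorem (about 0.141)
```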

6
Q

Confidence interval

A

A range of values around a statistic computed from a sample of observations (e.g., the mean) that is believed to contain, in a certain proportion of samples (e.g., 95%), the true value of that statistic (i.e., the population parameter). This also implies that for the remaining proportion of samples (e.g., 5%) the confidence interval will not contain the true value. The problem is that you won't know into which category your particular sample falls.
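A rough sketch of how a 95% interval around a sample mean could be computed; the data are made up, and z = 1.96 is used only to keep the sketch simple (a t-based critical value would be more appropriate for a sample this small):

```python
# 95% confidence interval for a mean (illustrative data; normal approximation).
import numpy as np

scores = np.array([4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 5.0, 4.7])   # hypothetical sample
mean = scores.mean()
se = scores.std(ddof=1) / np.sqrt(len(scores))   # standard error of the mean

lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI: [{lower:.2f}, {upper:.2f}]")
```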

7
Q

Degrees of freedom

A

The number of ‘entities’ that are free to vary when some kind of statistical parameter is estimated. It relates to significance testing for many commonly used test statistics (such as the F-statistic, the t-statistic and the chi-square test) and determines the exact form of the probability distribution for these test statistics.

8
Q

Deviance

A

the difference between the observed value of a variable and the value of that variable predicted by a statistical model.

9
Q

Experimental hypothesis

A

Synonym for the alternative hypothesis.

10
Q

Experimentwise error rate

A

The probability of making a Type I error in an experiment involving one or more statistical comparisons when the null hypothesis is true in each case.

11
Q

Familywise error rate

A

The probability of making a Type I error in any family of tests when the null hypothesis is true in each case. The ‘family of tests’ can be loosely defined as a set of tests conducted on the same data set that address the same empirical question.
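A quick sketch of why this rate matters: assuming independent tests each run at α = 0.05, the probability of at least one Type I error across the family is 1 − (1 − α)^k, which grows quickly with the number of tests k.

```python
# Familywise Type I error rate for k independent tests at alpha = 0.05 (assumed values).
alpha = 0.05
for k in (1, 3, 5, 10):
    familywise = 1 - (1 - alpha) ** k
    print(k, round(familywise, 3))   # 10 tests -> about 0.401
```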

12
Q

Fit

A

The degree to which a statistical model is an accurate representation of some set of observed data.

13
Q

Interval estimate

A

see confidence interval

14
Q

Linear model

A

a statistical model that is based upon an equation of the form Y = BX + E, in which Y is a vector containing scores from an outcome variable, B represents the b-values, X the predictor variables and E the error terms associated with each predictor. The equation can represent a solitary predictor variable (B, X and E are vectors) as in simple regression or multiple predictors (B, X and E are matrices) as in multiple regression. The key is the form of the model, which is linear (e.g., with a single predictor the equation is that of a straight line).
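A short sketch of this form with a single (hypothetical) predictor, generating an outcome as a straight line plus error:

```python
# Linear model with one predictor: Y = b0 + b1*X + E (all values assumed for illustration).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)    # predictor variable (X)
b0, b1 = 2.0, 0.5                   # parameters (the b-values, B)
e = rng.normal(0, 1, size=100)      # error terms (E)

y = b0 + b1 * x + e                 # outcome: a straight line plus error
```

With several predictors the same form holds, but X gains a column per predictor and B a b-value for each.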

15
Q

Method of least squares

A

a method of estimating parameters (such as the mean, or a regression coefficient) that is based on minimizing the sum of squared errors. The parameter estimate will be the value, out of all of those possible, which has the smallest sum of squared errors.
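A tiny sketch of the idea using made-up scores: sweeping over candidate estimates of the ‘centre’ of the data, the value with the smallest sum of squared errors turns out to be the sample mean.

```python
# Method of least squares: the mean minimizes the sum of squared errors (illustrative data).
import numpy as np

scores = np.array([1.0, 3.0, 4.0, 6.0, 7.0])   # hypothetical observations
candidates = np.linspace(0, 10, 1001)          # candidate parameter values

sse = [((scores - c) ** 2).sum() for c in candidates]
best = candidates[int(np.argmin(sse))]

print(best)            # 4.2, the candidate with the smallest sum of squared errors
print(scores.mean())   # 4.2, i.e., the sample mean
```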

16
Q

Null hypothesis

A

the reverse of the experimental hypothesis, it states that your prediction is wrong and the predicted effect doesn’t exist.

17
Q

One-tailed test

A

a test of a directional hypothesis. For example, the hypothesis ‘the longer I write this glossary, the more I want to place my editor’s genitals in a starved crocodile’s mouth’ requires a one-tailed test because I’ve stated the direction of the relationship. I would generally advise against using them because of the temptation to interpret interesting effects in the opposite direction to that predicted. See also two-tailed test.

18
Q

Ordinary least squares

A

a method of regression in which the parameters of the model are estimated using the method of least squares.

19
Q

Parameter

A

a very difficult thing to describe. When you fit a statistical model to your data, that model will consist of variables and parameters: variables are measured constructs that vary across entities in the sample, whereas parameters describe the relations between those variables in the population. In other words, they are constants believed to represent some fundamental truth about the measured variables. We use sample data to estimate the likely value of parameters because we don’t have direct access to the population. Of course, it’s not quite as simple as that.

20
Q

Point estimate

A

a single value computed from sample data (e.g., the sample mean) that is used as an estimate of the true value of the corresponding population parameter (cf. interval estimate).

21
Q

Population

A

in statistical terms this usually refers to the collection of units (be they people, plankton, plants, cities, suicidal authors, etc.) to which we want to generalize a set of findings or a statistical model.

22
Q

Power

A

the ability of a test to detect an effect of a particular size (a value of 0.8 is a good level to aim for).

23
Q

Sample

A

a smaller (but hopefully representative) collection of units from a population used to determine truths about that population (e.g., how a given population behaves in certain conditions).

24
Q

Sampling distribution

A

the probability distribution of a statistic. We can think of this as follows: if we take a sample from a population and calculate some statistic (e.g., the mean), the value of this statistic will depend somewhat on the sample we took. As such the statistic will vary slightly from sample to sample. If, hypothetically, we took lots and lots of samples from the population and calculated the statistic of interest we could create a frequency distribution of the values we get. The resulting distribution is what the sampling distribution represents: the distribution of possible values of a given statistic that we could expect to get from a given population.
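A small simulation sketch of this ‘hypothetically take lots of samples’ idea, using a made-up normal population:

```python
# Building an (approximate) sampling distribution of the mean by repeated sampling.
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=100, scale=15, size=100_000)   # hypothetical population

# Take many samples of size 25 and record each sample mean.
sample_means = [rng.choice(population, size=25).mean() for _ in range(5_000)]

# The spread of these means is the standard error of the mean (about 15 / sqrt(25) = 3).
print(np.std(sample_means))
```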

25
Q

Sampling variation

A

the extent to which a statistic (the mean, median, t, F, etc.) varies in samples taken from the same population.

26
Q

Standard error

A

the standard deviation of the sampling distribution of a statistic. For a given statistic (e.g., the mean) it tells us how much variability there is in this statistic across samples from the same population. Large values, therefore, indicate that a statistic from a given sample may not be an accurate reflection of the population from which the sample came.

27
Q

Standard error of the mean (SE)

A

the standard error associated with the mean.

28
Q

Test statistic

A

a statistic for which we know how frequently different values occur. The observed value of such a statistic is typically used to test hypotheses.

29
Q

Two-tailed test

A

a test of a non-directional hypothesis. For example, the hypothesis ‘writing this glossary has some effect on what I want to do with my editor’s genitals’ requires a two-tailed test because it doesn’t suggest the direction of the relationship. See also one-tailed test.

30
Q

Type I error

A

occurs when we believe that there is a genuine effect in our population, when in fact there isn’t.

31
Q

Type II error

A

occurs when we believe that there is no effect in the population, when in fact there is.