SAFMEDS Flashcards

1
Q

A generic term describing the centre of a frequency distribution of observations, as measured by the mean, mode, and median.

A

Central tendency

2
Q

A variable (that may or may not have been measured), other than the independent variable/s, which influences the outcome of the dependent variable

A

Confounding variable

3
Q

Evidence that the content of a test corresponds to the content of the construct it was designed to cover.

A

Content validity

4
Q

A measure of the strength and direction of the association between two variables. There are two common variants: Pearson's for parametric data, and Spearman's for non-parametric data. In both cases, coefficients range from -1 to 1.

A

Correlation coefficient

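Pearson's coefficient can be computed directly from its definition (covariance divided by the product of the standard deviations). A minimal stdlib sketch, with invented data for illustration:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson's r: covariance of x and y divided by the
    product of their sample standard deviations."""
    n = len(xs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]    # perfect positive linear association
print(pearson_r(x, y))  # 1.0
```

A perfectly decreasing relationship (e.g. `y = [10, 8, 6, 4, 2]`) gives -1.0, the other extreme of the range.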
5
Q

A set of data which is yet to be screened for analysis.

A

Raw data

6
Q

A test using the t-statistic that establishes whether two means collected from the same sample differ significantly.

A

Repeated measures/within subjects t-test

7
Q

A test using the t-statistic that establishes whether two means collected from independent samples differ significantly.

A

Independent samples/between subjects t-test

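For the repeated-measures case, the t-statistic is the mean of the per-participant difference scores divided by the standard error of those differences. A stdlib sketch with made-up before/after scores:

```python
import math
import statistics

# Paired (within-subjects) design: the same participants measured twice.
before = [5, 7, 6, 8, 9]
after_ = [6, 9, 7, 9, 11]

diffs = [b - a for a, b in zip(before, after_)]  # per-participant change
n = len(diffs)

# t = mean difference / standard error of the differences
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(t)  # about 5.715
```

The independent-samples version uses the difference between the two group means and a pooled standard error instead.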
8
Q

Evidence that the results of a study, experiment, or test can be applied, and allow inferences, to real-world conditions.

A

Ecological validity

9
Q

The prediction that there will be an effect (ie, that an experimental manipulation will have some effect on the dependent variables, or that certain variables will relate to each other).

A

Experimental hypothesis

10
Q

The reverse of the experimental hypothesis: that there will be no effect from your experimental manipulation, or that certain variables are not related.

A

Null hypothesis

11
Q

The degree to which a statistical model is an accurate representation of the observed data. These range from basic models (eg, the mean) to more complex models (eg, t-test and correlations).

A

Fit

12
Q

A graph plotting the values of observations on the X axis, and the frequency with which those values occur on the Y axis, commonly called a histogram. Used to assess the distribution of data.

A

Frequency distribution

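A frequency distribution is easy to tally with a `Counter`; printing a row of `#` marks per value gives a rough text histogram. A minimal sketch with invented scores:

```python
from collections import Counter

scores = [1, 2, 2, 3, 3, 3, 4, 4, 5]
freq = Counter(scores)  # maps each value -> its frequency

# Values along one axis, frequency of each value along the other
for value in sorted(freq):
    print(value, "#" * freq[value])
```

Here the bar for 3 is longest, so the distribution peaks at 3.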
13
Q

An assumption for parametric testing in between-groups designs, where the variance of one variable is stable (roughly equal) at all levels of another variable.

A

Homogeneity of variance

14
Q

A prediction about the state of the world.

A

Hypothesis

15
Q

An experimental design in which different treatment conditions use different participants, resulting in independent data.

A

Independent design

16
Q

An experimental design in which different treatment conditions use the same participants, resulting in related or repeated data.

A

Repeated design

17
Q

Data measured on a scale along which all intervals are equal, for example pain ratings on a scale of 1 to 10.

A

Interval data

18
Q

Interval data, with the additional property that ratios are meaningful. For example, when assessing pain on a scale of 1 to 10, for the data to be considered ratio level, a score of 4 should genuinely represent twice as much pain as a score of 2.

A

Ratio data

19
Q

Data where numbers represent categories or names.

A

Nominal data

20
Q

Data that tell us not only that something occurred, but the order in which it occurred. Examples include data presented as ranks, for example the placements of participants in a race.

A

Ordinal data

21
Q

Measures the degree to which scores cluster at the tails of a frequency distribution; positive kurtosis indicates too many scores in the tails, resulting in a peaked curve. Negative kurtosis indicates too few scores in the tails, resulting in a flattened curve.

A

Kurtosis

22
Q

The probability of obtaining a set of observations given the parameters of a model fitted to those observations.

A

Likelihood

23
Q

A non-parametric test which examines differences between two independent samples; the non-parametric equivalent of the independent t-test.

A

Mann-Whitney test

24
Q

A simple statistical model of the centre of a distribution of scores; a hypothetical estimate of a ‘typical’ score, calculated by summing the observed scores and dividing by the number of observations (n).

A

Mean

25
Q

The middle score of a set of ordered observations.

A

Median

26
Q

The most frequently occurring score in a set of data.

A

Mode
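All three measures of central tendency are in Python's `statistics` module. A quick sketch with invented scores:

```python
import statistics

scores = [2, 3, 3, 4, 5, 5, 5, 7]

mean = statistics.mean(scores)      # sum of scores / n
median = statistics.median(scores)  # middle of the ordered scores
mode = statistics.mode(scores)      # most frequently occurring score

print(mean, median, mode)  # 4.25 4.5 5
```

Note the three can disagree, as here, whenever the distribution is not symmetric.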

27
Q

A family of statistical tests that do not rely on the restrictive assumptions of parametric tests. In particular, they do not assume that the sampling distribution is normal. They are normally considered less powerful.

A

Non-parametric tests

28
Q

A family of statistical tests that require data to meet certain assumptions, in particular around the distribution of the data and the inter-relation between variable levels. The basic assumptions for parametric tests are: normally distributed data, homogeneity of variance, interval or ratio data, and independence of scores.

A

Parametric tests

29
Q

In statistical terms, this refers to the group from which we draw a sample, and to which we want to generalise results.

A

Population

30
Q

Extrapolating evidence for a theory from what people say and write (in contrast to quantitative methods).

A

Qualitative methods

31
Q

Inferring evidence for a theory through measurement of variables that produce numeric outcomes (in contrast to qualitative methods).

A

Quantitative methods

32
Q

A generic term for the three values that cut an ordered data set into four equal parts, known as the lower, middle, and upper quartiles. The range between the lower and upper quartiles is known as the inter-quartile range.

A

Quartiles
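The stdlib can compute the three cut points directly via `statistics.quantiles`. A sketch using an invented ordered data set:

```python
import statistics

data = list(range(1, 12))  # an ordered data set: 1..11

# The three values that cut the data into four equal parts
q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1              # inter-quartile range

print(q1, q2, q3, iqr)  # 3.5 6.0 8.5 5.0
```

The middle quartile (here 6.0) is the median; `method="inclusive"` treats the data as the whole population rather than a sample.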

33
Q

A smaller (but hopefully representative) collection of units from a population, used to determine truths about that population (eg, how the population behaves under certain conditions).

A

Sample

34
Q

A measure of the symmetry of a distribution, with a skew of 0 representing perfect symmetry. When scores are clustered at the lower end of the distribution, the skew is positive. When scores are clustered at the higher end of the distribution, the skew is negative.

A

Skew

35
Q

An estimate of the average variability (spread) of a set of data, measured in the same units as the original data. It is the square root of the variance.

A

Standard deviation

36
Q

A statistic for which we know how frequently different values occur in random samples. The observed value of such a statistic is usually used to test a hypothesis.

A

Test statistic

37
Q

Evidence that a study allows correct inferences about the question it was designed to answer, or that a test measures what it sets out to measure.

A

Validity

38
Q

An estimate of the average variability (spread) of a set of data. It is the sum of squared deviations from the mean (the sum of squares) divided by the number of observations on which that sum is based, minus 1.

A

Variance
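The variance formula above, and the standard deviation as its square root, can be written out directly and checked against the stdlib's sample versions. A sketch with invented data:

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = statistics.mean(data)  # 5.0

# Sample variance: sum of squared deviations from the mean / (n - 1)
variance = sum((x - mean) ** 2 for x in data) / (len(data) - 1)
sd = math.sqrt(variance)      # back in the original units

print(variance, sd)
print(statistics.variance(data), statistics.stdev(data))  # same values
```

Dividing by n - 1 rather than n is what makes this the sample (rather than population) variance.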

39
Q

A non-parametric test that looks for differences between related samples; the non-parametric equivalent of the related t-test.

A

Wilcoxon’s signed-rank test

40
Q

The value of an observation expressed in standard deviation units, calculated by taking the observed score and subtracting the sample mean, then dividing the result by the standard deviation of all observations. Z-scores can be used to assess the likelihood of obtaining certain scores on a measure.

A

Z-score