SAFMEDS Flashcards

1
Q

Central tendency

A

A generic term describing the centre of a frequency distribution of observations, measured by mean, mode, and median.
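
As a quick illustration (not part of the original card), Python's built-in statistics module computes all three measures; the scores below are invented.

```python
import statistics

scores = [2, 3, 3, 5, 7, 8, 8, 8, 10]  # hypothetical observations

print(statistics.mean(scores))    # arithmetic mean: sum of scores / n
print(statistics.median(scores))  # middle value of the ordered scores
print(statistics.mode(scores))    # most frequently occurring score
```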

2
Q

Confounding variable

A

A variable (that may or may not have been measured), other than the independent variable/s, which influences the outcome of the dependent variable.

3
Q

Content validity

A

Evidence that the content of a test corresponds to the content of the construct it was designed to cover.

4
Q

Correlation coefficient

A

A measure of the strength and direction of the association between two variables. There are two common variants: Pearson’s for parametric data, and Spearman’s for non-parametric data. In both cases, coefficients range between -1 and 1.
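
A minimal sketch of both variants using SciPy (the data below are invented for illustration):

```python
from scipy import stats

x = [1, 2, 3, 4, 5, 6]                # hypothetical variable 1
y = [2.1, 3.9, 6.2, 7.8, 9.9, 12.3]   # hypothetical variable 2

r, p = stats.pearsonr(x, y)       # Pearson's r (parametric)
rho, p_s = stats.spearmanr(x, y)  # Spearman's rho (non-parametric, rank-based)
print(r, rho)                     # both coefficients fall between -1 and 1
```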

5
Q

Raw data

A

A set of data which is yet to be screened for analysis.

6
Q

Repeated measures/within subjects t-test

A

A test using the t-statistic that establishes whether two means collected from the same sample differ significantly.
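
A minimal SciPy sketch, assuming two sets of scores from the same (hypothetical) participants:

```python
from scipy import stats

before = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]  # hypothetical scores, condition 1
after = [5.9, 5.2, 6.4, 6.1, 5.3, 6.2]   # same participants, condition 2

t, p = stats.ttest_rel(before, after)  # paired / repeated-measures t-test
print(t, p)  # a small p suggests the two means differ significantly
```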

7
Q

Independent samples/between subjects t-test

A

A test using the t-statistic that establishes whether two means collected from independent samples differ significantly.
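
The corresponding SciPy call for independent groups, again with invented data:

```python
from scipy import stats

group_a = [4.2, 5.1, 4.8, 5.5, 4.9]  # hypothetical scores, sample A
group_b = [6.0, 5.8, 6.3, 5.7, 6.1]  # hypothetical scores, sample B

t, p = stats.ttest_ind(group_a, group_b)  # independent-samples t-test
print(t, p)
```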

8
Q

Ecological validity

A

Evidence that the results of a study, experiment, or test can be applied, and allow inferences, to real-world conditions.

9
Q

Experimental hypothesis

A

The prediction that there will be an effect (ie, that an experimental manipulation will have some effect on the dependent variables, or that certain variables will relate to each other).

10
Q

Null hypothesis

A

The reverse of the experimental hypothesis: that there will be no effect from your experimental manipulation, or that certain variables are not related.

11
Q

Fit

A

The degree to which a statistical model is an accurate representation of the observed data. These range from basic models (eg, the mean) to more complex models (eg, t-tests and correlations).

12
Q

Frequency distribution

A

A graph plotting values of observations on the X axis, and the frequency with which those values occur on the Y axis, commonly called a histogram. Used to assess the distribution of data.
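
A minimal matplotlib sketch of a histogram (randomly generated scores, for illustration only):

```python
import numpy as np
import matplotlib.pyplot as plt

# fake scores drawn from a normal distribution
scores = np.random.default_rng(0).normal(loc=50, scale=10, size=500)

plt.hist(scores, bins=20)   # observation values on the X axis
plt.xlabel("Score")
plt.ylabel("Frequency")     # how often each value occurs, on the Y axis
plt.title("Frequency distribution (histogram)")
plt.show()
```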

13
Q

Homogeneity of variance

A

An assumption for parametric testing in between-groups designs, where the variance of one variable is stable (roughly equal) at all levels of another variable.
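
The card defines the assumption itself; as an aside, one common way to check it in practice (not mentioned above) is Levene's test, sketched here with invented scores:

```python
from scipy import stats

group_a = [4.2, 5.1, 4.8, 5.5, 4.9, 5.0]  # hypothetical scores, group A
group_b = [3.9, 6.2, 4.1, 6.8, 3.5, 6.5]  # hypothetical scores, group B

stat, p = stats.levene(group_a, group_b)  # tests equality of variances
print(stat, p)  # a non-significant p is consistent with homogeneity of variance
```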

14
Q

Hypothesis

A

A prediction about the state of the world.

15
Q

Independent design

A

An experimental design in which different treatment conditions use different participants, resulting in independent data.

16
Q

Repeated design

A

An experimental design in which different treatment conditions use the same participants, resulting in related or repeated data.

17
Q

Interval data

A

Data measured on a scale along which all intervals are equal, for example pain ratings on a scale of 1 to 10.

18
Q

Ratio data

A

Interval data, with the additional property that ratios are meaningful. For example, when assessing pain on a scale of 1 to 10, for the data to be considered ratio level, a score of 4 should genuinely represent twice as much pain as a score of 2.

19
Q

Nominal data

A

Data where numbers represent categories or names.

20
Q

Ordinal data

A

Data that tell us not only that something occurred, but the order in which it occurred. Examples include data presented as ranks, for example the placements of participants in a race.

21
Q

Kurtosis

A

Measures the degree to which scores cluster at the tails of a frequency distribution; positive kurtosis indicates too many scores in the tails, resulting in a peaked curve. Negative kurtosis indicates too few scores in the tails, resulting in a flattened curve.
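
A quick SciPy sketch (invented scores); note that scipy.stats.kurtosis reports excess kurtosis by default, so 0 corresponds to a normal distribution:

```python
from scipy import stats

scores = [1, 2, 2, 3, 3, 3, 4, 4, 5, 9]  # hypothetical scores

# positive: too many scores in the tails (peaked curve)
# negative: too few scores in the tails (flattened curve)
print(stats.kurtosis(scores))
```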

22
Q

Likelihood

A

The probability of obtaining a set of observations given the parameters of a model fitted to those observations.

23
Q

Mann-Whitney test

A

A non-parametric test which examines differences between two independent samples. The non-parametric equivalent of an independent t-test.
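
A minimal SciPy sketch with invented groups:

```python
from scipy import stats

group_a = [3, 5, 4, 6, 2, 5]  # hypothetical scores, sample A
group_b = [7, 8, 6, 9, 7, 8]  # hypothetical scores, sample B

# rank-based comparison of two independent samples
u, p = stats.mannwhitneyu(group_a, group_b)
print(u, p)
```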

24
Q

Mean

A

A simple statistical model of the centre of a distribution of scores; a hypothetical estimate of a ‘typical’ score, calculated by summing the observed scores and dividing by the number of observations (n).

25
Q

Median

A

The middle score of a set of ordered observations.

26
Q

Mode

A

The most frequently occurring score in a set of data.

27
Q

Non-parametric tests

A

A family of statistical tests that do not rely on the restrictive assumptions of parametric tests. In particular, they do not assume that the sampling distribution is normal. They are normally considered less powerful than their parametric equivalents.

28
Q

Parametric tests

A

A family of statistical tests that require data to meet certain assumptions, in particular around the distribution of the data and the inter-relation between variable levels. The basic assumptions for parametric tests are: data are normally distributed, homogeneity of variance, interval or ratio data, and independence of scores.

29
Q

Population

A

In statistical terms, this refers to the group from which we draw a sample, and to which we want to generalise results.

30
Q

Qualitative methods

A

Extrapolating evidence for a theory from what people say and write (in contrast to quantitative methods).

31
Q

Quantitative methods

A

Inferring evidence for a theory through measurement of variables that produce numeric outcomes (in contrast to qualitative methods).

32
Q

Quartiles

A

A generic term for the three values that cut an ordered data set into four equal parts. These values are known as the lower, middle, and upper quartiles; the middle quartile is the median. The distance between the lower and upper quartiles is known as the inter-quartile range.
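
A sketch using Python's statistics.quantiles (the data are made up):

```python
import statistics

scores = [2, 4, 4, 5, 6, 7, 8, 9, 10, 12, 15]  # hypothetical ordered data

q1, q2, q3 = statistics.quantiles(scores, n=4)  # lower quartile, median, upper quartile
print(q1, q2, q3)
print(q3 - q1)  # inter-quartile range
```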

33
Q

Sample

A

A smaller (but hopefully representative) collection of units from a population, used to determine truths about that population (eg, how the population behaves under certain conditions).

34
Q

Skew

A

A measure of the symmetry of a distribution, with a skew of 0 representing perfect symmetry. When scores are clustered at the lower end of the distribution, the skew is positive. When scores are clustered at the higher end of the distribution, the skew is negative.
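
A quick SciPy check with invented scores; 0 indicates perfect symmetry:

```python
from scipy import stats

scores = [1, 1, 2, 2, 2, 3, 3, 4, 6, 9]  # hypothetical scores bunched at the low end

print(stats.skew(scores))  # positive here, because scores cluster at the lower end
```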

35
Q

Standard deviation

A

An estimate of the average variability (spread) of a set of data, measured in the same units as the original data. It is the square root of the variance.

36
Q

Test statistic

A

A statistic for which we know how frequently different values occur in random samples. The observed value of such a statistic is usually used to test a hypothesis.

37
Q

Validity

A

Evidence that a study allows correct inferences about the question it was designed to answer, or that a test measures what it sets out to measure.

38
Q

Variance

A

An estimate of the average variability (spread) of a set of data. It is the sum of squared deviations from the mean (the sum of squares) divided by the number of values on which that sum is based minus 1.
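
A worked sketch with invented numbers, showing the manual calculation alongside Python's statistics module (the standard deviation card above is simply the square root of this value):

```python
import statistics

scores = [4, 6, 7, 9, 10]         # hypothetical observations
mean = sum(scores) / len(scores)  # 7.2

# sum of squared deviations from the mean, divided by n - 1
var = sum((x - mean) ** 2 for x in scores) / (len(scores) - 1)

print(var)                          # 5.7
print(statistics.variance(scores))  # same value, library version
print(statistics.stdev(scores))     # standard deviation = square root of the variance
```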

39
Q

Wilcoxon’s signed-rank test

A

A non-parametric test that looks for differences between related samples. A non-parametric equivalent of the related t-test.
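
A minimal SciPy sketch with invented paired scores:

```python
from scipy import stats

before = [10, 12, 9, 14, 11, 13]  # hypothetical related sample, time 1
after = [12, 15, 10, 15, 13, 16]  # same participants, time 2

w, p = stats.wilcoxon(before, after)  # rank-based test for related samples
print(w, p)
```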

40
Q

Z-score

A

The value of an observation expressed in standard deviation units, calculated by taking the observed score and subtracting the sample mean, then dividing the result by the standard deviation of all observations. Z-scores can be used to assess the likelihood of obtaining certain scores on a measure.
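
A worked sketch with invented numbers, following the formula on this card:

```python
import statistics

scores = [50, 55, 60, 65, 70, 75, 80]  # hypothetical observations
x = 75                                 # the observed score of interest

z = (x - statistics.mean(scores)) / statistics.stdev(scores)  # (score - mean) / SD
print(z)  # how many standard deviations x lies above the mean
```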