Definitions Flashcards
Ratio scales
have equal intervals between adjacent scores on the scale and an absolute 0.
Interval scales
have equal intervals between adjacent scores but do not have an absolute 0.
Ordinal scales
have some sort of order to the categories but the intervals between adjacent points on the scale are not necessarily equal
Extraneous variables
are variables that might have an impact on the other variables we are interested in, but which we may have failed to take into account when designing the study
Confounding variables
are a specific type of extraneous variable that is related to both of the main variables we are interested in.
correlational designs
those that investigate relationships between variables
experimental designs
where the experiment manipulates the IV to see what effect this has upon the DV
Quasi-experimental designs
involve seeing if there are differences on the DV between conditions of the IV. Unlike experimental designs, there is no random allocation of participants to the various conditions of the IV.
within participants designs
have the same participants in every condition of the IV. Each participant performs under all conditions in the study.
order effects
are a consequence of a within participants design whereby completing the conditions in a particular order leads to differences in the DV that are not a result of the manipulation of the IV.
Counterbalancing
Where you systematically vary the order in which participants take part in the various conditions of the IV.
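As a sketch of this idea, every possible order of the conditions can be generated and then assigned to participants in rotation (the function name here is illustrative, not from the source):

```python
from itertools import permutations

# Illustrative helper: list every possible ordering of the IV conditions,
# so participants can be rotated through the orders systematically.
def counterbalance(conditions):
    return [list(order) for order in permutations(conditions)]

orders = counterbalance(["A", "B"])  # [['A', 'B'], ['B', 'A']]
```

With two conditions there are two orders; with three conditions there are six, which is why full counterbalancing quickly becomes impractical for many conditions.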
Between participants designs
have different groups of participants in each condition of the IV. Thus, the group of participants in one condition of the IV is different from the participants in another condition of the IV
Between participants advantages
- relative absence of practice and fatigue effects
- participants less likely to work out purpose of study
Between participants disadvantages
- need more participants
- not as much control of confounding variables between conditions
Within participants advantages
- need fewer participants
- greater control of confounding variables between conditions
Within participants disadvantages
- increased likelihood of practice or fatigue effects
- participants more likely to guess purpose of the study
Population
consists of all possible people or items who/which have a particular characteristic
Sample
refers to a selection of individual people or items from a population
parameters
descriptions of populations whereas statistics are descriptions of samples
Measures of central tendency
give us an indication of the typical score in our sample. it is effectively an estimate of the middle point of our distribution of scores
frequency histogram
graphical means of representing the frequency of occurrence of each score on a variable in our sample
stem and leaf plots
similar to histograms but the frequency of occurrence of a particular score is represented by repeatedly writing the particular score itself rather than drawing a bar on a chart.
box plots
enable us to easily identify extreme scores as well as seeing how the scores in a sample are distributed
outliers or extreme scores
are those scores in our sample that are a considerable distance either higher or lower than the majority of the other scores in the sample
Variance or variation of scores
indicates the degree to which the scores on a variable are different from one another
variance
the average squared deviation of scores in a sample from the mean
standard deviation
the degree to which the scores in a dataset deviate around the mean. it is an estimate of the average deviation of scores from the mean.
kurtosis of a distribution
is a measure of how peaked the distribution is
leptokurtic
is a very peaked distribution
platykurtic
is a flat distribution
skewed distributions
are those where the peak is shifted away from the centre of the distribution and there is an extended tail on one of the sides of the peak
the p-value
is the probability of obtaining the patterns of results we found in our study if there was no relationship between the variables in which we were interested in the population.
the null hypothesis
always states that there is no effect in the underlying population
type 1 error
where you decide to reject the null hypothesis when it is in fact true in the underlying population
type 2 error
where you conclude that there is no effect in the population when in reality there is. it represents the case where you do not reject the null hypothesis when in fact you should, because in the underlying population the null hypothesis is not true.
one tailed hypothesis
is one where you have specified the direction of the relationship between variables or the differences between two conditions
two tailed hypothesis
where you have predicted that there will be a relationship between variables or a difference between conditions, but you have not predicted the direction of the relationship between the variables or the difference between the conditions
independent samples t-test
compares two groups of participants
paired samples t-test
compares the same participants in two different conditions.
calculating standard deviation
- square all the deviations from the mean
- add them up (gives us the sum of squares)
- calculate the average by dividing the sum of squares by the number of scores (gives us variance)
- take the square root of the variance (SD)
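The four steps above can be sketched in Python. Note this divides by the number of scores, as the steps state; the sample standard deviation conventionally divides by n − 1 instead:

```python
import math

def standard_deviation(scores):
    mean = sum(scores) / len(scores)
    squared_deviations = [(x - mean) ** 2 for x in scores]  # step 1
    sum_of_squares = sum(squared_deviations)                # step 2
    variance = sum_of_squares / len(scores)                 # step 3 (divides by n)
    return math.sqrt(variance)                              # step 4

standard_deviation([2, 4, 4, 4, 5, 5, 7, 9])  # → 2.0
```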
Sampling distribution
a hypothetical distribution, where you have selected an infinite number of samples from a population and calculated a particular statistic (e.g. the mean) for each one
standard error
refers to the standard deviation of a particular sampling distribution
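A rough simulation sketch of this idea: draw many samples (with replacement), record the mean of each, and take the SD of those means. The result approximates the standard error of the mean (the function name and sample counts here are illustrative):

```python
import random
import statistics

def standard_error_by_simulation(population, sample_size, n_samples=10_000, seed=1):
    # Approximate the sampling distribution of the mean by repeated sampling,
    # then take the SD of the resulting means (the standard error).
    rng = random.Random(seed)
    means = [statistics.mean(rng.choices(population, k=sample_size))
             for _ in range(n_samples)]
    return statistics.stdev(means)
```

For large numbers of samples this converges on the familiar analytic result, the population SD divided by the square root of the sample size.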
factor
refers to an independent variable
level
refers to a level of the independent variable
ANOVA
is used when you want to compare more than two means.
it looks at whether there are differences in the means of groups.
it does this by comparing the group means to the grand mean and seeing how different each group mean is to the grand mean.
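A minimal sketch of that comparison, assuming each group mean's squared distance from the grand mean is weighted by group size (this gives the between-groups, or model, sum of squares):

```python
import statistics

def between_groups_sum_of_squares(groups):
    # Compare each group mean to the grand mean; weight each squared
    # difference by the number of scores in that group.
    all_scores = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_scores)
    return sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)

between_groups_sum_of_squares([[1, 2, 3], [4, 5, 6], [7, 8, 9]])  # → 54
```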
total sum of squares
the total variance in data
model sum of squares
tells us the variation that the experimental manipulation explains
residual sum of squares
tells us the total variation that is due to extraneous factors
mean square
can be seen as the ‘average deviation’ from the mean
= sum of squares / degrees of freedom
calculate within subjects sum of squares
- calculate the variance in an individual's scores
- compare each individual's data points to that individual's own mean
- add all of the participant variances together
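The steps above can be sketched as follows, assuming the scores are grouped per participant (one inner list per person):

```python
def within_subjects_sum_of_squares(scores_by_participant):
    # Compare each participant's data points to that participant's own
    # mean, then add the per-participant sums of squares together.
    total = 0.0
    for scores in scores_by_participant:
        own_mean = sum(scores) / len(scores)
        total += sum((x - own_mean) ** 2 for x in scores)
    return total

# Two participants, three conditions each:
within_subjects_sum_of_squares([[8, 10, 12], [4, 6, 8]])  # → 16.0
```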