Topic Two: Research Methods Flashcards
Variables
Characteristics of a process or phenomenon we are interested in that can vary in quantity or quality.
Independent Variables (IV)
Something that we manipulate during an experiment to influence behaviour. Anything that you can SYSTEMATICALLY CHANGE to see the effect on the Dependent Variable. IV is independent of anything else that is going on.
e.g. Time of the exam (9 am, 1 pm)
Dependent Variables (DV)
Something that we measure during an experiment to learn about behaviour. Anything that you can MEASURE. The DV depends on what happens with the IV.
e.g. Measuring exam results of the 9 am group and 1 pm group.
Extraneous Variables/ Nuisance variables
Something that we do not measure in an experiment but that has some influence on the DV. Anything UNWANTED or UNANTICIPATED that influences the DV, but in a way that is not systematically related to the IV.
e.g. Whether the exam room was noisy or not.
Confounding Variables
Something that has a SYSTEMATIC INFLUENCE on the DV that is DIFFERENT for EACH GROUP or level of the IV.
Its influence on the DV cannot be separated from the influence of the group(s)/level(s) of the IV you want to study.
e.g. Whether the exam room was noisy or not:
9am building work outside exam room
1pm no building work outside exam room
The IV is the different exam times, but the unanticipated noise, affecting one group and not the other, will affect the DV and ruin the experiment.
Purpose of Experiments
To produce a statement of CAUSE AND EFFECT backed up by EMPIRICAL EVIDENCE
- the experimental method is the ONLY method to really provide findings about causation.
What is the goal when designing an experimental procedure?
When designing an experimental procedure, the goal is to adequately CONTROL EXTRANEOUS & CONFOUNDING VARIABLES so that the effects of IV manipulations on DV can be determined.
Direct observation
Variables can be observed and measured directly
Indirect observation
Variables indirectly observed (the effects of the variable) using proxies for the behaviour that we want to study.
Validity of an operational definition (OD)
Measures what it claims to measure (i.e. it is a valid measure of what it is meant to measure).
Hypothetical constructs vs operational constructs
An abstract concept or feeling vs. the operationally defined process used to measure it.
Control group
A group that does not receive the experimental manipulation, used as a baseline to compare the effect on the experimental group.
Placebo group
A group given an inert treatment, e.g. sugar pills vs homoeopathic pills, so that the placebo effect can be ruled out.
Random selection
Choosing participants for the research by chance, so that every member of the population has an equal chance of being included.
Random allocation
Assigning participants by chance to the groups that receive the different conditions.
Matching of situational variables
Holding situational variables constant across conditions, e.g. the time of testing.
Matching of subject variables
e.g. age and gender (if random selection is not possible)
Blinding to condition (a double-blind experiment is the gold standard)
so that both participants and experimenter don’t know which group each participant has been allocated to
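The random-allocation and blinding cards above lend themselves to a short illustration. A minimal Python sketch, not part of the original cards: the participant IDs, group names, seed and function name are invented for the example.

```python
import random

def randomly_allocate(participants, groups=("drug", "placebo"), seed=None):
    """Shuffle the participants and deal them round-robin into the groups by chance."""
    rng = random.Random(seed)      # fixed seed here only so the sketch is reproducible
    shuffled = list(participants)  # copy so the original list is left untouched
    rng.shuffle(shuffled)
    allocation = {group: [] for group in groups}
    for i, person in enumerate(shuffled):
        allocation[groups[i % len(groups)]].append(person)
    return allocation

# Eight hypothetical participant IDs allocated to two conditions.
print(randomly_allocate([f"P{n:02d}" for n in range(1, 9)], seed=42))
```

In a double-blind procedure a third party would hold this allocation, so that neither the participants nor the experimenter knows who is in which group until the data are analysed.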
Correlation
Linear RELATIONSHIP between 2 variables. NOT causation.
No relationship ( | ) or ( - )
0 indicates no relationship between the variables
Positive relationship ( / )
If both variables increase or decrease together then this is called a positive correlation.
Negative relationship ( \ )
If one variable increases while the other decreases then this is called a negative correlation.
Strong relationship
Where the two variables correspond most of the time. .60 - .79 = a strong relationship
.80 - 1.0 = a very strong relationship
Weak relationship
Where the two variables correspond to each other only some of the time.
.00-.19 = a very weak relationship
.20 - .39 = a weak relationship
Strength
is how closely the two variables are related to one another. A strong correlation can be either positive (+) or negative (-): strength is about how close the coefficient is to either +1 or -1, and has nothing to do with the direction.
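Those strength bands amount to a simple lookup. A minimal Python sketch, assuming the band boundaries quoted on these cards; the function name and the label for the .40-.59 gap (which the cards skip) are my own additions.

```python
def describe_correlation(r):
    """Label a correlation coefficient using the strength bands quoted on these cards."""
    if not -1.0 <= r <= 1.0:
        raise ValueError("r must lie between -1 and +1")
    direction = "positive" if r > 0 else "negative" if r < 0 else "no"
    size = abs(r)
    if size >= 0.80:
        strength = "very strong"
    elif size >= 0.60:
        strength = "strong"
    elif size >= 0.40:
        strength = "moderate"   # the cards do not name this band; the label is an assumption
    elif size >= 0.20:
        strength = "weak"
    else:
        strength = "very weak"
    return f"{strength} {direction} relationship (r = {r:+.2f})"

print(describe_correlation(-0.72))  # strong negative relationship (r = -0.72)
```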
Correlational design
is used when variables are measured but there is no manipulated independent variable. Allows us to make predictions based on the relationship between variables.
Third Variable Problem
is when there is an additional variable that we either didn't know about, neglected to measure or control, or that accidentally became systematically related to our independent variable or dependent variable.
Longitudinal Study
A study that follows the same group of people over time.
Placebo effect (a confounding variable)
When receiving something new or special treatment affects human behaviour.
e.g. participants thinking that they are receiving a new trialled drug, but it is only a sugar pill.
Participant demand (a confounding variable)
When participants act in a way they think the experimenter wants them to behave.
Experimenter expectations (confounding variable)
When the experimenter’s expectations influence the outcome of a study.
e.g. if the experimenter knew who was in the placebo group and who was in the drug group, they might PERCEIVE results that are not truly there.
Double-blind procedure
Neither the experimenter nor participants know who is in which group.
Correlation coefficient
r = a value between -1 and +1 that provides information about the strength and direction of the linear relationship between two variables.
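The cards do not give the formula; the standard Pearson product-moment definition (a textbook fact rather than something stated on these cards) is:

$$ r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \; \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}} $$

where $\bar{x}$ and $\bar{y}$ are the means of the two variables.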
Qualitative design
Methodologies that help us investigate phenomena that we cannot manipulate as in experiments.
e.g. participant observation, case studies, narrative analysis and surveys.
Quasi-experimental design
An experiment that does not require random allocation; pre-existing groups (e.g. smokers vs non-smokers) become the levels of the independent variable.
Surveys
Largely used to gather information for correlational studies. An effective way of reaching a large number of people at a low cost.
Four basic types of numerical scale for measuring variables
Nominal scales - naming. To classify objects by assigning labels or numbers to them on the basis of qualitative differences. Numbers on a nominal scale do not imply quantity and cannot be added or subtracted.
Ordinal scales - rank; named and ordered, e.g. 1st, 2nd, 3rd in a race, but this doesn't tell us any further specific information such as the time difference between ranks. Numbers cannot be added or subtracted.
Interval scales - named, ordered, and with equal intervals between units of measure, e.g. the Celsius scale. Numbers can be added and subtracted (but not multiplied or divided to form ratios). The zero point is arbitrary.
Ratio scales - named, ordered, equal intervals and a meaningful zero starting point. Numbers can be added/subtracted and multiplied/divided because there is a true zero, e.g. weight, height and time.
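A small worked illustration (not from the cards) of why ratios are meaningful on a ratio scale but not on an interval scale, in Python, using Celsius versus Kelvin temperatures:

```python
# Interval scale (Celsius): the zero point is arbitrary, so ratios are misleading.
c_hot, c_cold = 20.0, 10.0
print(c_hot / c_cold)                   # 2.0 -- but 20 C is not "twice as hot" as 10 C

# Ratio scale (Kelvin): a true zero, so the ratio of the same two temperatures is honest.
k_hot, k_cold = c_hot + 273.15, c_cold + 273.15
print(k_hot / k_cold)                   # about 1.035

# Differences are meaningful on both scales, because both have equal intervals.
print(c_hot - c_cold, k_hot - k_cold)   # 10.0 10.0
```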
Measures of central tendency
Mean (average; heavily influenced by outliers);
Median (the middle number of all data once in order, not influenced by outliers, useful for ordinal scales),
Mode (most frequently occurring data, useful for data collected on the nominal scale)
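A minimal Python sketch of the three measures, using the standard-library statistics module on a made-up set of scores (the deliberate outlier shows why the mean and median can disagree):

```python
import statistics

scores = [2, 3, 3, 4, 5, 5, 5, 90]      # made-up data; 90 is a deliberate outlier

print(statistics.mean(scores))           # 14.625 -- dragged upwards by the outlier
print(statistics.median(scores))         # 4.5    -- middle of the ordered scores, unaffected
print(statistics.mode(scores))           # 5      -- the most frequently occurring score
```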
Measures of variability
Range (the difference between the lowest and highest values in the data);
Standard deviation (a measure of how spread out the individual scores are around the mean);
Sum of the deviations (deviations above and below the mean always cancel out and sum to zero);
Variance (the average of the squared deviations from the mean);
The square root of the variance is the standard deviation.
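A minimal Python sketch of those steps on a made-up set of scores; statistics.pvariance and statistics.pstdev give the (population) variance and standard deviation directly.

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]                    # made-up data
mean = statistics.mean(scores)                       # 5.0

deviations = [x - mean for x in scores]
print(sum(deviations))                               # 0.0 -- deviations always cancel out

variance = sum(d ** 2 for d in deviations) / len(scores)  # average of the squared deviations
print(variance)                                      # 4.0
print(variance ** 0.5)                               # 2.0 -- the standard deviation
print(max(scores) - min(scores))                     # 7   -- the range

print(statistics.pvariance(scores), statistics.pstdev(scores))  # 4.0 2.0
```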
Bar graph vs histogram
Bar graph - generally used when the scale values represent discrete values or categories; can be used for nominal, ordinal, interval and ratio data. There are spaces between the bars on the x-axis.
Histogram - used for continuous scale values; can be used for ordinal, interval and ratio data. There are no spaces between the bars on the x-axis because the data are continuous.
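A small plotting sketch (assuming matplotlib is installed; the category counts and scores are made up) showing the two chart types side by side:

```python
import random
import matplotlib.pyplot as plt

fig, (ax_bar, ax_hist) = plt.subplots(1, 2, figsize=(8, 3))

# Bar graph: discrete categories, drawn with gaps between the bars.
ax_bar.bar(["cats", "dogs", "birds"], [12, 17, 5])
ax_bar.set_title("Bar graph (nominal data)")

# Histogram: continuous scores binned, so the bars touch.
scores = [random.gauss(50, 10) for _ in range(200)]
ax_hist.hist(scores, bins=15)
ax_hist.set_title("Histogram (continuous data)")

plt.tight_layout()
plt.show()
```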
Frequency distribution
A plot of how often each score occurs. The upright line in the centre of the curve marks the mean, with either end of the range at the edges of the curve.
Normal distribution - a bell-shaped continuous distribution that is symmetrical about the mean.
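The cards describe only the shape; the textbook density of a normal distribution with mean $\mu$ and standard deviation $\sigma$ (not stated on these cards) is:

$$ f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x - \mu)^2}{2\sigma^2}} $$

Roughly 68% of scores fall within one standard deviation of the mean and roughly 95% within two.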
We should be reluctant to conclude that there is a difference between the population groups, even though there is some difference between the samples, where:
1) the difference in the sample means is small
2) there is a lot of variability in the sample measurements (with a lot of overlap between the two sample distributions)
3) there is only a small number of sample measurements
Statistical inference
allows us to draw a rational conclusion about the actual state of things on the basis of incomplete information, but in accordance with statistical probabilities.
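A minimal sketch of drawing such an inference in Python, assuming SciPy is available; the exam scores are invented purely to echo the 9 am vs 1 pm example from the earlier cards.

```python
from scipy import stats

# Invented exam scores for the 9 am and 1 pm groups.
nine_am = [62, 58, 71, 65, 60, 68, 63, 70]
one_pm  = [55, 61, 52, 59, 64, 50, 57, 60]

# An independent-samples t-test weighs the difference in sample means against the
# variability and the number of measurements -- the three cautions listed above.
t_stat, p_value = stats.ttest_ind(nine_am, one_pm)
print(t_stat, p_value)

# A small p-value (conventionally < .05) means the observed sample difference would be
# unlikely if the population means were actually equal.
```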