Research Design and Reporting Flashcards
Covers research designs, the development of research hypotheses, objectives, and constructs, and the format of a psychological report
A ———— study collects data from a population at a single point in time to analyze and compare different groups or variables at that specific moment
cross-sectional
A ———– study follows the same subjects over a period of time, collecting data at multiple time points to observe changes and trends over time.
longitudinal
What are the key differences between experimental, observational, and quasi-experimental study designs?
- Experimental studies involve manipulation of variables with random assignment.
- Observational studies observe without intervention.
- Quasi-experimental studies resemble experiments but lack random assignment.
A study design in which researchers manipulate an independent variable to observe its effect on a dependent variable, often using random assignment to control for confounding variables.
Experimental
A study design in which researchers watch and analyse naturally occurring events or phenomena without intervention, aiming to understand relationships or associations between variables.
Observational
This study design resembles an experimental design but lacks random assignment to treatment groups, often due to ethical or practical constraints, leading to potential biases in causal inference.
Quasi-experimental
Quasi-experimental designs are prone to:
potential biases in causal inference caused by the lack of random assignment
Quasi-experimental studies lack random assignment because they
use existing groups or conditions (as opposed to random allocation), because they aim to mimic experiments in real-world settings.
Educational research, policy evaluations, clinical studies with ethical constraints, and organizational psychology often use —— research designs.
quasi-experimental
What characterizes a between-subject design?
Data is collected from participants in relation to one condition only.
What defines a within-subject design?
Data is collected from participants in relation to more than one condition, typically all the conditions being assessed in the study.
A design that incorporates both between-subject and within-subject assessments is called a
mixed design
What is another term for a within-subject design?
Repeated measures design.
If I have taken data on a group of people that studies two variables, and I only took data on one day and did not repeat it, is it a within-subjects or a between-subjects design?
It is between-subjects, as the measurement was not repeated. Although data was taken on more than one variable, there is only one set of data per variable. (If a repeat capture of data were taken, say a day later, then it would be a within-subjects design.)
Any design that is measuring the same individual more than once is a ——- subject design
Within-subjects design, which we also call a repeated measures design
Sample types
Population based - representative, random
Convenience - not representative
Stratification - sampling based on pre-defined groups
3 methods of data collection
In the field
Survey
Laboratory
Sampling Bias Types:
Self-selection (volunteering)
Healthy volunteer bias (healthier people are more likely to volunteer)
Undercoverage bias (people in care may be excluded, for example)
WEIRD (sample) stands for
Western, Educated, Industrialised, Rich, Democratic
—— of a measure tells you how precisely you are measuring something.
reliability
——– of a measure tells you how accurate the measure is
validity
This relates to consistency over time. If we repeat the measurement at a later date do we get the same answer?
Test-retest reliability.
This relates to consistency across people. If someone else repeats the measurement (e.g., someone else rates my intelligence) will they produce the same answer?
Inter-rater reliability.
This relates to consistency across theoretically-equivalent measurements. If I use a different set of bathroom scales to measure my weight does it give the same answer?
Parallel forms reliability.
If a measurement is constructed from lots of different parts that perform similar functions (e.g., a personality questionnaire result is added up across several questions) do the individual parts tend to give similar answers?
Internal consistency reliability.
“to be explained”
dependent variable (DV)
modern name:
outcome
“to do the explaining”
independent variable (IV)
modern name:
predictor
When an experiment fails because it violates the structure of the “natural” world, it will give an _____ result
“artefactual” result
——– validity refers to the extent to which you are able to draw the correct conclusions about the causal relationships between variables.
Internal.
It’s called “internal” because it refers to the relationships between things “inside” the study.
——– validity relates to the generalisability or applicability of your findings. That is, to what extent do you expect to see the same pattern of results in “real life” as you saw in your study.
External
(if it turns out that the results don’t actually generalise or apply to people and situations beyond the ones that you studied, then what you’ve got is a lack of external validity.)
——– validity is basically a question of whether you’re measuring what you want to be measuring.
Construct
A measurement has good construct validity if it is actually measuring the correct theoretical construct, and bad construct validity if it is not.
—— validity simply refers to whether or not a measure “looks like” it’s doing what it’s supposed to do.
Face Validity
(face validity isn’t very important from a pure scientific perspective. After all, what we care about is whether or not the measure actually does what it’s supposed to do, not whether it looks like it does what it’s supposed to do)
———- validity is a different notion of validity, which is similar to external validity, but less important. The idea is that, in order to be ecologically valid, the entire set up of the study should closely approximate the real world scenario that is being investigated
Ecological validity
(It relates mostly to whether the study “looks” right, but with a bit more rigour to it. The idea behind it is the intuition that a study that is ecologically valid is more likely to be externally valid. It’s no guarantee, of course)
——– is an additional, often unmeasured variable that turns out to be related to both the predictors and the outcome.
Confounder
(The existence of confounders threatens the internal validity of the study because you can’t tell whether the predictor causes the outcome, or if the confounding variable causes it.)
The possibility that your result is an ——— describes a threat to your external validity, because it raises the possibility that you can’t generalise or apply your results to the actual population that you care about.
artefact
(Artefactual results tend to be more of a concern for experimental studies than for non-experimental studies.)
———– effects are fundamentally about change over time.
maturational effects.
(maturation effects aren’t in response to specific events. Rather, they relate to how people change on their own over time.)
——- effects refer to the possibility that specific events may occur during the study that might influence the outcome measure
History
When running a very long experiment in the lab (say, something that goes for 3 hours) it’s very likely that people will begin to get bored and tired, and that this ————– effect will cause performance to decline regardless of anything else going on in the experiment
maturational
is a history effect in which the “event” that influences the second measurement is the first measurement itself!
Repeated testing
—— ——— occurs when participants in a study are systematically different from the population, leading to skewed or inaccurate results.
Selection bias
——– refers to the reduction in participants or dropout rate over the course of a study, which can affect the validity and generalisability of results.
Attrition
—————- Attrition: Participants drop out across different groups, leading to varied group compositions.
Heterogeneous or differential attrition (a kind of selection bias that is caused by the study itself)
———— Attrition: Participants drop out from similar groups, maintaining group similarities.
Homogeneous
——– attrition, in which the attrition effect is the same for all groups, treatments or conditions.
Homogeneous
The attrition would be homogeneous if (and only if) the easily bored participants are dropping out of all of the conditions in my experiment at about the same rate.
——— occurs when participants who choose not to respond to a survey or study differ in meaningful ways from those who do respond, leading to skewed or unrepresentative results.
Non-response bias
A ——- measures the number of standard deviations a data point is from the mean of a dataset, helping to standardise and compare values across different distributions.
Z-score
A z-score helps convey how —— or ——– a data point is
usual or unusual
A z-score of 1.77 communicates that the data point is —— than one but —— than two standard deviations from the mean
more
less
Z-score equation
z = (x − μ) / σ, i.e., the data point minus the mean, divided by the standard deviation
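The z-score formula above can be sketched in a few lines of Python (names like `z_score` are illustrative, not from the deck):

```python
# Minimal sketch of the standard z-score formula: z = (x - mean) / sd
def z_score(x, mean, sd):
    """Number of standard deviations a data point x lies from the mean."""
    return (x - mean) / sd

# Example: a test score of 130 in a distribution with mean 100 and SD 15
print(z_score(130, 100, 15))  # 2.0 (two standard deviations above the mean)
```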
the tendency for extreme values in a dataset to move closer to the mean upon subsequent measurements, often observed in repeated measurements or interventions.
Regression to the mean
—— —— can come in multiple forms. The basic idea is that the experimenter, despite the best of intentions, can accidentally end up influencing the results of the experiment by subtly communicating the “right answer” or the “desired behaviour” to the participants.
Experimenter bias
—— effect, where if you expect great things of people they’ll tend to rise to the occasion. But if you expect them to fail then they’ll do that too.
Pygmalion
Trying to analyse your data in lots of different ways until you eventually find something that “looks” like a real effect but isn’t. This is referred to as:
Data mining
The different types of reliability commonly discussed in research are:
Test-Retest Reliability: Measures consistency by comparing the results of a test taken by the same group of people at two different times.
Internal Consistency Reliability: Assesses the consistency of results across items within a test or instrument, often measured using Cronbach’s alpha.
Inter-Rater Reliability: Examines the consistency between different raters or observers assessing the same phenomenon.
Parallel Forms Reliability: Compares the consistency of results between different versions of the same test.
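Internal consistency is commonly quantified with Cronbach’s alpha (mentioned above). A minimal, illustrative sketch of the standard formula, alpha = k/(k−1) × (1 − Σ item variances / variance of totals), not a validated psychometrics routine:

```python
# Illustrative Cronbach's alpha: items is a list of per-item score lists,
# each the same length (one score per participant per item).
def cronbach_alpha(items):
    k = len(items)  # number of items (e.g., questionnaire questions)

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each participant's total score, summed across items
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Two perfectly agreeing items yield alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))
```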
An effect to consider when performing subsequent or repeat testing is the ——- — —- —-
regression to the mean
(a statistical tendency that is likely, but not guaranteed, to occur; the more testing, the more likely it is)
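Regression to the mean can be illustrated with a small simulation (all numbers here are hypothetical, chosen only for the demonstration): take two noisy measurements of the same trait and look at the participants who scored extremely high the first time.

```python
import random

# Hypothetical setup: true scores ~ N(100, 15), each test adds noise ~ N(0, 10)
random.seed(0)
n = 10_000
true_score = [random.gauss(100, 15) for _ in range(n)]
test1 = [t + random.gauss(0, 10) for t in true_score]
test2 = [t + random.gauss(0, 10) for t in true_score]

# Select the group who scored extremely high on the first test
top = [i for i in range(n) if test1[i] > 130]
mean1 = sum(test1[i] for i in top) / len(top)
mean2 = sum(test2[i] for i in top) / len(top)

# On average, the extreme group's second score falls back toward the mean
print(mean1 > mean2)
```

The effect appears without any intervention: part of an extreme first score is luck (noise), and luck is unlikely to repeat on the second measurement.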