Week 3: Research Design Flashcards
Why is size important in sampling?
Size is important for generalisability, but it’s not the only factor.
How does sample size affect the reliability of questionnaires?
Size is important for reliability, but it’s not the only factor.
A statement that there is no effect or no difference, which researchers attempt to reject with sufficiently strong statistical evidence.
Null Hypothesis
A measure that indicates whether the results observed in a study are likely to be due to chance or if there is a true effect present.
Statistical Significance
What happens when a sample size is too large?
A large sample can result in detecting minuscule differences as statistically significant, even when those differences have no practical importance.
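This can be made concrete with a minimal Python sketch. The effect size (0.02 standard deviations) and the sample sizes are made-up assumptions chosen for illustration: the same tiny true difference that is invisible at n = 50 per group typically crosses the conventional z = 1.96 significance threshold at n = 100,000 per group.

```python
# Sketch: a minuscule true difference (0.02 SD) tested at two sample sizes.
# Illustrative assumption, not data from the course.
import math
import random

random.seed(42)

def z_for_mean_difference(a, b):
    """Approximate z statistic for the difference between two sample means."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def draw(n, mean):
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Same tiny true difference, two very different sample sizes.
z_small = z_for_mean_difference(draw(50, 0.02), draw(50, 0.0))
z_large = z_for_mean_difference(draw(100_000, 0.02), draw(100_000, 0.0))

print(f"n=50 per group:     z = {z_small:.2f}")   # usually well below 1.96
print(f"n=100000 per group: z = {z_large:.2f}")   # typically exceeds 1.96
```

The large-sample result would be reported as "statistically significant", yet a 0.02 SD difference is rarely of any practical importance, which is exactly the point of this card.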
A term referring to participants from Western, Educated, Industrialized, Rich, and Democratic societies, often used in psychological studies.
WEIRD Populations
A method of sampling that involves dividing the population into subgroups and sampling from each subgroup to ensure representation.
Stratified Sampling
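A minimal Python sketch of proportionate stratified sampling. The population and strata (undergrad/postgrad/staff) are illustrative assumptions, not from the cards:

```python
# Sketch: divide the population into strata, then sample from each
# stratum in proportion to its size so every subgroup is represented.
import random

random.seed(0)

def stratified_sample(population, strata_key, total_n):
    """Draw a proportionate random sample from each stratum."""
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        # Proportionate allocation (rounding may shift totals slightly
        # when proportions are not exact).
        n = round(total_n * len(members) / len(population))
        sample.extend(random.sample(members, n))
    return sample

# Hypothetical population: 70% undergrads, 20% postgrads, 10% staff.
population = (
    [{"group": "undergrad"} for _ in range(700)]
    + [{"group": "postgrad"} for _ in range(200)]
    + [{"group": "staff"} for _ in range(100)]
)

sample = stratified_sample(population, lambda p: p["group"], total_n=50)
counts = {g: sum(1 for p in sample if p["group"] == g)
          for g in ("undergrad", "postgrad", "staff")}
print(counts)  # -> {'undergrad': 35, 'postgrad': 10, 'staff': 5}
```

The sample's subgroup proportions mirror the population's, which is what guards against under-representing small groups.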
What does generalisability refer to in research?
Generalisability refers to how well findings from a sample can be applied to a larger target population.
What is a key issue with the overwhelming focus on USA college students in psychological research?
This focus may limit the understanding of human behaviour due to cultural and contextual differences present in non-WEIRD populations.
What is observational design in research?
Observational design involves the researcher looking at associations between variables without manipulating any variables.
Can you give an example of observational design?
An example of observational design is studying the relationship between relationship breakups and ice cream-from-the-tub consumption.
What characterizes experimental design?
Experimental design is characterized by the manipulation of one or more variables to examine their effects on other variable(s).
Provide an example of experimental design.
An example of experimental design is randomly assigning participants to read vignettes about relationship status and then measuring their self-reported cravings for ice cream.
Within-subjects designs
Within-subjects designs are research designs where all participants experience all conditions, allowing for the elimination of person confounds and requiring fewer participants.
Advantages of within-subjects designs
The advantages of within-subjects designs include the need for fewer participants and the elimination of person confounds.
Disadvantages of within-subjects designs
Disadvantages of within-subjects designs include sequence effects, practice effects, and interference effects.
Between-subjects designs
Between-subjects designs are research designs where each participant experiences only one condition, which reduces carryover effects from other conditions.
What are the advantages of between-subjects designs?
The advantages of between-subjects designs include the absence of carryover effects from other conditions, since each participant experiences only one condition.
Essential factors for establishing causation
The essential factors for establishing causation include co-variation (IV and DV must change together), temporal order (IV must precede DV), and ruling out alternative explanations for covariation.
What is inter-rater reliability?
The degree to which different judges independently agree upon a subjective observation.
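Agreement between judges is commonly quantified with Cohen's kappa (a statistic not named on the card itself), which corrects raw agreement for agreement expected by chance. A small Python sketch with made-up ratings:

```python
# Sketch: Cohen's kappa = (p_observed - p_chance) / (1 - p_chance).
# The two raters' category judgements below are illustrative data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Proportion of items the raters actually agreed on.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "yes", "no", "no", "no", "yes", "yes", "no"]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # -> kappa = 0.50
```

Here the raters agree on 6 of 8 items (75%), but because 50% agreement is expected by chance alone, kappa is only 0.50.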
How is internal consistency defined in research?
The degree to which all the specific items/observations in a multiple-item measure behave in the same way.
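Internal consistency is often quantified with Cronbach's alpha (the statistic is not named on the card): alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A pure-Python sketch with made-up item scores:

```python
# Sketch: Cronbach's alpha for a 3-item scale answered by 5 respondents.
# The scores are illustrative data, not from the course.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item (same respondents)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Three items whose scores move together across respondents,
# i.e. they "behave in the same way".
items = [
    [4, 5, 3, 2, 5],
    [4, 4, 3, 2, 5],
    [5, 5, 2, 1, 4],
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # -> alpha = 0.93
```

Values near 1 indicate the items behave alike; items that moved independently of each other would pull alpha toward 0.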
What does test-retest reliability measure?
The degree to which an item/scale correlates positively with itself over time.
What is external validity?
The extent to which the results of the study can be generalised to populations beyond the sample.
How is internal validity defined?
The extent to which causality can be inferred, specifically how confident we can be that any changes in the dependent variable were caused by the independent variable.
What does construct validity refer to?
The extent to which the independent variable and dependent variable in a study truly represent the construct of interest to the researcher.
The extent to which research findings can be generalized to broader populations and situations beyond the specific sample studied.
External Validity
The degree to which a study accurately establishes a causal relationship between the independent variable and dependent variable.
Internal Validity
A measure of consistency where different observers or judges agree on the assessment of a variable or phenomenon.
Inter-rater Reliability
A measure of whether all items on a test or assessment produce similar results, indicating that they are measuring the same construct.
Internal Consistency
What are examples of threats to internal validity?
Selection, history effects, social desirability, demand effects, regression toward the mean, non-specific treatment effects, placebo effects, experimenter bias, instrumentation, maturation, testing effects, observer reactivity, attrition, and confounds.
Changes in participants’ responses due to their expectations about the treatment rather than the treatment itself.
Placebo Effects
The phenomenon where individuals alter their behavior because they are aware they are being observed.
Observer Reactivity
What is the Hawthorne Effect?
A change in behavior by study participants due to their awareness of being observed in a study.