Week 4: Focusing Your Question and Choosing a Design Flashcards
The process by which a researcher strives to define variables by putting them in measurable terms
Operationalizing
Overall, this concept refers to the idea that your measurements and methodology allow you to capture what you think you are trying to measure or study
Validity
The idea that an investigation or measurement tool should yield consistent findings if researchers use the same procedures and method repeatedly
Reliability
The concept that researchers can test whether the hypothesis or claim can be proven wrong
Falsifiable
An important step in formulating your research project is crafting a specific and ____________ that will guide your investigation
Testable hypothesis
There must be a way to know whether your hypothesis is false; for example, a hypothesis that certain videos boost intelligence can be checked against evidence that the videos do not boost intelligence
Ensure Your Hypothesis is Falsifiable
A concept of Piaget’s (a cognitive developmental psychologist) that refers to a process where internal mental structures take in new information and fit it in with existing structures (schemas)
Assimilation
A concept of Piaget’s (a cognitive developmental psychologist) that refers to a process where internal mental structures change as a function of maturation and taking in new information
Accommodation
A test of implicit cognition that measures participants’ reaction time to investigate the strength of association between people’s mental representations
Implicit Association Test
A clearly specified definition of your variables, stated in observable and measurable terms
Operational definition
The use of numerical data and statistical techniques to examine questions of interest. Or, research that results in data can be numerically measured
Quantitative research
The assignment of participants to different conditions in an experiment by methods that rely on chance and probability so that potential biases related to assignment to conditions are removed
Random assignment
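As a concrete illustration, random assignment can be sketched in a few lines of Python (the function name and the two condition labels below are illustrative, not from the source material):

```python
import random

def randomly_assign(participants, conditions, seed=None):
    """Shuffle the roster by chance, then deal participants
    round-robin into the given conditions."""
    rng = random.Random(seed)          # seed is only for reproducible demos
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {cond: shuffled[i::len(conditions)]
            for i, cond in enumerate(conditions)}

groups = randomly_assign(range(20), ["experimental", "control"], seed=42)
```

Because assignment depends only on the shuffle, every participant characteristic is equally likely to end up in either condition, which is what removes assignment-related bias on average.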
In an experimental design, the set of participants that receives the intervention or treatment with the goal of determining whether the treatment impacts the outcome
Experimental group
In an experimental design, the set of participants who do not receive the experimental treatment (or who receive an inert version). This group is compared with the experimental group
Control group
A variable manipulated by the experimenter to observe its effect on a dependent variable
Independent variable
The factor of interest being measured in the experiment that changes in response to manipulation of the independent variable
Dependent variable
A measure of the degree to which the conclusions drawn from a research study can be applied to real-life situations
Ecological validity
A group of research approaches that do not attempt to manipulate or control the environment, but rather involve the researcher using a systematic technique to examine what is already occurring.
Non-experimental Methods
In a cause-and-effect relationship between two variables, both variables can act as the causal variable. For example, higher intelligence may cause one to be in a higher socio-economic bracket and being in a higher socio-economic bracket may cause one’s intelligence to increase; the effect goes both ways
Bidirectionality
Research that results in data that are non-numerical and are analyzed for meaning or patterns
Qualitative research
A theoretical perspective that examines how participants derive meaning from their lived experiences
Social Constructionism
A theoretical perspective that seeks to explore and understand how individuals are shaped by, and experience, cultural and social norms
Post-structural feminism
Research that uses both quantitative and qualitative data within a single study
Mixed methods research
A mixed methods research design that first collects and analyzes qualitative data then uses these findings to inform the quantitative data collection
Exploratory design
A mixed methods research design that first collects and analyzes quantitative data then follows up the findings with a more in-depth qualitative study
Explanatory design
A mixed methods research design that collects and analyzes the quantitative and qualitative data separately, but then integrates the results together
Convergent design
The difference between the actual or true value of what you are measuring and the result obtained using the measurement instrument
Measurement error
A general measure of the consistency of your assessment, usually measured by specific types of ________
Reliability
A measure of the consistency of results obtained using the same assessment tool on multiple occasions
Test-retest
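In practice, test-retest reliability is usually reported as the correlation between scores from the two administrations. A stdlib-only Pearson correlation sketch (the score lists are hypothetical):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 17]   # hypothetical scores, first administration
time2 = [13, 14, 10, 19, 18]  # same people, retested later
r = pearson_r(time1, time2)   # values near 1 suggest good stability
```

The same correlation machinery underlies alternate-forms reliability: correlate scores on form A with scores on form B instead of time 1 with time 2.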
A measure of the consistency of results obtained on different but equivalent forms of the same assessment tool
Alternate forms
A measure of agreement in the scores provided by two or more different raters
Inter-rater reliability
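Raw percent agreement overstates inter-rater reliability because raters can agree by chance; Cohen's kappa is a standard chance-corrected agreement statistic. A minimal sketch (the ratings and function name are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

rater_a = [1, 1, 0, 1, 0, 0, 1, 1]   # hypothetical presence/absence codes
rater_b = [1, 1, 0, 0, 0, 1, 1, 1]
kappa = cohens_kappa(rater_a, rater_b)
```

Here observed agreement is 6/8 = 0.75, but kappa is considerably lower because two raters who both code "1" most of the time would agree fairly often by chance alone.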
A measure of how much the scores on items within an assessment yield the same values: to what extent are items within the assessment correlated with each other?
Internal consistency
Within a test, this is a measure of internal consistency where the scores on half the items on an assessment are correlated with the scores on the other half of the assessment
Split half
Within a test, this is a measure of internal consistency that measures the average correlation between all items in the assessment
Cronbach’s alpha (α)
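Cronbach's alpha can be computed from the item variances and the variance of each respondent's total score: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals), where k is the number of items. A stdlib-only sketch (the 4-respondent response matrix is made up):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])                                # number of items
    item_vars = [variance(col) for col in zip(*rows)]
    total_var = variance([sum(r) for r in rows])    # per-person totals
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [[3, 4, 3],   # hypothetical: 4 respondents x 3 Likert items
             [4, 5, 4],
             [2, 3, 2],
             [5, 5, 5]]
alpha = cronbach_alpha(responses)   # high alpha: items move together
```

When items are highly intercorrelated, total-score variance far exceeds the summed item variances, so alpha approaches 1; uncorrelated items drive it toward 0.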
refers to how well the content of a test measures the construct it is intended to measure
Evidence based on test content
A construct is measured too narrowly and its measurement does not include all relevant aspects of the construct
Construct underrepresentation
A construct is measured too broadly and its measurement includes aspects that are not part of the construct
Construct irrelevant variance
The degree to which the mechanisms underlying what people think, feel, or do when responding to test items or tasks matches the construct the test or task is trying to measure
Evidence based on response processes
The degree to which test items and components match the structure of the underlying construct
Evidence based on internal structure
A statistical technique that examines the relationships between items in a scale. The approach is useful for determining whether a scale measures a single variable or multiple ones
Factor analysis
The degree to which a test/construct is related to other external variables
Evidence based on relations to other variables
The degree to which two constructs that theoretically should be related to one another are related
Convergent evidence
A proposed single, overarching dimension said to underlie all personality traits
General factor of personality
A cluster of three antisocial personality traits: Machiavellianism, narcissism, and psychopathy
Dark triad
A lack of correlation between two constructs that should not be related to one another
Discriminant evidence
The tendency of individuals to respond in a way that will be viewed favorably by others
Socially desirable responding
The degree to which test scores predict a criterion variable measured at the same point in time
Concurrent evidence
The degree to which test scores predict a criterion variable measured at a future point in time
Predictive evidence
Evidence that examines the interpretations and uses of test scores and their resulting intended and unintended consequences
Evidence based on consequences of testing
The degree to which the content of a measure assesses what it is intended to measure
Evidence based on test content
The degree to which the mechanisms underlying what people think, feel, or do when responding to items or tasks matches what was intended
Evidence based on response processes
The degree to which the items on a measure match the underlying structure of the construct
Evidence based on internal structure
The degree to which the scores on a measure are related to other external variables
Evidence based on relations to other variables
The degree to which two assessments designed to measure the same construct or behaviour actually do measure the same thing
Convergent validity
The degree to which two assessments designed to measure different constructs or behaviours are in fact measuring different things
Discriminant validity
The degree to which an assessment correlates with a future measure of a construct or behaviour
Predictive validity
The degree to which an assessment is correlated with an outcome measure at present
Concurrent validity
The degree to which a measure “appears” to assess the behaviour of interest
Face validity
The degree to which a particular variable is actually the cause of a particular outcome
Internal validity
The degree to which the conclusions drawn from the results of an investigation can be generalized to other samples or situations
External validity