Psyc232 Test 1 Flashcards
Identifying phenomena, such as why some individuals fail to stop at red lights.
Describing
Formulating hypotheses, e.g., whether wealth influences ethical behavior.
Predicting
Developing theories based on collected data to clarify observed behaviors.
Explaining
The ultimate aim of many psychological studies is to influence or change behaviors based on findings.
Controlling
Starts with a theory, formulates a hypothesis, and then collects data to test it.
Deductive Process
Begins with data collection, identifies patterns, and formulates a theory based on those patterns.
Inductive Process
An abstract concept that cannot be directly observed, such as ‘intelligence’ or ‘anxiety’.
Theoretical Construct
Refers to the tools or methods used to observe these constructs, like surveys or behavioral assessments.
Measure
How to turn a concept into something concrete that we can measure; informs study design
Operationalisation
Categorical variables without a meaningful order (e.g., gender, race).
Nominal Scale Variables
Variables with a meaningful order but no defined intervals (e.g., race finishing positions).
Ordinal Scale Variables
Variables with meaningful intervals but no true zero (e.g., temperature in Celsius).
Interval Scale Variables
Variables with a true zero, allowing for meaningful multiplication and division (e.g., weight).
Ratio Scale Variables
Consistency of a measure over time; repeated tests yield similar results.
Test-Retest Reliability
Consistency across different observers; different raters should produce similar results.
Inter-Rater Reliability
Consistency across different forms of a test; different versions should yield similar outcomes.
Parallel Forms Reliability
Consistency of results across items within a test; items should correlate well with each other.
Internal Consistency Reliability
The three core principles of human ethics in psychology
Respect/Autonomy, Beneficence, and Justice
In logical reasoning, when people are asked to decide whether a particular argument is logically valid (i.e., the conclusion would be true if the premises were true), they tend to be influenced by the believability of the conclusion, even when they shouldn’t be
Belief Bias Effect
What do we want to know? How do we know what we know? What do we do with what we know?
Epistemology
A systematic way to organise data, results, and information to explain a phenomenon
Theory
The process of how we test predictions
Method
Does the measurement match up to the theory? e.g., theory of the poll: the response options should be the options that people actually have when they vote
Construct Validity
Is this the best design to answer the question? Can we trust what the measure says? e.g., a poll response of “giant meteor hitting the earth” doesn’t reflect what respondents actually want
Internal Validity
Does it associate with the things it should in the world? Does it make sense with what we see in the world? e.g., Hillary Clinton did not win the election, despite what the polls suggested
External Validity
Participants respond in ways they think:
1. the researcher wants/hypothesizes, or
2. are acceptable or desirable under sociocultural norms
Demand Characteristics and Social Desirability
Dealing with Demand/Desirability
- “Double blind” measures; ensure confidentiality or anonymity wherever possible; emphasize there are no right or wrong answers
- Check data for external validity with other sources
- Use methodologies that counter (or capitalize on) demand/desirability characteristics
People tend to agree more commonly and more strongly than they tend to disagree
Acquiescence Bias
Dealing with Acquiescence Bias
- Use multiple items to average into one scale (see the reverse-coding sketch below)
- Good reverse-wording avoids “not/never”, e.g., Extroversion/Introversion: “I talk to a lot…”/“I keep in the background”
- Bonus: you can also remove the mid-point of the scale to avoid “neutrality bias”, but then be extra careful about acquiescence (people tend to agree), or participants may get annoyed because they can’t answer how they want
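A minimal sketch of reverse-coding and averaging multiple items into one scale, assuming NumPy is available; the item order and responses below are hypothetical.

```python
import numpy as np

# Hypothetical 1-5 Likert responses; columns are extroversion items:
# [talkative, keeps in background (reverse), outgoing, quiet (reverse)]
responses = np.array([
    [5, 1, 4, 2],
    [2, 4, 3, 5],
    [4, 2, 5, 1],
])

reverse_items = [1, 3]                    # reverse-worded columns
scored = responses.astype(float)
scored[:, reverse_items] = 6 - scored[:, reverse_items]  # flip a 1-5 scale: 6 - x

extroversion = scored.mean(axis=1)        # average the items into one scale score
print(extroversion)
```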
Used to check the consistency of items that we want to average together into a scale
Cronbach’s Alpha test
Above .70 = acceptable; above .80 = good; above .90 = excellent
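A minimal sketch of how Cronbach's alpha can be computed from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale), assuming NumPy; the function name and respondent data are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 4 Likert items (1-5)
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(cronbach_alpha(responses))  # compare against the .70 / .80 / .90 benchmarks
```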
Exposure to a question/answering a question influences answers and interpretation of subsequent questions
Priming
“Affective priming” when answering questions about values, morals or attitudes (e.g., “good/bad”, ”love/hate”)
Dealing with Priming
- Move impactful questions to the end of the survey or have a “distraction task” in between scales
- Randomize or counterbalance question order (this only averages the error across participants; see the sketch below)
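A minimal sketch of randomizing vs. counterbalancing question order across participants, using Python's built-in random module; the question labels are hypothetical.

```python
import random

questions = ["moral_attitudes", "political_values", "demographics"]

def randomized_order(seed: int) -> list[str]:
    """Give each participant an independently shuffled question order."""
    rng = random.Random(seed)
    order = questions.copy()
    rng.shuffle(order)
    return order

def counterbalanced_order(participant_index: int) -> list[str]:
    """Rotate the order so each question leads equally often across participants."""
    shift = participant_index % len(questions)
    return questions[shift:] + questions[:shift]

for p in range(3):
    print(p, randomized_order(p), counterbalanced_order(p))
```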
The extent to which your measurement is consistent
Reliability
Looks at people’s responses to the items and groups them to make the best summary of the items (the groups are called “components”)
Principal Components Analysis (PCA)
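A minimal sketch of running a PCA on survey items, assuming scikit-learn and NumPy are available; the response data are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical responses: 6 participants x 4 survey items (1-5 Likert)
responses = np.array([
    [5, 4, 1, 2],
    [4, 5, 2, 1],
    [1, 2, 5, 4],
    [2, 1, 4, 5],
    [5, 5, 1, 1],
    [1, 1, 5, 5],
])

pca = PCA(n_components=2)             # ask for the two best summary components
scores = pca.fit_transform(responses)
print(pca.explained_variance_ratio_)  # proportion of variance each component captures
print(pca.components_)                # loadings: how items group onto each component
```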
Where you try different options in the data, method, or analysis until you get a significant p-value
P-hacking
Planning and documenting the (1) hypothesis or research questions, (2) method, and (3) analyses before collecting data
Pre-Registration
Hypothesising after the results are known: run a study, see the result, and then write the hypothesis. Damaging to continuity (pattern identification and generalisation are moved to the wrong place); the result should have been reported as an exploratory finding
HARKing
Each person in a population has an equal chance of being chosen
Probability Sampling/Random Sampling
Divide the population into relevant groups, then randomly select people
Stratified Sampling
Specifically selected person/people
Purposive Sampling/Targeted Sampling
Whoever’s available
Convenience Sampling
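A minimal sketch contrasting probability/random sampling with stratified sampling, assuming pandas; the population, strata, and sample sizes are hypothetical.

```python
import pandas as pd

# Hypothetical population of 100 people in two strata
population = pd.DataFrame({
    "person_id": range(1, 101),
    "group": ["undergrad"] * 70 + ["postgrad"] * 30,
})

# Probability/random sampling: every person has an equal chance of being chosen
random_sample = population.sample(n=20, random_state=1)

# Stratified sampling: divide into relevant groups, then randomly select within each
stratified_sample = population.groupby("group").sample(frac=0.2, random_state=1)

print(random_sample["group"].value_counts())      # strata proportions may vary by chance
print(stratified_sample["group"].value_counts())  # strata proportions are preserved
```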
Concerned with reality and truth seeking
e.g., what is existence, what is depression
Ontology
Concerned with the nature of knowledge and the methods of learning
e.g., what do you know, how do you know it
Epistemology
Concerned with the framework for data collection: how are you going to answer the question?
e.g., empiricism, kaupapa Māori, interface research
Methodology
Describes specifically how data are collected; the tools: in what ways will you collect data?
e.g., surveys, interviews, experiments
Methods
Reality exists independently of our knowledge and experience of it. Remove yourself: personal values/beliefs cannot and should not impact your perception of reality
“third person perspective”
Objective Ontology
Reality only exists when we experience it and give meaning to it. Put yourself in: a non-Western approach (e.g., kaupapa Māori) claims we cannot and should not extract ourselves from research; beliefs and values are important to how we analyse data
“first person perspective”
Subjective Ontology
Focuses on a specific group; can produce knowledge that generalises, but that is not the specific goal; does not need lots of people to take part; more exploratory in nature; doesn’t necessarily make theories before starting; subjective; a fluid method with guidelines rather than strict instructions; time-consuming
Qualitative
Wider population; fixed method; quick, easier, and less expensive
Quantitative