Chapters 2 & 3: Personality Methods & Assessment Flashcards
Research
exploration of the unknown
finding out something that nobody knew before one discovered it
Funder’s Second Law
there are no perfect indicators of personality
there are only clues, and clues are always ambiguous
Funder’s Third Law
something beats nothing, two times out of three
S Data
self-judgements, or ratings that people provide of their own personality attributes or behavior
Face Validity
the degree to which an assessment instrument, such as a questionnaire, on its face appears to measure what it is intended to measure
for example, a face-valid measure of sociability might ask about attendance at parties
Self-Verification
the process by which people try to bring others to treat them in a manner that confirms their self-conceptions
I Data
informants’ data, or judgements made by knowledgeable informants about general attributes of an individual’s personality
Judgements
data that derive, in the final analysis, from someone using his or her common sense and observations to rate personality or behavior
Expectancy Effect
the tendency for someone to become the kind of person others expect him or her to be
also known as a self-fulfilling prophecy and behavioral confirmation
Behavioral Confirmation
the self-fulfilling prophecy tendency for a person to become the kind of person others expect them to be
also called the expectancy effect
L Data
life data, or more-or-less easily verifiable, concrete, real-life outcomes, which are of possible psychological significance
B Data
behavioral data, or direct observations of another’s behavior that are translated directly into numerical form
B data can be gathered in natural or contrived (experimental) settings
Reliability
in measurement, the tendency of an instrument to provide the same comparative information on repeated occasions
Measurement Error
the variation of a number around its true mean due to uncontrolled, essentially random influences
also called error variance
State
a temporary psychological event, such as an emotion, thought, or perception
Trait
a relatively stable and long-lasting attribute of personality
Aggregation
the combining together of different measurements, such as by averaging them
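Why aggregation improves measurement can be sketched in a few lines of Python. This is an illustrative simulation only: the "true score" of 5.0 and the error magnitude of 2.0 are assumptions invented for the example, not values from the text.

```python
import random

random.seed(42)

TRUE_SCORE = 5.0  # hypothetical "true" trait level (assumed for illustration)

def one_measurement():
    # A single measurement is the true score plus random measurement error.
    return TRUE_SCORE + random.gauss(0, 2.0)

# Averaging many measurements cancels out much of the random error,
# leaving an aggregate that falls close to the true score.
aggregated = sum(one_measurement() for _ in range(10_000)) / 10_000
```

Any single measurement may miss the true score by a wide margin, but the random errors tend to cancel when many measurements are combined.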
Spearman-Brown Formula
in psychometrics, a mathematical formula that predicts the degree to which the reliability of a test can be improved by adding more items
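The formula can be written as a small Python function. The standard Spearman-Brown prophecy formula is r' = n·r / (1 + (n − 1)·r), where r is the current reliability and n is the factor by which the test is lengthened; the .50 example below is illustrative.

```python
def spearman_brown(reliability, n):
    """Predicted reliability of a test whose length is multiplied by n.

    reliability: reliability of the current test (between 0 and 1)
    n: factor by which the number of items is multiplied
    """
    return n * reliability / (1 + (n - 1) * reliability)

# Doubling a test whose reliability is .50 is predicted to raise
# its reliability to about .67.
doubled = spearman_brown(0.5, 2)
```

Note that gains diminish: each additional set of items raises predicted reliability by less than the last.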
Psychometrics
the technology of psychological measurement
Validity
the degree to which a measurement actually reflects what it is intended to measure
Construct
an idea about a psychological attribute that goes beyond what might be assessed through any particular method of assessment
Construct Validation
the strategy of establishing the validity of a measure by comparing it with a wide range of other measures
Generalizability
the degree to which a measurement can be found under diverse circumstances, such as time, context, participant population, and so on
in modern psychometrics, this term includes both reliability and validity
Case Method
studying a particular phenomenon or individual in depth both to understand the particular case and to discover general lessons or scientific laws
Experimental Method
a research technique that establishes the causal relationship between an independent variable (x) and dependent variable (y) by randomly assigning participants to experimental groups characterized by differing levels of x, and measuring the average behavior (y) that results in each group
Correlational Method
a research technique that establishes the relationship (not necessarily causal) between two variables, traditionally denoted x and y, by measuring both variables in a sample of participants
Scatter Plot
a diagram that shows the relationship between two variables by displaying points on a two-dimensional plot
usually the two variables are denoted x and y, each point represents a pair of scores, and the x variable is plotted on the horizontal axis while the y variable is plotted on the vertical axis
Correlation Coefficient
a number between -1 and +1 that reflects the degree to which one variable, traditionally called y, is a linear function of another, traditionally called x
a negative correlation means that as x goes up, y goes down
a positive correlation means that as x goes up, so does y
a zero correlation means that x and y are unrelated
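The correlation coefficient can be computed from scratch in Python. This is a standard Pearson r calculation (covariance divided by the product of the standard deviations); the example data are made up for illustration.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)
```

A perfectly linear increasing relationship yields r = +1, a perfectly linear decreasing one yields r = -1.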
Objective Test
a personality test that consists of a list of questions to be answered by the subject as true or false, yes or no, or along a numeric scale
Factor Analysis
a statistical technique for finding clusters of related traits, tests, or items
p-Level
in statistical data analysis, the probability that the obtained correlation or difference between experimental conditions would be expected by chance
Type I Error
in research, the mistake of thinking that one variable has an effect on, or relationship with, another variable, when it really does not
Type II Error
in research, the mistake of thinking that one variable does not have an effect on or relationship with another, when it really does
Effect Size
a number that reflects the degree to which one variable affects, or is related to, another variable
Binomial Effect Size Display (BESD)
a method for displaying and understanding more clearly the magnitude of an effect reported as a correlation, by translating the value of r into a 2x2 table comparing predicted with obtained results
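The BESD translation is simple arithmetic: the two "success rates" in the 2x2 table are .50 + r/2 and .50 − r/2. A minimal Python sketch:

```python
def besd(r):
    """Binomial Effect Size Display: translate a correlation r into
    the success rates of two groups in a hypothetical 2x2 table."""
    return 0.5 + r / 2, 0.5 - r / 2

# For example, a correlation of .30 corresponds to success rates
# of about 65% in one group versus 35% in the other.
high, low = besd(0.30)
```

This display makes a seemingly small correlation easier to interpret: r = .30 is the difference between 65% and 35%.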
Replication
doing a study again to see if the results hold up
replications are especially persuasive when done by different researchers, in labs other than the one that produced the original study
Publication Bias
the tendency of scientific journals to preferentially publish studies with strong results
Questionable Research Practices (QRPs)
research practices that, while not exactly deceptive, can increase the chances of obtaining the result the researcher desires
p-Hacking
analyzing data in various ways until one finds the desired results
Open Science
a set of emerging principles intended to improve the transparency of scientific research, including fully reporting all methods and variables used in a study, reporting studies that failed as well as those that succeeded, and sharing data among scientists
What is Funder’s Second Law?
there are no perfect indicators of personality; there are only clues, and clues are always ambiguous
the clues may be wrong
What is Funder’s Third Law?
something beats nothing, two times out of three
need evidence, not just a gut feeling
What are the advantages of S-Data?
high face validity
large amount of data
access to thoughts, feelings, and intentions
definitional truth: for some attributes, such as self-worth, the person’s own self-judgement is the defining criterion
causal force
simple and easy
What are the disadvantages of S-Data?
bias
error
too simple and easy (overuse)
What is face validity?
the degree to which an assessment instrument appears to measure what it is intended to measure
What factors reduce reliability?
random variation or mistakes, which result in measurement error
What factors reduce validity?
systematic influences that push measurements toward one particular result, which produce bias
What are the advantages of I-Data?
real-world basis
large amount of data
common sense
definitional truth
causal force
What are the disadvantages of I-Data?
limited information
lack of access to private experience
bias
error
What are the advantages of L-Data?
not prone to bias
intrinsic importance
psychological relevance
What are the disadvantages of L-Data?
multi-determinism: some outcomes are determined by many factors, not just personality
may lack relevance or be hard to access
records may be incomplete or inaccurate
What are the advantages of B-Data?
wide range of contexts
appearance of objectivity
What are the disadvantages of B-Data?
difficult and expensive
uncertain interpretation
How is S data differentiated from B data?
S data: asking people about themselves
B data: performance based assessments, people demonstrate behaviors, not report them
What is reliability?
consistency in measurement
What is test-retest reliability?
consistency over time
only for traits, not for states
What is inter-rater reliability?
consistency among observers
What is internal consistency?
consistency across items all measuring the same thing
What is validity?
the degree to which a measurement actually measures what it is supposed to
What is construct validity?
establish that our measure is related to the construct we want to measure
What is criterion validity?
the degree to which a measure predicts a relevant outcome or behavior (the criterion)
What is convergent validity?
the measure should agree with other measures of the same or related constructs; for example, someone who scores high on extraversion should score low on introversion
What is discriminant validity?
the measure should not reflect unrelated constructs; for example, make sure we are measuring intelligence, not just how much the child talks
What is Funder’s Fourth Law?
there are only two kinds of data; terrible data and no data
the potential shortcomings of all kinds of data are precisely what require researchers to always gather every kind they possibly can
What are the four conditions of rational test construction?
items mean the same thing to the test taker and creator
capability for accurate self-assessment
willingness to make an accurate and undistorted report
items must be valid indicators of what is being measured
What are experimental methods?
manipulate at least one variable
can demonstrate causality
determine whether one variable can affect another
not possible (or ethical) with certain variables
What are correlational methods?
measure variables rather than manipulating them
can’t demonstrate causality
show whether, and how strongly, variables are related
What is a representative design?
sampling situations and stimuli, not just participants, so that results show:
generalizability across people
generalizability across methods/designs
consistency in patterns
What is the p level?
the probability that the obtained correlation or difference between conditions would be expected by chance
the probability of getting the result found, if the null hypothesis were true
What are the problems with significance testing?
the logic is difficult to describe (and understand)
the criterion for significance is an arbitrary rule of thumb
nonsignificant results are sometimes misinterpreted to mean “no result” or no relationship or difference
only provides information about the probability of one type of error
What is the effect size?
strength of association
more meaningful than a significance (p) level
many effect-size measures have been developed
correlation coefficient or Cohen’s d
can be used for predictions
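One common effect-size measure mentioned above, Cohen’s d, can be sketched in Python. This uses the standard pooled-standard-deviation formula; the sample data in the test are invented for illustration.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardized difference between two group means,
    using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

Unlike a p-level, d expresses how large a group difference is in standard-deviation units, regardless of sample size.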
How can we make research more dependable?
use larger numbers of participants
disclose all methods
share data
report studies that don’t “work”, and explore your data
never regard one study as conclusive proof of anything