Midterm I Flashcards
What are the 2 categories of variables?
Measured and manipulated
What are the 3 basic types of variables?
- Independent
- Dependent
- Control
What is an independent variable?
Typically a manipulated variable used to evaluate its effect on what is being measured (the DV). It can also be a measured variable.
In multiple-regression analysis, a predictor variable is used to explain variance in the criterion variable (DV).
What is a dependent variable?
The variable that is being measured.
In multiple-regression analysis, it is the single outcome: the criterion variable that researchers are most interested in studying or predicting.
What is operationalization?
The process of turning a construct of interest into a measured or manipulated variable; it specifies how to measure the conceptual variable. Each variable needs two definitions: a conceptual definition and an operational definition.
What is a conceptual definition?
Researcher’s definition of variable at theoretical level.
What is the difference between categorical and quantitative variables?
Categorical: consists of discrete categories (e.g., sex)
Quantitative/continuous: coded with meaningful numbers (e.g., height, IQ, brain activity)
What are the 3 sub-types of quantitative variables/ levels of measurement? (hint: think of scales)
Ordinal: Rank order; the distance between adjacent ranks is not necessarily equal
Interval: Equal distances between adjacent values, but no true zero (e.g., IQ scale); assumes linearity
Ratio: Equal intervals between values and a true zero (e.g., number of exam questions answered correctly); the scale with the most properties
What is reliability?
Refers to how consistent the results of a measure are
What is validity?
Refers to whether the variable was measured adequately, measuring what it is supposed to measure.
What are 2 statistical devices researchers can use to test reliability?
Scatterplots and the correlation coefficient r, because assessing reliability involves an association claim (the measure should correlate with itself).
What are the 3 kinds of reliability?
Test-retest reliability: Participants get a similar score each time the measure is repeated
Interrater reliability: Consistent scores are obtained no matter who measured the variable
Internal reliability: Participant gives consistent pattern of answers, no matter how questions are phrased
What type of reliability is being assessed with this question: Does it correlate with itself?
Internal reliability
What type of reliability is being assessed with this question: Does it correlate with itself on two occasions?
Test-retest reliability
What type of reliability is being assessed with this question: Do the observers’ scores correlate with each other?
Interrater reliability
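Internal reliability is often quantified with Cronbach's alpha (not named on the cards above, but a common statistic for "does it correlate with itself?"). A minimal sketch in Python, with invented example scores:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a participants-by-items score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 4 participants answering 3 related items.
data = [[2, 3, 2],
        [4, 4, 5],
        [3, 3, 3],
        [5, 4, 5]]
print(round(cronbach_alpha(data), 2))  # -> 0.92, a consistent pattern of answers
```

Values closer to 1 indicate that participants answer the items consistently.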
What does the r coefficient identify?
The strength and direction of a relationship
What does an r of 0 indicate?
That there is no relationship between the 2 variables
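The r coefficient can be computed directly; a quick sketch with NumPy (the data here are invented for illustration):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 1, 4, 3, 5])  # roughly increases with x

# np.corrcoef returns a 2x2 correlation matrix; entry [0, 1] is r for x and y.
r = np.corrcoef(x, y)[0, 1]
print(r)  # -> 0.8, a positive relationship
```

A positive r means the variables rise together; a negative r means one falls as the other rises; 0 means no linear association.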
Is it true that for a measure to be valid, it must be reliable?
True, but the reverse does not hold: a reliable measure is not necessarily valid.
What should good validity do?
Demonstrate that you are measuring the intended construct and not something else.
What are the 9 types of validity? Explain each
Internal: Are there better alternative explanations?
External: How representative is your sample? (do the results generalize beyond the study?)
Construct: Does your measure capture the right thing? (includes face, content, criterion, convergent and discriminant)
Face: At face value, does it seem to be valid?
Statistical: How big is your effect?
Content: Did you capture all aspects?
Criterion: Does it relate well to a concrete outcome?
Convergent: Does it relate to other things it should?
Discriminant: Does it relate to other things it shouldn’t?
Why is it important to think about the order effects in questionnaires?
Because order effects can be a threat to internal validity in a within-groups design: exposure to one condition changes responding in later conditions.
They mostly affect the quality of people's responding.
What are the two biggest problems with response sets?
Response sets are shortcut answering styles, often used to finish more quickly:
- Acquiescence/yea-saying: The tendency to answer positively to everything. Threatens construct validity by effectively turning the questionnaire into a measure of the tendency to agree rather than of the construct.
- Fence-sitting: Selecting neutral responses. Weakens construct validity in the same way as yea-saying, but can be hard to differentiate from someone who really does hold a neutral opinion.
What could be solutions to prevent response sets?
- Use forced-choice questions
- Use reverse-worded questions
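Reverse-worded items must be reverse-scored before totaling, so that agreeing with a negatively worded item counts the same as disagreeing with a positively worded one. A minimal sketch for a 1-5 Likert scale (the responses are hypothetical):

```python
def reverse_score(raw, scale_min=1, scale_max=5):
    """Flip a Likert response: 1 <-> 5, 2 <-> 4, 3 stays 3."""
    return scale_min + scale_max - raw

# Hypothetical raw answers to items 1-3; item 2 is reverse-worded.
responses = [4, 2, 5]
responses[1] = reverse_score(responses[1])  # 2 becomes 4
total = sum(responses)
print(total)  # -> 13
```

A yea-sayer who answers 5 to every item, including the reverse-worded ones, ends up with a middling total rather than an inflated one, which is why this technique helps detect acquiescence.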
Name 2 types of faking techniques
Social desirability: Trying to look better in the eyes of others
Malingering: Trying to look bad or worse off. (Mentioning that the survey is anonymous can help reduce faking.)
Can self-reports be trusted? Why?
Yes, people can generally be trusted; you just need well-designed questionnaires. Think carefully about the questions and their potential limitations.
What is needed for a questionnaire to be considered useful?
Good design:
- Captures the construct well
- Appropriate number of items
Good answers, checking for:
- Order effects
- Response sets (similar answers across items help identify response sets)
- Faking
What are bivariate correlations/ linear correlations? What are their primary features?
- Exactly two variables
- Both variables are measured
- The minimum value is -1
- The maximum value is +1
- A value of 0 indicates no association between the variables
What is the range for interpreting a Pearson Correlation Coefficient (r)?
- As low as ± .10 shows a small or weak association
- As low as ± .30 shows a medium or moderate association
- As low as ± .50 shows a large or strong association
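These benchmarks can be expressed as a small helper (the cutoffs follow the card above; the function name is my own):

```python
def effect_size_label(r):
    """Label the strength of a correlation by its absolute value."""
    magnitude = abs(r)  # sign gives direction, not strength
    if magnitude >= 0.50:
        return "large/strong"
    if magnitude >= 0.30:
        return "medium/moderate"
    if magnitude >= 0.10:
        return "small/weak"
    return "negligible"

print(effect_size_label(-0.42))  # -> medium/moderate (sign does not affect strength)
```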
If the line of best fit does not seem to work with the data, is the correlation still valid?
No; r describes only linear relationships, so it is misleading when the pattern in the data is not linear.
What are the 3 causal criteria?
- Covariance of cause and effect: Results must show a correlation, or association, between the cause variable and effect variable
- Temporal precedence: The method must ensure that the cause variable preceded the effect variable; must come first in time
- Internal validity: No other plausible alternative explanations for the relationship between the 2 variables.
What are outliers? Can they be removed from a study?
Outliers are unusual observations or scores that differ markedly from the rest. They can be removed, but the removal must be justified (e.g., a recording error); unjustified removal can bias the sample in the same way attrition or regression to the mean can. Removal is often unnecessary in a large sample, where a single outlier has little influence.