Final Stuff Flashcards
Random Sampling
A sample in which everyone in your sampling frame has an equal chance of being selected
Simple Random Sample
A basic sampling method where a group of subjects (sample) is selected for study from a larger population, and each member of the population has an equal chance of being selected
Stratified sample
A type of sampling that uses a technique in which different subcategories of a sample are identified and then randomly selected
Proportional Stratified Random Sample
A type of sampling that uses a technique in which different subcategories of a sample are identified and then selected PROPORTIONAL to the population
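The proportional allocation can be sketched in code. This is a minimal pure-Python illustration, not from any library; the strata names and sizes are made up, and `proportional_stratified_sample` is a hypothetical helper:

```python
import random

# Hypothetical population: each member is tagged with a subcategory (stratum).
# The 40/35/25 split is made up for illustration.
population = (
    [("freshman", i) for i in range(40)]     # 40% of population
    + [("sophomore", i) for i in range(35)]  # 35%
    + [("senior", i) for i in range(25)]     # 25%
)

def proportional_stratified_sample(pop, sample_size, rng=random):
    """Draw randomly from each stratum, in proportion to its population share."""
    strata = {}
    for stratum, member in pop:
        strata.setdefault(stratum, []).append(member)
    sample = []
    for stratum, members in strata.items():
        k = round(sample_size * len(members) / len(pop))
        sample.extend((stratum, m) for m in rng.sample(members, k))
    return sample

sample = proportional_stratified_sample(population, 20)
# With a 40/35/25 split and n = 20, the strata contribute 8, 7, and 5 draws
```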
Cluster Sampling
A type of sampling in which clusters, or groups, are identified that are representative of the entire population, and then sampled randomly within each cluster, letting each cluster represent the population
Nonrandom Sampling
A sample that is not selected at random and therefore is not generalizable to the population
Convenience Sample
A group of people that is easy to access
Volunteer Sample
Consists of people who are willing to volunteer for a study
Snowball Sampling
Study participants make referrals to other potential participants
Reliability
The ability of a measure to produce the same results if replicated
Validity
Accuracy of a measure, in terms of measuring intended constructs or observations
Test-retest Reliability
A reliability method in which the same measure is given to the same people at two different times
Alternate form reliability
A reliability method to determine if the order in which the items in a measure are presented affect the ways in which people respond
Split-half reliability
A means of evaluating internal consistency of a scale that compares one randomly selected half of the scale with the other half
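The split-half procedure can be sketched in pure Python: randomly split the items into two halves, total each half per respondent, and correlate the two sets of totals. The scale data below are made up for illustration:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores, seed=0):
    """item_scores: one row per respondent, one column per scale item.
    Randomly split the items into two halves and correlate the half-totals."""
    n_items = len(item_scores[0])
    items = list(range(n_items))
    random.Random(seed).shuffle(items)
    half_a, half_b = items[: n_items // 2], items[n_items // 2 :]
    totals_a = [sum(row[i] for i in half_a) for row in item_scores]
    totals_b = [sum(row[i] for i in half_b) for row in item_scores]
    return pearson_r(totals_a, totals_b)

# Made-up 4-item scale answered by five respondents
scores = [
    [5, 4, 5, 4],
    [2, 2, 1, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
]
r = split_half_reliability(scores)  # high r = internally consistent scale
```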
Item total reliability
A means of evaluating internal consistency of a scale that compares the total score for a scale with individual items for the same scale
Inter-coder reliability
An indicator of how similarly coders are coding content, both in terms of identifying units of analysis and in the contextual labels they ascribe to those units
Reliability Statistics examples
Cronbach’s Alpha (internal consistency; Worldview 1), Scott’s Pi (inter-coder reliability; Worldview 2)
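As a sketch of how Cronbach’s Alpha is computed, here is the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), applied to made-up scale data:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(item_scores):
    """item_scores: one row per respondent, one column per scale item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores[0])
    columns = list(zip(*item_scores))        # transpose rows to item columns
    totals = [sum(row) for row in item_scores]
    item_var = sum(variance(col) for col in columns)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Made-up 4-item Likert scale, five respondents
scores = [
    [5, 4, 5, 4],
    [2, 2, 1, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
]
alpha = cronbach_alpha(scores)  # close to 1.0 = highly consistent items
```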
Face Validity
A type of validity consideration in which measures, or procedures, are looked at and questioned if they make sense at face value
Criterion Validity
A type of validity consideration that deals with how a particular measure holds up when compared to some outside criterion
2 types:
-Predictive Validity & Concurrent Validity
Predictive Validity
How well a measure predicts something will happen in the future.
Concurrent Validity
How well a scale measures up against another scale that has been demonstrated to measure the exact same thing
Convergent Validity
When two measures you expect to be related are shown to be positively statistically related
e.g., Attitude toward Same-Sex Marriage & Attitude toward Homophobia measures end up being positively related
Discriminant Validity
When two measures you expect to be negatively related (opposite to each other) are shown to have a negative statistical relationship
History (validity)
A validity issue where something totally unrelated to your study happened at a particular time, and may have affected the responses
Maturation (Validity)
A validity issue dealing with the fact that subjects can change over time, which can affect their responses to the measures a researcher is interested in
Testing (Validity)
A validity threat where if someone is more familiar or more comfortable with a series of questions or items, they may respond differently
Instrumentation (Validity)
A validity threat that deals with differences that are observed at two different times in which different instruments are used
External Validity
Can your findings be generalized from your sample to the population you claim to represent?
Proper sampling ensures that you are appropriately representing whoever you say you are representing
Z-test (when to use it)
- A large “n” (sample size) (technically 30+, but Nick says truly 100+)
- Knowledge of the population parameters
- A randomly selected sample
- Normally distributed DV
- Interval/Ratio DV
- Nominal IV
- The standard deviation of the population is given
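Given those conditions, the z statistic is simple to compute by hand. A minimal sketch with made-up numbers (a known population mean of 100 and SD of 15, which are assumptions for illustration):

```python
import math

def z_test(sample_mean, pop_mean, pop_sd, n):
    """One-sample z-test: requires the population mean and SD to be known.
    Returns the z statistic and a two-tailed p-value (normal CDF via erf)."""
    z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Made-up example: n = 100 scores averaging 103, against a known
# population mean of 100 with SD 15
z, p = z_test(sample_mean=103, pop_mean=100, pop_sd=15, n=100)
# z = 3 / (15 / 10) = 2.0; p ≈ 0.046, significant at the .05 level
```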
T-test (in general)
- Used when:
  - You want to compare two groups
  - Your distribution is normal-ish
- Parametric test
  - You have an Interval or Ratio DV
  - The t distribution changes shape depending on the degrees of freedom you have
- Degrees of Freedom
  - df = n - k
    - n = the number of events observed
    - k = the number of independent samples being compared
  - The degrees of freedom in our sample tell us the shape of our probability curve
    - Takes out random variation so we don’t overextend our parameters
    - Adjusts the probability curve so we can see if a result is statistically significant
    - This provides us with a specific critical value based on our sample size
One sample t-test
Compares a sample to a population
Independent Samples T-test
Compares two samples that cannot overlap
e.g., the effect of derogatory campaign ads on Republicans and Democrats
Dependent Samples T-test
Looks at the differences between two matched samples
e.g., how privates communicate with their officers across different squads of the army
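The two-sample cases above share the same machinery. Here is a minimal pure-Python sketch of the independent-samples (pooled-variance) and dependent-samples t statistics, with made-up scores; looking the resulting t up against a critical value still requires a t table or a stats package:

```python
def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    """Sample variance (n - 1 denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def independent_t(a, b):
    """Pooled-variance independent samples t; df = n - k = (n1 + n2) - 2."""
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * var(a) + (n2 - 1) * var(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return t, n1 + n2 - 2

def dependent_t(pre, post):
    """Paired (dependent samples) t on the per-pair differences; df = n - 1."""
    d = [x - y for x, y in zip(pre, post)]
    t = mean(d) / (var(d) / len(d)) ** 0.5
    return t, len(d) - 1

# Made-up attitude scores for two non-overlapping groups
t_ind, df_ind = independent_t([5, 6, 7, 5, 6], [3, 4, 3, 5, 4])
# Made-up matched before/after scores for the same four people
t_dep, df_dep = dependent_t([4, 5, 6, 5], [6, 7, 7, 8])
```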
Univariate Anova
Is sometimes called a “one-way” ANOVA
- Allows us to compare multiple groups, defined by variation in a nominal IV, at the same time
- The DV is still continuous (Ratio or Interval)
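The one-way ANOVA F statistic can be sketched directly from its definition (between-group mean square over within-group mean square), using made-up DV scores for three groups:

```python
def one_way_anova_f(*groups):
    """F = between-group mean square / within-group mean square.
    df_between = k - 1, df_within = N - k."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    f = (ss_between / (k - 1)) / (ss_within / (n_total - k))
    return f, k - 1, n_total - k

# Made-up continuous DV scores for three groups defined by a nominal IV
f, df_b, df_w = one_way_anova_f([2, 3, 2, 3], [5, 6, 5, 6], [8, 9, 8, 9])
```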
Factorial Anova
Allows for multiple IVs or moderators
- Multiple IVs (or an IV + Moderator) that interact and have a simultaneous impact on the Dependent Variable
- The higher the test statistic, the stronger the relationship