exam 2 Flashcards
concept
an idea that can be named, defined, and eventually measured
conceptualization
process of precisely defining ideas and turning them into variables
operationalization
turning abstract concepts into measurable observations
categorical variables
have a finite set of possible values
no known distances between values
includes nominal and ordinal variables
nominal
catalog states or statuses that are parallel and cannot be ranked or ordered
ordinal
have categories that can be ordered in some way, but the distance between the values is not known
example of ordinal variables
clothing sizes ranging from XS to XL
continuous variables
have an infinite set of possible values
values have fixed distances between them
includes interval and ratio variables
interval
have a continuum of values with meaningful distances between them, but no true zero. the values can be compared directly, but they cannot be used in proportions or mathematical operations.
example of interval variables
SAT score or temperature
ratio
are interval variables that do have a true zero; the distances between values can be measured, and values can be expressed as proportions
dimensions
manifestations, angles, or units of the concept
indicators
the values assigned to a variable to provide the blueprint for measurement
unit of analysis
the level of social life about which we want to generalize
individual unit of analysis
used to refine our understanding of the ties that bind individuals together into a society
group unit of analysis
how social structure and forces affect whole categories of people on the basis of race, class, or gender
organization or institution unit of analysis
can be used to understand how corporations impact various aspects of social and economic life
society unit of analysis
used for understanding how larger social structures shape us
mismatches between units of analysis
when one unit of analysis does not translate into another, the results of a study can be invalidated
ecological fallacy
a mistake that researchers make by drawing conclusions about the micro level based on some macro level analysis
reductionism
a mistake that researchers make by drawing conclusions about the macro level based on analyses of micro-level data
four basic forms of measurements
reports
observation
artifact counts/assessments
manipulation
reports (open-ended and close-ended questions)
open-ended allow subjects to respond in their own words
close-ended have preset response categories
observation
the process of seeing, recording, and assessing social phenomena
artifact counts/assessments
measurements based on counting or evaluating the physical objects and traces that people leave behind
manipulation
measuring a concept by deliberately changing something in subjects' circumstances and observing how they respond
calculating reliability: cronbach’s alpha and internal reliability
measures a specific kind of reliability for a composite variable
how to assess a cronbach alpha score
score of 0 would mean no reliability, or that items are not tapping a common underlying phenomenon
score of 1 would mean perfect reliability, though this is rarely achieved in practice
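The formula behind the score, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), can be sketched in Python. The item scores below are made-up illustration data, and `cronbach_alpha` is a hypothetical helper name, not a standard library function:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a composite measure.

    items: list of k lists, each holding one item's scores
    across the same respondents.
    """
    k = len(items)
    item_variances = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    total_variance = statistics.variance(totals)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Three items that mostly rise and fall together across five
# respondents, so they appear to tap a common underlying phenomenon.
items = [
    [2, 4, 3, 5, 1],
    [3, 4, 3, 5, 2],
    [2, 5, 4, 5, 1],
]
print(round(cronbach_alpha(items), 2))
```

Because the three items move together, the result is close to 1; items that varied independently would push it toward 0.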
intercoder reliability
reveals how much different coders or observers agree with one another when looking at the same data
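A minimal way to quantify this agreement is simple percent agreement (more elaborate measures, such as Cohen's kappa, also correct for chance agreement). The coder labels below are invented for illustration:

```python
def percent_agreement(coder_a, coder_b):
    """Share of units on which two coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Two coders classifying the same five items; they disagree on one.
coder_a = ["pos", "neg", "pos", "neu", "pos"]
coder_b = ["pos", "neg", "neu", "neu", "pos"]
print(percent_agreement(coder_a, coder_b))
```

Here the coders agree on 4 of 5 items, so agreement is 0.8.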
internal validity
the degree to which a study establishes a causal effect of the independent variable on the dependent variable, and the degree to which its measures truly and accurately capture the intended concepts
generalizability
extent to which results or conclusions based on one population can be applied to others
representativeness
the degree of similarity of a study population compared to an external population
face validity
a dimension concerning whether a measure looks valid
concurrent validity
how closely the measure is associated with a preexisting measure
predictive validity
how well the measure predicts a future outcome it should be correlated with
content validity
concerns how well a measure encompasses the many different meanings of a single concept
construct validity
concerns how well multiple indicators are connected to some underlying factor
robustness
the capacity to produce unbiased results when small changes happen
split-half method
assesses robustness by administering one subset of items to a sample and then another subset, and testing the similarity of the results
test-retest method
assesses robustness by administering a measure to the same sample at two different times
pilot testing
a method of administering some measurement protocol to a small preliminary sample of subjects as a means of assessing how well the measure works
sampling
the process of deciding what or whom to observe when you cannot observe and analyze everyone or everything
Literary Digest poll: how did it go wrong?
its sample, drawn largely from telephone directories and automobile registration lists, overrepresented wealthier voters and did not represent the voting population
probability sample
a sample chosen via random selection, so that every member of the population has a known chance of inclusion
advantages of probability sampling
it does not represent one group more than another
bias is less likely to creep into the research
sampling error
the difference between the estimates from a sample and the true parameter that arises due to random chance
systematic error
a flaw built into the design of the study that causes a sample estimate to diverge from the population parameter
what’s the difference between systematic and sampling error
sampling error is the random difference between a sample estimate and the true parameter, while systematic error is a design flaw that causes the sample estimate to diverge from the population parameter
margin of error
the amount of uncertainty in an estimate
sampling distribution
a probability distribution of a statistic that comes from choosing random samples of a given population
target population
a group about which social scientists attempt to make generalizations
census
includes data on every member of a population
population parameter
a number that describes something about an entire population or group
confidence levels
the probability that an estimate includes the population parameter
confidence intervals
the range implied by the margin of error; the estimate is expected to fall between [#,#]
what is the commonly accepted confidence level?
95%
what happens to the margin of error when the sample size increases?
the bigger the sample, the smaller the margin of error
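The standard formula for the margin of error of a sample proportion makes this relationship concrete; the sketch below assumes a proportion of 0.5 (the worst case) and the conventional z = 1.96 for a 95% confidence level:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a sample proportion p
    with sample size n, at the 95% confidence level (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling the sample size cuts the margin of error in half.
for n in (100, 400, 1600):
    print(n, round(margin_of_error(0.5, n), 3))
```

With n = 100 the margin of error is about +/-9.8 points; with n = 400 it falls to about +/-4.9.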
simple random sampling
a type of probability sample in which each individual has the same probability of being selected
what is the sampling frame
a list of population members from which a probability sample is drawn
systematic sample
a probability sampling strategy in which sample members are selected by using a fixed interval, such as taking every fifth person on a list of everyone in the population
cluster sampling
researchers divide up the target population into groups, or “clusters”, first selecting clusters randomly and then selecting individuals within those clusters
stratified sampling
the population is divided into groups, or strata, and sample members are selected in strategic proportions from each group
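The probability sampling designs above can be contrasted in a short Python sketch. The toy population of 100 people and its 70/30 split into strata A and B are invented for illustration:

```python
import random

random.seed(1)
population = [{"id": i, "stratum": "A" if i < 70 else "B"} for i in range(100)]

# Simple random sample: every member has the same chance of selection.
srs = random.sample(population, 10)

# Systematic sample: a random start, then every k-th member of the frame.
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sample: draw in chosen proportions from each stratum
# (here 7 from A and 3 from B, matching their population shares).
strata = {"A": [p for p in population if p["stratum"] == "A"],
          "B": [p for p in population if p["stratum"] == "B"]}
stratified = random.sample(strata["A"], 7) + random.sample(strata["B"], 3)

print(len(srs), len(systematic), len(stratified))
```

All three yield samples of 10, but only the stratified design guarantees that each group appears in its population proportion.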
oversampling
a technique in which a group is deliberately sampled at a rate higher than its frequency in the population
weighting
a statistical adjustment used to account for the fact that an oversampled group makes the sample no longer representative
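A minimal sketch of how weights restore representativeness after oversampling: each case is weighted by its group's population share divided by its sample share. The groups, shares, and responses below are invented for illustration:

```python
# Group B is 10% of the population but was oversampled to 50% of the
# sample; everyone in A answers 4.0 and everyone in B answers 2.0.
sample = [("A", 4.0)] * 5 + [("B", 2.0)] * 5   # (group, response)
pop_share = {"A": 0.9, "B": 0.1}
samp_share = {"A": 0.5, "B": 0.5}
weights = {g: pop_share[g] / samp_share[g] for g in pop_share}

weighted_sum = sum(weights[g] * x for g, x in sample)
total_weight = sum(weights[g] for g, _ in sample)
weighted_mean = weighted_sum / total_weight
print(weighted_mean)
```

The raw sample mean is 3.0, but the weighted mean is 3.8, which matches the true population mean (0.9 * 4.0 + 0.1 * 2.0).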
non representative samples
sample is not representative of the population
David Snowden’s nun study
allowed him to learn more about Alzheimer’s disease than he could have with a representative sample
issues of generalizability with non representative samples
researchers are not able to conclude that a hypothesis holds true in the same way throughout all population subgroups
case-oriented research
scientists gather large amounts of data about a single case or a small number of cases
how does case oriented research differ from variable oriented research
variable oriented studies a large number of cases but only a limited amount of data. case oriented gathers large amounts of data from one single case or a small number of cases.
purposive sampling
cases are selected on the basis of features that distinguish them from other cases
why is purposive sampling better than probability sampling for case oriented research
cases can be deliberately chosen for the specific features that make them informative, rather than left to chance
sequential sampling and the role it plays
enables researchers to make decisions about what additional data to collect based on their findings from data they’ve already collected
sampling for range
maximizing respondents’ range of experiences with the phenomena under study
saturation
when new materials fail to yield new insights and simply reinforce what the researcher already knows
typical cases: typicality
a case is typical when its features are similar in as many respects as possible to the average of the population it represents
extreme cases: extremity
focusing on “extreme” cases can provide researchers with particularly vivid examples of the phenomenon they wish to study
why would researchers choose deviant cases?
to choose cases that are unusual, unexpected, or hard to explain given what is currently known about a topic
contrasting outcomes
researchers may sometimes choose a pair of cases that represent a “puzzle” or in some way have different outcomes in response to the same stimulus
key differences, past experiences, and intuition
researchers may be interested in the range of consequences that might follow from a key difference between two cases. they may use their past experiences or their intuition about which settings will be useful for studying their research.
sampling and big data
data sets with billions of pieces of information, typically created through individuals’ interactions with technology
administrative records
data collected by government agencies or corporations as part of their own record keeping
example of nominal variables
gender or religion