3 - Nature of Quantitative Research Flashcards
the main steps in quantitative research
theory, hypothesis, research design, devising measures of concepts, select research site, select research subjects/respondents, administer research instruments/collect data, process data, analyze data, findings/conclusions, write up findings/conclusions
concept
ideas or mental representations of things
- building blocks of theory
- represents points around which social research is conducted
- categories for organization of ideas/observation
- concept can be an independent or dependent variable, descriptive or comparative
independent variable vs dependent variable
the possible explanation (cause) vs the thing to be explained (effect)
concept can be descriptive or comparative
changes in the amount of social mobility in Canada over time (descriptive) vs variations in levels of social mobility among comparable nations (comparative)
why measure concepts?
- allows for delineation of fine differences between people in terms of characteristic in question (it’s harder to recognize fine distinctions than extreme differences)
- Provides consistent device for gauging distinctions (measure’s results shouldn’t be affected by time/person administering the measure)
- Provides basis for estimates of the nature/strength of relationship between concepts
Indicators
stand for or represent concept, necessary to measure concepts (can be indirect, for example absenteeism as an indicator for low job satisfaction)
two types of definitions of concepts in quantitative research
- nominal; describes in words like dictionary (crime is any violation of the Criminal Code of Canada)
- operational; spells out operations that will be performed to measure concept (to measure crime, this researcher will use statistics provided by police force)
ways to devise indicators
through questions that form part of an interview/questionnaire (respondents' attitudes, personal experiences, behaviours, etc.)
developing criteria for classifying observed behaviour (pupil behaviour in classroom)
through use of official statistics (stats canada)
developing classification schemes to analyze written data (analysis of how newspapers characterize sex workers)
using multiple-item measures in survey research
single indicator may misclassify some individuals if wording leads to misunderstanding of meaning
single indicator may not capture all meaning in underlying concept
multiple indicators allow for finer distinctions and sophisticated data analysis
reliability
concerned with consistency of measures by looking at stability over time, internal reliability, and inter-observer consistency
stability over time
- whether results fluctuate as time progresses, assuming that thing being measured isn’t changing
- most thermometers have this reliability
- tested using the test-retest method: administer the measure twice to the same respondents and check that the two sets of results correlate
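The test-retest method boils down to correlating two administrations of the same measure. A minimal sketch, using hypothetical scale scores (the data and function names are illustrative, not from the card set):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical: the same attitude scale given to five respondents
# twice, two months apart. A high correlation suggests stability
# over time (assuming the underlying attitude itself didn't change).
time1 = [12, 15, 9, 20, 17]
time2 = [13, 14, 10, 19, 18]

r = pearson_r(time1, time2)  # close to 1.0 -> stable measure
```

A measure whose two administrations barely correlate would be considered unstable regardless of how sensible its questions look.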
internal reliability
- aka internal consistency
- multiple measures administered in one sitting should be consistent
- cronbach’s alpha coefficient
- split half method
cronbach’s alpha coefficient
commonly used test in which 1 indicates perfect internal reliability and 0 indicates no internal reliability; 0.8 is typically considered the minimum acceptable level
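Cronbach's alpha compares the variance of individual items to the variance of respondents' total scores. A minimal sketch with hypothetical data (the three-item scale below is invented for illustration):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a multiple-item measure.

    `items` is a list of item-score lists: items[i][j] is
    respondent j's score on item i.
    """
    k = len(items)
    item_var_sum = sum(variance(item) for item in items)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical three-item job-satisfaction scale, five respondents.
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
]
alpha = cronbach_alpha(items)  # about 0.89, above the 0.8 threshold
```

When items rise and fall together across respondents, total-score variance dwarfs the summed item variances and alpha approaches 1; unrelated items push it toward 0.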
split half method
indicators are divided into two halves and respondents' scores on the two halves should correlate; 1 is perfect internal reliability and 0 is no internal reliability
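A split-half check can be sketched as: split the items, total each half per respondent, and correlate the half-scores. The Spearman-Brown correction applied at the end is standard practice for adjusting the half-test correlation up to full-test length, though the card above doesn't mention it; the odd/even split and data are illustrative assumptions.

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

def split_half_reliability(items):
    """Correlate respondents' totals on two halves of a scale.

    `items` is a list of item-score lists (items[i][j] = respondent
    j's score on item i). Items are split odd/even.
    """
    half1, half2 = items[::2], items[1::2]
    totals1 = [sum(scores) for scores in zip(*half1)]
    totals2 = [sum(scores) for scores in zip(*half2)]
    r = pearson_r(totals1, totals2)
    # Spearman-Brown correction: estimated reliability of the
    # full-length scale from the half-scale correlation.
    return 2 * r / (1 + r)
```

Which items land in which half is arbitrary, so in practice the split (odd/even, first/second, random) should not drive the result; a big swing across splits is itself a warning sign.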
inter-observer consistency
- consistency of judgements across several researchers in activities involving subjective judgement
- ex: classifying and categorizing open answers
measurement validity
whether indicator accurately/properly gauges concept
- face, concurrent, construct, convergent
face validity
measure appears to reflect concept, essentially intuitive process
concurrent validity
gauged by comparing the measure against a relevant criterion on which cases are known to differ
-ex: when measuring absenteeism as indicator for low job satisfaction, lack of absenteeism should be seen in those with high job satisfaction
construct validity
whether concepts relate to each other in a way consistent to what their theories would predict
ex: routine jobs should have lower job satisfaction than jobs with varied activities. If routine jobs are found to have job satisfaction equal to that of varied jobs, the measure lacks construct validity: either the measure was invalid, the deduction was misguided, or the theory needs revision
convergent validity
- validity should be gauged through comparison to other measures of same concept developed through different methods
- ex problem with convergent approach: measuring crime through police reports vs victimization surveys
which validities are more important?
face and internal are usually the only ones tested
if measure is not reliable ….
it cannot be valid
goal of quantitative researchers
to understand social order by making sense of phenomena and evaluating theories and interpretations
establishing causality
describing why, not just how
good quantitative research inspires confidence in the researcher's causal inferences
generalization of findings
sample must be as representative as possible in order to be confident results are not unique to the sample
probability sampling largely eliminates bias through random sampling
critiques of quantitative research
- fails to distinguish people from ‘world of nature’ (some claim science is only applicable to entities/processes that lack self-reflection)
- measurement process produces false sense of precision/accuracy
- reliance on procedure creates a disjuncture between research and everyday life (relates to external validity; the difference between what people do and what they say they'll do)
- analysis of relationship between variables promotes view of social life as remote from everyday experiences
- explanations for findings may not be empathetic (ex: attributing the greater number of unwed mothers in poor inner-city areas to women marrying later in life rather than to marriage losing its popularity)
- assumption of an objectivist ontology (assumes a social reality that exists independently of the people being studied)