Acquiring Evidence: Introduction to Survey Research Design and Data Collection Flashcards
What questions must be addressed when making a survey?
What is the purpose of the survey?
What survey mode will you use? (Telephone, online, paper, interview)
When should a survey be used?
To say something about the population the sample is from
To collect information from a large number of people or records
To collect information inexpensively
When data collection needs to be as standardised as possible
When information needed is not in-depth
What are the advantages and limitations to using surveys and questionnaires?
Advantages:
Anonymity
Asking sensitive questions
Standardised data
Quick to administer
Simple data analysis can be used
Data can be presented as graphs/charts
Multiple distribution channels (online, paper, social networking sites)
Limitations:
Limited explanations for data
Comments open to a range of interpretations
No way to know why questions were skipped or why the survey was only partially completed
No opportunity to explain questions that respondents don’t understand
Can’t guarantee the survey was received or completed by the right person
Basic literacy levels needed
How can an effective survey be designed?
Establish survey goals (Why is the survey being made? What do I hope to accomplish? How do I plan to use the data I collect? How will the data influence my decisions?)
Create a dummy table (to ensure all objectives are being addressed and all questions address an objective)
Ask good questions (open-ended for a variety of responses, closed-ended for specific responses)
What are the types of closed-ended questions?
Dichotomous
Likert scale
Guttman scale
Multiple choice
Rank order
What are the typical responses on a Likert scale?
Strongly agree, agree, neutral, disagree, strongly disagree
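Before analysis, Likert responses are usually coded as numbers. A minimal sketch of this coding; the 5-to-1 mapping is a common convention assumed here, not something the source specifies:

```python
# Hypothetical example: coding the five Likert responses above as
# numbers so simple data analysis can be applied. The 5..1 scale
# direction is an assumption (common convention).
LIKERT = {
    "Strongly agree": 5,
    "Agree": 4,
    "Neutral": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
}

answers = ["Agree", "Strongly agree", "Neutral"]  # one respondent's answers
scores = [LIKERT[a] for a in answers]
print(scores)  # [4, 5, 3]
```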
How can a questionnaire’s internal reliability be tested/scored?
Cronbach’s alpha coefficient.
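Cronbach's alpha can be computed directly from its formula, alpha = k/(k-1) × (1 − Σ item variances / variance of total scores), where k is the number of items. A minimal sketch with made-up respondent scores (not from the source):

```python
# Hypothetical example: Cronbach's alpha for a 3-item questionnaire
# answered by 5 respondents. All scores are invented for illustration.
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: one list per respondent, each a list of item scores."""
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # one tuple per item (column)
    item_vars = [pvariance(col) for col in items]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

responses = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(responses), 3))  # → 0.934
```

A value near 1 indicates the items covary strongly, i.e. high internal reliability; values above roughly 0.7 are conventionally considered acceptable.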
What causes random and systematic error in surveys?
Random error: Fluctuations in a person’s mood, misreading or misunderstanding a question, measuring individuals on different days or in different places
Systematic error: Sources of error such as measurement style, a tendency towards self-promotion, and cooperative (acquiescent) responding, i.e. conceptual variables other than the one intended are also being measured
What does validity of a survey refer to?
Refers to how well a piece of research or a scale/survey measures what it set out to measure, or how well it reflects the reality it claims to represent.
What are the types of validity?
Content validity (includes face and content validity)
Criterion validity (Includes concurrent and predictive validity)
Construct validity (Includes convergent, divergent, factorial, and discriminant validity)
What is face validity?
Degree to which a question appears effective in terms of its stated aims (i.e. it looks like it measures what it says it does)
It is an assessment of whether a measurement scale looks reasonable
What is content validity?
Extent to which one can generalize from a particular collection of items to all possible items that would be representative of a specific domain.
How is content validity assessed?
Critical review by expert panel for clarity and completeness
Comparing with literature
Or both
What is criterion validity?
Involves comparing the scale with a criterion measure that has already been established as valid (i.e. it correlates with an external benchmark)
Relatively straightforward if a valid criterion already exists.
What are the 2 subdivisions of criterion validity?
Concurrent validity: When information about the criterion is available at the time of the test
Predictive validity: The criterion measure is obtained after the test has been administered.
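In practice, criterion validity is usually quantified as the correlation between scores on the new scale and scores on the established criterion measure. A minimal sketch of concurrent validity using Pearson’s r; all numbers are invented for illustration:

```python
# Hypothetical example: concurrent validity as the Pearson correlation
# between a new scale and an established (validated) criterion measure
# collected at the same time. Scores are made up.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

new_scale = [12, 15, 9, 20, 17]   # scores on the scale being validated
criterion = [14, 16, 10, 22, 18]  # scores on the established measure
print(round(pearson_r(new_scale, criterion), 2))  # → 0.99
```

A high correlation with the criterion supports the new scale’s criterion validity; for predictive validity the criterion scores would simply be collected later instead of at the same time.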