7 Survey Research Flashcards
Term: Survey Research
Definition: Method to gather data using standardized tools like questionnaires or interviews.
Key Points: Used to understand attitudes, behaviors, opinions, etc. Can generate qualitative, quantitative, or mixed data.
Term: Purposes of Survey Research
Definition: Two main purposes: information gathering and theory testing/building.
Key Points: Helps explore, describe, explain, and predict phenomena. Essential for operationalizing abstract constructs.
Term: Types of Survey Administration
Definition: Self-administered (e.g., mail, online) or interview-administered (e.g., phone, face-to-face).
Key Points: Choice depends on target population, resources, and research goals. Self-administered for convenience, interview-administered for clarity.
Term: Developing Questionnaires
Definition: Process of designing questions for clarity and reliability.
Key Points: Avoid jargon, pilot test with a small group, and revise based on feedback. Keep it concise, readable, and provide appropriate response options.
Topic: Survey Design
Key Points: Clear instructions are essential for standardization and reliability. Sections can be organized by topic or question type. Start with easy and engaging questions, and use funneling or branching questions when appropriate.
Topic: Demographics
Key Points: Demographic information is typically gathered in a single section. Include only relevant questions and ensure response options are inclusive and sensitive to diverse populations.
Topic: Open Questions
Key Points: Open questions provide detailed, rich data but take longer to answer and are harder to analyze. Use them only when justified, ensure clarity and focus, and decide on the analysis strategy beforehand. They are more useful for descriptive and exploratory purposes.
Topic: Closed Questions
Key Points: Closed questions are quick to complete and easy to analyze but can oversimplify complex issues. Ensure questions are unambiguous, provide clear response options, and carefully consider the style of response options. They are more useful for explanatory and predictive purposes.
Topic: Writing Questions
Key Points: Avoid double-barreled questions that address two separate issues but allow only one response. They are confusing and make it impossible to determine the true intention of the respondent.
Topic: Ambiguity in Questions
Key Points: Avoid ambiguity in questions: vague or over-generalized wording leads to varied interpretations across respondents.
Topic: Negations in Questions
Key Points: Negations in questions introduce complexity and can be missed by respondents, resulting in non-response or misunderstanding. Present questions as positive statements to avoid confusion.
Topic: Neutral Questions
Key Points: Questions should be neutral, avoiding value-laden or leading language that may influence respondents. Emotive language should be avoided to maintain objectivity.
Topic: Jargon in Questions
Key Points: Avoid using technical terms that participants may not be familiar with to ensure clarity and understanding.
Topic: Response Bias
Key Points: Be aware of response bias, such as social desirability effects, where participants answer in the way they believe will be viewed most favorably rather than accurately. Manage this bias by including a lie scale or by using both positively and negatively worded questions.
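Worked Example (an illustrative sketch, not from the source): one common way to combine positively and negatively worded questions is to reverse-score the negatively worded items before scoring; a 5-point response scale, the item names, and the responses below are all assumptions.

    # Reverse-score negatively worded items on an assumed 1-5 response scale.
    SCALE_MAX = 5

    def reverse_score(response: int) -> int:
        # Maps 1 -> 5, 2 -> 4, ..., 5 -> 1.
        return SCALE_MAX + 1 - response

    # Hypothetical responses; items q2 and q4 are negatively worded.
    responses = {"q1": 4, "q2": 2, "q3": 5, "q4": 1}
    negatively_worded = {"q2", "q4"}

    scored = {item: reverse_score(value) if item in negatively_worded else value
              for item, value in responses.items()}
    print(scored)  # {'q1': 4, 'q2': 4, 'q3': 5, 'q4': 5}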
Topic: Rating Scales Overview
Key Points: Rating scales ask people to provide judgments on “how much” and are useful for measuring attitudes. They come in various formats, each with its own characteristics.
Topic: Dichotomous Rating Scale
Definition: Consists of two response options (e.g., yes/no) and is the simplest form of quantification.
Topic: Multichotomous Rating Scale
Definition: Offers more than two response options; respondents may be asked to select either a single option or multiple options.
Topic: Likert Scale
Definition: Consists of multi-point response options (e.g., strongly disagree to strongly agree) with equal spacing between options. Considerations include response acquiescence, the introduction of double negatives, and the inclusion of a neutral response.
Topic: Non-verbal Rating Scales
Definition: Useful for children and cognitively impaired individuals. Participants indicate their response by pointing to a face or symbol.
Topic: Ranking Scale
Definition: Measures the relative importance of several items by asking participants to rank them.
Topic: Graphic Rating Scale
Definition: Participants mark a point along a continuous line anchored at each end, and the score is recorded by measuring where the mark falls along the line.
Topic: Semantic Differential Scale
Definition: Measures attitudes indirectly by having respondents mark their thoughts and feelings between bipolar opposite adjectives.
Topic: Response Rates
Definition: Percentage of questionnaires completed and returned. Strategies to maximize response rates include keeping questionnaires short, simple, and clear; including a pre-paid envelope with postal surveys; sending reminders; and offering incentives.
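Worked Example (an illustrative sketch with assumed figures): the response rate is the proportion of distributed questionnaires that were completed and returned, expressed as a percentage.

    # Hypothetical figures: 400 questionnaires distributed, 148 completed and returned.
    distributed = 400
    returned = 148
    response_rate = returned / distributed * 100
    print(f"Response rate: {response_rate:.1f}%")  # Response rate: 37.0%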
Topic: Questionnaire Construction
Definition: Questionnaires can measure one or more variables, typically using multiple items to measure a single variable. This is especially important with fuzzy constructs like attitudes, as it minimizes the impact of errors. Variable scores can be calculated as totals or averages, with averages being better for handling missing data.
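Worked Example (an illustrative sketch with assumed item scores): averaging the answered items keeps scores comparable across respondents when some responses are missing, whereas a simple total is pulled down by every missing item.

    # Hypothetical five-item scale with one missing response (None).
    items = [4, 5, None, 3, 4]
    answered = [score for score in items if score is not None]

    total = sum(answered)                     # 16: lower simply because one item is missing
    average = sum(answered) / len(answered)   # 4.0: comparable across respondents
    print(total, average)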
Topic: Number of Response Options
Definition: The number of response options should balance sensitivity and reliability. Too few options limit sensitivity because respondents cannot express fine distinctions, while too many can reduce reliability because respondents cannot use the options consistently.
Topic: Psychometrics
Definition: Psychometrics is the science of measuring psychological constructs, including personality, cognitive ability, attitudes, knowledge, and educational attainment.
Topic: Quality Assessment
Definition: The quality of psychometric tools is assessed using two criteria: reliability, which measures the extent to which a tool provides the same results under the same conditions (including temporal consistency and internal consistency), and validity, which measures the extent to which a tool measures the construct of interest.
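Worked Example (an illustrative sketch with assumed data): temporal consistency can be estimated as the test-retest correlation between two administrations, and internal consistency with Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

    import numpy as np

    # Hypothetical total scores for five respondents tested twice with the same tool.
    time1 = np.array([12, 15, 9, 20, 14])
    time2 = np.array([13, 14, 10, 19, 15])
    test_retest_r = np.corrcoef(time1, time2)[0, 1]  # temporal consistency

    # Hypothetical item-level data: rows are respondents, columns are items.
    items = np.array([
        [4, 5, 4],
        [3, 3, 4],
        [5, 5, 5],
        [2, 3, 2],
        [4, 4, 5],
    ])
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    cronbach_alpha = k / (k - 1) * (1 - sum_item_variances / total_score_variance)  # internal consistency
    print(round(test_retest_r, 2), round(cronbach_alpha, 2))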
Topic: Construct Validity
Definition: Construct validity refers to whether a tool actually measures the construct it is intended to measure. It is supported by cumulative research evidence and can be assessed in terms of convergent validity (correlation with tests of related constructs) and discriminant validity (lack of correlation with tests of unrelated constructs).
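Worked Example (an illustrative sketch with assumed scores): convergent validity is suggested by a sizeable correlation between a new measure and a measure of a related construct, and discriminant validity by a weak correlation with a measure of an unrelated construct.

    import numpy as np

    # Hypothetical total scores for the same respondents on three measures.
    new_scale = np.array([22, 30, 18, 27, 25, 33, 20, 29])
    related_scale = np.array([24, 31, 17, 26, 27, 35, 19, 28])    # related construct
    unrelated_scale = np.array([10, 41, 23, 12, 38, 15, 30, 22])  # unrelated construct

    convergent_r = np.corrcoef(new_scale, related_scale)[0, 1]      # expected to be high
    discriminant_r = np.corrcoef(new_scale, unrelated_scale)[0, 1]  # expected to be near zero
    print(round(convergent_r, 2), round(discriminant_r, 2))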
Topic: Psychometric Tests
Definition: Psychometric tests are standardized questionnaires or tests designed to measure traits or abilities. Items are published as inventories, with norms available for interpretation. While reliability is established, validity can sometimes be questionable.