Exam 2 Flashcards
How do you calculate Total Error?
Difference between the true value and the observed value of a variable.
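A minimal worked sketch of this definition, assuming the common textbook split of total error into sampling error plus non-sampling error (all numbers below are made up for illustration):

```python
# Total error = true value of the variable minus the value observed in the study.
true_value = 4.2        # true mean value in the population (illustrative)
observed_value = 3.6    # mean value observed in the research project (illustrative)
total_error = true_value - observed_value   # about 0.6

# Assumed decomposition (hedged): total error = sampling error + non-sampling error.
sampling_error = 0.4        # error from studying a sample rather than a census
non_sampling_error = 0.2    # measurement, response, and non-response errors
assert abs(total_error - (sampling_error + non_sampling_error)) < 1e-9
```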
How do we determine sample size using different sampling procedures?
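As a hedged sketch, assuming the standard confidence-interval approach for a simple random sample (other sampling procedures, such as stratified or cluster sampling, typically adjust these base formulas), the usual starting formulas are n = z²σ²/e² for estimating a mean and n = z²p(1−p)/e² for estimating a proportion:

```python
import math

# Hedged sketch: base sample-size formulas for simple random sampling.
# z = z-score for the chosen confidence level, e = allowed margin of error.

def sample_size_for_mean(z, sigma, e):
    """n = z^2 * sigma^2 / e^2, where sigma is the estimated population std. dev."""
    return math.ceil((z ** 2 * sigma ** 2) / e ** 2)

def sample_size_for_proportion(z, p, e):
    """n = z^2 * p * (1 - p) / e^2, where p is the estimated population proportion."""
    return math.ceil((z ** 2 * p * (1 - p)) / e ** 2)

print(sample_size_for_mean(z=1.96, sigma=10, e=2))        # 97
print(sample_size_for_proportion(z=1.96, p=0.5, e=0.05))  # 385
```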
Difference between field and laboratory experiments.
Field Experiments: Research studies conducted in a realistic or natural setting, where one or more independent variables are manipulated by the experimenter under as carefully controlled conditions as the situation permits.
Laboratory Experiments: Experiments where the experimental treatment is introduced in an artificial or laboratory setting.
Pros / cons of field vs. lab experiments.
Pros vs. cons (each pair lists Lab first, then Field):
* Low external validity vs. high external validity.
* High internal validity vs. low internal validity.
* High control vs. low control.
* Low cost/time vs. high cost/time.
* Low exposure to competition vs. high exposure to competition.
* Subjects are aware of participation vs. subjects are unaware.
Threats to internal validity in experiments – 7 threats
Maturation, Instrumentation, Selection Bias, Mortality, History, Testing, Statistical Regression
Internal versus external validity in experiments
Internal Validity:
o Ability of the experiment to show relationships unambiguously.
o Whether the manipulation of the independent variables causes the effect.
External Validity:
o Whether the experiment results can be generalized beyond the experimental setting.
o Applicability of the experiment results to situations outside of the actual experimental context.
Understand the 3 conditions necessary for causation (correlation vs. causation).
1: Condition of Concomitant Variation – There must be evidence that a strong association exists between an action and an observed outcome.
2: Condition of Time Order of Occurrence – There must be evidence that the action preceded the outcome (X before Y).
3: Absence of Other Causal Explanations – There must be evidence that there is no strong competing explanation for the relationship, i.e., that a high level of internal validity exists.
Layout of questionnaire design (9 parts).
1: Cover letter
2: Consent form
3: Screening questions
4: Prompts / question order
5: Test / controls
6: Attention check
7: Sensitive questions near the end
8: Demographic questions
9: Thank-you note
What is pretesting and why do you do it?
Pretesting refers to the testing of the questionnaire on a small sample of respondents to identify and eliminate potential problems.
This helps us ensure our questionnaire gathers the correct type of data we are looking for. It also helps identify problems that may be present.
Know differences between open-ended, close-ended, leading, double-barreled, loaded questions
Leading Questions try to steer the respondent toward a particular answer.
Loaded Questions build assumptions about the respondent into the question.
Double-Barreled Questions ask about two or more issues in a single question.
What words and types of sentences should be avoided in a questionnaire?
Avoid complexity, ambiguous wording, leading questions, loaded questions, double-barreled questions, hidden assumptions, and burdensome questions.
Open-Ended Questions:
Allow respondents to answer in their own words.
Solicit recall information when the researcher does not want to bias responses by listing alternative choices.
Help identify possible response category options when the researcher is unable to anticipate the range of responses that could exist.
Pros: Answers are in the respondent’s own words, provide insight, and allow for probing and additional alternatives.
Cons: Difficult to interpret, editing and coding can be a challenge, potential interviewer bias, lower response rates, possibility of shallow answers.
Close-Ended Questions:
Respondents are given a finite number of responses to choose from.
Two main ways to ask these kinds of questions: either choose from a list of responses or use a single-choice rating scale.
Pros: Easy and accurate data coding/entry, limited responses, the alternative list may help respondents recall, limited interviewer bias.
Cons: Researchers must generate the alternatives; respondents are forced to choose, with no freedom to answer in their own words.
What are the 5 characterizations of a scale and their definitions?
Generalizability: The ease of administration and interpretation in different research settings and situations.
Sensitivity: The extent to which ratings provided by a scale can discriminate between the respondents who differ.
Validity: Does the study measure what it is supposed to measure?
Reliability: The consistency with which the measure produces the same results with the same or comparable population.
Relevancy: Refers to how meaningful it is to apply the scale to measure a construct. Mathematically, relevancy = reliability × validity (see the worked example below).
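A tiny worked example of the relevancy formula above (illustrative numbers only, with reliability and validity expressed as proportions between 0 and 1):

```python
# Worked example of the card's formula: relevancy = reliability * validity.
reliability = 0.90   # the scale gives consistent results on comparable samples
validity = 0.80      # the scale measures what it is supposed to measure
relevancy = reliability * validity
print(round(relevancy, 2))   # 0.72
```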
What are the factors that affect the researcher’s choice of attitude scale?
Specific information needed, data collection method, budget constraints, and respondents’ attitudes and knowledge.
Five issues in designing single-item scales.
- Types of poles used in the scale.
- Number of scale categories.
- Strength of the anchors.
- Balance of the scale.
- Labeling of the categories.
What are the 5 single-item scales?
Itemized, Pictorial, Rank Order, Paired Comparisons, and Constant Sum
Define what each non-comparative single item scale does: (2)
Itemized: Respondent selects from a limited number of categories. The most commonly used single-item scale. Measures overall satisfaction.
Pictorial: Commonly uses pictures to describe feelings. The format must be comprehensible to respondents and allow accurate responses.
Define what each comparative single item scale does: (3)
Rank Order: Respondents compare two or more items and rank them. The respondent should have knowledge of all options. The options must cover all possible choices. Order of the choice may affect the result.
Paired Comparison Scales: Ask respondents to choose one of the two items in a set based on a specific criterion or attribute.
Constant Sum Scales: Ask the respondent to divide a given number of points, typically 100, among two or more attributes based on their relative importance (see the sketch after this list).
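An illustrative sketch of the paired comparison and constant sum formats described above (the attribute names, point values, and item count are made up):

```python
# Constant sum: points allocated across attributes must add up to the fixed budget (100).
allocation = {"price": 40, "quality": 35, "brand": 15, "service": 10}
assert sum(allocation.values()) == 100

# Paired comparisons: with n items, respondents judge n * (n - 1) / 2 distinct pairs.
n_items = 5
n_pairs = n_items * (n_items - 1) // 2
print(n_pairs)   # 10
```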
Define Multi-Item Scales
Scales that measure a sample of beliefs toward the attitude object and combine the set of answers into an average or sum score to construct an estimate of some underlying or abstract variable.
What do the 3 multi-item scales do?
- Semantic Differential Scale – Used to describe the set of beliefs that comprise a person’s image of an organization or brand.
- Likert Scale – Respondent specifies a level of agreement or disagreement with statements that express either a favorable or an unfavorable attitude toward the concept under study (see the scoring sketch below).
- Thurstone Scale – Also called equal-appearing intervals; the strength of the individual items is taken into account in computing the attitude score.
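A short sketch of the sum/average scoring idea behind these multi-item scales, as referenced in the Likert item above; the items, responses, 1–5 agreement scale, and reverse-coding of a negatively worded item are all hypothetical:

```python
# Hypothetical multi-item (Likert-type) responses on a 1-5 agreement scale.
responses = {"item1": 4, "item2": 5, "item3_negative": 2, "item4": 4}

def reverse_code(score, scale_max=5):
    """Flip a negatively worded item so that higher always means more favorable."""
    return scale_max + 1 - score   # 2 on a 1-5 scale becomes 4

scores = [reverse_code(v) if name.endswith("_negative") else v
          for name, v in responses.items()]
summed_score = sum(scores)                   # 17
average_score = summed_score / len(scores)   # 4.25
print(summed_score, average_score)
```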
Definition of scaling.
Scaling is the process of creating a continuum on which objects are located according to the amount of the measured characteristic they possess.
Qualitative Method Characteristics.
Used to understand human behavior from the informant’s perspective. Assumes a dynamic negotiated reality.
Data is collected through participant observation and interviews.
Data is analyzed by identifying themes from the informants’ descriptions.
Data is reported in the language of the informant.