Quantitative Data Collection Flashcards
Define: Operationalization
Process of translating the concepts of interest to a researcher into observable and measurable phenomena
Why is it important to study how data is collected?
- The success of a study depends on the quality of the data-collection methods chosen and employed
- Data-collection method must be appropriate to the problem, hypothesis, setting and population
When collecting quantitative data, there has to be a goodness of fit between:
- Purpose
- Design
- Research question(s) or hypotheses
- Conceptual and operational definitions
- Data collection method
What is data consistency?
- In data collection, consistency means that the method used to collect data from each participant in the study is exactly the same, or as close to the same as possible
- Minimize bias when more than one researcher gathers data
- Control of extraneous variables
- Follow data collection protocols to ensure intervention fidelity
- Ensures interrater reliability
Define: Intervention Fidelity
- A way of ensuring consistency in data collection
- Researchers must train data collectors in the methods to be used in the study so that each data collector acquires the information in the same way (e.g. training research assistants)
- Can include protocols or manuals for gathering data systematically and reliably
What are some ways researchers can implement intervention fidelity?
- Structured and rigorous training of staff
- Role playing to evaluate competency
- Checks periodically throughout study
- Regular meetings to review protocol and address complex situations
- Checklists
Define: Fidelity
Faithfulness, loyalty
Define: Interrater Reliability
- The consistency of observations between 2+ observers
- Often the % of agreement among observers
- Also reported as the kappa coefficient (a statistical term)
- E.g. when Gabe had to choose pictures of older people, he passed them out to be evaluated, and agreement was reported as, say, ~85% of raters judging a given photo to show a young adult (a worked example follows below)
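A minimal Python sketch (not from the course material; the rater data are made up) showing how percent agreement and Cohen's kappa can be computed for two raters coding the same photos:

from collections import Counter

# Two hypothetical raters classifying the same 8 photos as "young" or "old"
rater_a = ["old", "old", "young", "old", "young", "old", "old", "old"]
rater_b = ["old", "old", "young", "young", "young", "old", "old", "old"]
n = len(rater_a)

# Percent agreement: proportion of photos both raters coded identically
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, based on each rater's category proportions
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)

# Cohen's kappa corrects the observed agreement for chance agreement
kappa = (observed - expected) / (1 - expected)

print(f"Percent agreement: {observed:.0%}")  # 88% for these made-up ratings
print(f"Cohen's kappa: {kappa:.2f}")         # a kappa of ~0.80+ is the usual target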
What are the common methods of Data Collection?
1) Physiological measurements
2) Observational methods
3) Interviews
4) Questionnaires
5) Records or available data
What are physiological measurements?
- Data nurses gather about patients every day (e.g. VS)
- Allows for objectivity, precision and sensitivity
What are observational methods?
- Used to see how participants behave under specific conditions (e.g. children’s response to pain)
- Requires that the study's observations be consistent, follow a systematic plan, be checked and controlled, and be related to scientific concepts and theories
What is reliability as it relates to evaluating measurement tools?
The consistency with which the instrument measures the concept of interest
What are the three aspects of reliability?
1) Stability (test/re-test reliability)
2) Homogeneity/internal consistency
3) Equivalence/interrater reliability (Cohen's kappa; want 80%+)
What is a stability test?
- Ability of an instrument to produce the same results with repeated testing
- The same test is administered again after a given interval and the results are compared (they should be similar)
- E.g., give the same questionnaire more than once (see the sketch below)
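A minimal sketch (Python 3.10+; the scores are hypothetical, not study data) of test-retest reliability as the correlation between two administrations of the same questionnaire:

from statistics import correlation  # available in Python 3.10+

# Same 6 participants, same questionnaire, administered two weeks apart (made-up scores)
time_1 = [22, 35, 28, 40, 31, 25]
time_2 = [24, 33, 27, 41, 30, 26]

r = correlation(time_1, time_2)  # Pearson r; values near 1.0 indicate stable scores
print(f"Test-retest reliability (Pearson r): {r:.2f}")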
What is homogeneity/internal consistency?
- Homo = same
- All of the items in a tool measure the same concept or characteristic
- Cronbach's alpha of 0.80+ tells us the tool is reliable (a computation sketch follows below)
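A minimal sketch (item scores invented for illustration) of how Cronbach's alpha is computed from the item and total-score variances:

from statistics import variance

# Rows = participants, columns = items on a hypothetical 5-item scale
scores = [
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
    [4, 4, 4, 5, 4],
    [1, 2, 1, 2, 2],
]

k = len(scores[0])                                   # number of items
item_vars = [variance(col) for col in zip(*scores)]  # variance of each item
total_var = variance([sum(row) for row in scores])   # variance of total scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # 0.80+ is generally taken as reliable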
Define: Validity
- The extent to which an instrument actually measures or reflects the abstract construct (what it is meant to measure) (e.g. is tool actually measuring anxiety and not stress?)
- Expert opinion/expert panels
- Comparisons to other scales, other events, etc.
Define control as part of quantitative design:
- Measures that researchers use to hold the conditions of the study uniform and avoid possible impingement of bias (extraneous variables) on the dependent variable
- To control the treatment, the first step is to write a detailed description of the treatment; the second step is to use strategies to ensure consistency in implementing the treatment
- Variations in treatment reduce the effect size and weaken internal validity
What are the four ways to control extraneous variables?
1) homogeneous sampling (similar characteristics)
2) data consistency (collected consistently for everybody in sample)
3) random selection/randomization (assignment to groups; see the sketch below)
4) manipulation of the independent variable (not seen in non-experimental designs, where it is a non-issue; in experimental designs, all four controls above are expected)
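A minimal sketch (hypothetical participant IDs) of randomization, i.e., random assignment of a sample to experimental and control groups:

import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

random.seed(42)               # fixed seed so the assignment is reproducible
random.shuffle(participants)  # shuffle, then split the sample in half

half = len(participants) // 2
experimental_group = participants[:half]
control_group = participants[half:]

print("Experimental:", experimental_group)
print("Control:", control_group)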
What is the difference between internal validity and external validity?
INTERNAL: extent to which study findings are “true” rather than the results of extraneous variables (factors WITHIN study design)
EXTERNAL: extent to which study findings can be generalized beyond the sample used in the study (apply findings OUTSIDE the study?)
What other factors might account for the changes in dependent variables?
1) Maturation (longitudinal study, things change naturally over time not d/t study)
2) History (an event outside the study influences the sample during the study period)
3) Mortality (who drops out of study? How good are results if you lose a lot of people?)
4) Instrumentation (how reliable and valid are tools we are using?)
5) Testing (test/retest)
6) Selection bias (people who self-select to be in study)
Under what conditions and population could the same results be expected? (external validity)
- Selection effects (who is in study)
- Reactive effects (Hawthorne effect): when people know they are being observed they act differently; behaviour returns to normal after prolonged observation
- Measurement effects (if tools are reliable and valid then this is non-issue)
What is a threat to validity?
- Rosenthal Effect: change in participant behaviors d/t researcher expectations; a self-fulfilling prophecy
- Double-blind procedures are a means of reducing bias by ensuring that both those who administer the tx and those who receive it do not know which study participants are in the control and experimental groups
- Halo effect: tendency of judges to overrate a performance because participant has done well in an earlier rating or when rated in a different area (e.g. students with high marks in the past may receive a high grade on a substandard paper d/t this effect)
Describe how we critique validity:
- Are there threats to the internal validity of the study? (6 things – history, maturation, etc.)
- Does the design have controls at an acceptable level for threats to internal validity? (4 things of control – homogenous sampling, randomization, etc.)
- What are the limits to generalizability in terms of external validity? (who the sample is, selection, reactive-hawthorne effect, etc.)
How do we critique measurement and data collection in quantitative studies?
- How were data collected? Are data collection methods clearly described?
- Identify all methods of measurement. Are validity and reliability of each instrument described? Are validity and reliability levels adequate?
- Interview questions—do questions address concerns expressed in the problem statement?
- Is the training of data collectors clearly described and adequate?
Describe the benefits and limitations of physiological measurements:
BENEFITS:
- Appropriate for nursing care
- Objective, precise and sensitive
LIMITS:
- Expensive
- May require specialized knowledge and training
- May distort variable of interest simply by using them (e.g. pt HR may increase just by seeing the monitor)
- May be altered by environment (e.g. temp altered by recent intake)
Describe the benefits and limitations of observational measurements:
BENEFITS:
- Used when variables deal with events or behaviors that may be difficult to view as part of a whole
- Flexibility to measure many different situations
- Enable a great depth and breadth of information to be collected
LIMITS:
- Data may be distorted d/t observer's presence (reactivity)
- Concealment requires consideration of ethical issues
- Data may be biased by the person doing the observing
Describe the benefits and limitations of interviews:
BENEFITS:
- Appropriate when a large response rate and an unbiased sample are important, as refusal rate for interviews lower compared to questionnaires
- Enable participation of people who cannot use a questionnaire
- Interviewer can clarify and maintain the order of questions for participants
- Questions can be altered to gather significant data (e.g. open ended questions)
LIMITS:
- Participant may respond in a way they believe they should respond (e.g. maintaining social desirability bias)
- May require hiring and training of interviewer
- Interviewer bias (may lead responder to react in a certain way unintentionally)
Describe the benefits and limitations of questionnaires:
BENEFITS:
- Useful when number of questions to be asked is finite
- Answers to clear and specific questions
- Can maintain anonymity and prevent interviewer bias
- Less costly and time consuming
LIMITS:
- Not everyone is capable of filling out questionnaires (e.g. illiterate, children)
- Lengthy ones less likely to be completed
Describe the benefits and limitations of records:
BENEFITS:
- May save time and money while conducting a study
- Reduces ethical or bias concerns
LIMITS:
- Subject to problems of availability, authenticity and accuracy
Define: Close ended item
A question that the respondent may answer with only one of a fixed number of choices
Define: Open ended item
A question that respondents may answer in their own words
Define: Concealment
An observational method that refers to whether or not the participants know that they are being observed
Define: Consistency
An aspect of data collection requiring data be collected from each subject in the study in exactly the same way or as close to the same way as possible
Define: Content analysis
Technique for the objective, systematic and quantitative description of communications and documentary evidence
Define: Debriefing
Opportunity for researchers to discuss the study with the participants and for participants to refuse to have their data included in the study
Define: External criticism
A process used to judge the authenticity of historical data
Define: Internal criticism
The process of judging the reliability or consistency of information within a historical document
Define: Inter-rater reliability
The consistency of observations between two or more observers; often expressed as a percentage of agreement or as a coefficient of agreement that takes into account the element of chance
Define: Intervention
Observational method that deals with whether or not the observer provokes actions from those who are being observed
Define: Intervention fidelity
Consistency in data collection
Define: Measurement
The assignment of numbers to objects or events according to rules
Define: Operational definition
Description of how a concept is measured and what instruments are used to capture the essence of the variable
Define: Operationalization
Process of translating concepts into observable, measurable phenomena
Define: Reactivity
Distortion created when those who are being observed change their behavior because they know they are being observed; aka. Hawthorne effect
Define: Scale
A self-report measurement tool in which items of indirect interest are combined to obtain an overall score; a set of symbols is used to respond to each item; a rating or score is assigned to each response
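A minimal sketch (hypothetical items and responses) of how a scale assigns a rating to each response and combines the items into an overall score:

# Response options shared by every item on a hypothetical 3-item Likert-type scale
RESPONSE_SCORES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# One participant's answers to the three items (made up for illustration)
responses = ["agree", "strongly agree", "neutral"]

item_scores = [RESPONSE_SCORES[r] for r in responses]
total_score = sum(item_scores)  # items combined into an overall scale score

print("Item scores:", item_scores)          # [4, 5, 3]
print("Overall scale score:", total_score)  # 12 out of a possible 15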
Define: Systematic
Term used when data collection is carried out in the same manner with all participants and by all persons collecting the data