Quantitative Data Collection Flashcards

1
Q

Define: Operationalization

A

Process of translating the concepts of interest to a researcher into observable and measurable phenomena

2
Q

Why is it important to study how data is collected?

A
  • The success of a study depends on the quality of the data-collection methods chosen and employed
  • Data-collection method must be appropriate to the problem, hypothesis, setting and population
3
Q

When collecting quantitative data, there has to be a goodness of fit between:

A
  • Purpose
  • Design
  • Research question(s) or hypotheses
  • Conceptual and operational definitions
  • Data collection method
4
Q

What is data consistency?

A
  • In data collection, means that the method used to collect data from each participant in the study is exactly the same or as close to the same as possible
  • Minimize bias when more than one researcher gathers data
  • Control of extraneous variables
  • Follow data collection protocols to ensure intervention fidelity
  • Ensures interrater reliability
5
Q

Define: Intervention Fidelity

A
  • A way of ensuring consistency in data collection
  • Researchers must train data collectors in the methods to be used in the study so that each data collector acquires the information in the same way (e.g. training research assistants)
  • Can include protocols or manuals for gathering data systematically and reliably
6
Q

What are some ways researchers can implement intervention fidelity?

A
  • Structured and rigorous training of staff
  • Role playing to evaluate competency
  • Checks periodically throughout study
  • Regular meetings to review protocol and address complex situations
  • Checklists
7
Q

Define: Fidelity

A

Faithfulness, loyalty

8
Q

Define: Interrater Reliability

A
  • The consistency of observations between 2+ observers
  • Often expressed as the % of agreement among observers
  • Reflected as a kappa coefficient (a statistical term; a computation sketch follows below)
  • E.g. when Gabe had to choose pictures of older people, he passed them out to be evaluated; a photo might be rated as showing a young adult by ~85% of evaluators
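A minimal sketch in Python of how percent agreement and Cohen's kappa could be computed; the two raters' labels below are hypothetical, not from the course:

```python
# Hypothetical ratings of the same 8 photos by two raters.
from collections import Counter

rater_a = ["young", "old", "old", "young", "old", "old", "young", "old"]
rater_b = ["young", "old", "young", "young", "old", "old", "young", "old"]
n = len(rater_a)

# Observed agreement: proportion of photos both raters labelled the same way.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, from each rater's label frequencies.
count_a, count_b = Counter(rater_a), Counter(rater_b)
labels = set(rater_a) | set(rater_b)
p_expected = sum((count_a[lab] / n) * (count_b[lab] / n) for lab in labels)

# Cohen's kappa corrects the percent agreement for chance agreement.
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"Percent agreement: {p_observed:.0%}")  # 88%
print(f"Cohen's kappa:     {kappa:.2f}")       # 0.75
```

Note that kappa comes out lower than the raw percent agreement because it discounts the agreement that would be expected by chance alone.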
9
Q

What are the common methods of Data Collection?

A

1) Physiological measurements
2) Observational methods
3) Interviews
4) Questionnaires
5) Records or available data

10
Q

What are physiological measurements?

A
  • Data nurses gather about patients every day (e.g. vital signs)
  • Allow for objectivity, precision and sensitivity

11
Q

What are observational methods?

A
  • Used to see how participants behave under specific conditions (e.g. children’s response to pain)
  • Requires the study’s observations to be consistent, follow a systematic plan, be checked and controlled, and relate to scientific concepts and theories
12
Q

What is reliability as it relates to evaluating measurement tools?

A

The consistency with which the instrument measures the concept of interest

13
Q

What are the three aspects of reliability?

A

1) Stability (test/re-test reliability)
2) Homogeneity/internal consistency
3) Equivalence/interrater reliability (Cohen’s kappa; want 80%+)

14
Q

What is a stability test?

A
  • Ability of an instrument to produce the same results with repeated testing
  • The same test is administered again after a given interval and the results are compared (they should be the same; a sketch follows below)
  • E.g. give the same questionnaire more than once
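A minimal sketch of how test–retest stability could be checked by correlating participants' scores from the two administrations; the scores below are hypothetical:

```python
import numpy as np

# Hypothetical scale scores for 7 participants, measured twice.
time1 = np.array([12, 18, 25, 30, 22, 15, 28])
time2 = np.array([13, 17, 26, 29, 21, 16, 27])

# A high correlation means the instrument yields similar results on repeated testing.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest correlation r = {r:.2f}")
```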
15
Q

What is homogeneity/internal consistency?

A
  • Homo = same
  • All of the items in a tool measure the same concept or characteristic
  • Cronbach’s alpha of 0.80+ tells us the tool is reliable (a computation sketch follows below)
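A minimal sketch of how Cronbach's alpha could be computed by hand; the 5-item, 6-respondent Likert responses below are hypothetical:

```python
import numpy as np

# Hypothetical responses: rows = 6 respondents, columns = 5 items on the scale.
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 3, 2, 3, 2],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [1, 2, 1, 2, 2],
    [4, 4, 3, 4, 4],
])

k = scores.shape[1]                                # number of items
item_variances = scores.var(axis=0, ddof=1).sum()  # sum of the item variances
total_variance = scores.sum(axis=1).var(ddof=1)    # variance of the total score

# Alpha rises when the items covary, i.e. measure the same underlying concept.
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values of 0.80+ are usually taken as reliable
```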
16
Q

Define: Validity

A
  • The extent to which an instrument actually measures the abstract construct it is meant to measure (e.g. is the tool actually measuring anxiety and not stress?)
  • Expert opinion/expert panels
  • Comparisons to other scales, other events, etc.
17
Q

Define control as part of quantitative design:

A
  • Measures that researchers use to hold the conditions of the study uniform and avoid possible impingement of bias (extraneous variables) on the dependent variable
  • To control the treatment, the first step is to make a detailed description of the treatment; the second step is to use strategies to ensure consistency in implementing it
  • Variations in treatment reduce the effect size and internal validity
18
Q

What are the four ways to control extraneous variables?

A

1) homogeneous sampling (similar characteristics)
2) data consistency (collected consistently for everybody in sample)
3) random selection/randomization (assignment to groups)
4) manipulate the independent variable (not seen in non-experimental designs, where it is a non-issue; in experimental designs, ideally all four of the above are present)

19
Q

What is the difference between internal validity and external validity?

A

INTERNAL: extent to which study findings are “true” rather than the results of extraneous variables (factors WITHIN study design)

EXTERNAL: extent to which study findings can be generalized beyond the sample used in the study (apply findings OUTSIDE the study?)

20
Q

What other factors might account for the changes in dependent variables?

A

1) Maturation (in a longitudinal study, things change naturally over time, not d/t the study)
2) History (an event outside the study influences the sample’s responses)
3) Mortality (who drops out of the study? How good are the results if you lose a lot of people?)
4) Instrumentation (how reliable and valid are the tools we are using?)
5) Testing (test/retest effects)
6) Selection bias (people who self-select to be in the study)

21
Q

Under what conditions and with what populations could the same results be expected? (external validity)

A
  • Selection effects (who is in study)
  • Reactive effects (Hawthorne effect) (when people know they are being observed they act differently; behaviour tends to return to normal over prolonged observation)
  • Measurement effects (if tools are reliable and valid then this is non-issue)
22
Q

What are some threats to validity?

A
  • Rosenthal Effect: change in participant behaviors d/t researcher expectations; a self-fulfilling prophecy
  • Double-blind procedures are a means of reducing bias by ensuring that both those who administer the tx and those who receive it do not know which study participants are in the control and experimental groups
  • Halo effect: tendency of judges to overrate a performance because participant has done well in an earlier rating or when rated in a different area (e.g. students with high marks in the past may receive a high grade on a substandard paper d/t this effect)
23
Q

Describe how we critique validity:

A
  • Are there threats to the internal validity of the study? (6 things – history, maturation, etc.)
  • Does the design have controls at an acceptable level for threats to internal validity? (4 things of control – homogenous sampling, randomization, etc.)
  • What are the limits to generalizability in terms of external validity? (who the sample is, selection effects, reactive/Hawthorne effect, etc.)
24
Q

How do we critique measurement and data collection in quantitative studies?

A
  • How were data collected? Are data collection methods clearly described?
  • Identify all methods of measurement. Are validity and reliability of each instrument described? Are validity and reliability levels adequate?
  • Interview questions—do questions address concerns expressed in the problem statement?
  • Is the training of data collectors clearly described and adequate?
25
Q

Describe the benefits and limitations of physiological measurements:

A

BENEFITS:

  • Appropriate for nursing care
  • Objective, precise and sensitive

LIMITS:

  • Expensive
  • May require specialized knowledge and training
  • May distort variable of interest simply by using them (e.g. pt HR may increase just by seeing the monitor)
  • May be altered by environment (e.g. temp altered by recent intake)
26
Q

Describe the benefits and limitations of observational measurements:

A

BENEFITS:

  • Used when variables deal with events or behaviors that may be difficult to view as part of a whole
  • Flexibility to measure many different situations
  • Enable a great depth and breadth of information to be collected

LIMITS:

  • Data may be distorted d/t the observer’s presence (reactivity)
  • Concealment requires consideration of ethical issues
  • Data may be biased by the person doing the observing
27
Q

Describe the benefits and limitations of interviews:

A

BENEFITS:

  • Appropriate when a large response rate and an unbiased sample are important, as the refusal rate for interviews is lower than for questionnaires
  • Enable participation of people who cannot use a questionnaire
  • Interviewer can clarify and maintain the order of questions for participants
  • Questions can be altered to gather significant data (e.g. open ended questions)

LIMITS:

  • Participant may respond in a way they believe they should respond (social desirability bias)
  • May require hiring and training of interviewer
  • Interviewer bias (may lead responder to react in a certain way unintentionally)
28
Q

Describe the benefits and limitations of questionnaires:

A

BENEFITS:

  • Useful when the number of questions to be asked is finite
  • Answers to clear and specific questions
  • Can maintain anonymity and prevent interviewer bias
  • Less costly and time consuming

LIMITS:

  • Not everyone is capable of filling out questionnaires (e.g. illiterate, children)
  • Lengthy questionnaires are less likely to be completed
29
Q

Describe the benefits and limitations of records:

A

BENEFITS:

  • May save time and money while conducting a study
  • Reduces ethical or bias concerns

LIMITS:
  • Subject to problems of availability, authenticity and accuracy

30
Q

Define: Closed-ended item

A

A question that the respondent may answer with only one of a fixed number of choices

31
Q

Define: Open-ended item

A

A question that respondents may answer in their own words

32
Q

Define: Concealment

A

An observational method that refers to whether or not the participants know that they are being observed

33
Q

Define: Consistency

A

An aspect of data collection requiring data be collected from each subject in the study in exactly the same way or as close to the same way as possible

34
Q

Define: Content analysis

A

Technique for the objective, systematic and quantitative description of communications and documentary evidence

35
Q

Define: Debriefing

A

Opportunity for researchers to discuss the study with the participants and for participants to refuse to have their data included in the study

36
Q

Define: External criticism

A

A process used to judge the authenticity of historical data

37
Q

Define: Internal criticism

A

The process of judging the reliability or consistency of information within a historical document

38
Q

Define: Inter-rater reliability

A

The consistency of observations between two or more observers; often expressed as a percentage of agreement, or a coefficient of agreement that takes into account the element of chance

39
Q

Define: Intervention

A

Observational method that deals with whether or not the observer provokes actions from those who are being observed

40
Q

Define: Intervention fidelity

A

Consistency in data collection

41
Q

Define: Measurement

A

The assignment of numbers to objects or events according to rules

42
Q

Define: Operational definition

A

Description of how a concept is measured and what instruments are used to capture the essence of the variable

43
Q

Define: Operationalization

A

Process of translating concepts into observable, measurable phenomena

44
Q

Define: Reactivity

A

Distortion created when those who are being observed change their behavior because they know they are being observed; aka. Hawthorne effect

45
Q

Define: Scale

A

A self-report measurement tool in which items of indirect interest are combined to obtain an overall score; a set of symbols is used to respond to each item; a rating or score is assigned to each response

46
Q

Define: Systematic

A

Term used when data collection is carried out in the same manner with all participants and by all persons collecting the data