Everything Flashcards
Validity.
The extent to which a tool or instrument measures what it is supposed to measure.
- accuracy.
- inductive reasoning.
- strength of qualitative research.
Reliability.
The degree to which a tool or instrument produces consistent or similar results.
- precision.
- deductive reasoning.
- strength of quantitative research.
What are the 3 main approaches to finding evidence?
- Informally.
- Focused.
- Surveying the existing literature.
Sampling error.
When the group of participants chosen is inadequate or not random enough.
- random errors.
- systematic errors.
Random errors.
- under- or over-representation of certain groups.
- likelihood of error can be reduced by increasing the sample size.
- standard deviation changes.
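A minimal Python sketch of the sample-size point (all numbers invented for illustration): the spread of sample means shrinks as the sample size grows.

```python
# Random sampling error: the spread of sample means shrinks as sample size grows.
import random
import statistics

random.seed(1)
population = [random.gauss(50, 10) for _ in range(100_000)]  # invented population, true mean ~50

for n in (10, 100, 1000):
    # Draw 200 random samples of size n and measure how much their means vary.
    sample_means = [statistics.mean(random.sample(population, n)) for _ in range(200)]
    print(n, round(statistics.stdev(sample_means), 2))  # spread falls as n increases
```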
Systematic errors.
- inconsistencies or errors in the sampling frame.
- CANNOT be reduced by increasing sample size.
- mean changes.
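A companion sketch (again with invented numbers): a flawed sampling frame biases the mean, and the bias does not shrink as the sample size grows.

```python
# Systematic error: a biased sampling frame shifts the mean, and a bigger
# sample does not remove the shift.
import random
import statistics

random.seed(2)
population = [random.gauss(50, 10) for _ in range(100_000)]  # invented population, true mean ~50
sampling_frame = [x for x in population if x > 45]           # frame under-represents low scorers

for n in (10, 100, 1000, 10_000):
    sample = random.sample(sampling_frame, n)
    print(n, round(statistics.mean(sample), 2))  # stays biased above 50 at every sample size
```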
Common observational study designs?
- descriptive research.
- diagnostic accuracy studies.
- epidemiological research.
Descriptive statistics.
Refer ONLY to the sample; they do not attempt to generalise beyond the sample.
Diagnostic accuracy studies.
Evaluates how well a diagnostic or assessment procedure:
- correctly identifies people who have the health condition the procedure is designed to detect.
- correctly identifies people who do not have the health condition.
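These two properties are usually reported as sensitivity and specificity. A small Python sketch with invented counts:

```python
# Sensitivity and specificity from an invented 2x2 table of test results.
true_positive, false_negative = 90, 10    # people who have the condition
true_negative, false_positive = 160, 40   # people who do not have the condition

sensitivity = true_positive / (true_positive + false_negative)   # correctly identified as having it
specificity = true_negative / (true_negative + false_positive)   # correctly identified as not having it

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")  # 0.90, 0.80
```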
Meta-analysis.
Specialised statistical technique for combining the results from a set of quantitative studies in a systematic review.
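One common technique is fixed-effect inverse-variance weighting; the sketch below uses invented effect estimates and standard errors purely for illustration.

```python
# Fixed-effect (inverse-variance) pooling of three invented study results.
studies = [  # (effect estimate, standard error)
    (0.30, 0.10),
    (0.45, 0.20),
    (0.25, 0.15),
]

weights = [1 / se ** 2 for _, se in studies]  # weight each study by the inverse of its variance
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```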
Can have reliability without validity.
Yes: a tool can produce consistent results without actually measuring what it is supposed to measure.
Inferential statistics.
Go beyond the sample to help us infer what happens in the wider population.
Central tendency.
Refers to the central or typical value of a sample of scores on a continuous variable.
- mean.
- median.
- mode.
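A quick Python illustration of the three measures, using an invented sample and the standard library:

```python
# Mean, median and mode of a small invented sample of scores.
import statistics

scores = [3, 5, 5, 6, 7, 9, 12]
print(statistics.mean(scores))    # arithmetic average
print(statistics.median(scores))  # middle score when the scores are ordered
print(statistics.mode(scores))    # most frequently occurring score (5)
```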
Dispersion.
Refers to how spread out the scores are on a continuous variable.
- standard deviation.
- the minimum and maximum.
- the range.
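The same invented sample can illustrate the dispersion measures:

```python
# Standard deviation, minimum/maximum and range of the same invented sample.
import statistics

scores = [3, 5, 5, 6, 7, 9, 12]
print(statistics.stdev(scores))     # sample standard deviation
print(min(scores), max(scores))     # minimum and maximum
print(max(scores) - min(scores))    # range
```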
Two types of inferential statistics.
- t-test.
- analysis of variance.
T-test.
Used to compare the average scores between two different groups in a study to see if the groups are different from each other (ONLY between TWO groups).
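A minimal sketch of an independent-samples t-test, assuming SciPy is available and using invented scores:

```python
# Independent-samples t-test on two invented groups of scores.
from scipy import stats

group_a = [72, 75, 78, 71, 74, 77]
group_b = [68, 70, 66, 72, 69, 71]

result = stats.ttest_ind(group_a, group_b)
print(result.statistic, result.pvalue)  # a small p-value suggests the two group means differ
```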
Analysis of variance.
A test that compares the average scores between 3 or more different groups in a study to see if the groups are different from each other e.g. Comparing 3 types of teaching styles.
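A matching sketch of a one-way analysis of variance across three invented groups, again assuming SciPy is available:

```python
# One-way ANOVA comparing scores across three invented teaching styles.
from scipy import stats

style_1 = [65, 70, 68, 72, 66]
style_2 = [75, 78, 74, 79, 77]
style_3 = [70, 71, 69, 73, 72]

result = stats.f_oneway(style_1, style_2, style_3)
print(result.statistic, result.pvalue)  # a small p-value suggests at least one group mean differs
```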
Double blind experiment study.
Participants and researchers aren't aware of which group participants are in (so this knowledge can't influence their behaviour).
Saturation.
Little or no new data is generated from the participants and it is believed that the sample size is adequate.
Triangulation.
The use of more than one method in combination.
Reflexivity.
The researcher thinking carefully about what's going on and how their own perspective might be influencing the data.
Inter-rater reliability.
Involving more than one researcher in the analysis of the same data independently.
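One simple way to quantify inter-rater reliability is the proportion of items two researchers coded the same way; the codes below are invented for illustration (Cohen's kappa is a more formal alternative):

```python
# Percent agreement between two hypothetical coders of the same six items.
rater_1 = ["barrier", "enabler", "barrier", "barrier", "enabler", "barrier"]
rater_2 = ["barrier", "enabler", "enabler", "barrier", "enabler", "barrier"]

agreements = sum(a == b for a, b in zip(rater_1, rater_2))
print(agreements / len(rater_1))  # proportion of items coded identically (here 5/6)
```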
What 3 approaches greatly improve the validity of qualitative research?
- Triangulation.
- Reflexivity.
- Inter-rater reliability.
Progressive focusing.
- qualitative technique.
- allows for and often requires modification of the research question in light of the findings.
Ethnography (passive observation).
Systematic watching of behaviours and conversations in naturally occurring settings (researcher is meant to observe the culture from the point of view of the participant).
Ethnography (participant observation).
Observation in which the researcher also occupies a role or part in the setting in addition to observing.
Phenomenology.
Focusing on people’s lived experiences and how they interpret those experiences.
Case studies.
Used to look at individuals, a small group of participants, or a group as a whole.
Grounded theory.
Literally from “the ground up.” The approach starts with no preconceptions and develops theories and ideas as the data is collected and analysed.
What are the two most common types of qualitative data collection?
Interviews and observations.