Paper 3 Flashcards
What are the key differences between quantitative and qualitative data?
In quantitative research, the data are in the form of “numbers” that are easy to summarize and submit to statistical analysis. Quantitative data are meant for generalization beyond the sample from which the data were drawn.
Qualitative data are gathered through direct interaction with participants - for example, through one-to-one or group interviews, or through observations in the field. The data are non-numeric - for example, transcripts or field notes - and are open to interpretation. The term used is “rich data,” meaning that the data are rich in their description of the behaviors being observed. Because the data are rich, they are not easy to analyze, and there is no single way to approach analysis.
What would be the advantage of gathering qualitative data for a study instead of quantitative data? Give an example to illustrate your argument.
Qualitative data is descriptive and describes “how”. It is gathered by doing in-depth descriptions. Usually, people are studied in their own environment. For example, how does bullying affect school performance in children? How do IB students cope with stress?
What would be the advantage of gathering quantitative data for a study instead of qualitative data? Give an example to illustrate your argument.
The advantage is that statistical analysis can be applied and it can be determined to what extent the results may be due to chance. Quantitative research studies can also be more reliable since they can be replicated and the results can be easily compared. For example, to what extent does music assist in one’s ability to learn a list of words? When this study is carried out, the average number of words in both conditions - that is, with and without music - can be compared.
To what extent can one generalize from qualitative studies? (Be sure to use the terminology that is discussed in this workbook with regard to Guba & Lincoln’s vocabulary for discussing qualitative research studies).
Qualitative studies can be generalized, but only to a limited extent. The word often used by researchers is “transferability.” To determine the level of transferability, the situation that is being studied must be described in detail so that the findings can potentially be applied to a similar situation. For example, if you are studying stress at our school you would not be able to generalize your findings to all schools, but only schools that are similar to ours. You would have to consider the size of the school, the level of economic and cultural diversity, that we run the IB program, and the size of classrooms. The list could go on…
In general, what are the ethical considerations that must be made when carrying out qualitative research? Is this in any way different from the ethical considerations that must be followed when carrying out experimental research?
Unlike much quantitative experimental research, no IV is manipulated. However, sometimes the situation is manipulated to see how people will respond. So, in that sense, the same ethical standards apply as in quantitative research. Very often qualitative research is long-term and personal. The participants may develop a relationship with the researcher and disclose personal information. It is important that the anonymity of the participants is protected and that their trust is not violated. It is also important that the researcher not become too close to the participants and thus lose a sense of objectivity.
What are two participant effects? How may they affect the findings of a qualitative research study?
The most basic participant effect is called “reactivity.” This is when the participants act differently than they usually would because they are aware that they are being studied. Another participant effect is the social desirability effect - this is where information is not disclosed if the participant fears being judged or seen in a negative light. Another effect is the conformity effect. This could happen in a focus group where participants behave in line with the other members of the group. Finally, another participant effect is the expectancy effect, where the participant wants to help the researcher and thus gives information that is believed to be what the researcher is looking for. All of these effects may affect the study by distorting the data and providing data that is not credible - in other words, is not reflective of the actual opinions or behaviors of the participant.
What is researcher bias? How can a researcher try to minimize its effect on research?
Researcher bias is when the researcher’s own beliefs influence the research process. One way that this could be reduced is to train others to carry out the research who are not aware of the actual goals or expectations of the study. Another way is to have more than one researcher collect and/or analyze the data and compare the findings to see if the same interpretations are made. This can also be done by asking the participants whether they agree with the findings. Finally, the researcher can reflect on how their own biases may have influenced the study as part of the discussion of their findings.
What does it mean if we say that a study lacks “credibility?” Why is this important in qualitative research?
Credibility is linked to the concept of validity. Are the findings and/or interpretations in line with the experience of the participants? This is also called “trustworthiness.” So, if your interpretation of an interview with me says that I am very “anti-technology,” this may not be true but may be a misunderstanding of my tone, body language, or word choice. It could also come from focusing on only one part of the interview, ignoring other statements that may have shown a more balanced approach to technology. Credibility is important so that we can say that our findings reflect the experiences of our participants; it also helps us to ensure an appropriate level of objectivity.
Credibility can be established through triangulation or by asking the participant to read the interpretations/results and say whether they agree with the researcher.
Define triangulation, giving two examples of how it works.
Triangulation is a “cross-checking” of data with an attempt of reaching the same conclusion by other means. One type is “method triangulation”, where different methods are used to study the same behavior. So, to determine how stress affects an individual, I may give them a questionnaire, carry out an observation and have a focus group. A second form of triangulation is researcher triangulation, where more than one researcher is carrying out an observation and the data can then be compared. Finally, there is theory triangulation, where a behavior is studied by researchers of different theoretical backgrounds - for example, a biological, cognitive, and sociocultural psychologist.
Why is triangulation important in qualitative research? How does it affect the credibility of the study?
It is important to establish credibility. By using method triangulation and getting the same results, I can conclude that it was not simply my choice of method that led to the results. If more than one researcher comes to the same conclusion, then it is not just my own biases that led to the results.
Explain what is meant by reflexivity – and why it is an important part of qualitative research?
There are two types of reflexivity. The first one is “personal.” This is where the researcher reflects on how his or her own biases or personal experiences may have influenced their findings. This is important because it will help to reduce researcher bias. The second type of reflexivity is called “epistemological.” This is when the researcher reflects on how they carried out the study and whether their choice of research method or procedure could have influenced the findings. This is important to establish the credibility of the findings.
A priori coding:
A process of coding qualitative data whereby the researcher develops the codes ahead of time based on a theoretical framework, the interview question, or pre-existing knowledge.
Case study:
The study of a particular person, group, or situation over a period of time. Case studies are technically not a research method - but a combination of research methods.
Content analysis:
A data analysis technique used to interpret textual material. This is done by looking for data or themes in a text - for example, a transcript of an interview. The researcher may decide what to look for before reading the interview. This is a priori coding. This then converts the qualitative data into quantitative data. This is a deductive approach. The researcher may also wait until she has all the interviews and then note what trends “emerge” from the text. This is an inductive approach.
Covert observation:
An observation in which the identity of the researcher, the nature of the research project, and the fact that participants are being observed are hidden from those who are being studied. The opposite of an overt observation.
Credibility:
This word is often seen as a synonym for validity in qualitative research.
Cross-sectional design:
Comparing two or more groups on a particular variable at a specific time. The opposite is a longitudinal design where the researcher measures a change in an individual over time.
Data triangulation:
Collecting data from more than one source. Also called “source triangulation.” For example, collecting data from four different hospitals.
Emergent thematic coding:
A qualitative data analysis approach in which a text is read several times to identify themes that emerge from the data. This is a common method for interpreting interviews.
Epistemological reflexivity:
When a researcher reflects on how their choice of method or materials may have influenced the findings of the study - for example, how did using a participant observation affect the potential behavior of the people being studied?
Event sampling:
A data collection strategy for observational studies. This is when the researcher makes note only when a specific behavior is observed. For example, only when aggression is observed on the playground.
Focus group:
A group interview, using 5 - 12 participants who share a common trait or interest.
Inter-rater reliability:
The degree of agreement among researchers recording behavior during an observation.
Longitudinal study:
Research over a period of time using observations, interviews, or psychometric testing. (Similar to a repeated measures design in an experiment).
Meta-analysis:
Pooling data from multiple studies of the same research question to arrive at one combined answer.
Method triangulation:
The use of more than one method to carry out a study. Case studies often use method triangulation. This is important because it increases the credibility of the study - we know it was not the choice of the research method alone that led to the findings.
Narrative interview:
An interview in which the researcher asks an open-ended question and invites the interviewee to respond. The interviewee is not asked any other questions, and the interviewer asks only for clarifications. The goal is that questions asked by the interviewer will not influence the interviewee.
Naturalistic observation:
An observation carried out in a participant’s natural environment. The opposite of a lab or controlled observation.
Outcomes-based research:
An attempt by healthcare agencies to see how certain healthcare practices, treatments, and other interventions affect a person’s health. This type of research focuses on the results.
Participant attrition:
The rate at which participants drop out of a study over time. This often occurs when research has many steps or takes place over a long period of time.
Participant observation:
An observational study where the researcher joins the group that is being observed. The opposite of a non-participant observation.
Personal reflexivity:
When researchers reflect on how their own biases may have affected their research process and the findings of their research.
Point sampling:
A data collection method used when carrying out an observation of a group where the researcher records the behavior of an individual and then moves on to the next participant until all have been observed.
Process-based research:
An attempt by healthcare agencies to see how certain healthcare practices, treatments, and other interventions affect a person’s health over time. This type of research is focused on the changes over time, rather than the final results.
Prospective research:
A study that attempts to find a correlation between two variables by collecting data early in the life of participants and then continuing to test them over a period of time to measure change and development.
Quota sampling:
Similar to a stratified sample, but there is no random selection of participants from the population. For example, you want a sample that reflects your country’s population. If your country is 40% of one culture and 60% of another, then the sample would have that same proportion - but they are not chosen randomly. It might be the first 40 people that sign up from culture x and then the first 60 from culture y.
Retrospective research:
Collecting data about individuals’ pasts in order to explain an outcome that has already occurred by the time the study is conducted. For example, in the study of someone with depression, looking at what protective and risk factors existed earlier in their life, such as adverse childhood experiences.
Snowball sampling:
A sampling technique where research participants recruit other participants for a study.
Structured interview:
A type of interview in which the interviewer asks a particular set of predetermined questions. The questions are created in advance and all participants are asked the same questions in the same order.
Time sampling:
A data collection method used when carrying out an observation of a group where notes are taken at specific times - for example, every five minutes or every hour.
Theory triangulation:
The use of more than one theoretical approach to investigate a question - for example, looking at a patient like HM from a biological, cognitive, and sociocultural perspective.
Transferability:
The degree to which the results of qualitative research can be generalized or transferred to other contexts or settings.
Unstructured interview:
An interview in which there is no specific set of predetermined questions. The interviews are more like an everyday conversation and tend to be more informal and open-ended.
Alternative hypothesis:
Also known as the research hypothesis. A hypothesis that states that there will be a statistically significant relationship between two or more variables.
Baseline:
The level of responding before any treatment is introduced and therefore acts as a control condition. For example, measuring normal brain activity before being asked to recall a stressful event.
Confederate:
A helper of a researcher who pretends to be a real participant.
Control condition:
A condition that does not receive the treatment or intervention that the other conditions do. It is used to see what would happen if the independent variable were not manipulated.
Correlational research:
The researcher measures two or more variables without manipulating an independent variable and with little or no attempt to control extraneous variables.
Counterbalancing:
A technique used to deal with order effects when using a repeated measures design. When a study is counterbalanced, the sample is divided in half, with one half completing the two conditions in one order and the other half completing the conditions in the reverse order.
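The split described above can be sketched in code. Below is a minimal Python illustration (the function name `counterbalance` and the condition labels “A” and “B” are made up for the example):

```python
import random

def counterbalance(participants):
    """Split a sample in half and assign each half the two
    conditions (A, B) in opposite orders to cancel out order effects."""
    shuffled = participants[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return (
        [(p, ("A", "B")) for p in shuffled[:half]],  # A first, then B
        [(p, ("B", "A")) for p in shuffled[half:]],  # B first, then A
    )

group_ab, group_ba = counterbalance(["P1", "P2", "P3", "P4"])
```

Any practice or fatigue effect then falls equally on both conditions, since each condition appears first for half the sample.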
Cross-sectional design:
Comparing two or more groups on a particular variable at a specific time. The opposite is a longitudinal design where the researcher measures a change in an individual over time.
Dependent variable:
The variable that is measured and is hypothesized to be the effect of the independent variable.
Double-blind testing:
An experimental procedure in which neither the researcher doing the study nor the participants know the specific type of treatment each participant receives until after the experiment is over; a double-blind procedure is used to guard against both experimenter bias and placebo effects.
Factorial design:
A design including multiple independent variables.
Field experiment:
A study that is conducted outside the laboratory in a “real-world” setting.
Hypothesis:
A testable statement of what the researcher predicts will be the outcome of the study which is usually based on established theory.
Independent samples design:
Also called an independent measures design and between-groups design. More than one experimental group is used and participants are only in one group. Each participant is only in one condition of the independent variable.
Independent variable:
The variable that is manipulated by the researcher.
Meta-analysis:
Pooling data from multiple studies of the same research question to arrive at one combined answer.
Natural experiment:
The study of a naturally occurring situation in the real world. The researcher does not manipulate an independent variable or assign participants randomly to conditions.
Non-equivalent groups design:
A between-subjects design in which participants have not been randomly assigned to conditions. A typical example would be to look at gender differences with regard to a certain behavior.
Null hypothesis:
A hypothesis that says there will be no statistical significance between two variables. It is the hypothesis that a researcher will try to disprove.
One-tailed hypothesis:
A scientific prediction stating that an effect will occur and whether that effect will specifically increase or specifically decrease, depending on changes to the independent variable.
Operationalization:
The process by which the researcher decides how a variable will be measured. For example, “marital satisfaction” cannot be measured directly, so the researcher would have to decide what traits will be measured in order to measure the construct.
Pretest-posttest design:
The dependent variable is measured before the independent variable has been manipulated and then again after it has been manipulated.
p-value:
The probability that, if the null hypothesis were true, the result found in the sample would occur.
Quasi-experiment:
The researcher manipulates an independent variable but does not randomly assign participants to conditions.
Random allocation:
A method of controlling extraneous variables across conditions by using a random process to decide which participants will be in which conditions. This can be done using random number generators or by pulling names out of a hat.
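The “names out of a hat” procedure can be sketched in a few lines of Python (the function name and condition labels are invented for the example):

```python
import random

def randomly_allocate(participants, conditions=("experimental", "control"),
                      seed=None):
    """Assign each participant to a condition by chance alone,
    keeping the group sizes as equal as possible."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)  # the 'names out of a hat' step
    allocation = {}
    for i, p in enumerate(shuffled):
        allocation[p] = conditions[i % len(conditions)]  # alternate conditions
    return allocation

groups = randomly_allocate(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
```

Because assignment depends only on the shuffle, any participant variable (age, ability, motivation) is, on average, spread evenly across conditions.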
Repeated measures design:
Also called a “within groups” design. The same participants take part in each condition of the independent variable. This means that each condition of the experiment includes the same group of participants.
Single-blind testing:
An experiment in which the researchers know which participants are receiving a treatment and which are not; however, the participants do not know which condition they are in.
Self-selected sampling:
Also called volunteer sampling. Participants choose to become part of a study because they volunteer by responding to an advert or a request to take part in a study.
Random sampling:
Selecting a sample of participants from a larger potential group of eligible individuals, such that each person has the same fixed probability of being included in the sample.
Two-tailed hypothesis:
A hypothesis that one experimental group will differ from another without specification of the expected direction of the difference - that is, without predicting an increase or decrease in behavior.
Opportunity sampling:
Also called convenience sampling. A sampling technique where participants are selected based on naturally occurring groups or participants who are easily available.
True experiment:
A study in which participants are randomly assigned to either a treatment group or a control group; an independent variable is manipulated by the researcher.
Bidirectional ambiguity:
A limitation of many correlational studies. It is not possible to know if x causes y, y causes x, if they interact to cause behavior, or whether it is just coincidental and no relationship truly exists.
Construct validity:
The degree to which a study actually measures the variable that it claims to measure. For example, if a researcher develops a new questionnaire to evaluate respondents’ levels of aggression, the construct validity of the instrument would be the extent to which it actually assesses aggression as opposed to assertiveness, social dominance, or irritability.
Stratified random sampling:
A method of probability sampling in which the population is divided into different subgroups or “strata” and then a random sample is taken from each “stratum.”
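The two steps (divide into strata, then sample randomly from each) can be sketched in Python. The function name is invented, and the 40/60 population mirrors the quota-sampling example earlier in the deck; the proportional rounding here is naive but exact for this example:

```python
import random

def stratified_random_sample(population, strata_key, sample_size, seed=None):
    """Divide the population into strata, then draw a random sample
    from each stratum in proportion to its share of the population."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        # naive proportional allocation (rounds to nearest whole person)
        n = round(sample_size * len(members) / len(population))
        sample.extend(rng.sample(members, n))  # random draw within the stratum
    return sample

# A population that is 40% culture "x" and 60% culture "y"
population = [("x", i) for i in range(40)] + [("y", i) for i in range(60)]
sample = stratified_random_sample(population, strata_key=lambda p: p[0],
                                  sample_size=10, seed=1)
```

The random draw within each stratum is what distinguishes this from quota sampling, where the proportions are the same but members are not chosen randomly.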
Demand characteristics:
Cues that may influence or bias participants’ behavior, for example, by suggesting the outcome or response that the experimenter expects or desires.
Ecological fallacy:
A mistaken conclusion drawn about individuals based on findings from groups to which they belong - for example, if a researcher uses Japanese participants in the sample and assumes that, because they are Japanese, they must be collectivistic. The ecological fallacy can be avoided by directly measuring the assumed variable rather than inferring it from group membership.
Ecological validity:
The degree to which results obtained from research or experimentation are representative of conditions in the wider world. Ecological validity is influenced by the level of control in the environment (hence, ecological).
Expectancy effect:
When a researcher’s expectations about the findings of the research are inadvertently communicated to participants and influence their responses. This distortion of results arises from participants’ reactions to subtle cues unintentionally given by the researcher - for example, through body movements, gestures, or facial expressions.
External validity:
The extent to which the results of a study can be generalized beyond the sample that was tested.
Extraneous variable:
A variable that is not under investigation in an experiment but may affect the dependent variable if it is not properly controlled. An extraneous variable that varies systematically with the independent variable is called a confounding variable.
Fatigue effect:
A type of order effect where a participant decreases in performance in later conditions because they are tired or bored with the activity.
Interference effect:
A type of order effect where the first condition influences the outcome of the second condition. For example, when given two lists of words to remember, a participant may recall a word from the first list while trying to recall the words from the second.
Internal validity:
When an experiment was conducted using appropriate controls so that it supports the conclusion that the independent variable caused observed differences in the dependent variable.
Mundane realism:
The participants and the situation studied are representative of everyday life. If a study is highly artificial, it is said to lack mundane realism.
Order effects:
Differences in research participants’ responses that result from the order in which they participate in the experimental conditions. Examples include fatigue effect, interference effects, or practice effect.
Participant attrition:
The rate at which participants drop out of a study over time. This often occurs when research has many steps or takes place over a long period of time.
Placebo effect:
A beneficial effect produced by a placebo drug or treatment, which cannot be attributed to the properties of the placebo itself, and must, therefore, be due to the patient’s belief in that treatment.
Practice effect:
A type of order effect where a participant improves in performance in later conditions because practice has led to the development of skill or learning.
Random error:
Error that is due to chance alone. Random errors occur when unexpected or uncontrolled factors affect the variable being measured or the process of measurement.
Reactivity:
When participants change their behavior due to their awareness of being observed.
Reliability:
The consistency of a measure - that is, the degree to which a study is free of random error, obtaining the same results across time with the same population.
Sampling bias:
When a sample is selected in such a way that it is not representative of the population from which it was drawn. When a sample is biased, population validity is decreased.
Type I Error:
When the null hypothesis is rejected although it is true; when the researcher concludes there is a relationship in the population when in fact there is not.
Type II Error:
When the null hypothesis is retained although it is false; when the researcher concludes there is no relationship in the population when in fact there is one.