Midterm Flashcards
Evidence-based medicine
The integration of the best research evidence with clinical expertise and patient values
Two main categories of research
Basic research
Applied research
4 sources of evidence-based medicine
- Clinical expertise
- Research evidence
- Information from the practice context
- Client's values and circumstances
How to determine the strength of evidence:
First, classify the type of information as primary or secondary. Then, identify the level of evidence.
Primary sources
Publications written by the author; first-person reports that are often referred to as original sources.
Purpose of primary source:
To present new findings or discoveries about a topic, and to build or add new information to previous findings.
Examples of primary sources:
- Randomized controlled trials (RCTs)
- Controlled clinical trial (CCT)
- Experiments
- Surveys
- Case-control or cohort studies
- Case study
- Case reports or case series
Secondary sources:
Seek to describe, review, or summarize the available information on a topic by gathering data from several primary sources
Examples of secondary sources:
- Narrative reviews
- Systematic literature reviews
Level I evidence:
Evidence provided by one or more well-designed, randomized, controlled clinical trials, including overviews (meta-analyses) of such trials
Level II evidence:
Evidence provided by well-designed observational studies with concurrent controls (e.g., case control or cohort studies)
Level III evidence:
Evidence provided by expert opinions, case series, case reports and studies with historical controls
The purpose of EBP:
Assist in clinical decision making
Why is EBP important?
- Aims to improve client outcomes
- Promotes an attitude of inquiry
- Encourages professional accountability
When practicing EBP as a practitioner, you should aim for these 4 things:
- Awareness
- Consultation
- Judgment
- Creativity
Five specific steps that you should follow when applying evidence-based practice:
Step 1: Ask an answerable question
Step 2: Find the evidence
Step 3: Appraise the evidence
Step 4: Integrate the appraised evidence with clinical expertise and client values
Step 5: Evaluate the effectiveness and efficiency of the previous steps
Interprofessional education competencies:
Competency 1: values and ethics for interprofessional practice
Competency 2: roles and responsibilities for collaborative practice
Competency 3: interprofessional communication practices
Competency 4: interprofessional teamwork and team-based practice
Aims for interprofessional collaboration:
- Improving population health
- Improving patient experience
- Reducing costs of healthcare
- Reducing burnout for health care providers
Benefits of interprofessional education:
-Provides opportunities to learn and practice skills that improve communication and collaboration
-Build professional identity and pride by articulating one’s scope of practice
-Dispel stereotypes about other disciplines or team members through understanding each other's roles and responsibilities
-Establish rapport and trust among team members that leads to valuing interdisciplinary collaboration
-Positive changes in teamwork
IPP means…
Interprofessional practice
Professionals practicing IPP participate in a non-hierarchical, interdisciplinary team approach
IPP is seen as a means of:
Improving the client/patient/student experience of care.
In IPP teams:
The emphasis is on consensus-building and mutual respect
Parts of a PICO question:
P (population/patient/problem): Who are the patients? What are the problems?
I (intervention/exposure): What do we do to them? What are they exposed to? (can include diagnostic or screening procedures)
C (comparison/contrast): What do we compare the intervention with?
O (outcome): What happens? What is the outcome?
PICO Example Question #1: In toddlers with expressive vocabulary delays, does focused stimulation lead to significantly greater vocabulary gains than no treatment?
P: in toddlers with expressive vocabulary delays
I: does focused stimulation
C: no treatment
O: lead to significantly greater vocabulary gains
Question #2: In adults who sustained a severe traumatic brain injury at least 1 year previously, does a program of cognitive strategy instruction lead to significantly better job performance ratings than no intervention?
P: in adults who sustained a severe TBI at least 1 year previously
I: does a program of cognitive strategy instruction
C: no intervention
O: lead to significantly better job performance ratings
Steps in selecting articles to read:
- Select the right article: determine the type of article(s) that will best help you get the information needed
- Perform an analysis: perform a quick analysis to determine if the article contains relevant information
True or false: Evidence from systematic research is the only acceptable basis for clinical decision making
False
True or false: the EBP framework recognizes that the experiences, values and preferences of ourselves and our clients can and should contribute to our clinical decisions
True
True or false: the amount of information available for review is overwhelming, so you should focus on “high-yield” sources in your area
True
True or false: only people who have completed years of specialized study can do a critical appraisal of research studies
False
Steps to searching the literature:
Find and gather a manageable amount of relevant literature using a wide range of contemporary tools and resources
Steps to reviewing the literature:
Analyze information for effectiveness and efficacy of interventions. It requires that you review the clinical application of the ideas to ensure evidence-based practice.
The goal of reviewing the literature:
Find out what is known about the topic based on these four things:
- Theory
- Facts
- Opinions
- Methods
Examples of reviewing the literature:
- Identifying the quality and relevance of the information
- Evaluating the type and source of information for the most current information
- Comparing the outcomes of more than one treatment
- Understanding research design and statistical analysis
Examples of searching the literature:
- Using Google Scholar to locate journal articles on a topic
- Developing a research question and key words
- Locating relevant literature on an electronic database such as PubMed
- Determining parameters to limit the type and amount of information found
Basic components of a scientific article:
- Title: a succinct description of the study topic
- Abstract: a concise summary of the study
- Introduction: a statement of purpose and rationale for the study with relevant background information, including a concise literature review of the topic. The hypothesis and/or research questions are stated at the end of this section
- Methods: a detailed outline of the procedures and evaluation instruments used as well as the variables measured.
- Results: a succinct and organized statement of the data and analysis, which includes pertinent figures, tables and graphs.
- Discussion: a discussion that includes an analysis and interpretation of the results and the implications and limitations of the study
- References: a list of sources of information cited or used in the study
Steps to analyzing a research article:
Step 1: identify the conclusions
What are the key conclusions of the study?
Do they help answer my research question?
Step 2: determine the purpose and rationale
What is the research problem?
What is the research question or hypothesis?
Step 3: understand methods and materials
What procedures were followed?
How were participants selected?
What variables are being measured?
Step 4: understand results and data analysis
Do the results make sense?
What are the important outcomes?
Are the results valid and reliable?
Step 5: interpret outcomes and draw conclusions
Are the outcomes reasonable and logical?
Are the results useful in clinical practice?
What are the strengths and limitations of this study?
Academic integrity
The commitment to demonstrate honest and moral behavior in the academic setting; includes expectations that you must follow in your writing, but can also be thought of as a mindset.
4 principles of ethical research
- Principle of Beneficence
- Principle of Nonmaleficence
- Principle of Utility
- Principle of Autonomy
Principle of beneficence:
Beneficence is the moral obligation to act in a way that will benefit or help others
- Beneficence in clinical practice: it is providing interventions that will help patients. This goes a step beyond not doing harm to a patient. It ensures that you are actively attempting to help them.
- Beneficence in research: it means that you are doing things that promote participants’ welfare and safety. It also includes protecting participants from exploitation and keeping the participants’ interests as a priority. You minimize risks to participants while maximizing benefits.
Principle of Nonmaleficence:
Nonmaleficence is the moral obligation to protect from harm, specifically physical or mental danger. This principle also indicates that you should not expose people to unnecessary risk.
- Nonmaleficence in clinical practice: this means not intentionally harming a patient or client, but it also entails not causing unintentional harm through carelessness. Although therapy might, at times, require that you expose people to some risk so that they can progress towards their goals, you must be particularly cautious and use your clinical judgement to minimize the risks.
- Nonmaleficence in research: this focuses on ensuring that what is being done is not harming the participants or putting them at unnecessary risk. This means you are also making sure that harm is not done by omitting care or treatment.
Principle of Utility:
Utility is the moral principle that actions and behaviors are right if they promote happiness and pleasure. Utility also means that actions are wrong if they promote unhappiness or pain. Another way to think of this is to consider the usefulness of the action or behavior to achieve happiness. This principle is often thought of in terms of what action brings about the greatest good for the greatest number of people.
- Utility in clinical practice: Think of this principle from the utilitarian point of view. Think of what gives the most benefit while causing the least harm. In clinical practice you can look at this principle by considering how you prioritize what interventions you might use. Think about what is most cost-effective for your patients while giving the most benefit. Think about who needs immediate treatment versus who will be harmed least by delaying services.
- Utility in research: In research, consider utility when prioritizing a research agenda or when making decisions on the allocation of funds. Look at utility as a type of cost-benefit analysis. What actions will give society the most benefit with the least risk? When considering utility, you also need to look at what the most pressing problems are for the largest number of people.
Principle of autonomy:
The moral obligation to respect that a person can make their own decisions about what they will do and what they agree to do. We must respect the decisions people make regarding their own lives. This is often referred to as 'human dignity'.
- Autonomy in clinical practice: This means respecting and acknowledging patients’ decisions regarding therapy, even if their wishes or decisions oppose our own. Autonomy DOES allow for educating your client or patient, but it DOES NOT allow you to make decisions for them. When you violate the principle of autonomy in the best interest of the other person, even though it is not what they want, this is known as paternalism.
- Autonomy in research: often addressed as informed consent. The participant has a right to know and to be given adequate information so that they can make an informed and calculated decision about the benefits and the risks of participation. The participants must freely agree to take part in research. There are four essential components to autonomy in research.
Informed consent must include:
- Disclosure
- Comprehension
- Voluntariness
- Competence
IRB levels of reviews:
Level 1: exempt
Typical educational practices
Level 2: expedited
Minimal risk; researchers record minimally invasive to noninvasive data
Level 3: full
Moderate to high risks; people unable to provide consent
What’s a good PICO question?
- Directly relevant to the patient or problem
- Phrased to facilitate a quick and effective search for an answer
- Provides a way to evaluate the answers you find
Quantitative research design
- Use empirical data and observations
- Grounded in empirical findings and numerical data
- Inclusion/exclusion criteria and strong methods/procedures section
e.g., systematic reviews, meta-analyses, randomized controlled trials
Qualitative research design
- Use testimonies and quotes to describe phenomena
- Open-ended inquiry and interviews with participants
- Direct quotes and perspectives
- Rich descriptions of themes and significant statements
e.g., meta-syntheses, perspectives, ethnographies
Mixed methods research design
- Combine both techniques to more fully understand all angles of a topic
- Incorporates the participant’s point-of-view
- Analyzes empirical findings from quantitative data collection in light of that point of view
- Can offer very strong interpretations
- Makes a large contribution to our understanding of a topic
- Comes at a cost of time and money!
Validity in experimental design
- The truthfulness or accuracy of a study's results, determined by how well the research design elements:
- Focus on what the investigator wants to know
- Prevent unwanted influences from contaminating the study’s outcome
- Reflects the overall amount of control from all sources in an experimental study
These variables of interest will impact validity:
- Independent
- Dependent
- Extraneous (influence of unrelated factors); when not controlled, this is called a confounding influence
Control in the design will limit the impact of
Extraneous factors
A true experimental study requires:
- Independent variable manipulated by the researcher
- Randomly chosen and assigned subjects
- A control or comparison group
Internal validity
The degree to which the relationship between the independent & dependent variables is free from the effects of extraneous factors
External validity
The degree to which results of a study can be generalized to persons or settings outside the experimental situation
Normally, when internal validity is increased, external validity is
Lowered (and vice versa)
Four categories of threats:
- Statistical conclusion validity
- Internal validity
- Construct validity of causes and effects
- External validity
Single group threats to internal validity:
-History: Confounding effect of specific events
-Maturation: Effect is due to passage of time
-Attrition: Subjects drop out
-Testing: Repeated testing; reactive measurements
-Instrumentation: Reliability of tools used
-Statistical regression toward the mean
-Selection/assignment
Three components of causality:
- Temporal precedence: The cause precedes the effect
- Covariation of cause and effect: The outcome is present only with the experimental effect, or the degree of outcome is related to the intensity/magnitude of the intervention
- No plausible alternative explanations: No other explanation for the response
Confounding variables present threats to internal validity because:
They offer competing explanations for the observed relationship between the independent variables and dependent variables
Social threats to internal validity
- Diffusion or Imitation of Treatments: the experimental group socializes with controls
- Compensatory Equalization of Treatments: the researcher is not blinded to the intervention and tries to compensate
- Compensatory Rivalry: subjects try harder; when one group's assigned treatment is perceived as more desirable than the other's, subjects receiving the less desirable treatment may try to compensate by working extra hard to achieve similar results
- Resentful Demoralization: subjects receiving less desirable treatments may become demoralized or resentful, and their reactions may be to respond at lower levels of performance
*Social threats result from the interaction of subjects and investigators; blinding is desirable
Ruling out threats to internal validity
Many threats can be ruled out using random assignment and control groups.
External validity
- The extent to which the results of a study can be generalized beyond the internal specifications of the study sample.
- Concerned with the usefulness of the information outside the experimental situation
- The generalizability of a study is primarily related to the specific patient context and conditions under investigation
Threats to external validity:
1: Interaction of Treatment and Selection
Becomes a problem if samples are confined to certain types of participants
2: Interaction of Treatment and Setting
Because of the characteristics of the setting of participants in an experiment, a researcher cannot generalize the results to individuals in other settings
3: Interaction of Treatment and History
Cannot generalize results to different periods of time in the past or future.
4: Reactive or Interactive Effects of Testing
A pretest makes a participant more aware or sensitive to upcoming treatment
Statistical conclusion validity:
Refers to the appropriate use of statistical procedures for analyzing data
Threats to statistical conclusion validity:
- Low statistical power (see the simulation sketch after this list)
- Violated assumptions of statistical tests
- Inflated error rate (e.g., from multiple comparisons)
- Low reliability or validity of measures
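A minimal Python sketch (not from the course materials) of why low statistical power is a threat: when a real but modest group difference exists, small samples frequently fail to detect it. The effect size, sample sizes, and simulation count below are arbitrary assumptions for illustration.

```python
import numpy as np
from scipy import stats

# Simulate many small two-group studies in which a real, modest effect exists,
# and count how often an independent-samples t-test reaches p < .05.
rng = np.random.default_rng(0)
true_effect = 0.4   # assumed true difference in standard-deviation units
n_sims = 2000       # number of simulated studies

def estimated_power(n_per_group):
    significant = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(true_effect, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(treatment, control)
        significant += p_value < 0.05
    return significant / n_sims

# With n = 10 per group the real effect is usually missed (power well below the
# conventional 0.80 target); with n = 100 per group it is detected far more often.
print("Estimated power, n = 10 per group:", estimated_power(10))
print("Estimated power, n = 100 per group:", estimated_power(100))
```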
The sample of participants that is selected for a study greatly impacts:
The validity of the study and the generalizability of the findings.
Inclusionary criteria
Characteristics potential participants must have to participate in a research study
Exclusionary criteria
Characteristics that disqualify potential participants from being in a research study
Nonprobability sampling
- Convenience sampling: Uses participants who are readily available
- Snowball sampling: A few participants are asked to participate; participants are then asked to identify other potential participants
- Purposive sampling: Participants are hand selected for a specific reason, not out of convenience (e.g., a characteristic the researcher wants to study)
Probability sampling (illustrated in the sketch below)
- Simple random sampling (every person from the population has an opportunity to be selected)
- Systematic sampling (every nth person on a list is selected from the population)
- Stratified sampling (used when certain subgroups of a sample must be represented)
- Cluster sampling (a population is divided into clusters and then participants are randomly selected from those clusters)
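A minimal Python sketch contrasting the four probability sampling strategies above. The participant pool, strata, and cluster sizes are invented for illustration only.

```python
import random

random.seed(0)                                  # reproducible example
population = list(range(1, 101))                # hypothetical pool of 100 participant IDs

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, k=10)

# Systematic sampling: every nth member of the list (here n = 10), from a random start.
n = 10
start = random.randrange(n)
systematic = population[start::n]

# Stratified sampling: sample within predefined subgroups so each is represented.
strata = {"adults": population[:50], "children": population[50:]}
stratified = [pid for group in strata.values() for pid in random.sample(group, k=5)]

# Cluster sampling: divide the population into clusters, randomly choose clusters,
# then randomly select participants from the chosen clusters.
clusters = [population[i:i + 20] for i in range(0, 100, 20)]
chosen = random.sample(clusters, k=2)
cluster_sample = [pid for cluster in chosen for pid in random.sample(cluster, k=5)]

print(simple, systematic, stratified, cluster_sample, sep="\n")
```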
Sample size has a large impact on the…
Validity of the statistical conclusions made in a study
Generally, the longer the time commitment, the more ______ can be expected.
Attrition
Attribute independent variables (IVs)
IVs that are measured but not manipulated by the researcher
Ex: age, gender, intelligence, socioeconomic status, occupation, hearing sensitivity
Intervening (extraneous) variables
- Potential nuisance variables that can confound a study
- Any variable that affects the DV that is not the IV
- Minimized via awareness and good experimental design by the researcher
Continuous variables
-Take on a range of values and possess the property of order
Ex: developmental age, chronological age, IQ, hearing sensitivity, other human characteristics that cannot be easily changed
Categorical variables
Characteristics of a person/object that are either present or not. There is no order to the variables
Examples: gender, age (young/middle-aged/old), ethnicity
Two main categories of group designs
Randomized controlled trial (RCT) group designs and single-factor group designs
Randomized controlled trial (RCT) group design:
Participants are randomly assigned to the experimental or control group, so the only difference expected between the two groups is in the outcome or result being measured (the dependent variable).
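A minimal sketch of the random assignment step described above, using hypothetical participant labels: shuffle the participant list, then split it into experimental and control groups.

```python
import random

random.seed(1)
participants = [f"participant_{i}" for i in range(1, 21)]   # hypothetical enrollees

random.shuffle(participants)              # randomize the order
half = len(participants) // 2
experimental_group = participants[:half]  # receives the intervention (independent variable)
control_group = participants[half:]       # receives no intervention or a comparison condition

print("Experimental:", experimental_group)
print("Control:", control_group)
```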
Single-factor group designs:
Have ONE independent variable
6 types of single-factor group designs:
-Pretest-posttest control group design
-Posttest only control group
-Single group pretest-posttest control group design
-Non-equivalent control group design
-Time series design
-Repeated measures design
Pretest-posttest control group design
Participants are randomly assigned to at least two groups. The dependent variable is measured before and after treatment
Posttest only control group design
Participants are randomly assigned to at least 2 groups. Researchers are only able to measure the results (dependent variable) post or after the intervention (independent variable).
Single group pretest-posttest control group design
There is only one group, therefore participants CANNOT be randomly assigned and researchers are not comparing results between groups. Instead, results (dependent variable) are compared within the one experimental group
Non-equivalent control group design
Used when there is a nonrandom control group available for comparison of results to the experimental group. Other than the control group being nonrandomized, this design is the same as the pretest-posttest control group design
Time series design
- Utilizes multiple measurements before and after intervention to establish stable baseline measurement and patterns/trends of behavior.
- This is important when a comparison group is not available because the researchers are comparing how the results change over time due to treatment
Repeated measures design
- There is one group of participants tested under all conditions and each subject acts as their own control
- This type of design is widely used in health science research because each individual's performance is measured before and then after each treatment, which is useful in rehabilitation because treatments tend to progress and change over time as the patient's level of performance changes (see the paired-comparison sketch below)
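A small sketch of the kind of within-subject comparison a repeated measures design supports: the same subjects are measured before and after treatment, so each acts as their own control and a paired test is appropriate. The scores are invented, and a paired t-test is just one common analysis choice.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for the same eight subjects, measured before and after treatment;
# each subject acts as their own control.
before = np.array([12, 15, 11, 14, 13, 16, 10, 12])
after = np.array([15, 18, 13, 16, 15, 19, 12, 14])

t_stat, p_value = stats.ttest_rel(after, before)   # paired (related-samples) t-test
print(f"mean change = {np.mean(after - before):.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```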
Single-subject designs focus on…
The behavior of, and treatment effects on, an individual subject or a small number of subjects rather than groups of subjects; the subject serves as their own control.
Key characteristics of single-subject designs
- Baseline assessments
- Stability of performance
- Continuous assessment
- Use of phases
A-B design
This is the most basic design for single subject research and consists of a baseline phase (A), followed by a treatment phase (B). A limitation of this design is that it provides no control comparison.
Withdrawal design (A-B-A design)
Replicating the baseline phase (A) after the intervention phase (B) to show that the target behavior only occurs when the treatment is happening and is not maintained once treatment is stopped
Multiple-baseline design
Used to determine whether treatment is effective while avoiding the ethical dilemma of withholding an effective treatment. This design uses the A-B approach but measures the effect of intervention on different behaviors in the same subject, OR the effect of the same treatment on different clients.
Alternating-treatments design
Measures the effects of multiple treatments for the same condition. It lets the researcher compare two different treatments to see which is more effective, or compare one treatment to a control or no intervention
Case report
A systematic documentation of a well-defined unit. It is usually a description of an episode of care for an individual, but sometimes an administrative, educational or other unit.
Case-control design
An epidemiological research design in which groups of individuals with and without a certain condition or characteristic (the “effect”) are compared to determine whether they have been differentially exposed to presumed causes of the condition or characteristic
Cohort design
An epidemiological research design that works forward from cause to effect, identifying groups of participants thought to have differing risks for developing a condition or characteristic and observing them over time to determine which group of participants is more likely to develop the condition or characteristic.
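A small sketch contrasting the two epidemiological designs above using a hypothetical 2x2 exposure-by-outcome table; the counts are invented, and the odds ratio and relative risk are standard effect measures rather than anything specific to these cards.

```python
# Hypothetical 2x2 table (all counts invented):
#                 condition present   condition absent
#   exposed              a = 30             b = 70
#   unexposed            c = 10             d = 90
a, b, c, d = 30, 70, 10, 90

# Case-control logic: start from people with and without the condition and look
# back at exposure, so the odds ratio is the usual effect measure.
odds_ratio = (a * d) / (b * c)

# Cohort logic: start from exposed and unexposed groups and follow them forward,
# so the risk in each group (and their ratio, the relative risk) can be computed directly.
risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
relative_risk = risk_exposed / risk_unexposed

print(f"odds ratio = {odds_ratio:.2f}, relative risk = {relative_risk:.2f}")
```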
Correlational research
Conducted for the purpose of determining the interrelationships among variables.
Developmental research
Observations are made over time to document the natural history of the phenomenon of interest
Epidemiological research
Research that documents the incidence of a disease or injury, determines causes for the disease or injury, or develops mechanisms to control the disease or injury
Historical research
Research in which past events are documented because they are of inherent interest or because they provide a perspective that can guide decision making in the present
Meta-analysis
A research process by which the results of several studies are synthesized in a quantitative way.
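A minimal sketch of quantitative synthesis using fixed-effect, inverse-variance weighting; the per-study effect sizes and standard errors are invented, and a real meta-analysis involves additional steps (study selection, heterogeneity assessment) not shown here.

```python
import numpy as np

# Hypothetical effect sizes and standard errors from four primary studies.
effects = np.array([0.30, 0.45, 0.20, 0.55])
std_errors = np.array([0.15, 0.20, 0.10, 0.25])

# Fixed-effect model: weight each study by the inverse of its variance,
# then pool the weighted effects into a single summary estimate.
weights = 1.0 / std_errors**2
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled_effect:.3f} (SE = {pooled_se:.3f})")
```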
Methodological research
Conducted to determine the reliability and validity of clinical and research assessments
Normative research
Uses large representative samples to generate norms on measures of interest.
Qualitative research
Research that is conducted to develop a deep understanding of underlying reasons, opinions, and motivations usually using interview and observation
Secondary analysis
Research that reanalyzes previously collected data for the purpose of answering new research questions.
Survey research
Data are collected by having participants complete questionnaires or respond to interview questions.
Evaluation research
Research conducted to determine the effectiveness of a program or policy
Policy research
Conducted to inform policy making and implementation