Flashcards

1
Q

Name four types of reviews

A

Narrative/traditional reviews, critical reviews, scoping reviews, systematic reviews

2
Q

What are narrative reviews?

A

These provide a general overview of the literature. Usually there is no specific research question, and they often have diverse/multiple aims and purposes. Different types of studies/literature are taken into account, but often there is no aim to be comprehensive in the selection of included studies. Weighting of the studies is not transparent, and the selection of studies is potentially biased.

3
Q

What are critical reviews?

A

They aim to demonstrate that the writer has extensively researched the literature and critically evaluated its quality. A critical review goes beyond mere description and includes some degree of analysis and conceptual innovation. It typically results in a hypothesis or a model. It seeks to identify the most significant items in the field and their conceptual contribution to embodying existing theory or deriving a new one. Analyses are typically narrative.

4
Q

What are scoping reviews?

A

They are a preliminary assessment of the potential size and scope of the available research literature. They aim to identify the nature and the extent of research evidence. The completeness of searching is determined by time/scope constraints. They may include research in progress. There is no formal quality assessment of the included studies, and the analyses are typically tabular with some narrative commentary. A scoping review characterises the quantity and quality of the literature.

5
Q

What are systematic reviews?

A

They depart from a specific research question and involve a systematic search for research evidence, often adhering to guidelines on the conduct of a review. An effort is made to include all the literature on a topic. There is a quality assessment that might determine inclusion/exclusion of studies. Analyses are typically tabular and, if possible, a meta-analysis is included. They give recommendations for clinical practice and future research. Systematic overviews that are periodically updated seem like a good idea.

6
Q

What are the four goals of Cochrane?

A
  1. Producing evidence;
  2. Making the evidence accessible;
  3. Advocating for evidence;
  4. Building an effective and sustainable organisation.
7
Q

What are the six steps in performing systematic reviews?

A
  1. Formulate a research question;
  2. search and select literature;
  3. assess the risk of bias of the included studies;
  4. extract data;
  5. analyse data;
  6. conclude.
8
Q

Which six steps does searching and selecting literature in a systematic review include?

A

a. Scoping your search topic;
b. Choosing the resources to search;
c. Choosing the search terms;
d. Compiling your search strategy and running the search;
e. Finding the full text;
f. Managing the information found.

9
Q

What things should be thought about when making up a research question in a systematic review?

A

A systematic review should always depart from a clinically relevant and well-defined research question. As a guideline, PICO can be applied. However, there are other important things to think about, such as the timing of the outcome measures, the design of the studies, the time period from which you include studies, and the language.

10
Q

While analysing data in systematic reviews, comparisons must be defined based on…?

A

clinical homogeneity

11
Q

What is clinical homogeneity?

A

the subgroups in each study should be the same, just as the interventions/controls and the outcome measures

12
Q

What are sensitivity analyses for?

A

Sensitivity analyses are used to compare the results of studies with a high risk of bias (RoB) to those with a low RoB.

13
Q

What is the Cochrane method for quality assessment called? And which four steps does it include?

A

The Cochrane uses the GRADE method for quality assessment, which takes into account:

  • Risk of Bias (risk of bias of the included studies/RCTs);
  • Inconsistency (are the results of the included studies in the ‘same’ direction (so no inconsistent pattern));
  • Indirectness (did the studies really include the target population and were the ‘real’ outcomes measured (not proxy measures));
  • Precision (were enough people included in the analyses).
14
Q

What is duplicate publication bias?

A

Duplicate (multiple) publication bias occurs when a study is counted more than once because it is published in multiple papers. It also occurs when the first paper includes 50 patients and the second one 100: then only the second one should be included, as counting both would exaggerate the result.

15
Q

What is the salami tactic?

A

Here, different parts of a report are published in different papers. For example, one paper publishes only the physical outcomes, another only the psychosocial ones. Or the first paper reports only the 1-year results and a second paper the 3-year results.

16
Q

What is the advanced salami tactic?

A

The advanced salami tactic is when different papers on the same study list other authors (or the same authors in a different order).

17
Q

What is location bias?

A

Trials published in low- or non-impact-factor journals were more likely to report significant results than those published in high-impact, mainstream medical journals, and the quality of the trials was also associated with the journal of publication. This refers to location bias: should you include all journals?

19
Q

What is language bias?

A

Language bias refers to bias that could be introduced in reviews exclusively based on English-language reports. However, the research examining this issue is conflicting.

20
Q

Name four reasons meta-analyses are performed

A

Meta-analyses (MA) are performed to:

  • Increase the power/improve precision. Many individual studies are too small, thus combining yields a higher chance of detecting an effect;
  • Answer questions not posed by individual studies. This increases generalisability/robustness;
  • Settle controversies in apparently conflicting studies;
  • Generate new hypotheses.
21
Q

In the case of a dichotomous outcome, in what two ways can the pooled effect be expressed?

A
  • Absolute measures. This includes the risk difference (RD) and number needed to treat (NNT = 1/RD);
  • Relative measures. This includes the risk ratio (RR) or the odds ratio (OR).
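As a sketch (not from the source), all four measures can be computed from a single 2×2 table; the counts below are hypothetical:

```python
def dichotomous_effects(events_t, n_t, events_c, n_c):
    """Absolute and relative effect measures from a 2x2 table."""
    risk_t = events_t / n_t              # event risk in the treatment group
    risk_c = events_c / n_c              # event risk in the control group
    rd = risk_t - risk_c                 # risk difference (absolute)
    nnt = 1 / abs(rd)                    # number needed to treat = 1/RD
    rr = risk_t / risk_c                 # risk ratio (relative)
    # odds ratio (relative): odds of the event in each group
    or_ = (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))
    return rd, nnt, rr, or_

# Hypothetical counts: 10/100 events with treatment, 20/100 with control.
rd, nnt, rr, or_ = dichotomous_effects(10, 100, 20, 100)
# rd = -0.10, nnt = 10, rr = 0.5, or_ ≈ 0.44
```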
22
Q

In what two ways can continuous outcomes be pooled?

A

Continuous outcomes can be pooled on a numerical scale in two ways:

  • Identical outcome measure with an identical scale.
  • The same construct on a different scale (standardised mean difference). This corrects for differences between scales; the directions of the scales must be aligned so that they go the same way. The standard deviation does not have to be modified.
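A minimal sketch of the second option, taking the standardised mean difference in its common Cohen's d form (mean difference over pooled SD); the score lists are hypothetical:

```python
import statistics

def smd(group_a, group_b):
    """Standardised mean difference (Cohen's d form):
    mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Two hypothetical groups measured on the same construct:
d = smd([5, 6, 7], [2, 3, 4])   # d = 3.0
```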
23
Q

In what two ways can ordinal outcomes be pooled?

A

For ordinal outcomes, there are two solutions:

  • Dichotomise;
  • Continuous, when there are too many categories (>7?).
24
Q

What three forms of heterogeneity exist?

A

clinical heterogeneity, methodological heterogeneity and statistical heterogeneity (heterogeneity)

25
What is clinical heterogeneity?
This includes differences in clinical features of a study (such as patients, treatments, and outcomes)
26
What is methodological heterogeneity?
This is about different study methodologies (e.g. RCTs vs. non-RCTs)
27
What is measurement error (in reliability)? How is it expressed?
This is the systematic and random error of a patient’s score that is not attributed to true changes in the construct to be measured. It is expressed as the SEM (standard error of measurement: the standard deviation of repeated measurements in one person). SEM = √(error variance).
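A small sketch of one common way to obtain the SEM (assumed here: pooling the within-person variance of repeated measurements; the data are hypothetical):

```python
import statistics

def sem(repeats_per_person):
    """SEM = sqrt(error variance): the within-person variance of
    repeated measurements, averaged over persons."""
    error_var = statistics.mean(statistics.variance(r) for r in repeats_per_person)
    return error_var ** 0.5

# Two hypothetical persons, each measured twice:
s = sem([[10, 12], [20, 22]])   # sqrt(2) ≈ 1.41
```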
28
What does the National Health Care Institute (Zorginstituut Nederland) do? And what is needed to do so?
It develops research priorities. Priority setting departs from the question which types of health care are reimbursed (in the basic health insurance package) or not. Systematic reviews are needed for this.
29
What is dependability in terms of quality in qualitative research? Name five things to reach this.
The question here is whether the context is properly represented in the data. Have you been consistent and sensitive to everything at play? Or does the data depend on your choices? This follows from: - Iterative processes: data collection and analysis alternate; - Data saturation: data collection only stops when no new codes emerge (the whole scope is covered); - An emergent design: you can still choose to obtain your data in a different way; - Standardising and mechanising: recording information automatically (such as audio-recording an interview); - Forming a research team: to prevent researcher bias, you have the codes checked.
30
What is confirmability in terms of quality in qualitative research? Name four things to reach this.
Do the findings follow logically from the data, and can you demonstrate this? Is your own ‘backpack’ of preconceptions too prominent? Note: being aware of yourself is not the same as being neutral. You achieve this through: - Deviant cases in your data or the literature, to avoid tunnel vision; - An audit trail: a set of memos that map the decision process, from which it can be traced why certain choices were made; - Peer debriefing: discussing aspects of the data-collection design and codings with co-researchers; this also includes triangulation in the analysis; - Reflexivity: be critical of your own role/assumptions in the logbook.
31
What is statistical heterogeneity? How can this be detected?
This results from clinical/methodological heterogeneity (expected) or is unexpected. Statistical heterogeneity (the unexpected form) can be detected by common sense (the eye-ball test, in which you just look at the results). The chi-square test can be used, but a type 2 error is likely. Also, heterogeneity can be quantified with I², which describes the percentage of the variability in the effect estimates that is due to heterogeneity rather than chance. A value of >50% is sometimes labelled substantial. This can be misleading though, as the importance of inconsistency depends on several factors, such as the number of studies and the direction of effects in individual studies.
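The I² statistic can be sketched as follows, using fixed-effect inverse-variance weights to form Cochran's Q (the effect estimates and variances are hypothetical):

```python
def i_squared(effects, variances):
    """I^2 = max(0, (Q - df) / Q) * 100, with Q Cochran's heterogeneity
    statistic under inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    return 0.0 if q == 0 else max(0.0, (q - df) / q) * 100

# Two hypothetical studies with effects 0 and 2, each with variance 1:
i2 = i_squared([0.0, 2.0], [1.0, 1.0])   # Q = 2, df = 1 -> I^2 = 50%
```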
32
What do you do when heterogeneity cannot be explained?
Perform a random-effects meta-analysis (MA).
33
Name four things literature reviews are for.
- Providing up-to-date information; - Identifying significant issues/themes for further research; - Guiding the development of research topics/questions; - Presenting the used methodologies and research tools.
34
Name 6 criteria for a good internal validity (in terms of RCTs)
- Randomisation procedures; - Comparable groups at baseline; - Blinding of randomisation allocation to patients, outcome assessors, care providers and statisticians or researchers performing the analyses; - Compliance to the protocol by patients and care providers; - Loss to follow-up; - Intention-to-treat analyses.
35
What is evaluation?
Evaluation is a systematic determination of a subject’s merit, worth and significance, using criteria.
37
What is acquiescence bias?
People agree with statements as presented.
38
What is precision in terms of quality of RCTs? And by what is it determined?
The likelihood of chance effects leading to random errors. It is reflected in the confidence interval around the estimate of effect from each study.
39
What does PAR aim for?
It aims to change practice in real life. There are collaborations between researchers, practitioners, and users. Iterative designs and mixed methods are used. It is about understanding perspectives in order to determine change and measuring change. It makes use of deductive and inductive reasoning.
40
What does impact evaluation do?
Impact evaluation is about the effect of the outcome on a societal level.
42
What is concurrent validity?
Concurrent validity measures how well a new test compares to a well-established test.
43
What happens in transdisciplinary research?
In transdisciplinary research, people with certified knowledge (like PhDs) and people without certified knowledge (like people with experience) all work together on understanding the problem and on developing and testing options to address it.
44
What is the claim of transdisciplinary research?
The claim of transdisciplinary research is that all the knowledge generated in all the different fields together is still not sufficient to solve complex and persistent problems. Something else is needed on top of this.
45
What are general aspects for the aim of a measurement? Name four.
- Perspective (patient-reported or clinician-reported): definitions of main symptoms or success of a treatment can differ; - Objective or subjective: in many so-called ‘objective measures’ interpretation from a rater/expert/physician is still needed (such as in MRI or X-rays); - Disease-specific or generic; - What exactly do you want to measure? Is it possible to measure the real outcome? Sometimes measuring the real outcome is challenging or requires a (very) long-term follow-up. Surrogate endpoints (SE) are used as a substitute for the real outcome (clinical endpoint).
46
What is central tendency bias? Name a solution and a setback.
The fact that people often want to avoid extreme responses. A solution is to create an equal number of positive and negative statements. A setback is that people who slightly agree often also slightly disagree.
47
What is selective reporting bias?
This refers to systematic differences between reported and unreported findings. Within a published report, analyses with statistically significant differences between groups are more likely to be reported than non-significant differences.
48
What is meant by the demand gap?
Demand gap: researchers do not always work on real societal problems.
49
What does internal consistency mean?
It is the interrelatedness among items. Wiki: internal consistency is typically a measure based on the correlations between different items on the same test (or the same subscale on a larger test). It measures whether several items that propose to measure the same general construct produce similar scores. For example, if a respondent expressed agreement with the statements "I like to ride bicycles" and "I've enjoyed riding bicycles in the past", and disagreement with the statement "I hate bicycles", this would be indicative of good internal consistency of the test.
50
What do analytical surveys do/what are they/what do they consist of?
They explore and test proportions/associations/predictors between variables. They are observational studies, with structured questions and limited options for respondents. They are highly deductive.
51
What does ZonMW do?
It addresses efficiency research through: - Open calls for studies designed to address efficiency issues in health care practice; - Targeted calls for studies designed to address efficiency issues that come from policy parties, in order to stimulate healthcare innovation and to respond flexibly to current developments.
52
What are surrogate endpoints? Name three examples. What is a potential problem with SEs?
Surrogate endpoints (SE) are used as a substitute for the real outcome (clinical endpoint). Examples are biomarkers, blood pressure instead of CVD, or bone density instead of bone fractures. A problem with SEs is that they may correlate with the real outcome, but do not necessarily have a guaranteed relationship. SEs might not have causal relationships, or the outcome might appear through another way than via the SE.
53
What four things must be thought of while defining the study population?
- Recruitment: is complete information given to patients? Is there an informed consent? - Composition: what are the in- and exclusion criteria? What about the homogeneity of the group (must be high for high internal validity but low for good external validity)? And how representative is the population (to prevent a lesser external validity)? - Size of the population: this needs to be large enough to answer the research question. To prevent loss to follow-up, you can send reminders, contact participants personally during measurements or provide gifts. - Who gets what? Think about random allocation to prevent confounding by indication. Also, the groups should be comparable at baseline. The randomisation procedure is only adequate if done by an independent person or a computer program that tracks all the procedures.
54
What are research agendas? What is key element?
These are national lobbies for many professional organisations, in which patient involvement is a key element. It is about a societal perspective, looking at the burden of a disease for patients and society and the impact of the results. It also has a research perspective, which looks into the probability of successful completion of studies and of the implementation. There is big money involved.
55
What are the four basic principles of quality in qualitative research?
Credibility, Transferability, Dependability, Confirmability
56
What is social desirability bias?
People tend to put themselves in a positive perspective.
58
What does effectiveness evaluation do?
Effectiveness evaluation is about the effect of reaching the output, or the assessment of the degree of success of a program in achieving its goals.
59
What is transferability in terms of quality in qualitative research? Name three things to reach this.
This is about whether the results are recognisable to others and transferable to other contexts. Importantly, it is not the researcher but those familiar with their own context who can best translate the findings from one context to another. You achieve this through: - Giving a thick description: sketch the context as completely as possible for the sake of transferability; - Sampling: there must be maximal variation; - Comparing with the literature: place your research in a broader context than your own.
60
What is co-intervention in terms of performance bias?
Provision of unintended additional care to either comparison group.
61
What is summative evaluating?
Ascertaining a degree of achievement or value regarding objectives and results of any such action that has been completed.
62
What are the two steps to find the internal consistency?
1. Assess whether there are different factors within a measurement tool (factor analysis); 2. Assess the interrelatedness of items per factor using Cronbach’s alpha (a value between 0 and 1; the closer to 1, the better).
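Step 2 can be sketched like this, using the standard Cronbach's alpha formula (the item scores are hypothetical):

```python
import statistics

def cronbach_alpha(items):
    """items: one score list per item, same respondents in the same order.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]          # total score per respondent
    item_var_sum = sum(statistics.variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / statistics.variance(totals))

# Two hypothetical items answered identically by three respondents:
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])   # perfectly consistent -> alpha = 1.0
```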
63
Name 4 assumptions that come along with PAR
- People experiencing the problem are in the best position to conduct research on the issue; - All people can learn basic research skills; - Participants can establish equal partnerships with researchers that can be used to address community problems; - PAR-related activities help empower members of powerless groups. Participation in the PAR process is a critical component of community interventions.
64
What kind of statistics must be used in ordinal scales?
Non-parametric tests.
65
What is exclusion/attrition bias? Name 4 reasons for this.
Exclusion/attrition bias is about systematic differences in loss to follow-up between groups. Reasons for this could be withdrawals, side effects, worse health, or that the treatment doesn’t work.
66
What three things are the quality of RCTs based on?
Internal validity, precision and external validity.
67
Name 5 things a research objective should be
- Useful (relevant according to parties involved); - Realistic (the likelihood of contributing to solving the problem); - Feasible (in time and resources); - Clear (specify its contribution); - Informative (an indication of the knowledge to be gathered).
68
What is criterion validity?
This is about the relationship between the test and the criterion (gold standard).
69
What kinds of validity exist?
Content validity, criterion validity, construct validity.
70
What is responsiveness?
Responsiveness is nothing more and nothing less than a longitudinal way of assessing validity. It is all about the question: can we detect a real change? If you appreciate that responsiveness is longitudinal validity, you can translate everything from the domain of validity to the longitudinal assessment, and then you get to responsiveness.
71
Name three negative things about a framework
Reasons not to use them: they stop us thinking (outside the framework), they create selective views/bias, and they make research less open and flexible.
72
What is contamination in terms of performance bias?
Provision of the intervention to the control group.
73
What do process evaluations aim for?
Process evaluations help stakeholders see how an intervention/program outcome or impact was achieved.
74
What three forms of reliability are there?
Internal consistency, reliability, and measurement error.
75
Why do we evaluate? name two things
- Ascertain a degree of achievement or value regarding objectives and results of any such action that has been completed (retrospective (summative)); - Enable reflection and assist in identification of future change (formative). This focuses on setting change objectives.
76
What topics does ZonMW look into?
- Early evaluation of promising interventions (new interventions only in a research setting or a single hospital). They acquire evidence for the decision whether or not to conduct further research or to implement the intervention; - Evidence for guidelines and insurance coverage, by looking into the efficiency of an intervention (effectiveness proven), and evidence in support of guidelines and insurance coverage. Interventions must already be applied in the Netherlands.
77
What is credibility in terms of quality of qualitative research? How do you reach this? Name four things.
Credibility arises when the research findings are recognisable and meaningful to the participants, and when you have properly captured the different perspectives on a phenomenon. You achieve credibility through: - Triangulation: interview not only patients themselves but also their relatives and professionals; - Sampling: build variation into your sample to capture a complete picture; - Member checks: get feedback from respondents on whether they recognise themselves in the account; - Reporting: use quotes to substantiate findings and address deviant cases.
78
Name three types of evaluation
Efficiency evaluation, effectiveness evaluation, impact evaluation.
79
Name 6 criteria for a good external validity (in terms of RCTs)
- Relevant research questions; - Description of in- and exclusion criteria; - Description of the intervention; - Description of the control intervention; - Relevant outcome measurements; - Length of follow-up/timing of follow-up measurements.
80
What is face validity?
The degree to which (the items of) an HR PRO instrument indeed look as though they are an adequate reflection of the construct to be measured. Wiki: face validity is simply whether the test appears (at face value) to measure what it claims to. This is the least sophisticated measure of validity. Tests wherein the purpose is clear, even to naïve respondents, are said to have high face validity.
81
What is the difference between multidisciplinary- and interdisciplinary research?
In multidisciplinary research, different disciplines (such as nutrition people, people with obesity and medical people) exist alongside each other, like a report with different chapters. In interdisciplinary research, a team of different disciplines sits together, developing research questions and drawing conclusions together. For example, in addiction, knowledge of sociology, psychology and biology is needed.
82
What is qualitative research?
Qualitative research is a form of social inquiry that focuses on the way people make sense of their experiences of life and the world in which they live.
83
What is detection bias?
Detection bias is about an error in the measurement of the outcome (a.k.a. information bias). In continuous outcomes it is a measurement error; in dichotomous or categorical outcomes, it’s misclassification. Differential measurements are important in the context of risk of bias.
84
What is interpretivism (constructivism)?
The truth and its meaning are constructed by the person/researcher (subjects). People interpret the world (object) differently. Researchers inherently view the world through their own frame of reference. Observations are value-bound.
85
What is internal validity in the context of RCTs?
The extent to which its design and conduct are likely to prevent systematic errors, or bias.
86
What is transdisciplinary research? Name an example of how TR is tried to reach
There is no clear definition of transdisciplinary research. It’s an umbrella term for interactive forms of research that found their origin in many different disciplines. For example: deliberative policy analyses, and fourth-generation evaluation (in the evaluation of teachers, students and school programs, those involved need to be part of determining what exactly is to be evaluated). Also, in the field of development studies, participatory action research was introduced to reach for TR.
87
A new collab between ZonMW and Zorginstituut Nederland is "veelbelovende zorg". What is this for?
It has the priority to study new and promising treatments, or well-established treatments for which the evidence is not strong/convincing.
89
Name the 5 steps of evaluating
1. Plan: determine what to evaluate; 2. Measure: collect quantitative data in relation to the initiative; 3. Describe: determine the characteristics of the initiative/process; 4. Judge: assess the quality or effectiveness of the initiative; 5. Discuss/negotiate: stakeholders are active participants in evaluation processes and may have different perspectives.
90
In what three ways can performance bias occur?
- Exposure to other factors than the intervention of interest; - Contamination (provision of the intervention to the control group); - Co-intervention (provision of unintended additional care to either comparison group).
91
What is reliability? How is it increased?
Reliability means getting the same result if the situation is stable. Repeated measurements often increase reliability.
92
What is predictive validity?
Predictive validity is the extent to which a score on a scale or test predicts scores on some criterion measure.
93
What is validity?
Measuring the result one really aims for.
94
What is used to test the reliability (within reliability)? In what two ways can this be done? And what is needed in order to use this in a correct way?
The test-retest design is used, which can be done in two ways: 1. Inter-rater: different raters on the same occasion, or, even better, different raters on different occasions; this reflects daily practice more adequately. 2. Intra-rater: the same raters rate different occasions. The construct should not have changed between the two measurements; therefore, there must be a short interval between them. Also, external criteria can be used to avoid changes (such as a predefined scale).
95
What is content validity?
This is the degree to which the content of a health-related patient-reported outcome (HR PRO) instrument is an adequate reflection of the construct to be measured.
96
Name three forms of bias that might occur in using ordinal scales
Central tendency bias, acquiescence bias, social desirability bias.
97
Name four general elements of qualitative data gathering
- Always a description of the data; - Analytical induction by coding; - Secondary data analysis; - Reflexivity of the researcher.
98
What is positivism?
Positivists think that we can come up with laws about society: laws about how the world around us functions. Reality can be observed, facts can be presented as truths, knowledge can be formulated into laws. There is a single reality, waiting to be found. Observations are value-free. The truth is generalisable.
99
What is deductive research?
It starts from a theory, which is used to look at reality and to confirm something. It begins with a hypothesis, formal instruments are used, and it looks for confirmation/rejection.
100
What do phenomenological studies do? What does it hold?
They aim for contextual descriptions and analyses of phenomena. Phenomenology holds that any attempt to understand social reality has to be grounded in people’s experiences of that social reality. It emphasises inductive logic and relies on qualitative analyses of data. It is not so much concerned with generalisations to larger populations.
101
What two types of validity belong to criterion validity?
Concurrent validity and predictive validity
102
What is construct validity?
Construct validity (including structural validity, hypotheses testing and cross-cultural validity) is the degree to which the scores of an HR-PRO instrument are an adequate reflection of the dimensionality of the construct to be measured.
103
What are horizontal and vertical analysis in axial coding?
Horizontal analysis: focused on aggregation and comparison of the content of data across different interviews (or other transcripts); pay attention to diversity (both majority and minority views count). Vertical analysis: focused on understanding the essence of individual interviews: ‘the narrative’ (or other data/transcripts); you need the unique line of argument of individuals and their priorities.
104
What do outcome evaluations assess? What provides the strongest evidence?
Outcome evaluations assess the effectiveness of an intervention/program in producing change. Here, RCTs provide the strongest evidence.
105
What is a conceptual framework?
A conceptual framework is the way ideas are organised to achieve a research project’s purpose. Part of problem solving (answering a question) is the ability to reframe a complex and chaotic reality in a model/framework that clarifies the mechanisms, and thus points to the solutions.
106
What is an intention-to-treat analysis?
In a randomised experiment, the effects in the intervention group are compared with the effects in the control group. Analysing all participants according to the intervention to which they were assigned is called an intention-to-treat analysis. Thus, even participants who adhered poorly to the intervention (for example, did not take the medication), participants who stopped the intervention (for example, because they could no longer tolerate the side effects), and even participants who switched to the other group (for example, because they thought they would benefit more from the other intervention in the study) contribute to the average effect of the group to which they were assigned by randomisation.
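The logic of grouping by assigned arm can be sketched in a few lines. All data, arm labels, and the `mean_outcome` helper below are made-up illustrations, not part of the card:

```python
# Sketch of intention-to-treat (ITT) analysis: participants are analysed
# by the arm they were ASSIGNED to, regardless of adherence or switching.
# All numbers below are hypothetical.

participants = [
    # (assigned arm, arm actually received, outcome score)
    ("intervention", "intervention", 8.0),
    ("intervention", "intervention", 7.5),
    ("intervention", "control",      5.0),   # switched arms, still counted as intervention
    ("intervention", "intervention", 6.5),   # poor adherence, still counted
    ("control",      "control",      5.5),
    ("control",      "control",      6.0),
    ("control",      "intervention", 7.0),   # switched arms, still counted as control
    ("control",      "control",      5.0),
]

def mean_outcome(rows, arm, by_assigned=True):
    """Average outcome for one arm, grouped by assigned (ITT) or received arm."""
    idx = 0 if by_assigned else 1
    scores = [r[2] for r in rows if r[idx] == arm]
    return sum(scores) / len(scores)

# ITT effect: compare the groups exactly as randomised.
itt_effect = mean_outcome(participants, "intervention") - mean_outcome(participants, "control")
print(round(itt_effect, 3))  # → 0.875
```

Passing `by_assigned=False` would instead group participants by the arm they actually received, which is the contrasting (as-treated) grouping that ITT deliberately avoids.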
107
Experimental designs: what are they for and what is done?
Experimental designs are used to determine causality. An independent variable is manipulated to determine the effect on the dependent variable.
108
Name three approaches that coding of qualitative data can take, and give the term that belongs to each
- Deductive (through content analysis, which is guided by theory/hypothesis testing); - inductive (through grounded theory, which is searching for theory); - via an integrated approach (through thematic content analysis, in which you use theory to set up experiments, but you don’t analyse your data by it).
109
What steps are followed in research agendas?
1. Defining the scope; 2. inventory of stakeholders (healthcare providers, patients and societal partners); 3. summary of the evidence; 4. priority setting; 5. consensus procedures.
110
Name three critiques of qualitative research
- It lacks methodological rigour; - it is prone to researcher subjectivity; - samples are often small, and thus the evidence is limited.
111
What is formative evaluation?
Formative evaluation enables reflection and assists in the identification of future change. It focuses on setting change objectives.
112
What is the internal objective in a research objective?
The way in which it will be done/the insights, information, and knowledge that are needed
113
Name a deductive and inductive pitfall in axial coding
Pitfall in deductive research: looking too hard for evidence of a relation. Pitfall in inductive research: accepting relations that are too vague as the truth.
114
Name three things a concept should be
Concepts are: - observable; - distinguishable (from other concepts); - variable (otherwise there is no reason to research them).
115
What is Cronbach's Alpha used for and what does its value mean?
Cronbach’s alpha is used to assess the interrelatedness of items per factor. Its value lies between 0 and 1; the closer to 1, the better.
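As a sketch of how this statistic is computed, using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores); the respondent scores below are made up for illustration:

```python
# Cronbach's alpha for a questionnaire: rows = respondents, columns = items.
# Hypothetical data: 5 respondents answering 4 items on a 1-5 scale.

def cronbach_alpha(rows):
    k = len(rows[0])  # number of items
    def variance(xs):  # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])  # variance of respondents' total scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

scores = [
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # → 0.95, i.e. the items are highly interrelated
```

High interrelatedness (alpha close to 1) arises here because respondents who score high on one item tend to score high on the others.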
116
Name 4 research methodologies
Experimental designs, analytical surveys, phenomenological surveys, participatory action research
117
What two forms of bias can occur during follow-up?
Exclusion/attrition bias and detection bias
118
What does efficiency evaluation do?
Efficiency evaluation examines whether an input leads to a certain output. The output could be, for example, that certain people are reached for a treatment protocol.
119
Name four things a conceptual framework should be
Conceptual frameworks should be: - considered as a process rather than a thing; - evaluated and revised continuously; - integrated throughout the study; - knowledge-based.
120
what is meant by the implementation gap (or knowledge-to-practice gap)?
Implementation gap (or knowledge-to-practice gap): as a researcher you produce many articles, but most of the generated knowledge does not lead to change in practice. That is partly because there is a lack of communication.
121
What is inductive research?
Inductive research uses a lot of data to construct a theory from emerging patterns. A thick description is needed, and it ends with hypotheses and grounded theory. In most cases, deductive and inductive research are both used.
122
What three aspects need to be critically appraised for content validity?
Critically to be appraised are: - the comprehensibility of items and response options; - the relevance of items and response options; - comprehensiveness.
123
What do frameworks help us with? Name 5 things.
Frameworks help us to: - understand the main research question/objective; - formulate empirical sub-questions; - select and develop tools that guide our data collection; - analyse our results; - make our research understandable to others (as it is embedded in current knowledge and allows generalisation).
124
What is PAR supposed to achieve? Name 3 things
- Minimise power differences between researchers and constituents; - increase the knowledge of participants; - promote social change.
125
How should a research objective be formed (textually)?
The research objective is *external objective* by *internal objective*
126
What is performance bias?
This concerns the extent to which the control and intervention conditions are really delivered according to plan.
127
Name two criteria for good precision in RCTs
- The sample size of the study population is large enough in both groups; - outcome measures are presented with confidence intervals.
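A minimal sketch of what "presented with confidence intervals" means in practice: a 95% CI for a mean outcome under the normal approximation (mean ± 1.96 · SE). The outcome data below are invented for illustration:

```python
# 95% confidence interval for a group's mean outcome, normal approximation.
# Hypothetical outcome scores for one trial arm.
import math

def ci95(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                           # standard error of the mean
    return (mean - 1.96 * se, mean + 1.96 * se)

outcomes = [6.1, 5.8, 7.0, 6.4, 5.9, 6.6, 6.2, 6.8]
low, high = ci95(outcomes)
print(round(low, 2), round(high, 2))  # → 6.05 6.65
```

The link to the first criterion is visible in the formula: a larger sample size shrinks the standard error, so the interval narrows and precision improves.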
128
What is the external objective in a research objective?
The contribution of your research project to the solution of a problem/what results can be expected
129
What are two forms of experimental designs?
RCTs and quasi-experiments
130
What two things does an adequate randomisation procedure include?
- Random sequence generation (assignment to treatment arms based on chance); - allocation sequence concealment (preventing participants or trial personnel from knowing the forthcoming allocations).
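Random sequence generation can be sketched as follows. The block size, arm labels, and seed below are illustrative assumptions (block randomisation is one common way to generate the sequence, not something the card specifies):

```python
# Sketch of random sequence generation for a two-arm trial, using block
# randomisation (blocks of 4) so group sizes stay balanced over time.
import random

def block_randomise(n_participants, block_size=4,
                    arms=("intervention", "control"), seed=42):
    rng = random.Random(seed)  # fixed seed for a reproducible sequence
    # Each block contains equal numbers of both arms...
    block = [arms[0]] * (block_size // 2) + [arms[1]] * (block_size // 2)
    sequence = []
    while len(sequence) < n_participants:
        rng.shuffle(block)     # ...in a random order within the block
        sequence.extend(block)
    return sequence[:n_participants]

sequence = block_randomise(10)
print(sequence)
```

In practice the generated list would be handled by someone not enrolling participants (for example via sealed envelopes or a central service), which is exactly the allocation concealment requirement in the second bullet.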
131
what kind of bias can occur during reporting?
Selective reporting bias
132
What is external validity in terms of quality of an RCT?
The extent to which the results of the trial can be generalised to other situations, target populations, settings, etc.