Section 7 Neuropsychological Rehabilitation Flashcards
What is the purpose of assessing patient functioning?
The purpose of assessing patient functioning is for diagnosis, prognosis and evaluation.
Why is it less common to measure outcome systematically after treatment in clinical practice?
Because neuropsychologists often conduct extensive testing during pre-treatment assessment.
What are some important aspects to consider when choosing an outcome measure?
Psychometric properties (reliability, validity, responsiveness), assessment criteria for outcome measures, and the ICF framework.
What is the definition of rehabilitation according to the British Society of Rehabilitation Medicine (BSRM) and Royal College of Physicians (RCP)?
“Rehabilitation is defined as a process of active change by which a person who has become disabled acquires the knowledge and skills needed for optimal physical, psychological, and social function and the use of all means to minimize the impact of disabling conditions.”
Measuring the level of …. in society is considered important in rehabilitation contexts.
Participation
What is the purpose of knowing about bias in research?
To ensure the validity and scientific quality of a study and the robustness of the evidence to guide clinical practice.
What are some strategies to avoid or minimize bias in research methodologies?
Randomised controlled trials (RCTs), single-case designs, systematic reviews, and clinical practice guidelines.
What is bias in research methodology and what does it refer to according to Higgins and Altman (2008)?
According to Higgins and Altman (2008), bias is “a systematic error, or deviation from the truth, in results”, that is, an incorrect estimate of the association between exposure and the health outcome.
Bias occurs when an estimated association (odds ratio, difference in means, etc.) deviates from the true measure of association.
Systematic error is introduced into sampling or testing by selecting or encouraging one outcome or answer over others.
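This definition of bias can be illustrated with a small simulation. The sketch below is hypothetical (all variable names and numbers are invented for illustration): the outcome depends only on a confounder (age), not on the exposure, yet because exposure is more common among older people, the crude estimate of the exposure–outcome association deviates systematically from the true effect of zero.

```python
import random

random.seed(2)

# Hypothetical sketch: outcome depends only on a confounder (age),
# not on the exposure, yet exposure is more common among older people.
records = []
for _ in range(10_000):
    age = random.gauss(50, 10)
    exposed = random.random() < (0.8 if age > 50 else 0.2)
    outcome = age + random.gauss(0, 5)  # exposure has no true effect
    records.append((exposed, outcome))

def mean(xs):
    return sum(xs) / len(xs)

# Crude (unadjusted) estimate of the exposure-outcome association.
crude_diff = (mean([o for e, o in records if e])
              - mean([o for e, o in records if not e]))
# crude_diff is clearly positive even though the true effect is zero:
# a systematic deviation from the truth, i.e. bias
```

No amount of extra sampling removes this deviation; unlike random error, systematic error persists as the sample grows, which is why design features such as randomisation are needed.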
What are the five types of bias enumerated by Sackett (1979) regarding the interpretation and reporting of results in research?
(1) bias of rhetoric – use of arguments that are not based on evidence;
(2) all’s well literature bias – publishing studies that ignore or minimise conflicting results;
(3) one-sided reference bias – citing references that support only one side of the argument;
(4) positive result bias – the greater likelihood that investigators submit, and editors accept, studies reporting positive results; and
(5) hot stuff bias – when a topic is fashionable (‘hot’), investigators may be less critical in their approach to their research, and investigators and editors may be unable to resist the temptation to publish the results.
What is a critical appraisal (or methodological quality) instrument used for in research?
It is used to evaluate whether a report meets scientific standards and to identify biases in the planning and conduct of a study.
Examples of such instruments include the AMSTAR for systematic reviews, PEDro Scale for RCTs, and RoBiNT Scale for single-case research.
What are the four major classes of validity in research?
Internal validity; construct validity; statistical conclusion validity; and external validity.
What is internal validity in research?
Internal validity reflects the degree to which changes in the dependent variable are attributable to the effect of the independent variable rather than some other factor or confounder.
What is the goal of internal validity in research?
To ensure that changes in the dependent variable are solely the result of the intervention being studied.
What are some common threats to internal validity in research?
History, maturation, assignment, attrition, instrumentation, testing, regression, participant reactivity, and investigator-related expectancy effects.
What are history and maturation?
“History” refers to the influence of environmental factors (including historical events) that are not under the control of the investigator, such as changes in a participant’s personal circumstances.
“Maturation” refers to changes within a participant over time, such as spontaneous recovery or adjustment to disability.
What are assignment, attrition and instrumentation?
“Assignment” refers to the potential for important differences among participants in a study (for example, in age) that may be related to performance on the dependent variable.
“Attrition” refers to the loss of participants from a study sample, which can bias results if it is greater in one group than in another.
“Instrumentation” refers to the measurement tools used to assess the dependent variable. The reliability of these instruments must be established and taken into account.
What are testing, regression, participant reactivity, and investigator-related expectancy effects?
“Testing” refers to the potential for practice effects on tests or familiarity with testing procedures to impact performance.
“Regression” refers to the tendency for extreme scores to return to the mean on subsequent testing occasions, in the absence of real changes in function.
“Participant reactivity” refers to the ways in which participants may respond in a way that complies with what they think the investigator expects. This can result from self-report measures or from the participant’s perceptions of the investigator’s expectations.
“Investigator-related expectancy effects” refers to the influence of an investigator’s expectations on a participant’s outcome. This can include compensatory equalization of treatments, where an investigator gives additional attention or resources to one group to offset a perceived disadvantage.
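The “regression” threat described above can be illustrated with a small hypothetical simulation (all numbers are invented): each observed score is a stable true ability plus random measurement error, and no one’s function actually changes, yet the group selected for extreme (low) baseline scores appears to improve on retest.

```python
import random

random.seed(0)

# Hypothetical sketch: observed score = stable true ability + error.
def observed_score(true_ability, error_sd=10):
    return true_ability + random.gauss(0, error_sd)

abilities = [random.gauss(100, 15) for _ in range(10_000)]

# Select the group with the most extreme (lowest) baseline scores.
baseline = sorted((observed_score(a), a) for a in abilities)
lowest = baseline[:1000]

def mean(xs):
    return sum(xs) / len(xs)

mean_baseline = mean([score for score, _ in lowest])
# Retest the same people: no real change in function has occurred.
mean_retest = mean([observed_score(a) for _, a in lowest])
# mean_retest sits closer to the population mean (100) than
# mean_baseline: the extreme group "improves" with no real change
```

This is why uncontrolled pre–post designs that select participants on the basis of extreme baseline scores can mistake regression to the mean for a treatment effect.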
What is construct validity and what are two core features of it?
Construct validity refers to how well the variable being measured reflects the higher order construct the variable is meant to represent. Two core features of construct validity are clarity regarding the construct under investigation and how the construct is measured.
What are some common threats to construct validity?
Poorly defined constructs, construct under-representation, and treatment-sensitive factorial structure bias.
What is construct explication and why is it important?
Construct explication refers to the process of operationally defining the construct under investigation. It is important because, without a clear definition, it cannot be determined whether the measures used actually capture the intended construct.
What is construct confounding and what is an example of it?
Construct confounding refers to the extent to which constructs either overlap with or are independent of each other. In other words, two variables are so closely related that it is difficult to determine which one is responsible for the observed relationship.
An example of construct confounding would be assuming a general construct (e.g. memory) from a specific construct (e.g. prospective memory) that is being studied, leading to inaccurate conclusions.
What is mono-operation bias and what is an example of it?
Mono-operation bias refers to a single operation (i.e. a measure) that may under-represent a construct, only capturing a single facet of a complex multidimensional construct. An example of mono-operation bias would be measuring anxiety only by documenting the frequency of episodes of agitation.