Performance measurement Flashcards

1
Q

Why do we need RR? (Background)

A

Increasing and ageing population = increasing demands on imaging service
Ongoing shortage of radiologists; large number set to retire in next ~5 years, and many retiring early due to pressures of work
Decrease in reporting capacity (more to report/fewer reporters)
Must meet reporting targets
Outsourcing?
Reporting radiographers – role expansion
RRs – beware of diagnostic shortfalls – errors – litigation
Mitigation – audit, performance measurement

2
Q

What are the standards of practice for RR image interpretation? (Background)

A

Professional and regulatory body
Reporting standards - “gold standard”
Impact on patient management
Rigorous audit processes/regular CPD – maintenance and improvement
But image reporting is subjective – impact on diagnostic accuracy
Need to understand individual perception and analysis of medical images, as errors will impact on clinical reasoning and decisions made
Must also take into consideration factors affecting the image reporting process, as these also have an impact on reporting standards

3
Q

Standards of Practice: what policies and governing guidance surround RR?

A

RCR Standards for Interpretation and Reporting of Imaging Investigations 2018
Aimed at radiologists and other reporters; defines the standards and best practice that patients should expect and emphasises the importance of actionable reporting, teamworking, close communication, peer feedback and learning and system improvement.
RCR & COR Standards for the Education and Training of Reporting Practitioners in Musculoskeletal Plain Radiographs 2022
Defines the education and training required for all members of the multi-professional team who report MSK plain radiographs. This includes the learning outcomes to be achieved, minimum requirements for assessment, and recommendations on how education programmes for MSK plain film reporters should be designed and structured.
RCR & COR Standards for the education, training and preceptorship of reporting practitioners in adult chest X-ray 2023
Defines the education and training required for all members of the multi-professional team who report CXRs within a clinical imaging service
SCOR Preliminary Clinical Evaluation and Clinical Reporting by Radiographers: Policy and Practice Guidance 2013
Sets out the role of radiographers in relation to clinical reporting and initial image interpretation.
NHS Scotland Scottish Clinical Imaging Network (SCIN) National Framework for the Musculoskeletal Reporting Radiographer 2017
Provides guidance on the standard of training, education and auditing required by RRs to maximise their role and produce accurate imaging reports across NHS trusts in Scotland.
HCPC SOPs 2023
Regulatory body standards for professional registration and maintaining that position. Requirement for all radiographers with some areas specifically tailored to RR, e.g. scope of practice, etc.
IRR 2017/IR(ME)R 2017
Specific requirements of the regulations relating to the role of RR, e.g. suggesting advising/requesting further imaging, sound understanding of the risk and benefit of radiation doses, etc.

4
Q

What are the sources of bias/causes of error in image interpretation?

A

Internal (human)
External

There’s one more, unofficial, source:
clinical history can introduce bias – discussed under SOS

5
Q

What are the internal sources of bias/causes of error in image interpretation?

A

Because they stem from human error, image reporting errors are unlikely to be removed completely; however, they could be minimised with greater understanding

Search error – the reporter fails to see an abnormality that is visible on the image
Recognition error – the abnormality is looked at (fixated) but not recognised as abnormal
Decision error – fixation occurs for a longer period, but the abnormality is not reported, possibly because its significance is not realised due to a lack of knowledge

Satisfaction of search (SOS)
This is a type of decision error, and it occurs when the reporter stops examining the image after one abnormality is found, potentially missing others
A full clinical history is important for accurate image reporting, but it can be a source of bias, leading the reporter to focus on one area only

Fatigue and distraction
Workload fatigue can cause errors, which become more frequent as reporting radiographers’ workdays lengthen, with integrated teaching and administrative tasks acting as distractions
There is a national shortage of reporting staff, which increases pressure: the number of images to interpret rises while the time to review each case falls, negatively impacting diagnostic accuracy

6
Q

What are the external sources of bias/causes of error in image interpretation?

A

Suboptimal images – poor technique/image exposure leads to uncertainty and ambiguity, preventing a definitive report. May be due to patient condition, but important to work with team so they understand what is required, and why.

Reporting room equipment, e.g. monitors/displays – must meet industry standards for image reporting, or abnormalities could be missed. Also consider room itself, e.g. lighting, quiet space, etc.

Mobile devices, e.g. tablets – used more widely, but they have limitations, e.g. touch screens can get covered with fingerprints, poorly selected display settings can compromise viewing, etc.

Department culture – need for supportive team approach, e.g. if an error occurs, it can have a negative impact (‘blame culture’), so it’s important to discuss this with the whole team, so that lessons can be learned – error meetings

Support structure in place - radiologist to support clinical decision making; colleagues to support dedicated and uninterrupted reporting time

7
Q

How can image reporting performance be measured?

A

Performance measurement of RRs essential to ensure quality in clinical practice

Audit/clinical governance
Can be measured in terms of:

Diagnostic accuracy

Intra-rater/intra-observer reliability

Inter-rater agreement/inter-observer reliability

8
Q

What is diagnostic accuracy and how is it measured?

  • how is its subjectivity compensated for?
A

Sensitivity/specificity, etc.
Figures produced are subjective, and therefore sensitive to bias
Compensate by measuring against pre-defined gold standard
Plot a receiver operating characteristic (ROC) curve – the detection of abnormal cases against a backdrop of normal cases: true positive rate (sensitivity) plotted against false positive rate (1 − specificity) – the area under the curve gives diagnostic accuracy
FROC – free-response receiver operating characteristic – used when multiple pathologies may be present on an image
AFROC – alternative free-response ROC (addresses the statistical shortcomings of FROC; most useful for measuring reporting performance on images with multiple pathologies) – plots lesion localisation rate against FP rate

The ROC curve shows diagnostic accuracy: the larger the area under the curve, the higher the diagnostic accuracy (higher sensitivity at a given false positive rate)
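As an illustration, the measures above can be sketched in Python. This is a minimal sketch: the gold-standard labels, reporter confidence scores, and the 0.5 decision threshold below are all invented for the example.

```python
# Minimal sketch (invented data): sensitivity, specificity and ROC AUC
# computed from scratch for a set of hypothetical reports.

def sensitivity_specificity(truth, calls):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and c == 1 for t, c in zip(truth, calls))
    tn = sum(t == 0 and c == 0 for t, c in zip(truth, calls))
    fp = sum(t == 0 and c == 1 for t, c in zip(truth, calls))
    fn = sum(t == 1 and c == 0 for t, c in zip(truth, calls))
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(truth, scores):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen abnormal case scores higher than a normal one
    (equivalent to the area under the TPR-vs-FPR curve)."""
    pos = [s for t, s in zip(truth, scores) if t == 1]
    neg = [s for t, s in zip(truth, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical gold standard (1 = abnormal) and reporter confidence scores.
truth  = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
calls  = [1 if s >= 0.5 else 0 for s in scores]  # binary report at 0.5 cut-off

sens, spec = sensitivity_specificity(truth, calls)
print(sens, spec, roc_auc(truth, scores))  # 0.75 0.75 0.9375
```

A larger AUC here directly reflects the point above: the reporter’s scores rank abnormal cases above normal ones more reliably.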

9
Q

What is intra-rater/intra-observer reliability?

A

Measured by comparing accuracy statistics before and after a set time interval
Allows RR to determine whether their own decision making varies between sessions
Can show inconsistencies and highlight areas for development
Works well where there is a definitive ‘answer’ e.g. MSK reporting – is there a fracture, or isn’t there?
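As a minimal illustration of the above, intra-rater reliability can be expressed as simple percent agreement between a reporter’s two sessions on the same image set; the session data below are invented.

```python
# Minimal sketch (invented data): intra-rater reliability as percent
# agreement between two reporting sessions on the same image set.

session_1 = [1, 0, 1, 1, 0, 0, 1, 0]  # first read: fracture yes/no
session_2 = [1, 0, 1, 0, 0, 0, 1, 1]  # re-read after a set interval

agreement = sum(a == b for a, b in zip(session_1, session_2)) / len(session_1)
print(f"{agreement:.0%}")  # 75%
```

The cases where the two sessions disagree are the ones that highlight inconsistencies and areas for development.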

10
Q

What is inter-rater agreement/inter-observer reliability?

A

RRs can test for agreement between their reports and those of a colleague/group of peers
Good QA for the department – consistency of agreement gives a sense of accuracy; however, chance agreement must be considered for validity
Cohen’s Kappa was introduced to account for chance agreement – the calculation is based on the difference between how much agreement is observed and how much is expected by chance alone. However, it doesn’t work well in ambiguous situations, and it doesn’t consider the difficulty/complexity of the report, e.g. a simple fracture v subtle signs of early cancer
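The kappa calculation described above can be sketched in Python; the two raters’ reports below are invented for illustration.

```python
# Minimal sketch (invented data): Cohen's kappa for agreement between
# two reporters on hypothetical binary reports (1 = fracture, 0 = none).

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o minus
    the agreement p_e expected by chance alone, rescaled so 1 = perfect."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal rate per category.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Ten hypothetical cases reported by an RR and a radiologist.
rr          = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
radiologist = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(rr, radiologist), 2))  # 0.8
```

Note that raw agreement here is 9/10 (90%), but kappa is lower (0.8) because some of that agreement would be expected by chance alone – which is exactly the correction kappa provides.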
