My Questions Flashcards
1) Why did you not mention supervised learning in your thesis? What is supervised learning?
Supervised learning was used - the therapists worked with me and the patients to train the machine learning models by selecting examples of individualised movements. This data was then tied to specific observations using thresholds, so therapists could control when feedback was perceived based on joint positions (see the sketch below).
More examples of target or compensatory movements were often required, adding training examples to achieve greater accuracy linked to the therapist's clinical reasoning.
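A minimal sketch of the threshold idea: hypothetical joint angles and threshold values (all invented here, not the actual Sonic Sleeve parameters) determine whether a frame counts as compensatory and would trigger feedback.

```python
# Illustrative sketch only: hypothetical joint angles and thresholds,
# not the actual Sonic Sleeve model outputs or clinical values.

# Per-frame joint positions/angles (e.g. from the tracking pipeline), in degrees.
frame = {
    "trunk_flexion": 18.0,
    "shoulder_abduction": 42.0,
    "shoulder_elevation": 7.0,
}

# Therapist-set thresholds above which the movement counts as compensatory.
thresholds = {
    "trunk_flexion": 15.0,
    "shoulder_abduction": 35.0,
    "shoulder_elevation": 10.0,
}

def compensations(frame, thresholds):
    """Return the compensation types whose threshold is exceeded this frame."""
    return [name for name, angle in frame.items() if angle > thresholds[name]]

active = compensations(frame, thresholds)
if active:
    # In the real system this would modulate the auditory feedback;
    # here we just report which compensations would trigger it.
    print("Feedback triggered by:", ", ".join(active))
```

In the study the thresholds were therapist-set and individualised per patient, which is what tied the feedback to clinical reasoning.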
2) Did you do a quantitative analysis of your machine learning models (for example, KNN vs NN)?
I was led by participant behaviour rather than technical evaluation. No specific quantitative tests were run to compare across the models (other than the Appendix 3-1 system trials, with 13 trials documented and rated subjectively in real time).
Linear regression models were much quicker to train and mapped onto the movement trajectories in a smoother, more reliable way than the NN. The NN took longer to train (over 2 minutes as opposed to around 10 seconds) and appeared noisier, with jumps in the mappings making the feedback more erratic for patients.
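If I were to run an offline comparison now, it might look something like the sketch below - scikit-learn models on synthetic data stand in for the Wekinator regression and neural-network mappings, with training time and frame-to-frame jitter as rough proxies for the behaviour observed in the sessions.

```python
# Sketch only: synthetic data and scikit-learn stand-ins for the Wekinator
# regression and neural-network models, used to illustrate the comparison.
import time
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                                  # e.g. joint positions per frame
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=2000)   # target sound-parameter mapping

for name, model in [("linear", LinearRegression()),
                    ("nn", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000))]:
    t0 = time.perf_counter()
    model.fit(X, y)
    train_s = time.perf_counter() - t0
    pred = model.predict(X)
    # Frame-to-frame jitter as a crude proxy for how "jumpy" the mapping feels.
    jitter = float(np.mean(np.abs(np.diff(pred))))
    print(f"{name}: train {train_s:.2f}s, mean frame-to-frame change {jitter:.3f}")
```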
3) What were the key methodologies you employed in your research?
Mixed methods were used with a broad interdisciplinary approach - feedback from clinicians, motor neuroscientists and human-computer interaction experts.
Data collection and analysis combined qualitative methods (based on feedback and questionnaires) with quantitative data collection.
Qualitative = Braun and Clarke (2006) thematic analysis of 10 hours of recordings - a "flexible analytic approach".
Quantitative = Wekinator: duration of time in compensation and the models to be trained (a sketch of the time-in-compensation measure follows this answer).
This flexibility was important for both the qualitative and quantitative approaches, and for the technical aspects of system testing and design in the workshops and case studies. A rapid design methodology permitted changes to take place directly in the workshops - a co-design approach, with therapists' and patients' feedback shaping the system design.
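A minimal sketch of how the time-in-compensation measure could be computed from per-frame classifications; the frame rate and flags here are invented for illustration.

```python
# Sketch: proportion of session time spent in compensation, computed from
# per-frame flags. The frame rate and flags below are invented for illustration.
FRAME_RATE_HZ = 30  # assumed capture rate

# 1 = compensatory movement detected on that frame, 0 = target-quality movement.
frames = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0]

time_in_compensation = sum(frames) / FRAME_RATE_HZ   # seconds
proportion = sum(frames) / len(frames)                # 0..1

print(f"Time in compensation: {time_in_compensation:.2f} s "
      f"({proportion:.0%} of the block)")
```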
4) What would you do differently if you did it again?
I would keep a more robust design history file - specifically based on my current work at a medical start-up, where the quality management system is at the heart of the company. All system changes and design tests should be verified and validated in a transparent way; I could have monitored all system changes more clearly.
I would administer a standardised PROM questionnaire after each block of 50 reps to understand the patient's perception of exertion - perhaps using the Borg scale - along with the System Usability Scale and maybe a technology acceptance measure. These could have been administered to therapists and patients if the system had a GUI that therapists could use to do the training themselves.
I would have used a motion capture system in the lab setting to compare the Sonic Sleeve outputs against, capturing more fine-grained movement parameters.
I would also aim to run mobile phone tracking alongside the motion capture to compare across the two systems. Could the feedback be accurate enough using mobile tracking?
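A sketch of how agreement between the Sonic Sleeve (or phone tracking) and lab motion capture could be quantified - the angle series below are synthetic and assumed to be time-aligned at the same sample rate.

```python
# Sketch: agreement between two angle time series (e.g. Sonic Sleeve vs motion
# capture, or phone tracking vs motion capture). Data is synthetic and the
# streams are assumed to be time-aligned and resampled to the same rate.
import numpy as np

t = np.linspace(0, 10, 300)
mocap = 30 + 10 * np.sin(t)                       # "ground truth" trunk flexion angle
sleeve = mocap + np.random.default_rng(1).normal(scale=2.0, size=t.size)

rmse = float(np.sqrt(np.mean((sleeve - mocap) ** 2)))
r = float(np.corrcoef(sleeve, mocap)[0, 1])

print(f"RMSE: {rmse:.2f} degrees, Pearson r: {r:.3f}")
```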
5) What is your main research question?
Can real-time auditory feedback help a patient with stroke achieve higher quality movements in their rehabilitation?
6) What is your unique contribution in this research area?
No other research has tracked upper limb rehab with self-selected music to guide the quality of movement across three types of compensation: trunk flexion, shoulder abduction and shoulder elevation. This combination provides a template for future research to build on.
7) What was your key finding?
The majority of patients can make use of real-time auditory feedback and reduce the proportion of time spent in poor-quality movement by around 20%.
8) Why should we care about your research?
The cost of healthcare is prohibitive and therapists are in short supply. Most rehab takes place in the home, so systems such as the Sonic Sleeve could help plug the gap and provide an enhanced ability to track patient progress - reps, duration, quality of movement and adherence to prescriptions.
9) Who is the key audience for your research?
Other researchers in rehabilitation technology, psychology and movement neuroscience. Engineers may find the results of interest in designing and testing new systems or informing iterations on existing platforms.
10) Do you think the supervised nature of the training helped make the intervention more clinically relevant?
The models were highly individualised, so the feedback was targeted directly to the individual abilities of each patient. This mimics a 1:1 session with a therapist: the data is well aligned with the individual, so the feedback is likely to be more relevant to them. However, scaling up is hard, as this requires training for every patient.
11) Do you think that unsupervised modes could work? What may be required for this approach to be successful and what may be the benefits?
If a larger corpus of data was collected and labelled appropriately, then a new patient should in principle be able to do a short calibration and have template models assigned to them - for example, three levels: severe, moderate or mild impairment (a sketch of this idea follows below).
The key benefit is that the application could scale up far more easily and be deployed to support patients without a laborious training process - templates would save therapist time, and if the models were constantly adapting to the patients, with therapists able to set thresholds only if needed, this could be a powerful rehabilitation aid.
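One possible shape for the template idea - a nearest-centroid assignment from a short calibration; the feature values and impairment-level centroids below are entirely illustrative.

```python
# Sketch: assigning a pre-trained template model from a short calibration.
# Centroid values and calibration features are illustrative only.
import numpy as np

# Hypothetical per-level centroids of calibration features
# (e.g. active range of motion, movement smoothness, peak speed).
templates = {
    "severe":   np.array([20.0, 0.3, 0.2]),
    "moderate": np.array([45.0, 0.6, 0.5]),
    "mild":     np.array([75.0, 0.8, 0.8]),
}

def assign_template(calibration_features):
    """Pick the impairment-level template closest to the calibration features."""
    return min(templates, key=lambda level: np.linalg.norm(
        templates[level] - calibration_features))

new_patient = np.array([40.0, 0.55, 0.45])   # features from a short calibration
print("Assigned template:", assign_template(new_patient))
```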
12) What was your primary motivation for the research?
I have been working to use music and new technologies for the benefit of those in need. My own grandmother had multiple mini-strokes so I have been invested directly in this population for many years.
13) What were the key challenges in undertaking the research? And where in your thesis do you describe these?
Working directly on a stroke unit that was incredibly busy was a challenge - navigating tight timetabling and a therapist keen to do core OT or PT as opposed to feasibility research was something that required persistent negotiation.
The feasibility research mapping multiple levels of sound was also a key challenge. Building models on the fly was challenging but also provided a powerful tool for real-time co-design, permitting therapist and patient perspectives to be taken into account (page 50).
Technical challenges included patients at home struggling to connect the rehabilitation kits to WiFi or to plug in cables. Practical, supposedly simple tasks were actually blockers to the research.
14) What is the key piece of research in literature that you are building on?
Thielman 2010 - used a sensor on the back of a chair to signal trunk flexion; I use camera tracking instead. It did not use music - the feedback was alarm based - so it is hard to know how motivating this was for patients, or whether they were irritated by the noises. The Reaching Performance Scale (RPS) was better with auditory feedback compared to constraints.
I extend this approach using a different technology: computer vision with machine learning across multiple modes of compensation.
Valdes 2018 and 2017 used visual and force feedback with significant reductions in trunk displacement. No auditory feedback was used in this research.
15) What were the key assumptions in your research?
Page 45: high dose is important; music can provide a motivational framework.
Based on prior research (Kirk 2016 - drum pads in the home).