Lecture 12b, Motor Learning, Performance & Retention (2) Flashcards
Quantifying Performance through Performance Curves
motor skill acquisition is captured by measuring performance across practice
- trials, blocks, or time on the x-axis
- some measure of performance on the y-axis
allows conclusions about temporary changes in performance on some outcome measure
textbook: performance curves are not learning curves; they do not necessarily indicate much about progress in the relatively permanent capability for performance, but rather plot performance over practice trials
What happens to performance with practice?
“(power) law of practice”: across practice, improvements are rapid at first & much slower later
- the curve becomes more level (plateaus) over time (see the sketch below)
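A minimal sketch (not from the lecture) of what the power law implies, assuming the classic form T = a·N^(-b), where T is time (or error) per trial, N is the trial number, and a and b are invented constants:

```python
import numpy as np

# Power law of practice: time per trial T = a * N**(-b).
# The values of a and b here are assumptions for illustration.
a, b = 10.0, 0.5

trials = np.arange(1, 101)
time_per_trial = a * trials ** (-b)

# Improvements are rapid at first and much slower later:
print(time_per_trial[0] - time_per_trial[1])    # gain from trial 1 to 2 (~2.9)
print(time_per_trial[98] - time_per_trial[99])  # gain from trial 99 to 100 (~0.005)
```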
Negatively accelerated performance curve
rapid initial improvements followed by decreasing gains
- change slows down over time (rapid improvement, then the curve flattens out)
- graphs can be described as negatively or positively accelerated
- this is the most common shape when charting average performance over time
- it is called negatively accelerated because the rate of change slows over time
- for example: learning to ski
Positively accelerated performance curve
little improvement at first, then rapid change (then leveling off)
(e.g., sailboarding)
- the rate of change increases over time (up to a certain point)
- typical of skills with a strong balance component
- there are a lot of failures at first; then you “get it” and there is a rapid increase (learning seems to take off); a sketch contrasting the two curve shapes follows
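A rough sketch contrasting the two curve shapes; the functional forms (exponential approach and logistic) are invented for illustration, not taken from the lecture:

```python
import numpy as np

trials = np.arange(1, 51)

# Negatively accelerated: big early gains that flatten out (e.g., skiing)
neg_accel = 100 * (1 - np.exp(-trials / 10))

# Positively accelerated: little change at first, then rapid gains
# (e.g., sailboarding); shown before its eventual leveling off
pos_accel = 100 / (1 + np.exp(-(trials - 30) / 5))

# Trial-to-trial gains shrink for one curve and grow for the other:
print(np.diff(neg_accel)[:3])  # large early gains that keep shrinking
print(np.diff(pos_accel)[:3])  # tiny early gains that keep growing
```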
What Four Factors Can Influence the Shape of a Performance Curve?
1) between-subject variability
2) within-subject variability
3) scoring sensitivity (the metric used can change how performance is judged)
4) ceiling and floor effects (obscure changes in performance & learning)
1) Between-Subject Variability: Individual performance curves can be very different
group averages may hide individual differences in performance
- the group curve is a mean across a number of participants, but it may not represent any single participant, since an average does not describe individual differences (see the sketch below)
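A minimal simulation (all values invented) of how a group mean can fail to describe any individual learner:

```python
import numpy as np

trials = np.arange(1, 21)

# Three hypothetical learners with very different trajectories:
fast   = 100 * (1 - np.exp(-trials / 3))    # learns quickly, plateaus early
slow   = 100 * (1 - np.exp(-trials / 15))   # gradual, steady improvement
sudden = np.where(trials < 12, 10.0, 90.0)  # abrupt jump at trial 12

group_mean = np.mean([fast, slow, sudden], axis=0)

# The mean is a smooth curve, but it matches none of the individuals:
print(group_mean.round(1))
print(sudden)  # e.g., the 'sudden' learner looks nothing like the mean
```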
2) Within-Subject Variability: individual’s performance varies across trials
average performance across a block will hide trial-to-trial variation in performance
- an individual will show variability across trials, so their trajectory will not be smooth; but if you take a mean across a block of trials, it can look like a smooth decrease in error over time
- if every single trial is plotted, the data may look noisy
- typically, performance is averaged across individuals and across trials (see the sketch below)
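A minimal simulation (parameters invented) of how block averaging hides trial-to-trial variability:

```python
import numpy as np

rng = np.random.default_rng(0)

# One hypothetical learner: true error decays smoothly,
# but every trial adds noise
trials = np.arange(1, 61)
true_error = 20 * np.exp(-trials / 20)
observed = true_error + rng.normal(0, 3, size=trials.size)

# Raw trial scores look jagged; means over blocks of 10 trials look smooth
blocks = observed.reshape(6, 10).mean(axis=1)
print(observed[:10].round(1))  # noisy trial-by-trial scores
print(blocks.round(1))         # smooth-looking decline across blocks
```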
3) Scoring Sensitivity: How things are measured affects the shape of the performance curve
- measures need to be sensitive to learning or improvement
rate of progress shown by a curve may be artificial and/or arbitrary…
example: a tracking task where ‘time-on-target’ is measured
in balance tasks, “time on target” feedback is often provided; when performance is noisy or more variable, the bandwidth can be more lenient, and as the learner gets better, the measurement can be made tighter and more precise
Tracking Task
What is measured impacts conclusions about acquisition: the shape of the curve changes, as do apparent performance “amounts”
3 different curves based on the same data set
how you measure performance affects conclusions about performance & shape of curve
- with a 5% error bandwidth, over 10 trials, performance is not ‘seen’ as very good
◦ sort of positively accelerated
- in contrast, with a 15% bandwidth (same task!), performance now looks good
◦ accurate at the beginning as well, most of the time
- at 30%, very good, but almost no improvement
◦ looks like there was no improvement over time, but that is because the measure was not sensitive enough to detect improvement across trials (participants were already performing at the top of the scale); see the simulation sketch after the textbook note
textbook: (1) the learning gains were rapid at first and slower later (30% curve); (2) the learning gains were linear across practice (15% curve); and (3) the learning gains were slow at first and more rapid later (5% curve)
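A minimal simulation (all numbers invented) of the bandwidth effect described above: the same underlying improvement is scored as “time on target” with 5%, 15%, and 30% error bandwidths, and each bandwidth produces a differently shaped curve:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tracking task: per-sample error (in % of the target range)
# shrinks with practice; all parameters are assumptions for illustration.
n_trials, samples_per_trial = 10, 500
mean_err = np.linspace(20, 6, n_trials)  # true error drops from 20% to 6%

for bandwidth in (5, 15, 30):
    time_on_target = []
    for m in mean_err:
        err = np.abs(rng.normal(0, m, samples_per_trial))
        time_on_target.append(100 * np.mean(err <= bandwidth))
    # 5%: low scores with slow-then-rapid gains; 15%: roughly linear gains;
    # 30%: high scores from the start with almost no visible improvement
    print(bandwidth, np.round(time_on_target, 0))
```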
4) Ceiling and Floor Effects: Obscure change in performance due to scale limits
most tasks have an absolute score that no one can exceed (e.g., zero errors, 100% time on target)
- limitations at top of the scale = ceiling effects
- limitations at bottom of the scale = floor effects
as a person approaches the ceiling/floor, the performance measure may become increasingly insensitive to internal changes in learning (see the sketch after the textbook notes)
textbook:
- when the performance maximum is reached, this is called a ceiling effect because a higher performance score is not possible
- if tracking error had been measured in this study (say, using root-mean-square error, or RMSE), then zero error would be the minimum score possible and would be called a floor effect, because a lower performance score than this is not possible
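A minimal illustration (values invented) of a ceiling effect: the learner’s “true” capability keeps improving, but the measured score cannot exceed 100% time on target, so late gains become invisible:

```python
import numpy as np

# Hypothetical 'true' capability that keeps growing past the scale limit
true_capability = np.array([70.0, 85.0, 95.0, 105.0, 115.0, 125.0])
measured = np.clip(true_capability, 0, 100)  # the scale caps scores at 100%

print(measured)           # [ 70.  85.  95. 100. 100. 100.]
print(np.diff(measured))  # apparent gains vanish once the ceiling is hit,
                          # even though learning may still be occurring
```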