During movement, a predictive signal carrying information about the consequences of an action is integrated with an afferent signal carrying the actual sensory information triggered by the movement. This helps overcome neural delays and navigate noisy environments. Without brain stimulation, there are probably two main ways to alter the balance of these two signals, the combination of which attenuates self-generated sensory information. One is to increase afferent uncertainty – i.e. make the environment uncertain, increasing the mover’s reliance on prior predictions, which are more robust when the current sensory feedback is unreliable. The other might be to increase the precision of the predictive signal – by which I mean the accuracy of the prediction made by the internal model – which would increase its relative weighting compared to the unchanged afferent signal.
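The balance described above can be sketched as standard inverse-variance (precision-weighted) combination of two Gaussian estimates – a textbook Bayesian cue-integration rule, with made-up numbers purely for illustration:

```python
def combine(pred_mean, pred_var, aff_mean, aff_var):
    """Precision-weighted combination of a predicted and an afferent
    estimate. Weight on each cue is its precision (1/variance) as a
    fraction of the total precision."""
    w_pred = (1 / pred_var) / (1 / pred_var + 1 / aff_var)
    combined = w_pred * pred_mean + (1 - w_pred) * aff_mean
    combined_var = 1 / (1 / pred_var + 1 / aff_var)
    return combined, combined_var, w_pred

# Baseline: prediction and afferent signal equally reliable
# -> each gets a weight of 0.5.
print(combine(2.0, 1.0, 3.0, 1.0))

# A noisier environment (larger afferent variance) shifts the
# weighting toward the prediction...
print(combine(2.0, 1.0, 3.0, 4.0))   # weight on prediction = 0.8

# ...and so does a more precise prediction (smaller predictive
# variance), the second route described above.
print(combine(2.0, 0.25, 3.0, 1.0))  # weight on prediction = 0.8
```

Either manipulation – degrading the afferent signal or sharpening the prediction – moves the weighting the same way, which is why both are candidate routes to altering sensory attenuation.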
It matters what the internal model predicts about the sensory consequences of movements, as this appears to affect sensory attenuation, which is a useful – probably crucial – part of moving that helps distinguish between self- and externally-generated stimuli and enhances salience of environmental perturbations. It also matters that an expected sensation is perceived to be self-generated, even if it technically isn’t. Kilteni and Ehrsson used the well-known rubber hand illusion to test the effect of body state estimation on sensory attenuation, finding that tactile input was attenuated when a rubber hand made contact with the participant’s hand, but only if it was perceived to be their own hand (illusory self-touch).
I had just been thinking about what exactly an internal model can make predictions about during a force-matching task, which is quite likely an unfamiliar task for participants. I’ve already come up with ideas for how to increase uncertainty during a force-matching task, but how might sensory attenuation be altered via improved precision of the predicted signal coming from the internal model? What would a precise forward model (the part of the internal model that makes the prediction) look like during the task? What information should be given to it? An internal model forms and morphs as a movement is learned, but one already exists before a force-matching task even begins, as shown by the evidence for sensory attenuation during a self-generated force-match.
This is where body state estimation plays a role. The brain needs to know where the body is in space at any moment, but it would take a lot of processing power to weigh up sensory evidence from the eyes, muscles and tendons, inner ear and tactile receptors every second. Hence, when we move, we can make a prediction about such sensory consequences of that movement based on years of evidence, and instead only respond to externally-generated sensations that might indicate a perturbation or novelty in the environment.
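A toy sketch of that last idea – not a claim about the actual neural computation – in which the predicted portion of a sensation is subtracted out and mainly the residual, externally generated component comes through. The `attenuation` parameter is invented for illustration:

```python
def perceived(actual, predicted, attenuation=0.7):
    """Toy model of sensory attenuation: cancel the portion of the
    incoming signal that the forward model predicted, so perception
    is dominated by the prediction error (the surprising part)."""
    return max(0.0, actual - attenuation * predicted)

# Self-generated touch: the forward model predicts the sensation
# well, so much of it is attenuated.
print(perceived(actual=1.0, predicted=1.0))

# External perturbation: nothing was predicted, so the full
# intensity comes through and stands out as salient.
print(perceived(actual=1.0, predicted=0.0))
```

On this picture, a more accurate prediction cancels more of the self-generated input, leaving perception free to flag perturbations and novelty.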
So during a force-matching task, the prediction is more a case of perceiving where the body is in space than of being experienced with force-matching itself and familiar with the tactile consequences of a moving lever.
This got me thinking about coaching movements, particularly in sports. I’m an advocate of creating an environment for the athlete to best learn to move effectively, rather than instructing them explicitly about every tiny limb movement or muscle contraction they should be executing during the movement. Setting a goal which demands good movement provides feedback and reward for the athlete to learn from, without overthinking and trying to consciously control small components of the movement that need to be automatic. However, it might also be useful to guide an athlete through a movement, or certain endpoints/checkpoints of that movement, to generate an accurate picture of what the sensory consequences of a successful movement should feel like. As in the force-matching task, healthy brains have a pretty good idea of what self-touch feels like, and so we attenuate some tactile input during a self-generated force-match when our two hands are close in proximity (an effect that disappears when our hands are separated by a distance of about 25 cm).
Likewise, it might be beneficial to build a picture of correct/effective body position during an exercise, such as the catch of an Olympic lift, to supplement the goal of the movement with sensory information about what it’s like when that goal is achieved. This could help form a forward model, though it’s probably important that such ‘endpoint sensations’ are achieved fairly actively, so that the action is coupled with its consequence as closely as possible. This may have a useful role early on in motor learning when teaching new movements, but also when components of a movement need fine-tuning in a more experienced athlete. The latter led me to do a quick bit of reading on whole vs part practice, which takes into consideration movement complexity, organisation and difficulty, and is discussed in Magill and Anderson’s Motor Learning and Control book, with an example of its application in an asymmetrical bimanual coordination task here, by Kurtz and Lee.
I’m attaching a photo of a messy page of thoughts which preceded this post. It includes a somewhat legible example (blue writing) of the power of a well-learned movement which persists incorrectly into a new environment, only occasionally detected via conscious cognitive effort, and never by sensory perturbations, because everything is in line with what the year-old forward model predicts during the action of exiting one room into another! This highlights the fallibility of using cognitive information to detect errors in a movement, which is better guided by an accurate internal model built from lower-level sensory information.