Fig. 11.1
Examples of different manipulator architectures for upper limb rehabilitation. Serial manipulators: (a) MIT Manus [1]; (b) MIME [2]; (c) ARM Guide [3]; (d) ACT 3D [4]; (e) GENTLE/S [5]; (f) Armon [6]. Exoskeletal architectures: (g) Pneu-WREX [7]; (h) ARMin [8]; (i) Dampace [9]; (j) L-EXOS [10]; (k) MGA exoskeleton [11]; (l) CADEN-7 [12]
Robotic devices can also be grouped according to their control architecture. The earliest applications in robotic rehabilitation employed suitably controlled industrial manipulators. However, because of their high power-to-weight ratio and the related safety concerns, engineers moved to a different class of devices whose requirements were explicitly oriented toward enhancing human-robot interaction. This class of haptic devices relies on force-reflecting controllers designed to render the simulated environment realistically and therefore to generate different mechanical impedances at the end effector. The achievable rendering depends on the actuation: a more powerful electromechanical actuator can deliver higher forces at the end effector, but it is also heavier and increases the friction and inertia perceived by the human operator. Conversely, small motors yield a highly back-drivable design but a limited range of haptic rendering. For this reason, the choice of actuation is a crucial design specification in haptics, driven by the overall structure of the device, including its control architecture. Several solutions have been adopted to implement haptic control schemes, but the most widely used fall into two classes: impedance controllers and admittance controllers.
To understand how these two control architectures work, Fig. 11.2 depicts a simplified model of a one degree-of-freedom actuator, where F_h and F_a are, respectively, the force exerted by the human and by the actuator. Writing the equation of motion of the system, a further term appears: the force associated with the intrinsic mechanical impedance Z, which accounts for the friction of the mechanism and the inertia of its moving mass.
Fig. 11.2
Simplified model of a haptic actuator
$$F_h + F_a = Z\,\dot{x} = m\,\ddot{x} + b\,\dot{x} \qquad (11.1)$$
The maximum force that the haptic device can deliver is strictly related to the maximum level of haptic rendering the human operator can perceive. This concept is captured by the definition of transparency [13] of a haptic device, which is the ratio between the impedance commanded at the input of the haptic interface, Z_in, and the output impedance actually perceived by the operator, Z_out.
$$T = \frac{Z_{in}}{Z_{out}} \qquad (11.2)$$
The principle of transparency is strictly related to the concept of back-drivability: a transparency close to unity means that the source of mechanical impedance is not altered by the mechanics of the device [13], i.e. the action of the actuator is perceived at the end effector by the operator without distortion. Colgate [14] provided another important contribution to the understanding of haptic rendering, defining the Z-width as the difference between the maximum renderable load Z_max and the friction and inertia perceivable during free-space movement, Z_min.
$$Z_{width} = Z_{max} - Z_{min} \qquad (11.3)$$
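The two quantities just defined can be evaluated numerically for a simple device model. The following minimal Python sketch does so for a hypothetical one degree-of-freedom device rendering a virtual spring-damper; all numerical values, and the interpretation of Z_in as the commanded impedance and Z_out as the commanded plus intrinsic impedance, are illustrative assumptions rather than parameters taken from this chapter.

```python
import numpy as np

# Hypothetical 1-DOF device: moving mass m [kg] and viscous friction b [N s/m]
m, b = 0.5, 2.0
# Commanded (virtual) impedance: spring K_v [N/m] and damper B_v [N s/m]
K_v, B_v = 200.0, 5.0

w = 2 * np.pi * np.logspace(-1, 2, 400)   # 0.1-100 Hz, expressed in rad/s
s = 1j * w

Z_dev = m * s + b            # intrinsic mechanical impedance of the device
Z_in = B_v + K_v / s         # impedance commanded at the input of the interface
Z_out = Z_in + Z_dev         # impedance actually perceived at the end effector

T = np.abs(Z_in / Z_out)                              # transparency magnitude, Eq. (11.2)
Z_width = np.abs(Z_out).max() - np.abs(Z_dev).min()   # rough Z-width estimate, Eq. (11.3)

print(f"worst-case transparency over the band: {T.min():.2f}")
print(f"approximate Z-width: {Z_width:.0f} N s/m")
```

Plotting T against frequency would show transparency degrading in the band where the device inertia dominates the commanded impedance.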
Both the Z-width and the transparency T are complex operators: they depend on many factors and are strictly related to the frequency characteristics of the device and of the associated control architecture. In haptics, and in human-robot interaction applications in general, two main controllers are used, and the choice between them is strictly correlated with the specific features of the hardware on which they are implemented; the main characteristic driving this choice is the intrinsic mechanical impedance Z of the hardware. For a very back-drivable device (with a very low value of Z) the preferred solution is the simple impedance control scheme. The loop of an impedance controller does not require any force sensor to detect the interaction forces exchanged at the end effector between the operator and the device: thanks to the back-drivable mechanism, the motion of the operator is the quantity from which the control action is generated. As shown in Fig. 11.3a, the control effort is computed by comparing the desired position x_d with the actual position x_a; in most applications the force generated by the impedance controller is the sum of a proportional and a derivative action, as reported in the following formula.
Fig. 11.3
(a) Impedance control scheme. (b) Admittance control scheme
$$F_c = K_p\,(x_d - x_a) + K_d\,(\dot{x}_d - \dot{x}_a) \qquad (11.4)$$
The computed force F_c is then summed with the force exerted by the human and converted into joint torques. The overall impedance felt by the user, i.e. the transfer function between the robot motion x_a and the hand force F_h, is a combination of the programmed impedance and the robot dynamics Z. Therefore, this control scheme is applied only when the robot dynamics are negligible compared with the target impedance generated by the controller. Open-loop impedance controllers are easy to implement and are widely used in robotic rehabilitation: devices like MIT-Manus [1], 'Braccio di Ferro' [15], ATR's Parallel direct-drive air-magnet Floating Manipulandum (PFM) [16] and vBot [17] all use this control scheme.
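To illustrate how little machinery an impedance controller requires, the following Python sketch simulates a back-drivable one degree-of-freedom device under the PD law of Eq. (11.4). Masses, gains, and the operator-force profile are purely illustrative assumptions and do not describe any of the devices cited above.

```python
import numpy as np

# Illustrative 1-DOF impedance-control loop (assumed parameters, 1 kHz update rate)
m, b = 0.3, 1.0            # low mass and friction: a back-drivable mechanism
K_p, K_d = 150.0, 8.0      # programmed virtual stiffness [N/m] and damping [N s/m]
dt = 0.001

x_a, v_a = 0.05, 0.0       # actual position/velocity: start 5 cm from the virtual target
x_d, v_d = 0.0, 0.0        # desired (virtual) equilibrium

def human_force(t):
    # hypothetical operator action: a brief 3 N push between 0.5 s and 0.7 s
    return 3.0 if 0.5 < t < 0.7 else 0.0

for k in range(int(2.0 / dt)):
    t = k * dt
    # Eq. (11.4): PD action on the position error, no force sensor needed
    F_c = K_p * (x_d - x_a) + K_d * (v_d - v_a)
    # Eq. (11.1): device dynamics driven by the commanded and human forces
    a = (F_c + human_force(t) - b * v_a) / m
    v_a += a * dt
    x_a += v_a * dt

print(f"final distance from the virtual equilibrium: {abs(x_d - x_a)*1000:.2f} mm")
```

Note that the only measured quantities are the position x_a and its derivative; the human force enters the loop solely through the motion it produces.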
The admittance control scheme (see Fig. 11.3b) is based on two nested control loops and is adopted when the mechanical impedance perceived at the end effector is high, so that the system is not back-drivable. In the outer loop the human-generated force, measured by a force/torque sensor, is translated into a robot movement through a 'target admittance' block, which specifies the desired behavior of the manipulator at the interface with the subject.
In other words, the target admittance block reflects the desired haptic behavior by generating a desired device position θ_d; this position is then compared with the actual position θ_a and fed to an inner control loop consisting of a position controller, for example a PID (a PD in the figure), which drives the system to the configuration generated by the admittance outer loop. The robot is thus transformed into a position (displacement) generator, where the position is the response of the second-order linear model corresponding to the target admittance block. The inner loop is generally implemented by the motor servos at a very high rate (>3 kHz), whereas the outer loop can be updated at a lower rate to limit the computational burden. A minimal sketch of this nested structure is given below.
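The following Python sketch illustrates the nested structure described above: an outer loop integrating a second-order target admittance to turn the measured human force into a desired position, and a fast inner PD loop that tracks it on a heavy, non-back-drivable mechanism. All parameters, the sensor-force profile, and the simplification of neglecting the direct mechanical effect of the human force on the heavy device are illustrative assumptions.

```python
import numpy as np

# Target admittance (outer loop): virtual mass, damping, stiffness
M_v, B_v, K_v = 2.0, 10.0, 0.0
# Heavy, high-friction device and a stiff inner position loop (PD)
m, b = 15.0, 60.0
K_p, K_d = 5000.0, 200.0

dt = 1.0 / 3000.0          # inner loop at ~3 kHz
outer_div = 10             # outer loop updated 10 times more slowly

x_a = v_a = 0.0            # actual device state
x_d = v_d = 0.0            # desired state produced by the target admittance

def sensed_force(t):
    # hypothetical force/torque-sensor reading: the user pushes with 5 N for 0.5 s
    return 5.0 if 0.2 < t < 0.7 else 0.0

for k in range(int(2.0 / dt)):
    t = k * dt
    if k % outer_div == 0:
        # Outer loop: integrate M_v*x_dd + B_v*x_d' + K_v*x_d = F_h
        F_h = sensed_force(t)
        a_d = (F_h - B_v * v_d - K_v * x_d) / M_v
        v_d += a_d * dt * outer_div
        x_d += v_d * dt * outer_div
    # Inner loop: high-gain PD position control tracking x_d
    F_c = K_p * (x_d - x_a) + K_d * (v_d - v_a)
    a_a = (F_c - b * v_a) / m   # direct effect of F_h on the heavy mechanism neglected
    v_a += a_a * dt
    x_a += v_a * dt

print(f"device displacement produced by the push: {x_a*1000:.1f} mm")
```

With K_v = 0 the robot behaves like a virtual damped mass: a sustained push produces a proportional displacement, which is precisely the 'position generator' behavior described above.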
Admittance control is used in devices with low back-drivability; an example is the HapticMASTER (Moog, USA) [18], which is widely used in neurorehabilitation and neuroscience; see, for instance, the ACT-3D [4] and GENTLE/S [5] systems. When the user exerts a force on the device, the device reacts by generating the appropriate displacement.
Overall, the two control schemes behave in a similar way: the final purpose is to impose a dynamic behavior at the end effector that depends mainly on the computational action of the software, hiding the intrinsic mechanical impedance of the hardware. The difference lies in the quantities used to generate the control effort. The impedance controller receives as input the motion produced by the human interaction with the end effector and feeds back a force computed by the controller; the admittance controller, conversely, is based on the acquisition of the interaction forces between the human and the device. The force sensor at the end effector is therefore crucial in the admittance scheme for generating the desired haptic rendering, which is first computed as a desired position by the target admittance block and then translated into a force by the inner position loop. It is clear from these considerations that the admittance controller results in a more complex architecture than the impedance controller, because of the force sensor and of the two nested loops that must be properly tuned to preserve stability. Furthermore, while the admittance controller allows a wider Z-width, because it is usually implemented on powerful robotic devices able to deliver high force and torque ranges, the presence of force sensors and the need for very high control gains make it intrinsically less safe in case of failure. In the impedance controller, by contrast, the control loop is extremely simple and the back-drivability of the mechanical structure still assures a good level of haptic rendering and an intrinsically safe, gentle interaction. In other terms, impedance control is generally more robust than admittance control in most applications and implementations.
11.2 Measures of Performance and Recovery
About 80 % of stroke survivors are unable to perform activities of daily living (ADL) because of disturbed sensory feedback or impaired motor control of the upper limb on the paretic side [19, 20]. The associated sensory distortions manifest either as a reduction of tactile or afferent feedback or, conversely, as hypersensitivity and lack of control.
The loss of motor control manifests in the following typical neurological impairments:
Muscle weakness limits the maximum force a muscle can produce [21]. It is caused by damage to motor cortex neurons or their corticospinal projections, which decreases the activation of the spinal motor neurons controlling the muscles.
Hyperactive reflexes can resist or even reverse desired movements. When they depend on muscle-length feedback, hyperactive reflexes are felt as increased muscle tone or joint resistance [22, 23]; when they depend on muscle-velocity feedback, the effects are described as spasticity [24].
Abnormal muscle synergies express themselves through a loss of independent joint control, where involuntary co-activation of muscles occurs over multiple joints [25, 26]. For example, when attempting to reach up and out for an object on a shelf, the abduction torque in the shoulder causes an involuntary flexion of the elbow, reducing the achievable reaching distance of the hand [4].
Muscle atrophy is a decrease in muscle mass and results from muscle disuse over time [27]. The loss of neural activation leads to a slow wasting away of the affected muscle fibers, thereby contributing to long-term muscle weakness.
Increased joint stiffness is due to changes in muscle and tendon properties. These changes result from sustained muscle activity caused by abnormal muscle coactivation patterns or spasticity.
The immediate effects of a stroke range from loss of all voluntary muscle activation to no noticeable effect on limb movements. Spontaneous recovery can restore some of the original motor function, but this process takes many months to level out [28].
Several stroke assessment scales are used to more precisely assess the need for medical treatment and assistance, and to monitor functional recovery. The following scales all capture some of the mental and motor impairments in stroke survivors:
Barthel Index (BI): measures independent functioning and mobility in daily life.
Functional Independence Measure (FIM): measures the level of independence in activities of daily living.
Chedoke-McMaster Stroke Assessment (CMSA): measures impairment and activities of daily life.
Motor Activity Log (MAL): measures arm usage.
Modified Ashworth Scale (MAS): measures muscle tone.
Tone Assessment Scale (TAS): measures muscle tone.
Modified Tardieu Scale (MTS): measures muscle tone.
Motor Assessment Scale (MAS): measures performance of functional tasks.
Fugl-Meyer Assessment (FMA): measures motor and joint function and sensation.
Action Research Arm Test (ARAT): measures ability to handle different objects.
Nine Hole Peg Test (NHPT): measures fine manual dexterity.
Wolf Motor Function Test (WMFT): measures time-based upper extremity performance.
These scales are listed in order of increasing level of detail. The top scales only yield an indication of the care and assistance needed, while the bottom scales measure the dexterity of the paretic upper limb and are the most useful for upper-extremity research. Of these, the FMA [29] is a well-designed, feasible clinical examination based on the general stages of recovery [30]: these stages cover the course of the pathology from an initial stage in which the subject is unable to voluntarily move the limb, through an intermediate stage in which synergies are corrupted by the presence of spasticity, to a final stage in which movement still appears clumsy but spasticity is reduced and the subject can voluntarily control the motion and perform simple actions. The FMA has been widely tested in the stroke population, but because of the time it takes to administer it is used mostly by scientists rather than by therapists or physicians. The ARAT and NHPT have been suggested as faster and more accurate assessments of dexterity [31], while for quantifying muscle tone the MTS seems the most objective, as it measures the stretch reflex induced by an angular movement of each joint across its range of motion at different velocities [32].
A problem with most of these clinical scales is their non-linearity, lack of resolution, and limited inter-rater reliability. Some scales have only six possible levels, and results may vary when different examiners administer the test. Robots are now used in research environments to obtain more accurate measurements [33–35]: the capacity of a haptic device to acquire multiple data streams from its sensors can lead to a unified approach in which robotic rehabilitation is used not merely as an exercising machine, but also as an instrument able to quantify and qualify the pathological condition. Several parameters can be measured during robot therapy; most works in the literature use kinematic data to characterize motor learning and motor recovery. The most widely used experimental paradigm consists of point-to-point movements, starting from a position in the workspace and reaching a target with or without assistance of the robotic interface. Observation of the kinematics usually consists of an offline analysis of the trajectories over the course of the different phases of the protocol: each reaching trajectory carries information about the capacity of the subject to interpret the task and then control his/her limb. The analysis usually characterizes each trajectory by the following parameters (a sketch of how some of these indicators can be computed from recorded trajectories is given after the list):
Lateral deviation (LD): the deviation from the straight line that connects the initial position to the target, evaluated at the time of peak velocity or, alternatively, as the maximum deviation over the entire trajectory. Positive and negative errors correspond to leftward and rightward lateral deviations, respectively [36].
Acceleration peak: the highest value of the acceleration profile. When associated with movement direction, this indicator provides a polar plot that is expected to be asymmetric, as suggested by a previous study [37] analyzing the anisotropy of the inertial properties of both the human and robot arms.
Aiming error: the angular deviation from the ideal trajectory (the straight line connecting the starting point to the target), evaluated 300 ms after movement onset; it is an indicator mainly used to evaluate the feedforward component of the movement.
Jerk metric: the jerk is the derivative of the acceleration and provides a measure of movement smoothness, which has been shown to be a characteristic of coordinated, healthy movements [38–41].
Directional analysis: usually performed for each of the previously defined variables, in order to highlight the interplay between the effect of the assisting/deviating robotic action and the capacity of the subject to generalize the movements across the arm workspace [42, 43].
Learning index: the degree of adaptation to the action of a robotic device during a motor learning or motor recovery exercise. This index is usually computed by observing the effect of the force field on the lateral deviation through the following formula [44], which takes into account the lateral deviations in force-field trials (LD_ff) and in catch trials (LD_catch), i.e. trials in which the force field generated by the robotic device is suddenly and randomly removed.
(11.5)
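As a reference for how some of these indicators can be computed from sampled trajectories, the following Python sketch implements generic versions of the lateral deviation and of a mean-squared-jerk smoothness metric. The sign convention, the sampling rate, and the synthetic trajectory are illustrative assumptions, not the exact definitions used in the cited studies.

```python
import numpy as np

def lateral_deviation(traj, start, target):
    """Signed distance of each sample from the straight start-target line (LD)."""
    d = (target - start) / np.linalg.norm(target - start)   # unit vector of the ideal path
    rel = traj - start
    # 2-D cross product: positive on one side of the line, negative on the other
    return rel[:, 0] * d[1] - rel[:, 1] * d[0]

def jerk_metric(traj, dt):
    """Mean squared jerk of a sampled 2-D trajectory; lower values mean smoother movement."""
    jerk = np.diff(traj, n=3, axis=0) / dt**3
    return np.mean(np.sum(jerk**2, axis=1))

# Illustrative use on a synthetic 26 cm reach sampled at 100 Hz (not experimental data)
dt = 0.01
t = np.linspace(0.0, 1.0, 101)
start, target = np.array([0.0, 0.0]), np.array([0.0, 0.26])
s = 10 * t**3 - 15 * t**4 + 6 * t**5          # minimum-jerk-like progression along the path
traj = start + np.outer(s, target - start)
traj[:, 0] += 0.01 * np.sin(np.pi * t)        # small lateral bump standing in for a real error

ld = lateral_deviation(traj, start, target)
print(f"max |lateral deviation|: {np.abs(ld).max()*1000:.1f} mm")
print(f"jerk metric: {jerk_metric(traj, dt):.1f} (m/s^3)^2")
```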
Other measures consider not the kinematics but the dynamics of the movements. Most devices employed in rehabilitation are equipped with force/torque sensors, which allow the interaction between device and patient to be measured. Although kinematic data provide a more general characterization of the movements, the force exchanged between the end effector and the subject's limb gives a deeper insight into the modulation of motor commands during human-robot interaction. This is the case of stiffness measurements. The pioneering technique for stiffness estimation was based on the acquisition of the muscular restoring force resulting from a small imposed displacement [45]: a computer-controlled mechanical interface (a planar manipulandum) was used to measure and plot the conservative elastic force field associated with the posture of the arm, by observing the steady-state force responses to a series of separate one-dimensional 'step' perturbations imposed from different directions. The main outcome was that the endpoint stiffness of the human arm in the horizontal plane has directional properties that depend on limb geometry. Robot-generated force impulses have since been used to estimate stiffness during multijoint movements [46–52], and a later experiment by Burdet et al. [53] strengthened the robustness of this technique by implementing an algorithm that modulates the imposed hand displacement based on a prediction of the unperturbed trajectory.
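The classic perturbation approach lends itself to a simple least-squares formulation: given the steady-state restoring forces measured after a set of small imposed displacements, the endpoint stiffness matrix can be fitted directly. The Python sketch below is a generic illustration of this idea on synthetic data; it is not the protocol or the estimator of [45].

```python
import numpy as np

def estimate_endpoint_stiffness(displacements, restoring_forces):
    """
    Least-squares fit of the 2x2 endpoint stiffness matrix K from steady-state
    responses to small positional perturbations, assuming F_restoring ~ -K * dx.
    Both arguments have shape (n_perturbations, 2).
    """
    dX = np.asarray(displacements)
    F = np.asarray(restoring_forces)
    K_T, *_ = np.linalg.lstsq(dX, -F, rcond=None)   # solves dX @ K.T ~ -F
    return K_T.T

# Synthetic example: an anisotropic "true" stiffness and 8 perturbation directions
rng = np.random.default_rng(0)
K_true = np.array([[300.0, 80.0], [80.0, 500.0]])                 # N/m
angles = np.deg2rad(np.arange(0, 360, 45))
dX = 0.008 * np.column_stack([np.cos(angles), np.sin(angles)])    # 8 mm steps
F = -(dX @ K_true.T) + rng.normal(0.0, 0.2, dX.shape)             # forces + sensor noise

print(np.round(estimate_endpoint_stiffness(dX, F), 1))
```

The recovered matrix is close to K_true, and its eigenvectors describe the directional properties of the stiffness ellipse discussed above.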
The question arises as to whether it is possible to obtain continuous information about the level of muscular stiffness during robot therapy, and thus gain insight into how the central nervous system modulates muscular activation while manipulating the external environment. Recently, Piovesan et al. [54, 55] made a first attempt in this direction, proposing a new and fast method for measuring the arm impedance of people with neuromotor disabilities during robot-assisted movements. The methodology was tested with a population of stroke survivors. The results showed that the performance improvements produced by minimally assistive robot training are associated with decreased viscosity and stiffness in the stroke survivors' paretic arm, and that these mechanical impedance components are partially modulated by visual feedback. In their review, Marchal-Crespo and Reinkensmeyer [56] highlighted the need for 'improved models of human motor recovery to provide a more rational framework for designing robotic therapy control strategies'. While musculoskeletal models have been extensively used to personalize rehabilitation treatments [57], only recently have studies proposed computational models of neuromotor recovery. Some of these models describe the subjects' capability to execute a task, e.g. [58, 59] modeled the time course of performance during training. Other models investigated how assistive forces [60] or, more generally, training [61] influence voluntary control and thus neuromotor recovery. Some of these models [62–64] were incorporated into the robot controllers to personalize the intervention and to automatically regulate the exercise according to the subject's residual ability, the progress of the disease, and the improvement due to the on-going therapy.
Another class of models focuses on the recovery process at the cortical level [65–67], studying how focal cortical lesions determine reorganization at the neural and motor levels. Several studies investigated muscle activation, highlighting that muscles participate in the production of movements in well-defined functional groups – called "synergies" – activated by the descending cortical commands. A recent study [68] suggested that muscle synergies can be considered physiological markers of motor cortical damage and can be used to characterize and understand motor impairment in stroke survivors. Finally, there are models that look at both the cortical and the behavioral levels [69–71], describing how voluntary movements in motor skill learning tasks induce cortical and subcortical reorganization.
These recent studies have opened the new field of computational rehabilitation and have far-reaching potential, both for a better understanding of the mechanisms underlying the recovery process and for the optimization of the rehabilitative intervention.
11.3 Learning Adaptation and Rehabilitative Exercises
As suggested by recent reviews [72–74], neural plasticity is recognized as the fundamental property of the human brain that must be exploited in order to achieve significant improvements in neuromotor rehabilitation of the upper limb. This somehow clashes with the conventional view among clinicians that the recovery margin is close to null once the stroke survivor becomes "chronic". It is therefore necessary to revise the conception and design of robot therapy in order to promote neural plasticity and enhance motor learning/re-learning. The underlying problem is that the main "cybernetic" effect of the neurological insult is to break the intrinsic coherence of purposive actions, namely the causal relation between intended actions, actual movements, and the corresponding reafferent feedback (see Fig. 11.4 for a graphical representation of this concept): the motor program that drives the muscles in agreement with a given task can successfully unfold its control patterns only if the sensory consequences of those patterns (the sensory reafferences) match the expected motion patterns. The same criterion of coherence impinges upon the validity of the task planner: if the expected and the actually measured motion patterns do not fit, then the task description and the related motor program must be changed.
Fig. 11.4
Cybernetic structure of purposive actions in healthy subjects
The neurological damage that characterizes stroke patients does not allow them to achieve the cybernetic coherence of purposive actions typical of healthy subjects. Rehabilitation treatments provided by human or technological operators have the purpose of helping patients recover a minimum of cybernetic coherence by providing suitable assistance patterns. In general, three types of assistance can be envisaged, as sketched in Fig. 11.5: (1) cognitive assistance, characterized by procedural, declarative and motivational aspects; (2) electrical assistance (FES), i.e. direct electrical stimulation of weakly activated muscle groups; (3) haptic assistance, i.e. physical interaction with a human or robot therapist, consisting of modulated patterns of assistive/resistive forces associated with an optimally regulated mechanical impedance of the therapist.
Fig. 11.5
Possible types of assistance in pathological conditions
Cognitive assistance has frequently been developed in the framework of immersive or semi-immersive visual virtual reality. If used alone, however, it must be limited to individuals with a rather low level of impairment, namely patients who are able to carry out a goal-directed movement. In intermediate cases it has been shown that a specific type of non-adaptive physical assistance, for example partial compensation of gravity (Armeo Spring, Hocoma AG, Switzerland), can be helpful. In more severe cases, in which the subjects are unable to carry out simple reaching movements in some directions (for example center-out movements to distant targets at the border of the workspace) or have a strongly reduced range of motion (RoM), movements must be supported by carefully regulated assistance. The purpose of such assistance is not to carry out the movements in place of the subject, as happens in prosthetic devices; on the contrary, the aim is to train the subject in such a way that he/she can re-learn the control patterns necessary to perform the movement. The physiological prerequisite of functional recovery through training is the availability of neural plasticity, and the main issue the designer of rehabilitation technologies must face is to adopt design principles that recruit such plasticity efficiently. Two main dangers must be taken into account: (1) the danger of spasticity and (2) the danger of slacking [60]. The first suggests using smooth assistance patterns, in order to avoid strong acceleration peaks that may induce exaggerated reflex reactions, determine muscular hypertonus, and worsen the level of spasticity. For this reason it is important to integrate, in the electrical/haptic assistance protocol, mechanisms for on-line detection of the mechanical impedance of the arm, which is a sensitive indicator of excessive tonic activity induced by the treatment (see later sections of this chapter). The second danger consists of the general tendency of human subjects to over-rely on external help, behaving as greedy optimizers of error and effort; it suggests modulating the assistance to the minimum level required to carry out the planned action without time constraints, i.e. in a self-paced manner. This is equivalent to attempting to trigger the assisting patterns upon positive detection of the intention to move. In other words, the regulation of assistance should avoid the imposition of passive movements, because in that case the physiological coherence of the plan-effort-reafference circle is not reinforced: avoiding (unidirectional) passive mobilization (PM) is the main avenue to promoting (bidirectional) haptic interaction. A minimal sketch of how assistance can be regulated toward this minimum necessary level is given below.
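One way to operationalize "minimum necessary assistance" is an error-driven gain with a forgetting term, in the spirit of the assist-as-needed controllers discussed in [60]: the assistance grows when the subject fails and continuously decays otherwise, so that slacking is not rewarded. The Python sketch below is a minimal, purely illustrative version of this idea; the update rule and all parameter values are assumptions, not the controller of [60].

```python
def update_assistance(gain, tracking_error, alpha=0.8, lam=0.05):
    """
    Error-driven assistance gain with a forgetting factor: the gain increases with
    the residual tracking error and decays toward zero whenever the subject performs
    well, counteracting the tendency to slack.
    """
    gain = gain + alpha * tracking_error - lam * gain
    return min(max(gain, 0.0), 1.0)   # keep the gain between 0 (no help) and 1 (full support)

# Hypothetical sequence of per-trial errors: the subject gradually improves
errors = [0.20, 0.15, 0.12, 0.08, 0.05, 0.03, 0.02, 0.01]
g = 0.0
for e in errors:
    g = update_assistance(g, e)
    print(f"error {e:.2f} -> assistance gain {g:.2f}")
```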
On the other hand, passive mobilization can help counteract the deterioration of the thixotropic properties of the collagen matrix of the muscle tissue, which is a secondary consequence of the functional immobilization of the paretic limbs of stroke survivors. Thus, although some degree of passive mobilization is acceptable in a treatment routine (see also the technique of dynamic splinting, discussed in a coming section of the chapter), the majority of the treatment must be based on smooth and minimized patterns of assistance, triggered by the detected intention to move. Indeed, task-oriented training has emerged as a leading concept in clinical practice: it is not movement per se, obtained for example by means of passive mobilization, that is effective in recruiting plastic adaptation, but minimally assisted movements associated with a task and with volitional effort.
The intention to move, even in severely motor-impaired subjects, can be detected directly or inferred/promoted indirectly. Examples of the former approach are the "contralateral-homonymous paradigm" [75] and "body-machine interfaces" that extract motor intentionality even from extremely reduced mobility [76]. Another direct approach to the detection of intentionality is based on the analysis of brain activity, which is known to occur in anticipation of overt actions, in the so-called preparation time. For example, event-related desynchronization signals from cortical activity have proved to be reliable and effective triggers of electrical assistance for the fast recovery of foot drop [77]. In contrast, EMG-triggered or cyclically delivered FES, in the absence of a trigger related to the intention to move, is totally ineffective for the recovery of voluntary control [78, 79]. Figure 11.6 depicts the aforementioned solutions. The indirect approach to the detection of intentionality is a typical topic in HRI (Human-Robot Interaction) and in the study of interacting dyads. It is known that people often perform actions that involve a direct physical coupling with another person, for example when cooperating in the manipulation of a large and heavy object. This is a problem of coordination through physical interaction, which has some analogy with the interaction between a human or robotic therapist and a motor-impaired individual. It has been found that dyads produce much larger overlapping forces than individuals, especially in tasks with higher coordination requirements [80], suggesting that dyads amplify their forces to generate a haptic information channel. In robotic rehabilitation such a haptic information channel can be facilitated by different techniques, such as oscillatory or pulsed assistance patterns, as explained in the following sections of the chapter. In general, it is safe to say that in the near future the different assistance channels will need to be integrated. Rehabilitation of individuals with neuromotor disabilities is a complex process that requires a "division of labour", exploiting the pros of one method to compensate for the cons of another. In particular, electrical assistance seems more appropriate for specific muscles or small muscle groups, whereas it is unfeasible for movements that involve multiple degrees of freedom. In contrast, haptic assistance delivered by a robot or exoskeleton is naturally suited to coordinated movements of the proximal and distal segments of the arm, although it is ill-suited to more functional movements typical of the activities of daily life, such as picking up a bottle and filling a glass. This is where human assistance, by an experienced physiotherapist, is more appropriate. On the other hand, the degree of success of such "precious" and "expensive" human intervention is greatly enhanced if electrical/robot-assisted training has allowed the patient to recover basic control functionalities, such as range of motion, speed, and force, which are prerequisites for achieving functional movements.
Fig. 11.6
Triggering channels of assistance with intention detection
11.4 Results and Case Studies
In the present chapter we report some examples of the use of haptic devices designed and developed at the Motor Learning and Robotic Rehabilitation Laboratory of the Istituto Italiano di Tecnologia. The first part is dedicated to applications for the proximal arm (elbow and shoulder) and to body-machine interfaces, with special emphasis on assistive control and visual feedback design. The second part of the case studies focuses on rehabilitation of the distal upper limb, in particular the wrist.
11.4.1 Case Study 1: Pulsed Assistance
This study [81] aimed at evaluating a new paradigm of minimal haptic assistance designed to aid stroke survivors in a reaching task. We proposed to combine a force field that is continuous in time with a pulsed component, characterized by sequences of a smooth impulse (0.2 s) followed by a refractory phase (0.3 s), giving a repetition frequency of 2 Hz. For each subject we estimated the minimal assistive force that the robot has to provide to keep him/her stable in different positions of the workspace. This value corresponds to the maximum assistive force applied by the robot during the reaching task, in which the continuous component contributes 50 % of the total amplitude while the remaining 50 % is the impulse peak amplitude. As a consequence, the average assistance over an impulse period is considerably lower (∼30 %) than the estimated minimal assistance. This choice makes the task more challenging for the subjects and aims to avoid the phenomenon known as slacking. Moreover, the inclusion of a transient component in the force feedback should enhance the phasic response of the mechanoreceptors and hence favor proprioceptive awareness.
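To clarify the timing of the assistive force, the following Python sketch generates one period of the combined continuous-plus-pulsed profile. The chapter specifies only the 0.2 s impulse, the 0.3 s refractory phase, and the 50/50 split between the continuous component and the impulse peak; the raised-cosine pulse shape and the numerical value of the minimal assistance are assumptions made for illustration.

```python
import numpy as np

def pulsed_assistance(t, F_min):
    """
    Assistance amplitude at time t [s], given the estimated minimal assistive force F_min [N].
    Continuous component: 0.5 * F_min at all times. Pulsed component: a smooth 0.2 s impulse
    followed by a 0.3 s refractory phase (2 Hz repetition), peaking at the remaining 0.5 * F_min.
    """
    phase = t % 0.5                                            # impulse + refractory = 0.5 s
    if phase < 0.2:
        pulse = 0.5 * (1.0 - np.cos(2 * np.pi * phase / 0.2))  # assumed raised-cosine bump, 0 -> 1 -> 0
    else:
        pulse = 0.0
    return 0.5 * F_min + 0.5 * F_min * pulse

# One period sampled at 1 kHz, with an assumed F_min of 10 N
t = np.arange(0.0, 0.5, 0.001)
F = np.array([pulsed_assistance(ti, F_min=10.0) for ti in t])
print(f"peak assistance: {F.max():.1f} N, mean over one period: {F.mean():.1f} N")
```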
The protocol was designed to promote the active execution of large outward movements. Five 'far' targets (T) were arranged at a distance of 26 cm from a starting position (S) for center-out movements. Three intermediate targets (I) were added for the return movements, at a distance of 13 cm from S, as shown in Fig. 11.7 (left panel). The task consisted of reaching one target, chosen randomly from the set of points, with a threshold of 2 cm. Reaching sequences followed the scheme S → T → I → S. After a reaching movement was completed, a 1 s delay was introduced before presenting the next target in the sequence. We tested two conditions, vision (V) and no-vision (NV). Haptic feedback was provided in both cases; in the vision condition a screen displayed the target and the current hand position as colored circles. The protocol included two sessions on two separate days: a familiarization session and a test session comprising two evaluation blocks and one training block. In the training block the subjects had to complete a total of six target sets (three with vision, three without vision); each target set consisted of 30 outward movements (S → T) and 60 inward movements (T → I and I → S), for a total of 90 movements. The evaluation blocks aimed at estimating the minimal assistance level.
Fig. 11.7
Left panel: pulsed assistance task. Red circles are the possible target locations, green circles the intermediate targets, and the black circle the starting point. The yellow filled circle represents the current hand position and the red filled circle the active target. Right panel: bar graphs showing the median values of the performance indicators for the first (grey) and last (light blue) trials
Five stroke survivors (all females, age 45.6 ± 12.5) participated in this study. The inclusion criteria were: (1) diagnosis of a single, unilateral stroke verified by brain imaging; (2) sufficient cognitive and language abilities to understand and follow instructions; (3) chronic condition (at least 1 year after stroke); (4) stable clinical conditions for at least 1 month before being enrolled in this study. All subjects underwent clinical evaluations before starting the present study. The average Ashworth score was 1.6 ± 0.4 (range 0–4), the average Fugl-Meyer score was 30.4 ± 15.4 (range 0–66).
Subjects' trajectories throughout the training appeared fragmented into a series of sub-movements. Nonetheless, we observed a substantial performance improvement over the training session in the NV condition: the mean movement speed (Vm) increased (V: 0.011 ± 0.018 (0.044, −0.029) [m/s], NV: 0.029 ± 0.033 (0.131, −0.029) [m/s]) and, accordingly, the ratio of the duration of the first sub-movement to the total movement time (TR, range [0, 1]) increased in the NV condition (V: −0.163 ± 0.476 (0.310, −1) [s], NV: 0.332 ± 0.393 (0.965, −0.521) [s]). At the end of the training, subjects improved their precision in reaching the targets, especially in NV trials, and the end-point error (E) decreased during the training (V: 0.006 ± 0.030 (0.082, −0.077) [m], NV: −0.059 ± 0.069 (0.091, −0.205) [m]). Figure 11.7 (right panel) shows Vm, TR, and E for one of the subjects. These results show that the minimal level of assistance required was consistently reduced at the end of the session (1.27 ± 1.58 (0.04, −3.45) [N]) for most of the subjects.
11.4.2 Case Study 2: Integrating Proprioceptive Assessment with Proprioceptive Training
Multisensory integration of the information from muscle spindles, Golgi tendon organs, and joint and cutaneous receptors of the arms allows the human brain to be aware of the relative position of the two hands, as well as of their positions in the peripersonal space, in the absence of visual feedback. This integration capacity is crucial for conceiving and carrying out purposive actions in everyday life, but it is frequently impaired in stroke patients [82] and constitutes a strong obstacle to the recovery of sensorimotor functions. Clinical observations suggest that intact position sense following stroke strongly correlates with motor recovery of the paretic arm and is predictive of long-term motor recovery [83]. Until now, however, the capacity to discriminate the relevant features of these deficits has been limited.
A recent work by Dukelow et al. used the KINARM device [84], fitting each arm of the subject into one of two exoskeletons: one arm was passively placed in one of nine positions, in one half of the workspace, and the subject was asked to actively mirror-match it with the other arm in the contralateral hemispace. This procedure provides a quantitative assessment of limb position sense in the joint configuration space.
We propose an alternative method [85], based on a bimanual manipulandum [86]. The subject is asked to actively match the hand position of the paretic arm with the healthy arm, using a set of 17 test points balanced between the two halves of the workspace. In other words, matching is performed in the extrinsic peripersonal space and what is assessed is the hand position sense. Since proprioceptive assessment should always be integrated with proprioceptive training in order to provide adaptive assistance, we also propose a robot treatment that uses the same set of 17 target points, balanced in the peripersonal workspace, with the difference that in this case the non-paretic hand (passively placed by the robot in one of the 17 test points) is the target of the paretic hand, and the motion of the arm is robot-assisted by a smooth force field.