ORIGINAL RESEARCH article

Front. Neurosci., 02 September 2014
Sec. Neuroprosthetics
This article is part of the Research Topic Biosignal Processing and Computational Methods to Enhance Sensory Motor Neuroprosthetics

Feedback control of arm movements using Neuro-Muscular Electrical Stimulation (NMES) combined with a lockable, passive exoskeleton for gravity compensation

  • 1Control Systems Group, Technische Universität Berlin, Berlin, Germany
  • 2Research Group for Machine Design and Rehabilitation, Vienna University of Technology, Vienna, Austria
  • 3Hocoma AG, Volketswil, Switzerland
  • 4NeuroEngineering and Medical Robotics Laboratory, NearLab, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Milan, Italy
  • 5Fraunhofer Institute for Experimental Software Engineering, Kaiserslautern, Germany

Within the European project MUNDUS, an assistive framework was developed to support arm and hand functions during daily life activities in severely impaired people. This contribution aims at designing a feedback control system for Neuro-Muscular Electrical Stimulation (NMES) to enable reaching functions in people with no residual voluntary control of the arm and shoulder due to high-level spinal cord injury. NMES is applied to the deltoid and biceps muscles and integrated with a three degrees of freedom (DoFs) passive exoskeleton, which partially compensates gravitational forces and allows each DoF to be locked. The user is able to choose the target hand position and to trigger actions using an eyetracker system. The target position is selected by means of the eyetracker and determined by a marker-based tracking system using Microsoft Kinect. A central controller, i.e., a finite state machine, issues a sequence of basic movement commands to the real-time arm controller. The NMES control algorithm sequentially controls each joint angle while locking the other DoFs. Daily activities, such as drinking, brushing hair, and pushing an alarm button, can be supported by the system. The robust and easily tunable control approach was evaluated with five healthy subjects during a drinking task. Subjects were asked to remain passive and to allow NMES to induce the movements. In all subjects, the controller was able to perform the task, and a mean hand positioning error of less than five centimeters was achieved. The average total duration for moving the hand from a rest position to a drinking cup, moving the cup to the mouth and back, and finally returning the arm to the rest position was 71 s.

1. Introduction

The consequences of Spinal Cord Injury (SCI) can be severe. Depending on the level of the lesion, SCI causes a loss of motor and sensory functions and results in the immobilization of the patient. The level of lesion in SCI refers to the vertebrae in the spinal column affected by the injury. The higher the injury on the spinal cord, the more dysfunction can occur. Cervical (neck) injuries usually result in full or partial tetraplegia (paralysis of the arms, legs, and trunk of the body below the level of the associated injury to the spinal cord). Individuals with a complete lesion at the C7 level or above (C6, C5, …) usually depend on attendant care for all daily life activities.

In SCI patients, the neural pathway from the Central Nervous System (CNS) to the muscles is interrupted. The injury may cause complete or partial lesions of the upper and/or lower motor neurons. The upper motor neuron originates in the motor region of the cerebral cortex or the brain stem and carries motor information down to the lower motor neurons. All lower motor neurons (LMNs) related to voluntary movements are located in the ventral horn of the spinal cord and the anterior nerve roots (spinal lower motor neurons) and innervate skeletal muscle fibers. They act as a link between upper motor neurons and muscles. In case of upper motor neuron lesions, Neuro-Muscular Electrical Stimulation (NMES) can be applied to the lower motor neurons that are still intact to cause artificial contractions of the innervated muscles (Sheffler and Chae, 2007). This replaces the missing control signals from the CNS to the muscles.

Restoration of grasp function by NMES in spinal cord injured individuals has been realized by different research groups and is even available in the form of commercial systems (for an overview see Popovic et al., 2002; Rupp and Gerner, 2007). Available neuroprostheses for grasping are able to restore the two most frequently used grasping styles: the palmar and the lateral grasp (Popovic et al., 2002). Subjects with complete SCI at the C5-C7 level benefit the most from a grasping neuroprosthesis and achieve a high level of independence in Activities of Daily Living (ADL). These individuals have sufficient residual function of the proximal upper limb muscles to perform reaching tasks.

Injuries at the high C3 and C4 level result in a significant loss of function at the elbow and shoulder level. The deltoid and biceps muscles are innervated from the C5 and C6 levels of the spinal cord. These muscles may also be denervated (lower motor neuron lesion), especially in case of C4 tetraplegia. However, the extent of denervation is likely to vary across individuals. The feasibility of restoring shoulder and elbow functions at least partially by NMES was demonstrated by Acosta et al. (2001) in persons with C3/C4 tetraplegia using percutaneous stimulation electrodes and by Bryden et al. (2000) in persons with C5/C6 tetraplegia using a fully implanted stimulation system. However, the generated force in individuals with C3 and C4 SCI was not sufficient to hold the arm against gravity. In this context, it should also be noted that long-lasting electrical stimulation of shoulder and arm muscles is generally not appropriate due to the fast fatigue of electrically stimulated muscles.

In order to enable reaching functions in individuals with SCI at the C3 and C4 level, NMES hybrid orthoses have been investigated. Hoshimiya et al. (1989) used a balanced forearm orthosis (BFO) to support arm motions, Smith et al. (1996) used a suspended sling to provide shoulder joint stability, and Nathan and Ohry (1990) applied mechanical splinting. All studies reported limited performance because of insufficient shoulder control. The stimulation was commanded by voice control (Nathan and Ohry, 1990), by breathing patterns (Hoshimiya et al., 1989), or by contralateral shoulder motion sensed by a position transducer (Smith et al., 1996).

Schill et al. (2011) developed the OrthoJacket system—an active NMES hybrid orthosis for the paralyzed upper extremity. The system combined NMES-controlled grasping with electrical/pneumatic actuation of shoulder movements and a flexible fluidic actuator for the support of elbow-joint movements. For control of the orthosis, EMG signals from arm muscles were acquired. This means that only individuals with some residual arm/hand function could use this system. Furthermore, NMES was not used for movement generation at the shoulder or elbow joint.

Within the EU project TOBI, a further NMES hybrid orthosis was developed to support both grasping and elbow-joint movements by NMES (Rohm et al., 2010). However, this system required sufficient residual shoulder function to be provided by the user. To avoid excessive stimulation of the biceps muscle during holding tasks, the orthosis' elbow joint was self-locking in the direction of flexion and could be unlocked electrically. A Brain Computer Interface (BCI) and a shoulder joystick on the non-supported side were provided as interfaces for the control of the orthosis.

In all existing systems, either NMES was applied in an open-loop manner using pre-defined stimulation patterns, or the patient had to adjust the stimulation intensity, e.g., via a position transducer at the contralateral shoulder or through EMG signals of preserved muscles. None of these systems allows automatic positioning of the hand at arbitrary positions in the reachable workspace. In addition, deviations from the desired behavior, e.g., due to muscular fatigue, are not automatically compensated.

This study aims at developing a fully feedback-controlled arm neuroprosthesis for individuals with no or very weak residual arm and shoulder functions (such as persons with C3/C4 tetraplegia). In contrast to existing arm neuroprostheses, the proposed solution allows the hand to be positioned at arbitrary desired positions within the reachable workspace. This arm neuroprosthesis is a component of the modular assistive framework MUNDUS (Pedrocchi et al., 2013), which has been developed to support and recover arm and hand functions in severely impaired people. The arm reaching functionality can be extended by a robotic or NMES-based module for grasping assistance.

To reduce the amount of stimulation required for the arm and shoulder muscles, a passive light-weight exoskeleton supports the user in addition to NMES. The main purpose of the exoskeleton is gravity compensation by means of a passive spring mechanism. In addition, the exoskeleton allows all joints to be locked so that the arm can be held at given positions without NMES. Thus, only point-to-point movements under gravity compensation have to be realized by means of artificial muscle activation, assuming no or insufficient residual motor control by the user over his/her arm and shoulder musculature.

Automatic control of NMES to achieve functional shoulder/arm movements is challenging due to the highly non-linear and time-varying behavior of the electrically stimulated muscles (Lynch and Popovic, 2008). Mimicking physiological movements would require identifying the musculo-skeletal system of the arm for each individual and each time the system is applied. This would require a long-lasting calibration procedure that is infeasible in clinical environments or at home. For the use of NMES in stroke rehabilitation, Iterative Learning Control (ILC) has been proposed in order to generate precise functional reaching movements (Freeman et al., 2012). ILC requires a cyclic movement generation. After every movement cycle, an error trajectory with respect to a given reference movement is determined and used either to update an open-loop stimulation pattern or to update the reference trajectory of an underlying feedback controller. The latter approach guarantees a sufficiently small tracking error even for initial ILC trials but again requires a detailed model in order to design the feedback controller. To avoid any large calibration effort, we present a simpler movement generation strategy that involves sequential NMES control of all Degrees of Freedom (DoFs) available in the exoskeleton.

The manuscript is structured as follows: in Section 2.1, an overview of the overall control system architecture is given. Sections 2.2 and 2.3 then describe the employed exoskeleton and the muscle actuation by NMES, respectively, in detail. In Section 2.4, we introduce the kinematic model of the exoskeleton and its parameter identification as well as required coordinate transformations used by the arm controller. In Section 2.5, the feedback controlled generation of arm movements is presented in detail. Then, in Section 2.6, we describe the experimental trials performed on healthy subjects to evaluate the performance of the control system. Section 3 summarizes the results in terms of the positioning error and execution times achieved in the validation trials. The article closes with a discussion and some conclusions.

2. Materials and Methods

2.1. Control System Architecture

The entire system developed for the support of reaching movements is depicted in Figure 1. Potential users have no or very weak residual voluntary activation of arm, shoulder, and hand muscles, but they can still control the head and gaze fixation. They usually sit in a wheelchair in front of a table. The target motions supported by the system are daily life activities, such as drinking, eating, brushing hair, touching one's own body, pushing an alarm button, and moving an object on the table.


Figure 1. System architecture for support of reaching function.

The arm/shoulder movements are induced by NMES while an exoskeleton guides the movement and supports the arm during static postures in absence of NMES. The control signals (stimulation intensities and on/off state of the exoskeleton brakes) are generated by a real-time controller that receives commands from the Central Controller (CC) implemented in form of a finite state machine. The central controller instructs the real-time controller to move the hand to a given target position in the reachable workspace. Sensors integrated in the exoskeleton measure joint angles that are used as feedback variables by the real-time controller. The NMES control algorithm sequentially controls each joint angle while locking the other DoFs.

The user interacts with the system by means of an eyetracker. Therefore, a commercial system, the Tobii T60W system (Tobii Technology AB, Sweden), has been extended by a specific GUI for the MUNDUS application. The table-mounted eyetracker is integrated into a 17″ TFT monitor. During tracking, the Tobii T60 uses infrared diodes to generate reflection patterns on the corneas of the user's eyes. Proper image processing is used to identify the gaze point on the screen. The three dimensional position of the user's hand, of the objects to be manipulated, and of the mouth are continuously monitored by environmental sensors, i.e., two Kinect cameras (Microsoft Corp., Redmond, USA). To this end, colored markers are attached to the hand and the objects. The first Kinect camera provides an image of the working space to the eye-tracking screen. To start an interaction with a specific object, the user has to visually fixate this object on the eyetracker screen for a pre-defined time duration. Once an object is selected, the corresponding Kinect coordinates are sent to the CC which transforms these coordinates into the global (exoskeleton) 3D coordinate system. The transformed coordinates will then be used by the real-time controller for movement generation. The second Kinect camera is placed in front of the user and is used to track the face position.

The fixation detection algorithm has been developed specifically for the MUNDUS application. It comprises user-dependent temporal (i.e., the time during which the user has to continuously fixate an object or an icon on the screen to select the gazed point) and spatial (i.e., the area around the barycenter of the cluster of gaze samples inside which each sample has to fall for a fixation to be detected) threshold settings. To prevent unwanted fixation detections, a confirmation icon is shown on the eye-tracking screen after a fixation event is detected, and the user is asked to confirm or cancel the selection. Moreover, the working space in which the user can select the object/action to interact with is shown only after the user him/herself has selected the START icon from the standby interface that is provided by the eyetracking screen when MUNDUS is waiting for user interaction.
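As an illustration of these temporal and spatial criteria, a minimal fixation detector could look as follows; the threshold values, the function name, and the synthetic gaze trace are assumptions made for this sketch and do not reproduce the actual MUNDUS implementation.

```python
import numpy as np

def detect_fixation(gaze_xy, t, radius_px=40.0, min_duration_s=1.5):
    """Return the sample index at which a fixation is detected, or None.

    gaze_xy: (N, 2) gaze samples on the screen [pixels]
    t:       (N,)   sample time stamps [s]
    A fixation is detected once, over a sliding window of at least
    min_duration_s, every sample lies within radius_px of the window's
    barycenter (spatial criterion) for long enough (temporal criterion).
    """
    start = 0
    for i in range(len(t)):
        window = gaze_xy[start:i + 1]
        center = window.mean(axis=0)                        # barycenter of the cluster
        if np.any(np.linalg.norm(window - center, axis=1) > radius_px):
            start = i                                       # spatial criterion violated: restart window
        elif t[i] - t[start] >= min_duration_s:
            return i                                        # temporal criterion met
    return None

# synthetic gaze trace that settles on one screen point after 1 s
t = np.arange(0, 3, 1 / 60)
gaze = np.where(t[:, None] < 1.0,
                np.random.uniform(0, 800, (len(t), 2)),
                np.array([400.0, 300.0]) + np.random.normal(0, 5, (len(t), 2)))
print(detect_fixation(gaze, t))
```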

Special parts of the eye-tracker screen are dedicated to other available tasks (e.g., activating emergency switch off, touching spots of the body). The emergency icon is always displayed in the top-left corner of the screen, and it is continuously selectable to allow the user to stop MUNDUS. If the emergency icon is fixated, a message is sent by the eye-tracker that stops all MUNDUS components. To trigger sub-actions, specific questions are displayed on the screen and the user can reply by fixating a GO or a STOP icon.

The central controller interfaces all modules and interacts with the eyetracker and the real-time controller. For the purpose of system integration, the software components of the CC and the eyetracker module have been integrated on one single MS Windows-based PC. The real-time controller and the data processing of the environmental sensor module are based on a computer system running Linux with the RTAI extension. Development and testing of the control system are performed in Scilab/Scicos 4.1.2 using the real-time framework OpenRTDynamics. The communication between all modules is established via UDP, and messages are broadcast in XML format.
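For illustration, a movement command could be broadcast over UDP as a small XML message as sketched below; the tag names, attributes, and port number are invented for this example and do not reproduce the actual MUNDUS message schema.

```python
import socket

def send_command(target_xyz, host="127.0.0.1", port=5005):
    """Broadcast a movement command as a small XML message over UDP.

    The element and attribute names are illustrative only; they are not
    the MUNDUS message format.
    """
    x, y, z = target_xyz
    msg = (f'<command type="move_hand">'
           f'<target x="{x:.3f}" y="{y:.3f}" z="{z:.3f}"/>'
           f'</command>')
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg.encode("utf-8"), (host, port))
    sock.close()

send_command((0.45, -0.10, 0.30))   # hypothetical target position in meters
```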

2.2. Exoskeleton

As a basis for the exoskeleton design, the previously mentioned target motions were analyzed using a motion capture system (Lukotronic, Lutz Mechatronic Technology e.U., Austria) to estimate the required ranges of motion and expected loads at the joints (Karner et al., 2012; Reichenfelser et al., 2013). The 3D mechanical design was done in Catia V5R19 (Dassault Systèmes, France), focusing on modularity, simplicity, and light weight. The developed exoskeleton with gravity compensation is shown in Figure 2A. The available degrees of freedom (DoF) of the exoskeleton are:

1. Shoulder flexion/extension (angle ϑu),

2. Shoulder horizontal rotation (angle φu),

3. Elbow flexion/extension (angle ϑf).


Figure 2. (A) Exoskeleton with spring-based gravity compensation and electromagnetic brakes mounted on a wheelchair. (B) Cup holder with a universal joint in the handle.

The rotation of the forearm around the upper arm axis (humeral rotation) and the pronation/supination of the forearm are locked by the exoskeleton, as these DoFs are difficult to control by NMES using surface electrodes. Due to the reduced DoFs, the orientation of the hand is not freely adjustable in the workspace. Thus, to allow a safe handling of objects despite this constraint, special objects with a universal joint in the handle have been developed (e.g., the cup holder shown in Figure 2B).

The exoskeleton is equipped with magnetic encoders (Vert-X, Contelec AG, Switzerland) to measure the angles for all three DoFs. Electromagnetic DC brakes (Kendrion, Germany) can lock the shoulder horizontal rotation with a torque of 2.5 Nm, the shoulder flexion/extension with up to 5 Nm and the elbow flexion/extension with 1.5 Nm to hold the arm in any posture when the stimulation is switched off.

To realize gravity compensation, a compression spring is integrated in a vertical carbon tube that can either be mounted on a wheelchair, as shown in Figure 2, or alternatively be attached to a body harness for mobile use. The spring force is transferred to the elevation lever by a rope-and-pulley mechanism. Figure 3 depicts an isometric view of the shoulder joint mechanism and shows the occurring torques as a function of the shoulder elevation angle. A slight under-compensation (spring torque smaller than gravity torque) is intended, as the arm should move downwards slowly and gravity-induced when the stimulation and the brakes are turned off. The amount of compensation is adjusted manually by changing the wind-up length of the rope at the spring adjustment module. A linear guide provides the connection between the elevation lever and the upper arm shell and compensates misalignment of the anatomical and the mechanical shoulder joint. This also minimizes the reaction forces. For the elbow joint, an elastic band with a variable attachment point acts as weight support.


Figure 3. Isometric view of the shoulder joint mechanism showing the angle sensors and brakes for the two degrees of freedom. The right graph shows the occurring torque due to gravity (black solid line) together with the compensation torque (dashed red line) at the shoulder joint as a function of shoulder flexion/extension angle ϑu for an averaged upper arm weight of 2.15 kg and a forearm/hand weight of 1.91 kg with the elbow flexed at 90°. The resulting additional torque when the electromagnetic brake is switched on is shown as blue dash-dotted line.

The exoskeleton has a total weight of 2.2 kg and can be quickly adjusted to different anthropometric dimensions.

2.3. Neuro-Muscular Electrical Stimulation

The desired arm movements are induced by four stimulation channels activating the anterior, posterior and medial deltoid as well as the biceps muscle (cf. Table 1). By stimulating the medial deltoid, the shoulder extension can be actuated, while the anterior and posterior deltoid allow arm rotation in the horizontal plane. Stimulation of the biceps is used to flex the elbow-joint. Shoulder flexion as well as elbow extension are induced by gravitational forces.


Table 1. Stimulation channels.

One pair of self-adhesive hydrogel electrodes (oval-shaped, size 4 × 6.4 cm) is used for each stimulated muscle. For the generation of the biphasic stimulation pulses, the current-controlled stimulator RehaStim Pro (HASOMED GmbH, Germany) is used. The stimulation frequency for all channels is fixed at 25 Hz, while the individual current amplitudes and pulse widths can be adjusted in real-time using the open ScienceMode protocol through a galvanically isolated USB interface.

The stimulation intensity, in terms of the pulse charge νi, serves as control signal for muscle i. Table 1 shows the used control signal notation. The pulse charge νi of muscle i is defined as the product of the current amplitude Ii and the pulse width pwi. In this application, a given charge is equally distributed to pulse width and current amplitude (normalized to their maximal values) as follows:

pw_i = \sqrt{\frac{\nu_i \, pw_{\max}}{I_{\max}}}, \qquad I_i = \sqrt{\frac{\nu_i \, I_{\max}}{pw_{\max}}}, \qquad 0 \le \nu_i \le I_{\max} \, pw_{\max},

where pwmax = 500 μs and Imax = 127 mA are the maximal values of pulse width and current amplitude, respectively.
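As a sketch of this relation (using the square-root split implied by the equal normalized distribution stated above, with the limits given in the text), the conversion from a commanded pulse charge to pulse width and current amplitude can be written as:

```python
import math

PW_MAX = 500e-6   # maximal pulse width [s]
I_MAX = 127e-3    # maximal current amplitude [A]

def charge_to_pulse(nu):
    """Split a pulse charge nu [As] equally (in normalized terms) between
    pulse width and current amplitude."""
    nu = min(max(nu, 0.0), PW_MAX * I_MAX)          # respect 0 <= nu <= I_max * pw_max
    pw = math.sqrt(nu * PW_MAX / I_MAX)             # pulse width [s]
    amp = math.sqrt(nu * I_MAX / PW_MAX)            # current amplitude [A]
    return pw, amp

pw, amp = charge_to_pulse(20e-6)                    # e.g. 20 uAs of charge (example value)
print(f"pulse width = {pw*1e6:.1f} us, current = {amp*1e3:.1f} mA")
# the normalized values are equal: pw/PW_MAX == amp/I_MAX
```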

In a calibration phase that is always performed before using the MUNDUS system, the maximal tolerated pulse charge ν̄i of each muscle i is determined. Additionally, for the medial deltoid, the stimulation intensity ν̲d,m that causes the onset of a visible muscle contraction is determined. This value is required for the implementation of the more complex shoulder flexion/extension controller described in Section 2.5.2.

2.4. Kinematic Model and Coordinate Transformations

To calculate the hand position from a given set of joint angles or vice versa, a kinematic model of the exoskeleton is required. In addition, a transformation from the Kinect coordinate system to the global (exoskeleton) coordinate system must be determined for the following reason: Objects to interact with may be arbitrarily located on the table in front of the user. The Kinect is required to determine the object position in the local Kinect coordinate system. In order to bring the hand to objects by NMES, the Kinect coordinates must be mapped into exoskeleton 3D coordinates and corresponding exoskeleton angles. The latter are used to describe the hand position in the real-time arm controller.

It is assumed that the placement of the Kinect as well as the settings of the exoskeleton may change from day to day. Therefore, these parameters need to be determined with a simple and fast procedure based on experimental system identification.

Figure 4 shows the simplified kinematic exoskeleton/arm model with the global (exoskeleton) coordinate system (xg, yg, zg) and the Kinect coordinate system (xk, yk, zk). Both are Cartesian coordinate systems. Depicted is the right arm reaching forward. The model assumes that the exoskeleton is completely rigid and that the arm is perfectly aligned to the exoskeleton.


Figure 4. Simplified kinematic model of the exoskeleton with coordinate systems and a transformation between these systems. Depicted is the right arm reaching forward. The parameters of the coordinate transformation ϕ, θ, ψ, and tk as well as the kinematic model parameters lu, lf, and φf need to be identified.

The forward kinematics is given by

\mathbf{p}_h^g(\vartheta_u, \varphi_u, \vartheta_f) = \left(l_u \, R(\vartheta_u, \varphi_u) + l_f \, R(\vartheta_u, \varphi_u) \, R(\vartheta_f, \varphi_f)\right) \mathbf{e}_z.    (1)

where p^g_h is the hand position in global coordinates, e_z = [0, 0, 1]^T is a unit vector, and l_f and l_u are the lengths of the forearm and the upper arm, respectively. The rotation matrix R is defined as follows:

R(\vartheta, \varphi) := \begin{bmatrix} \cos\varphi\cos\vartheta & -\sin\varphi & \cos\varphi\sin\vartheta \\ \sin\varphi\cos\vartheta & \cos\varphi & \sin\varphi\sin\vartheta \\ -\sin\vartheta & 0 & \cos\vartheta \end{bmatrix}.    (2)

In the used setup, the humeral rotation angle φf of the shoulder is constant, as it represents a fixed DoF, and its value is determined by the configuration of the exoskeleton.

Equation (1) can be used to determine the hand position for a given set of exoskeleton angles. The inverse kinematics is obtained by numerically solving Equation (1) for the angles ϑu, φu, and ϑf given a hand position p^g_h within the reachable workspace and the fixed angle φf. The solution is unique, as the humeral shoulder rotation angle φf is fixed and the operational range of ϑf is limited by the mechanical constraints to [0, π].
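A minimal numerical sketch of the forward kinematics (1)-(2) and of solving the inverse kinematics with a generic bounded solver is given below; the link lengths, the fixed angle φf, and the use of scipy are illustrative assumptions, not identified values or the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def rot(theta, phi):
    """Rotation matrix R(theta, phi) of Equation (2)."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    return np.array([[cp * ct, -sp, cp * st],
                     [sp * ct,  cp, sp * st],
                     [-st,     0.0, ct]])

def forward_kinematics(q, l_u=0.30, l_f=0.28, phi_f=0.2):
    """Hand position in global coordinates, Equation (1).
    q = (theta_u, phi_u, theta_f); lengths in meters (example values)."""
    theta_u, phi_u, theta_f = q
    e_z = np.array([0.0, 0.0, 1.0])
    R_u = rot(theta_u, phi_u)
    return (l_u * R_u + l_f * R_u @ rot(theta_f, phi_f)) @ e_z

def inverse_kinematics(p_target, q0=(0.5, 0.0, 0.5), **kin):
    """Numerically solve Equation (1) for the joint angles;
    theta_f is bounded to [0, pi] as in the exoskeleton."""
    res = least_squares(lambda q: forward_kinematics(q, **kin) - p_target, q0,
                        bounds=([-np.pi, -np.pi, 0.0], [np.pi, np.pi, np.pi]))
    return res.x

p = forward_kinematics((0.8, 0.3, 1.0))
print(p, inverse_kinematics(p))   # the solver should return angles close to (0.8, 0.3, 1.0)
```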

The transformation from Kinect coordinates to global coordinates is visualized in Figure 4 and can be written as

\mathbf{p}^g = R_k(\phi, \theta, \psi) \, \mathbf{p}^k + \mathbf{t}_k    (3)

where p^g = [x_g, y_g, z_g]^T, p^k = [x_k, y_k, z_k]^T, t_k ∈ ℝ^{3×1} is a translation vector, and R_k ∈ ℝ^{3×3} is a rotation matrix parameterized by the Euler angles ϕ, θ, and ψ.

2.4.1. Parameter identification

The parameters ϕ, θ, ψ, and tk of the coordinate transformation as well as the kinematic model parameters lu, lf, and φf are unknown and have to be calibrated for each user each time the system is set up. Therefore, a system identification procedure is applied to determine the nine parameters. During the calibration phase, the arm and the attached unlocked exoskeleton are manually placed by a third person (e.g., the caregiver) at N different positions within the workspace that can be reached with the arm attached to the exoskeleton. Since nine parameters need to be identified, N ≥ 9 positions must be visited. The reachable workspace is primarily defined by the forward kinematics of the exoskeleton; however, it may be further limited by insufficient NMES-induced muscle force.

For each hand position i, the corresponding joint angles (ϑu,i, φu,i, ϑf,i) are measured together with the hand position vector

\mathbf{p}_{h,i}^k = \left[x_{h,i}^k \;\; y_{h,i}^k \;\; z_{h,i}^k\right]^T,    (4)

which is recorded by the environmental sensor in the Kinect coordinate frame.

The unknown parameter vector Θ = [l_u, l_f, φ_f, ϕ, θ, ψ, t_k^T]^T is estimated by minimizing a quadratic cost function

\hat{\Theta} = \arg\min_{\Theta} \left(\frac{1}{2} \sum_{i=1}^{N} \mathbf{e}_i^T \mathbf{e}_i\right)    (5)

where

\mathbf{e}_i := \underbrace{\left(l_u \, R(\vartheta_{u,i}, \varphi_{u,i}) + l_f \, R(\vartheta_{u,i}, \varphi_{u,i}) \, R(\vartheta_{f,i}, \varphi_f)\right) \mathbf{e}_z}_{\mathbf{p}_{h,i,\mathrm{FK}}^g} \; - \; \underbrace{\left(R_k(\phi, \theta, \psi) \, \mathbf{p}_{h,i}^k + \mathbf{t}_k\right)}_{\mathbf{p}_{h,i,\mathrm{Kinect}}^g}    (6)

is the error between the hand position p^g_{h,i,FK}, obtained by the forward kinematic model (1), and the hand position p^g_{h,i,Kinect}, obtained from the transformed Kinect measurements, both expressed in global coordinates. The minimization of the cost function is achieved by the Gauss-Newton method with analytically calculated gradients.
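The nine-parameter calibration (5)-(6) amounts to a standard nonlinear least-squares problem. The sketch below uses scipy's solver and synthetic calibration data instead of the Gauss-Newton iteration with analytic gradients described above; the parameter values and pose generation are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def rot(theta, phi):
    """Rotation matrix of Equation (2) (redefined so this sketch is self-contained)."""
    ct, st, cp, sp = np.cos(theta), np.sin(theta), np.cos(phi), np.sin(phi)
    return np.array([[cp*ct, -sp, cp*st], [sp*ct, cp, sp*st], [-st, 0.0, ct]])

def residuals(params, angles, p_kinect):
    """Stacked error vectors e_i of Equation (6) for all N calibration poses."""
    l_u, l_f, phi_f, roll, pitch, yaw, tx, ty, tz = params
    R_k = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    t_k = np.array([tx, ty, tz])
    e = []
    for (th_u, ph_u, th_f), p_k in zip(angles, p_kinect):
        R_u = rot(th_u, ph_u)
        p_fk = (l_u * R_u + l_f * R_u @ rot(th_f, phi_f)) @ np.array([0.0, 0.0, 1.0])
        e.append(p_fk - (R_k @ p_k + t_k))        # forward kinematics vs transformed Kinect
    return np.concatenate(e)

# synthetic calibration data for N = 12 poses ("true" parameters are made up)
rng = np.random.default_rng(0)
true = np.array([0.30, 0.28, 0.2, 0.05, -0.1, 0.3, 0.1, -0.4, 0.6])
angles = rng.uniform([0.2, -0.8, 0.2], [1.2, 0.8, 1.5], size=(12, 3))
R_true = Rotation.from_euler("xyz", true[3:6]).as_matrix()
p_kinect = [R_true.T @ ((true[0]*rot(a, b) + true[1]*rot(a, b) @ rot(c, true[2]))
                        @ np.array([0.0, 0.0, 1.0]) - true[6:9]) for a, b, c in angles]

fit = least_squares(residuals, x0=np.array([0.25, 0.25, 0.0, 0, 0, 0, 0, 0, 0]),
                    args=(angles, p_kinect))
print(np.round(fit.x, 3))   # should approximately recover the parameters used above
```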

2.5. Control System

All NMES generated arm movements are initiated by commands received from the high level control system, the Central Controller (CC), which processes, among others, the information collected by the eye-tracker. The CC movement commands are:

1. Move hand to a desired 3D position,

2. Change the angle of shoulder flexion/extension by a certain amount, and

3. Change the angle of elbow flexion/extension by a certain amount.

Each command emits an event causing a state transition in a finite state-machine on the real-time control system, which then performs the actual movement.

Based on the elementary movement commands outlined above, complex movement sequences are possible by a combination of multiple commands issued in series. An example for the drinking use case is outlined in Figure 5.


Figure 5. The state automaton inside the MUNDUS Central Controller (CC) to realize the drinking use case, starting from an arm rest position and returning to this position again. The states (S3, S5, S7, S9, S10, S12, S14, S15) with arm movements trigger a state machine inside the real-time arm NMES control module (cf. Figure 6). The references for the rest position as well as for the mouth position may be stored in the MUNDUS CC as angular references during the system calibration phase. The object position is determined online by the Kinect system by tracking a green marker on the object handle.

In this study, the hand movements were performed voluntarily by the subject. In the complete MUNDUS system, two alternative solutions to support hand functions have been proposed: a hand neuroprosthesis and a robotic hand orthosis (Pedrocchi et al., 2013). The hand neuroprosthesis deploys a new stimulation system for array electrodes (Valtin et al., 2012) in order to produce precise finger movements. However, the description of these hand modules is outside the scope of this study.

It should be noted that the straight lines shown in the center of Figure 5 do not represent the actual trajectories of the hand. The actual generation of a movement between two points by the real-time controller will be described in the next section.

2.5.1. Sequential real-time control strategy

The real-time control system internally controls the angles of the exoskeleton. Therefore, whenever a command is issued by the CC, new angular references are determined by the real-time control system. If required, this calculation also involves stored angular references from the previous movement and the inverse exoskeleton kinematics. The resulting reference angles of the jth command are r^j_ϑu, r^j_φu, and r^j_ϑf for the shoulder flexion/extension, the horizontal shoulder rotation, and the elbow flexion/extension, respectively.

Sequential feedback control is used to adjust the stimulation intensities (pulse charges) in order to drive the hand to desired positions in the reachable workspace. Each DoF is controlled separately, one after the other, while all other DoFs are locked by the exoskeleton brakes. This results in a fully decoupled system with regard to crosstalk between the DoFs. For this reason, a simple model with few parameters can be used for each controller design, which dramatically reduces the effort for parameter identification. Each movement to a given 3D position is divided into three consecutive steps:

1. control of the shoulder flexion/extension,

2. control of the shoulder horizontal rotation and

3. control of the elbow flexion/extension.

The real-time arm NMES controller is a hybrid control system combining a state automaton and continuous-time feedback controllers to reach the desired angle sequentially for each DoF (cf. Figure 6).


Figure 6. Real-time arm NMES control system shown in form of a hybrid system combining a state automaton and continuous controllers: state transitions are indicated by black bold arrows, while continuous signals are represented by colored thin arrows. Not shown are short periods (states) between the activations of the individual controllers in which all brakes are locked and the respective initial stimulation intensities are adjusted for the next controller activation.
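The sequential strategy of Figure 6 can be outlined as a small loop that activates one joint controller at a time while the other brakes stay locked; the interfaces, the roughly 1° stopping threshold, and the time-out handling below are placeholders standing in for the real-time implementation.

```python
from enum import Enum

class Dof(Enum):
    SHOULDER_FLEX_EXT = 0
    SHOULDER_HOR_ROT = 1
    ELBOW_FLEX_EXT = 2

def run_movement(refs, controllers, brakes, read_angle, dt=0.04, timeout=15.0):
    """Sequentially drive the three DoFs toward their reference angles.

    refs:        dict Dof -> reference angle [rad]
    controllers: dict Dof -> callable(error) that updates the stimulation
    brakes:      dict Dof -> object providing lock() / release()
    read_angle:  callable(Dof) -> measured joint angle [rad]
    All of these are placeholders standing in for the real-time system.
    """
    for dof in (Dof.SHOULDER_FLEX_EXT, Dof.SHOULDER_HOR_ROT, Dof.ELBOW_FLEX_EXT):
        brakes[dof].release()                 # only the active DoF is unlocked
        t = 0.0
        while t < timeout:
            error = refs[dof] - read_angle(dof)
            if abs(error) < 0.017:            # roughly 1 degree: target reached
                break
            controllers[dof](error)           # update stimulation intensity for this DoF
            t += dt
        brakes[dof].lock()                    # lock the joint before moving on
```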

2.5.2. Shoulder flexion/extension control

For the shoulder flexion/extension, a discrete-time controller based on an identified pulse transfer-function model is employed. The control design uses the well-known pole-placement method in polynomial form (Astrom and Wittenmark, 1996). For the jth activation of the controller, the relation between the stimulation intensity ν^j_d,m of the medial deltoid and the shoulder elevation angle ϑ^j_u can be approximately described by a second-order autoregressive with exogenous input (ARX) model (Ljung, 1999) of the form

\vartheta_u^j(k) = \frac{B(q)}{A(q)} \, \nu_{d,m}^j(k) + \frac{q^2}{A(q)} \, e^j(k), \qquad \underline{\nu}_{d,m} \le \nu_{d,m}^j(k) \le \bar{\nu}_{d,m}, \quad k \ge 0,    (7)

where k is the sample index, ej(k) represents white noise, and

B(q) = b_0, \qquad A(q) = \left(q^2 + a_1 q + a_2\right) q^4

are polynomials in the forward-shift operator q (q s(k) = s(k + 1)). This model possesses an input-output time delay of six sampling instants, which is typically observed in the recorded I/O data. The sampling frequency is 25 Hz and equals the stimulation frequency. During the system calibration, the coefficients of the polynomials are estimated from a recorded input step response (changing νd,m from ν̲d,m + 0.2(ν̄d,m − ν̲d,m) to ν̲d,m + 0.8(ν̄d,m − ν̲d,m)) using the instrumental variable method (Ljung, 1999).
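A sketch of estimating the coefficients b0, a1, and a2 from step-response data is given below; for brevity it uses plain least squares on synthetic data rather than the instrumental-variable method, and the numerical values are invented.

```python
import numpy as np

def simulate_arx(u, a1, a2, b0):
    """y(k) = -a1*y(k-1) - a2*y(k-2) + b0*u(k-6), i.e. Equation (7) without noise."""
    y = np.zeros(len(u))
    for k in range(len(u)):
        y[k] = -a1 * y[k-1] - a2 * y[k-2] + b0 * (u[k-6] if k >= 6 else 0.0)
    return y

def fit_arx(u, y):
    """Least-squares estimate of (a1, a2, b0) from input/output data."""
    rows, targets = [], []
    for k in range(6, len(u)):
        rows.append([-y[k-1], -y[k-2], u[k-6]])
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta                                   # [a1_hat, a2_hat, b0_hat]

# synthetic step-response experiment (the "true" coefficients are made up)
u = np.concatenate([np.full(25, 0.2), np.full(100, 0.8)])   # step in stimulation intensity
y = simulate_arx(u, a1=-1.6, a2=0.64, b0=0.05)
y_noisy = y + np.random.default_rng(1).normal(0, 1e-3, len(y))
print(np.round(fit_arx(u, y_noisy), 3))            # close to (-1.6, 0.64, 0.05)
```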

Based on the obtained model, a polynomial controller of the form

\nu_{d,m}^j(k) = \frac{S(q)}{\bar{R}(q)\,(1 - q)} \left(\frac{T(q)}{S(q)} \, r_{\vartheta_u}^j - \vartheta_u^j(k)\right)    (8)

is designed with the controller polynomials R(q), S(q), and T(q). Figure 7 shows the corresponding closed-loop system. The controller has integral action [factor (1 − q) in (8)]. This enables the rejection of constant and slowly varying disturbances and compensates the effects of muscular fatigue. The coefficients of the controller polynomials R(q) and S(q) are chosen to obtain a desired characteristic polynomial

A_{cl}(q) = (1 - q) \, \bar{R}(q) \, A(q) + S(q) \, B(q)    (9)

Figure 7. Closed-loop system with discrete-time polynomial controller.

the roots of which are equal to the closed-loop system poles and should be stable and well damped. For the given system and a controller with integrator, the minimal-degree controller is given by deg(S) = 6 and deg(R̄) = 5, with deg(Acl) = 12. A common approach is to factorize Acl(q) as follows:

A_{cl}(q) = A_{cl,1}(q) \, A_{cl,2}(q) \, q^8    (10)

where Acl,1(q) and Acl,2(q) are second order polynomials specified via rise-time tr,i and damping factor Di (i = 1,2) of corresponding continuous-time second order systems. Eight of the twelve closed-loop poles are located at the origin (fastest possible mode in discrete-time). The pre-filter polynomial is set to

T(q) = A_{cl,2}(q) \, q^4 \, A_{cl,1}(1)/B(1).    (11)

This yields a unity DC gain from the reference input rjϑu to the system output ϑju. Furthermore, it cancels six closed-loop poles defined by Acl,2(q)q4. The resulting transfer function of the closed-loop system is then:

\frac{\vartheta_u^j(k)}{r_{\vartheta_u}^j(k)} = \frac{T(q) \, B(q)}{A_{cl}(q)} = \frac{A_{cl,1}(1) \, B(q)}{q^4 \, A_{cl,1}(q) \, B(1)}.    (12)

As a result, only the poles defined by the roots of q^4 Acl,1(q) influence the system dynamics with respect to changes in the reference signal. The disturbance rejection and noise properties of the closed-loop system, however, depend on all closed-loop poles defined by Equation (10). At first, the rise time and damping factor for Acl,1 are selected to obtain a desired reference tracking behavior. Then the rise time and damping factor of Acl,2 are iteratively tuned to yield satisfactory noise sensitivity and disturbance rejection (verified by frequency response plots of the sensitivity and the complementary sensitivity function). For all subjects of this study, we chose tr,1 = 0.6 s, tr,2 = 0.5 s, and a damping factor Di = 0.999 for both polynomials.
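The design step itself, i.e., solving the Diophantine equation (9) for R̄(q) and S(q) and forming T(q) according to (11), reduces to a linear system in the polynomial coefficients. The sketch below uses example model coefficients and a common rise-time approximation for the prototype second-order polynomials, so the numbers are illustrative rather than the identified values or the authors' exact design rules.

```python
import numpy as np

def conv_matrix(p, n_unknowns, out_len):
    """Linear map from the coefficients of an unknown polynomial X (length
    n_unknowns, decreasing powers) to the coefficients of p(q)*X(q),
    zero-padded to out_len."""
    cols = []
    for j in range(n_unknowns):
        e = np.zeros(n_unknowns); e[j] = 1.0
        c = np.polymul(p, e)
        cols.append(np.concatenate([np.zeros(out_len - len(c)), c]))
    return np.column_stack(cols)

def second_order_poly(t_r, D, Ts):
    """Discrete-time polynomial (q^2 + p1 q + p2) from rise time t_r and
    damping D of a continuous-time prototype (rough approximation wn = 1.8/t_r)."""
    wn = 1.8 / t_r
    p_cont = np.roots([1.0, 2.0 * D * wn, wn**2])    # continuous-time poles
    return np.real(np.poly(np.exp(p_cont * Ts)))      # map poles to discrete time

# example model coefficients (identified values would be used in practice)
Ts = 1.0 / 25.0
a1, a2, b0 = -1.6, 0.64, 0.05
A = np.polymul([1.0, a1, a2], [1.0, 0, 0, 0, 0])      # A(q) = (q^2 + a1 q + a2) q^4
B = np.array([b0])
A_cl1 = second_order_poly(0.6, 0.999, Ts)
A_cl2 = second_order_poly(0.5, 0.999, Ts)
A_cl = np.polymul(np.polymul(A_cl1, A_cl2), [1.0] + [0.0] * 8)   # A_cl,1 A_cl,2 q^8

# solve (1 - q) Rbar(q) A(q) + S(q) B(q) = A_cl(q) for Rbar (deg 5) and S (deg 6)
A_int = np.polymul([-1.0, 1.0], A)                    # (1 - q) A(q), decreasing powers
M = np.hstack([conv_matrix(A_int, 6, 13), conv_matrix(B, 7, 13)])
x = np.linalg.solve(M, A_cl)
Rbar, S = x[:6], x[6:]
T = np.polymul(A_cl2, [1.0, 0, 0, 0, 0]) * np.polyval(A_cl1, 1.0) / np.polyval(B, 1.0)
print(np.round(Rbar, 4), np.round(S, 4), np.round(T, 4))
```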

The final controller implementation, which is shown in Figure 8, takes the following additional aspects into account:

1. Controller initialization to apply a given constant initial stimulation intensity ν^j_d,m(0) = ν^j_d,m,init.

2. Generation of a smooth reference trajectory rjϑu,f(k) that guides the arm from the initially measured angle ϑju(0) to the given target angle rjϑu of the activation j.

3. Avoidance of integrator windup for control signals violating the constraint ν̲d,m ≤ ν^j_d,m(k) ≤ ν̄d,m by using the standard anti-windup scheme proposed in Astrom and Wittenmark (1996) with the anti-windup observer polynomial Aaw(q) = Acl,2(q)q^4.


Figure 8. Implementation of the shoulder extension/flexion controller including an anti-windup observer with R(q) = (1 − q)R̄(q), a trajectory generator, and an adjustable initial stimulation intensity ν^j_d,m,init. The parameters of the saturation function are ν̲^j_d,m = ν̲d,m − ν^j_d,m,init and ν̄^j_d,m = ν̄d,m − ν^j_d,m,init for ν̲d,m ≤ ν^j_d,m,init ≤ ν̄d,m.

The initial stimulation intensity νjd,m,init is adjusted in order to avoid undesired movements when the controller is activated. Thus, before the controller activation and the brake release, the stimulation intensity is increased up to the value which was used before locking the DoF. The ramp-up period lasts about 1.5 s. Furthermore, to avoid unwanted initial transients caused by the controller transfer functions, the initial joint angle ϑju(k = 0) at controller activation is acquired and then subtracted from the joint angle measurement ϑju(k) and the output of the trajectory generator.

2.5.3. Trajectory generation

To obtain smooth shoulder flexion/extension movements, the reference trajectory rjϑu,f(k) for each activation j is chosen to be a sinusoidal reference path starting at ϑju(0) and converging to the desired target angle rjϑu:

r_{\vartheta_u,f}^j(k) = \begin{cases} \vartheta_u^j(0) & \text{for } 0 \le k < N_1 \\ \frac{1}{2}\left(1 - \cos\left(\pi \frac{k - N_1}{N}\right)\right) \cdot \left(r_{\vartheta_u}^j - \vartheta_u^j(0)\right) + \vartheta_u^j(0) & \text{for } N_1 \le k \le N_2 = N_1 + N \\ r_{\vartheta_u}^j & \text{for } k > N_2 = N_1 + N. \end{cases}

The parameter N1 = 69 describes the number of samples (corresponding to 2.76 s) before the sinusoidal shape starts, and N denotes the number of samples for the transient part of the trajectory and is set to 150 (corresponding to 3 s). After the sample N2 = N1 + N, the reference trajectory is equal to r^j_ϑu. The controller will then be deactivated and the brake will be locked as soon as one of the following conditions is fulfilled (a code sketch of the trajectory generation is given after this list):

• The absolute error |rjϑu − ϑju(k)| is less than 1°.

• The control signal νjd,m(k) was continuously saturated for more than 2 s.

• The controller was active for more than 15 s (time-out event).

Once the target is reached, the current value of stimulation intensity is stored and the controller of the shoulder flexion/extension is deactivated.
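A sketch of this reference trajectory generation (using the cosine-shaped transition as written above, with N1 and N taken from the text) is:

```python
import numpy as np

def reference_trajectory(theta_0, r_target, N1=69, N=150):
    """Piecewise reference: hold theta_0 for N1 samples, follow a smooth
    cosine-shaped transition over N samples, then hold the target r_target."""
    k = np.arange(N1 + N + 25)                        # a few extra samples after N2
    traj = np.full(len(k), float(r_target))
    traj[:N1] = theta_0
    ramp = 0.5 * (1.0 - np.cos(np.pi * (k[N1:N1 + N + 1] - N1) / N))
    traj[N1:N1 + N + 1] = theta_0 + ramp * (r_target - theta_0)
    return traj

traj = reference_trajectory(theta_0=0.2, r_target=0.9)
print(traj[0], traj[69], traj[69 + 75], traj[-1])     # start, ramp start, midpoint, target
```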

2.5.4. Shoulder horizontal rotation control

The control of the shoulder horizontal rotation involves the stimulation of the anterior (for inward rotation) and the posterior (for outward rotation) deltoid. Thus, the following switching control law is used

\nu_{d,a}^j = \begin{cases} u_r^j & \text{if } u_r^j > 0 \\ 0 & \text{if } u_r^j \le 0 \end{cases}    (13)
\nu_{d,p}^j = \begin{cases} -u_r^j & \text{if } u_r^j < 0 \\ 0 & \text{if } u_r^j \ge 0, \end{cases}    (14)

which introduces a mapping of a single virtual actuation variable u^j_r ∈ [−ν̄d,p, ν̄d,a] to the two stimulation intensities ν^j_d,a and ν^j_d,p for the jth controller activation.

The virtual actuation variable ujr is the output of an integral controller with constant integration slopes and is given by

u_r^j(k+1) = \mathrm{sat}_{-\bar{\nu}_{d,p},\,\bar{\nu}_{d,a}}\!\left(u_r^j(k) + c_r \, \mathrm{sgn}\!\left(r_{\varphi_u}^j - \varphi_u^j(k)\right)\right), \qquad u_r^j(0) = 0,

where the positive gain cr is set to 0.3 μAs in this study. To avoid integrator windup, a saturation function

\mathrm{sat}_{b_1, b_2}(x) := \begin{cases} b_1 & \text{if } x \le b_1 \\ x & \text{if } b_1 < x < b_2 \\ b_2 & \text{if } b_2 \le x \end{cases}    (15)

is used in the integral control law. This prevents the integrator from exceeding the constraints for the actuation variable.
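Equations (13)-(15) amount to a saturated integral controller whose virtual output is split between the two deltoid channels. The sketch below shows one sampling step together with a crude toy simulation; the gain uses the 0.3 μAs value from the text, while the intensity bounds and the stand-in arm dynamics are invented for illustration.

```python
import numpy as np

def sat(x, b1, b2):
    """Saturation function of Equation (15)."""
    return min(max(x, b1), b2)

def horizontal_rotation_step(u_r, r_phi, phi_meas, c_r=0.3e-6,
                             nu_bar_p=20e-6, nu_bar_a=20e-6):
    """One sampling step of the integral controller; bounds are example values.
    Returns the updated virtual control u_r and the two stimulation intensities."""
    u_r = sat(u_r + c_r * np.sign(r_phi - phi_meas), -nu_bar_p, nu_bar_a)
    nu_a = u_r if u_r > 0 else 0.0                    # anterior deltoid: inward rotation
    nu_p = -u_r if u_r < 0 else 0.0                   # posterior deltoid: outward rotation
    return u_r, nu_a, nu_p

# toy closed loop: stop once the angle is within roughly the 1 degree band
u_r, phi, steps = 0.0, 0.0, 0
while abs(0.4 - phi) > 0.017 and steps < 500:
    u_r, nu_a, nu_p = horizontal_rotation_step(u_r, r_phi=0.4, phi_meas=phi)
    phi += 2000.0 * (nu_a - nu_p)                     # crude stand-in for the arm dynamics
    steps += 1
print(steps, round(phi, 3))                           # number of steps and reached angle
```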

Conditions for the deactivation of the controller and the subsequent locking of the brake are analogous to those given in Section 2.5.2.

2.5.5. Elbow extension/flexion control

The control of elbow extension/flexion is similar to the horizontal shoulder rotation control, but only one muscle, the biceps, is stimulated in order to induce elbow flexion. Downward movements of the forearm (extension) are caused by gravity. The stimulation intensity is linearly increased/decreased with the absolute slope ce = 6.7 nAs per sampling instant until the desired angle is reached. The following integral controller, which also includes an anti-windup strategy, is used:

\nu_b^j(k+1) = \mathrm{sat}_{0,\,\bar{\nu}_b}\!\left(\nu_b^j(k) + c_e \, \mathrm{sgn}\!\left(r_{\vartheta_f}^j - \vartheta_f^j(k)\right)\right), \qquad \nu_b^j(0) = \nu_{b,\mathrm{init}}^j.    (16)

Here, j again denotes the jth activation of the controller. The initial stimulation intensity ν^j_b,init is adjusted in order to prevent the forearm from rapidly falling down when the controller is activated and the brake is released. Thus, before the controller activation, the stimulation intensity is ramped up to 50% of the stimulation intensity reached at the end of the previous activation phase of the elbow controller. The ramp-up phase lasts 1 s.

Conditions for the deactivation of the controller and the subsequent locking of the brake are analogous to those given in Section 2.5.2.

2.6. Validation of the Control System

The control system was validated in five healthy subjects (three female and two male), aged 29–40 years (mean ± SD: 34.5 ± 5.3). The average weight was 61 ± 17 kg. The drinking task was selected to evaluate the performance of the system. Each subject was asked to remain completely relaxed during the arm movements, which were entirely induced by the system. At the hand-related steps of the procedure, he/she was asked to voluntarily open and close the hand in order to grasp and release the cup. Each subject repeated the trial five times. Before the beginning of the trials, the exoskeleton as well as the amount of gravity compensation were adjusted to the anthropometric measures of each subject. Then, the system was calibrated by performing the following steps:

• Set the stimulation parameters (Section 2.3),

• Determine the parameters of the kinematic model and coordinate transformation (Section 2.4),

• Tune the discrete-time controller of the shoulder flexion/extension by means of an experimental session aimed at model identification (Section 2.5), and

• Teach-in the rest position and the in-front-of-mouth position.

The experimental protocol was approved by the ethical committee of the Valduce Hospital (Italy) where the validation trials have been performed. All subjects signed a written informed consent.

To evaluate the performance of the system, the positioning error between the target position and the actually reached position at the completion of each movement command was computed for the hand positions 1 to 8 shown in Figure 5. Two sets of positioning errors were calculated, since two different methods were used to derive the actual position in the global coordinate system: (1) the measured angles were applied to the forward kinematic model; (2) the actual position measured by the Kinect was transformed into the global coordinate system. Furthermore, the time needed to execute all movement commands during the drinking task was computed.
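The error measures reported in Tables 2 and 3 (per-axis mean/SD and the Euclidean norm of the mean error vector) can be computed as sketched below; the numbers are synthetic, not study data.

```python
import numpy as np

def positioning_errors(p_target, p_reached):
    """Mean/SD of the per-axis error and the Euclidean norm of the mean error
    vector (inputs are (n_trials, 3) arrays of positions in meters)."""
    err = np.asarray(p_reached) - np.asarray(p_target)       # per-trial error vectors
    mean_err, sd_err = err.mean(axis=0), err.std(axis=0, ddof=1)
    mean_distance = np.linalg.norm(mean_err)                  # Euclidean norm of the mean error
    return mean_err, sd_err, mean_distance

# synthetic example: five trials reaching one target position
target = np.tile([0.45, -0.10, 0.30], (5, 1))
reached = target + np.random.default_rng(2).normal([0.03, 0.0, -0.01], 0.01, (5, 3))
print(positioning_errors(target, reached))
```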

3. Results

Figure 9 exemplarily shows the recorded angles together with their active references (bands), the applied stimulation intensities, and the states of the brakes. Vertical dashed lines separate the time periods of the controlled arm movements that have been introduced and numbered in Figure 5. The stimulation intensities νd,a, νd,m, νd,p, and νb are normalized to their bounds [0, ν̄d,a], [ν̲d,m, ν̄d,m], [0, ν̄d,p], and [0, ν̄b], respectively. The control system performs well in moving the arm such that the joint angles stay close to the reference angles. However, in this example, an unwanted slipping of the horizontal shoulder brake can be observed after 43, 80, 92, and 106 s, which causes the shoulder horizontal rotation angle φu to drift away from the previously reached target angle. Figure 10 shows the desired arm posture at the end of every controlled arm movement in comparison to the actual arm posture achieved by NMES. The error caused by slipping is clearly visible for the instances of time 2*, 4*, 6*, and 7*, which represent the endings of the corresponding movements defined in Figure 5.


Figure 9. Exemplary results of the application of the developed control system to one healthy subject. The transient behavior for one trial of the described drinking task is shown. The numbers on the vertical dashed lines in the third subplot indicate the beginning (without star) and the end (with star) of the eight arm movements defined in Figure 5. In the first subplot, the active reference angles (bold colored lines with black surrounding) are shown along with the measured angles. In the figure, the colors blue, green, and red correspond to the elbow joint, the shoulder flexion/extension, and the shoulder horizontal rotation, respectively. In the middle subplot, the applied stimulation intensities are presented. The state of the brakes is plotted in the bottom subplot. An individual controller for one DoF is only active during time periods in which a reference trajectory is plotted for the corresponding angle. Theoretically, the angles should not change in periods in which no corresponding reference trajectories are plotted, due to the active brakes.


Figure 10. Static arm postures for one trial of the described drinking task. Shown are the desired arm postures and the actually obtained ones for the endings of the eight arm movements defined in Figure 5. The upper body is indicated in green while the right arm is pointing forwards. The table in front of the subject is illustrated in blue.

The five trials of the drinking task were successfully completed by all subjects. For each subject, Table 2 reports the mean and standard deviation values of the position errors in the xg/yg/zg-directions obtained during the five trials of the drinking task. The controller performance obtained in the two most important reaching subactions, i.e., reaching the object and reaching the mouth, and the overall performance obtained by averaging the results over all eight target positions are shown in Table 2. The Euclidean norm (i.e., the mean distance error) of the mean positioning error vectors has been calculated from the data in Table 2 and is reported in Table 3. The mean distance error for all subjects and positions was less than two centimeters when using the exoskeleton angles to determine the hand position. Based on the Kinect measurements, the observed mean distance error is smaller than five centimeters. For the majority of subjects (B–E), a relatively large mean (systematic) error in the xg-direction of up to 12 cm is observed for the object position (cf. Figure 5), resulting in a mean distance error of about 8 cm (see Table 3). Subject D showed a large standard deviation of the object positioning error in the xg-direction (see Table 2). A larger discrepancy between the errors based on the exoskeleton sensors and those based on the Kinect can be observed for the mouth position in subjects C–E.


Table 2. Mean positioning errors along with their standard deviations in xg/yg/zg-direction for five drinking task sequences per subject measured via the exoskeleton sensors and via Kinect.


Table 3. Euclidean norm (distance) of the mean positioning error vector given in Table 2.

In addition to the positioning error analysis, the validity of the identified kinematic model and coordinate transformation was investigated for each individual subject. For the twelve positions chosen during the kinematic model calibration, we calculated the 3D position of the hand in two ways using the identified kinematic model parameters: first by applying the kinematic model to the measured exoskeleton joint angles, and second by transforming the Kinect measurements into the global coordinate system. Then, over all twelve positions, the RMS of the distance error between the two estimates of the hand position was calculated. The results are shown in Table 2.

The mean values, averaged over five trials, of the observed time durations for all sub-movements and for each subject are reported in Table 4. Each individual sub-movement is indicated by a number previously introduced in Figure 5. Additionally, the mean values of the total time required to complete a full drinking task (only time durations in which the controller was activated are counted) are reported per subject. The average time for the execution of all eight arm movement commands was 71.4 s. The total time for donning the system and for calibration was less than 10 min for every subject (calibration alone required about 2 min).


Table 4. Mean time durations along with their standard deviations for each sub movement defined in Figure 5 and each subject.

4. Discussion and Conclusions

The experimental evaluation shows that feedback control of the hybrid NMES-exoskeleton system is feasible. Compared to the results presented in Freeman et al. (2012), no learning phase was required to achieve the desired functional movements. Overall, the evaluation shows that it is possible to support the user in performing the drinking task. Because the drinking task was considered the most complex one, we expect that other tasks can be supported with similar effectiveness.

The observed small position errors at the mouth might be corrected by minor head movements to allow drinking from the cup by means of a straw. When positioning the hand above the object (i.e., the cup handle), larger errors were observed in the xg-direction than in the other directions. However, due to the large dimensions of the cup handle, the ability to grasp the handle was not restricted. The limited accuracy in placing the hand at objects restricts the possible size and number of objects on the table. The reasons for the observed errors are diverse. One major problem observed is the limited braking torque of 2.5 Nm for the horizontal shoulder rotation, which sometimes cannot prevent unwanted slipping. Despite careful placement of the stimulation electrodes, it cannot be avoided that stimulation of the medial deltoid generates, besides the desired shoulder extension moment, an unwanted horizontal shoulder rotation moment. If the latter exceeds the torque of the locked horizontal shoulder rotation brake, slipping occurs for this DoF. With the arm pointing forward, an error in the shoulder horizontal rotation leads to a large hand error in the xg-direction, especially for the extended arm. In future research, the use of array electrodes for the deltoid muscle might be an option to achieve a more selective stimulation and to avoid such unwanted stimulation effects and slipping. Another solution is to increase the brake torque by re-designing the exoskeleton.

Even when moving to a position given in Cartesian coordinates, the real-time control system is based on angular control. The position errors determined from the exoskeleton angles are purely related to the control system. The errors determined from the Kinect measurements additionally reflect problems related to the kinematic model and coordinate transformations used. The current controller design assumes that the exoskeleton/arm combination represents a rigid body system. This is certainly only an approximation. Moreover, for the calibration of the kinematic model and the coordinate transformation, the arm/hand is moved by an assisting person to twelve arbitrarily chosen positions in the workspace. Compared to the later use with NMES, no loading/deformation of the exoskeleton by the arm weight takes place. Any deviation from the rigid body assumption causes a position error due to the use of an incorrect forward kinematics. Such an error can only be detected by an external measurement system, like the Kinect, and not by the exoskeleton's internal angle sensors. The larger errors computed from the Kinect measurements compared to those derived from the exoskeleton sensors are therefore an indicator that the rigid body assumption is only an approximation.

A shortcoming of the developed system is that elbow extension and shoulder flexion are only induced by gravity. This requires a carefully adjusted weight compensation. Any overcompensation of the weight could drive the arm movement into a deadlock.

Major advantages of the employed control strategy are its robustness and its simple adaptation to new users/sessions. Only a simple single-input single-output dynamical model needs to be identified for the adaptation of the controller. For all subjects, the same tuning parameters, like rise times and damping factors, have been used for the automatic design of the shoulder extension/flexion controller. In addition, the same gains have been applied to the controllers of the shoulder horizontal rotation and the elbow flexion/extension in all subjects. Due to automated and guided procedures, the system can be set up in a few minutes for the individual user. All individual NMES controllers for the three DoFs include an integrator, which allows for the compensation of muscular fatigue as long as the stimulation intensities do not saturate. No deterioration of control performance was observed for the healthy subjects during the five performed trials or from day to day. The price for these advantages is that the movements do not look very physiological and that the movement sequences are not time-optimal (cf. Table 4). However, we hypothesize that this is of minor importance for the final users, and that the guaranteed functionality outweighs the timing issue for this assistive technology. The personal experience of performing all movements by means of one's own muscles is the major advantage compared to robotic approaches for the assistance of reaching function (e.g., Maheu et al., 2011). Regular use of the proposed arm neuroprosthesis and, consequently, of the patient's musculature will be health promoting. It will increase muscle strength and might also improve cardiovascular fitness.

In summary, a feedback controlled hybrid NMES-exoskeleton which does not require any residual function at the shoulder and arm level was developed. By combining NMES with the passive exoskeleton for partial arm weight support, muscular fatigue can be significantly reduced since the required amount of muscular force is smaller compared to normal movements. The use of electrically lockable joints reduces the onset of muscular fatigue even further because no muscle function is required to hold the desired position.

The presented study focused on the achievable control system performance, which was expected to be maximal for healthy individuals due to non-atrophied muscles and the absence of spasticity. During the development of the system, a first test involving one incomplete SCI subject (C4/C5) was performed and showed that the system supported the subject in reaching a cup and bringing it to the mouth. The results of this test have been previously published (Pedrocchi et al., 2013). Tests of the final feedback controller on a group of SCI subjects will be performed to assess the feasibility of the system in supporting daily life activities. To obtain successful results, an initial conditioning phase to assure that NMES is able to induce some muscle force, as well as a longer familiarization phase with the system, are envisaged.

Author Contributions

Christian Klauer and Thomas Schauer designed and implemented the real-time NMES control system including interfaces to the central controller and to the sensors and brakes of the exoskeleton. They also derived the kinematic model and set-up for the parameter estimation. Werner Reichenfelser, Jakob Karner, and Margit Gföhler designed and built the passive light-weight exoskeleton. Marta Gandolla and Alessandra Pedrocchi developed the eyetracker interface. Emilia Ambrosini, Simona Ferrante, and Christian Klauer carried out the validation study of the control system including data analysis. Marco Hack and Andreas Jedlitschka developed the Kinect interface and object/hand tracking. Sven Zwicker and Alexander Duschau-Wicke realized the central controller, the overall system integration and the inter-module communication. Alessandra Pedrocchi was the manager of the EU project MUNDUS and responsible for the entire system design. All authors contributed in writing and revising the manuscript.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The research leading to these results has received funding from the European Community's Seventh Framework Programme under grant agreement no. 248326 within the project MUNDUS. We would also like to thank all participants of the study.

Supplementary Material

The Supplementary Material for this article can be found online at: http://www.frontiersin.org/journal/10.3389/fnins.2014.00262/abstract

A video of the drinking use case realized by the MUNDUS system, showing a healthy subject. The arm movements are generated by means of the described feedback control system. In addition, an NMES hand module is applied to support the grasping of the object.

References

Acosta, A. M., Kirsch, R. F., and Van der Helm, F. C. T. (2001). "Feasibility of restoring shoulder function in individuals with C3-C4 spinal cord injury," in Proc. of the 6th Annual International FES Society Conference (Cleveland, OH).

Astrom, K. J., and Wittenmark, B. (1996). Computer-Controlled Systems: Theory and Design. 3rd Edn. Upper Saddle River, NJ: Prentice Hall.

Bryden, A. M., Memberg, W. D., and Crago, P. E. (2000). Electrically stimulated elbow extension in persons with c5/c6 tetraplegia: a functional and physiological evaluation. Arch. Phys. Med. Rehabil. 81, 80–88. doi: 10.1016/S0003-9993(00)90226-0

Freeman, C., Rogers, E., Hughes, A., Burridge, J., and Meadmore, K. (2012). Iterative learning control in health care: electrical stimulation and robotic-assisted upper-limb stroke rehabilitation. IEEE Control Syst. 32, 18–43. doi: 10.1109/MCS.2011.2173261

Hoshimiya, N., Naito, A., Yajima, M., and Handa, Y. (1989). A multichannel FES system for the restoration of motor functions in high spinal cord injury patients: a respiration-controlled system for multijoint upper extremity. IEEE Trans. Biomed. Eng. 36, 754–760. doi: 10.1109/10.32108

Karner, J., Reichenfelser, W., and Gfoehler, M. (2012). “Kinematic and kinetic analysis of human motion as design input for an upper extremity bracing system,” in Proc. of IASTED International Conference Biomedical Engineering - BioMed 2012 (Innsbruck: ACTA Press), 764–105.

Ljung, L. (1999). System Identification: Theory for the User. 2nd Edn. New Jersey, NJ: Prentice Hall.

Lynch, C. L., and Popovic, M. R. (2008). Functional electrical stimulation. IEEE Control Syst. Mag. 28, 40–50. doi: 10.1109/MCS.2007.914689

Maheu, V., Frappier, J., Archambault, P., and Routhier, F. (2011). “Evaluation of the JACO robotic arm: clinico-economic study for powered wheelchair users with upper-extremity disabilities,” in Proc. of IEEE International Conference on Rehabilitation Robotics (Zurich), 1–5.

Nathan, R. H., and Ohry, A. (1990). Upper limb functions regained in quadriplegia: a hybrid computerized neuromuscular stimulation system. Arch. Phys. Med. Rehabil. 71, 415–421.

Pedrocchi, A., Ferrante, S., Ambrosini, E., Gandolla, M., Casellato, C., Schauer, T., et al. (2013). MUNDUS project: mUltimodal neuroprosthesis for daily upper limb support. J. Neuroeng. Rehabil. 10, 66. doi: 10.1186/1743-0003-10-66

Popovic, M. R., Popovic, D. B., and Keller, T. (2002). Neuroprostheses for grasping. Neurol. Res. 24, 443–452. doi: 10.1179/016164102101200311

Reichenfelser, W., Karner, J., and Gföhler, M. (2013). “Modular instrumented arm orthosis with weight support for application with NMES,” in Converging Clinical and Engineering Research on Neurorehabilitation, Vol. 2, eds J. Pons, D. Torricelli, and M. Pajaro (Berlin, Heidelberg: Springer), 1159–1163. doi: 10.1007/978-3-642-34546-3-191

Rohm, M., Müller-Putz, G., Kreilinger, A., von Ascheberg, A., and Rupp, R. (2010). A hybrid-brain computer interface for control of a reaching and grasping neuroprosthesis. Biomed. Tech. 55, 1–4. doi: 10.1515/bmt.2010.715

Rupp, R., and Gerner, H. (2007). “Neuroprosthetics of the upper extremity—clinical application in spinal cord injury and challenges for the future,” in Operative Neuromodulation, eds D. E. Sakas, B. A. Simpson, and E. S. Krames (Vienna: Springer), 419–426.

Schill, O., Wiegand, R., Schmitz, B., Matthies, R., Eck, U., Pylatiuk, C., et al. (2011). OrthoJacket: an active FES-hybrid orthosis for the paralysed upper extremity. Biomed. Tech. 56, 35–44. doi: 10.1515/bmt.2010.056

Sheffler, L. R., and Chae, J. (2007). Neuromuscular electrical stimulation in neurorehabilitation. Muscle Nerve 35, 562–590. doi: 10.1002/mus.20758

Smith, B. T., Mulcahey, M. J., and Betz, R. R. (1996). Development of an upper extremity FES system for individuals with C4 tetraplegia. IEEE Trans. Rehabil. Eng. 4, 264–270. doi: 10.1109/86.547926

Valtin, M., Schauer, T., Behling, C., Daniel, M., and Weber, M. (2012). “Combined stimulation and measurement system for array electrodes,” in Proc. of the International Conference on Biomedical Electronics and Devices - Biodevices 2012 (Algarve), 345–349.

Keywords: neuro-muscular electrical stimulation, neuroprosthetics, exoskeleton, feedback control, assistive technology, eye tracking

Citation: Klauer C, Schauer T, Reichenfelser W, Karner J, Zwicker S, Gandolla M, Ambrosini E, Ferrante S, Hack M, Jedlitschka A, Duschau-Wicke A, Gföhler M and Pedrocchi A (2014) Feedback control of arm movements using Neuro-Muscular Electrical Stimulation (NMES) combined with a lockable, passive exoskeleton for gravity compensation. Front. Neurosci. 8:262. doi: 10.3389/fnins.2014.00262

Received: 14 March 2014; Accepted: 04 August 2014;
Published online: 02 September 2014.

Edited by:

Jose L. Pons, CSIC, Spain

Reviewed by:

Juan C. Moreno, Spanish National Research Council, Spain
Diego Torricelli, Consejo Superior de Investigaciones Cientificas, Spain

Copyright © 2014 Klauer, Schauer, Reichenfelser, Karner, Zwicker, Gandolla, Ambrosini, Ferrante, Hack, Jedlitschka, Duschau-Wicke, Gföhler and Pedrocchi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alessandra Pedrocchi, NeuroEngineering and Medical Robotics Laboratory, NearLab, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Via Giuseppe Colombo 40, 20133 Milano, Italy e-mail: alessandra.pedrocchi@polimi.it

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.