Team Members: Benyamin A. Haghi, Sahil Shah, and Spencer Kellis (Richard Andersen's lab).
In the United States, there are about 17,700 new cases per year of Spinal Cord Injury (SCI). SCI results in a partial or total loss of motor function. Brain-Machine Interfaces (BMI) have the potential to increase independence and improve quality of life in SCI patients by reading out neural signals and mapping them onto control signals for assistive devices.
BMI systems serve as an interface between the cortex and peripheral devices and hence they need to be robust over time in the face of different sources of variability. For example, electric potentials in the cortex have small amplitudes and are susceptible to noise, and electrical and mechanical properties of implanted microelectrodes change over time. Neuronal populations may also change over time. Hence, the decoders designed for a BMI system should be able to generalize across these sources of variability to accurately infer movement commands from changing neural signals.
In addition, almost all existing BMI systems run on a desktop computer consuming tens to hundreds of watts of power (a typical desktop draws 60 to 300 W). Such a system is not optimized for real-time processing outside of a clinical setting; hence the need for a robust and efficient learning system implemented on an application-specific integrated circuit (ASIC). Moreover, the algorithms used in such BMI systems have typically assumed a linear relationship between inputs and outputs (e.g., Kalman filters or Wiener filters). In recent years, progress in machine learning and neural networks has generated increased interest in adopting these techniques for BMI applications. With enough training data, these powerful machine learning algorithms could generalize over large variations in the recorded data.
Our group, in collaboration with Richard Andersen's lab, proposes to develop a BMI system that efficiently maps neuronal signals to kinematics in a resource-constrained environment. Figure 1 shows a top-level block diagram of our BMI system.
In our recent work, we use neural and behavioral data collected during the open-loop phase of a 2D center-out brain-control task. In this phase of the task, a cursor moves under computer control, with a minimum-jerk velocity profile, from the center of a computer screen to one of eight target locations arranged uniformly around a unit circle, while the subject uses motor imagery to imagine controlling the cursor. Data are collected in three-minute blocks, each consisting of 53 trials, with a pseudorandom uniform distribution of targets across trials.
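The minimum-jerk profile mentioned above has a standard closed form: along a straight path, position follows s(τ) = 10τ³ − 15τ⁴ + 6τ⁵, which yields zero velocity and acceleration at both endpoints. The sketch below generates such a cursor trajectory; the function name, sampling rate, and one-second duration are illustrative assumptions, not the task's actual parameters.

```python
import numpy as np

def minimum_jerk(start, target, duration, n_samples=100):
    """Straight-line minimum-jerk trajectory from start to target.

    Uses the standard polynomial s(tau) = 10 tau^3 - 15 tau^4 + 6 tau^5,
    which has zero velocity and acceleration at both endpoints.
    """
    t = np.linspace(0.0, duration, n_samples)
    tau = t / duration
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5                  # position scale
    ds = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / duration   # speed scale
    disp = np.asarray(target, float) - np.asarray(start, float)
    pos = np.asarray(start, float) + s[:, None] * disp
    vel = ds[:, None] * disp
    return t, pos, vel

# Example: center of the screen to one of eight targets on the unit circle.
angle = np.pi / 4
t, pos, vel = minimum_jerk([0.0, 0.0], [np.cos(angle), np.sin(angle)], duration=1.0)
```

The resulting bell-shaped speed profile peaks at the midpoint of the movement, which is the smooth cursor motion the subject observes while imagining control.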
These data are used to train several neural networks. In particular, we use a Recurrent Neural Network (RNN), a class of models that has shown promising results on sequential data. An RNN combines a feedforward path with a feedback path: its hidden state integrates information from all previous time steps, so each prediction depends on both the current input and the network's state from the previous time step.
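The recurrence can be sketched as an Elman-style forward pass: h_t = tanh(W_x x_t + W_h h_{t−1} + b), y_t = W_o h_t. The example below runs this over synthetic neural features; the layer sizes, random initialization, and variable names are assumptions for illustration (an untrained network, not the decoder reported in our work).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: 96 recorded channels in, 2D velocity out.
n_in, n_hidden, n_out = 96, 32, 2

# Elman-style RNN parameters (randomly initialized, untrained).
W_x = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b_h = np.zeros(n_hidden)
W_o = rng.normal(scale=0.1, size=(n_out, n_hidden))

def rnn_decode(X):
    """Run the RNN over a sequence of neural feature vectors X (T, n_in).

    The hidden state h is fed back at every step, so each velocity
    prediction depends on the entire input history, not just x_t.
    """
    h = np.zeros(n_hidden)
    outputs = []
    for x_t in X:
        h = np.tanh(W_x @ x_t + W_h @ h + b_h)  # feedback through h
        outputs.append(W_o @ h)                  # readout to 2D velocity
    return np.array(outputs)

X = rng.normal(size=(50, n_in))  # 50 time bins of synthetic features
vel_hat = rnn_decode(X)
```

In practice the weights would be trained by backpropagation through time on the open-loop data described above; this sketch only shows the dataflow that lets the decoder exploit temporal context.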
While these algorithms are powerful in their capacity to capture complex relationships, they currently require power-hungry computational resources. Part of making BMI systems clinically relevant is designing size- and power-efficient hardware for decoding kinematics, so that these systems can be implanted or worn on the body. One direction we are investigating is the co-design of such novel algorithms with energy-efficient hardware.
- Benyamin Haghi, Spencer Kellis, Luke Bashford, Sahil Shah, Daniel Kramer, Brian Lee, Charles Liu, Richard Andersen, and Azita Emami, "Robust Learning Algorithms for Brain Machine Interfaces," IEEE Brain Initiative Workshop on Advanced NeuroTechnologies, 2018.
- Sahil Shah, Benyamin Haghi, Spencer Kellis, Luke Bashford, Daniel Kramer, Brian Lee, Charles Liu, Richard Andersen, and Azita Emami, "Decoding Kinematics from Human Parietal Cortex using Neural Networks," International IEEE EMBS Conference on Neural Engineering (accepted).