
PERSPECTIVE article

Front. Neurosci., 20 April 2016
Sec. Neural Technology
This article is part of the Research Topic Current challenges and new avenues in neural interfacing: from nanomaterials and microfabrication state-of-the-art, to advanced control-theoretical and signal-processing principles

Implications of the Dependence of Neuronal Activity on Neural Network States for the Design of Brain-Machine Interfaces

  • Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy

Brain-machine interfaces (BMIs) can improve the quality of life of patients with sensory and motor disabilities by both decoding motor intentions expressed by neural activity, and by encoding artificially sensed information into patterns of neural activity elicited by causal interventions on the neural tissue. Yet, current BMIs can exchange relatively small amounts of information with the brain. This problem has proved difficult to overcome by simply increasing the number of recording or stimulating electrodes, because trial-to-trial variability of neural activity partly arises from intrinsic factors (collectively known as the network state) that include ongoing spontaneous activity and neuromodulation, and so is shared among neurons. Here we review recent progress in characterizing the state dependence of neural responses, and in particular how neural responses depend on endogenous slow fluctuations of network excitability. We then elaborate on how this knowledge may be used to increase the amount of information that BMIs exchange with the brain. Knowledge of network state can be used to fine-tune the stimulation pattern that should reliably elicit a target neural response used to encode information in the brain, and to discount part of the trial-to-trial variability of neural responses, so that they can be decoded more accurately.

Introduction

Brain-machine interfaces (BMIs) are devices mediating the dialogue between a brain and the external world. These devices hold the potential to restore motor or sensory functions to people who lost them due to illness or injury. Depending on their direction of communication with the brain, BMIs can be divided into various categories (Donoghue, 2002; Mussa-Ivaldi and Miller, 2003).

Efferent or motor BMIs use sensors to record neural activity—such as single-unit (SUA) or multi-unit (MUA) activity, local field potentials (LFPs), electrocorticograms (ECoG), or electroencephalograms (EEGs)—and decode this activity to infer the motor intent of the subject and command an artificial actuator (a robotic arm, a motorized wheelchair, or a computer cursor). These systems can have a considerable clinical impact in the treatment of patients with neurological conditions such as stroke, spinal cord injury, or Parkinson's disease.

Afferent or sensory BMIs sense physical quantities from the environment (e.g., sound, light, temperature) and use an encoding interface to translate these sensory signals into patterns of neural activity elicited using causal interventions on the brain (for example, electrical or optogenetic microstimulation), with the goal of provoking the desired sensation (Fitzsimmons et al., 2007). Examples include cochlear implants (Loeb, 1990; Clark, 2006) and retinal prostheses (Zrenner, 2002; Nirenberg and Pandarinath, 2012). Sensory interfaces have obvious implications for treating the loss of sensory function.

Researchers have also developed bidirectional BMIs (Figure 1) in which both a decoder of motor intention and an encoder of sensory information exchange information with the brain in a closed loop (Reger et al., 2000; Nicolelis, 2003; Lebedev and Nicolelis, 2006; Nicolelis and Lebedev, 2009; O'Doherty et al., 2009, 2011; Mussa-Ivaldi et al., 2010; Lebedev et al., 2011; Carmena, 2013; Moxon and Foffani, 2015). Such systems may have important clinical applications because (unlike motor BMIs) they can provide the brain with non-visual feedback information (such as tactile or proprioceptive information) that is important for compliant task execution. Bidirectional BMIs may also help to automatically execute tasks without focusing attention on each single motor command. Recently we proposed to achieve this goal through a class of bidirectional BMIs, implemented in both anesthetized and awake rodents (Vato et al., 2012, 2014; Boi et al., 2015), in which the decoding and encoding interfaces generated a motor program similar to the force fields produced by the spinal cord when combining motor and sensory information (Shadmehr et al., 1993; Mussa-Ivaldi et al., 1994).


Figure 1. Schematic of a bidirectional brain-machine interface. A bidirectional BMI has two pathways of communication with the brain: an afferent pathway from some sensors to the brain and an efferent pathway from the brain to a device controlled by it. The decoder—or motor interface—transforms the recorded activity into motor commands for the device. The encoder—or sensory interface—transmits the information about the external world or about the state of the device to the brain by delivering electrical stimulation patterns to it.

Despite all this progress, several aspects of current unidirectional and bidirectional BMIs remain to be improved (Bensmaia and Miller, 2014; Shenoy and Carmena, 2014). One key challenge is that the large trial-to-trial variability of neural responses (Faisal et al., 2008; Quiroga and Panzeri, 2009) strongly limits the amount of information BMIs can exchange with the brain. Improving sensory interfaces requires more reliably converting a sensory signal into the desired patterns of neural activity, so that they can be robustly perceived as the appropriate sensation. Improving motor interfaces requires better decoding of the motor intention despite the trial-to-trial variability of the neurons that express it. Variability of neural activity cannot be easily reduced simply by improving the technology to record and stimulate from ever-increasing numbers of electrodes (Baranauskas, 2014; Lebedev, 2014), because some of the main sources of variability are generated at the network level and are shared across neurons (Goris et al., 2014; Lin et al., 2015; Schölvinck et al., 2015). This can be conceptualized by thinking of neural activity as state-dependent: neural activity depends not only on external task-related variables but also on internal network variables. In this Perspective article, we discuss recent findings about the state dependence of neural activity, and we consider how taking this state dependence into account can help us build better sensory, motor, and bidirectional BMIs.

State Dependence of Neural Responses

Neural responses to a sensory stimulus depend not only on the feedforward extrinsic sensory inputs but also on intrinsic network variables that can be collectively defined as the "neural state" (Buonomano and Maass, 2009). This state dependence is generated by strong recurrent and feedback connectivity that creates endogenous ongoing activity and modulates how afferent information is processed (Harris and Thiele, 2011). Similarly, the firing of neurons in motor areas reflects not only tuning to the movement being expressed but also other factors, including ongoing network dynamics (Rule et al., 2015). In addition, neuromodulatory inputs from brainstem nuclei can modulate the dynamics of cortical networks (Moxon et al., 2007; Edeline, 2012; Eschenko et al., 2012; Lee and Dan, 2012).

That neural firing is state-dependent has been known for many years (Arieli et al., 1996). Recent years, however, have witnessed important progress in the mathematical understanding of how single-trial neural responses to a particular stimulus or external event depend on network state. Biophysical models can predict how intrinsic neural mechanisms (such as adaptation), inferred from a neuron's previous spiking history, contribute to its single-trial responses to electrical or optical stimulation (Ahmadian et al., 2011). These methods can in principle be extended to predict in real time the optimal stimulation intensity needed to elicit a target pattern of neural activity while minimizing stimulation power (Ahmadian et al., 2011). Other theoretical work concentrated on mathematically predicting the single-trial cortical activity elicited by a sensory stimulus by using a dynamical system to model the interaction between the feed-forward stimulus drive and the ongoing fluctuations of the circuit's state (Curto et al., 2009). This prediction worked well both when the interaction between ongoing state dynamics and stimulus drive was linear and when it was non-linear. Further work has shown that the prediction of single-trial cortical responses to stimuli can be greatly enhanced when knowledge of the state of neuromodulatory brainstem nuclei (in particular of the nucleus releasing norepinephrine) is used to improve the prediction of the ongoing cortical state dynamics (Safaai et al., 2015). A number of other recent studies clarified that the variability induced by state changes can, to a first approximation, be described simply. In some cases state dependence is described as an additive term of background activity added to the trial-averaged response to the stimulus (Arieli et al., 1996; Ecker et al., 2014; Schölvinck et al., 2015). In other cases, state dependence can be described as multiplicative (i.e., it rescales the gain of the stimulus-response function, see Goris et al., 2014) or as a mixture of additive and multiplicative effects (Kayser et al., 2015; Lin et al., 2015). Importantly, the trial-to-trial variations of neural responses due to state changes are shared across neurons (Lin et al., 2015), likely because they arise as "network effects" (Harris and Thiele, 2011). Because it is shared, the variability due to state changes cannot be easily eliminated simply by recording from more neurons.

Recent studies have begun to identify which variables describing intrinsic brain activity can be used as effective "neural state" variables. Several studies have suggested that cortex undergoes endogenous slow periodic variations in excitability that can be captured by the phase of low-frequency activity of mass signals such as LFPs or MUA. For example, both in anesthetized (Kayser et al., 2015) and awake animals (Lakatos et al., 2005), certain phases of low-frequency LFP oscillations correspond to higher firing rates and other phases to lower firing rates of single neurons. A recent study (Kayser et al., 2015) in anesthetized animals showed that the phase of low-frequency (delta, theta, and alpha) LFP oscillations at which a stimulus is presented rescales both the stimulus-response gain and the background firing of auditory cortical neurons. Similarly, the gain of visual cortical neurons of awake, attentive macaques is modulated by intrinsic state variables varying on time scales of a few hundred ms (Rabinowitz et al., 2015). These low-frequency oscillations also correlate with perception: the phase of low-frequency (delta, theta bands) EEG at which a near-threshold sensory stimulus is presented to human subjects affects whether the subject reports perceiving the stimulus (Busch et al., 2009; Ng et al., 2012). Moreover, the perception elicited by non-invasive transcranial stimulation of sensory cortices depends on the endogenous alpha EEG rhythm at the time of stimulation (Romei et al., 2008), suggesting that both the neural activity and the perceptual effect elicited by causal interventions in the awake brain depend on the ongoing endogenous low-frequency activity.
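
To make this concrete in computational form, the sketch below (our illustration, not taken from the cited studies) shows one standard way to extract such a phase-based state variable from a recorded LFP: band-pass filtering in a low-frequency band followed by a Hilbert transform. The sampling rate, band edges, and filter order are arbitrary assumptions.

```python
# Hypothetical sketch: estimate the instantaneous phase of low-frequency LFP,
# a candidate "network state" variable (band edges and sampling rate are assumptions).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def lfp_phase(lfp, fs=1000.0, band=(1.0, 4.0)):
    """Return the instantaneous phase (radians) of the LFP in a low-frequency band."""
    nyq = fs / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, lfp)      # zero-phase band-pass filtering
    analytic = hilbert(filtered)        # analytic signal
    return np.angle(analytic)           # phase in [-pi, pi]

# Toy usage: a 4 Hz oscillation buried in noise
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 4 * t) + 0.5 * np.random.randn(t.size)
theta = lfp_phase(lfp, fs=fs, band=(2.0, 6.0))
print(theta[:5])
```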

How Can State Dependence of Neural Responses Be Used to Improve BMIs?

Here we discuss how taking state dependence into account could increase the information bandwidth with which BMIs communicate with the brain.

To discuss this, we will suppose for simplicity that, as in auditory cortex (Kayser et al., 2015), the response r of a neuron to a stimulus s in a single trial tr depends on a state variable θ (in this example, the phase of a low-frequency LFP at which stimulus s is applied in trial tr) through an additive-multiplicative model, in which both the gain g and the background b depend on the phase of the low-frequency network oscillation at which the stimulus is applied:

r(s, tr) = g(θ) f(s) + b(θ)    (1)

where the state θ is a function of the trial and f(s) is the trial-averaged response to stimulus s [in other words, f(s) is the neuron's tuning curve]. We assume that, again as in Kayser et al. (2015), if the stimulus was presented at phases of the low-frequency LFP corresponding to LFP troughs (respectively, peaks), then it elicited a larger (respectively, smaller) response because both the gain and the background activity were larger (respectively, smaller); see Figure 2A.
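
As a toy illustration of Equation (1), the following sketch simulates single-trial responses of a hypothetical neuron whose gain and background are larger when the stimulus arrives near the LFP trough and smaller near the peak; all parameter values are arbitrary assumptions, not fits to data.

```python
# Toy simulation of the additive-multiplicative state-dependence model of Eq. (1):
# r(s, tr) = g(theta) * f(s) + b(theta), with theta = LFP phase at stimulus onset.
# All parameter values are illustrative assumptions, not fits to data.
import numpy as np

rng = np.random.default_rng(0)

def gain(theta):
    # Larger gain near the LFP trough (theta ~ pi), smaller near the peak (theta ~ 0)
    return 1.0 + 0.25 * (1.0 - np.cos(theta))        # ranges from 1.0 to 1.5

def background(theta):
    # Background firing also higher near the trough
    return 2.0 + 0.5 * (1.0 - np.cos(theta))         # spikes/s, 2.0 to 3.0

def tuning(s):
    # f(s): trial-averaged tuning curve for two stimuli s = 1, 2
    return {1: 5.0, 2: 10.0}[s]                      # spikes/s

def single_trial_response(s, theta, noise_sd=1.0):
    # Eq. (1) plus residual noise not explained by the state
    return gain(theta) * tuning(s) + background(theta) + noise_sd * rng.normal()

# Example: same stimulus, two different network states (LFP peak vs trough)
for theta in (0.0, np.pi):
    print(f"s=1, theta={theta:.2f}: r = {single_trial_response(1, theta):.2f}")
```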


Figure 2. How knowledge of state dependence may be used to improve reliability of elicited patterns and to enhance decoding of neural activity. This figure uses cartoons of neural responses to illustrate how taking into account state dependence of neural responses may improve BMIs. (A) The panel illustrates the responses of a cartoon neuron to two different stimuli presented at different phases of the LFP. Green and pink arrows represent the application times of stimulus s = 1 and s = 2, respectively. Stimuli applied at the trough of the LFP (a more excitable network state phase) elicit a stronger response than stimuli applied at the peak of the LFP (a less excitable network state phase). (B) We plot the time course of cartoon neural activity shortly after a stimulus is applied at time t = 0. Different lines represent single-trial responses to the same stimulus that were elicited in trials that differed by the value of the state variable (in this case, the LFP phase) in each trial. Each line representing the single-trial responses is color-coded by the value of the state variable in that trial. The response to the same stimulus has large trial-to-trial variability because of the difference in the state at which the stimulus is presented. (C) In a particular network state, the real response (dashed black line) and the state-dependent model prediction (pink line) of the response are shown. If the model of state dependence is accurate, it will help the experimenter to predict which response will be elicited in that trial given the stimulation parameters and the neural state, thereby narrowing down the uncertainty about which response will be elicited. The gray area shows the range of possible responses that could be obtained for a given stimulus because of state variations. (D) A scatter plot showing the variations around the mean of single-trial responses plotted against the single-trial response variations around the mean estimated from a state-dependent model of the responses in a hypothetical case. Each point represents one trial. This scatter plot indicates how well a state dependence model can predict the single-trial neural responses to a given stimulus. (E) How to discount state-induced trial-to-trial variability is exemplified for a single trial. Stimulus 1 was presented in this cartoon trial, and the response in this trial (full black line) is plotted against the trial-averaged response of stimulus 1 (green line) and of stimulus 2 (pink line). The state dependence model predicts that the response variation around the trial-average in this trial was negative. The black arrows show the model-predicted state-induced variation around the trial-average for this particular single trial. Discounting this (negative) model-predicted variation gives a "discounted" response (dashed black line) that is much closer to the averaged response to stimulus 1 (the stimulus actually presented in this trial) than the original response. (F) The distributions of the responses to stimulus 1 (green full line) and stimulus 2 (pink full line) and the distributions of the discounted responses for stimulus 1 (dashed green line) and stimulus 2 (dashed pink line) obtained after subtracting the model-predicted state-induced variability are shown. The distributions of the discounted responses are narrower and allow better discrimination between the two stimuli.

Knowledge of state dependence can improve the encoding stage of BMIs. If (as illustrated in Figure 2B) trial-to-trial variations of stimulus-evoked responses can to some extent be predicted from models of state dependence such as that in Equation (1), these model predictions can be used to design a "state-dependent" causal intervention onto the neural tissue that achieves the desired neural response more reliably (Figure 2C). Suppose that we want to achieve a target firing rate r* on a given trial and that we estimate the network state at that time to be θ. Using Equation (1), the optimal value of the stimulation strength f*(s) that we need to apply to achieve the target response is:

f*(s) = [r* - b(θ)] / g(θ)    (2)
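
A minimal sketch of how Equation (2) might be used in an encoding interface is given below: estimate the network state just before stimulation and invert the response model to obtain the stimulus drive expected to produce the target rate. The gain and background functions are the same hypothetical ones used in the previous sketch.

```python
# Hypothetical state-dependent encoding step implementing Eq. (2):
# f*(s) = (r_target - b(theta)) / g(theta). Gain and background are the same
# illustrative assumptions used in the previous sketch, not measured quantities.
import numpy as np

def gain(theta):
    return 1.0 + 0.25 * (1.0 - np.cos(theta))        # 1.0 at LFP peak, 1.5 at trough

def background(theta):
    return 2.0 + 0.5 * (1.0 - np.cos(theta))         # spikes/s

def required_drive(r_target, theta):
    """Stimulus drive needed to reach r_target in the current network state theta (Eq. 2)."""
    return (r_target - background(theta)) / gain(theta)

r_target = 12.0                                      # arbitrary target firing rate (spikes/s)
for theta in (0.0, np.pi):                           # less vs more excitable state
    print(f"theta={theta:.2f}: required drive f* = {required_drive(r_target, theta):.2f}")
# In the more excitable state (trough), a weaker drive suffices to reach the same target.
```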

The potential advantage of using knowledge of the state at which the stimulus is applied to better predict the responses that will be elicited has been tested in vivo. Brugger et al. (2011) successfully used the low-frequency (< 20 Hz) components of pre-stimulus LFPs to better predict the intensity of electrical microstimulation needed to achieve a target value of cortical firing in response to the stimulation. The algorithm was particularly successful at increasing the reliability of responses to low-intensity stimulation (Brugger et al., 2011). This suggests that using knowledge of state to fine-tune the stimulation parameters could achieve reliable injection of information into the nervous system using less damaging interventions.

Knowledge of state dependence (expressed as the ability to predict single-trial variations around the stimulus mean from a mathematical model of state dependence, Figure 2D) can also greatly improve the decoding stage of a BMI. Indeed, more information about the external variables could be obtained simply by subtracting out the estimated state-induced trial-to-trial variations of the responses. This idea is illustrated in Figure 2E, which shows the trial-averaged stimulus-evoked firing rate of a cartoon neuron to two different stimuli, as well as a single-trial response. In this trial, the stimulus elicited a firing rate in between the mean rates evoked by the two stimuli. This intermediate-strength response could have arisen either in response to the weaker stimulus when the network was in a more excitable state, or in response to the stronger stimulus when the network was less excitable. This ambiguity can be resolved by computing and then subtracting out the trial-to-trial variability predicted by the network state. In this example, the predicted variability was negative (indicating that the network was in a less excitable state). Subtracting the predicted variability from the single-trial response (black upward arrows in Figure 2E) produces a "variability-discounted" response much closer to the trial-averaged response of the stimulus presented in that trial (and thus much easier to decode) than the original response. Within the state dependence model of Equation (1), the state-induced variability of the responses could be discounted as follows:

r_discounted(s, tr) = [r(s, tr) - b(θ)] / g(θ)    (3)

The subtraction of state-induced variability reduces the variability of the discounted responses at fixed stimulus with respect to the original responses (Figure 2F). Reducing variability at fixed stimulus increases the stimulus discriminability (and thus the information content) of neural responses. Importantly, this increase of information after discounting state variability occurs because taking the state variable into account reveals more precisely the relationship between stimulus and response at fixed state, and it can happen even if the state variable does not carry any information about the stimulus per se.
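
The sketch below illustrates this decoding benefit on simulated data: single-trial responses to two stimuli are generated from the model of Equation (1), discounted according to Equation (3), and then classified with a simple nearest-mean decoder. All parameters are arbitrary assumptions; the point is only that discounting the state-predicted variability narrows the response distributions and improves discrimination, as in Figure 2F.

```python
# Toy demonstration of discounting state-induced variability (Eq. 3) before decoding:
# r_discounted = (r - b(theta)) / g(theta). Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def gain(theta):       return 1.0 + 0.25 * (1.0 - np.cos(theta))
def background(theta): return 2.0 + 0.5 * (1.0 - np.cos(theta))
f_s = {1: 5.0, 2: 7.0}                                 # f(s), spikes/s

def simulate(n_trials=2000, noise_sd=0.5):
    stim = rng.choice([1, 2], size=n_trials)
    theta = rng.uniform(-np.pi, np.pi, size=n_trials)  # state varies across trials
    f = np.where(stim == 1, f_s[1], f_s[2])
    r = gain(theta) * f + background(theta) + noise_sd * rng.normal(size=n_trials)
    return stim, theta, r

def nearest_mean_accuracy(r, stim):
    # Decode each trial by comparing the response to the two class means
    # (computed on the same data, which is fine for this illustrative comparison)
    m1, m2 = r[stim == 1].mean(), r[stim == 2].mean()
    decoded = np.where(np.abs(r - m1) < np.abs(r - m2), 1, 2)
    return (decoded == stim).mean()

stim, theta, r = simulate()
r_disc = (r - background(theta)) / gain(theta)         # Eq. (3): discounted responses

print(f"accuracy, raw responses:        {nearest_mean_accuracy(r, stim):.3f}")
print(f"accuracy, discounted responses: {nearest_mean_accuracy(r_disc, stim):.3f}")
```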

How substantial may this information gain be in real data? Safaai et al. (2015) quantified the network state as the parameters of a dynamical system (a FitzHugh-Nagumo model, see FitzHugh, 1955) that best described the low-frequency (< 15 Hz) synchronized variations of cortical excitability before the application of a somatosensory stimulus. They subtracted from the original cortical MUA responses to the stimulus in each trial the prediction of the trial-to-trial variations of cortical firing due to state variations obtained from their dynamical-system model. They found that, although the variables describing the pre-stimulus state did not carry any stimulus information, the gain of sensory information obtained from the neural responses after discounting was large: it was 40% when estimating the state from cortical ongoing activity alone, and it reached 70% when also taking into account the state of the neuromodulatory nuclei releasing norepinephrine (Safaai et al., 2015). This suggests that exploiting state dependence could potentially double the current information rates of decoding BMIs without increasing the size of invasive electrode arrays.
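
For readers unfamiliar with this class of state models, the sketch below integrates a generic FitzHugh-Nagumo oscillator with a simple Euler scheme. The parameter values are standard textbook choices, not the values fitted to cortical data by Safaai et al. (2015).

```python
# Generic FitzHugh-Nagumo model (FitzHugh, 1955), of the kind used to describe slow
# fluctuations of cortical excitability. Parameters are standard illustrative values,
# not those fitted to cortical data by Safaai et al. (2015).
import numpy as np

def fitzhugh_nagumo(T=200.0, dt=0.01, I=0.5, a=0.7, b=0.8, eps=0.08):
    n = int(T / dt)
    v = np.empty(n)                                  # fast "excitability" variable
    w = np.empty(n)                                  # slow recovery variable
    v[0], w[0] = -1.0, 1.0
    for k in range(n - 1):
        dv = v[k] - v[k]**3 / 3.0 - w[k] + I
        dw = eps * (v[k] + a - b * w[k])
        v[k + 1] = v[k] + dt * dv                    # forward Euler step
        w[k + 1] = w[k] + dt * dw
    return v, w

v, w = fitzhugh_nagumo()
print(v[:5], w[:5])
```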

These considerations on state dependence could be incorporated into existing research directions in BMIs. Several decoding schemes, including those based on Wiener-Kolmogorov or Kalman filters, non-linear recurrent systems, and other kinds of dynamical systems (Carmena et al., 2003; Hatsopoulos et al., 2004; Hochberg et al., 2006, 2012; Fitzsimmons et al., 2009; Sussillo et al., 2012; Kao et al., 2015), can incorporate in decoding—besides information about the "state" of the external device to be commanded (not to be confused with the neural state considered here) and besides other kinds of contextual modulations of neural activity, such as the influence of movement on sensory responses (Saleem et al., 2013; Zagha et al., 2013)—also the history of neural activity over long time scales. Thus, in principle these algorithms are well suited to be extended to include knowledge of the state of the neural circuit. However, these studies typically included in the neural history mainly neural response components that directly carried information about the task-relevant variables to be decoded. Recent progress on the state dependence of neural responses (Curto et al., 2009; Kayser et al., 2015; Safaai et al., 2015) suggests that neural variables that represent only internal state information (such as the ongoing cortical activity or the activity of neuromodulatory nuclei), and do not directly carry information about the variables to be decoded, can nevertheless greatly enhance the decoding performance of BMIs, because they may allow a major source of variability to be discounted and subtracted out. Given that the activity of nuclei such as the Locus Coeruleus may be partly estimated non-invasively (Aston-Jones and Cohen, 2005), including an estimate of the activity of neuromodulatory nuclei in BMIs may become useful and feasible also in clinical applications.
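
One deliberately simplified way in which a decoder could be extended with such state information is sketched below: a linear decoder of a task variable is augmented with an additional neural-state regressor. The simulated data, variable names, and linear form are our assumptions for illustration, not a published BMI algorithm.

```python
# Hypothetical sketch: augmenting a linear decoder with a neural-state covariate.
# The decoded variable, regressors, and linear form are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_neurons = 500, 5

# Simulated data: firing rates depend on the task variable AND on a shared network state
target = rng.normal(size=n_trials)                   # variable to decode
state = rng.normal(size=n_trials)                    # e.g., low-frequency phase/excitability
weights = rng.normal(size=n_neurons)
rates = (np.outer(target, weights)                   # task-related component
         + 1.5 * state[:, None]                      # shared state-induced variability
         + 0.5 * rng.normal(size=(n_trials, n_neurons)))

def ols_decode(X, y):
    # Ordinary least squares with an intercept; returns predictions on the training data
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1 @ beta

pred_plain = ols_decode(rates, target)                             # neural activity only
pred_state = ols_decode(np.column_stack([rates, state]), target)   # plus state regressor

for name, pred in [("without state", pred_plain), ("with state", pred_state)]:
    print(f"decoding MSE {name}: {np.mean((pred - target) ** 2):.3f}")
```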

It is likely that the advantage of considering state dependence may be particularly important for bidirectional BMIs, in which the encoding and decoding operations work in a closed loop. The ability to inject more accurate sensory feedback can lead the subject to express a more accurate motor intention, which in turn can be better decoded by discounting the state-induced variability. Similarly, brain-to-brain interfaces (Rao et al., 2014) might benefit from including state dependence as well.

Practical Challenges for Exploiting State Dependence for BMIs

The most promising candidate "neural state" variables emerging from recent work typically relate to slow fluctuations of neural activity at frequencies below 20 Hz, which need from a few hundred up to several hundred ms to be measured (Curto et al., 2009; Kayser et al., 2015; Safaai et al., 2015).

Although in some cases these low frequencies may directly carry information about sensory or motor variables of interest—for example, information about low-frequency components of dynamic natural stimuli (Rickert et al., 2005; Luo and Poeppel, 2007; Kayser et al., 2009; Belitski et al., 2010; Hall et al., 2014)—sensory or motor information in neural responses is often carried by neural firing on short time scales of a few tens of ms (Panzeri et al., 2010). The potential difference between the time scales needed to detect task- or stimulus-informative neural variables and those needed to detect neural state variables poses important technological challenges for implementing state dependence in a closed loop. In particular, electrical microstimulation produces artifacts that may mask the recorded neural signals for a few ms. This issue may be addressed (O'Doherty et al., 2011) by multiplexing the recording and the electrical stimulation in time. The need to detect state variables operating at longer time scales calls for optimizing the time-multiplexing strategy for the readout of signals at multiple time scales. The ability of state-dependent stimulation to achieve reliable patterns even at lower current intensities (Ahmadian et al., 2011; Brugger et al., 2011) may become key to optimally integrating, in real time, electrical stimulation, recording of the neural activity, and a state detector in a closed-loop bidirectional system.
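
As a purely illustrative way of reasoning about these timing constraints, the toy sketch below (entirely our own construction) interleaves brief stimulation-artifact windows with recording and updates a slow state estimate only from artifact-free samples; all durations and signals are arbitrary assumptions.

```python
# Toy illustration (our construction) of time-multiplexed stimulation and recording:
# stimulation artifacts blank the recording for a few ms, so the slow state estimate
# is updated only from artifact-free samples. All durations are arbitrary assumptions.
import numpy as np

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 3 * t)               # stand-in for a slow network-state signal

stim_times = np.arange(0.1, 2.0, 0.25)        # one stimulation pulse every 250 ms
artifact_ms = 5                               # each pulse masks ~5 ms of recording

valid = np.ones(t.size, dtype=bool)
for ts in stim_times:
    i0 = int(ts * fs)
    valid[i0:i0 + int(artifact_ms * fs / 1000)] = False

# Running state estimate (here simply a 200-ms moving average) from valid samples only
window = int(0.2 * fs)
state_est = np.full(t.size, np.nan)
for i in range(window, t.size):
    seg = lfp[i - window:i][valid[i - window:i]]
    if seg.size > 0:
        state_est[i] = seg.mean()

print(f"fraction of samples usable for state estimation: {valid.mean():.3f}")
```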

If inserting knowledge of low-frequency variations in network states proves a successful strategy to improve BMIs, causal intervention on such states may become an important part of bidirectional BMIs. Thus, an interesting question is how to integrate, in a closed-loop system, state-dependent algorithms and causal manipulation of low-frequency cortical fluctuations (Thut et al., 2011; Beltramo et al., 2013).

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Funding

This work was supported by the European Commission (FP7-ICT-2011.9.11/284553, “SICODE,” and FP7-2007-2013/PITN-GA-2011-290011, “ABC,” and National Operational Programme for Research and Competitiveness 2007-13 PONa3_00210, “Cyber Brain”), by the MIUR Flag-Era JTC Human Brain (“Slow-Dyn”), and by the Autonomous Province of Trento (“Grandi Progetti 2012,” “ATTEND”).

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We are grateful to M. Semprini, F. Boi, F. A. Mussa-Ivaldi, O. Eschenko, N. K. Logothetis, C. Kayser, and S. Sakata for their precious collaboration on earlier work relevant to this article, and to T. Fellin and the Referees for suggestions on this manuscript.

References

Ahmadian, Y., Packer, A. M., Yuste, R., and Paninski, L. (2011). Designing optimal stimuli to control neuronal spike timing. J. Neurophysiol. 106, 1038–1053. doi: 10.1152/jn.00427.2010

Arieli, A., Sterkin, A., Grinvald, A., and Aertsen, A. (1996). Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science 273, 1868–1871. doi: 10.1126/science.273.5283.1868

Aston-Jones, G., and Cohen, J. D. (2005). An integrative theory of locus coeruleus-norepinephrine function: adaptive gain and optimal performance. Annu. Rev. Neurosci. 28, 403–450. doi: 10.1146/annurev.neuro.28.061604.135709

Baranauskas, G. (2014). What limits the performance of current invasive brain machine interfaces? Front. Syst. Neurosci. 8:68. doi: 10.3389/fnsys.2014.00068

Belitski, A., Panzeri, S., Magri, C., Logothetis, N. K., and Kayser, C. (2010). Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands. J. Comput. Neurosci. 29, 533–545. doi: 10.1007/s10827-010-0230-y

Beltramo, R., D'Urso, G., Dal Maschio, M., Farisello, P., Bovetti, S., Clovis, Y., et al. (2013). Layer-specific excitatory circuits differentially control recurrent network dynamics in the neocortex. Nat. Neurosci. 16, 227–234. doi: 10.1038/nn.3306

Bensmaia, S., and Miller, L. (2014). Restoring sensorimotor function through intracortical interfaces: progress and looming challenges. Nat. Rev. Neurosci. 15, 313–325. doi: 10.1038/nrn3724

Boi, F., Semprini, M., Mussa-Ivaldi, F. A., Panzeri, S., and Vato, A. (2015). “A bidirectional brain-machine interface connecting alert rodents to a dynamical system,” in Proceeding 37th Annual International Conference IEEE EMBC (Milan), 51–54.

Brugger, D., Butovas, S., Bogdan, M., and Schwarz, C. (2011). Real-time adaptive microstimulation increases reliability of electrically evoked cortical potentials. IEEE Trans. Biomed. Eng. 58, 1483–1491. doi: 10.1109/TBME.2011.2107512

Buonomano, D. V., and Maass, W. (2009). State-dependent computations: spatiotemporal processing in cortical networks. Nat. Rev. Neurosci. 10, 113–125. doi: 10.1038/nrn2558

Busch, N. A., Dubois, J., and VanRullen, R. (2009). The phase of ongoing EEG oscillations predicts visual perception. J. Neurosci. 29, 7869–7876. doi: 10.1523/JNEUROSCI.0113-09.2009

Carmena, J. M. (2013). Advances in neuroprosthetic learning and control. PLoS Biol. 11:e1001561. doi: 10.1371/journal.pbio.1001561

Carmena, J. M., Lebedev, M. A., Crist, R. E., O'Doherty, J. E., Santucci, D. M., Dimitrov, D. F., et al. (2003). Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol. 1:e42. doi: 10.1371/journal.pbio.0000042

Clark, G. M. (2006). The multiple-channel cochlear implant: the interface between sound and the central nervous system for hearing, speech, and language in deaf people-a personal perspective. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 361, 791–810. doi: 10.1098/rstb.2005.1782

Curto, C., Sakata, S., Marguet, S., Itskov, V., and Harris, K. D. (2009). A simple model of cortical dynamics explains variability and state dependence of sensory responses in urethane-anesthetized auditory cortex. J. Neurosci. 29, 10600–10612. doi: 10.1523/JNEUROSCI.2053-09.2009

Donoghue, J. (2002). Connecting cortex to machines: recent advances in brain interfaces. Nat. Neurosci. 5, 1085–1088. doi: 10.1038/nn947

Ecker, A. S., Berens, P., Cotton, R. J., Subramaniyan, M., Denfield, G. H., Cadwell, C. R., et al. (2014). State dependence of noise correlations in macaque primary visual cortex. Neuron 82, 235–248. doi: 10.1016/j.neuron.2014.02.006

Edeline, J. M. (2012). Beyond traditional approaches to understanding the functional role of neuromodulators in sensory cortices. Front. Behav. Neurosci. 6:45. doi: 10.3389/fnbeh.2012.00045

Eschenko, O., Magri, C., Panzeri, S., and Sara, S. J. (2012). Noradrenergic neurons of the locus coeruleus are phase locked to cortical up-down states during sleep. Cereb. Cortex 22, 426–435. doi: 10.1093/cercor/bhr121

Faisal, A., Selen, L., and Wolpert, D. (2008). Noise in the nervous system. Nat. Rev. Neurosci. 9, 293–303. doi: 10.1038/nrn2258

FitzHugh, R. (1955). Mathematical models of threshold phenomena in the nerve membrane. Bull. Math. Biophys. 17, 257–278. doi: 10.1007/BF02477753

Fitzsimmons, N. A., Drake, W., Hanson, T. L., Lebedev, M. A., and Nicolelis, M. A. (2007). Primate reaching cued by multichannel spatiotemporal cortical microstimulation. J. Neurosci. 27, 5593–5602. doi: 10.1523/JNEUROSCI.5297-06.2007

Fitzsimmons, N. A., Lebedev, M. A., Peikon, I. D., and Nicolelis, M. A. (2009). Extracting kinematic parameters for monkey bipedal walking from cortical neuronal ensemble activity. Front. Integr. Neurosci. 3:3. doi: 10.3389/neuro.07.003.2009

Goris, R. L., Movshon, J. A., and Simoncelli, E. P. (2014). Partitioning neuronal variability. Nat. Neurosci. 17, 858–865. doi: 10.1038/nn.3711

Hall, T. M., de Carvalho, F., and Jackson, A. (2014). A common structure underlies low-frequency cortical dynamics in movement, sleep, and sedation. Neuron 83, 1185–1199. doi: 10.1016/j.neuron.2014.07.022

Harris, K. D., and Thiele, A. (2011). Cortical state and attention. Nat. Rev. Neurosci. 12, 509–523. doi: 10.1038/nrn3084

Hatsopoulos, N., Joshi, J., and O'Leary, J. G. (2004). Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. J. Neurophysiol. 92, 1165–1174. doi: 10.3410/f.1020784.240460

Hochberg, L. R., Bacher, D., Jarosiewicz, B., Masse, N. Y., Simeral, J. D., Vogel, J., et al. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485, 372–375. doi: 10.1038/nature11076

Hochberg, L. R., Serruya, M. D., Friehs, G. M., Mukand, J. A., Saleh, M., Caplan, A. H., et al. (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442, 164–171. doi: 10.1038/nature04970

Kao, J. C., Nuyujukian, P., Ryu, S. I., Churchland, M. M., Cunningham, J. P., and Shenoy, K. V. (2015). Single-trial dynamics of motor cortex and their applications to brain-machine interfaces. Nat. Commun. 6, 7759. doi: 10.1038/ncomms8759

Kayser, C., Montemurro, M. A., Logothetis, N. K., and Panzeri, S. (2009). Spike-phase coding boosts and stabilizes information carried by spatial and temporal spike patterns. Neuron 61, 597–608. doi: 10.1016/j.neuron.2009.01.008

Kayser, C., Wilson, C., Safaai, H., Sakata, S., and Panzeri, S. (2015). Rhythmic auditory cortex activity at multiple timescales shapes stimulus-response gain and background firing. J. Neurosci. 35, 7750–7762. doi: 10.1523/JNEUROSCI.0268-15.2015

Lakatos, P., Shah, A. S., Knuth, K. H., Ulbert, I., Karmos, G., and Schroeder, C. E. (2005). An oscillatory hierarchy controlling neuronal excitability and stimulus processing in the auditory cortex. J. Neurophysiol. 94, 1904–1911. doi: 10.1152/jn.00263.2005

Lebedev, M. A. (2014). How to read neuron-dropping curves? Front. Syst. Neurosci. 8:102. doi: 10.3389/fnsys.2014.00102

Lebedev, M. A., and Nicolelis, M. A. L. (2006). Brain–machine interfaces: past, present and future. Trends Neurosci. 29, 536–546. doi: 10.1016/j.tins.2006.07.004

Lebedev, M. A., Tate, A. J., Hanson, T. L., Li, Z., O'Doherty, J. E., Winans, J. A., et al. (2011). Future developments in brain-machine interface research. Clinics 66(Suppl. 1), 25–32. doi: 10.1590/S1807-59322011001300004

Lee, S. H., and Dan, Y. (2012). Neuromodulation of brain states. Neuron 76, 209–222. doi: 10.1016/j.neuron.2012.09.012

Lin, I. C., Okun, M., Carandini, M., and Harris, K. D. (2015). The nature of shared cortical variability. Neuron 87, 644–656. doi: 10.1016/j.neuron.2015.06.035

Loeb, G. E. (1990). Cochlear prosthetics. Annu. Rev. Neurosci. 13, 357–371. doi: 10.1146/annurev.ne.13.030190.002041

Luo, H., and Poeppel, D. (2007). Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron 54, 1001–1010. doi: 10.1016/j.neuron.2007.06.004

Moxon, K. A., Devilbiss, D. M., Chapin, J. K., and Waterhouse, B. D. (2007). Influence of norepinephrine on somatosensory neuronal responses in the rat thalamus: a combined modeling and in vivo multi-channel, multi-neuron recording study. Brain Res. 1147, 105–123. doi: 10.1016/j.brainres.2007.02.006

Moxon, K. A., and Foffani, G. (2015). Brain-machine interfaces beyond neuroprosthetics. Neuron 86, 55–67. doi: 10.1016/j.neuron.2015.03.036

Mussa-Ivaldi, F. A., Alford, S., Chiappalone, M., Fadiga, L., Karniel, A., Kositsky, M., et al. (2010). New perspectives on the dialogue between brains and machines. Front. Neurosci. 4:44. doi: 10.3389/neuro.01.008.2010

Mussa-Ivaldi, F. A., Giszter, S. F., and Bizzi, E. (1994). Linear combinations of primitives in vertebrate motor control. Proc. Natl. Acad. Sci. U.S.A. 91, 7534–7538. doi: 10.1073/pnas.91.16.7534

Mussa-Ivaldi, F. A., and Miller, L. (2003). Brain–machine interfaces: computational demands and clinical needs meet basic neuroscience. Trends Neurosci. 26, 329–334. doi: 10.1016/S0166-2236(03)00121-8

Ng, B., Schroeder, T., and Kayser, C. (2012). A precluding but not ensuring role of entrained low-frequency oscillations for auditory perception. J. Neurosci. 32, 12268–12276. doi: 10.1523/JNEUROSCI.1877-12.2012

Nicolelis, M. A. L. (2003). Brain–machine interfaces to restore motor function and probe neural circuits. Nat. Rev. Neurosci. 4, 417–422. doi: 10.1038/nrn1105

Nicolelis, M. A. L., and Lebedev, M. A. (2009). Principles of neural ensemble physiology underlying the operation of brain-machine interfaces. Nat. Rev. Neurosci. 10, 530–540. doi: 10.1038/nrn2653

Nirenberg, S., and Pandarinath, C. (2012). Retinal prosthetic strategy with the capacity to restore normal vision. Proc. Natl. Acad. Sci. U.S.A. 109, 15012–15017. doi: 10.1073/pnas.1207035109

O'Doherty, J. E., Lebedev, M. A., Hanson, T. L., Fitzsimmons, N. A., and Nicolelis, M. A. (2009). A brain-machine interface instructed by direct intracortical microstimulation. Front. Integr. Neurosci. 3:20. doi: 10.3389/neuro.07.020.2009

O'Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., et al. (2011). Active tactile exploration using a brain-machine-brain interface. Nature 479, 228–231. doi: 10.1038/nature10489

Panzeri, S., Brunel, N., Logothetis, N. K., and Kayser, C. (2010). Sensory neural codes using multiplexed temporal scales. Trends Neurosci. 33, 111–120. doi: 10.1016/j.tins.2009.12.001

Quiroga, R. Q., and Panzeri, S. (2009). Extracting information from neuronal populations: information theory and decoding approaches. Nat. Rev. Neurosci. 10, 173–185. doi: 10.1038/nrn2578

Rabinowitz, N. C., Goris, R. L., Cohen, M., and Simoncelli, E. (2015). Attention stabilizes the shared gain of V4 populations. Elife 4:e08998. doi: 10.7554/eLife.08998

Rao, R. P., Stocco, A., Bryan, M., Sarma, D., Youngquist, T. M., Wu, J., et al. (2014). A direct brain-to-brain interface in humans. PLoS ONE 9:e111332. doi: 10.1371/journal.pone.0111332

Reger, B. D., Fleming, K. M., Sanguineti, V., Alford, S., and Mussa-Ivaldi, F. A. (2000). Connecting brains to robots: an artificial body for studying the computational properties of neural tissues. Artif. Life 6, 307–324. doi: 10.1162/106454600300103656

Rickert, J., Oliveira, S., Vaadia, E., Aertsen, A., Rotter, S., and Mehring, C. (2005). Encoding of movement direction in different frequency ranges of motor cortical local field potentials. J. Neurosci. 25, 8815–8824. doi: 10.1523/JNEUROSCI.0816-05.2005

Romei, V., Brodbeck, V., Michel, C., Amedi, A., Pascual-Leone, A., and Thut, G. (2008). Spontaneous fluctuations in posterior alpha-band EEG activity reflect variability in excitability of human visual areas. Cereb. Cortex 18, 2010–2018. doi: 10.1093/cercor/bhm229

Rule, M. E., Vargas-Irwin, C., Donoghue, J., and Truccolo, W. (2015). Contribution of LFP dynamics to single neuron spiking variability in motor cortex during movement execution. Front. Syst. Neurosci. 9:89. doi: 10.3389/fnsys.2015.00089

Safaai, H., Neves, R., Eschenko, O., Logothetis, N. K., and Panzeri, S. (2015). Modeling the effect of locus coeruleus firing on cortical state dynamics and single-trial sensory processing. Proc. Natl. Acad. Sci. U.S.A. 112, 12834–12839. doi: 10.1073/pnas.1516539112

Saleem, A. B., Ayaz, A., Jeffery, K. J., Harris, K. D., and Carandini, M. (2013). Integration of visual motion and locomotion in mouse visual cortex. Nat. Neurosci. 16, 1864–1869. doi: 10.1038/nn.3567

Schölvinck, M. L., Saleem, A. B., Benucci, A., Harris, K. D., and Carandini, M. (2015). Cortical state determines global variability and correlations in visual cortex. J. Neurosci. 35, 170–178. doi: 10.1523/JNEUROSCI.4994-13.2015

Shadmehr, R., Mussa-Ivaldi, F. A., and Bizzi, E. (1993). Postural force fields of the human arm and their role in generating multijoint movements. J. Neurosci. 13, 45–62.

Shenoy, K. V., and Carmena, J. M. (2014). Combining decoder design and neural adaptation in brain-machine interfaces. Neuron 84, 665–680. doi: 10.1016/j.neuron.2014.08.038

Sussillo, D., Nuyujukian, P., Fan, J. M., Kao, J. C., Stavisky, S. D., Ryu, S., et al. (2012). A recurrent neural network for closed-loop intracortical brain-machine interface decoders. J. Neural. Eng. 9:026027. doi: 10.1088/1741-2560/9/2/026027

Thut, G., Veniero, D., Romei, V., Miniussi, C., Schyns, P., and Gross, J. (2011). Rhythmic TMS causes local entrainment of natural oscillatory signatures. Curr. Biol. 21, 1176–1185. doi: 10.1016/j.cub.2011.05.049

Vato, A., Semprini, M., Maggiolini, E., Szymanski, F. D., Fadiga, L., Panzeri, S., et al. (2012). Shaping the dynamics of a bidirectional neural interface. PLoS Comput. Biol. 8:e1002578. doi: 10.1371/journal.pcbi.1002578

Vato, A., Szymanski, F., Semprini, M., Mussa-Ivaldi, F., and Panzeri, S. (2014). A bidirectional brain-machine interface algorithm that approximates arbitrary force-fields. PLoS ONE 9:e91677. doi: 10.1371/journal.pone.0091677

Zagha, E., Casale, A. E., Sachdev, R. N., McGinley, M. J., and McCormick, D. A. (2013). Motor cortex feedback influences sensory processing by modulating network state. Neuron 79, 567–578. doi: 10.1016/j.neuron.2013.06.008

Zrenner, E. (2002). Will retinal implants restore vision? Science 295, 1022–1025. doi: 10.1126/science.1067996

Keywords: brain-machine interfaces, neuromodulation, neural coding, state dependence, neural response variability

Citation: Panzeri S, Safaai H, De Feo V and Vato A (2016) Implications of the Dependence of Neuronal Activity on Neural Network States for the Design of Brain-Machine Interfaces. Front. Neurosci. 10:165. doi: 10.3389/fnins.2016.00165

Received: 03 January 2016; Accepted: 01 April 2016;
Published: 20 April 2016.

Edited by:

Michele Giugliano, University of Antwerp, Belgium

Reviewed by:

Mikhail Lebedev, Duke University, USA
Cornelius Schwarz, Eberhard Karls University, Germany

Copyright © 2016 Panzeri, Safaai, De Feo and Vato. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Stefano Panzeri, stefano.panzeri@iit.it;
Alessandro Vato, alessandro.vato@iit.it

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.