
ORIGINAL RESEARCH article

Front. Comput. Neurosci., 06 September 2012
Volume 6 - 2012 | https://doi.org/10.3389/fncom.2012.00062

An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data

  • Bernstein-Center for Computational Neuroscience, Central Institute of Mental Health, Psychiatry, Medical Faculty Mannheim of Heidelberg University, Mannheim, Germany

For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard fI curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron’s response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the fI current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating (“in vivo-like”) input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model’s generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a “high-throughput” model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.

1. Introduction

In recent years there has been a growing interest in large-scale neuronal network simulations (Traub et al., 1988, 2005; Whittington et al., 2000; Markram, 2006; Izhikevich and Edelman, 2008; Lansner, 2009; Lundqvist et al., 2010) that capture the cellular heterogeneity observed in real cortical tissue (Binzegger et al., 2004; Markram et al., 2004; Wang et al., 2006; Thomson and Lamy, 2007) and model interactions between many diverse cortical and subcortical brain structures (Lansner et al., 2003; Izhikevich and Edelman, 2008). The increasing desire to model such systems at a high level of physiological realism which takes into account the diversity and variation in neuronal cell types is, however, in conflict with the computational feasibility and “analytical” tractability of such models. Since in many situations synaptic inputs to a neuron with the same kinetics can be lumped into single “super-synapses” (e.g., Durstewitz and Gabriel, 2007), such that the number of synaptic equations to be solved scales linearly with the number of neurons, often the computational burden associated with the cellular models is the more serious bottleneck in network simulations. Moreover, if cell diversity is an explicit issue in itself and large pools of physiological cell data are available, even the time required for fitting cell models to all the different cell types recorded can become a considerable temporal constraint.

Single neuron models of very different degrees of complexity have been developed over the last decades to study neural functions. On one side, detailed multi-compartmental biophysically meaningful models can often reproduce voltage traces of their experimental counterparts to an almost arbitrary degree (Traub et al., 1991; De Schutter and Bower, 1994; Jaeger et al., 1997; Poirazi and Mel, 2001; Prinz et al., 2003; Druckmann et al., 2007, 2011; Moyer et al., 2007), and potentially provide a deep understanding of the underlying biophysical mechanisms and functional role of the cellular morphology (e.g., Mainen and Sejnowski, 1996; Poirazi and Mel, 2001; Shu et al., 2006; Durstewitz and Gabriel, 2007). However, because of their large number of parameters, fitting such single-cell models to electrophysiological observations is often a slow and tedious procedure which may also run the risk of serious over-fitting: Different parameter configurations may result in similarly good fits of a given “training set” (Prinz et al., 2004); thus it is not clear how these models would perform on data that was not explicitly used to optimize the parameters.

Partly for these reasons, but also for speeding up large-scale network simulations and mathematical tractability, much simpler models have been introduced (Durstewitz, 2003, 2009; Fourcaud-Trocmé et al., 2003; Izhikevich, 2004; Brette and Gerstner, 2005) as a phenomenological description of neuronal activity, often with a focus more on the dynamical mechanisms underlying spiking behavior. The leaky integrate-and-fire neuron (LIF; Lapicque, 1907; Hill, 1936) presumably presents the simplest spiking neuron model of this kind. In order to model the upswing of an action potential more faithfully, various extensions to the LIF neuron were proposed, like a quadratic function (Ermentrout and Kopell, 1986; Izhikevich, 2003) or an exponential term (Fourcaud-Trocmé et al., 2003; Brette and Gerstner, 2005) which captures the spike initiation phase. It was also shown that a second dynamical variable may be mandatory to capture certain neuronal features like adaptation (Izhikevich, 2003; Richardson et al., 2003). Models like the adaptive exponential integrate-and-fire model (AdEx; Brette and Gerstner, 2005) or the Izhikevich (2003) model can qualitatively reproduce a large number of spiking patterns observed in real neurons (Izhikevich, 2003, 2004; Naud et al., 2008; Durstewitz, 2009). In addition to the qualitative reproduction of real spike train features, a few studies also dealt with systematic quantitative fitting of model parameters on the basis of electrophysiological recordings (Jolivet et al., 2006, 2008; Clopath et al., 2007; Badel et al., 2008a,b; Naud et al., 2008; Gerstner and Naud, 2009). Remarkably, these simple neuron models, usually trained on in vivo-like fluctuating-current inputs, can often predict spike times from in vitro recordings with high precision.

Here, we suggest an alternative approach that rests mainly on firing rate information and simply uses standard fI curves (firing rate over step current) for fitting model parameters. Such curves are routinely obtained by in vitro electrophysiologists and are widely available in public data bases for many different cell types. They cover a broad range of mean-input currents and output spike rates a neuron may traverse in vivo. To allow for a very fast and efficient fitting procedure, we derive an approximation to the AdEx model that results in closed-form expressions for transient and stationary firing rates. Spike-time prediction performance of the model is then tested, however, on different spike trains obtained from recordings with “in vivo-like” fluctuating-current injections. Model performance is evaluated on a large variety of experimentally recorded neocortical cell types, and is compared to an even simpler adaptive LIF neuron as well as to the full AdEx. Based on this large pool of experimental data, also some of the major empirical factors constraining the model fitting process are exposed. The result is a single-cell modeling and parameter fitting framework that allows one to efficiently build up cell models for large pools of physiologically characterized cell types in relatively short time. Potential shortcomings and future extensions of our approach are discussed.

2. Materials and Methods

2.1. Training Set Data Required for Model Fitting

The training set for the parameter tuning consists of onset and steady-state fI curves as well as of the sub-rheobase IV curve. The onset firing rate fO(I) reflects the initial response to a step-like current (I) stimulus (Benda and Herz, 2003) and is defined as the inverse of the first interspike interval (ISI).

f_O(I) = \frac{1}{t_{ISI}^{O}(I)}. \qquad (1)

However, sometimes initial spike doublets or triplets may be observed, partly reflecting the fact that neurons in vitro (in contrast to in vivo) usually reside at a lower membrane voltage at which many inward currents are in a recovered state, resulting in higher excitability initially when a current is first injected. To deal with this, the firing frequency adaptation curve is fitted by an exponential decay with an effective time constant τeff (Madison and Nicoll, 1984; Edman et al., 1987; Stocker et al., 1999; Benda and Herz, 2003), and the onset firing rate is determined by evaluating this function at the time of the first spike (a procedure which also results in a more stable estimate). The behavior of the adapted cell, on the other hand, is given by the steady-state firing rate f(I) defined as the inverse of the average interspike interval when the cell has reached a reasonably stationary level (i.e., an approximately constant firing rate).

f_{\infty}(I) = \frac{1}{t_{ISI}^{\infty}(I)}. \qquad (2)

In practice, interspike intervals are averaged over an interval of 5 s after a transient of 10–15 s. Finally, the subthreshold behavior of the cell is assessed from those trials where the input currents are below the rheobase, i.e., do not cause spiking. The IV curve was constructed by relating these input currents to the steady-state voltage response of the cell. For currents far below the rheobase, I(V) is often almost perfectly linear for rodent prefrontal cortex neurons (present observations).
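For illustration, the following Python sketch extracts the two firing rates from the spike times of a single constant-current step in the way described above: the instantaneous rate is fitted with an exponential decay with effective time constant τeff and evaluated at the first spike for f_O, and the interspike intervals in the final 5-s window give f_∞. This is not the original Matlab implementation; argument names, units (seconds), and the initial guesses are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def onset_and_steady_rates(spike_times, t_on, t_off):
    """Sketch: extract f_O and f_inf from the spike times (s) of one constant-current step
    starting at t_on and ending at t_off."""
    st = np.asarray(spike_times)
    isi = np.diff(st)
    t_mid = st[:-1] + isi / 2.0                 # time of each instantaneous-rate sample
    inst_rate = 1.0 / isi
    # exponential decay of the instantaneous rate with effective time constant tau_eff
    decay = lambda t, A, tau_eff, f_ss: f_ss + A * np.exp(-(t - t_on) / tau_eff)
    p0 = [inst_rate[0] - inst_rate[-1], 0.1, inst_rate[-1]]
    popt, _ = curve_fit(decay, t_mid, inst_rate, p0=p0, maxfev=10000)
    f_O = decay(st[0], *popt)                   # onset rate: fit evaluated at the first spike
    steady = isi[t_mid > (t_off - 5.0)]         # ISIs within the last 5 s of the step
    f_inf = 1.0 / steady.mean()
    return f_O, f_inf
```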

2.2. Neuron Models

We first briefly review the basic AdEx model in order to derive subsequently an approximation to it that allows for setting up closed-form expressions for the onset and steady-state fI curves.

2.2.1. The AdEx model

The AdEx model is a two-dimensional model that mathematically describes the evolution of the membrane potential V(t) and an adaptation current w(t). It is an extension of the exponential integrate-and-fire neuron (first developed in Fourcaud-Trocmé et al., 2003) and defined by the following system of non-linear ordinary differential equations (Brette and Gerstner, 2005; Naud et al., 2008):

C\,\frac{dV}{dt} = -g_L (V - E_L) + g_L \Delta_T\, e^{(V - V_T)/\Delta_T} + I - w \qquad (3)

\tau_w\,\frac{dw}{dt} = a\,(V - E_L) - w \qquad (4)

\text{if } V > V_{up} \text{ then } V \rightarrow V_r \text{ and } w \rightarrow w_r = w + b

The first equation of the AdEx is an extension of the LIF neuron that models the upswing of an action potential by an exponential function (Fourcaud-Trocmé et al., 2003). Whenever the membrane potential approaches the threshold VT, the exponential term causes a very rapid increase of the voltage (note that VT is not a strict spiking threshold, however, as in the classical LIF model, but just a parameter that determines where the exponential is “centered” on the V-axis). The downswing is replaced by a reset condition. In the second equation, the parameter a determines the subthreshold adaptation and b covers spike-triggered adaptation. Despite the simplicity of this two equation model with just a handful of parameters, it can reproduce a wide range of physiological firing patterns like tonic spiking, adaptation, initial or regular bursting, to name but a few (Naud et al., 2008). Further details of the AdEx model are described in Brette and Gerstner (2005) and Naud et al. (2008).
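To make the dynamics concrete, here is a minimal forward-Euler integration sketch of equations (3) and (4) in Python. The parameter values (in SI units) and the function name are illustrative assumptions and are not taken from the fits reported in this study.

```python
import numpy as np

def simulate_adex(I, dt=1e-5, T=1.0, C=150e-12, gL=10e-9, EL=-70e-3, VT=-50e-3,
                  DT=2e-3, a=2e-9, b=20e-12, tau_w=200e-3, Vr=-60e-3, Vup=-30e-3):
    """Forward-Euler sketch of eqs. (3)-(4). I is a constant current or an array sampled
    at dt; parameter values (SI units) are illustrative only. Returns spike times in s."""
    n = int(T / dt)
    I = np.broadcast_to(np.asarray(I, dtype=float), (n,))
    V, w = EL, 0.0
    spikes = []
    for k in range(n):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) + I[k] - w) / C
        dw = (a * (V - EL) - w) / tau_w
        V, w = V + dt * dV, w + dt * dw
        if V > Vup:                             # reset condition of eq. (4)
            V = Vr
            w += b
            spikes.append(k * dt)
    return np.array(spikes)
```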

The nullclines of the ordinary differential equation (ODE) system of the AdEx provide insights into its dynamics. They are given by

\frac{dV}{dt} = 0 \;\Rightarrow\; w_V(V) = -g_L (V - E_L) + g_L \Delta_T\, e^{(V - V_T)/\Delta_T} + I \qquad (5)

\frac{dw}{dt} = 0 \;\Rightarrow\; w_w(V) = a\,(V - E_L). \qquad (6)

There are at most two fixed points (associated with the resting state) when I is small, and no fixed points (associated with a state of repetitive spiking) when I is sufficiently large. The transition from resting to spiking can occur through different types of bifurcation depending on parameter settings. For a saddle-node bifurcation, the Jacobian matrix J of the equations (3 and 4) has two real eigenvalues, one of them being equal to zero. This leads to the following condition at the bifurcation point V0.

\det J = 0 \qquad (7)

\Rightarrow\; V_0 = V_T + \Delta_T \ln\!\left(1 + \frac{a}{g_L}\right). \qquad (8)

Together with equations (5 and 6), this expression can be used to calculate the rheobase ISN,0.

w_V(V_0) = w_w(V_0) \qquad (9)

\Rightarrow\; I_{SN,0} = (a + g_L)\left[V_T - E_L - \Delta_T + \Delta_T \ln\!\left(1 + \frac{a}{g_L}\right)\right]. \qquad (10)

When a = 0, one can show that the transition from the resting state to repetitive spiking always occurs via a saddle-node bifurcation, and the last intersection point (and with it the onset of the fI curve) is determined by V0 = VT (Touboul, 2008; Touboul and Brette, 2008).
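A short numerical illustration of equations (8) and (10) (a Python sketch with assumed SI-unit parameter values, not values from the recorded cells):

```python
import math

def adex_saddle_node(gL, EL, VT, DT, a):
    """Bifurcation point V0 and rheobase I_SN,0 of the AdEx, eqs. (8) and (10), SI units."""
    V0 = VT + DT * math.log(1.0 + a / gL)
    I_SN0 = (a + gL) * (V0 - EL - DT)           # algebraically identical to eq. (10)
    return V0, I_SN0

# e.g. adex_saddle_node(10e-9, -70e-3, -50e-3, 2e-3, 2e-9) gives approx. (-0.0496 V, 2.2e-10 A)
```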

2.2.2. An approximation to the AdEx model

The exponential term in the AdEx model renders an analytical solution of the differential equations impossible. Neither the membrane potential nor the fI curves can be derived analytically. The fundamental issue is the lack of knowledge about the steady-state trajectory in the phase plane. One approach to solve the problem is to use an approximation to the AdEx based on the idea of separation of time scales: Under the assumption that the evolution of the w-variable is much slower than the evolution of the membrane potential V (τm/τw << 1), the trajectory in the phase plane (Gerstner and Kistler, 2002; Naud et al., 2008)

• is nearly horizontal if it is far away from the V-nullcline wV,

• follows the left branch of that nullcline at a vertical distance D(V) as soon as it approaches the V-nullcline wV.

A slow adaptation is a reasonable assumption for many real neurons since the membrane time constant is often one to two orders of magnitude lower than the spike-rate adaptation time constant (Benda and Herz, 2003; Thomson and Lamy, 2007).

To obtain the steady-state firing rate without computing the whole transient dynamics, we require w(t) to be constant in time

\frac{dw}{dt} = \frac{dw}{dV}\,\frac{dV}{dt} = 0 \qquad (11)

except within a well-defined vicinity of the left branch of the V-nullcline wV. As dV/dt ≠ 0 away from wV, it follows that dw/dV = 0 in this case. Hence, the trajectory in the (V,w)-phase plane is exactly horizontal. In the second regime characterized by dw/dt ≠ 0, the trajectory has to follow a curve defined by a vertical distance D(V) up to its minimum at V = VT. Hence, w(V) is defined piecewise and we obtain a differential equation for the membrane potential V that can be solved section by section. Furthermore, the reset value wr and thus the value w(Vup) for the onset and the steady state are known from the start.

To calculate w in the vicinity of wV, the vertical distance D(V) has to be specified. Following Gerstner and Kistler (2002) and Naud et al. (2008), we make the ansatz

w_{traj}(V) = w_V(V) - \frac{1}{\tau_w}\,D(V). \qquad (12)

Inserting (12) into equation (4) and using the additional assumption a = 0 yields

\frac{dw_{traj}}{dt} = -\frac{w_{traj}}{\tau_w} = -\frac{w_V(V)}{\tau_w} + \frac{D(V)}{\tau_w^{2}}. \qquad (13)

Using (3) and differentiating wtraj [equations (12) and (5)] with respect to V, dwtraj/dt can also be obtained as

\frac{dw_{traj}}{dt} = \frac{dw_{traj}}{dV}\,\frac{dV}{dt} = \left[-\frac{1}{\tau_m \tau_w} + \frac{e^{(V - V_T)/\Delta_T}}{\tau_m \tau_w} - \frac{D'(V)}{C\,\tau_w^{2}}\right] D(V), \qquad (14)

where τm = C/gL is the membrane time constant. In obtaining this expression we also used the relationship C·dV/dt = D(V)/τw that follows from inserting (5) into (12). Setting (13) = (14) and solving for D(V) one obtains the approximation

D(V) \approx \frac{C}{g_L}\left[I + g_L \Delta_T\, e^{(V - V_T)/\Delta_T} - g_L (V - E_L)\right] = \tau_m\, w_V(V) \qquad (15)

under the assumptions that τw^(-2) is sufficiently small such that terms containing τw^(-2) can be neglected, and that (τw·τm)^(-1)·e^((V−VT)/ΔT) ≈ 0. This latter assumption is valid provided that (τw·τm) is sufficiently large compared to the exponential term, which is ≤ 1 for all values V ≤ VT. Inserting (15) into (12), wtraj is approximately given by

w_{traj} \approx \left(1 - \frac{\tau_m}{\tau_w}\right) w_V(V) \qquad (16)

and we can therefore write

\frac{dw}{dt} = \frac{dw}{dV}\,\frac{dV}{dt} = \left(1 - \frac{\tau_m}{\tau_w}\right) \frac{dw_V(V)}{dV}\,\frac{dV}{dt}. \qquad (17)

Defining two functions el and er by

e_l(V) = w_V(V) - \frac{D(V)}{\tau_w} = \left(1 - \frac{\tau_m}{\tau_w}\right) w_V(V) \qquad (18)

e_r(V) = w_V(V) + \frac{D(V)}{\tau_w} = \left(1 + \frac{\tau_m}{\tau_w}\right) w_V(V), \qquad (19)

the second regime is confined within the band [el, er] for V ≤ VT.

Simplified AdEx model: Summing up the results from the previous section, the simplified AdEx model (simpAdEx) is defined as follows:

C\,\frac{dV}{dt} = -g_L (V - E_L) + g_L \Delta_T\, e^{(V - V_T)/\Delta_T} + sI - w = w_V(V) - w \qquad (20)

\frac{dw}{dt} = \begin{cases} 0 & \text{for } D(V) < \tau_w \left| w - w_V(V) \right| \\ \Theta(V_T - V)\left(1 - \dfrac{\tau_m}{\tau_w}\right) \dfrac{dw_V(V)}{dV}\,\dfrac{dV}{dt} & \text{otherwise} \end{cases} \qquad (21)

\text{if } V > V_{up} \text{ then } V \rightarrow V_r \text{ and } w \rightarrow w_r = w + b

\text{if } w = \left(1 + \frac{\tau_m}{\tau_w}\right) w_V(V) \text{ then } w \rightarrow \left(1 - \frac{\tau_m}{\tau_w}\right) w_V(V),

where Θ denotes the Heaviside function and s corresponds to a scaling factor set to 1 for the fitting process (see Section 3.2 for further explanation). The first if-condition describes the downswing. It is equivalent to the reset defined in the AdEx. The second if-condition defines a vertical jump from the envelope er to el as soon as the trajectory reaches the curve er. This constraint is necessary because of a singularity in the integral that would appear whenever the horizontal trajectory crosses the V-nullcline. Thus, the trajectory is horizontal unless in close proximity to the left branch of the V-nullcline, where it follows the branch at a vertical distance defined by D. We can now distinguish between two cases in the steady state:

1. The steady-state reset value wr is below the V-nullcline wV and the trajectory approaches the left branch from the left. As soon as the horizontal trajectory crosses the curve el, it follows the curve up to the point V = VT. From there on it stays horizontal at wV(VT) − D(VT)/τw. The corresponding voltage trace shows a sharp reset (Figure 1A).

2. The steady-state reset value wr is above the V-nullcline wV and the trajectory approaches the left branch from the right. As soon as the horizontal trajectory crosses the curve er, it jumps vertically to the curve el, where it follows el up to the point V = VT. From there on it stays horizontal at wV(VT) − D(VT)/τw. The corresponding voltage trace shows a broad reset (Figure 1B).


Figure 1. Comparison of phase planes of the simpAdEx for sharp (A) and broad (B) reset. If the steady-state reset point [(Vr, wr); green cross] is below the V-nullcline (red curve) the spiking pattern corresponds to a sharp reset (no undershoot), otherwise it is broad (associated with an after-hyperpolarizing current or undershoot). The trajectory is given in blue with the blue filled square representing the initial point and the open blue squares indicating the reset points. The lower panels show the voltage traces corresponding to these trajectories. The dashed vertical line marks the threshold VT beyond which the trajectory runs strictly parallel to the abscissa. The two gray dashed lines represent the functions el and er described in the text. Note that the distances of the envelopes to the V-nullcline have been enlarged for clarity.

The fixed points of the system are the intersection points of the V-nullcline with the horizontal w = 0. Figure 2 shows the behavior of the V-nullcline as well as the functions el and er with increasing input current I in the phase plane. The curves are shifted upwards and for I less than or equal to the rheobase, they have at least one intersection point. In equations (13 and 14), the subthreshold adaptation defined by a was set to zero such that the intersection points of el and er coincide with the fixed points of the system. Hence, it is ensured that the sub-rheobase IV curve is consistently defined by equations (20 and 21).


Figure 2. Comparison of phase planes of the simpAdEx for I less than (A), equal to (B), and greater than (C) the rheobase. The V-nullcline (in red) and the curves el and er (gray dashed lines) are shifted upwards with increasing input current. For I less than or equal to the rheobase, the three curves have a maximum of two shared intersection points (one stable node given by the filled green circle and one unstable node represented by an open green circle) which coincide at the saddle point V = VT. The trajectory is given in blue with the blue filled square representing the initial point and the open blue squares indicating the reset points. The steady-state trajectory starts at the green cross and passes through three sections: It runs horizontal up to the intersection point with the envelope el (dark cyan line) where it follows el up to its minimum at VT (cyan line). The third part of the trajectory is again horizontal (light cyan line). Note that the distances of the curves el and er to the V-nullcline have been enlarged for clarity.

I(V) = g_L (V - E_L) - g_L \Delta_T\, e^{(V - V_T)/\Delta_T}, \quad V \leq V_T. \qquad (22)

The system has a maximum of two fixed points that coalesce and finally disappear via a saddle-node bifurcation at VT since a = 0 [cf. equation (10)].

Due to the piecewise defined w(V), we can now derive closed-form expressions for the onset interspike interval t_ISI^O(I) and the steady-state interspike interval t_ISI^∞(I). Based on equation (20) and using separation of variables, the solution is given in terms of integrals of the general form:

t_{ISI} = \int_{V(t_a)}^{V(t_e)} \frac{C\, dV}{w_V(V) - w(V)}. \qquad (23)

Specifically, the steady-state interspike interval t_ISI^∞(I) can be computed as the sum of three integrals over the three different regimes in the phase plane (Figure 2C):

t_{ISI,1}^{\infty} = \int_{V_r}^{V_s} \frac{C\, dV}{w_V(V) - w_r^{\infty}} \qquad (24)

t_{ISI,2}^{\infty} = \int_{V_s}^{V_T} \frac{C\, \tau_w\, dV}{D(V)} \qquad (25)

t_{ISI,3}^{\infty} = \int_{V_T}^{V_{up}} \frac{C\, dV}{w_V(V) - w_r^{\infty} + b}

f_{\infty} = \left(t_{ISI,1}^{\infty} + t_{ISI,2}^{\infty} + t_{ISI,3}^{\infty}\right)^{-1}, \qquad (26)

where Vs denotes the intersection point of the horizontal at w_r^∞ with the curve el when the steady-state reset point is below the V-nullcline, and with er otherwise. This relationship holds only for b > 0, while for b = 0 the steady-state interspike interval is given by t_ISI^∞ = t_ISI,1^∞ with Vs = Vup. The onset fI curve is given by

f_O = \left(t_{ISI}^{O}\right)^{-1} \quad \text{with} \quad t_{ISI}^{O} = \int_{V_r}^{V_{up}} \frac{C\, dV}{w_V(V) - b}. \qquad (27)

Equation (27) is only valid for b ≤ wV(VT) − D(VT)/τw. Otherwise t_ISI^O also has to be split up into three integrals, as for the steady state, since the trajectory would reach the vicinity of the V-nullcline. Note that equations (24–27) still need to be solved numerically since analytical expressions for the integrals are not available. This, however, can be done through fast algorithms like the adaptive Lobatto quadrature (Matlab function “quadl”) which evaluates the integral only at a few discrete points. The resulting fitting procedure is about 10–100 times faster than explicitly simulating whole voltage traces up to where a steady state in the firing rate is reached (see Table 2; simulating the full (V,w)-trajectory, however, is about as time-consuming as for the full AdEx). The mathematical derivations above were also checked numerically (i.e., by simulation of the full system).
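The following Python sketch evaluates equations (24)–(27) by standard adaptive quadrature (scipy's quad used here in place of Matlab's quadl). It assumes the sharp-reset case of Figure 1A, hypothetical SI-unit parameter values, and b below el(VT) for the onset rate; it is meant only to illustrate the closed-form fitting targets, not to reproduce the authors' implementation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative simpAdEx parameters in SI units (hypothetical, not fitted to any recorded cell)
p = dict(C=150e-12, gL=10e-9, EL=-70e-3, VT=-50e-3, DT=2e-3,
         Vr=-60e-3, Vup=-30e-3, b=20e-12, tau_w=200e-3)

def w_V(V, I, p):
    """V-nullcline of the simpAdEx, eq. (20) with s = 1."""
    return -p['gL'] * (V - p['EL']) + p['gL'] * p['DT'] * np.exp((V - p['VT']) / p['DT']) + I

def steady_state_rate(I, p):
    """f_inf(I) from eqs. (24)-(26); assumes the sharp-reset case (reset below the nullcline)."""
    tau_m, tau_w = p['C'] / p['gL'], p['tau_w']
    D = lambda V: tau_m * w_V(V, I, p)                    # eq. (15)
    e_l = lambda V: (1.0 - tau_m / tau_w) * w_V(V, I, p)  # eq. (18)
    w_end = e_l(p['VT'])               # level of the horizontal segment beyond V_T
    w_r = w_end + p['b']               # steady-state reset value, known in advance
    Vs = brentq(lambda V: e_l(V) - w_r, p['Vr'], p['VT'])
    t1, _ = quad(lambda V: p['C'] / (w_V(V, I, p) - w_r), p['Vr'], Vs)
    t2, _ = quad(lambda V: p['C'] * tau_w / D(V), Vs, p['VT'])
    t3, _ = quad(lambda V: p['C'] / (w_V(V, I, p) - w_end), p['VT'], p['Vup'])  # w_end = w_r - b
    return 1.0 / (t1 + t2 + t3)

def onset_rate(I, p):
    """f_O(I) from eq. (27); valid as long as b stays below e_l(V_T)."""
    t_O, _ = quad(lambda V: p['C'] / (w_V(V, I, p) - p['b']), p['Vr'], p['Vup'])
    return 1.0 / t_O

# e.g. onset_rate(250e-12, p), steady_state_rate(250e-12, p)  (I above the ~180 pA rheobase)
```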

Despite these simplifications, the simpAdEx can reproduce many of the spiking patterns observed in real neurons and as seen in the full AdEx, including tonic spiking, adaptation, initial bursting, or regular bursting (Figures 3A–C). In addition to the closed-form expressions for the onset and steady-state firing rate, one can also easily find similar expressions for the latency or the number of spikes in response to a step current.


Figure 3. Phase plane representations and corresponding spiking patterns upon constant current steps for the approximation to the AdEx (A–C) and the adaptive LIF neuron (D,E). Adaptation (A,D), initial burst (B,E), and regular bursting (C). Red curve: V-nullcline, blue curve: trajectory. For the adaptive LIF model, the w-nullcline is shown in green and the dashed vertical line marks the threshold VT.

2.2.3. The adaptive LIF model

As an alternative to the simpAdEx model above, we also investigated an even simpler model which incorporates subthreshold adaptation (a ≠ 0) but allows for an analytical solution for the fI curves, namely the adaptive LIF neuron defined by

C\,\frac{dV}{dt} = -g_L (V - E_L) + I - w \qquad (28)

\tau_w\,\frac{dw}{dt} = a\,(V - E_L) - w \qquad (29)

\text{if } V > V_T \text{ then } V \rightarrow V_r \text{ and } w \rightarrow w_r = w + b.

This model is identical to the AdEx except that it lacks the exponential term describing the action potential upswing. For b = 0, it is well described and discussed in Izhikevich (2001) and Richardson et al. (2003). The nullclines are given by

\frac{dV}{dt} = 0 \;\Rightarrow\; w_V(V) = -g_L (V - E_L) + I \qquad (30)

\frac{dw}{dt} = 0 \;\Rightarrow\; w_w(V) = a\,(V - E_L). \qquad (31)

The system has either no fixed points when the nullclines run parallel (I ≠ 0 and a = −gL), infinitely many fixed points when the nullclines lie on top of each other (I = 0 and a = −gL), or exactly one fixed point whose eigenvalues are given by

\lambda_{1,2} = \frac{1}{2}\left[-\left(\frac{1}{\tau_m} + \frac{1}{\tau_w}\right) \pm \sqrt{\left(\frac{1}{\tau_m} + \frac{1}{\tau_w}\right)^{2} - \frac{4\,(a + g_L)}{C\,\tau_w}}\right] = \frac{1}{2}\left(-m \pm \sqrt{\Delta}\right), \qquad (32)

and which is stable for

a > -g_L. \qquad (33)

The transition to repetitive spiking can be obtained by adjusting the threshold parameter VT which defines the reset condition. Increasing I shifts the intersection point to the right. The rheobase is then defined by the intersection point at the threshold parameter VT: I0 = (gL + a)·(VTEL). The system has three dynamical regimes depending on its eigenvalues λ1 and λ2, and the general solution is given by

V(t) = \begin{cases} C_1 e^{\lambda_1 t} + C_2 e^{\lambda_2 t} + \dfrac{s}{n} & \text{if } \Delta > 0 \\[6pt] \left(C_1 + C_2 t\right) e^{-\frac{m}{2} t} + \dfrac{s}{n} & \text{if } \Delta = 0 \\[6pt] e^{-\frac{m}{2} t}\left[C_1 \cos(\beta t) + C_2 \sin(\beta t)\right] + \dfrac{s}{n} & \text{if } \Delta < 0, \text{ with } \beta = \dfrac{\sqrt{-\Delta}}{2} \end{cases} \qquad (34)

where s = (τw·C)^(-1)·[I + (a + gL)·EL], n = (a + gL)/(τw·C), and the parameters C1 and C2 have to be determined from boundary conditions. For the onset and steady state, these boundary conditions can be written as

\left(V(0),\, \dot{V}(0)\right) = \begin{cases} \left(E_L,\; \dfrac{I}{C}\right) & \text{for the latency} \\[6pt] \left(V_r,\; \dot{V}_{lat}(t_l) - \dfrac{g_L}{C}(V_r - V_T) - \dfrac{b}{C}\right) & \text{for the initial state} \\[6pt] \left(V_r,\; \dfrac{1}{C}\left[-g_L (V_r - E_L) + I - w_r^{\infty}\right]\right) & \text{for the steady state} \end{cases} \qquad (35)

In the following, we derive analytical solutions for the onset and steady-state firing rates for the first regime defined by Δ > 0 as an example. Analogous expressions for the cases Δ = 0 and Δ < 0, respectively, can be directly deduced. In order to calculate the onset firing rate, we first have to consider the latency, defined as the time delay up to the first spike time. By using the boundary conditions for the latency and solving the resulting system of equations, we obtain the following equation for the membrane potential Vlat(t):

V_{lat}(t) = \left(\frac{I}{C} - E_L^{*}\,\lambda_2\right) \frac{e^{\lambda_1 t} - e^{\lambda_2 t}}{\lambda_1 - \lambda_2} + E_L^{*}\, e^{\lambda_2 t} + \frac{s}{n}, \qquad (36)

with E_L^* = E_L − s/n. The latency time tl can be calculated by taking Vlat(tl) = VT:

V_T^{*} = \left(\frac{I}{C} - E_L^{*}\,\lambda_2\right) \frac{e^{\lambda_1 t_l} - e^{\lambda_2 t_l}}{\lambda_1 - \lambda_2} + E_L^{*}\, e^{\lambda_2 t_l}, \qquad (37)

where V_T^* is given by V_T^* = VT − s/n. On the basis of the solution (36), V̇_lat(tl) can be derived. A closed-form expression for the onset as well as the steady-state interspike interval (in the following both denoted by t_ISI, i.e., t_ISI = t_ISI^O or t_ISI = t_ISI^∞, respectively) can be derived with the boundary conditions V(0) = Vr and V(t_ISI) = VT:

V_T^{*} = L\!\left(V_r, \dot{V}(0)\right)\left(e^{\lambda_1 t_{ISI}} - e^{\lambda_2 t_{ISI}}\right) + V_r^{*}\, e^{\lambda_2 t_{ISI}}, \qquad (38)

where V_r^* = Vr − s/n, V_T^* = VT − s/n, and L(Vr, V̇(0)) denotes L_O for the initial regime and L_∞ for the steady state, respectively. In the case of the initial state, L_O is given by

L_O = \frac{1}{\lambda_1 - \lambda_2}\left[\dot{V}_{lat}(t_l) - \frac{g_L}{C}(V_r - V_T) - \frac{b}{C} - \lambda_2 V_r^{*}\right]. \qquad (39)

The calculation of the steady-state firing rate necessitates a further condition:

w(t_{ISI}^{\infty}) = w_r^{\infty} - b = -g_L (V_T - E_L) + I - C\,\frac{dV}{dt}(t_{ISI}^{\infty}). \qquad (40)

Using this, L_∞ is a linear function of w_r^∞:

L_{\infty} = \frac{1}{\lambda_1 - \lambda_2}\left[\frac{1}{C}\left(w_V(V_r) - w_r^{\infty}\right) - \lambda_2 V_r^{*}\right]. \qquad (41)

w_r^∞ in turn is uniquely given by (40) with

\frac{dV}{dt}(t_{ISI}^{\infty}) = L_{\infty}\left(\lambda_1 e^{\lambda_1 t_{ISI}^{\infty}} - \lambda_2 e^{\lambda_2 t_{ISI}^{\infty}}\right) + \lambda_2 V_r^{*}\, e^{\lambda_2 t_{ISI}^{\infty}}. \qquad (42)

Equations (38) and (39) determine the initial interspike interval t_ISI^O, and equations (38), (41), and (42) can be combined into one equation which gives a closed-form expression for the steady-state interspike interval. The onset and the steady-state firing rates are then directly given by the inverse of these expressions.
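As a worked illustration of this derivation, the Python sketch below evaluates the adaptive LIF onset firing rate for the Δ > 0 case by solving equations (37) and (38) numerically with a root finder. The parameter values and function names are assumptions chosen only to keep the example self-contained, and the root search assumes the trajectory crosses VT once in this suprathreshold regime.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative adaptive-LIF parameters (SI units, hypothetical; chosen so that Delta > 0)
C, gL, EL = 150e-12, 10e-9, -70e-3
VT, Vr, a, b, tau_w = -50e-3, -60e-3, 2e-9, 20e-12, 200e-3
tau_m = C / gL

def onset_rate_aLIF(I):
    """Onset firing rate of the adaptive LIF via eqs. (32) and (35)-(39), Delta > 0 case."""
    m = 1.0 / tau_m + 1.0 / tau_w
    disc = m**2 - 4.0 * (a + gL) / (C * tau_w)            # Delta of eq. (32)
    assert disc > 0, "sketch covers only the Delta > 0 regime"
    lam1, lam2 = 0.5 * (-m + np.sqrt(disc)), 0.5 * (-m - np.sqrt(disc))
    s = (I + (a + gL) * EL) / (tau_w * C)
    n = (a + gL) / (tau_w * C)
    ELs, VTs, Vrs = EL - s / n, VT - s / n, Vr - s / n    # "starred" quantities

    # latency t_l: first crossing of V_T from rest, eq. (37), found by root search
    c_lat = (I / C - lam2 * ELs) / (lam1 - lam2)
    g_lat = lambda t: c_lat * (np.exp(lam1 * t) - np.exp(lam2 * t)) + ELs * np.exp(lam2 * t) - VTs
    T = 50.0 / min(abs(lam1), abs(lam2))                  # far beyond the slowest time scale
    t_l = brentq(g_lat, 0.0, T)
    dV_lat = (c_lat * (lam1 * np.exp(lam1 * t_l) - lam2 * np.exp(lam2 * t_l))
              + lam2 * ELs * np.exp(lam2 * t_l))

    # onset ISI: boundary conditions of eq. (35), L_O from eq. (39), root of eq. (38)
    dV0 = dV_lat - gL / C * (Vr - VT) - b / C
    L_O = (dV0 - lam2 * Vrs) / (lam1 - lam2)
    f38 = lambda t: L_O * (np.exp(lam1 * t) - np.exp(lam2 * t)) + Vrs * np.exp(lam2 * t) - VTs
    return 1.0 / brentq(f38, 1e-6, T)

print("onset rate at 300 pA: %.1f Hz" % onset_rate_aLIF(300e-12))
```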

The model cannot reproduce as many spiking patterns as the AdEx, but shows notable features like tonic spiking, adaptation, transient spiking, or delayed acceleration (Figures 3D,E). Based on this formalism, piecewise defined linear models may increase the variety of spiking patterns while remaining mathematically tractable.

2.3. Model Fitting Method

Estimating initial parameter values: Most of the parameters of the model are subject to a fitting procedure. The only exceptions are the capacitance C and the upper limit of the membrane potential Vup which are set by explicit constraints. C is fixed by the relation C = gL·τm, where the membrane time constant τm is directly obtained from subthreshold recordings. Specifically, τm is extracted from exponential fits to the initial part (~50–100 ms) of voltage traces after applying small hyper- or depolarizing current steps. A single exponential term fitted the decay (rise) reasonably well for our recordings. Vup, on the other hand, does not significantly affect the model dynamics and was coupled to the fitted slope factor ΔT through Vup = 10·ΔT − 40 to avoid numerical problems.

Where possible, initial values were taken directly from the data: EL and gL are extracted as the offset and slope, respectively, of the linear fit to the sub-rheobase IV curve in its approximately linear range. Furthermore, an estimate of the threshold VT can be derived from the IV curve of the simpAdEx model. As the cell does not spike and thus w = 0, the current I that results in a given voltage V can be calculated by searching for the zero crossings of the V-nullcline wv. From equation (22), it follows that

I_0 = g_L (V_T - E_L) - g_L \Delta_T. \qquad (43)

Equation (43) suggests that a reasonable estimate for VT is given by the voltage at the rheobase current I0. A log-function is fitted to the empirical fI curve for estimating the rheobase, and the corresponding value for V is used as initial estimate for VT.
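A minimal sketch of these initial estimates in Python (array arguments, the exact form of the log-function, the fit bounds, and the handling of units are assumptions for illustration, not the original routine):

```python
import numpy as np
from scipy.optimize import curve_fit

def initial_estimates(I_sub, V_sub, I_steps, f_steady):
    """Sketch of the initial parameter estimates. I_sub/V_sub: sub-rheobase IV data,
    I_steps/f_steady: steady-state fI data (consistent units assumed throughout)."""
    # E_L and g_L from a linear fit to the sub-rheobase IV curve: V = E_L + I/g_L
    slope, offset = np.polyfit(I_sub, V_sub, 1)
    gL, EL = 1.0 / slope, offset
    # rheobase I_0 from a log-type fit to the suprathreshold part of the fI curve
    spk = f_steady > 0
    logf = lambda I, c, I0: c * np.log(I / I0)            # one possible log-form
    (c, I0), _ = curve_fit(logf, I_steps[spk], f_steady[spk],
                           p0=[10.0, 0.9 * I_steps[spk].min()],
                           bounds=([0.0, 0.0], [np.inf, I_steps[spk].min()]))
    VT = EL + I0 / gL   # voltage reached at the rheobase current on the linear IV fit
    return EL, gL, I0, VT
```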

The remaining parameters ΔT, τw, Vr, and b do not have a clearly defined physiological equivalent (although ΔT may be mainly related to the fast Na+ channel activation). Based on experience, the slope factor ΔT is usually around 1–3 mV (Fourcaud-Trocmé et al., 2003; Clopath et al., 2007; Badel et al., 2008a; Naud et al., 2008), so 2 mV was used as an initial estimate. A rough estimate of the reset value Vr was taken directly from the voltage traces upon step currents. For b, we defined a lower bound by the inverse of the time constant τeff of the adaptation, as small values of b lead to a very slow adaptation.

The fitting procedure: All software used for parameter tuning and model validation was written in Matlab and C and will be made publicly available at www.bccn-heidelberg-mannheim.de. Since closed-form expressions are available for the final interspike intervals, it is no longer necessary to numerically integrate the full two-dimensional (V,w)-trajectory up to the point where a steady state in spiking activity has been reached. In addition, equations (24–27) allow for faster numerical schemes that evaluate the integrals only at specific points, like adaptive Lobatto quadrature (Press et al., 2007) as provided by the built-in Matlab function “quadl.” Our fitting algorithm has three consecutive steps. During the first and the second step, the parameters are first roughly tuned by fitting only three points of the IV curve and the fI curves, respectively, in order to refine the initial estimates. More precisely, uniformly distributed pseudorandom numbers on an interval defined by ±20% of the initial parameter estimates are used to fit two data points near the rheobase of the IV curve and one point far away from it, in order to capture roughly the subthreshold behavior and the onset of the fI curves. During this optimization step, only the parameters gL, EL, ΔT, and VT are tuned. The membrane capacitance C is then recalculated by C = τm·gL since τm is assumed to be fixed. Next, the slopes of the simulated onset and steady-state fI curves are adjusted simultaneously by using three data points (the rheobase, one close to the onset and the last defined point) of the real fI curves. In this process, the parameters b, τw, and Vr are optimized and the other parameters are held fixed because the slope is mainly determined by these three. Since we do not have initial values for the first two parameters, several combinations of systematically chosen initial values for τw and b are tested. Subsequently, the fitting errors are compared and the pair (τw, b) corresponding to the smallest error is taken for the final fitting step. In the final step, all data points of the three curves are fitted simultaneously, allowing all parameters except ΔT, Vup, gL, and C to be optimized. We emphasize that this whole fitting procedure is completely automatized and does not require any parameter setting/exploration or pre-inspection of data by the user: It requires nothing more than provision of the training data, from which the routine extracts initial estimates as described above, and then automatically cycles through all the steps above until a fixed convergence criterion is reached.

The optimization function is given by

q = w_1 \left\| f_{\infty,\,target} - f_{\infty,\,model} \right\|^{2} + w_2 \left\| f_{O,\,target} - f_{O,\,model} \right\|^{2} + w_3 \left\| I_{target} - I_{model} \right\|^{2} \qquad (44)

The weights w = (w1,w2,w3) may be chosen to balance the relative importance of the three contributions. We set w = (5,1,4) as from our observations the steady-state fI curve appeared to be most important for the spiking behavior. The whole procedure is repeated at least five times starting from different initial estimates. In all cases this took less than 15 min on a single 2.4 GHz Intel(R) Xeon(R) CPU E5620 (for comparison with the full AdEx, see Table 2). The optimized parameter configuration with the lowest overall fitting error is then used to predict the spike times in test sets consisting of voltage traces upon fluctuating-current input. Generally, we found that different initial estimates resulted in similar parameter configurations.
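For concreteness, a hypothetical sketch of this weighted objective; the model-prediction functions f_inf_model, f_on_model, and I_model are assumed wrappers around the closed-form expressions derived above, and the optimizer call is only one possible choice.

```python
import numpy as np
from scipy.optimize import minimize

w1, w2, w3 = 5.0, 1.0, 4.0   # weights used for the fits reported here

def objective(theta, data, f_inf_model, f_on_model, I_model):
    """Weighted squared error of eq. (44) over the three training curves."""
    return (w1 * np.sum((data['f_inf'] - f_inf_model(theta, data['I_steps']))**2)
            + w2 * np.sum((data['f_on'] - f_on_model(theta, data['I_steps']))**2)
            + w3 * np.sum((data['I_sub'] - I_model(theta, data['V_sub']))**2))

# e.g. res = minimize(objective, theta0,
#                     args=(data, f_inf_model, f_on_model, I_model), method='Nelder-Mead')
```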

2.4. Performance Measure

We used two previously introduced performance measures to evaluate the prediction quality of our model more formally: The coincidence rate Γ and the Victor–Purpura measure DVP(q). The coincidence rate (Kistler et al., 1997; Gerstner and Kistler, 2002; Jolivet et al., 2004, 2008) basically describes the percentage of correctly predicted spike times with precision Δ, taking stochastic coincidences into account

\Gamma = \frac{N_{coinc} - \langle N_{coinc} \rangle}{0.5\,\left(N_{model} + N_{data}\right)} \cdot \frac{1}{N_{norm}}, \qquad (45)

where Ncoinc is the number of coincidences within ±Δ, and 〈Ncoinc〉 = 2νΔNdata is the expected number of coincidences generated by a homogeneous Poisson process with rate ν. The variables Nmodel and Ndata denote the number of spikes in the spike trains generated by the model and the real neuron, respectively, and Nnorm = 1 − 2νΔ is a normalization factor (Kistler et al., 1997; Gerstner and Kistler, 2002; Jolivet et al., 2004, 2008).
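A sketch of this coincidence measure in Python; the greedy one-to-one matching of spikes within ±Δ and the use of the model firing rate for ν are implementation assumptions.

```python
import numpy as np

def coincidence_rate(t_data, t_model, delta=0.02, duration=None):
    """Coincidence rate Gamma of eq. (45); spike times in s, delta = coincidence window."""
    t_data, t_model = np.sort(t_data), np.sort(t_model)
    used = np.zeros(len(t_model), dtype=bool)
    n_coinc = 0
    for t in t_data:                   # match each data spike to at most one model spike
        idx = np.where(~used & (np.abs(t_model - t) <= delta))[0]
        if idx.size:
            used[idx[0]] = True
            n_coinc += 1
    if duration is None:
        duration = max(t_data[-1], t_model[-1])
    nu = len(t_model) / duration               # rate of the Poisson reference process
    expected = 2.0 * nu * delta * len(t_data)  # <N_coinc>
    n_norm = 1.0 - 2.0 * nu * delta
    return (n_coinc - expected) / (0.5 * (len(t_model) + len(t_data)) * n_norm)
```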

The Victor–Purpura measure (Victor and Purpura, 1996) is a metric based on spike times or interspike intervals and can be understood as a cost function that specifies how much effort is needed to transform one spike train into the other. The measure depends on a cost parameter q that describes the relative sensitivity of the metric to the precise timing of spikes. The algorithm to calculate the measure is best described by the elementary steps that are allowed: adding or deleting a spike has a cost of 1, and shifting a spike by the amount Δt has a cost of q·|Δt|. Following Kreiman et al. (2000), we normalize the Victor–Purpura metric by the total number of spikes in both spike trains to ensure that the value is always between 0 and 1. To facilitate comparability with the coincidence rate, we rearrange the measure such that a value of 0 is equivalent to no similarity and 1 indicates the best scenario:

VP(q) = 1 - \frac{D_{VP}(q)}{N_{model} + N_{data}}. \qquad (46)
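A minimal sketch of the normalized measure, using the standard dynamic-programming recursion for the spike-time-based D_VP(q) (q in units of 1/s if spike times are in seconds):

```python
import numpy as np

def vp_similarity(t_data, t_model, q):
    """Normalized Victor-Purpura similarity, eq. (46); q = cost per second of shifting a spike."""
    s1, s2 = np.sort(t_data), np.sort(t_model)
    n1, n2 = len(s1), len(s2)
    D = np.zeros((n1 + 1, n2 + 1))
    D[:, 0] = np.arange(n1 + 1)                # deleting all spikes of one train
    D[0, :] = np.arange(n2 + 1)
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            D[i, j] = min(D[i - 1, j] + 1.0,                                 # delete a spike
                          D[i, j - 1] + 1.0,                                 # add a spike
                          D[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))  # shift a spike
    return 1.0 - D[n1, n2] / (n1 + n2)
```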

Both these measures were also used to determine the intrinsic reliability of experimentally recorded cells, i.e., the coincidence of spikes between different identical repetitions of the same stimulus injected into the same neuron. We report the intrinsic reliability as the average over all pairs of repetitions.

2.5. Experimental Preparation and Electrophysiological Recordings

Coronal cortical slices (250–300 μm) containing the prelimbic/intralimbic region of the medial PFC were prepared from the brains of 44–55 days old BL/6 mice and Sprague Dawley rats following decapitation, in accordance with German animal welfare laws and institutional regulations. The brains were rapidly dissected and brain slices were prepared in cold (4°C), oxygenated (carbogen, 95% O2–5% CO2) ACSF containing (in mM): 124 NaCl, 3 KCl, 1.8 MgSO4, 1.6 CaCl2, 10 Glucose, 1.25 NaH2PO4, and 26 NaHCO3. Slices were then transferred to a chamber containing ACSF at room temperature. Submerged slices in the recording chamber were continuously perfused with oxygenated ACSF. Neurons were identified based on their somatic morphology and the orientation of their dendrites (visualized using differential interference contrast microscopy). Pyramidal cells had triangular shaped somas and prominent apical dendrites (Mason et al., 1991; Schröder and Luhmann, 1997), bitufted cells elongated somas with one or two prominent, vertically orientated dendrites (Reyes et al., 1998; Rozov et al., 2001), and fast-spiking cells had round cell bodies with multipolar dendrites (Connors and Gutnick, 1990). Neuronal identity was further confirmed by their characteristic action potential firing patterns in response to stepped depolarization (Connors et al., 1982; Mason and Larkman, 1990; Chagnac-Amitai et al., 2004). Thick-walled borosilicate pipettes (6–8 MΩ tip resistance) were used for the whole cell patch-clamp recordings and were filled with (in mM): 105 K-gluconate, 30 KCl, 10 HEPES, 4 MgATP, 0.3 GTP, and 10 Phosphocreatine. Recordings were made using an Axoclamp 2B amplifier (Axon Instruments, Union City, CA, USA). Data was filtered at 2 kHz and digitized at 10–20 kHz with an ITC-16 (InstruTech, Port Washington, NY, USA) and analyzed offline using customized Matlab analysis routines (MathWorks, MA, USA). All recordings were performed at 33–36°C. Series resistance was not compensated in most of the recordings, and no adjustments to membrane potential were made. In all experiments 50 μM Picrotoxin (GABAA receptor blocker), 50 μM DNQX (AMPA receptor antagonist), and 50 μM DL-2-amino-5-phosphonovaleric acid (NMDA receptor blocker) was added to the recording solution in order to minimize synaptic noise.

For the recording of fI curves and sub-rheobase IV curves, current steps from −200 up to 600 pA were applied for 25 s each. The depolarizing inputs were preceded by a brief hyperpolarizing current step of −50 pA for monitoring input resistance stability. Interleaved with these constant current protocols, fluctuating-current inputs were applied to the cell for 25 s if the mean current was greater than or equal to the rheobase and 45 s otherwise in order to generate a reasonable number of spikes. The step- and fluctuating-current parts of the full recording protocol were repeated about 2–8 times in total (with each repeat identical; each part took about 4–6 min, separated by ~30 s), to account for physiological variability and to obtain estimates of cellular reliability. A total of ~100 prefrontal cortex cells were recorded using this type of protocol. Fluctuating currents were constructed from two Poisson spike trains mimicking 100 excitatory input neurons each firing at 10 Hz and 200 inhibitory neurons each at 20 Hz, respectively. The spike trains were filtered by an artificial synapse modeled by double exponential functions with the kinetics of AMPA, GABAA, and NMDA currents. The parameters for the synaptic kinetics (τon = [0.5, 1, 2.5] ms and τoff = [2, 6, 95] ms for AMPA, GABAA, and NMDA, respectively), and the non-linearity of the NMDA current (see Jahr and Stevens, 1990), were taken from Durstewitz (2003). The mean and SD of the total current input were controlled by changing synaptic weights, i.e., by the effect each spike has on the three different current components. Model parameters were fitted to the fI and sub-rheobase IV curves, and prediction performance was evaluated on the block of fluctuating current sets immediately following the respective training set.
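A simplified Python sketch of this stimulus construction: Poisson input spike counts are filtered with double-exponential synaptic kernels using the AMPA and GABA_A time constants given above. The NMDA component, its voltage-dependent non-linearity, and the exact weight calibration are omitted here; the weight parameters are hypothetical and would have to be tuned to obtain the current means and SDs used in the experiments.

```python
import numpy as np

def fluctuating_current(T=25.0, dt=1e-4, w_exc=5e-12, w_inh=3e-12, seed=0):
    """Simplified sketch of the fluctuating stimulus: Poisson input counts filtered with
    double-exponential kernels (AMPA- and GABA_A-like components only)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    spikes_exc = rng.poisson(100 * 10.0 * dt, n)   # 100 excitatory inputs at 10 Hz each
    spikes_inh = rng.poisson(200 * 20.0 * dt, n)   # 200 inhibitory inputs at 20 Hz each

    def double_exp_kernel(tau_on, tau_off):
        tk = np.arange(0.0, 10.0 * tau_off, dt)
        k = np.exp(-tk / tau_off) - np.exp(-tk / tau_on)
        return k / k.max()

    I_exc = w_exc * np.convolve(spikes_exc, double_exp_kernel(0.5e-3, 2e-3))[:n]
    I_inh = w_inh * np.convolve(spikes_inh, double_exp_kernel(1e-3, 6e-3))[:n]
    t = np.arange(n) * dt
    return t, I_exc - I_inh                        # inhibition enters as a negative current
```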

3. Results

We investigated a large number of data sets (N ~ 100 recorded prefrontal cortex neurons) in order to characterize the potential of our simplified AdEx model (the AdEx model was originally developed in Brette and Gerstner, 2005). This was done by fitting parameters of the model on onset and steady-state fI and sub-rheobase IV curves, and subsequently evaluating the prediction performance on test sets consisting of in vivo-like fluctuating input currents (Destexhe et al., 2001, 2003).

The onset fI curve fO(I) captures the initial response of the non-adapted cell, while the steady-state fI curve f(I) reflects the behavior of the adapted cell for a given level I of mean input. The sub-rheobase IV curve in addition captures subthreshold and passive response properties of the cell. This training set is easily obtained by standard current-clamp protocols with hyper- and depolarizing constant current steps (see Materials and Methods) and thus widely available (Benda and Herz, 2003). The fitting procedure is based on closed-form expressions for the fI curves we had derived from an approximation to the AdEx model (see Materials and Methods), and therefore does not require to simulate the full underlying system of differential equations up to the point where a steady-state in spiking activity has been reached. It also allows for faster numerical schemes. In consequence, the resulting fitting scheme is about one to two orders of magnitude faster than methods that require numerical integration of the underlying system of differential equations (see Table 2 and below).

Fluctuating inputs mimicking synaptic bombardment (Destexhe and Paré, 1999; Destexhe et al., 2001, 2003) had been used previously for the purpose of model fitting (Jolivet et al., 2006, 2008; Clopath et al., 2007; Badel et al., 2008a,b; Naud et al., 2008; Gerstner and Naud, 2009). In our case, however, these traces will be used purely or mainly for checking the prediction performance of the model, i.e., either no information at all (Section 3.1) or only total spike count (but not spike time) information (Section 3.2) from these data will be harvested in model fitting. In vivo-like fluctuating-current test sets were probed within a wide range of SD, from 20 to 550 pA. For the preparation used here (adult rodent PFC), however, only the lower portion of this spectrum (SD of ~35–50 pA) produced voltage fluctuations in the recorded cells (σV ≤ 6 mV, range ≤ 30 mV) that were most consistent with in vivo data [σV ~ 1–5 mV, range ~ 10–20 mV, during awake activity or up-states as extracted from (Steriade et al., 2001; Timofeev et al., 2001; intracellular recordings); and (London et al., 2010; patch-recordings)]. To further verify these numbers, we also analyzed in vivo patch-clamp recordings from the (anesthetized) adult rodent PFC (kindly provided by Dr. Thomas Hahn, Central Institute of Mental Health and BCCN Heidelberg-Mannheim). The voltage SD during up-states ranged from ~2 to 6 mV (<σV> ~ 3.5 mV; voltage range ~ 10–20 mV) in these data, in agreement with the values we have extracted from the literature. In contrast, fluctuating-current injections into adult PFC cells recorded in vitro with σ = 100 pA already resulted in voltage fluctuations (σV ~ 11.9 mV, range ~ 45–50 mV) that clearly exceeded the range observed in vivo. Thus, fluctuating test stimuli with σ = 35–50 pA were deemed to be the ones most relevant to the in vivo setting, and hence most of the subsequent discussion will focus on this range.

3.1. Performance of the simpAdEx on Electrophysiologically Recorded Pyramidal Cells and Interneurons

Pyramidal cell (N ~ 90) and interneuronal (N ~ 10) recordings in vitro were obtained from layers 3, 5, and 6 of the adult rat or mouse medial prefrontal cortex (see Materials and Methods). Figure 4 shows a few examples of training set fits and corresponding test set performances of the model on data from one bitufted interneuron (Figure 4A) and from pyramidal neurons in layers 3 (Figure 4B) and 5 (Figure 4C). The empirical fI curves usually cover the whole range of spike rates up to the point of depolarization block (i.e., where spike-generating Na+ channels cannot recover from inactivation anymore) and were fitted very well in each case. Comparisons of the model and real cell test set voltage traces upon fluctuating currents also revealed an often remarkably high rate of precise spike coincidences (as in the examples shown), considering that these fluctuating test sets are very different from the data that had been used for adjusting the model. Even though the model spike reset points are often below the experimentally measured ones (see also Clopath et al., 2007; Badel et al., 2008b), the model V-traces are almost always able to catch up with the original data before the next spike is reached. A missed or additional spike only leads to very transient deviations and has no longer-term effects. Thus, in general, spike times within the empirically recorded traces are predicted quite accurately. Although not a major objective of the present study, for the pyramidal cells also the correlations between model and measured subthreshold voltage traces (with spikes cut out in ±10 ms windows) were quite high, with a mean of 0.76 (n = 30 from 3 cells) when normalized to the intrinsic reliability of the empirical cells (i.e., the membrane potential correlations between different repetitions of the same trace). For fast-spiking interneurons these fits were substantially worse (normalized <r> = 0.36, n = 31 from 3 cells) for reasons discussed below (Section 4.3).


Figure 4. Training set fits and example test set performance for (A) a bitufted interneuron (<Γmodcell> = 1.2), (B) a layer-3 pyramidal cell (<Γmodcell> = 0.78) and (C) a layer-5 pyramidal neuron (<Γmodcell> = 0.9) from the rodent prefrontal cortex. The training set consisting of the onset and steady-state fI curves as well as the sub-rheobase IV curve (left panel), and the test set (right panel) consisting of a voltage trace upon a fluctuating input current are given in black [(A) rheobase ≈ 50 pA, σ = 35 pA, μ = 15 pA; (B) rheobase ≈ 80 pA, σ = 50 pA, μ = 50 pA; (C) rheobase ≈ 50 pA, σ = 100 pA, μ = 25 pA]. The corresponding fits by the simpAdEx are given in red (steady-state f-I curve) and blue (onset fI curve). The spike trains (top, right panel) illustrate the variability in the recorded cell responses to identical repetitions of the same fluctuating-current input (black and gray), together with the spiking responses of the model (red). Insets show zoom-ins on the subthreshold regime (t-interval: 500 ms, V-interval: 30 mV).

Figure 5A (black bars) summarizes all spike prediction results for different layers and cell types in rats and mice, across mean-input currents both above and below rheobase, and across a range of different SD (σ ∈ {20,35,50} pA; for rat L5 cells data contain in addition test sets with σ ∈ {100,150,200,250} pA). The performance measures were normalized by the intrinsic reliabilities of the cells as in previous studies (Jolivet et al., 2006, 2008; Clopath et al., 2007; Badel et al., 2008a,b; Naud et al., 2008; Gerstner and Naud, 2009), since in general the agreement between different spike train repetitions upon the same input from a real cell sets an upper bound on the prediction performance we may expect from a model. Only cells with intrinsic reliability ≥0.2 were considered in the present analysis (cf. Jolivet et al., 2008). When averaging across all the group means in Figure 5A, the coincidence rate Γ for a coincidence window Δ = 20 ms was about 0.74, and 0.64 for Δ = 10 ms [VP(q = 4/<ISI>) = 0.82; VP(q = 0.125 ms−1) = 0.67].


Figure 5. Summary statistics for model performance on test sets from prefrontal cortical cells recorded in vitro. (A) Performance measure averaged across all test sets (σ ∈ {20,35,50} pA; in addition σ ∈ {100,150,200,250} pA for rat layer-5 cells) for different species, cell types and layers calculated by the normalized coincidence rate <Γmodcell> for a window Δ = 20 ms without (black) and with (gray) a scaling factor that compensates for potential firing rate differences. The number of test sets investigated is given in parentheses. (B) Performance measure as a function of the mean input; points are averages across the two SD σ ∈ {35,50} pA [taken from the rat data shown in (A)] without (black) and with (gray) scaling factor. Dashed vertical: average rheobase of the cells shown. (C) Averaged prediction performance for layer-5 pyramidal cells from rats (N = 14 for σ = 50 pA; N = 6 for σ > 50 pA) as a function of the SD (mean current μ ≤ 25 pA) without (black) and with (gray) optimal scaling factor. All performances are given relative to the intrinsic reliabilities Γcell of the cells. Error bars = SEM. M_L5, mouse layer-5 pyramidal cell; M_L3, mouse layer-3 pyramidal cell; M_L6, mouse layer-6 pyramidal cell; M_FS, mouse fast-spiking interneuron; R_L5, rat layer-5 pyramidal cell; R_L3, rat layer-3 pyramidal cell; R_BT, rat bitufted interneuron.

Figure 5B shows how the model performance depends on the mean input μ averaged over two different SD σ ∈ {35,50} pA: It seems that prediction performance is slightly better for lower μ close to or less than the rheobase. Thus, one may conclude that above the rheobase further mechanisms play a role that are not captured by the model that well. In particular, fluctuating currents with a large mean μ might contain more current events that drive the real cell close to or beyond the depolarization block. Hence, one explanation of the decreasing performance with higher μ might be that the model lacks an explicit Na+-channel inactivation mechanism. In vivo, however, spiking rates of prefrontal neurons are usually quite low even upon stimulus presentation (<10–20 Hz; Margrie et al., 2002; Lee et al., 2006; Lapish et al., 2008; Durstewitz et al., 2010), such that firing regimes beyond the rheobase and close to depolarization block may not be very physiological anyway (in fact, a common idea is that cortical neurons in vivo reside in a balanced regime right below the spiking threshold; van Vreeswijk and Sompolinsky, 1996; Destexhe et al., 2003; Renart et al., 2006). Figure 5C in addition shows that the prediction performance was also consistently high in layer-5 pyramidal cells across a wider range of variances for mean currents below the rheobase (μ < 25 pA).

Table 1 summarizes the parameter estimates for the simpAdEx for pyramidal cells in layer 3 and 5, and for fast-spiking interneurons.


Table 1. Statistics of parameter estimates of the simpAdEx for different PFC layer 3 (L3) and 5 (L5) pyramidal cells (PC) and fast-spiking (FS) interneurons.

3.2. Optimizing Spike-Time Prediction by Compensating Firing Rate Variations

Although, as shown in the previous section, in many cases the simpAdEx model adjusted purely on the basis of constant-current-step protocols performed quite well on the independent fluctuating test sets, in other cases the model appeared to match the empirical spike trains less well (Figure 6A). We noticed, however, that these cases most often were not due to an inability of the model to capture the empirical spike times per se. That is, whenever the model and the real cell both elicited a spike around a particular time, these were often precisely aligned (as Figure 6A illustrates). Rather, the total number of spikes elicited by the model and real cells deviated in these situations (the source of these deviations will be investigated further below), and as a consequence of this some of the spikes produced in one preparation (model or real cell) had no counterpart in the other. Therefore, to examine to which levels model performance could be pushed based on firing rate information alone, a constant scaling factor s for the input I(t) [see equation (20)] was introduced. This multiplicative factor was determined solely by matching the total number of spikes within the “recording periods” of our simple model and of the physiological cells, that is without any other adjustments that would aim to capture the precise spike times (other parameters of the model could be scaled instead, e.g., EL, which comes down to adjusting an additive constant). As exemplified in Figure 6A, compensating the mismatch in model and empirical test trace firing rates by setting the scaling factor suffices to bring the spike trains to almost perfect agreement. The gray bars and markers in Figure 5 summarize the results across all data sets also studied in the previous section, demonstrating that in general the introduction of a scaling factor based solely on firing rate information improves spike-time predictions. With a scaling factor in place, Figure 5 also contains average performance measures slightly above one [formally the coincidence rate defined by equation (45) can indeed be greater than 1; see Naud et al., 2011]. This indicates that adjusting the scaling factor on the test traces may result in model – real cell agreements which are higher than the agreement between different repetitions from the same real neuron, as explored further below. One may argue, of course, that our “test sets” in this case are not true test sets anymore, as still information from these traces (spike count) was used to adjust one of the model parameters. For achieving a comparable level of spike-time prediction, however, it is also sufficient to adjust the scaling factor just on an initial segment of the fluctuating trace, and then use the remaining trace as a truly independent test set (example in Figure 6B). As Figure 6C shows, the estimate for the scaling factor quickly converges after a few dozen spikes. Consequently, if any information from fluctuating traces may be harvested at all, prediction performance can still be driven to very high levels on test set bits not used at all for adapting model parameters (the conditions required in the INCF contest proposed by Gerstner and Naud, 2009).
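A sketch of how such a scaling factor could be determined from spike counts alone; simulate_counts is an assumed helper that returns the model's spike count on the fluctuating trace for a given input scaling s, and the grid-search range is an arbitrary choice for illustration.

```python
import numpy as np

def fit_scaling_factor(n_spikes_data, simulate_counts, s_grid=np.linspace(0.7, 1.3, 61)):
    """Choose the input scaling factor s of eq. (20) so that the model's total spike count
    on the test trace matches that of the recorded cell; no spike-time information is used."""
    counts = np.array([simulate_counts(s) for s in s_grid])
    return s_grid[np.argmin(np.abs(counts - n_spikes_data))]
```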


Figure 6. Illustration of spike prediction performance under various conditions with and without scaling factor. (A) Example where test set performance of the simpAdEx for a layer-5 pyramidal neuron (red; <Γmodcell> = −0.02) is significantly improved by including a scaling factor (blue; <Γmodcell> = 0.79). Original traces are given in black (σ = 50 pA, μ = 25 pA). (B) Performance of the simpAdEx on a test set with a scaling factor only adjusted on the first half of the fluctuating trace (not shown) for a fast-spiking interneuron from rodent prefrontal cortex (σ = 50 pA, μ = 50 pA). (C) The optimal scaling factor s as a function of the number of spikes used to determine s. Six different test sets are shown in different shades of gray for a layer-5 pyramidal neuron. (D) Test set performance of the simpAdEx for a layer-5 pyramidal neuron (without scaling factor) under high input variation modeled as an Ornstein–Uhlenbeck process (σ = 400 pA, μ = 25 pA).

One potential factor that may contribute to the firing rate deviations compensated by the scaling factor are the fluctuations that naturally occur across different experimental repetitions of the test trace in vitro. In fact, we observed that the firing rates during different identical fluctuating test repetitions within the same experimentally recorded cell systematically depend on the resting potential right before the test set application [both overall positive (<r> ~ 0.6, p < 5 · 10−32) and negative (<r> ~ −0.56, p < 10−18) correlations were observed; across all sets: <r2> ~ 0.43, p < 0.002 according to a permutation bootstrap test]. This suggests that the scaling factor may partly compensate for experimental noise, that is fluctuations in precise resting conditions (potentially associated with variations in ionic milieu that occur across time) which cause corresponding fluctuations in cell excitability and firing rates. Indeed, not surprisingly, just like the empirical firing rates, the magnitude of the scaling factor needed was significantly correlated with the experimentally recorded resting potential just before test set application (Figure 7A; <|r|> ~ 0.59, p < 10−20; <r2> ~ 0.43, p < 0.002 according to permutation bootstraps). In our data sets this source of experimental noise may have been particularly problematic because, as already noted above, we have mostly employed lower SD (≤50 pA) for the fluctuating-current inputs than used in most previous studies (≥150 pA; e.g., Clopath et al., 2007; Badel et al., 2008b).


Figure 7. Determinants and implications of the scaling factor in model prediction performance. (A) The optimal scaling factor as a function of the experimentally recorded resting potential for 4 example test sets (gray to black shaded asterisks). Asterisks with the same shade of gray belong to identical repetitions of the same stimulus. (B) The optimal scaling factor as a function of the SD σ averaged across five layer-5 pyramidal cells (mean current μ = 25 pA). K = slope of the linear fit. (C) P–P plot of the distributions F of the firing rate agreement measure between model and neuron Πn,m (without scaling factor) and the firing rate agreement measure between different trials of the neuron Πn,n for a subset of the data (σ ∈ {35,50}, μ ≤ rheobase). The line Fn,n) = Fn,m) is given in red. (D) P–P plot of the distributions F of the model performance Γmod (with scaling factor) and the cell reliability Γcell for a subset of the data (σ ∈ {35,50}, μ ≤ rheobase). The line Fmod) = Fcell) is given in red.

Hence, the impact of variations in resting conditions relative to the voltage fluctuations caused by the stimulus may have been higher than in many previous studies. This is also the reason why we had to allow for somewhat broader time windows (10–20 ms) for detecting spike coincidences than in many previous studies (with time windows more in the range of 5–10 ms). To investigate this issue of comparability with previous studies further, four layer-5 PFC pyramidal cells were recorded using Iinj SD ranging from 250 to 550 pA (cf. Badel et al., 2008a,b) and implementing precisely the same type of random process (Ornstein–Uhlenbeck process) as most frequently employed previously (cf. Rauch et al., 2003; Clopath et al., 2007; Badel et al., 2008a). In these cases, reasonable prediction performance (without including or adjusting a scaling factor) was indeed achieved on the test sets for a coincidence window of only 5 ms [Figure 6D; <Γ> = 0.6, max(Γ) = 0.83].
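
For readers who wish to reproduce this kind of stimulus, a fluctuating current with prescribed mean and SD can be generated with the exact discrete-time update of an Ornstein–Uhlenbeck process. The sketch below is a generic generator; the correlation time constant τ and the other numerical choices are illustrative assumptions, not the values used in our recordings.

    import numpy as np

    def ou_current(mu, sigma, tau, dt, duration, seed=0):
        """Ornstein-Uhlenbeck input current with mean mu (pA), SD sigma (pA) and
        correlation time tau (ms), using the exact update for time step dt (ms)."""
        rng = np.random.default_rng(seed)
        n = int(duration / dt)
        decay = np.exp(-dt / tau)
        noise_sd = sigma * np.sqrt(1.0 - decay**2)   # keeps the stationary SD at sigma
        I = np.empty(n)
        I[0] = mu
        for t in range(1, n):
            I[t] = mu + (I[t - 1] - mu) * decay + noise_sd * rng.standard_normal()
        return I

    # e.g., mu = 25 pA and sigma = 400 pA as in Figure 6D; tau = 5 ms is an assumption
    I_inj = ou_current(mu=25.0, sigma=400.0, tau=5.0, dt=0.1, duration=2000.0)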

Another potential source of the firing rate deviations may be that our training approach does not include any samples explicitly representing input variation, unlike fitting approaches directly based on fluctuating traces. It is known that the spike rate of neurons depends not only on the mean input but also on its variance (Mainen and Sejnowski, 1995), with the result that the shape and slope of fI curves can change considerably with the input (Chance et al., 2002; Rauch et al., 2003). Information about the variance-dependence of neural spiking is not explicitly represented in our training set data, although it may enter implicitly through the fact that both steady-state and transient fI information is used. To investigate the contribution of the input variance in the test sets with mean currents less than the rheobase, the required scaling factor was plotted as a function of the input SD across empirical data sets (N = 5 cells) for which a larger range of SD had been probed. As shown in Figure 7B, the scaling factor is approximately linear in the SD σ, with a relatively shallow slope (~2 · 10−3/pA). Hence, at least within the more physiological regime of below-rheobase mean inputs, its dependence on the input variance appears to be modest, suggesting that at least part of the mechanisms accounting for the variance-dependence of spike rates may have been implicitly captured by our training sets and the intrinsic properties of the model. Also note that this slope implies that for a range of SD that may be considered physiological based on the in vivo analysis preceding Section 3.1 (σI ~ 25–50 pA), the scaling factor may vary by no more than ~5% across this in vivo range (in contrast to the variation caused by different resting conditions, Figure 7A). The conclusion that input variance has a comparatively mild effect on model performance is further reinforced by the observation that spike-time prediction performance itself does not strongly depend on input variance [Figure 5C; if anything, it tends to slightly increase with higher variance and no input scaling (black markers)].
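
As a back-of-the-envelope check of the ~5% figure (our own arithmetic based on the numbers above, not an additional analysis of the data):

    \Delta s \;\approx\; \frac{\mathrm{d}s}{\mathrm{d}\sigma}\,\Delta\sigma \;\approx\; \left(2 \times 10^{-3}\,\mathrm{pA}^{-1}\right)\left(50 - 25\right)\,\mathrm{pA} \;=\; 0.05,

i.e., the scaling factor changes by roughly 5% as σ moves across the estimated in vivo range of 25–50 pA.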

The analyses above demonstrated that spike-time prediction could be further improved simply by compensating for total spike count differences between model and target traces through a scaling factor, and they identified potential experimental sources of the firing rate deviations. For an application of our model to in vivo situations, a crucial question therefore is how severe or limiting these firing rate deviations between model and empirical traces actually are in relation to the physiological variation observed under in vivo-like stimulation conditions (current σ ≤ 50 pA, μ < rheobase; without scaling factor, the overall agreement under these conditions for cells with intrinsic reliability Γcell ≥ 0.2 was <Γ> ~ 0.73 for a coincidence window of 20 ms). We therefore determined the agreement in firing rates (f) by

Πn,m = 1 − |fn − fm| / (fn + fm)

with “n” denoting neural and “m” model firing rates. This was done for various repetitions of the same current input to the same cell as recorded physiologically (Πn,n), and between the model traces (without additional scaling) and the physiological recordings (Πn,m). As already noted in Section 3.1, model predictions should be accepted as reasonably good if they lie within the range of empirical variation, that is, if they compare well with the agreement between different spike train repetitions from experimentally recorded cells under exactly the same input conditions. To quantify this relation, the distribution of model-real cell firing rate agreements Πn,m was compared to the distribution of real cell-real cell agreements Πn,n using a percentile–percentile (P–P) plot (Figure 7C). The P–P graph places data points at coordinates corresponding to the percentiles of the model-real cell (abscissa in Figure 7C) and the real cell-real cell (ordinate in Figure 7C) distributions F into which these points fall. If the two distributions were exactly the same, the P–P plot would follow the line F(Πn,n) = F(Πn,m), while it would range above it if model predictions were better than empirical reliability and below it if they were worse. The graph contains all test sets with mean currents μ below the rheobase and SD σ ∈ {35,50} pA. As Figure 7C (black dots) demonstrates, the model-real cell distribution is well on par with the real cell-real cell distribution and, if anything, tends to range above it for the higher percentiles. This indicates that the model-empirical firing rate agreements are at least as good as the reliability among different identical-input repetitions from the same cell, i.e., almost optimal if set in relation to the empirical variation. That the model firing rate agreements tend to be slightly better than the experimental reliability might be explained by the fact that the models are fit to fI training sets close in time to the corresponding test sets (relative to the temporal spacing between different test sets). Therefore, they may already account for some of the factors causing variation across different experimental repetitions (across time; see above), and hence perform even slightly better than expected from the empirical variation.
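
For concreteness, the following sketch shows one way to compute the agreement measure Πn,m defined above and to build a P–P curve from two samples of agreement values. It is a generic illustration written for this description (function names and toy numbers are ours), not the analysis code used for Figure 7C.

    import numpy as np

    def rate_agreement(f_n, f_m):
        """Agreement measure Pi = 1 - |f_n - f_m| / (f_n + f_m)."""
        return 1.0 - abs(f_n - f_m) / (f_n + f_m)

    def pp_curve(sample_x, sample_y, n_grid=101):
        """Percentile-percentile curve: empirical CDF of sample_x (abscissa)
        against that of sample_y (ordinate), evaluated on a common grid."""
        grid = np.linspace(0.0, 1.0, n_grid)        # Pi lies in [0, 1]
        Fx = np.array([np.mean(np.asarray(sample_x) <= g) for g in grid])
        Fy = np.array([np.mean(np.asarray(sample_y) <= g) for g in grid])
        return Fx, Fy

    # toy usage with made-up firing rates (Hz)
    pi_nm = [rate_agreement(4.0, 4.3), rate_agreement(6.1, 5.6), rate_agreement(2.2, 2.1)]
    pi_nn = [rate_agreement(4.0, 4.6), rate_agreement(6.1, 5.2), rate_agreement(2.2, 2.4)]
    Fx, Fy = pp_curve(pi_nm, pi_nn)
    # points with Fy > Fx lie above the diagonal, i.e., the model-cell agreements
    # tend to be larger than the cell-cell agreements at that percentile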

We may also conjecture from these observations that explicitly adjusting an input scaling factor to precisely match empirical and model trace spike counts could actually result in over-fitting, that is, adjusting the model toward part of the empirical noise rather than capturing the true expectancy of the empirical distribution. This is indeed confirmed by the P–P plot shown in Figure 7D, which illustrates that the distribution F of spike-time agreements Γmod between the model including a scaling factor and the real cells is shifted to consistently higher values compared to the distribution between different repetitions of the same fluctuating stimulus within the same cell (Γcell). That is, for any level of agreement between two spike trains there are more model-real cell coincidence rates ranging above that level than real cell-real cell coincidence rates, indicating that the model with scaling factor performs better than would be expected from the experimental distribution.

In conclusion, the simpAdEx model, adapted solely on the basis of conventional step protocols, performs well in in vivo-like test situations within the bounds of physiological reliability. Inclusion of the scaling factor demonstrates the level of spike-time precision to which the simple model could theoretically be pushed while relying only on firing rate information (plus the sub-rheobase IV curve) for training. However, given that it may also lead to over-fitting, inclusion of this additional parameter is not advisable for practical applications of the model to in vivo network situations.

3.3. Comparison to the Adaptive LIF Model and the Full AdEx

The basic LIF model enjoys great popularity for larger-scale network simulations due to its mathematical simplicity and computational efficiency. In Materials and Methods we derived (exact) closed-form expressions for the onset and steady-state firing rates of the adaptive LIF model, that is, the basic LIF model enhanced by a second linear differential equation implementing a spike-rate adaptation process (see Materials and Methods: 2.2.2 and 2.2.3). The parameters of the model are tuned by the same fitting algorithm on training sets consisting of fI and IV curves as used for the simpAdEx (see Materials and Methods: 2.3). To minimize the effect of experimental noise and enhance comparability between models, potential mismatches in spike counts between model and target traces were largely eliminated through input scaling as introduced in the previous section. In this way the adaptive LIF was adjusted for a subset of 39 different real cells, and its performance on the corresponding fluctuating-current test sets was compared with that of the simpAdEx. Figure 8 shows the training and test results for an example physiological cell. The aLIF, like the simpAdEx, reproduces spike times with high accuracy, although it clearly does not capture the subthreshold dynamics as well. Figure 8D shows the (non-normalized) spike coincidence rates (gamma factors) for the aLIF versus simpAdEx models for different test sets. Despite the further simplification in the aLIF model, its spike prediction performance, once firing rate differences were eliminated, is comparable to that of the simpAdEx for small SD (black dots), while for higher input variance (red dots) the coincidence rates fall below those of the simpAdEx.
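
To make the structure of the aLIF concrete, the sketch below simulates a generic adaptive LIF neuron (a leaky integrator plus one linear adaptation variable with spike-triggered increments) using forward Euler integration. The equations follow the standard textbook form described above; all parameter values are illustrative and are not fitted values from our cells.

    import numpy as np

    def simulate_alif(I, dt=0.1, C=200.0, g_L=10.0, E_L=-70.0, V_th=-50.0,
                      V_r=-65.0, tau_w=150.0, b=30.0):
        """Adaptive LIF: C dV/dt = -g_L (V - E_L) - w + I,  tau_w dw/dt = -w;
        on a spike (V >= V_th): V -> V_r, w -> w + b.
        Units: ms, mV, pA, nS, pF. Returns spike times (ms) and the voltage trace."""
        n = len(I)
        V, w = E_L, 0.0
        V_trace = np.empty(n)
        spikes = []
        for t in range(n):
            dV = (-g_L * (V - E_L) - w + I[t]) / C
            dw = -w / tau_w
            V += dt * dV
            w += dt * dw
            if V >= V_th:
                spikes.append(t * dt)
                V = V_r
                w += b
            V_trace[t] = V
        return np.array(spikes), V_trace

    # toy usage: a 500 ms step of 300 pA shows spike-rate adaptation
    spike_times, V = simulate_alif(np.full(5000, 300.0))
    print(np.diff(spike_times))   # interspike intervals lengthen toward a steady state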


Figure 8. Comparison of the simplified AdEx (simpAdEx) with the adaptive LIF neuron (aLIF) and the full AdEx. The onset and steady-state fI curves and the test set consisting of a voltage trace upon a fluctuating input current for a fast-spiking interneuron from the rodent prefrontal cortex are given in black. The corresponding model fits [(A) aLIF, (B) simpAdEx, and (C) AdEx] are given in red (steady-state fI curve) and blue (onset fI curve). (D) The raw coincidence rate Γ for a window Δ = 20 ms for 170 evaluated test sets from 39 cells (layer-3 and -5 pyramidal cells, fast spiking and bitufted interneurons) for aLIF and simpAdEx. Black dots = σ ∈ {35,50}, red dots = σ ∈ {100, 150, 200, 250}. (E) Raw Γ for a window Δ = 20 ms for 12 evaluated test sets of 5 cells (see Table 2) for AdEx and simpAdEx. A scaling factor for eliminating firing rate differences was included in all model comparisons (see text).

The development of the approximation to the full AdEx was motivated by the closed-form expressions that could be derived for the training set, and by the much faster fitting procedure this implies (see Materials and Methods). The considerable speed-up in fitting times is verified in Table 2, which summarizes, for five different physiological neurons, the computer time needed to fit the three training set curves for the full AdEx with (a ≠ 0) and without (a = 0) subthreshold adaptation, and for the simpAdEx. The parameters of the full AdEx are tuned by the same fitting algorithm used for the simpAdEx (see Materials and Methods: 2.3), except that numerical integration of the differential equations was required for the full AdEx due to the lack of closed-form fI expressions. These data confirm speed-ups of 1–2 orders of magnitude, as noted in previous sections. In terms of spike-time prediction, the comparison of (non-normalized) coincidence rates for 12 test sets from 5 cells is given in Figure 8E (again with input scaling in place to eliminate firing rate deviations as a contributing factor), and a specific example of training and test set fits is shown in Figure 8C. On average, performance does not improve when the full instead of the simplified AdEx is used.
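
To illustrate where the cost of fitting the full AdEx comes from, the sketch below obtains a single steady-state fI point by numerically integrating the standard AdEx equations (Brette and Gerstner, 2005) until the rate has settled; during fitting this has to be repeated for every current step and every candidate parameter set, which is precisely the effort that the closed-form expressions of the simpAdEx avoid. Parameter values are generic illustrations, not fits to our data.

    import numpy as np

    def adex_steady_rate(I, dt=0.05, T=2000.0, C=200.0, g_L=10.0, E_L=-70.0,
                         V_T=-50.0, D_T=2.0, a=2.0, tau_w=120.0, b=60.0,
                         V_r=-58.0, V_cut=0.0):
        """Steady-state firing rate (Hz) of the full AdEx for a constant current I (pA):
        C dV/dt = -g_L (V - E_L) + g_L D_T exp((V - V_T)/D_T) - w + I,
        tau_w dw/dt = a (V - E_L) - w;  on a spike (V > V_cut): V -> V_r, w -> w + b."""
        n = int(T / dt)
        V, w = E_L, 0.0
        spikes = []
        for t in range(n):
            dV = (-g_L * (V - E_L) + g_L * D_T * np.exp((V - V_T) / D_T) - w + I) / C
            dw = (a * (V - E_L) - w) / tau_w
            V += dt * dV
            w += dt * dw
            if V > V_cut:
                spikes.append(t * dt)
                V, w = V_r, w + b
        late = [s for s in spikes if s > T / 2]     # discard the initial transient
        return 1000.0 * len(late) / (T / 2)

    # one point of the steady-state fI curve; a full curve, repeated for every
    # candidate parameter set during fitting, multiplies this integration cost
    print(adex_steady_rate(I=350.0))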


Table 2. Comparison of computation time requirements for the full and simplified AdEx.

4. Discussion

For the purpose of large-scale neuronal network simulations (Traub et al., 1988, 2005; Markram et al., 2004; Markram, 2006; Wang et al., 2006; Izhikevich and Edelman, 2008; Lansner, 2009), single neuron models which do not compromise physiological realism too much and capture some of the tremendous cellular heterogeneity observed in real cortical tissue are of increasing interest. Here we introduced a novel approach for fitting a simple 2-ODE neuron model to experimental data with good prediction performance on distinct test sets not employed for fitting (Section 3.1). Our approach had several important objectives: (1) We wanted the fitting procedure to be fast (cf. Table 2) and completely automated (see Materials and Methods), so that large pools of neural recordings could easily be translated into single-cell models; (2) we wanted to rely only on simple standard electrophysiological protocols for this process, i.e., fI and IV curves, which are widely available and routinely obtained; (3) the training set for model fitting should assess a wide range of firing rates (as rate changes are still the most prominent responses correlated with behavior observed in vivo); (4) the models should exhibit satisfactory prediction performance on physiological recording sets distinct from the ones used for training, in this case spike responses upon fluctuating “in vivo-like” current injections.

Toward these goals, we developed an approximation to the AdEx model from which closed-form expressions for initial and steady-state fI curves could be derived. The AdEx model, originally introduced by Brette and Gerstner (2005), has previously been shown to reproduce a variety of spiking patterns observed in diverse cell types (Naud et al., 2008), thus forming a good starting point for our own analysis. Our approximation to the AdEx was based on a separation of time scales and phase plane considerations. Since we do not have to solve for the full model trajectory up to a steady state by numerical integration but can directly calculate the fI curves from closed-form expressions, the fitting procedure is sped up by about two orders of magnitude (cf. Table 2). This makes it possible to construct large sets of model neurons from empirical data sets conveniently and quickly. Based on our analytical approximation, one can also easily find closed-form expressions for, e.g., the latency to the first spike or the number of spikes in response to a step current.

There are, of course, also other approaches to fast model fitting. Progress in global optimization techniques and computer hardware has substantially decreased the time required for fitting models to experimental data (Brette et al., 2007). Evolutionary techniques (Bäck and Schwefel, 1993) like genetic algorithms, differential evolution and related methods, or particle swarm algorithms (Eberhart and Shi, 1998) enable fast optimization of multi-dimensional systems by efficient parallelization, in particular through the use of graphics processing units (GPU; Owens et al., 2007; Rossant et al., 2010). Other algorithmic solutions like implementations based on vectorization, that is, replacing multiple repeated operations by single operations on vectors, can make optimization processes more efficient (Brette and Goodman, 2011). For instance, in one such recent approach (Rossant et al., 2011), 50–80-fold speed improvements were found when combining vectorization techniques and parallelization on 240 GPU cores compared to run times on a single GPU. Most of these algorithmic and hardware approaches could, however, also be harnessed for our model to further speed up the process if many different cell types are to be fitted at once. The advantage of our approach lies primarily in alleviating the need for explicitly simulating the system of differential equations. The optimization function, equation (44), may then still be subjected to evolutionary, swarm, or vectorization techniques, or different model cells from a larger data set may be optimized in parallel on different CPUs/GPUs.
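
As an illustration of how such off-the-shelf global optimizers can be combined with closed-form rate expressions, the sketch below fits a deliberately simple toy fI model to a small synthetic steady-state fI curve using SciPy's differential evolution. The toy model, the loss, the data, and the parameter bounds are all placeholders; they are not equation (44) or the simpAdEx expressions.

    import numpy as np
    from scipy.optimize import differential_evolution

    # toy "closed-form" rate model: a rectified square-root fI curve (placeholder only)
    def toy_fI(I, params):
        gain, rheobase = params
        return gain * np.sqrt(np.maximum(I - rheobase, 0.0))

    # illustrative training data: injected currents (pA) and steady-state rates (Hz)
    I_train = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
    f_train = np.array([0.0, 0.0, 2.1, 6.3, 9.0, 11.2, 13.0])

    def loss(params):
        # least-squares mismatch between the closed-form prediction and the data;
        # no numerical integration of differential equations is needed here
        return np.sum((toy_fI(I_train, params) - f_train) ** 2)

    bounds = [(0.0, 5.0),       # gain
              (0.0, 200.0)]     # rheobase (pA)
    result = differential_evolution(loss, bounds, seed=0)
    print(result.x, result.fun)
    # different cells from a larger data set could additionally be fitted in
    # parallel, e.g., with one such optimization per process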

Our AdEx approximation can still reproduce most of the spiking patterns of the original AdEx, and thus many of the patterns observed in real neurons. As an even simpler alternative, we also considered an adaptive LIF model for which we derived exact closed-form expressions for the onset and steady-state firing rates. Both models could easily be fit, to near perfection, to the initial and steady-state fI curves of recordings from real pyramidal cells and interneurons from various layers of the rodent prefrontal cortex. In addition, the simpAdEx captured the sub-rheobase IV curves very well.

4.1. Nature of the Training Set: Limitations and Extensions

The major advantages of the training set used for our modeling approach are its simplicity and easy availability, and, of course, the fact that it allows for the fast fitting procedure based on closed-form expressions. In contrast, for the purpose of model fitting and evaluation, most previous studies have employed fluctuating currents designed to reflect the synaptic bombardment by populations of excitatory and inhibitory presynaptic cells (Jolivet et al., 2006, 2008; Clopath et al., 2007; Badel et al., 2008a,b; Naud et al., 2008; Gerstner and Naud, 2009). Unlike fitting procedures working on such fluctuating current/voltage traces, our training set probes none of the higher-order statistical properties (at least not explicitly, see above) that may characterize in vivo activity. In this sense it is further removed from the in vivo scenario to which the model is ultimately to be applied than the training sets used in previous approaches. However, it may be important to note that the construction of in vivo-like fluctuating training or test sets rests on many assumptions about synaptic dynamics, amplitudes and time constants, input rates, correlations among inputs, frequency content, and so on. Prediction quality in real in vivo situations is likely to depend on how well these assumptions are met, and, as is generally true in statistics (e.g., Hastie et al., 2009), one may sometimes be better off using simpler methods making fewer assumptions. Furthermore, due to the time constraints imposed by in vitro methods (limited lifetime and intracellular integrity/composition of the cells), only a limited parameter range can be probed by any training protocol for a given cell, such that results could be biased toward the specific parameter regime explored. For instance, while our training protocol contains a large range of mean-input currents and thus spike output rates a neuron may traverse in vivo, up to the point of depolarization block, this was not always the case in previous approaches based on fluctuating inputs. For these reasons, it is not clear per se that any fitting procedure based on fluctuating input currents would automatically transfer better to an in vivo situation. This is not to say that a training protocol working with step-like inputs is actually to be preferred, but just to caution that generalization performance will likely depend on the details of the implementation of the fluctuating inputs and the parameter range probed. Ideally, of course, the better an approximation one can get to the in vivo situation, and the more data sets one has for covering the “in vivo space,” the better this would be for training a model.

One other issue that deserves discussion in this context is the similarity between training and test sets, and the fitting criterion used. If both training and test data consist of noisy voltage traces which, in addition, may be generated from similar underlying distributions, then test set prediction performance should be better than if dissimilar training data were used, but generalization to completely different scenarios may be worse (see also the discussion of within- vs. out-of-sample predictions in, e.g., Hastie et al., 2009). This may be exacerbated if the fitting criterion explicitly includes the specific quantities to be predicted, i.e., the precise spike timing in the present case (for instance, some fitting criteria that have been in use directly included the spike-time agreement measure Γ as a term). Our approach uses training and test sets which are very distinct: On the training side, we used subthreshold IV and onset and steady-state fI curves based on step-like inputs, while the test sets consisted of fluctuating in vivo-like current inputs of different means and SD. The fitting criterion based on the IV and fI curves also did not involve any spike-time information (the target of prediction). In this sense we feel that our test sets impose a quite strong generalization test on the model. However, this discussion may boil down to the same issues already brought up above: If much knowledge about the ultimate application domain is available and can be integrated into the training sets, then it should certainly be used, and fitting criteria that actually emphasize those aspects on which high prediction performance is sought may be preferred. In this sense this discussion is less a critique of the various fitting approaches (including ours) than a proposal that this whole subject deserves more detailed future research.

As noted above, our training set composed of steady-state and transient fI curves does not directly assess higher-order statistical properties of the input. With regard to the input variance, one simple potential extension is to augment the set of constant hyperpolarizing and depolarizing current steps with step-like positive or negative excursions from any given level of injected current. Such an approach may allow one to assess in more detail how a cell responds to transient mild or larger deflections from any mean-input level (i.e., to variations of different size). At the same time it largely retains the simplicity of experimental protocols consisting only of current-clamp steps and thus also retains the possibility of fully specifying the model using only steady-state and transient fI curves, an important feature of the fitting procedure introduced here. Within the limits of the resolution required to obtain estimates of the instantaneous firing rate (at least one interspike interval), the frequency of these step-like changes may also be varied to assess some of the frequency-dependent aspects of the neural response while still enabling model specification via fI curves.
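
Such a protocol (constant holding steps augmented with brief positive or negative excursions of varying size and period) is straightforward to specify programmatically. The sketch below is one hypothetical way to construct such a stimulus; all amplitudes and timings are chosen purely for illustration.

    import numpy as np

    def step_with_excursions(I_hold, excursion_amps, excursion_dur, period,
                             duration, dt=0.1):
        """Constant holding current I_hold (pA) with brief step-like excursions.
        excursion_amps : excursion amplitudes (pA), applied in turn
        excursion_dur  : duration of each excursion (ms)
        period         : interval between excursion onsets (ms)
        duration, dt   : total length and time step (ms)"""
        n = int(duration / dt)
        I = np.full(n, float(I_hold))
        onsets = np.arange(period, duration - excursion_dur, period)
        for k, onset in enumerate(onsets):
            amp = excursion_amps[k % len(excursion_amps)]
            i0, i1 = int(onset / dt), int((onset + excursion_dur) / dt)
            I[i0:i1] += amp
        return I

    # e.g., a 100 pA holding step with +/-25 and +/-75 pA excursions every 500 ms
    I_protocol = step_with_excursions(I_hold=100.0,
                                      excursion_amps=[25.0, -25.0, 75.0, -75.0],
                                      excursion_dur=100.0, period=500.0,
                                      duration=5000.0)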

4.2. Prediction Performance, Input Regimes, and Firing Rate Variations

To assess the predictive power of our modeling approach, a large set of data from purpose-generated electrophysiological recordings from different cell types and cortical layers (L3, L5, and L6) was analyzed. Prediction performance was formally evaluated using two measures (coincidence rate and Victor–Purpura measure) which quantify the accuracy in predicting spike times within the test sets (our results, however, did not strongly depend on the employed performance measure or its parameter settings). Since previous fitting procedures with reduced models most often used fluctuating input stimuli both for training and for testing (see Section 4.1), and furthermore often optimized the coincidence rate directly (e.g., Clopath et al., 2007), it seems natural that they should have an edge over our approach in predicting precise spike times within the test traces. However, prediction performance of our model (Section 3.1) seems quite on par with levels reported previously for different sets of probed input conditions (compare, for instance, to the two-compartment model based on the AdEx presented in Clopath et al., 2007, or the refractory exponential IF model in Badel et al., 2008a,b), and could be improved further by introducing a constant scaling factor for the current input into the voltage equation of the simpAdEx model (Section 3.2), adjusted to match the total spike counts in the model and reference traces. Allowing further adjustment through this single scalar scaling factor often led to almost perfect agreement of spike trains, although it was tuned solely to match the total number of spikes in the model and target traces, that is, without using detailed spike time or voltage information. It was also sufficient to use only a portion of the fluctuating test traces for adapting this parameter (cf. Figures 6B,C), leaving the remainder of the fluctuating voltage trace as a truly independent test set. However, we found that the scaling factor may actually harm generalization performance, as it leads to over-fitting (Figure 7D), and for most practical purposes (in vivo scenarios) it should therefore not be included in the model, as further discussed below.
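
For readers who want to reproduce this kind of spike-time comparison, the sketch below computes a simple raw coincidence rate between two spike trains for a window Δ, using greedy one-to-one matching. This is a simplified variant written for illustration; it is not necessarily the exact Γ definition (Kistler et al., 1997; Jolivet et al., 2008) or the normalization used in our analyses.

    import numpy as np

    def raw_coincidence_rate(spikes_data, spikes_model, delta=20.0):
        """Fraction of coincident spikes within +/- delta (ms), using greedy
        one-to-one matching, normalized by the mean number of spikes in the
        two trains."""
        data = list(np.sort(spikes_data))
        n_coinc = 0
        for s in np.sort(spikes_model):
            if not data:
                break
            j = int(np.argmin(np.abs(np.asarray(data) - s)))
            if abs(data[j] - s) <= delta:
                n_coinc += 1
                data.pop(j)          # each data spike can be matched at most once
        denom = 0.5 * (len(spikes_data) + len(spikes_model))
        return n_coinc / denom if denom > 0 else np.nan

    # toy usage (spike times in ms)
    print(raw_coincidence_rate([10.0, 55.0, 120.0, 300.0],
                               [12.0, 60.0, 140.0, 310.0], delta=20.0))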

Nevertheless, the results surrounding the scaling factor highlight a number of interesting and important points about model fitting, dynamics, and empirical noise. First, it is interesting to note that in every step of the model fitting approach, including adjustment of the scaling factor, only firing rate (plus sub-rheobase IV) information has been used, yet the model is able to predict precise spike times quite accurately for a whole range of different input scenarios. This suggests that, for the situations tested, the underlying spike-generating dynamics of complex real neurons was properly captured by our simple 2-ODE model. More importantly, it indicates that – once basic dynamical properties are captured – all the information about spike times is already contained in the firing rates (and possibly vice versa), potentially shedding some new light on the relation between firing rate and precise spike-time codes (Shadlen and Newsome, 1994; Gütig and Sompolinsky, 2006; London et al., 2010). Second, these analyses revealed factors from which model – real cell differences may arise. One idea was that firing rate deviations could potentially arise from the fact that neural spiking rates do not just depend on the mean but also on the variance of the input current (Mainen and Sejnowski, 1995; Chance et al., 2002; Rauch et al., 2003). Our training set does not explicitly contain voltage traces of different variance. However, it was found that at least below the spiking threshold, presumably the regime in which neurons operate in vivo (Destexhe et al., 2003; Renart et al., 2006), the scaling factor did not change much (with a slope of ~2 × 10−3/pA) across a large range of input variances (cf. Figure 7B), nor did spike prediction performance itself (Figure 5C). Potentially this is because at least part of the relevant information is still available to the fitting process through the transient fI curves. Neocortical neurons in vivo have indeed been proposed to live in a subthreshold balanced regime where spiking is caused by occasional threshold crossings (van Vreeswijk and Sompolinsky, 1996; Destexhe et al., 2003; Renart et al., 2006), an idea consistent with the relatively low firing rates of neocortical neurons in vivo (Margrie et al., 2002; Lee et al., 2006; Lapish et al., 2008; Durstewitz et al., 2010).

Another idea was that model – real cell firing rate deviations may in part simply reflect empirical noise, that is, firing rate fluctuations that naturally occur across different identical repetitions of the same stimulus conditions in the recorded neurons (Mainen and Sejnowski, 1995). Indeed, firing rates recorded for identical stimulus applications significantly co-varied with fluctuations in the cellular resting potential preceding the application, and so did the scaling factor (cf. Figure 7A). This problem may be exacerbated by larger temporal gaps between the training and test sets (as in the present study, ~5 min on average), as during longer time periods the precise physiological conditions during test set application are more likely to have drifted away from those during training set application. It may also be more of a problem in slice preparations from adult animals as compared to the much more widely used juvenile preparations: adult slices exhibit richer intrinsic activity and less stability than juvenile slices (e.g., Tseng and O’Donnell, 2005). In contrast, it should be less of a problem if the variations in the input are large (e.g., σ = 250–550 pA, as in Figure 6D). Thus, fluctuations in physiological background conditions and the signal-to-noise ratio are factors that influence the model fitting process (see also below). These results may also have implications for neural processing in vivo: Either synaptic events have to be of sufficiently large amplitude to overcome background fluctuations and cause reliable spike timing (see London et al., 2010), or variations in background conditions need to be sufficiently common to the neurons supposed to communicate via precise spike times (a condition that, in the model – real cell comparisons, could be established through the scaling factor).

The physiological noise also limits the degree to which spike times from one stimulus presentation can be predicted from another identical stimulus repetition in the same cell. As the model cannot be expected to perform better than the real cells, the statistical distributions of the empirical firing rate reliabilities (i.e., spike-rate agreements across different repetitions) and the model-real cell spike-rate agreements were first compared. For test sets with SD up to 50 pA (the in vivo range, see Section 3), this comparison revealed that the model’s firing rate accuracy was perfectly within the bounds of empirical variation (cf. Figure 7C). This means that inclusion of a scaling factor in the model does not, on average, truly improve the model – real cell fit beyond the degree of spike-rate agreement to be reasonably expected from the experimental variability. This in turn suggests that the scaling factor may in fact partly fit the empirical noise (i.e., lead to over-fitting of particular test traces), as was confirmed when the distributions of cell-cell and model-cell spike coincidence rates were compared: With the scaling factor in place, the model outperformed the real cells, that is, it led to spike coincidence levels that were consistently higher than those between different repetitions from the same cell (cf. Figure 7D). Thus, the scaling factor demonstrates the spike-time prediction power that can be achieved while exploiting only rate (plus sub-rheobase IV) information as a source for model fitting. But the actual recommendation to be derived from this analysis is that no such additional parameter adjustment should be performed when the model is applied to in vivo-like situations.

Finally, we would like to comment on the time windows used to evaluate spike coincidences (Γ) throughout most of the present study. We have reported results both for 10 ms, which is within the range of 5–10 ms employed by most previous studies (Badel et al., 2008a,b; Jolivet et al., 2008), and for 20 ms. The choice of the larger window of 20 ms was motivated by the fact that in most of our cells we had used lower SD for the fluctuating currents (≤50 pA) than most commonly used previously (≥150 pA; Jolivet et al., 2006, 2008; Clopath et al., 2007; Badel et al., 2008a,b; Naud et al., 2008; Gerstner and Naud, 2009). These lower SD, in turn, were adopted because they provoked voltage fluctuations most compatible with the range observed in vivo for our preparation (adult rodent prefrontal cortex; see Section 3). In our study, as in previous ones (Mainen and Sejnowski, 1995; Chance et al., 2002; Rauch et al., 2003), cells became less reliable in their spiking patterns as the input variance decreased, which in turn makes the choice of larger time windows more reasonable (ultimately, it is of course the voltage variance that is most relevant here, but for a given set of cellular parameters this will be highly correlated with the current variance). When we used very high current variances for the input (250–550 pA, as, e.g., in Badel et al., 2008a,b) and the same type of random process for generating the fluctuations as used by others before (Ornstein–Uhlenbeck process; e.g., Rauch et al., 2003; Clopath et al., 2007; Badel et al., 2008a), spike coincidence levels between model and real cells approaching those reported previously (e.g., <Γ> = 0.6 as in Clopath et al., 2007) were obtained (without scaling factor) also for smaller time windows (5 ms). It may also be important to note that larger time windows do not necessarily imply higher coincidence rates: In Jolivet et al. (2008), for instance, the coincidence rate Γ was shown to be approximately constant for coincidence windows Δ ∈ [2 ms, 12 ms], and decayed for both smaller and larger windows. Finally, it should be noted that different studies have employed different recording protocols, different parameter settings for generating current fluctuations, and different physiological preparations and cell types, which may limit comparability among studies (even when intrinsic cell reliability is used for normalization).

4.3. Subthreshold Membrane Dynamics, Comparison between Models and Possible Extensions

The main goal of our study was to develop a simplified neuron model which, when adjusted based on standard in vitro protocols, captures the spiking behavior (rate and timing) of real cells, as these are the quantities which represent the ultimate read-out of a neuron’s information processing accessible to other neurons, and the quantities usually reported in in vivo studies. We observed, however, that on average the adapted simpAdEx model also reproduced the subthreshold membrane voltage quite well (see insets in Figure 4). For prefrontal pyramidal cells, correlations with the membrane potential in the subthreshold regime were on average <r> ~ 0.58 (increasing to 0.65 with input scaling), just slightly below the range of cross-correlations observed between different repetitions of the same stimulus in vitro, <r> ~ 0.75. The main deviations from the voltage trajectory of real cells seemed to occur right after a spike, where the model cells sometimes produced a considerable after-hyperpolarization while the real neurons often returned to a potential not far below the pre-spike level. This is to be expected, as the model cells have an artificial reset process and lack the ionic mechanisms (Na+-channel inactivation, diverse Ca2+- and K+-channels) that drive spike-after-polarizations in vitro. Note that this balance of ionic currents in the direct aftermath of a spike is not captured by the subthreshold IV curves used for model fitting. This mismatch in after-spike voltage behavior has been described by others before for this class of simple models (e.g., Clopath et al., 2007; Badel et al., 2008b), and it may also be the reason why the subthreshold voltage correlations for interneurons [<r> ~ 0.27 (0.3 with input scaling), compared to a correlation of 0.75 among repetitions] are considerably worse than those for pyramidal cells (see above). Especially for fast-spiking interneurons, the subthreshold voltage trajectory is dominated to a large degree by the post-spike polarizations, which are not properly captured by the model. We emphasize that strongly reduced models like the present one, which are mainly adapted to capture the spiking behavior of real cells, may not necessarily also exhibit good fits to the subthreshold membrane behavior (as actually evidenced by the aLIF example in Figure 8A). In some cases, they may even have to compromise voltage dynamics to achieve a good approximation to the spiking behavior. For spiking, the dynamics of the membrane voltage in the vicinity of (or the approach toward) the threshold is most crucial. If the voltage dynamics is of primary interest, biophysically more detailed models may often be better suited, although we note that simple models have recently been advanced that also capture post-spike voltage behavior very well (Badel et al., 2008a,b; Rossant et al., 2010).
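
The subthreshold correlations reported here can be computed, for example, by masking out a short window around every spike and correlating the remaining voltage samples. The sketch below is a generic illustration of this procedure; the mask width and all names are our own choices, not those of the original analysis.

    import numpy as np

    def subthreshold_correlation(v_data, v_model, spike_times, dt=0.1, mask_ms=10.0):
        """Pearson correlation of two voltage traces after removing all samples
        within +/- mask_ms (ms) around each spike time (ms)."""
        v_data = np.asarray(v_data, dtype=float)
        v_model = np.asarray(v_model, dtype=float)
        keep = np.ones(len(v_data), dtype=bool)
        half = int(mask_ms / dt)
        for s in spike_times:
            i = int(s / dt)
            keep[max(0, i - half):i + half + 1] = False
        return np.corrcoef(v_data[keep], v_model[keep])[0, 1]

    # toy usage: two traces sharing a slow drift plus independent noise, one spike at 500 ms
    t = np.arange(0.0, 1000.0, 0.1)
    drift = 5.0 * np.sin(2 * np.pi * t / 400.0)
    v1 = -70.0 + drift + np.random.default_rng(0).normal(0.0, 0.5, len(t))
    v2 = -70.0 + drift + np.random.default_rng(1).normal(0.0, 0.5, len(t))
    print(subthreshold_correlation(v1, v2, spike_times=[500.0]))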

For several data sets, we compared the performance of the aLIF, the simpAdEx, and the full AdEx models (cf. Figures 8D,E). The aLIF is the simplest type of model that would allow reasonable fitting to our training data, as an adaptation term is essential for simultaneously matching the transient as well as the steady-state fI curves. With a scaling factor allowed, the level of spike-time agreement (Γ) with the experimental cells was comparable for all three types of reduced models. However, experimentally recorded IV curves often deviated significantly from linearity as the membrane potential approached the spiking threshold. Since the aLIF model is strictly linear in its current-voltage relationship below threshold, it is not able to capture this experimentally observed departure from linearity. Likely for this reason, its agreement with the subthreshold membrane dynamics was often substantially worse than that of the simpAdEx model (compare Figures 8A,B). The fitting process for the aLIF model also often resulted in rather unphysiological parameter estimates (for instance, Vr < −150 mV), probably to compensate for the mismatch in the subthreshold voltage dynamics when reproducing the spiking behavior. Thus, the aLIF model (with scaling factor) does a reasonably good job of reproducing empirical firing rates and spike times, but the addition of an exponential term to the voltage equation seems to substantially improve the physiological validity of the model with regard to the membrane dynamics. Comparing the simpAdEx and the full AdEx model, the constraint that adaptation in the simpAdEx is driven solely by the spike reset (since a = 0) may present a limitation for fitting physiological data in which subthreshold adaptation currents, like Iks (Hammond and Crépel, 1992), play a prominent role. In practice, however, a has often been set to zero because it proved difficult or impossible to extract a reasonable value from electrophysiological recordings (Clopath et al., 2007). In general, it is important to keep in mind that all the models discussed here represent massive reductions of a much more complex physiological reality. Many if not most of the parameters in such models will therefore inevitably represent lumped contributions of different ionic sources or spatial factors. In this sense, in most cases there will be no 1:1 mapping onto the biophysical parameters of a real cell.

More generally, of course, as with every highly simplified neuron model, ours also ignores the spatial (dendritic) structure of real cells. One potential extension may be to add dendritic transfer/filter functions to the model, which would allow fitting fI curves generated from distal current inputs at the same time as the somatically induced fI traces (see also Clopath et al., 2007). One could potentially maintain the closed-form tractability of the model in this scenario by first fitting fI curves from different dendritic injection sites separately, and then combining them into the same model by optimizing the parameters of the transfer functions. One other obvious limitation already pointed out is that the model is not able to capture phenomena resulting from the progressive inactivation of Na+ channels at higher potentials, like the depolarization block. Neither the approximation to the AdEx nor the AdEx itself is able to capture the sudden decrease or non-monotonicity in firing rates as the driving current becomes very high, since they lack a bifurcation from the stable limit cycle associated with periodic spiking to a stable fixed point (in contrast to other simple neuron models, e.g., Izhikevich, 2007; Durstewitz, 2009). The phenomenon of depolarization block itself may simply be captured by introducing a current threshold beyond which spiking is shut off, but of course this would not capture more subtle changes in the neuron’s behavior resulting from progressive inactivation of inward currents. However, as already discussed above, extensions of the model along this direction may be less relevant if the main goal is to capture network-dynamical regimes that are presumably characteristic of awake in vivo scenarios (balanced state).

To conclude, we derived an approximation to the AdEx model which allows for fast fitting to physiological data while retaining most of the dynamical features of the original model. Our model fitting approach relies entirely on fI and IV curves as obtained routinely in standard electrophysiological protocols, yet the resulting models predict reasonably well the spike times and behavior obtained with in vivo-like fluctuating-current inputs not used for model fitting. Thus this approach may make it possible to efficiently and automatically translate, in a kind of “high-throughput” fashion, larger databases of single-cell recordings into a computational framework. It can be used to construct networks of model cells that are physiologically validated in this sense while remaining computationally efficient and analytically tractable.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This work was funded by grants from the German ministry for education and research (BMBF, 01GQ1003B) and the Deutsche Forschungsgemeinschaft to Daniel Durstewitz (Du 354/6-1 & 7-2). We are very grateful to Dr. Thomas Hahn (Central Institute of Mental Health, Psychiatry, Medical Faculty Mannheim of Heidelberg University) for providing in vivo patch-clamp recordings from the rodent PFC for estimating parameters used in stimulus construction.

References

Bäck, T., and Schwefel, H. (1993). An overview of evolutionary algorithms for parameter optimization. Evol. Comput. 1, 1–23.

Badel, L., Lefort, S., Berger, T. K., Petersen, C. C. H., Gerstner, W., and Richardson, M. J. E. (2008a). Extracting non-linear integrate-and-fire models from experimental data using dynamic i-v curves. Biol. Cybern. 99, 361–370.

Badel, L., Lefort, S., Brette, R., Petersen, C., Gerstner, W., and Richardson, M. (2008b). Dynamic IV curves are reliable predictors of naturalistic pyramidal-neuron voltage traces. J. Neurophysiol. 99, 656–666.

Benda, J., and Herz, A. V. M. (2003). A universal model for spike-frequency adaptation. Neural Comput. 15, 2523–2564.

Binzegger, T., Douglas, R. J., and Martin, K. A. C. (2004). A quantitative map of the circuit of cat primary visual cortex. J. Neurosci. 24, 8441–8453.

Brette, R., and Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J. Neurophysiol. 94, 3637–3642.

Brette, R., and Goodman, D. F. M. (2011). Vectorized algorithms for spiking neural network simulation. Neural Comput. 23, 1503–1535.

Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J., Diesmann, M., Morrison, A., Goodman, P., Harris, F., Zirpe, M., Natschläger, T., Pecevski, D., Ermentrout, B., Djurfeldt, M., Lansner, A., Rochel, O., Vieville, T., Muller, E., Davison, A. P., El Boustani, S., and Destexhe, A. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. J. Comput. Neurosci. 23, 349–398.

Chagnac-Amitai, Y., Luhmann, H. J., and Prince, D. A. (2004). Burst generating and regular spiking layer 5 pyramidal neurons of rat neocortex have different morphological features. J. Comp. Neurol. 296, 598–613.

Chance, F., Abbott, L., and Reyes, A. (2002). Gain modulation from background synaptic input. Neuron 35, 773–782.

Clopath, C., Jolivet, R., Rauch, A., Lüscher, H., and Gerstner, W. (2007). Predicting neuronal activity with simple models of the threshold type: adaptive exponential integrate-and-fire model with two compartments. Neurocomputing 70, 1668–1673.

Connors, B. W., and Gutnick, M. J. (1990). Intrinsic firing patterns of diverse neocortical neurons. Trends Neurosci. 13, 99–104.

Connors, B. W., Gutnick, M. J., and Prince, D. A. (1982). Electrophysiological properties of neocortical neurons in vitro. J. Neurophysiol. 48, 1302–1320.

De Schutter, E., and Bower, J. M. (1994). An active membrane model of the cerebellar Purkinje cell. II. Simulation of synaptic responses. J. Neurophysiol. 71, 401–419.

Destexhe, A., and Paré, D. (1999). Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo. J. Neurophysiol. 81, 1531–1547.

Destexhe, A., Rudolph, M., Fellous, J., and Sejnowski, T. (2001). Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience 107, 13–24.

Destexhe, A., Rudolph, M., and Paré, D. (2003). The high-conductance state of neocortical neurons in vivo. Nat. Rev. Neurosci. 4, 739–751.

Druckmann, S., Banitt, Y., Gidon, A., Schürmann, F., Markram, H., and Segev, I. (2007). A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data. Front. Neurosci. 1:1. doi:10.3389/neuro.01/1.1.001.2007

Druckmann, S., Berger, T. K., Schürmann, F., Hill, S., Markram, H., and Segev, I. (2011). Effective stimuli for constructing reliable neuron models. PLoS Comput. Biol. 7, e1002133. doi:10.1371/journal.pcbi.1002133

Durstewitz, D. (2003). Self-organizing neural integrator predicts interval times through climbing activity. J. Neurosci. 23, 5342–5353.

Durstewitz, D. (2009). Implications of synaptic biophysics for recurrent network dynamics and active memory. Neural. Netw. 22, 1189–1200.

Durstewitz, D., and Gabriel, T. (2007). Dynamical basis of irregular spiking in NMDA-driven prefrontal cortex neurons. Cereb. Cortex 17, 894–908.

Durstewitz, D., Vittoz, N. M., Floresco, S. B., and Seamans, J. K. (2010). Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning. Neuron 66, 438–448.

Eberhart, R., and Shi, Y. (1998). “Comparison between genetic algorithms and particle swarm optimization,” in Evolutionary Programming VII, Lecture Notes in Computer Science, eds V. Porto, N. Saravanan, D. Waagen, and A. Eiben (Berlin: Springer), 611–616.

Edman, A., Gestrelius, S., and Grampp, W. (1987). Analysis of gated membrane currents and mechanisms of firing control in the rapidly adapting lobster stretch receptor neurone. J. Physiol. (Lond.) 384, 649–669.

Ermentrout, G. B., and Kopell, N. (1986). Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J. Appl. Math. 46, 233–253.

Fourcaud-Trocmé, N., Hansel, D., van Vreeswijk, C., and Brunel, N. (2003). How spike generation mechanisms determine the neuronal response to fluctuating inputs. J. Neurosci. 23, 11628–11640.

Gerstner, W., and Kistler, W. (2002). Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge: Cambridge University Press.

Gerstner, W., and Naud, R. (2009). How good are neuron models? Science 326, 379–380.

Gütig, R., and Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing-based decisions. Nat. Neurosci. 9, 420–428.

Hammond, C., and Crépel, F. (1992). Evidence for a slowly inactivating k+ current in prefrontal cortical cells. Eur. J. Neurosci. 4, 1087–1092.

Hastie, T., Tibshirani, R., and Friedman, J. (eds). (2009). “High-dimensional problems: p ≫ N,” in The Elements of Statistical Learning, Springer Series in Statistics (New York: Springer), 1–50.

Hill, A. V. (1936). Excitation and accommodation in nerve. Proc. R. Soc. Lond. B Biol. Sci. 119, 305–355.

Izhikevich, E. (2003). Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569–1572.

Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Trans. Neural Netw. 15, 1063–1070.

Izhikevich, E. M. (2007). Dynamical Systems in Neuroscience. Cambridge: The MIT Press.

Izhikevich, E. M., and Edelman, G. M. (2008). Large-scale model of mammalian thalamocortical systems. Proc. Natl. Acad. Sci. U.S.A. 105, 3593–3598.

Izhikevitch, E. M. (2001). Resonate-and-fire neurons. Neural Netw. 14, 883–894.

Jaeger, D., De Schutter, E., and Bower, J. M. (1997). The role of synaptic and voltage-gated currents in the control of Purkinje cell spiking: a modeling study. J. Neurosci. 17, 91–106.

Jahr, C. E., and Stevens, C. F. (1990). Voltage dependence of NMDA-activated macroscopic conductances predicted by single-channel kinetics. J. Neurosci. 10, 3178–3182.

Jolivet, R., Kobayashi, R., Rauch, A., Naud, R., Shinomoto, S., and Gerstner, W. (2008). A benchmark test for a quantitative assessment of simple neuron models. J. Neurosci. Methods 169, 417–424.

Jolivet, R., Lewis, T., and Gerstner, W. (2004). Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J. Neurophysiol. 92, 959–976.

Jolivet, R., Rauch, A., Lüscher, H., and Gerstner, W. (2006). Predicting spike timing of neocortical pyramidal neurons by simple threshold models. J. Comput. Neurosci. 21, 35–49.

Kistler, W., Gerstner, W., and Hemmen, J. (1997). Reduction of the Hodgkin-Huxley equations to a single-variable threshold model. Neural Comput. 9, 1015–1045.

Kreiman, G., Krahe, R., Metzner, W., Koch, C., and Gabbiani, F. (2000). Robustness and variability of neuronal coding by amplitude-sensitive afferents in the weakly electric fish eigenmannia. J. Neurophysiol. 84, 189–204.

Lansner, A. (2009). Associative memory models: from the cell-assembly theory to biophysically detailed cortex simulations. Trends Neurosci. 32, 178–186.

Lansner, A., Fransén, E., and Sandberg, A. (2003). Cell assembly dynamics in detailed and abstract attractor models of cortical associative memory. Theory Biosci. 122, 19–36.

Lapicque, L. (1907). Recherches quantitatives sur l’excitation électrique des nerfs traitée comme une polarization. J. Physiol. Pathol. Gen. 9, 620–635.

Lapish, C. C., Durstewitz, D., Chandler, L. J., and Seamans, J. K. (2008). Successful choice behavior is associated with distinct and coherent network states in anterior cingulate cortex. Proc. Natl. Acad. Sci. U.S.A. 105, 11963–11968.

Lee, A. K., Manns, I. D., Sakmann, B., and Brecht, M. (2006). Whole-cell recordings in freely moving rats. Neuron 51, 399–407.

London, M., Roth, A., Beeren, L., Häusser, M., and Latham, P. E. (2010). Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature 466, 123–127.

Lundqvist, M., Compte, A., and Lansner, A. (2010). Bistable, irregular firing and population oscillations in a modular attractor memory network. PLoS Comput. Biol. 6, e1000803. doi:10.1371/journal.pcbi.1000803

Madison, D. V., and Nicoll, R. A. (1984). Control of the repetitive discharge of rat CA 1 pyramidal neurones in vitro. J. Physiol. (Lond.) 354, 319–331.

Mainen, Z. F., and Sejnowski, T. J. (1995). Reliability of spike timing in neocortical neurons. Science 268, 1503–1506.

Mainen, Z. F., and Sejnowski, T. J. (1996). Influence of dendritic structure on firing pattern in model neocortical neurons. Nature 382, 363–366.

Margrie, T., Brecht, M., and Sakmann, B. (2002). In vivo, low-resistance, whole-cell recordings from neurons in the anaesthetized and awake mammalian brain. Pflugers Arch. 444, 491–498.

Markram, H. (2006). The blue brain project. Nat. Rev. Neurosci. 7, 153–160.

Markram, H., Toledo-Rodriguez, M., Wang, Y., Gupta, A., Silberberg, G., and Wu, C. (2004). Interneurons of the neocortical inhibitory system. Nat. Rev. Neurosci. 5, 793–807.

Mason, A., and Larkman, A. (1990). Correlations between morphology and electrophysiology of pyramidal neurons in slices of rat visual cortex. II. electrophysiology. J. Neurosci. 10, 1415–1428.

Mason, A., Nicoll, A., and Stratford, K. (1991). Synaptic transmission between individual pyramidal neurons of the rat visual cortex in vitro. J. Neurosci. 11, 72–84.

Moyer, J. T., Wolf, J. A., and Finkel, L. H. (2007). Effects of dopaminergic modulation on the integrative properties of the ventral striatal medium spiny neuron. J. Neurophysiol. 98, 3731–3748.

Naud, R., Gerhard, F., Mensi, S., and Gerstner, W. (2011). Improved similarity measures for small sets of spike trains. Neural Comput. 23, 3016–3069.

Naud, R., Marcille, N., Clopath, C., and Gerstner, W. (2008). Firing patterns in the adaptive exponential integrate-and-fire model. Biol. Cybern. 99, 335–347.

Owens, J. D., Luebke, D., Govindaraju, N., Harris, M., Krüger, J., Lefohn, A. E., and Purcell, T. J. (2007). A survey of General Purpose computation on graphics hardware. Comput. Graph. Forum 26, 80–113.

Poirazi, P., and Mel, B. W. (2001). Impact of active dendrites and structural plasticity on the memory capacity of neural tissue. Neuron 29, 779–796.

Press, W. H., Teukolsky, S. A., Vetterling, W. T., and Flannery, B. P. (2007). Numerical Recipes: The Art of Scientific Computing, 3 Edn. New York, NY: Cambridge University Press.

Prinz, A. A., Billimoria, C. P., and Marder, E. (2003). Alternative to hand-tuning conductance-based models: construction and analysis of databases of model neurons. J. Neurophysiol. 90, 3998–4015.

Prinz, A. A., Bucher, D., and Marder, E. (2004). Similar network activity from disparate circuit parameters. Nat. Neurosci. 7, 1345–1352.

Rauch, A., La Camera, G., Lüscher, H., Senn, W., and Fusi, S. (2003). Neocortical pyramidal cells respond as integrate-and-fire neurons to in vivo-like input currents. J. Neurophysiol. 90, 1598–1612.

Renart, A., Moreno-Bote, R., Wang, X., and Parga, N. (2006). Mean-driven and fluctuation-driven persistent activity in recurrent networks. Neural Comput. 19, 1–46.

Reyes, A., Lujan, R., Rozov, A., Burnashev, N., Somogyi, P., and Sakmann, B. (1998). Target-cell-specific facilitation and depression in neocortical circuits. Nat. Neurosci. 1, 279–285.

Richardson, M., Brunel, N., and Hakim, V. (2003). From subthreshold to firing-rate resonance. J. Neurophysiol. 89, 2538–2554.

Rossant, C., Goodman, D. F. M., Fontaine, B., Platkiewicz, J., Magnusson, A. K., and Brette, R. (2011). Fitting neuron models to spike trains. Front. Neurosci. 5:9. doi:10.3389/fnins.2011.00009

Rossant, C., Goodman, D. F. M., Platkiewicz, J., and Brette, R. (2010). Automatic fitting of spiking neuron models to electrophysiological recordings. Front. Neuroinformatics 4:2. doi:10.3389/neuro.11.002.2010

Rozov, A., Jerecic, J., Sakmann, B., and Burnashev, N. (2001). AMPA receptor channels with long-lasting desensitization in bipolar interneurons contribute to synaptic depression in a novel feedback circuit in layer 2/3 of rat neocortex. J. Neurosci. 21, 8062–8071.

Schröder, R., and Luhmann, H. J. (1997). Morphology, electrophysiology and pathophysiology of supragranular neurons in rat primary somatosensory cortex. Eur. J. Neurosci. 9, 163–176.

Shadlen, M. N., and Newsome, W. T. (1994). Noise, neural codes and cortical organization. Curr. Opin. Neurobiol. 4, 569–579.

Shu, Y., Hasenstaub, A., Duque, A., Yu, Y., and McCormick, D. A. (2006). Modulation of intracortical synaptic potentials by presynaptic somatic membrane potential. Nature 441, 761–765.

Steriade, M., Timofeev, I., and Grenier, F. (2001). Natural waking and sleep states: a view from inside neocortical neurons. J. Neurophysiol. 85, 1969–1985.

Stocker, M., Krause, M., and Pedarzani, P. (1999). An apamin-sensitive Ca2+-activated K+ current in hippocampal pyramidal neurons. Proc. Natl. Acad. Sci. U.S.A. 96, 4662–4667.

Thomson, A. M., and Lamy, C. (2007). Functional maps of neocortical local circuitry. Front. Neurosci. 1:1. doi: 10.3389/neuro.01.1.1.002.2007

Timofeev, I., Grenier, F., and Steriade, M. (2001). Disfacilitation and active inhibition in the neocortex during the natural sleep-wake cycle: an intracellular study. Proc. Natl. Acad. Sci. U.S.A. 98, 1924–1929.

Touboul, J. (2008). Bifurcation analysis of a general class of non-linear integrate-and-fire neurons. SIAM J. Appl. Math. 68, 1045–1079.

Touboul, J., and Brette, R. (2008). Dynamics and bifurcations of the adaptive exponential integrate-and-fire model. Biol. Cybern. 99, 319–334.

Traub, R. D., Contreras, D., Cunningham, M. O., Murray, H., LeBeau, F. E. N., Roopun, A., Bibbig, A., Wilent, W. B., Higley, M. J., and Whittington, M. A. (2005). Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts. J. Neurophysiol. 93, 2194–2232.

Traub, R. D., Miles, R., and Wong, R. K. (1988). Large scale simulations of the hippocampus. IEEE Eng. Med. Biol. Mag. 7, 31–38.

Traub, R. D., Wong, R. K., Miles, R., and Michelson, H. (1991). A model of a CA3 hippocampal pyramidal neuron incorporating voltage-clamp data on intrinsic conductances. J. Neurophysiol. 66, 635–650.

Tseng, K. Y., and O’Donnell, P. (2005). Post-pubertal emergence of prefrontal cortical up states induced by D1-NMDA co-activation. Cereb. Cortex 15, 49–57.

van Vreeswijk, C., and Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724–1726.

Victor, J. D., and Purpura, K. P. (1996). Nature and precision of temporal coding in visual cortex: a metric-space analysis. J. Neurophysiol. 76, 1310–1326.

Wang, Y., Markram, H., Goodman, P. H., Berger, T. K., Ma, J., and Goldman-Rakic, P. S. (2006). Heterogeneity in the pyramidal network of the medial prefrontal cortex. Nat. Neurosci. 9, 534–542.

Whittington, M., Traub, R., Kopell, N., Ermentrout, B., and Buhl, E. (2000). Inhibition-based rhythms: experimental and mathematical observations on network dynamics. Int. J. Psychophysiol. 38, 315–336.

Keywords: pyramidal cells, interneurons, fI curve, adaptation, spike timing, temporal coding, prefrontal cortex

Citation: Hertäg L, Hass J, Golovko T and Durstewitz D (2012) An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data. Front. Comput. Neurosci. 6:62. doi: 10.3389/fncom.2012.00062

Received: 26 January 2012; Accepted: 03 August 2012;
Published online: 06 September 2012.

Edited by:

Nicolas Brunel, Centre National de la Recherche Scientifique, France

Reviewed by:

Bertrand Fontaine, Ecole Normale Supérieure, France
Srdjan Ostojic, Ecole Normale Superieure, France

Copyright: © 2012 Hertäg, Hass, Golovko and Durstewitz. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.

*Correspondence: Loreen Hertäg and Daniel Durstewitz, Bernstein-Center for Computational Neuroscience, Central Institute of Mental Health, Psychiatry, Medical Faculty Mannheim of Heidelberg University, J 5, 68159 Mannheim, Germany. e-mail: loreen.hertaeg@zi-mannheim.de; daniel.durstewitz@zi-mannheim.de
