Edited by: Bert Shi, The Hong Kong University of Science and Technology, Hong Kong
Reviewed by: Theodore Yu, University of California at San Diego, USA; Chi-Sang Poon, Harvard – MIT Division of Health Sciences and Technology, USA; Tadashi Shibata, University of Tokyo, Japan
*Correspondence: Giacomo Indiveri, Institute of Neuroinformatics, Swiss Federal Institute of Technology Zurich, University of Zurich, Zurich CH-8057, Switzerland. e-mail:
This article was submitted to Frontiers in Neuromorphic Engineering, a specialty of Frontiers in Neuroscience.
This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with.
Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain–machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate-and-fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.
Spike-based models of neurons have recently become very popular, for both investigating the role of spike-timing in the computational neuroscience field, and for implementing event-driven computing systems in the neuromorphic engineering field. Several spike-based neural network simulators have been developed within this context, and much research has focused on software tools and strategies for simulating spiking neural networks (Brette et al.,
In this work we describe a wide range of circuits commonly used to design SiNs, spanning multiple design strategies and techniques that range from current-mode, sub-threshold to voltage-mode, switched-capacitor (S-C) designs. Moreover, we present an overview of the most representative silicon neuron circuit designs recently proposed, compare the different approaches followed, and point out the advantages and strengths of each design.
From the functional point of view, silicon neurons can all be described as circuits that have one or more
- Temporal integration block
- Spike/event generation block
- Refractory period mechanism
- Spike-frequency adaptation block
- Spiking threshold adaptation block
- Weak inversion vs. strong inversion
- Voltage-mode vs. current-mode
- Non-clocked vs. switched-capacitor
- Biophysical model vs. phenomenological model
- Real-time vs. accelerated-time
In the next Section we will describe some of the more common circuits used as basic building blocks of SiNs, covering all the design strategies outlined in Table
It has been shown that an efficient way of modeling neuron conductance dynamics and synaptic transmission mechanisms is by using simple first-order differential equations of the type
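The first-order equation referred to above does not survive in this version of the text; assuming the standard form τ dx/dt = −(x − x∞), its behavior can be sketched numerically in a few lines of Python (the time-constant, time-step, and input values below are illustrative only, not taken from any of the circuits described):

```python
import numpy as np

# Forward-Euler integration of tau*dx/dt = -(x - x_inf): the generic
# first-order dynamics used for conductance and synapse models.
def first_order_step(x, x_inf, tau, dt):
    return x + (dt / tau) * (x_inf - x)

tau, dt = 10e-3, 1e-4        # 10 ms time-constant, 0.1 ms time-step
x = 0.0
trace = []
for t in np.arange(0.0, 50e-3, dt):
    x_inf = 1.0 if t >= 5e-3 else 0.0   # step input applied at t = 5 ms
    x = first_order_step(x, x_inf, tau, dt)
    trace.append(x)
```

With a 10 ms time-constant, the state settles to within a few percent of its target roughly four e-folding times after the input step, which is the qualitative behavior the low-pass filter circuits below reproduce.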
Many of the membrane channels that shape the output activity of a neuron exhibit dynamics that can be represented by state changes of a series of voltage-dependent
Since transistors also involve the movement of a charged particle through an electric field, a transistor circuit can directly represent the action of a population of gating particles (Hynna and Boahen,
The source of M2,
It is also possible to model conductance and channel dynamics by abstracting their behavior, describing it with sets of differential equations, and solving them using analog circuits. One can resort to using systematic synthesis methods for mapping non-linear differential equations onto analog circuits. For example, using this strategy it was possible to design circuit implementations for the FitzHugh-Nagumo neuron model (FitzHugh,
Biophysically realistic implementations of neurons produce analog waveforms that are continuous and smooth in time, even for the generation of action potentials (we will describe examples of these types of circuits in Section
One of the original circuits proposed for generating discrete events in VLSI implementations of silicon neurons is the
One of the main advantages of this self-resetting neuron circuit is its excellent matching properties: mismatch depends mostly on the matching of the circuit's two capacitors rather than on its transistors. As low mismatch is especially desirable in imagers and photoreceptor arrays, this circuit has been applied to the design of a spiking (or event-based) vision sensor (Azadmehr et al.,
The Axon-Hillock circuit produces a spike event when the membrane voltage crosses a voltage threshold that depends on the geometry of the transistors and on the VLSI process characteristics. In order to have better control over the spiking threshold, it is possible to use a five-transistor amplifier, as shown in Figure
The capacitance
The principles used by this design to control spiking thresholds explicitly have been used in analogous SiN implementations (Indiveri,
An additional advantage that this circuit has over the Axon-Hillock circuit is power consumption: the Axon-Hillock circuit's non-inverting amplifier, comprising two inverters in series, dissipates large amounts of power for slowly varying input signals, as the first inverter spends a significant amount of time in its fully conductive state (with both nFET and pFET conducting) when its input voltage
Spike-frequency adaptation is a mechanism observed in a wide variety of neural systems. It acts to gradually reduce the firing rate of a neuron in response to constant input stimulation. This mechanism may play an important role in neural information processing, and can be used to reduce power consumption and bandwidth usage in VLSI systems comprising networks of silicon neurons.
There are several processes that can produce spike-frequency adaptation. Here we will focus on the neuron's intrinsic mechanism, which produces, with each action potential, slow ionic currents that are subtracted from the input. This “negative feedback mechanism” has been modeled differently in a number of SiNs.
The most direct way of implementing spike-frequency adaptation in a SiN is to integrate the spikes produced by the SiN itself (e.g., using one of the filtering strategies described in Section
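A minimal numerical sketch of this negative-feedback scheme (not a model of any specific circuit; all parameter values are illustrative, in normalized units with a unit spiking threshold) integrates the neuron's own spikes into a slow adaptation variable that is subtracted from the input:

```python
import numpy as np

def lif_adapt(I=2.0, T=0.5, dt=1e-4, tau_m=20e-3, tau_a=200e-3, b=0.2):
    """Leaky I&F with a spike-triggered adaptation current subtracted
    from the input (the negative-feedback mechanism described in the
    text), integrated with forward Euler."""
    v, g_a, spike_times = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += (dt / tau_m) * (-v + I - g_a)  # membrane integrates input minus adaptation
        g_a += (dt / tau_a) * (-g_a)        # adaptation variable decays slowly
        if v >= 1.0:                        # threshold crossing
            spike_times.append(step * dt)
            v = 0.0                         # reset the membrane
            g_a += b                        # each spike increments the adaptation
    return spike_times

isis = np.diff(lif_adapt())
# the inter-spike intervals lengthen over time: the firing rate adapts
```

Because the adaptation variable builds up toward an equilibrium set by the firing rate, the inter-spike intervals grow monotonically before settling, which is the signature of spike-frequency adaptation.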
Spike-frequency adaptation and other more complex spiking behaviors can also be modeled by implementing models with adaptive thresholds, as in the Mihalas–Niebur neuron model (Mihalas and Niebur,
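The adaptive-threshold alternative can be sketched in the same style; the fragment below is a simplified caricature in the spirit of the Mihalas–Niebur model, not its full formulation, and all parameter values are illustrative and normalized:

```python
def adaptive_threshold_if(I=1.5, T=0.5, dt=1e-4, tau_m=20e-3,
                          tau_th=100e-3, theta_inf=1.0, jump=0.2):
    """I&F neuron with an adaptive spiking threshold: the threshold
    jumps at each spike and relaxes back toward theta_inf, so the
    firing rate adapts without a separate adaptation current."""
    v, theta, spike_times = 0.0, theta_inf, []
    for step in range(int(T / dt)):
        v += (dt / tau_m) * (-v + I)                  # membrane integration
        theta += (dt / tau_th) * (theta_inf - theta)  # threshold relaxation
        if v >= theta:
            spike_times.append(step * dt)
            v = 0.0        # reset the membrane
            theta += jump  # raise the threshold on each spike
    return spike_times
```

The output pattern is qualitatively the same as with an adaptation current, but here the state variable that adapts is the spiking threshold itself.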
Examples of two-state variable SiNs that use either of these mechanisms will be presented in Section
Recent experimental evidence suggests that individual dendritic branches can be considered as independent computational units. A single neuron can act as a multi-layer computational network, with individually separated dendritic branches allowing parallel processing of different sets of inputs on different branches before their outputs are combined (Mel,
Early VLSI dendritic systems included the passive cable circuit model of the dendrite specifically by implementing the dendritic resistance using S-C circuits (Elias and Northmore,
The authors in Wang and Liu (
Circuits that operate like a MOS transistor but with a digitally adjustable size factor W/L are very useful in neuromorphic SiN circuits, for providing a weighted current or for calibration to compensate for mismatch. Figure
Alternative design schemes, using the same principle but a different arrangement of the transistors, can be used for applications in which high-speed switching is required (Leñero-Bardallo et al.,
Typically, the smallest currents that can be processed in conventional circuits are limited by the MOS “off sub-threshold current,” which is the current a MOS transistor conducts when its gate-to-source voltage is zero. However, MOS devices can operate well below this limit (Linares-Barranco and Serrano-Gotarredona,
We will now make use of the circuits and techniques introduced in Section
The types of SiN designs described in this section exploit the biophysical equivalence between the transport of ions in biological channels and charge carriers in transistor channels. In the classical conductance-based SiN implementation described in Mahowald and Douglas (
Thalamic relay neurons possess a low-threshold calcium channel (also called a T-channel) and a slow inactivation variable, which turns off at higher voltages and opens at low voltages. The T-channel can be implemented using a fast activation variable, realized with the gating variable circuit of Figure
The dendritic compartment contains all active membrane components not involved in spike generation – namely, the synapses (e.g., one of the low-pass filters described in Section
The somatic compartment, comprising a simple I&F neuron such as the Axon-Hillock circuit described in Section
When
The approach followed for this Thalamic relay SiN can be extended by using and combining multiple instances of the basic building blocks described in Section
In Yu and Cauwenberghs (
We have shown examples of circuits used to implement faithful models of spiking neurons. These circuits can require significant amounts of silicon real-estate. At the other end of the spectrum are compact circuits that implement basic models of I&F neurons. A common goal is to integrate very large numbers of these circuits on single chips to create large arrays of spiking elements, or large networks of neurons densely interconnected (Merolla et al.,
A common application of basic I&F spiking circuits is their use in neuromorphic vision sensors. In this case the neuron is responsible for encoding the signal measured by the photoreceptor, and transmitting it off-chip using the address-event representation (AER). In Azadmehr et al. (
The neuron used in the octopus retina (Culurciello et al.,
Another compact neuron circuit is the one used in the dynamic vision sensor (DVS) silicon retina (Lichtsteiner et al.,
The matching behavior of this circuit is the key to the success of the DVS, which is the first event-based silicon retina to have been commercialized and sold to other institutions. Because the DC mismatch in the log intensity value is blocked by C1, and because A1 inserts a high-gain element before the poorly matched comparators AON and AOFF, the mismatch referred back to the signal of interest (dlogI) is reduced by the gain of A1. For example, if the mismatch of AON/AOFF is 20 mV and the gain A1 = 20, then the mismatch at the logI output is reduced to 1 mV, which corresponds to a visual contrast of about 3.5%. This relatively good matching allows the DVS to be used with natural visual input, which often has rather low contrast. This circuit is an example of the general principle of removing static mismatch and amplifying before comparing, which improves precision obtained from imprecise elements. Measurements show that across an array of 16k pixels the one-sigma matching is equivalent to about 2% contrast. The five-sigma matching (which applies across a large array of cells) is then about 10%, in agreement with the practical contrast threshold settings of about 15% that we routinely use (Lichtsteiner et al.,
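The arithmetic of this example is easy to verify. In the check below, the 20 mV comparator mismatch and the gain A1 = 20 are taken from the text, while the log-photoreceptor slope of roughly 29 mV per e-fold of intensity is our own assumed value (it depends on the subthreshold slope factor and the thermal voltage), chosen to show how 1 mV maps to about 3.5% contrast:

```python
import math

comparator_mismatch = 20e-3   # V: mismatch of AON/AOFF (from the text)
A1_gain = 20.0                # gain of amplifier A1 (from the text)
mV_per_efold = 29e-3          # V per e-fold of intensity (assumed slope)

# Mismatch referred back to the signal of interest is divided by the gain.
input_referred = comparator_mismatch / A1_gain          # -> 1 mV
# Convert the residual voltage offset into an equivalent contrast step.
contrast = math.exp(input_referred / mV_per_efold) - 1  # fractional contrast
print(f"input-referred mismatch: {input_referred * 1e3:.1f} mV")
print(f"equivalent contrast threshold: {contrast * 100:.1f} %")
```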
The simplified I&F neuron circuits described in the previous Section require far fewer transistors and parameters than the biophysically realistic models of Section
The circuit shown in Figure
The log-domain LPF neuron (LLN) is a simple yet reconfigurable I&F circuit (Arthur and Boahen,
where ν∞ is
The LLN is composed of four sub-circuits (see Figure
Implementing the LLN's various spiking behaviors is a matter of setting its biases. To implement regular spiking, we set
The DPI neuron is another variant of a
The DPI-neuron circuit is shown in Figure
By applying a current-mode analysis to both the input and the spike-frequency adaptation DPI circuits (Bartolozzi et al.,
where
where
By changing the biases that control the neuron's time-constants, refractory period, spike-frequency adaptation dynamics and leak behavior (Indiveri et al.,
Indeed, given the exponential nature of the generalized I&F neuron's non-linear term
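A software sketch of a generalized I&F model with an exponential non-linear term, written here in the style of the adaptive exponential I&F formulation, illustrates the class of dynamics such circuits reproduce (all parameter values below are illustrative defaults, not fits to any fabricated chip):

```python
import numpy as np

def adex(I=0.8e-9, T=0.3, dt=1e-5, C=200e-12, gL=10e-9, EL=-70e-3,
         VT=-50e-3, dT=2e-3, Vr=-58e-3, Vpeak=0.0,
         a=2e-9, b=0.05e-9, tau_w=100e-3):
    """Adaptive exponential I&F: membrane voltage v with an exponential
    non-linearity around VT, plus a slow adaptation current w that is
    incremented at every spike (forward-Euler integration)."""
    v, w, spikes = EL, 0.0, []
    for step in range(int(T / dt)):
        dv = (-gL * (v - EL) + gL * dT * np.exp((v - VT) / dT) - w + I) / C
        dw = (a * (v - EL) - w) / tau_w
        v += dt * dv
        w += dt * dw
        if v >= Vpeak:              # spike detected: reset and adapt
            spikes.append(step * dt)
            v = Vr
            w += b
    return spikes
```

The exponential term makes the upswing of the action potential essentially instantaneous once the voltage passes the soft threshold VT, which is the behavior the positive-feedback circuits described above produce.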
The SiN circuits described up to now have transistors that operate mostly in the sub-threshold or weak-inversion domain, with currents typically ranging from fractions of a picoampere to hundreds of nanoamperes. These circuits have the advantage of being able to emulate real neurons with extremely low power requirements and with realistic time-constants (e.g., for interacting with the nervous system, or implementing real-time behaving systems with time-constants matched to those of the signals they process). However, in the weak-inversion domain mismatch effects are more pronounced than in the strong-inversion regime (Pelgrom et al.,
It has been argued that in order to faithfully reproduce computational models simulated on digital architectures, it is necessary to design analog circuits with low mismatch and high precision (Schemmel et al.,
As mentioned in Section
Figure
Each mathematical function used in the H–H neuron model, implemented using its analog equivalent circuit, is controlled by tunable analog variables which correspond to the model parameters. All parameters are stored on-chip in dynamically reconfigurable analog DRAM cells. This implementation approach is costly in terms of silicon area and time-to-fabrication, due to the full-custom design mode and to the open parameter space that necessitates above-threshold design with bipolar and MOS transistors. To improve the design flow, the analog circuits are designed as library items which form a database. The database is used as a platform for automated design (Lévi et al.,
Neuron membrane voltages are obtained by summing currents, chosen from a set of “generator” library circuits, onto a capacitance representing the membrane capacitance. The currents are selected by a system of configurable switches, and a maximum of five generators can be selected for a single neuron. This covers most of the point neuron models used in computational neuroscience. External inputs and synaptic currents from pre-synaptic neurons can be injected onto the membrane capacitance. The results presented in Figure
As for the sub-threshold case, implementations of biophysically detailed models such as the one described above can be complemented by more compact implementations of simplified I&F models.
The quadratic I&F neuron circuit (Wijekoon and Dudek,
The two state variables, “membrane potential” (V) and “slow variable” (U), are represented by voltages across capacitors
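The underlying two-variable quadratic model can be sketched in software; the fragment below uses Izhikevich-style equations with standard regular-spiking parameters as an illustration of the dynamics this class of circuit approximates (it is a model sketch, not a model of the circuit itself):

```python
def izhikevich(I=10.0, T=500.0, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Two-variable quadratic I&F model: v is the membrane potential,
    u the slow recovery variable (times in ms, standard regular-spiking
    parameters, forward-Euler integration)."""
    v, u, spikes = c, b * c, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)  # quadratic membrane
        u += dt * a * (b * v - u)                           # slow recovery variable
        if v >= 30.0:             # spike cutoff
            spikes.append(step * dt)
            v, u = c, u + d       # reset v, increment the slow variable
    return spikes
```

Different firing patterns (regular spiking, bursting, chattering, and so on) are obtained simply by changing the four parameters a, b, c, and d, which is what makes this model attractive for compact circuit implementations.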
A design that lies in between the detailed H–H neuron circuits and the quadratic neuron circuit, in terms of transistor count and circuit complexity, is the above-threshold current-controlled conductance neuron.
This circuit is also an accelerated neuron model, which uses transistors operated in the strong-inversion regime to emulate the properties of neuron membrane conductances. Together with on-chip bias-generation circuits such a model can be calibrated to quantitatively reproduce numerical simulations. Figure
Functionally the ion-channels are realized by current-controlled conductances. The inhibitory and excitatory channels receive a current-sum representing the total neuro-transmitter density in the synaptic cleft of the inhibitory and excitatory synapses respectively. Thereby, the time course of the synaptic conductance is generated outside of the neuron circuit and may differ for each synapse. Using a current-mode input is mandatory at the high acceleration factor of the neuron (10⁴–10⁵). A rise-time of 1 ms in biology translates to 10 ns. Considering a voltage swing of 1 V and a total capacitance of 5 pF for the neuron input
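The drive requirement implied by these numbers follows directly from I = C·dV/dt; the back-of-the-envelope check below uses only the figures quoted in the text (the resulting current value is our own computed consequence, not a measured one):

```python
# Charging the neuron input node through a 1 V swing in the 10 ns that
# a 1 ms biological rise-time becomes at a 1e5 acceleration factor.
C_in = 5e-12     # F: total capacitance at the neuron input (from the text)
dV = 1.0         # V: voltage swing (from the text)
t_bio = 1e-3     # s: biological rise-time (from the text)
accel = 1e5      # acceleration factor (upper end of the quoted range)

t_silicon = t_bio / accel            # accelerated rise-time: 10 ns
I_required = C_in * dV / t_silicon   # I = C * dV/dt
print(f"rise-time on chip: {t_silicon * 1e9:.0f} ns")
print(f"required charging current: {I_required * 1e6:.0f} uA")
```

Currents of this magnitude are far beyond what a sub-threshold voltage-mode input could deliver, which is why a current-mode input is mandatory at these acceleration factors.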
The low input impedance necessary at the neuron inputs is generated by wide cascode transistors (M6 and M9). The circuits for the leakage and the inhibitory conductances are standard operational transconductance amplifiers (OTA1 and 2). In the case of the inhibitory conductance, the linear input range is extended by using a voltage-divider chain at the input of the OTA built from long transistors. This is feasible since the additional leakage generated by these transistors can be compensated by reducing the static leakage current
The excitatory conductance has to react very quickly to changes in the input current, as shown in Figure
Switched capacitors have long been used in integrated circuit design to implement variable resistors whose values can span several orders of magnitude. This technique can be used as a method of implementing resistors in silicon neurons, which is complementary to the methods described in the previous sections. More generally, S-C implementations of SiNs produce circuits whose behaviors are robust, predictable and reproducible (properties that are not always met with sub-threshold SiN implementations).
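The equivalence underlying this technique is that a capacitor C toggled between two nodes at a clock frequency f transfers a charge packet C·ΔV per cycle, i.e., an average current f·C·ΔV, so it emulates a resistor R_eq = 1/(f·C). A trivial sketch (component values are illustrative):

```python
def sc_resistance(C, f_clk):
    """Equivalent resistance of a capacitor C switched at f_clk:
    R_eq = 1 / (f_clk * C)."""
    return 1.0 / (f_clk * C)

# Sweeping the clock over a few decades sweeps the emulated resistance
# over the same number of decades: a 1 pF capacitor clocked at 10 kHz
# behaves like a 100 Mohm resistor.
for f in (1e3, 1e4, 1e5):
    r_meg = sc_resistance(1e-12, f) / 1e6
    print(f"f = {f:8.0f} Hz -> R_eq = {r_meg:6.0f} Mohm")
```

This clock-tunability is what makes S-C resistors attractive for the very large, biologically realistic time-constants that are awkward to realize with physical resistors on chip.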
The circuit shown in Figure
The Mihalas–Niebur S-C neuron (Mihalas and Niebur,
Experimental results measured from a fabricated integrated circuit implementing this neuron model (Folowosele et al.,
The S-C principle of using discrete time and clocked signals can be extended to use high-speed pulsing current mirrors for building weight-modulated, charge-packet-driven leaky I&F neurons.
In this framework, spikes produced by a source neuron act as asynchronous clock signals that selectively activate a set of binary-weighted high-speed pulsing current mirrors at the destination neurons. The selection of which current mirror branch is activated depends on a digital word that represents the neuron's synaptic weight. To implement the neuron leak conductance, an opposite-sign pulsing current mirror is driven by spikes generated by a periodic signal from a global on-chip clock. Figure
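A behavioral sketch of this scheme (purely illustrative; all charge, capacitance, and timing values are assumed, and the digital decoding is simplified) shows how a binary weight word scales the charge packet delivered per input spike while a clocked opposite-sign packet implements the leak:

```python
def packet_neuron(weight_bits, in_steps, n_steps=1000, C=1e-12,
                  q_unit=10e-15, leak_q=2e-15, leak_every=5, v_th=0.5):
    """Leaky I&F driven by weighted charge packets: each pre-synaptic
    spike enables binary-weighted mirror branches according to the
    digital weight word, dumping w*q_unit of charge onto the membrane;
    a clocked opposite-sign mirror removes leak_q every leak_every
    steps, implementing the leak conductance."""
    w = int("".join(map(str, weight_bits)), 2)   # decode the weight word
    v, spikes = 0.0, []
    for k in range(n_steps):
        if k in in_steps:
            v += w * q_unit / C           # charge packet: dV = Q/C
        if k % leak_every == 0:
            v = max(0.0, v - leak_q / C)  # clocked leak packet
        if v >= v_th:
            spikes.append(k)
            v = 0.0                       # reset on spike
    return spikes

# weight word '1010' enables 10 unit branches: each input spike adds
# 10 * 10 fC = 100 fC, i.e. 100 mV on the 1 pF membrane
out = packet_neuron([1, 0, 1, 0], set(range(0, 1000, 10)))
```

Doubling the weight word halves the number of input spikes needed per output spike, which is the weight-modulation property the binary-weighted mirrors provide.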
An alternative option requiring approximately the same area usage, but with much higher precision, is to implement the I&F neuron using all-digital techniques. This idea was explored in Camuñas Mesa et al. (
Digital VLSI implementations of neurons and neural systems are also being evaluated, without resorting to full-custom VLSI designs. Examples include solutions using FPGAs (Mak et al.,
While the digital processing paradigm, ranging from standard computer simulations to custom FPGA designs, is advantageous for its stability, fast development times, and high precision properties, full-custom VLSI solutions can often be optimized in terms of power consumption, silicon area usage, and speed/bandwidth usage. We anticipate that future developments in large-scale neuromorphic circuits and systems designs will increasingly combine full-custom analog and synthesized digital designs, in order to optimize both core and peripheral neural and synaptic functions in a highly programmable and reconfigurable architecture. The relative merits and the right mix of analog versus digital in neuromorphic computing (Sarpeshkar,
In this paper we described some of the most common circuits and techniques used to implement silicon neurons, and described a wide range of neuron circuits that have been developed over the years, using different design methodologies and for many different application scenarios. In particular, we described circuits to implement leaky I&F neurons (Mead,
Design | Key features
Thalamic relay | Conductance-based, thermodynamically equivalent, compact.
H–H model | Conductance-based, biologically realistic, not compact.
Octopus retina | Basic I&F model, low power, compact.
DVS | Basic I&F model, low mismatch, compact.
tau-cell | Log-domain, modular.
LLN | Log-domain, cubic two-variable model, low power, compact.
DPI | Current-mode, exponential adaptive model, low power, compact.
H–H model | Bipolar, voltage-mode, real-time, not compact.
Quadratic I&F | Voltage-mode, accelerated-time, low power, compact.
Current-controlled | Voltage-mode, conductance-based, accelerated-time.
Switched-capacitor | Mihalas–Niebur adaptive threshold model, discrete time, modular.
Digitally modulated | Basic I&F model, discrete time, low-mismatch.
Obviously, there is no absolute optimal design. As there is a wide range of neuron types in biology, there is a wide range of design and circuit choices for SiNs. While implementations of conductance-based models can be useful for applications in which a small number of SiNs is required (as in hybrid systems where real neurons are interfaced to silicon ones), the compact AER I&F neurons and log-domain implementations (such as the quadratic and Mihalas–Niebur neurons, the tau-cell neuron, the LPF neuron, or the DPI neuron) can be integrated with event-based communication fabric and synaptic arrays for very large-scale reconfigurable networks. Indeed, both the sub-threshold implementations and their above-threshold “accelerated-time” counterparts are well suited to dense and low-power integration, with energy efficiencies of the order of a few pico-Joules per spike (Wijekoon and Dudek,
The sheer volume of silicon neuron designs proposed in the literature demonstrates the enormous opportunities for innovation when inspiration is taken from biological neural systems. The potential applications span computing and biology: neuromorphic systems are providing clues for the next generation of asynchronous, low-power, parallel computing that could bridge the gap in computing power when Moore's law runs its course, while hybrid silicon-neuron systems are allowing neuroscientists to unlock the secrets of neural circuits, leading, one day, to fully integrated brain–machine interfaces. New emerging technologies (e.g., memristive devices) and their utility in enhancing spiking silicon neural networks must also be evaluated, while maintaining a knowledge-base of the existing technologies that have proven successful in silicon neuron design. Furthermore, as larger on-chip spiking silicon neural networks are developed, questions of communication protocols (e.g., AER), on-chip memory, size, programmability, adaptability, and fault tolerance also become very important. In this respect, the SiN circuits and design methodologies described in this paper provide the building blocks that will pave the way for these extraordinary breakthroughs.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
We would like to acknowledge the reviewers for their constructive feedback. This work was supported by the EU ERC grant 257219 (neuroP), the EU ICT FP7 grants 231467 (eMorph), 216777 (NABAB), 231168 (SCANDLE), 15879 (FACETS), by the Swiss National Science Foundation grant 119973 (SoundRec), by the UK EPSRC grant no. EP/C010841/1, by the Spanish grants (with support from the European Regional Development Fund) TEC2006-11730-C03-01 (SAMANTA2), TEC2009-10639-C04-01 (VULCANO) Andalusian grant num. P06TIC01417 (Brain System), and by the Australian Research Council grants num. DP0343654 and num. DP0881219.
1. Inverting amplifier circuits in which the current is limited by an appropriately biased MOSFET connected in series.
2. This is a realistic estimate, considering the large number of synapses connected to this line.