Edited by: Peter Ashwin, University of Exeter, United Kingdom
Reviewed by: Oleksandr Burylko, Institute of Mathematics (NAN Ukraine), Ukraine; Sid Visser, Simplxr B.V., Netherlands
*Correspondence: Axel Hutt
This article was submitted to Dynamical Systems, a section of the journal Frontiers in Applied Mathematics and Statistics
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Metastable attractors and heteroclinic orbits are present in the dynamics of various complex systems. Although their occurrence is well-known, their identification and modeling are challenging tasks. The present work briefly reviews the literature and proposes a novel combination of their identification in experimental data and their modeling by dynamical systems. This combination applies recurrence structure analysis, permitting the derivation of an optimal symbolic representation of metastable states and their dynamical transitions. To derive heteroclinic sequences of metastable attractors under various experimental conditions, the work introduces a Hausdorff clustering algorithm for symbolic dynamics. An application to brain signals (event-related potentials) utilizing neural field models illustrates the methodology.
Metastable states (MS) and heteroclinic orbits (HO) are prevalent in various biological and physical systems with essentially separated time scales. A MS can be characterized as a domain in a system's phase space with a relatively large dwell time and slow evolution that is separated from other MSs by fast transient regimes. Such slow phases may be constant states, oscillatory states, or even parts of chaotic attractors [
Well-known examples of such sequences are the solutions of the generalized Lotka-Volterra model for the population dynamics of species [
To explore the nature and underlying mechanisms of HOs, it is necessary to identify their constituent MSs and the heteroclinic connections between them, and to develop models describing them analytically. These models give insights into the possible underlying dynamics. For instance, in a previous series of studies, sequences of MSs have been identified in encephalographic data during cognitive tasks [
To understand HOs observed in experimental data, we propose a sequence of interweaving data analysis and modeling techniques. Since a HO exhibits a temporal sequence of MSs, in a first step these states are extracted by data analysis techniques that identify their locations in phase space, their onset times, and their durations as essential features [
The corresponding data analysis methods are based on certain model assumptions about the underlying dynamics. Some methods separate MSs from transients by machine learning techniques while neglecting the latter [
Once the sequence of MSs has been identified as the principal features of the measured data, their HO can be modeled as a dynamical system. The model features are the spatial localization, the onset time, and the duration of each MS. To this end, we choose the dynamical skeleton of a sequence of MSs and insert the corresponding features. The resulting model thus combines optimally extracted experimental features with a dynamical model.
The present work illustrates a particular combination of system identification and modeling techniques to extract sequences of MSs in HOs. Parts of this combination have been developed in recent years, while we present novel extensions that improve the identification of MSs in experimental data under different experimental conditions. In the application we focus on brain signals, but the methodology may easily be applied to experimental data measured in other physical, biological, or geoscientific systems.
The following sections illustrate in detail both a data analysis method for the extraction of MS features and a model approach to describe the HO of experimental data mathematically.
Let us consider a set of discretely sampled trajectories
The starting point of the (symbolic) recurrence structure analysis (RSA) is Poincaré's famous recurrence theorem. Recurrences of a sampled trajectory are encoded in the entries R_ij = Θ(ε − d(x_i, x_j)) of the recurrence matrix, where Θ denotes the Heaviside step function, d a distance function in phase space, and ε the radius of a ball centered at state x_j. Recurrent events (R_ij = 1) indicate that the trajectory at time i returns into the ε-ball around the state visited at time j.
Applying the recurrence grammar that corresponds to a recurrence plot
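As a minimal sketch (our own illustrative Python, not a published toolbox implementation), the recurrence matrix collects ε-close pairs of states, and a transitive-closure variant of the recurrence grammar rewrites each time index into the smallest index of its recurrence class:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 if states i and j are eps-close."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d < eps).astype(int)

def recurrence_grammar_symbols(R):
    """Recode time indices into symbols: each recurrent pair (i, j) defines a
    rewriting rule that merges the two equivalence classes; rules are applied
    until a fixed point is reached."""
    n = R.shape[0]
    s = np.arange(n)              # initially every sample is its own symbol
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(i):
                if R[i, j] and s[i] != s[j]:
                    lo, hi = min(s[i], s[j]), max(s[i], s[j])
                    s[s == hi] = lo   # merge equivalence classes
                    changed = True
    return s
```

Applied to a short trajectory, all mutually recurrent samples are mapped onto one symbol, so the symbolic sequence directly segments the trajectory into recurrence domains.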
The results of the RSA and the obtained symbolic dynamics depend heavily on the parameter ε, which defines the ball size of recurrent events. To find an optimal partition, and hence an optimal value of ε, beim Graben and Hutt [
for a Markov transition matrix
are renormalized entropies with
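To illustrate the selection principle only, the following sketch estimates the maximum-likelihood Markov transition matrix of a symbolic sequence and scores it with a simple trace-based metastability proxy. The proxy is our stand-in, not the renormalized-entropy utility of the RSA; it merely rewards symbols with long dwell times:

```python
import numpy as np

def markov_transition_matrix(symbols):
    """Maximum-likelihood Markov transition matrix of a symbolic sequence."""
    labels = sorted(set(symbols))
    idx = {a: k for k, a in enumerate(labels)}
    P = np.zeros((len(labels), len(labels)))
    for a, b in zip(symbols[:-1], symbols[1:]):
        P[idx[a], idx[b]] += 1.0
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

def metastability_utility(P):
    """Proxy utility: large when each symbol is strongly self-recurrent,
    i.e., when the chain dwells in metastable states. (A stand-in for the
    exact renormalized-entropy utility, which we do not reproduce here.)"""
    return float(np.trace(P)) / P.shape[0]
```

Scanning a grid of ε values, recomputing the symbolic sequence for each, and maximizing such a utility yields the optimal partition.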
Applying the RSA to the time series
To this end, let us assume two experimental conditions, i.e.,
with
Afterwards, both the original time series
such that the concatenation products are now functions from
From these data we gather all sampling points that belong to the same phase space cell
where
measures the “distance” of the point
Now we proceed as in the previous approach by beim Graben and Hutt [
which, like the recurrence matrix, consists of zeros and ones
For the application to the ERP data shown below, we chose a rather small similarity distance of θ = 0.25 in order to identify at least the pre-stimulus domain across conditions.
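The merging step can be sketched as follows (illustrative Python with Euclidean point distances; in the ERP application the underlying distance is the cosine distance and θ = 0.25): symbols from different conditions are identified whenever the Hausdorff distance between their phase-space cells falls below θ.

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets (rows = points)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def merge_symbols(cells, theta):
    """Merge symbols whose phase-space cells are closer than theta.
    cells: dict symbol -> array of points; returns symbol -> representative."""
    syms = sorted(cells)
    rep = {s: s for s in syms}
    for i, a in enumerate(syms):
        for b in syms[i + 1:]:
            if hausdorff(cells[a], cells[b]) < theta:
                rb, ra = rep[b], rep[a]
                for s in syms:      # relabel b's whole class by a's representative
                    if rep[s] == rb:
                        rep[s] = ra
    return rep
```

Symbols sharing a representative are treated as the same MS across conditions, e.g., the common pre-stimulus domain.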
In view of our subsequent modeling in Section 2.2, recall that the segments found by the RSA are MSs in phase space. Therefore, we compute their centers of gravity as time-averaged topographies
where
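In code, the centers of gravity reduce to per-symbol time averages (an illustrative sketch; variable names are ours):

```python
import numpy as np

def segment_centroids(x, symbols):
    """Center of gravity of each RSA segment: the time average of all
    samples x[t] carrying the same symbol (the MS topography)."""
    symbols = np.asarray(symbols)
    return {int(s): x[symbols == s].mean(axis=0) for s in np.unique(symbols)}
```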
The recurrence structure analysis (Section 2.1) of the measured data yields a sequential dynamics for each experimental condition. From our present analysis (Section 3.1), let us assume that we obtain a total number of
forming
In the following we use the findings of beim Graben and Hutt [
describing the evolution of neural activity
The metastable segmentation patterns of the RSA (11) are interpreted as stationary states,
For the particular case of Lotka-Volterra neural populations, described by activities ξ
with growth rates σ
recruits its corresponding MS
of the neural field.
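For illustration, a generalized Lotka-Volterra system with May-Leonard-type asymmetric competition produces exactly this winnerless-competition behavior. The growth rates and interaction matrix below are hypothetical example values, not fitted model parameters:

```python
import numpy as np

def lotka_volterra(xi0, sigma, rho, dt=0.01, steps=20000):
    """Euler integration of the generalized Lotka-Volterra system
    dxi_i/dt = xi_i * (sigma_i - sum_j rho_ij * xi_j)."""
    xi = np.array(xi0, dtype=float)
    out = np.empty((steps, len(xi)))
    for t in range(steps):
        xi = xi + dt * xi * (sigma - rho @ xi)
        out[t] = xi
    return out

# Hypothetical May-Leonard-type parameters for three competing states:
sigma = np.ones(3)
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])   # asymmetric inhibition -> sequential switching
traj = lotka_volterra([1.0, 0.02, 0.01], sigma, rho)
```

With this asymmetric ρ each population transiently dominates in turn, mimicking a heteroclinic sequence of MSs whose dwell times grow from one saddle passage to the next.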
Under these assumptions, beim Graben and Potthast [
with Pincherle-Goursat [
The kernel
The kernels
The linear term in this expansion is simply
Applying this method to each condition separately yields two kernels
for μ ∈ [0, 1] entails a dynamical system depending on a continuous control parameter μ with limiting cases
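A sketch of the degenerate (Pincherle-Goursat) kernel construction and the μ-interpolation, with mode functions and coefficient matrices standing in for the quantities derived above (illustrative only):

```python
import numpy as np

def pg_kernel(modes, A):
    """Pincherle-Goursat (degenerate) kernel w(x, y) = sum_kl A_kl v_k(x) v_l(y),
    sampled on a spatial grid; modes has shape (n_modes, n_points)."""
    return modes.T @ A @ modes

def interpolated_kernel(wA, wB, mu):
    """Continuous interpolation between the kernels of two conditions,
    w_mu = (1 - mu) * wA + mu * wB, for mu in [0, 1]."""
    return (1.0 - mu) * wA + mu * wB
```

The limiting cases μ = 0 and μ = 1 recover the two condition-specific kernels, while intermediate μ define a one-parameter family of neural field models.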
Summarizing the modeling approach: we assume a Lotka-Volterra dynamics of the underlying system while identifying its fixed points with the MSs extracted experimentally from the data. To derive a spatio-temporal neural model that evolves according to the Lotka-Volterra dynamics, and hence exhibits the right number of HOs, we have employed a neural field model. The final underlying neural model (17) evolves similarly to the experimental data.
In our later brain signal simulation, we construct a
For the temporal dynamics we prepare the HO solving Equation (14) with growth rates
For the simulations with the neural field toolbox of Schwappach et al. [
Running simulations for the different conditions with their respective parameters finally allows the computation of simulated brain signal data and the comparison of their functional connectivities.
In order to validate our neural field model of the HO, we subject the simulated data to the recurrence structure analysis as well. Since the amplitude of the simulated data is slightly diminished, we use a Hausdorff similarity threshold of θ = 0.1 instead of the θ = 0.25 used for the experimental data. All other analysis details are the same as in Section 2.1.
We reanalyze an ERP experiment on the processing of ungrammaticalities in German [
(22) a. Im Garten wurde oft
In the garden was often
“Work was often going on in the garden …”
b. *Im Garten wurde am
In the garden was on-the
“Work was on-the going on in the garden …”
In German, sentences of type (22-b) are ungrammatical because the preposition
The ERP study was carried out in a visual word-by-word presentation paradigm with 17 subjects. Subjects were presented with 40 trials per condition, each trial comprising one sentence that was structurally identical to either (22-a) or (22-b). The critical word was the past participle, printed in bold font across all conditions. EEG and an additional electro-oculogram (EOG) for controlling eye movements were recorded with 64 electrodes; EEG was measured with
For preprocessing, continuous EEG data were cut into [−200, +1,000] ms epochs, baseline-corrected over the prestimulus interval [−200, 0] ms, and averaged over trials per condition per subject after artifact rejection. Subsequently, these single-subject ERP averages were averaged over all 17 subjects per condition in order to obtain the grand average ERPs as the basis of our recurrence structure analysis (Section 2.1) and neural field modeling (Section 2.2).
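This pipeline can be sketched as follows (illustrative Python; variable names and the sampling-rate handling are our assumptions, and artifact rejection is omitted):

```python
import numpy as np

def epoch_and_average(eeg, fs, onsets, t_pre=0.2, t_post=1.0):
    """Cut continuous EEG (channels x samples) into [-t_pre, t_post] epochs
    around stimulus onsets (given in samples), subtract the prestimulus
    baseline per channel, and average over trials."""
    n_pre, n_post = int(t_pre * fs), int(t_post * fs)
    epochs = []
    for o in onsets:
        ep = eeg[:, o - n_pre : o + n_post].astype(float)
        ep = ep - ep[:, :n_pre].mean(axis=1, keepdims=True)  # baseline correction
        epochs.append(ep)
    return np.mean(epochs, axis=0)  # trial-averaged ERP
```

Averaging the resulting single-subject ERPs across subjects then yields the grand averages analyzed below.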
First we report the grand average ERPs as snapshot sequences of the respective scalp topographies. Figure
Snapshot sequence of ERP scalp topographies for correct condition (22-a).
The sequence exhibits only one prominent pattern, a frontal positivity around 232 ms, known as the attentional P200 component.
Accordingly, we present in Figure
Snapshot sequence of ERP scalp topographies for violation condition (22-b).
Also in this condition the P200 effect is visible as a frontally pronounced positivity between 232 and 280 ms. Moreover, an earlier effect with reversed polarity can be recognized around 136 ms, the tentative N100 component. Most important is a parietal positivity starting at 472 ms and lasting until the end of the epoch window at 952 ms. This late positivity is commonly regarded as a neurophysiological correlate of syntactic violations [
However, a proper interpretation of ERP effects relies on considering condition differences. Thus, we plot the ERP difference between violation condition (22-b) and correct condition (22-a) in the range [−8, 8]μV in Figure
Snapshot sequence of ERP scalp topographies for difference potential, cf. movie
The difference patterns in Figure
During cognitive tasks, Lehmann et al. [
In order to detect brain microstates by means of the RSA method in Section 2.1 we compute recurrence plots based on the cosine distance function
as we are interested in detecting recurrent scalp topographies. This choice also has the advantage that the sparse 59-dimensional observation space is projected onto the unit sphere, resulting in a denser representation of ERP trajectories.
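A sketch of this amplitude-invariant distance, assuming the common definition d = 1 − cosine similarity (the exact normalization used in the analysis may differ):

```python
import numpy as np

def cosine_distance_matrix(X):
    """Pairwise cosine distances d(x_i, x_j) = 1 - <x_i, x_j> / (|x_i| |x_j|)
    between scalp topographies (rows of X). The distance is amplitude-
    invariant: only the spatial pattern of each topography matters."""
    U = X / np.linalg.norm(X, axis=1, keepdims=True)  # project onto unit sphere
    return 1.0 - U @ U.T
```

Thresholding this matrix with the optimal ε then yields the recurrence plots used in the RSA.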
Next we present the results of our ERP recurrence structure analysis and neural field modeling.
For the recurrence structure analysis we optimize the Markov utility function (3) for the grand averages of both conditions (22-a) and (22-b) separately. The resulting utility functions are depicted in Figure
RSA Markov utility functions for conditions (22-a) (solid) and (22-b) (dotted).
Figure
The results of the recurrence structure analysis (RSA) for this optimal value of ε are shown in Figure
ERP grand averages and optimal recurrence grammar partition for experimental data taken from [
After identification of the MS, we construct the neural field model based on results of Section 2.2. First we present the functional connectivity kernels
Reconstructed connectivity kernels for the neural field models for conditions (22-a)
Figure
Next we show the simulated spatiotemporal dynamics of the neural field simulation from Section 2.2. To this end, the dynamics is illustrated as a temporal snapshot sequence of spatial topographies.
Figure
Snapshot sequence of neural field scalp topographies for correct condition (22-a).
The sequence exhibits one prominent pattern, a frontal positivity around 235 ms, reflecting the attentional P200 component. Hence it closely resembles the original time series shown in Figure
Accordingly, we present in Figure
Snapshot sequence of neural field scalp topographies for violation condition (22-b).
In this condition the P200 effect is visible as a frontally pronounced positivity between 331 and 426 ms, which, however, is delayed compared to the original ERP data (cf. Figure
Moreover, the earlier N100 component with reversed polarity is now shifted toward the time window between 139 and 235 ms. The final parietal positivity, starting at about 470 ms in the ERP data, is also present in our simulation, now starting at 618 ms and lasting until the end of the epoch window at 952 ms.
As above, a proper interpretation of ERP effects requires the computation of condition differences. We plot the simulated neural field difference between violation condition (22-b) and correct condition (22-a) in the range [−10, 10] μV in Figure
Snapshot sequence of neural field scalp topographies for difference potential, cf. movie
A strong artifact effect from 139 until 235 ms is visible that is due to the misalignment of N100 and P200 in both conditions. However, the difference patterns clearly indicate the posteriorly distributed P600 ERP component that evolves between 618 and 1,000 ms post-stimulus.
For the recurrence structure analysis of the neural field simulation we optimize the Markov utility function (3) for the grand averages of both conditions (22-a) and (22-b) separately. The resulting utility functions are depicted in Figure
RSA Markov utility functions for neural fields simulations for conditions (22-a) (solid) and (22-b) (dotted).
Figure
The results of the recurrence structure analysis (RSA) are shown in Figure
Neural field simulation and its optimal recurrence grammar partition for experimental data from Frisch et al. [
Figure
The present work illustrates how to extract the MSs of a HO from experimental time series and how to model the resulting sequence of metastable attractors. Both the feature extraction and the modeling part are based on model assumptions about the dynamics of the heteroclinic sequence. The RSA is based on a stochastic Markov chain model, while the HO model is supposed to obey a Lotka-Volterra dynamics that can be mapped to a heterogeneous neural field equation.
The application to measured EEG data demonstrates that the combination of feature extraction and modeling makes it possible to describe heteroclinic sequences of metastable attractors in good agreement with experimental data. It is possible to reproduce both the sequence of states and the time-averaged means of the states well. However, details of the heteroclinic sequence, such as the variability of state durations and the duration of transients, may not be captured by the feature extraction and/or the HO model and may require closer investigation. This is seen in the simulated EEG data of Figure
The methodology presented improves previous attempts to derive dynamical models of HOs in experimental brain data by combining the RSA with the HO model for neural fields. The work introduces a novel approach based on Hausdorff clustering that combines several symbolic sequences gained from different experimental conditions to distinguish common and distinct MSs. This analysis provides insights into the time instants and spatial EEG distributions at which common underlying mechanisms are present, and into when the brain behaves characteristically differently across conditions. Moreover, the analysis provides the spatial kernels of the neural field models for each experimental condition. The spatial kernels exhibit a hemispheric asymmetry reflecting the brain's asymmetry in language processing, e.g., the semantic and pragmatic integration processes supported by the right hemisphere.
Models of heteroclinic sequences exhibit sequences of metastable attractors with attractive and repelling manifolds. By virtue of this construction, the dynamics is sensitive to random fluctuations, which may cause uncontrolled jumps out of the basin of attraction of the heteroclinic cycle and divergence from the stationary cycle. The probability of leaving the basin of attraction is small for tiny noise levels, while larger noise levels put the system at risk of diverging. However, we point out that the modeled stable heteroclinic sequence is constructed in such a way that it is rather robust against small levels of noise due to the dissipation [
Future work will apply the Hausdorff clustering to additional intracranially measured local field potentials in animals and to human EEG recordings to gain deeper insights into the brain's underlying heteroclinic dynamics. For the neural field simulation, HOs with multiple time scales as discussed by Yildiz and Kiebel [
The data have been taken from a study published previously in Frisch et al. [
AH conceived the structure of the manuscript, PB performed the simulations and data analysis, and both authors wrote the manuscript.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The Supplementary Material for this article can be found online at:
1Note that this recoding scheme applies accordingly to the case of more than two conditions.
2Electrode selection for ROIs: left-frontal (LF) = “FP1,” “AF3,” “AF7,” “F9,” “F7,” “F5,” “F3,” “FC5,” “FC3”; left-temporal (LT) = “FT9,” “FT7,” “T7,” “C5,” “C3,” “TP9,” “TP7,” “CP5,” “CP3”; left-parietal (LP) = “P9,” “P7,” “P5,” “P3”; left-occipital (LO) = “O1,” “PO7,” “PO3”; midline-central (C) = “FPZ,” “AFZ,” “FZ,” “FCZ,” “CZ,” “CPZ,” “PZ,” “POZ,” “OZ”; right-frontal (RF) = “FP2,” “AF4,” “AF8,” “F10,” “F8,” “F6,” “F4,” “FC6,” “FC4”; right-temporal (RT) = “FT10,” “FT8,” “T8,” “C6,” “C4,” “TP10,” “TP8,” “CP6,” “CP4”; right-parietal (RP) = “P10,” “P8,” “P6,” “P4”; right-occipital (RO) = “O2,” “PO8,” “PO4.”