Multilevel Computational Modelling in Epilepsy: Classical Studies and Recent Advances



Fig. 7.1
Schematic representation of spatial and temporal scales relevant in epilepsy and seizures. (a, b) The activity of a single neuron generating a spike-train. (c–e) A network of coupled neurons; the generated spike-trains show corresponding episodes of firing, introducing the notion of synchronicity. (f, g) A single channel of EEG from a person with absence epilepsy, displaying the SWDs characteristic of absence seizure EEG. (h) The seizure activity arises from healthy background activity, with seizures lasting several seconds. (i) Schematic histogram counting the number of SWDs in bins over 24 h. (j) Schematic representation of absence epilepsy seizures over 20 years, where the person grows out of the condition



Juvenile absence epilepsy (JAE) and childhood absence epilepsy (CAE) are part of a larger group called idiopathic generalised epilepsy (IGE), a particularly interesting group of epilepsies as they are generally considered to have an underlying genetic cause. Although absence seizures are very common in IGE, they are neither a necessary nor a sufficient condition for it. For an overview of the concepts and classifications relevant to IGE, we refer to a review by Mattson [83]. By combining computational, human, and animal experimental studies, there is now a clearer understanding of the pathophysiology of absence seizures and the associated SWDs. Absence seizures are usually seen in children and adolescents and are characterised by impairments in consciousness. Particular strains of genetically epileptic rats (WAG/Rij or GAERS), or animals treated with GABAA-antagonists in the feline generalised penicillin model of epilepsy (FGPE), can reproduce patterns of activity similar to SWDs, as well as display behavioural symptoms resembling absence seizures and comparable responses to medication [14, 23, 101]. These experimental and animal models are crucially important in identifying the different brain structures involved in absence seizures. Further, they provide a testing environment for investigating the influence of new types of medication and other treatments of epilepsy and seizures.

Although the generation of SWDs and absence seizures is not clearly understood physiologically, the spike component of the SWD is generally associated with the firing of cortical neurons, whereas the slower wave component is thought to be related to hyperpolarisation, likely caused by inhibition. A large body of experimental results points to a critical role of the interplay between the thalamus and cortex in the generation of absence seizures and SWDs, and there are many modelling attempts investigating probable causes, such as excessive corticothalamic feedback or inhibitory rebound potentials. For a detailed overview of these types of model and how they relate to activity patterns during sleep, we refer to the work by Destexhe and Sejnowski [34].

The purpose of the present review is to provide an overview of established and influential computational models of generalised and focal epilepsies, alongside a presentation of more recent approaches that attempt to integrate computational work with either experimental or clinical studies. Given that seizures arise from the same structures that govern normal brain function, we first provide an introduction to the many scales relevant in modelling neural activity more generally.



Multilevel Computational Models of Neural Activity


When building a mathematical or computational model of neural activity it is important to consider the constituent building blocks required. As described in the introduction, it is presumed (due to the nature of behavioural manifestations associated with seizures) that a malfunction of dynamical interactions between large-scale brain regions plays a critical role. Unpicking this statement suggests that seizures may be thought of as an emergent property, dependent upon the dynamics within a brain region and the connectivity between regions. In this context, the term brain connectivity may refer to many different things, including anatomical links (structural connectivity), statistical dependencies (functional connectivity), or causal interactions (effective connectivity) across different scales (from synapses to whole brain areas). Whilst ultimately large-scale neural activity arises from interactions between neural populations at many levels of description, the challenge of building and analysing fully multiscale computational models means that the focus is instead often constrained to a single scale of description that is typically governed by the availability of data. However, it is worth considering what are the fundamental building blocks of neural activity across multiple spatial scales, and how might function at one level inform and constrain function at another?

At the very highest level we might consider behaviour to be an emergent property of the interactions between macroscopic brain regions, including the cerebral cortex, hippocampus and subcortical nuclei such as the thalamus and basal ganglia. Brain regions are then typically subdivided into smaller functional compartments. For example, the cortex consists of around 10^5–10^6 cortical columns, which in turn consist of around 50–100 smaller minicolumns (see Table 7.1) [93, 118, 130]. This combinatorial explosion in connectivity is what makes a fully multiscale model of the brain difficult to develop. Further, complexity theory teaches us that even if such a model could be developed it may be of limited value: across complex networks, the emergent dynamics of the network are not typically predictable from a study of the component parts. Moreover, how far should our reduction process continue? For example, beyond the single-neuron level there are several smaller scales of description, such as cell biophysics or the molecular biology of gene regulation.


Table 7.1
Detailed organisation of the cortex

Structure       | Thickness   | Neurons/synapses | Scale  | Data
Cortex          | 2–4 mm      | 10^11/10^14      | Global | EEG/fMRI
Cortical column | 200–1000 µm | 10^4–10^8/–      | Meso   | LFPs
Minicolumn      | 20–40 µm    | 100–200/10^4     | Micro  | Single-unit recordings

Rather than an exhaustive process of reduction, we instead focus on three distinct spatial scales of description for which computational models have been most commonly used. The microscale, where we consider the properties of individual neurons and their dendritic and synaptic connections. The mesoscale, where we consider the properties of circuits consisting of larger numbers of neurons, such as cortical columns; this is the spatial scale at which we can approximate the statistical properties of the population without having to study the properties of all the individual neurons. Finally, the macroscale, where we consider the brain as described by specific regions (grey matter) connected by fibre pathways (white matter). We first give a brief overview of how different modelling approaches operate across these scales, before describing their specific contributions to the understanding of seizures in terms of neural dysfunction at these different scales of description. For an alternative, more illustrative, summary of these challenges, we refer the reader to the recent review by Tejeda and colleagues [124], which compares the possible advantages and disadvantages of using specific types of model (i.e., deterministic, stochastic, phenomenological, physiological).


Microscale


At the microscale, anatomical and physiological studies have revealed many of the main characteristics and interconnections of cortical microcircuits. Fundamentally, a neuron consists of three main parts: the cell body, the axon, and the dendritic tree. Information comes into the neuron through the dendritic tree, is integrated at the cell body, and the transformed signal is eventually sent as output through the axon. The dynamical behaviour of neurons is closely related to the evolution of the potential difference between the inside and the outside of the cell. The dynamics of the membrane potential depend on the electrophysiological properties of the neuron, such as the specific ion currents and conductances, and the overall connectivity structure, including the synaptic inputs. More specifically, it is mediated by the inflow and outflow of ionic currents across the membrane caused by the ion-pumps and ion-channels within the membrane. Neurotransmitters are responsible for opening and closing the ion channels by binding to receptors on the cell membrane, thereby increasing (depolarising) or decreasing (hyperpolarising) the membrane potential. For example, GABA can both inhibit and excite neurons in the brain, whereas glutamate is the main excitatory neurotransmitter in the brain. When a neuron receives sufficiently strong input through its synapses, the spiking threshold is exceeded and the membrane potential undergoes a very fast transient change called a spike or action potential. Action potentials propagate via the axon to other neurons, which in turn can be excited if their synaptic inputs are sufficiently strong. Spikes are the main means of communication between neurons, and spike-trains and specific spike-timings are generally considered the basic building blocks for encoding and transmitting information [30, 70].
It is important to point out that information flow is not constant or instantaneous, as conductances cause transmission delays, thereby further complicating the dynamics of the system even at this small spatial scale. This complexity is compounded by the many different types of neurons, characterised by different ionic properties and neurotransmitters [104].

Within a mathematical framework, we may characterise a neuron as an excitable, non-linear, dynamical system whose excitability arises as the result of being close to a bifurcation from resting state to spiking activity. One can study neural excitability in a deterministic setting through a separation of time-scales, typically with a slow recovery variable and a fast voltage variable. The resting state of a neuron corresponds to a stable equilibrium, and large enough inputs can push the system onto a stable periodic orbit, corresponding to periodic spiking activity. The intrinsic properties of the neuron, such as the number and type of currents, their conductances, and the number of ion-channels and their kinetics, affect the location, the shape, and the period of the stable limit cycle. Transitions between resting and spiking states can occur in different dynamical settings, as the equilibrium and limit cycle might coexist (bistable setting) or one of the states might disappear due to external inputs (bifurcation setting).

The Hodgkin-Huxley model of the squid giant axon is one of the most widely utilised and accepted descriptions of neurons with voltage-sensitive currents [58]. Using voltage-clamp techniques, three main currents in the squid axon were determined: a voltage-gated K+-current, a Na+-current, and an Ohmic leak current. The Hodgkin-Huxley equations model the dynamics of the membrane potential by describing the evolution of the activation variables with Markovian kinetics, assuming that the proportion of open channels depends on the number of activation and inactivation gates and the probability of a gate being open or closed. The evolution of the activation variables of the voltage-gated channels depends in turn on a voltage-sensitive steady-state activation function and a time constant, which are determined using voltage-clamp experiments. Positive and negative feedback loops in the model can lead to the generation of an action potential when the system is perturbed from its resting attractor. To illustrate this, assume that an external current causes a slight depolarisation of the cell membrane. Consequently the sodium conductance will increase, thereby depolarising the cell even further. After a short time-period, the slower potassium channels activate and the sodium channels inactivate, resulting in an increase in the potassium conductance relative to the sodium conductance, causing the cell to repolarise and eventually hyperpolarise.
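As a concrete illustration, the equations can be integrated numerically. The following minimal sketch uses the standard squid-axon parameter set with simple forward-Euler integration; the function names, time step, and injected current value are our own illustrative choices, not part of the original formulation:

```python
import numpy as np

# Standard squid-axon parameters (uF/cm^2, mS/cm^2, mV)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent gating rate functions (1/ms)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, T=100.0, dt=0.01):
    """Forward-Euler integration; returns time and voltage traces."""
    n_steps = int(T / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # initial conditions near rest
    Vs = np.empty(n_steps)
    for i in range(n_steps):
        # Ionic currents: transient Na+, delayed-rectifier K+, Ohmic leak
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        Vs[i] = V
    return np.arange(n_steps) * dt, Vs

t, V = simulate(I_ext=10.0)   # suprathreshold drive: repetitive spiking
n_spikes = np.sum((V[1:] >= 0.0) & (V[:-1] < 0.0))  # upward zero-crossings
```

Counting upward zero-crossings of the voltage trace gives a simple spike count, and varying `I_ext` exposes the transition between the resting equilibrium and the spiking limit cycle discussed above.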

Although the structure of the squid giant axon is somewhat simplistic in comparison to cortical neurons, the Hodgkin-Huxley equations form the basis for many deterministic conductance-based models. As experimental techniques have advanced, new details of the mechanisms governing ion channels have emerged, such as spike frequency adaptation or synaptic depression, and these insights can be incorporated into new conductance-based models, thereby providing closer agreement between model output and experimental data (such as single-unit recordings). One famous extension of the Hodgkin-Huxley equations is the Connor-Stevens model, which takes into account an additional transient K+-current within the framework [24]. As an alternative, rather than increasing the complexity of the system, some researchers have sought to reduce the model to its most critical parts. For example, by using a fast-slow decomposition, lower-dimensional models such as the FitzHugh-Nagumo model or the Morris-Lecar equations are derived [44, 92]. These reductions allow mathematical treatment such as phase-plane analysis, bifurcation analysis, and fast-slow analysis, while still capturing the essential dynamic properties of the full Hodgkin-Huxley equations. As such, conductance-based neurons provide a successful mechanistic understanding of crucial phenomena such as neuronal excitability and spike-generation [60].
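The flavour of such reductions can be seen in a minimal simulation of the FitzHugh-Nagumo model, sketched below with a commonly used parameter set; the particular parameter values and chosen input currents are illustrative assumptions:

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, T=200.0, dt=0.01, a=0.7, b=0.8, tau=12.5):
    """Fast voltage variable v and slow recovery variable w (forward Euler)."""
    n = int(T / dt)
    v, w = -1.0, -0.5
    vs = np.empty(n)
    for i in range(n):
        dv = v - v**3 / 3.0 - w + I   # fast subsystem: cubic nullcline
        dw = (v + a - b * w) / tau    # slow subsystem: linear recovery
        v += dt * dv
        w += dt * dw
        vs[i] = v
    return vs

vs_osc  = fitzhugh_nagumo(I=0.5)   # input inside the oscillatory window
vs_rest = fitzhugh_nagumo(I=0.0)   # no drive: relaxes to the stable equilibrium
```

With the drive switched on, the trajectory settles onto a large-amplitude relaxation oscillation, the two-variable analogue of repetitive spiking; without drive, it decays to rest, illustrating excitability with only two state variables.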

Although morphological studies have revealed much insight into the detailed structure of neurons and inspired detailed compartmental models using cable theory (see [17] for example), most simple models characterise a neuron as a single compartment or point corresponding to the cell body. In these point-models, current flows strictly into and out of the cell, not between different regions within the cell. One of the earliest and simplest models of a neuron is the leaky integrate-and-fire neuron, a simple resistor-capacitor circuit with an Ohmic leakage [1]. By restoring the membrane potential of a neuron to a reset-value after a specified threshold is reached, the model is able to mimic the generation of action potentials. Although integrate-and-fire models are only piecewise continuous and their spikes do not result from any realistic biological dynamics or kinetics, their reduced formulation makes them particularly suitable for mathematical analysis and for proving several theorems. As such, they allow a particularly simple setting that includes spikes, excitability, refractory periods, and the difference between excitation and inhibition. This balance between excitation and inhibition has been a crucial dynamical concept in the study of neural activity, especially in relation to synchronisation of networks, and as such often provides a direct relation to the modelling of seizures [128].
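The threshold-and-reset mechanism can be sketched in a few lines; the parameter values below are illustrative rather than taken from any particular study:

```python
import numpy as np

def lif(I, T=100.0, dt=0.1, tau_m=10.0,
        V_rest=-65.0, V_th=-50.0, V_reset=-65.0, R=10.0):
    """Leaky integrate-and-fire: an RC circuit with threshold-and-reset spikes.
    Returns the list of spike times (ms) for a constant input current I."""
    V = V_rest
    spikes = []
    for step in range(int(T / dt)):
        # Leaky integration towards V_rest + R*I
        V += dt * (-(V - V_rest) + R * I) / tau_m
        if V >= V_th:                 # threshold crossed: emit spike, reset
            spikes.append(step * dt)
            V = V_reset
    return spikes

# Drive above rheobase (here I > (V_th - V_rest)/R = 1.5) gives tonic firing,
# while subthreshold drive produces no spikes at all
spikes_supra = lif(I=2.0)
spikes_sub = lif(I=1.0)
```

The sharp rheobase separating silence from tonic firing is exactly the simple excitability property that makes the model attractive for analysis.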

An alternative approach for modelling microscale neuronal activity is to focus exclusively on the firing rate properties of neurons [39]. Firing rate models are built on the assumption that, on average, the input from a presynaptic neuron is proportional to its firing rate, and that the total synaptic input is obtained by summing the contributions of all the presynaptic neurons. These approaches specify a relationship between the firing rate and the input a neuron receives from other neurons in the network, or external or sensory inputs (such as an applied current). Note that firing rate models explicitly include the interactions between neurons, whereas the previously discussed conductance-based models were mainly studied in isolation without specified input-relationships. These approaches naturally extend the scale from an individual neuron to the level of networks of interacting neurons [134].
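The structure of a firing-rate model can be sketched as follows, assuming a sigmoidal relationship between total synaptic input and output rate; the two-population connectivity matrix and input values are hypothetical examples:

```python
import numpy as np

def sigmoid(x):
    """Monotone gain function mapping total input to a rate in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def rate_network(W, I_ext, tau=10.0, T=500.0, dt=0.1):
    """Integrate dr/dt = (-r + f(W r + I)) / tau: the total input to each
    unit is the weighted sum of presynaptic rates plus external input."""
    r = np.zeros(len(W))
    for _ in range(int(T / dt)):
        r += dt * (-r + sigmoid(W @ r + I_ext)) / tau
    return r

# Hypothetical 2-population circuit: excitatory (index 0), inhibitory (index 1)
W = np.array([[1.5, -1.0],    # E receives self-excitation and inhibition
              [1.0,  0.0]])   # I is driven by E
I_ext = np.array([0.5, 0.0])
r_ss = rate_network(W, I_ext)  # steady-state rates of the coupled populations
```

Because the interactions enter only through the summed rates, networks of arbitrary size follow the same template by enlarging `W`.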


Mesoscale


Computational models of single neurons are often used to describe the underlying dynamics of realistic neuronal networks consisting of interconnected neurons. By coupling neurons together into larger ensembles or (sub)populations, networks of variable size are constructed as sets of coupled differential equations. Simulating these networks then gives the evolution of the state-variables of every individual neuron and reveals the emergent spatiotemporal patterns at the network level. Networks at the microscale are in the order of micrometers, whereas at the mesoscopic scale larger networks of cortical (mini)columns operate at the scale of hundreds of micrometers. The mesoscale is a relevant level of observation in the context of integration of information from the microscale towards whole brain areas, as it extends the level from single neurons to interacting local neural groups. Multi-unit recordings or local field potentials (LFPs), measuring summated dendritic current, can reveal activity of the brain at the mesoscopic level [130]. As shown in the work of Destexhe, networks of single compartment models are used to calculate field potentials, thereby relating both experimental data as well as models from the microscale to the mesoscale [34].

At this level of description, we can consider different levels and types of connectivity or coupling. First there is the level of neuronal connections, where coupling will have either an increasing (excitatory) or decreasing (inhibitory) effect on the membrane potential of the receiving neuron. Further, there might be a hierarchical structure within the overall network, as information might flow from one area to another in a feedback (top-down) or feedforward (bottom-up) manner [130]. Once the type of neuronal coupling (i.e., strength, delay, periodicity) is established and the connectivity structure described (i.e., all-to-all, sparse connectivity, specific areas), the output of the simulated network can be used to study the collective behaviour of the network as well as the individual neuronal responses.

Current detailed, biophysical modelling attempts typically incorporate more biophysical details into multi-compartmental models using specialised software (e.g., NEST, GENESIS, NEURON) together with the computational power of supercomputers. A particularly striking example of this is the Blue Brain Project [80] and its sequel, the Human Brain Project. It should be highlighted that these large-scale simulations and models limit a global understanding of the dynamical behaviour of the underlying model, in the sense that one typically cannot decide whether the observed simulation displays converged activity or whether there are any other possible attractors within the system. Additionally, given the large numbers of physiological parameters and the fact that it is often notoriously hard to establish reliable estimates for them, a thorough understanding of the dynamical behaviour of these models may require extensive sensitivity analysis, as the model might depend critically on the exact settings of the parameters. An interesting future avenue of work here would be to develop uncertainty quantification techniques (see for example [98]), which have proved popular in large-scale climate models, for quantifying the uncertainty associated with parameters of these large-scale simulations.

Networks of interacting neurons can exhibit collective behaviour that is not intrinsic to the individual activity patterns, such as synchronisation. Synchronisation is a crucially important concept in the computational modelling of the brain and seizures. For example, synchronisation within large-scale networks of interacting neurons causes the emergence of oscillatory activity in the thalamocortical system, such as the alpha and gamma rhythms [20]. The existence of distinct oscillatory frequencies in LFPs and EEG suggests that the brain, despite its dynamical complexity, produces patterns and trajectories that could be projected onto much lower-dimensional subspaces and studied accordingly. This approach, often termed the mean-field approximation, forms the basis for lumped models that typically operate at both the meso- and macroscale, describing the evolution of neural activity with collective variables such as the proportion of active neurons at a given time in a population, the mean membrane potential, or the average firing rate. Simulating networks of thousands of individual conductance-based neurons at such larger scales is computationally expensive because of the many dimensions, and furthermore, the large number of neurons makes it impracticable to study the influence of each element individually. Lumped approaches offer an alternative to coupling large collections of individual neurons, by characterising the activity of the neuronal populations in an aggregated manner. An important example of this approach is the Kuramoto model, which describes the behaviour of large groups of neurons as near-identical phase-coupled oscillators [71]. Here we consider the action potential of the neuron to reflect a periodic oscillation and focus on the study of the phase of this oscillation rather than the membrane potential directly.
In the case of weak coupling, the amplitudes of the oscillations remain approximately constant, such that interactions can be described by focusing on the phase alone. Synchronisation within this framework typically relates to phase-locking (or coherence), and is then often studied in the thermodynamic limit (i.e., the network size growing to infinity). Mathematical analysis of the order parameter can then reveal that the dynamics of the network of oscillators can be described by studying its statistical properties [60] and treating the network as a field.
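A sketch of the Kuramoto model in its mean-field form makes the role of the order parameter concrete; the population size, the Gaussian frequency distribution, and the two coupling strengths below are illustrative choices:

```python
import numpy as np

def kuramoto(K, N=500, T=50.0, dt=0.01, seed=0):
    """Simulate N phase oscillators with all-to-all coupling of strength K
    and return the final order parameter r = |mean(exp(i*theta))|."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)           # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)    # random initial phases
    for _ in range(int(T / dt)):
        z = np.mean(np.exp(1j * theta))       # complex order parameter r*exp(i*psi)
        # Mean-field form: dtheta_i/dt = omega_i + K * r * sin(psi - theta_i)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.mean(np.exp(1j * theta))))

r_weak, r_strong = kuramoto(K=0.5), kuramoto(K=4.0)
```

Below the critical coupling the order parameter stays near zero (incoherence, up to finite-size fluctuations), while strong coupling drives the population towards phase-locking with r close to one.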

Another motivation for considering the mean-field activity of neural populations results from the nature of experimental data recorded at the mesoscale. As LFPs reflect the summated dendritic current at a scale of around 1 mm, they highlight the common action of local networks of neurons rather than the activity of individual neurons [130]. Another popular approach for describing neural population activity is based on a mean-field approximation of the ensemble density function. The ensemble approach is explained in detail in a review by Deco and others [31], and is based around methods from statistical mechanics used to formulate a probability density function that captures the distribution of all neuronal states of a population. Instead of following the evolution of thousands of individual neurons, the probability density function describes the time-dependent average activity level of the whole ensemble. By using appropriate assumptions, the stationary solutions of the probability density function might be analysed in a generic setting. However, one can simplify the probability density approach by relating the evolution of the density to a single variable instead of a collection. It is this specific mean-field assumption that forms the basis of neural mass models [45].

A neural mass model treats large-scale activity as a point process, and as such can equally reflect the activity of neuronal ensembles or EEG sources. This comes at the cost of throwing away higher-order moments, such that interacting populations can only influence each other through their expected population rate. Despite this limitation, this simplification of the dynamics of the neuronal populations allows one to study interacting subpopulations with small computation times. A particularly popular neural mass formulation is the Jansen-Rit model [61], which forms the basis of many of the epilepsy models we consider later.
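The Jansen-Rit model can be sketched as three interacting postsynaptic-potential blocks driven by external input. The parameters below are the standard values reported for the model, while the noisy-input assumption (Gaussian fluctuations around a constant mean) and the Euler time step are our own simplifications:

```python
import numpy as np

# Standard Jansen-Rit parameters [61]
A, B = 3.25, 22.0          # excitatory/inhibitory synaptic gains (mV)
a, b = 100.0, 50.0         # inverse synaptic time constants (1/s)
C = 135.0                  # global connectivity constant
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
v0, e0, r = 6.0, 2.5, 0.56

def S(v):
    """Sigmoid converting mean membrane potential to mean firing rate."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def simulate(p=120.0, T=2.0, dt=1e-4, seed=0):
    """Pyramidal (y0), excitatory (y1) and inhibitory (y2) PSP blocks;
    the model EEG output is the net pyramidal input y1 - y2."""
    rng = np.random.default_rng(seed)
    y = np.zeros(6)                      # [y0, y1, y2, y0', y1', y2']
    out = np.empty(int(T / dt))
    for i in range(len(out)):
        y0, y1, y2, y3, y4, y5 = y
        p_t = p + rng.normal(0.0, 30.0)  # noisy external drive (assumed)
        dy = np.array([
            y3, y4, y5,
            A * a * S(y1 - y2) - 2 * a * y3 - a**2 * y0,
            A * a * (p_t + C2 * S(C1 * y0)) - 2 * a * y4 - a**2 * y1,
            B * b * C4 * S(C3 * y0) - 2 * b * y5 - b**2 * y2,
        ])
        y = y + dt * dy
        out[i] = y1 - y2
    return out

eeg = simulate(p=120.0)   # model EEG trace (mV)
```

With moderate drive this standard parameter set is known to produce alpha-band-like oscillations in the output, which is why the model is so often used as a starting point for EEG and seizure modelling.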


Macroscale


We now consider models at the macroscale of millimetres to centimetres, where very large numbers of neurons and neuronal populations form distinct brain regions which are interconnected by inter-regional pathways. It is the brain activity arising from macroscopic populations that we observe directly in EEG or MEG recordings. The activity patterns recorded by EEG are widely regarded as the summation of interactions of large populations of cortical pyramidal neurons, which due to their dendritic organisation align perpendicularly to the surface of the cortex [97]. Whilst EEG reflects the extracellular output of pyramidal cells, the recorded potentials are generated as a consequence of these cells receiving both excitatory and inhibitory postsynaptic potentials.

One approach to studying the emergent rhythms of these large-scale brain regions is provided by neural field models. Neural field models are effectively extensions of neural mass models, describing the average or coarse-grained activity of populations of interacting neurons by treating the cortex as a continuous excitable medium. Consequently, the spatially extended sheet is modelled as a function of space and time, and the macroscopic ensemble dynamics are described by a set of partial differential equations or integro-differential equations. Using fundamental methods from statistical mechanics, Wilson and Cowan developed the basis of neural field models by extending the earlier work of Beurle to include excitation, inhibition, and a refractory period within their approach [143, 144]. A review by Destexhe shows how the Wilson-Cowan equations have been used to include more sophisticated and realistic mechanisms such as bursting, adaptation, and synaptic depression, and how their work predicted neural oscillations and stimulus-evoked responses [35]. Naturally, neural field models have been used to model the generation of EEG rhythms, and a key feature of neural field models in the context of seizures is the functional significance of the macroscopic variables in relation to EEG recordings, making it easier to intuitively relate the computational model to the observed data.
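A minimal simulation of Wilson-Cowan-type dynamics at a single spatial location is sketched below. Note that this simplified version omits the refractory terms of the original equations and shares one sigmoid across both populations; the weights, sigmoid parameters, and drive values are illustrative assumptions:

```python
import numpy as np

def f(x, gain=1.0, theta=4.0):
    """Sigmoidal population response function."""
    return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

def wilson_cowan(P=1.5, Q=0.0, T=200.0, dt=0.01,
                 wEE=16.0, wEI=12.0, wIE=15.0, wII=3.0,
                 tauE=1.0, tauI=1.0):
    """Coupled excitatory (E) and inhibitory (I) population activities,
    driven by external inputs P (to E) and Q (to I). Returns the E trace."""
    E = I = 0.1
    Es = np.empty(int(T / dt))
    for i in range(len(Es)):
        dE = (-E + f(wEE * E - wEI * I + P)) / tauE   # recurrent excitation, inhibition
        dI = (-I + f(wIE * E - wII * I + Q)) / tauI   # inhibition driven by E
        E += dt * dE
        I += dt * dI
        Es[i] = E
    return Es

Es = wilson_cowan()
```

Depending on the weights and drive, the E-I pair settles to a fixed point or onto a limit cycle, which is precisely the oscillation-generating mechanism the Wilson-Cowan framework was designed to capture.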

The terminology of mean-field approaches can be somewhat confusing in the overall context of neural fields, as neural field modelling is not necessarily confined to mean-field activity alone, but can also result from other assumptions and approximations, such as the probability density approach or an integro-differential approach [31]. Neural field models have been extensively used to study a wide variety of topics including memory storage, pattern-formation, and travelling waves [2, 18, 25, 28], where they are typically formulated as a PDE, with the assumption of some underlying Green's function describing the connectivity kernel [62]. Alternatively, many mean-field approaches are defined as a set of integro-differential equations describing the coarse-grained activity of a population of neurons, where particular choices of the integral kernel describe the synaptic connectivity structure of the cortex. Common examples of these connectivity structures include short-range (local) excitation and long-range (lateral) inhibition (the so-called Mexican hat), long-range excitatory to excitatory connections and short-range inhibitory to inhibitory connectivity (the inverse Mexican hat), and global excitation [45, 62, 97]. The choice of weight function enables specific dynamical patterns to emerge, such as oscillatory behaviour, travelling waves, or bumps [18]. For example, Liley et al. [75] and Robinson et al. [107] include all main types of connectivity at the local scale (e-e, e-i, i-e, i-i) but only long-range excitatory connections that interact with both the excitatory and the inhibitory populations. Liley further includes higher-order neurotransmitter kinetics and synaptic reversal potentials to describe the excitatory and inhibitory cortico-cortical interactions more accurately, finding alpha oscillations to be crucially dependent on the local inhibitory-inhibitory interactions [75].
Furthermore, these models show how different biophysical assumptions regarding the dendritic and axonal structure and dynamics affect the nature of the equations, as the voltage-equations might be either first-order (Liley) or second-order (Robinson) depending on whether the population response has (in)finite rise times affecting the propagation of the response through the cortical sheet. In summary, these modelling choices and assumptions on anatomical and physiological properties have a critical influence in determining the overall dynamical repertoire of the models as well as the suitability of analytical tools.
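A sketch of an Amari-type field with a Mexican-hat kernel illustrates how the choice of weight function supports localised bumps of activity; the kernel amplitudes, the resting threshold, and the initial condition below are illustrative assumptions:

```python
import numpy as np

def mexican_hat(x, Ae=2.0, se=1.0, Ai=1.0, si=2.0):
    """Short-range excitation minus broader-range inhibition."""
    return (Ae * np.exp(-x**2 / (2 * se**2))
            - Ai * np.exp(-x**2 / (2 * si**2)))

def amari_field(T=20.0, dt=0.01, L=20.0, N=256, h=-0.5):
    """Integrate u_t = -u + (w * H(u)) + h on a periodic 1D domain,
    with Heaviside firing rate H and Mexican-hat kernel w."""
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    dx = x[1] - x[0]
    w = mexican_hat(x)
    u = 2.0 * np.exp(-x**2) - 0.5      # localised super-threshold initial bump
    w_hat = np.fft.fft(np.fft.ifftshift(w))  # kernel centred at index 0
    for _ in range(int(T / dt)):
        fire = (u > 0).astype(float)    # Heaviside threshold at u = 0
        # Circular convolution of the kernel with the firing pattern
        conv = np.real(np.fft.ifft(w_hat * np.fft.fft(fire))) * dx
        u += dt * (-u + conv + h)
    return x, u

x, u = amari_field()
```

With these settings the initial activation relaxes towards a stationary bump: activity stays above threshold in a small central region sustained by local excitation, while lateral inhibition and the negative resting level h keep the rest of the field silent.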

Currently, there is great interest in understanding the relationship between biophysically detailed spiking models at one scale, and neural field models at the other.

In relating different levels of description, the question arises of how microscopic properties of individual neurons relate to mesoscopic local networks or to the macroscopic behaviour of large-scale networks and brain areas. Given the wide variety of different model approaches (see Table 7.2), how do these models relate to each other (i.e., how do they complement or differ from one another), and how can we describe the complex, distributed dynamical activity of the brain aided by these modelling approaches? This is a particularly complex task because of the range of spatial scales (from micrometers to centimetres) as well as the variety in temporal scales, with dynamical changes taking place over milliseconds to years within the brain. Under some very specific constraints, the relationship can be inferred [110], but this relationship is neither unique, nor true in generality [31]. Indeed, as highlighted by Bressloff, there is currently no structural multi-scale analysis of conductance-based neural networks that allows a rigorous derivation of neural field equations [18]. An alternative approach is to include higher-order moments beyond the first-order mean-field approximation, for example the variance of activity [41]. Very recently, work by Visser and colleagues has focussed on the addition of delays into the framework of neural fields. These authors argue that conduction delays are likely to critically affect the synchronising effects of brain networks and should therefore be incorporated [127, 133]. For a more detailed overview of the development and application of both neural mass and neural field models, we refer to a review by Coombes [26].


Table 7.2
Summary of classical neurocomputational models

Type         | Model class       | Model                  | References
Neuronal     | Conductance-based | Connor-Stevens         | [24]
             |                   | FitzHugh-Nagumo        | [44]
             |                   | Hodgkin-Huxley         | [58]
             |                   | Morris-Lecar           | [92]
             | Phase-models      | Izhikevich             | [45]
             |                   | Kuramoto               | [71]
             | Phenomenological  | Firing-rate            | [39]
             |                   | Integrate-and-fire     | [1]
Neural field | Phenomenological  | Amari                  | [2]
             |                   | Wilson-Cowan equations | [143, 144]
Neural mass  | Physiological     | Jansen-Rit             | [61]


The Application of Multilevel Models in Epilepsy


Given that epilepsy is a pathological condition whose hypothesised causes have been characterised both experimentally and clinically across a wide range of spatial and temporal scales, computational modelling approaches to epilepsy have grown rapidly (see for example [137] for a review of articles predating 2005). Computational models provide an additional tool with which to interrogate experimental and clinical data, and when appropriately utilised, enable an iterative cycle whereby models can be used to identify candidate underlying mechanisms from data recordings, which may then in turn be tested, and thus validated, through new experiments. It is hoped that this understanding may ultimately lead to new techniques for seizure prediction, treatment and control [91, 114, 115, 138]. Further, the textbook "Computational Neuroscience in Epilepsy", edited by Soltesz and Staley [117], provides a structured overview of some aligned approaches to the computational study of seizures and epilepsy and demonstrates the critical advances that a multidisciplinary approach to the problem can enable.

There are further excellent reviews that focus on the overall process of multilevel computer modelling in epilepsy; discussing the basic mathematical concepts (e.g. attractors, nonlinearity, stability), underlying multiscale models (e.g. deterministic versus stochastic, microscopic versus lumped), as well as their application to various types of epilepsy, including both focal and generalised epilepsies [72, 78, 88, 120]. In a further recent review, Badawy and colleagues [4] present a summary of experimental and modelling evidence for dynamic changes of excitability within epileptic brain networks both during (ictal) as well as away from (interictal) seizures. They discuss how the underlying physiology influences the balance between inhibition and excitation in the interictal state before the onset and evolution of a seizure, highlighting several candidate mechanisms, including changes in blood sugar levels and hormones, that could change the level of cortical excitability of these epileptic brain networks over time (see also [105]).

From these publications, we have highlighted a selection of what we consider to be the key publications (see Table 7.3) which have strongly influenced the current modelling approaches (see Table 7.4) that are the focus of the remainder of this chapter.


Table 7.3
Classical papers in computational modelling of epilepsy

Seizure type  | Model type       | Main findings [references]
General       | Field            | Enflurane and isoflurane induce epileptiform activity [76]; focal/absence seizures caused by physiological parameter-changes [108]
Focal         | Neuron           | Weak excitatory synapses generate seizure-like activity [37]; gap junctions could underlie fast population oscillations [126]
Absence (SWD) | Neuron           | Thalamocortical network mechanisms generate SWD [14]; GABA-receptors in thalamocortical circuits generate SWD [32, 33]; cortical LTS cells important in genesis of SWD [36]; SWD caused by dynamical bifurcations in bistable framework [121]
              | Phenomenological | Noise governs transitions to seizure-state in bistable framework [122]
Tonic-clonic  | Field            | Bifurcation-analysis reveals difference between absence and tonic-clonic seizures [16]
Astrocytes    | Neuron           | Intracellular oscillation patterns in epileptic astrocytes [5]


Table 7.4
Recent papers in computational modelling of epilepsy

Seizure type | Model type | Main findings [references]
General      | Neuron     | Inhibitory synapses crucial in generating seizures [56]; failure of adaptive self-organised criticality causes seizures [87]; control of seizures through depolarising periodic stimulation [100]; computational improvements in multi-scale epilepsy modelling [102]; slow depression mechanism enforces seizure termination [131]; addition of gap junctions suppresses seizure-like activity [135]


Dec 17, 2016 | Posted in PSYCHIATRY
