21 Emergence of Deep Learning Methods for Deep Brain Stimulation–Evoked Functional Connectomics



10.1055/b-0040-174339

Christine Edwards, Abbas Z. Kouzani, Kendall H. Lee, and Erika Ross


Abstract


Deep brain stimulation (DBS) devices are becoming an increasingly common treatment for refractory Parkinson disease and other movement disorders. With advances in neurotechnologies and neuroimaging, along with an increased understanding of neurocircuitry, there is a rapid rise in the use of DBS therapy as an effective treatment for an increasingly wide range of neurologic and psychiatric disorders. DBS technologies are evolving toward an implantable closed-loop therapeutic neurocontrol system to provide continuous customized neuromodulation for optimal clinical results. Even so, there is much to be learned regarding the pathologies of these neurodegenerative and psychiatric disorders and the latent mechanisms of DBS that provide therapeutic relief. This chapter converges two breakthrough research areas—powerhouse deep learning methods and DBS-evoked functional connectomics—that are expected to advance DBS therapies toward precise neuromodulation for optimal therapeutic relief. This chapter describes the resurgence of artificial intelligence and provides an introduction to its subfield of deep learning, followed by an overview of in vivo neuroimaging modalities and brain mappings. A deeper dive into functional neuroimaging processing and an overview of classical multivariate pattern analysis methods is provided to set the stage for a review of functional neuroimaging studies that leverage deep learning methods. Such methods applied to DBS-evoked functional neuroimaging data are expected to enable the characterization and prediction of patterns of activation, in relationship to electrode placement, stimulation parameters, and behavioral assessment data.




21.1 Introduction


Modern-day deep brain stimulation (DBS) devices are considered pacemakers for the brain, as the devices were developed based on predicate cardiac pacemakers. As with cardiac pacemakers, DBS devices deliver electrical stimulation to a targeted area via an implanted electrode lead. The electrode lead in the DBS device is subcutaneously connected to a pulse generator controller that is implanted in the chest beneath the clavicle. Just as cardiac pacemakers restore normal cardiac rhythm, brain pacemakers seek to modulate disordered circuitry to restore functionality. Although they share similarities with cardiac pacemakers, the mechanisms underlying DBS are more complex and far less understood. In general, an open-loop DBS system stimulates a targeted subcortical region with a high-frequency pulse (100–250 Hz) to disrupt the disordered neurocircuitry and modulate underlying electrical and chemical changes. 1, 2 The target location is dependent on the patient’s diagnosis, history, and corresponding symptoms. Modern-day DBS was introduced in 1987 and gained traction in the 1990s with Food and Drug Administration (FDA) approvals to treat refractory neurological movement disorders, including Parkinson disease (PD), essential tremor (ET), and dystonia. 3, 4, 5 Building upon advances in technologies and lessons learned over decades of neurosurgery to treat neurological and psychiatric disorders, over 100,000 people worldwide have been implanted with open-loop DBS devices (see Fig. 21‑1). 6

Fig. 21.1 Illustration of an implanted open-loop deep brain stimulation system. Reproduced with permission from Edwards et al. 12

Today, there is a rapid rise in the use of DBS to treat an increasingly wide range of neurologic and psychiatric disorders. 7 Following a 30-year gap of little to no innovation in DBS technologies, there is now a drive to create an implantable closed-loop therapeutic neurocontrol system to provide continuous customized neuromodulation for optimal clinical results. 8 ,​ 9 ,​ 10 ,​ 11 Even so, there is much to be learned regarding the immediate and long-term mechanisms of DBS therapy and the neurological and neuropsychiatric disorders that it aims to treat. Linking multimodal in vivo neuroimaging with DBS is a powerful combination that reveals new insights into structural, functional, and effective connectivity of brain circuitry under all of these conditions.


In this era of “big data,” data science has emerged as a highly valued interdisciplinary field that brings together advances in computational methods and technologies to analyze and discover latent patterns in large-scale heterogeneous datasets. This combination of mathematics, statistics, computer science, and domain expertise is creating opportunities to utilize data-driven techniques to unveil insights that lead to new or refined hypotheses and enable more informed decision-making processes. DBS investigative studies and clinical uses are creating a multimodal data-rich environment that is ripe for discovery of the biological underpinnings of functional and dysfunctional brain circuitry. Powerful breakthrough data science methods, such as deep learning applied to DBS data, are expected to lead to advanced pattern analysis analytics. This multidiscipline approach has potential to transform our understanding of brain circuitry and ultimately usher in breakthroughs in bioelectronics medical technologies to optimize treatments.


This chapter provides a historical perspective and an introduction to deep learning methods, followed by an overview of the growing discipline of connectomics. Furthermore, this chapter includes an overview of in vivo neuroimaging modalities that are utilized to visualize and assess the macroscale connectivity of brain regions, where each node of the map represents hundreds of thousands of neurons. A deeper dive into functional neuroimaging processing is provided to set the stage for a review of multivariate pattern analysis (MVPA) methods for global assessment of DBS-evoked functional connectome data.



21.2 Rise of Deep Learning Methods


Deep learning methods are a subset of machine learning approaches that apply a hierarchy of nonlinear transformations to learn invariant, discriminant feature representations of data for pattern analysis and classification tasks. Such methods are not new, but they have experienced a significant revival and now dominate application areas such as computer vision, audio processing, and natural language processing.


Deep learning origins date back to at least the 1940s, when artificial neural networks (ANNs) were first introduced as a connectionist model. 13 Connectionism is the idea that artificial intelligence (AI) can emerge from simple computational units interconnected into a hierarchy. Computational units and their weighted connections are analogous to neurons and the strengths of their synaptic connections, respectively. Meanwhile, Hebbian learning theorized that synchronized firing of neurons strengthens their synaptic connections, whereas neurons firing out of sync experience weakened or nonexistent synaptic connections. Hebbian theory describes a mechanism for brain plasticity, in which neurons dynamically adapt their connections during the learning process. 14
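In its simplest textbook form (a standard formulation rather than an equation from this chapter), the Hebbian update strengthens the connection between two units in proportion to their correlated activity:

```latex
\Delta w_{ij} = \eta \, x_i \, x_j
```

where $w_{ij}$ is the weight of the connection between units $i$ and $j$, $x_i$ and $x_j$ are their activities, and $\eta$ is a learning rate.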


In the 1950s and 1960s, simple ANNs such as perceptrons were created to learn mappings between input data and output values for pattern recognition tasks such as binary classification of images. 15 A perceptron is considered a single-layer neural network, as it has one layer of weighted connections between input nodes and an output node. The weights of the perceptron represent the learned linear decision boundary that optimally separates two classes of data for binary classification. If the data are not linearly separable, then the perceptron will not converge on a decision boundary to appropriately classify the data. During that time, it was shown that perceptrons were incapable of modeling a simple XOR Boolean function, leading to much debate regarding the usefulness of connectionist models. 15, 16 At the same time, Hubel and Wiesel conducted a series of significant physiological experiments in which they discovered simple and complex cells within the primary visual cortex of cats and monkeys via microelectrode recordings. 17, 18, 19 Their discoveries of the hierarchical organization of the brain underlying visual perception earned them the Nobel Prize in Physiology or Medicine in 1981 and inspired decades of vision models and machine learning approaches to teach computers how to recognize visual patterns. 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30
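To make the perceptron description concrete, the following minimal NumPy sketch (an illustrative implementation with assumed function names and parameters, not code from this chapter) applies the classic perceptron learning rule; it converges on a linearly separable problem (logical AND) but can never converge on XOR:

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    # Append a constant bias input and start from zero weights.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            # Update the weights only when the sample is misclassified.
            w += lr * (target - pred) * xi
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable: a decision boundary exists
y_xor = np.array([0, 1, 1, 0])   # not linearly separable: no single boundary exists

print(predict(X, train_perceptron(X, y_and)))  # matches y_and
print(predict(X, train_perceptron(X, y_xor)))  # cannot match y_xor for any weights
```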


In 1986, interests in connectionist models were renewed with the introduction of the backpropagation algorithm which made it possible to train ANNs, such as feed-forward multilayer perceptrons, recurrent neural networks (RNNs), and convolutional neural networks (ConvNets). 21 ,​ 31 ,​ 32 ,​ 33 ,​ 34 These networks include hidden layers between the input and output layers to model more complex functions (see Fig. 21‑2). During the training process, the input data are first propagated forward through the nodes of the network. The input data may be in the form of raw data such as pixels or voxels, or in the form of feature vectors representing the original data. The computed value of each node in the hidden and output layers is a weighted sum of its inputs passed through a nonlinear activation function (e.g., rectified linear unit). At each output node, an error signal is calculated to measure the difference between the actual output and the expected output. This error signal is then propagated back through the network to compute the delta error at each node. Optimization methods such as stochastic gradient descent are used to determine the optimal weights to minimize the error signal across the network.

Fig. 21.2 Multilayer perceptrons (MLPs) are the most common type of shallow feed-forward neural networks. MLPs include at least one hidden layer between the input and output layers. At each layer, the nodes compute a weighted sum of their inputs that is then passed through a nonlinear activation function, such as a hyperbolic tangent or a sigmoid function. Most recently, rectified linear unit (ReLU) functions are preferred due to the computational savings required by deeper networks with many hidden layers. During the training process, optimal weights of the connected nodes are learned using the backpropagation algorithm.
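As a concrete illustration of this training loop, the following NumPy sketch (a minimal example assuming a squared-error loss, a tanh hidden layer, a sigmoid output, and full-batch gradient descent; none of these choices are prescribed by this chapter) trains a one-hidden-layer network with backpropagation on the XOR problem that defeats a single perceptron:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: weighted sums passed through nonlinear activation functions.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error signal at the output nodes (squared error, sigmoid derivative).
    delta_out = (out - y) * out * (1 - out)
    # Propagate the error back through the hidden layer (tanh derivative).
    delta_h = (delta_out @ W2.T) * (1 - h ** 2)

    # Gradient-descent updates of the weights and biases.
    W2 -= lr * h.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h;   b1 -= lr * delta_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```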

According to the universal approximation theorem, an ANN can approximate any sufficiently smooth function. 35 Inspired by the organization of the brain into cortical layers, adding depth (i.e., more hidden layers), rather than simply increasing the width (i.e., more nodes per layer), allows for more complex transformations of input data into patterns for higher-level pattern recognition tasks. Despite the power of the backpropagation algorithm, training neural networks beyond a couple of hidden layers remained difficult, requiring substantial computational power and training data to learn the many parameters that define the network architecture. Converging on an optimal solution of tuned parameters, without overfitting to the training data, was especially challenging. As a result, many researchers steered away from ANNs for decades, in favor of simpler shallow architectures such as support vector machines (SVMs), which converge to an optimal solution with less training data and lower computational requirements. 36
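One standard statement of the theorem (a common textbook form with a sigmoidal activation σ, not the specific formulation of the cited reference) is that a single hidden layer with enough units can approximate any continuous function f on a compact set K to arbitrary precision:

```latex
\forall \varepsilon > 0 \;\; \exists N, \{v_i, w_i, b_i\}_{i=1}^{N} : \quad
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
```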


Despite the so-called AI winter in the 1990s to mid-2000s, a subset of researchers remained committed to pursuing biologically inspired connectionist models for automated pattern analysis tasks and, ultimately, AI. Pioneers of deep neural networks include Yoshua Bengio (from the University of Montreal), Geoffrey Hinton (from the University of Toronto and Google, Inc.), Yann LeCun (from New York University and Facebook, Inc.), Tomaso Poggio (from the McGovern Institute for Brain Research at MIT), and Terrence Sejnowski (from the Salk Institute for Biological Studies). The Neural Information Processing Systems (NIPS) Conference, established in 1987, with Terrence Sejnowski as its president since 1993, remains the primary conference where leaders in connectionism research convene annually. Incremental improvements toward deep neural networks occurred over a decade, while understanding of the hierarchical organization of the primate cerebral cortex significantly increased. For instance, Long Short-Term Memory (LSTM) models were introduced in 1997 to overcome the vanishing gradient problem (i.e., decaying backpropagation error) encountered by previous RNN architectures. 37 Early applications of LSTM models were primarily in the natural language processing domain. 38, 39, 40 Meanwhile, Van Essen’s wiring diagram of the hierarchical, distributed organization of cortical areas for perception, which included feed-forward and feedback connections, continued to motivate advances in deep neural network models. 23 This wiring diagram includes at least 30 cortical areas for visual perception. An oversimplification of the visual system separates processing into the “where or how” (dorsal stream) and “what” (ventral stream) pathways of the visual cortex, while ignoring feedback mechanisms. The feed-forward ventral visual stream progresses from the retina to the lateral geniculate nucleus of the thalamus, which relays this information to the primary visual cortex (V1), followed by the visual areas V2 and V4, the inferotemporal (IT) cortex, and the prefrontal cortex. This primate visual processing model, coupled with the early findings of Hubel and Wiesel, inspired computer vision models such as the “Hierarchical Model and X” (HMAX) for feed-forward object recognition. 25, 27 Likewise, ConvNets were largely inspired by biological vision, as their convolutional and max-pooling layers extract increasingly invariant features that resemble the simple and complex cells of the primary visual cortex. Lower levels of the hierarchy are tuned to respond to low-level features (e.g., edges). Ascending the hierarchy, the network nodes combine patterns from lower levels to respond to increasingly complex patterns, and at the highest level perform tasks such as face and object recognition. Biologically inspired neural networks were increasingly applied to a variety of specific visual recognition tasks; for example, LeNet-5 was a ConvNet fine-tuned to recognize handwritten digits within document images. 22 Although these networks matched or outperformed other pattern analysis techniques, acquiring sufficiently large annotated datasets and the computational power to tune their parameters for visual recognition tasks remained a challenge until the early 2010s.
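The convolution and max-pooling hierarchy described above can be sketched in a few lines. The following LeNet-style example (layer sizes are illustrative and the use of PyTorch is an assumption, not a specification from this chapter) maps a 28 × 28 grayscale digit image to 10 class scores:

```python
import torch
import torch.nn as nn

# A minimal LeNet-style convolutional network for 28x28 grayscale digit images.
model = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2),  # low-level feature maps (edge-like filters)
    nn.ReLU(),
    nn.MaxPool2d(2),                            # pooling builds local translation invariance
    nn.Conv2d(6, 16, kernel_size=5),            # combines simple features into more complex ones
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),
    nn.ReLU(),
    nn.Linear(120, 10),                         # scores for the 10 digit categories
)

scores = model(torch.randn(1, 1, 28, 28))       # one synthetic image in, 10 class scores out
print(scores.shape)                             # torch.Size([1, 10])
```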


In 2006, the connectionism research community experienced a turning point, and the AI winter began to thaw, with the introduction of a method that enabled faster training of deep neural networks. 41 Rather than starting with random weights, this approach uses generative models, called restricted Boltzmann machines (RBMs), at each layer to initialize the weights of the network; this unsupervised pretraining of parameters allows for faster convergence of the network to an optimal solution. Hinton et al demonstrated this method by introducing deep belief networks (DBNs), with an architecture composed of stacked RBMs. 41 Shortly thereafter, Bengio et al extended this initialization method to train a deep network of stacked autoencoders 42 (see Fig. 21‑3). Unsupervised training of these generative models at each layer allows deep neural networks to learn sparse, distributed, high-level representations of data that can be used for dimensionality reduction and for more generalized models for pattern analysis tasks.

Fig. 21.3 This is an illustration of an autoencoder, where the hidden layer is a compressed version of the input data, and the output layer is a reconstruction of the input data from the compressed version. During training of the network, the error between the input and reconstructed output is minimized. Autoencoders, along with restricted Boltzmann machines, may be used to initialize deep neural networks or stacked as a building block for various deep neural network architectures.
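A minimal autoencoder of the kind illustrated in Fig. 21‑3 might be sketched as follows (the dimensions, optimizer, and use of PyTorch are illustrative assumptions rather than details from this chapter):

```python
import torch
import torch.nn as nn

# Bottleneck autoencoder: the encoder compresses the input (e.g., a flattened 28x28
# image) into a 32-unit code, and the decoder reconstructs the input from that code.
encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)                 # stand-in batch of flattened images
for step in range(100):
    code = encoder(x)                   # compressed hidden-layer representation
    reconstruction = decoder(code)      # attempt to reproduce the input from the code
    loss = loss_fn(reconstruction, x)   # reconstruction error to be minimized
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# For stacked (greedy layer-wise) pretraining, the learned codes can in turn serve as
# inputs for training the next autoencoder in the stack.
```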

From the late 2000s, deep neural networks started to advance, and applications gained momentum as large-scale labeled datasets and large-scale commodity computing platforms, such as cloud and graphics processing units, made it possible to train deep, generalized models for tasks such as pattern analysis for image understanding. Deep feed-forward networks and RNNs began to significantly surpass prior performance on benchmark datasets and to take first place in many pattern recognition and machine learning competitions. Beginning in 2009, Microsoft Research applied deep neural networks to automatically learn high-level abstract features that capture salient characteristics of the data for natural language processing applications. They demonstrated that deep features learned from acoustic spectrograms were superior to long-standing audio features, such as Mel Frequency Cepstral Coefficients, for applications such as speech recognition. 43 Deep learning methods (e.g., LSTMs) have dominated multilingual handwriting recognition competitions. In 2011, deep neural networks won a traffic sign recognition competition. While interest in deep learning steadily increased through the late 2000s and early 2010s, the larger machine learning community did not fully embrace this movement until 2012.


Deep learning research catapulted into the limelight in 2012, with renewed interest (and fear) regarding the potential capabilities of AI. During this time, the Google Brain project released a paper describing an unsupervised deep autoencoder architecture that learned high-level representations of object categories from an unlabeled dataset of 10 million 200 × 200 image frames extracted from YouTube videos. This architecture demonstrated the power of learning hierarchical feature representations for applications such as face detection, and the authors compared this hierarchy to biological neural networks, which are believed to contain a hierarchy that gives rise to specific face-selective neurons. Meanwhile, a ConvNet-based method won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012, which was a catalyst for the deep learning efforts that now dominate computer vision and other pattern recognition domains (see Fig. 21‑4). 44 ILSVRC was the benchmark competition (2010–2015) for image classification and object recognition tasks. Its annotated image dataset includes 1.2 million images with 1,000 category labels that semantically map to the WordNet lexical hierarchy. To this day, ImageNet is the largest, most diverse labeled image dataset that is publicly available; thus, it has been a game changer for the computer vision community and has fueled the exponential rise of deep computer vision architectures.

Fig. 21.4 An illustration of a convolutional neural network. 44 A hierarchy of convolutional and subsampling layers transforms the input image into an abstract conceptual feature vector that represents the content of the image. This is followed by a fully connected neural network, which maps the feature vector into a lower dimensional semantic space, where each component of the resulting vector is a probability score corresponding to an ImageNet object category.

Today, deep learning techniques and applications continue to evolve at a rapid pace. The deep learning community is investigating methods that reduce the amount of training data required to learn feature representations, as well as regularization methods to create generalizable models. Transfer learning methods enable the retraining of deep ANNs by reusing the lower-level feature representations and fine-tuning the higher-level concept representations. As such, transfer learning is an option for adapting deep ANNs to other domains where there may be an insufficient amount of labeled training data. Also, variations of generative adversarial networks (GANs), introduced by Ian Goodfellow in 2014, are changing the landscape of unsupervised learning methods and extending their use to novel applications, such as photo-realistic single-image superresolution. In addition, deep reinforcement learning methods are advancing toward systems that are capable of processing high-dimensional sensory inputs from their environment and learning the appropriate actions. 45
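The transfer learning recipe described above, reusing pretrained lower-level features and retraining only a new task-specific output layer, might look like the following sketch (the choice of an ImageNet-pretrained ResNet-18 from torchvision and a two-class target task are illustrative assumptions, not methods from this chapter):

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse an ImageNet-pretrained network as a fixed feature extractor.
backbone = models.resnet18(pretrained=True)   # newer torchvision versions use the weights= argument

for param in backbone.parameters():
    param.requires_grad = False               # freeze the reused lower-level feature representations

num_classes = 2                               # e.g., a small two-class target dataset
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new trainable output head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
# Training then proceeds as usual, but gradients update only the new output layer;
# the top convolutional blocks can optionally be unfrozen later for finer tuning.
```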



21.3 In Vivo Neuroimaging Modalities and Maps


Decades of advances in neuroimaging technologies have resulted in powerful investigative and clinical tools that are providing remarkable noninvasive, in vivo, multimodal views of the brain. Such tools are the basis for the construction of macroscale connectomes that capture the mapping of brain region-to-region wiring diagrams, to reveal the structural, functional, and effective connectivity. 46 Structure and function are interwoven. 47 Structural connectomes characterize and map anatomically connected brain regions, whereas functional connectomes map functionally correlated local and distal brain regions. Effective connectomes provide a directional mapping to characterize causality of functionally related brain regions. 48 Dynamic brain connectivity is encountered with disease progression and treatment, and this neural plasticity may be characterized by analysis of multimodal connectome data across time scales. 49



21.4 Structural Neuroimaging


Neuroimaging technologies that capture anatomical structures include computed tomography (CT) and magnetic resonance imaging (MRI). Since the first human CT scan in the 1970s, this technology has been commonly used in clinical settings for anatomical assessment, creating cross-sectional images and three-dimensional (3D) reconstructions from X-ray attenuation signals acquired as the beams pass through the targeted anatomy. In the 1980s, MRI was introduced as a clinical alternative that leverages the magnetic resonance properties of tissue to differentiate brain tissues, without exposing the patient to ionizing radiation. Rather, a patient positioned within the bore of an MRI scanner is exposed to a strong magnetic field (e.g., 1.5 T), with gradient coils that spatially vary the field strength across the brain. Radiofrequency (RF) transceiver coils transmit pulsed signals tuned to the resonant spin frequency of targeted atomic nuclei, such as hydrogen protons, which are prevalent in living organisms in the form of water molecules. The targeted atomic nuclei absorb energy from the RF signals, exciting them into a higher energy state that is out of alignment with the magnetic field of the scanner. As the protons relax back into their lower energy state, RF energy is released and captured by the RF antenna coil. These MR signals are acquired and transformed into 2D cross-sectional slices and a 3D reconstructed brain volume. Compared to CT images, MRI scans provide a more detailed view of soft tissue. As such, MRI is often used to map a patient’s individual brain anatomy prior to DBS implantation, thus enabling more precise identification of the DBS anatomical target(s) and the trajectory path for the DBS electrode(s). Although MRI does not expose the patient to ionizing radiation, safety guidelines must be closely followed to prevent injuries caused by the interaction of the scanner’s strong magnetic field with metallic components of neurostimulation systems. Consequently, only a few institutions incorporate MRI technologies once the DBS device is implanted. Postoperative CT scans are often used to assess and confirm the placement of the DBS electrode(s). Multimodal approaches may fuse CT and MRI scans to provide a richer anatomical view. In addition to safety measures, intraoperative and postoperative MR imaging must also include imaging protocols to mitigate potential image artifacts surrounding the DBS lead(s).


In general, MRI protocols configure pulse sequence parameters, such as the “time to repetition” (TR) and “time to echo” (TE), in concert with intrinsic tissue properties (e.g., T1 and T2) to adjust the contrast of the resulting images based on the targeted applications. TR is the time interval between successive RF pulses, and TE is the time interval between the transmission of an RF pulse and the measurement of the (echo) MR signal. After an RF pulse is applied, T1 is the time required for the atomic nuclei to return to equilibrium, with spins aligned to the scanner’s magnetic field. This realignment to recover longitudinal magnetization is an exponential process with time constant T1. Meanwhile, T2 is the intrinsic property that describes the time it takes for the atomic nuclei to dephase from their excited state, leading to an exponential decay of the transverse magnetization that typically proceeds faster than the T1 recovery. The effect of neighboring proton spin interactions is characterized by this T2 property. T2* is an additional property that encompasses both the intrinsic T2 property and the effect of distortions in the external magnetic field. Spin-echo pulse sequences use an additional 180-degree RF refocusing pulse to reduce the effects of inhomogeneity of the external magnetic field (i.e., reducing T2* sensitivity), such as at air–tissue interfaces. Fast spin-echo imaging is a variation of spin-echo pulse sequences that allows for faster, more practical acquisition times, primarily to acquire T2-weighted images. Gradient-echo (GRE) imaging uses gradients, rather than an additional refocusing pulse, to generate the echo signal. Variations of GRE pulse sequences are often used to generate high-resolution anatomical T1- and T2-weighted brain images, as well as functional T2*-weighted images, which will be discussed further. In general, in T1-weighted images, white matter appears brighter than gray matter, and fluids, such as cerebrospinal fluid, appear dark, whereas in T2-weighted images, white matter appears darker than gray matter, and fluid-filled regions are highlighted. In general, MRI scanners with stronger magnetic fields (e.g., ultrahigh 7.0 T) produce images with a greater dynamic range, capturing finer details that visually discriminate neighboring anatomical brain regions. 50, 51 Furthermore, techniques such as echo-planar imaging have made it possible to acquire images more rapidly, which has in turn paved the way for other MRI modalities that are more time dependent. 52
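The exponential behaviors described above are conventionally written as follows (standard textbook expressions, not equations reproduced from this chapter):

```latex
M_z(t) = M_0 \left( 1 - e^{-t/T_1} \right), \qquad
M_{xy}(t) = M_{xy}(0)\, e^{-t/T_2}
```

where $M_z$ is the recovering longitudinal magnetization, $M_{xy}$ is the decaying transverse magnetization, and $M_0$ is the equilibrium magnetization; because $T_2 \le T_1$ in tissue, transverse decay outpaces longitudinal recovery.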


MRI technologies continue to evolve and expand to include techniques such as diffusion tensor imaging (DTI) which is sensitive to the diffusion properties of water through specific types of tissues. 53 In particular, DTI is used to visualize and analyze the white matter tracts that connect brain regions. 54 ,​ 55 Macroscale structural connectomes typically leverage conventional MRI and DTI methods, to define the nodes (e.g., parcellated gray matter regions) and weighted edges (e.g., white matter tracts) of the brain graph, respectively. 56 As previously stated, structure and function are interwoven. This applies to the organization of the brain, and to the evolving neuroimaging technologies used to provide insight into the structure and function of the brain. Structural neuroimaging provides the anatomical context for functional brain data.



21.5 Functional Neuroimaging


Functional neuroimaging includes noninvasive in vivo technologies such as single photon emission computed tomography (SPECT), positron emission tomography (PET), and functional MRI (fMRI). Both SPECT and PET scanners detect energy released from intravenously injected radiopharmaceuticals as they accumulate and decay within the brain, forming 2D and 3D images that capture cerebral blood flow (CBF) and molecular-level metabolic changes, thereby indirectly measuring neural activity. Compared to PET, SPECT is more widely available for clinical use, as it is less expensive and its radiotracers are more accessible with a longer half-life; however, PET scans are less prone to image artifacts and have better spatial resolution. A common radiotracer used for PET-based neuroimaging is fludeoxyglucose (FDG), which is processed by the brain as glucose. As such, activated brain regions experience increased blood flow and accumulation of FDG to replenish the metabolic energy consumed by neural activity. Both SPECT and PET technologies continue to evolve as powerful neuroimaging modalities that provide insight into global patterns of targeted neurotransmitter (e.g., dopamine) release corresponding to activated brain circuitry. However, as with anatomical CT scans, these neuroimaging modalities expose the patient to ionizing radiation; thus, clinical and investigative DBS studies that use them to optimize DBS lead location and stimulation parameter settings are limited. Meanwhile, MRI technologies were expanded to include functional neuroimaging modalities, namely, blood oxygen level dependent (BOLD) fMRI, introduced in 1990. 57 Compared to SPECT and PET, BOLD fMRI produces 2D and 3D images with higher spatial resolution, without exposing the patient to radioactive material. Rather than using an exogenous contrast agent, BOLD fMRI leverages the paramagnetic properties of deoxyhemoglobin as an endogenous contrast agent to indirectly capture neural activity by mapping blood oxygenation. As with conventional MRI techniques, DBS studies that leverage fMRI must follow careful safety precautions and imaging protocols to mitigate risks associated with the interactions between the metallic components of the DBS system and the scanner’s strong magnetic field. 58, 59, 60, 61, 62


Additional macroscale in vivo functional neuroimaging technologies include electroencephalography (EEG) and magnetoencephalography (MEG), which use noninvasive sensors on or near the scalp to measure cortical electrical and magnetic changes induced by neuronal activity, respectively. EEG and MEG are advantageous as they have higher temporal resolution (i.e., milliseconds) compared to fMRI, which indirectly measures neural activity and is limited by the hemodynamic response timing that is on the order of seconds. However, fMRI and PET modalities have higher spatial resolution, on the order of millimeters rather than centimeters. Although more invasive, electrocorticography (ECoG) technologies enable direct electrophysiological monitoring by recording global field potentials on the cortical surface, rather than measuring attenuated signals outside the skull. As such, ECoG techniques have higher spatial resolution than EEG, while retaining high temporal resolution. Recently, functional neuroimaging studies combined intraoperative ECoG sensorimotor cortex recordings with subthalamic nucleus (STN) local field potential (LFP) recordings acquired during the implantation of the DBS leads; in doing so, these studies enabled the discovery of a potential biomarker for PD dysfunctional motor circuitry and a potential feedback mechanism for future closed-loop DBS systems. 63, 64, 65, 66


Functional connectomics includes the study of intrinsic resting state, as well as stimulus-evoked functional brain networks. Such studies are providing insights into the pathophysiological mechanisms of neurodegenerative and psychiatric disorders and possible clinical biomarkers to aid in diagnosis and treatment. 67 ,​ 68 Given that fMRI provides a noninvasive in vivo view of the brain in action, with high spatial resolution, it is especially powerful when combined with DBS technologies. 48 ,​ 69 ,​ 70 ,​ 71 ,​ 72 DBS-evoked functional connectomics based primarily on multivariate analysis of BOLD fMRI data is enabling characterization of spatially distributed patterns of brain activity to optimize DBS therapy.
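As a concrete baseline (one common approach assumed here for illustration, not a method prescribed by this chapter), a functional connectome can be estimated as the matrix of pairwise Pearson correlations between regional BOLD time series:

```python
import numpy as np

# Stand-in for preprocessed, atlas-parcellated fMRI data: one BOLD time series per region
# (e.g., 90 regions, 240 volumes from an ~8-minute scan at TR = 2 s).
n_regions, n_timepoints = 90, 240
bold = np.random.randn(n_regions, n_timepoints)

connectivity = np.corrcoef(bold)             # symmetric region-by-region correlation matrix

# Threshold weak correlations to obtain a sparse graph whose nodes are brain regions
# and whose weighted edges are strong functional couplings.
adjacency = np.where(np.abs(connectivity) > 0.3, connectivity, 0.0)
np.fill_diagonal(adjacency, 0.0)
print(connectivity.shape, int((adjacency != 0).sum() // 2), "edges retained")
```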
