Multimodality Monitoring and Artificial Intelligence




Introduction


Multimodality monitoring can be defined as the simultaneous collection of data from multiple diverse sources pertaining to a single patient, coupled with the ability to view the data in an integrated and time-synchronized manner. Alternatively, multimodality monitoring may be considered the use of more than one complementary method to monitor a single organ when no single method can provide complete information. It is the first step in creating a bedside environment where advanced knowledge tools can be applied to the data to aid the clinician in patient management. However, few, if any, neurocritical care units (NCCUs) have such an environment because of a variety of barriers (see Chapter 4). Assuming these problems will be solved, this chapter reviews the creation of a “knowledge infrastructure” in neurocritical care. The rationale and benefits are presented along with a historical perspective, current progress, and future prospects.




Rationale


An accepted goal of neuromonitoring in intensive care is to enable the detection of harmful physiologic events before they cause irreversible damage to the brain. Ideally, monitoring should be used to predict “events” in the intensive care unit (ICU). However, the brain is a complex system with interrelated hemodynamic, metabolic, and electrical subsystems. From a purely logical perspective, the most effective monitoring paradigm would be one that monitors multiple sites in this multifaceted system. To date, however, no formal studies have paired multimodal monitoring with a treatment protocol to demonstrate better outcome, although one such trial, the Brain Oxygen and Outcome Study in Traumatic Brain Injury (BOOST), a phase 2 trial examining the physiologic efficacy of multimodal monitoring, is likely to complete enrollment in 2013. For the moment, a common-sense rationale must suffice.


The need for a knowledge-rich infrastructure made possible by multimodal data extends beyond the NCCU to all of high-acuity critical care. First, in intensive care, clinical information systems acquire and store physiologic variables and device parameters online at least every minute. Physiologic signals may contain useful information at frequencies up to ~0.2 kHz; by the Nyquist criterion, this content is lost unless the signal is sampled at 0.5 to 1 kHz (i.e., every 1 to 2 ms). In addition, high-resolution physiologic data can be lost if there is no means to store or archive it. In most NCCUs today, physicians and nurses can view continuous physiologic data only by looking at the bedside monitor. This information, once it has scrolled across the screen, is lost, and if the staff is not at the bedside, only intermittently recorded data are available; that is, data collection and storage is essential to NCCU informatics. Second, in the severely brain-injured patient, use of the clinical examination to detect changes in patient state is severely limited. Thus physicians and nurses rely in large part on changes in physiologic metrics and on results from diagnostic tests to make patient management decisions. During morning rounds on critically ill patients, a physician may be confronted with more than 200 variables. The high data volume and the time-critical situations physicians face create a problem, accentuated because human beings are not able to judge the degree of relatedness between more than two variables. Furthermore, constant information overload is one contributing cause of preventable medical errors. A knowledge infrastructure can provide a closer coupling between the clinician and the data, with the promise of decreasing errors and improving outcomes. Finally, as the aging population fills the health care system, a significant shortage of caregivers is predicted, particularly in the ICU. Because of higher efficiencies, workflow software driven by multimodal patient data has the potential to minimize the number of staff needed to accomplish the same amount of work.
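The sampling figures quoted above follow from the Nyquist criterion, which a short sketch can make concrete. The function names below are illustrative, not from any monitoring system; in practice a margin above the theoretical minimum is used, which is why 0.5 to 1 kHz is quoted for ~0.2 kHz content.

```python
# Illustrative sketch: the Nyquist criterion applied to the sampling
# figures discussed above. A signal with useful content up to f_max
# requires a sampling rate of at least 2 * f_max (plus a margin in practice).

def min_sampling_rate_hz(f_max_hz: float) -> float:
    """Nyquist minimum sampling rate for content up to f_max."""
    return 2.0 * f_max_hz

def max_sampling_interval_ms(f_max_hz: float) -> float:
    """Longest sampling interval (ms) that still satisfies the Nyquist criterion."""
    return 1000.0 / min_sampling_rate_hz(f_max_hz)

# Content up to ~0.2 kHz -> sample at >= 400 Hz, i.e. at most every 2.5 ms;
# monitors sampling at 0.5 to 1 kHz (every 1 to 2 ms) provide a safety margin.
print(min_sampling_rate_hz(200.0))      # 400.0
print(max_sampling_interval_ms(200.0))  # 2.5
```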


The current federal (and international) emphasis on medical economics provides further justification for multimodal neuromonitoring. The rising costs of health care in the United States have led the government to conclude that better information in a variety of areas could eventually alter the way medicine is practiced and lower health care spending without adversely affecting health. Other countries have come to similar conclusions, and such thinking has prompted a rise in comparative effectiveness research that evaluates the impact (primarily costs and benefits) of different treatment options. This emphasis on informatics underscores the need for a more comprehensive and coordinated approach to data collection, something that multimodal monitoring can provide.


There also are benefits to patient safety in creating a knowledge infrastructure. A few reports describe accidental patient deaths that could have been prevented if medical devices had been able to intercommunicate. In one example, a ventilator was momentarily turned off to eliminate blurring of x-ray images being obtained. When the film plate jammed under the table, the anesthesiologist went to help, forgot to restart the ventilator, and the patient died. Had there been a method for the x-ray machine and the ventilator to communicate, the death might have been prevented.


Neurocritical care has evolved as a distinct specialty during the past decade. Because patient management depends in large part on understanding patient physiology, techniques to monitor the brain also have evolved. Increasingly, NCCUs use more than one monitor of brain function, and an increasing number of publications and reviews allude to the potential benefits of multimodal monitoring in various aspects of neurocritical care, including traumatic brain injury (TBI), subarachnoid hemorrhage, management of cerebral hemodynamics, and determination of brain death, and in understanding patient physiology in general. Furthermore, when multimodality monitoring is used, it has become apparent that single monitors may miss episodes of cerebral compromise.




Requirements of Multimodal Neuromonitoring Systems


Recording multiple simultaneous measurements from a variety of diverse sources brings about problems not encountered in conventional monitoring. These problems can be broadly classified according to three requirements of monitoring: it must be continuous, comprehensive, and communicative.


Continuous Monitoring


Continuous monitoring is required in critical care so that clinically significant events are not missed. More specifically, the monitoring interval needs to be shorter than the duration of the events to be detected. Hemphill et al. compared medical records (with approximately hourly entries) to automatic continuous data recordings (approximately 1-second intervals) recorded simultaneously in 16 patients and found that the medical record data tended to underestimate the number of secondary brain insults detected. They concluded that higher frequency data (i.e., higher than is normally acquired in the medical record) may be necessary for more precise evaluation of secondary brain injury in neurocritical care. A notable example of the value of continuous monitoring is with electroencephalogram (EEG) recordings. In 1993, Jordan monitored EEG continuously rather than intermittently and showed that the number of patients with seizures and the frequency of seizures in critical care patients were much greater than conventionally thought. He also showed that continuous EEG monitoring had an impact on clinical decision making in 51% of patients (n = 124). Subsequently, Claassen and Mayer showed that continuous EEG influenced patient management on 50% of monitored days.


The problem inherent in continuous monitoring is the amount of data collected when monitoring from multiple sources. Storage of trend data and a few waveforms is insignificant with current storage capacities. However, EEG is generally recorded with a higher sampling rate and usually with 16 or more channels. Assuming a 256-Hz sampling rate, 16 channels, and 16-bit precision, storage of 1 day of EEG requires more than 700 megabytes of space. Adding simultaneous video of the patient can bring the storage rate from 250 megabytes to more than 1 gigabyte per hour, depending on the frame rate and type of video compression used. Consequently, most institutions do not store all the video data but rather just the segments of interest, a process that requires manual intervention. Similarly, manual intervention may be necessary to exclude potential artifacts or to handle periods when patients are disconnected from a monitor.
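The storage figure above can be verified with simple arithmetic. This is an illustrative sketch; the function name is invented, and the default parameters mirror the example in the text (256 Hz, 16 channels, 16-bit samples).

```python
# Illustrative calculation of daily EEG storage requirements, using the
# parameters assumed in the text: 256 Hz, 16 channels, 16-bit precision.

def eeg_bytes_per_day(sample_rate_hz=256, channels=16, bits_per_sample=16):
    """Uncompressed storage, in bytes, for one day of continuous EEG."""
    bytes_per_second = sample_rate_hz * channels * (bits_per_sample // 8)
    return bytes_per_second * 86_400  # seconds in a day

daily = eeg_bytes_per_day()
print(daily)              # 707788800 bytes
print(daily / 1_000_000)  # ~707.8 MB -> "more than 700 megabytes"
```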


Comprehensive Monitoring


Multimodal neuromonitoring must be comprehensive in that all of the necessary data for particular monitoring goals need to be collected simultaneously, time synchronized, and displayed in an integrated fashion. This is a challenge in the NCCU because most monitors are stand-alone, self-contained devices provided by commercial vendors. However, this challenge is likely to diminish because the American Society for Testing and Materials adopted the concept of an “integrated clinical environment” in 2009. Time synchronization of data received from multiple devices in a variety of formats presents a significant problem to developers of multimodal data collection systems. The data sampling rates may be widely different, and the data may be sent to the collection device continuously or in packets at varying time intervals. The accuracy required of time synchronization, however, depends on the use of the data. For slowly varying trends where only a visual correlation is to be made, a high degree of synchronization may not be required.
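As a minimal sketch of the synchronization task, the snippet below aligns a slow device stream onto a faster reference timeline by linear interpolation. The rates and values are hypothetical; real systems must also handle clock drift, dropped packets, and irregular packet timing.

```python
import numpy as np

# Hypothetical example: align a 0.5 Hz trend (e.g., a device sending a
# packet every 2 s) onto a 1 Hz reference timeline by linear interpolation.
ref_t   = np.arange(0.0, 10.0, 1.0)             # 1 Hz reference timestamps
dev_t   = np.arange(0.0, 10.0, 2.0)             # 0.5 Hz device timestamps
dev_val = np.array([10., 12., 14., 16., 18.])   # device samples at dev_t

# np.interp clamps to the last known value beyond the final device sample.
aligned = np.interp(ref_t, dev_t, dev_val)
print(aligned)  # [10. 11. 12. 13. 14. 15. 16. 17. 18. 18.]
```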


There are some applications where very accurate time synchronization of data is required. An example is combining data from different sources to improve the quality of data. For example, EEG data contaminated with electrocardiogram (ECG) or pulse artifact can falsely trigger automated seizure detection algorithms. Other physiologic signals can be used to help remove or reduce such artifacts in the EEG using various analysis methods such as adaptive filtering or independent component analysis. For example, ECG can be used to help remove pulse artifact and a respiration signal can be used to aid in removing high-frequency ventilator artifact from the EEG. Figure 40.1 shows EEG data (left panel) contaminated with pulse artifact. Using simultaneously collected ECG, the EEG can be adaptively filtered to remove the artifact (right panel). The notation on the first channel of each panel shows the difference in an amplitude envelope function caused by the artifact (8.2 µV p-p with the artifact versus 5.4 µV p-p without).




Fig. 40.1


Electroencephalographic data contaminated with electrocardiogram (ECG) artifacts (left panel) and after adaptive filtering with the ECG (right panel) to remove the artifact.
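The adaptive filtering described above can be sketched with a single-tap least-mean-squares (LMS) filter on synthetic data. This is an illustrative toy, not the algorithm used to produce Figure 40.1: the "EEG" is white noise, the "pulse artifact" is a sinusoid correlated with the reference signal, and the step size mu is chosen only for quick convergence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.arange(n) / 256.0                    # 256 Hz timeline

eeg_clean = rng.normal(0.0, 1.0, n)         # surrogate EEG (white noise)
ref = np.sin(2 * np.pi * 1.2 * t)           # reference (e.g., ECG-derived pulse)
artifact = 3.0 * ref                        # pulse artifact leaking into the EEG
primary = eeg_clean + artifact              # contaminated EEG channel

# Single-tap LMS: adapt weight w so that w * ref tracks the artifact;
# the residual (primary - w * ref) is the cleaned EEG estimate.
w, mu = 0.0, 0.05
cleaned = np.empty(n)
for i in range(n):
    e = primary[i] - w * ref[i]             # error = cleaned-sample estimate
    w += 2 * mu * e * ref[i]                # LMS weight update
    cleaned[i] = e

# After convergence most of the artifact power is removed.
tail = slice(n // 2, n)
print(np.std(primary[tail]), np.std(cleaned[tail]))
```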


Communicative Monitoring


Once collected, the information in multiple data streams must be extracted and communicated to the clinician. Perhaps the simplest and most effective way to extract information is to observe relationships in the data by plotting time-synchronized trends on a single display. Such displays are standard on conventional vital-signs monitors because they facilitate the visual detection of correlations between measurements. Figure 40.2 shows an example that plots trends of intracranial pressure (ICP), arterial blood pressure (ABP), flow velocity, tissue oxygenation index (TOI), and brain tissue oxygenation. Episodes of increased ICP are associated with hyperemia due to unstable arterial pressure. Figure 40.3 shows a more comprehensive visual display, a neurocritical care dashboard, which includes data represented by waveforms, images, and other formats in addition to trends.




Fig. 40.2


Simultaneous trends of 1 hour monitoring of intracranial pressure (ICP), arterial blood pressure (ABP), flow velocity (FV), tissue oxygenation index (TOI), and brain tissue oxygenation (P bt O 2 ).

This provides a synchronized view of physiologic data.

(Courtesy Dr. Marek Czosnyka.)



Fig. 40.3


A neurocritical care dashboard that includes information from multiple domains.

(Courtesy Dr. Val Nenov.)


Going further, simple statistical analyses on the recorded data can be revealing. For example, Figure 40.4 shows the change in brain oxygen versus cerebral perfusion pressure at two points in time in the same patient using scatter plots. It illustrates the dynamic nature of autoregulation in neurocritical care. These plots can be calculated with most spreadsheet software using data exported from a neuromonitoring data collection system or with more specialized statistics software. These capabilities also exist in some of the neuromonitoring systems described in the next section.




Fig. 40.4


Scatter plots of brain oxygen vs. cerebral perfusion pressure on subsequent days made using the Statistica Data Mining software.

(Courtesy Dr. Michael Schmidt.)
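The kind of relationship Figure 40.4 displays can be quantified with a simple correlation. The sketch below uses invented brain oxygen and cerebral perfusion pressure values: a strong positive correlation suggests a pressure-passive (impaired autoregulation) state, whereas a near-zero correlation suggests intact autoregulation. The values and thresholds here are illustrative, not clinical criteria.

```python
import numpy as np

# Hypothetical data: brain tissue oxygen readings at increasing cerebral
# perfusion pressure (CPP) on two different days in the same patient.
cpp           = np.array([60., 65., 70., 75., 80., 85., 90.])
pbto2_passive = np.array([14., 16., 19., 21., 24., 26., 29.])  # tracks CPP
pbto2_intact  = np.array([22., 21., 23., 22., 23., 21., 22.])  # flat across CPP

r_passive = np.corrcoef(cpp, pbto2_passive)[0, 1]
r_intact  = np.corrcoef(cpp, pbto2_intact)[0, 1]
print(round(r_passive, 2), round(r_intact, 2))  # ~1.0 (passive) vs ~0.0 (intact)
```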


In summary, multimodality monitoring creates new problems with data collection, but it is anticipated that most will be solved with near-term technology. For data analysis, there are many ways to process multiparametric data to extract and display information useful to the clinician. This is an active area of critical care research.




Creating a Bedside Knowledge Environment


The Role of Artificial Intelligence


Once continuous multimodal data can be recorded, it will be possible to create a bedside knowledge environment that couples decision support to patient management. These methods are largely in the domain of artificial intelligence (AI). The term, coined by computer scientist John McCarthy in 1956, denotes the science and engineering of making intelligent machines. AI attempts to transform data into reasoning and uses a variety of tools such as artificial neural networks (ANNs) and statistical classifiers. Neural networks are composed of “neuron-like” software elements that, in the aggregate, can exhibit simple learning behaviors. They are particularly useful in the analysis of neurocritical care data in that they can be trained to find patterns in data both in a generalized way and within a single patient.
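A single "neuron-like" element of the kind described above can be sketched in a few lines: a logistic unit trained by gradient descent to separate two synthetic patterns. The features and labels are invented for illustration and carry no clinical meaning.

```python
import numpy as np

# Toy sketch of one "neuron": a logistic unit trained by gradient descent.
# Features are illustrative normalized values, not clinical thresholds.
X = np.array([[0.9, 0.2], [0.8, 0.3], [0.7, 0.1],   # "compromised" patterns
              [0.2, 0.8], [0.1, 0.9], [0.3, 0.7]])  # "stable" patterns
y = np.array([1., 1., 1., 0., 0., 0.])

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # logistic activation
    grad = p - y                              # cross-entropy gradient
    w -= lr * X.T @ grad / len(y)             # weight update
    b -= lr * grad.mean()                     # bias update

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print(pred)  # the unit learns to reproduce the training labels
```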


Although the potential of AI is significant, publications using AI to analyze data specifically in neurocritical care are sparse, most likely because of the challenges of collecting multiparametric data. Some of this work is reviewed here, and then the promise AI holds for creating a knowledge environment at the bedside is discussed. A number of references provide more in-depth explanations of AI concepts.


An active area of AI research in critical care is improving the accuracy of alarms and the detection of events. Most clinicians are familiar with the problem of alarms in a multimonitor environment, such as critical care or surgery. Imhoff recently reviewed the literature and showed that up to 90% of all alarms in critical care monitoring are false alarms, which leads to a dangerous desensitization of the staff toward true alarms. Several studies have shown encouraging results with the application of AI techniques to alarm management, but these techniques have not made their way into routine clinical use, most likely because of the liability concerns of medical device vendors.
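One simple family of techniques from this literature, delay or persistence rules that suppress transient threshold crossings, can be sketched briefly. The threshold and sample values below are invented for illustration; this is not a validated clinical algorithm.

```python
# Illustrative sketch: a persistence rule that suppresses transient
# threshold crossings (e.g., movement artifact), one simple way false
# alarms are reduced in the alarm-management literature.

def persistent_alarm(values, threshold, min_consecutive):
    """Alarm only when `threshold` is exceeded for `min_consecutive` samples."""
    run = 0
    alarms = []
    for v in values:
        run = run + 1 if v > threshold else 0
        alarms.append(run >= min_consecutive)
    return alarms

# Hypothetical ICP trend: one transient spike, then one sustained rise.
icp = [12, 14, 35, 13, 22, 25, 27, 26, 15]
print(persistent_alarm(icp, 20, 3))
# The transient spike at index 2 is suppressed; the sustained rise alarms at index 6.
```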


AI techniques can be used to assess the context of the patient, taking into consideration the variability among patients, to provide more individualized monitoring. For example, Apiletti and colleagues used real-time classification techniques to detect different severity levels of a monitored patient’s clinical situation, whereas Zhang and Szolovits, using neural networks, showed patient-specific modeling in real time to be feasible and effective to generate bedside alerts. An analysis of the dynamics of the critical care work environment as a complex cognitive system was performed by Patel et al.; this work has implications for the design of decision support tools.


Specific to neurocritical care, Nelson et al. analyzed patterns of cerebral microdialysis in patients with TBI and, with a neural network methodology, investigated pattern relationships to intracranial pressure and cerebral perfusion pressure. Minhas et al. used decision tree analysis to identify which patients would be colonized with antibiotic-resistant bacteria on admission to the NCCU. Several groups have studied the prognostic and predictive abilities of AI techniques. For example, Swiercz et al. used neural network techniques with a wavelet algorithm incorporated to predict ICP values, whereas Nikiforidis et al. constructed a Bayesian belief network (BBN) to predict short-term prognosis of TBI patients in the intensive care unit, and Väth et al. used ANN to analyze the accuracy of outcome prediction after TBI.


Discriminant function analysis is used to determine which combination of variables discriminates between two or more naturally occurring states. The resulting discriminant function can be used in monitoring to alert the clinician to a change in the state of the patient. The bispectral index (BIS) is an example that integrates several disparate descriptors of the EEG; it has gained wide use in surgery to assess anesthetic effects on the brain and has also been used as a measure of sedation in critical care (see Chapter 9, Chapter 25).
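A two-class discriminant function of the kind described can be sketched with Fisher's linear discriminant on synthetic data. The feature values below are invented, and this is not how the BIS itself is computed.

```python
import numpy as np

# Fisher's linear discriminant on two synthetic classes (two features each).
class_a = np.array([[1.0, 2.0], [1.2, 1.8], [0.8, 2.2], [1.1, 2.1]])
class_b = np.array([[3.0, 4.0], [3.2, 3.8], [2.8, 4.2], [3.1, 4.1]])

mu_a, mu_b = class_a.mean(axis=0), class_b.mean(axis=0)
# Pooled within-class scatter matrix
sw = np.cov(class_a.T, bias=True) + np.cov(class_b.T, bias=True)
w = np.linalg.solve(sw, mu_b - mu_a)         # discriminant direction
threshold = w @ (mu_a + mu_b) / 2.0          # midpoint decision boundary

def state(x):
    """Classify a new observation by its discriminant score."""
    return "B" if w @ x > threshold else "A"

print(state([1.0, 2.0]), state([3.0, 3.9]))  # A B
```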


Another AI approach is used when “human expertise” is needed to solve problems. Human expertise is formalized as a knowledge base, a series of rules, or other means (or a combination of these) so the computer can use this information to process the data. Expert systems, as these are called, often are used to augment statistical analyses when the statistical methods alone are not robust enough to provide the desired results. Figure 40.5 shows the results of an automated seizure detection algorithm developed using an expert systems approach. The first pass of the analysis calculates metrics for a variety of features, such as rhythmicity and amplitude. The second pass, using expert systems tools, adds the knowledge of an expert epileptologist and follows a rule-based approach to evaluate the metrics for conditions such as their stability or fluctuation over time and the symmetry of activity in adjacent channels. This combination of techniques significantly increases the accuracy of seizure detection and allows a more precise determination of seizure start and stop on a per-channel basis.




Fig. 40.5


A display of automated seizure detection (red colored trace) that uses a combination of statistical techniques and an expert system approach (in a neonatal electroencephalogram).
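The rule-based second pass described above can be caricatured as a conjunction of checks over first-pass feature metrics. The metric names and thresholds below are hypothetical stand-ins, not taken from any actual detector.

```python
# Hypothetical sketch of a rule-based second pass over first-pass metrics.
# Metric names and thresholds are illustrative only.

def second_pass_rules(metrics):
    """Expert-system-style evaluation of first-pass feature metrics."""
    rules = [
        metrics["rhythmicity"] > 0.7,    # sustained rhythmic activity
        metrics["amplitude_uv"] > 20.0,  # amplitude well above background
        metrics["stability"] > 0.5,      # activity evolves stably over time
    ]
    return all(rules)

seizure_like  = {"rhythmicity": 0.9, "amplitude_uv": 45.0, "stability": 0.8}
artifact_like = {"rhythmicity": 0.9, "amplitude_uv": 45.0, "stability": 0.2}
print(second_pass_rules(seizure_like), second_pass_rules(artifact_like))  # True False
```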




Historical Notes and Review of Existing Systems


There are very few commercial systems that integrate neurologic data; consequently, a few university-based NCCUs have developed their own. These systems and their range of capabilities are reviewed briefly.


University-Developed Systems


Some of the earliest publications in multimodality neuromonitoring came from the University of Graz, Austria, in the late 1980s. Beginning mostly with the simultaneous collection and integration of electrophysiologic signals (EEG and evoked potentials), the researchers added a full complement of physiologic signals to their data collection system and have used it to investigate a variety of procedures in surgery and critical care. Figure 40.6 illustrates the integrated display of signals in a patient in the NCCU whose condition deteriorated to brain death.




Fig. 40.6


Multimodal data recording from a 57-year-old woman with an intracerebral hemorrhage in the left hemisphere that shows a progression to brain death following an intracranial pressure (ICP) increase.

The recorded modalities from left to right are electroencephalographic (EEG) compressed spectral array (two channels), brainstem auditory evoked potentials (BAEP), the interpeak latency (IPL) of peak I to IV of the BAEP, somatosensory evoked potentials (SSEP) with the N14 and N20 marked and the central conduction time (CCT) calculated, heart rate (HR), heart rate variability (HRV), ICP, blood pressure ([BP], systolic and diastolic), and temperature. Zervical and Kortikal refer to the cervical and cortical responses of the SSEP. Hirntod indicates brain death.
