Medical Informatics




Introduction


The term medical informatics frequently is used to describe systems applied to hospital and physician office record keeping, operations, and regulatory documentation, such as electronic medical records (EMRs), computerized physician order entry (CPOE), and databases for billing or for review by regulatory bodies such as The Joint Commission. However, these systems generally provide an electronic recapitulation of written records rather than new insight into disease treatment. In the context of monitoring in neurocritical care, medical informatics is more appropriately defined as the acquisition, storage, and analysis of neuromonitoring data to (1) facilitate knowledge and discovery about the pathogenesis and treatment of disease, and (2) improve methods to integrate the complex array of clinical, physiologic, and genetic data into practical bedside care. Unfortunately, although monitoring, and neuromonitoring in particular, has advanced tremendously in the four decades since the origins of critical care, data are still considered for clinical care in much the same way as a generation ago. It is clear that the complexity of data in the neurocritical care unit (NCCU) requires a fresh and novel approach to medical informatics to complement the development of new neuromonitoring paradigms. This chapter provides insight into this clinical problem and describes some initial efforts toward this end.


The Data Environment


The NCCU is an extremely data-intense environment. The art of neurocritical care is to use these data in real time to make decisions about patient care. Patients routinely are monitored frequently or continuously using 30 or more parameters including blood pressure (BP), heart rate (HR), systemic arterial oxygen saturation (SaO2), respiratory rate (RR), tidal volume, peak inspiratory pressure, intracranial pressure (ICP), and body temperature (T), to name a few. Add to these the variety of new advanced neuromonitoring tools such as brain tissue oxygen tension (PbtO2), cerebral blood flow (CBF), and microdialysis parameters, and several obvious questions emerge:



  1. What do we do with all these data?

  2. Are all of these data useful and important?

  3. How are artifacts "cleaned"?

  4. When and how often are the data collected?

  5. Which data are discarded, and when?



Every NCCU already uses some form of informatics technology for these purposes. In most circumstances, however, this "technology" consists of monitors that display but do not capture and analyze the data. Often raw data are simply read off the monitor and recorded on a paper chart at defined intervals. Bedside nurses and physicians then review this paper record and rely on their expertise and intuition, without the aid of additional analytic tools, to apply it to clinical care. At a simple level, this leads to data cleaning, because obvious artifacts (e.g., a monitor that was turned off during patient transport) are accounted for. Large amounts of data that require complex integration and robust analytic tools are now a routine problem in corporate sectors such as the financial, airline, and computer industries, which have made significant advances in information technology over the past 20 years. Remarkably, only recently have attempts been made to push critical care informatics beyond the paper chart and clinician intuition to address the questions posed earlier. Figure 45.1 illustrates the complexity and volume of even a short duration of critical care data.




Fig. 45.1


Critical care data are voluminous and complex.

These graphs depict two days of continuous neurocritical care physiologic data from a single patient represented on a time-series plot. The top panel shows blood pressure, central venous pressure, heart rate, and intracranial pressure. The middle panel shows brain tissue oxygen, brain temperature, and jugular venous oxygen saturation. The bottom panel shows measures obtained from the mechanical ventilator. These data are too noisy and compressed for visual inspection to identify single events, trends, or relationships between parameters, necessitating summary measures, indices, or informatics-based analyses.




What Is Not Known


The fundamental purpose of neurocritical care (for central nervous system problems) is the identification, prevention, and treatment of secondary brain injury. Physiology matters, and much of the focus in the NCCU is on management of secondary brain insults (SBIs), which may include hypotension, hypoxia, fever, elevated ICP, hyperglycemia, and seizures. In fact, much of the moment-to-moment treatment provided in neurocritical care involves ensuring that these physiologic parameters, which can be seen as surrogate or intermediate targets with presumed impact on survival and long-term functional outcome, are within specified ranges. This often leads to a lengthy series of bedside orders that delineate acceptable ranges of one parameter at a time (e.g., "Call medical doctor if systolic BP <90 mm Hg"; "Call medical doctor if SaO2 <94%"; "Call medical doctor if ICP >20 mm Hg"). The bedside nurse then calls the physician, and abnormalities in a single parameter are treated in a reactive fashion (e.g., acetaminophen after a temperature elevation, mannitol after continued ICP elevation) with little attention to how multiple parameters may interact. Yet, while patients are treated in a univariate manner, they live in a multivariate world in which many factors interact to determine injury and outcome. Furthermore, the clinical response is triggered by a single threshold rather than by trends over time, with little insight into the complex interactions among physiologic factors.


Table 45.1 enumerates several questions about secondary brain injury in neurocritical care that remain elusive because of limitations in existing approaches to informatics. A fundamental limitation that has hampered attempts to address these issues is inadequate data acquisition and storage. To address them in a more comprehensive and sophisticated manner, data acquisition tools well beyond the paper chart are required. What is needed is not simply a recapitulation of the paper record in electronic form, as occurs in many commercial EMRs, but rather a system in which all data, including patient records, laboratory analyses, imaging, and continuous physiologic data, are not only acquired but also archived, integrated, and synchronized. Many monitors used in the NCCU are stand-alone and self-contained; that is, they are not designed to operate together. This has been an obstacle to NCCU informatics but should become less of a barrier since the American Society for Testing and Materials adopted the concept of an integrated clinical environment in 2009. Other chapters in this text focus on the various monitors and the challenges of integration; these methods should be seen as both complementary and required aspects of neurocritical care informatics.



Table 45.1

Several Existing Neurocritical Care Informatics Questions








  • Do secondary brain insults (e.g., fever, elevated intracranial pressure [ICP], hypotension) have a dose-response relationship with outcome?



  • Are there multivariate interactions and relationships among various physiologic parameters that influence outcome beyond individual parameters?



  • Are there event signatures that would predict the future occurrence of a deleterious problem (e.g., elevated ICP or organ failure) that would allow proactive measures to prevent its occurrence?



  • How do we integrate new measures (e.g., PbtO2)?



  • How often do we need to collect physiologic data to optimize patient care?



  • How can we classify patients to understand their heterogeneity?



  • How can we integrate physiologic data with data from genomic, proteomic, imaging, and other types of data?



  • How can physiologic data be presented in real-time and in a user-friendly manner to provide decision-enabling information to clinicians?



  • To what extent is prior medical history (from before hospitalization or earlier in hospital course) important in interpreting current physiologic values and events?





Are More Data Better? The Example of Defining “Dose” of Secondary Brain Insults


Many studies have identified SBIs, such as hypotension, hypoxia, elevated ICP, and fever, as being associated with poor outcome after acute neurologic injury or illness. However, the manner in which these events have been defined varies widely across studies. Some studies have used any occurrence of an SBI (e.g., systolic BP <90 mm Hg), others have used the number of times an event occurred, and still others have used the duration below a specific threshold. Often the choice of definition is dictated by the resolution, or frequency, with which physiologic vital signs were collected in the medical record or research database. For example, in a study that identified fever as a predictor of hospital length of stay, only the maximum daily temperature was recorded in the database used for the analysis. In another study of the impact of hypotension in the emergency department (ED) on outcome after traumatic brain injury (TBI), the number of times the systolic BP fell below 90 mm Hg was assessed, regardless of the depth or duration of the episodes. All of these methods are attempts to define a "dose" of the deleterious event, a concept familiar in pharmacology but often not used in the NCCU.


In pharmacokinetics, dose or "exposure" is usually defined as the area under the curve of concentration versus time. Intuitively this makes sense for SBIs; a longer duration of a lower BP should be more injurious than a brief event that just barely crossed the threshold. However, this dose is not readily apparent from current paper or electronic bedside charting. Even this seemingly simple assessment is an example of informatics technology: the resolution at which data are collected must be defined, and analysis must be performed on the raw data to derive a summary dose term, the SBI burden. Figure 45.2 demonstrates the example of fever burden as area under the curve above a defined threshold. Importantly, initial studies suggest that it makes a difference both how frequently data are collected and how "dose" is defined. It is not surprising that more frequent data collection identifies more SBIs. By extension, current intensive care unit (ICU) charting practices (usually writing down a value hourly) are missing many events of potential clinical significance. Furthermore, integrating these data as area under the curve seems to be more informative than simply identifying the presence or number of events: hypotension burden is more strongly associated with outcome than the number of hypotensive events alone, and fever burden is a broader method of assessing normothermia control than time over a threshold alone. Thus the assessment of SBIs as "dose" is a straightforward and seemingly simple example of ICU informatics that is still not clinically routine because of shortcomings in data acquisition, charting, and bedside data analysis tools.




Fig. 45.2


Fever curve with cutpoint at 38.0 °C (dashed line). Depending on the measurement method, this patient could be considered (1) positive for any fever during hospitalization, (2) as having had 15 febrile episodes, (3) as having a total fever duration of 78.2 hours, or (4) as having a total fever dose (area under the curve [AUC]) of 28.1 degree-hours. AUC is calculated as the sum of all the solid regions over the 38.0 °C cutpoint.
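To make the dose concept concrete, a fever burden such as the one in Figure 45.2 can be computed by trapezoidal integration of the temperature excess above the cutpoint. The following Python sketch is illustrative only; the function name, the synthetic fever curve, and the sampling rates are assumptions rather than any study's actual methods. It also shows how hourly charting can miss dose that higher-resolution sampling captures.

import numpy as np

def fever_burden(times_h, temps_c, threshold=38.0):
    # Fever dose in degree-hours: area under the temperature curve
    # above the threshold, computed with the trapezoidal rule.
    t = np.asarray(times_h, dtype=float)
    excess = np.clip(np.asarray(temps_c, dtype=float) - threshold, 0.0, None)
    return float(np.sum((excess[:-1] + excess[1:]) / 2.0 * np.diff(t)))

# Synthetic fever spike that peaks between hourly charting times.
t_hi = np.arange(0.0, 6.0, 5.0 / 60.0)               # 5-minute sampling
temp_hi = 37.0 + 1.5 * np.exp(-((t_hi - 3.5) ** 2))  # hypothetical curve
t_lo, temp_lo = t_hi[::12], temp_hi[::12]            # hourly subsample

print(fever_burden(t_hi, temp_hi))  # dose at 5-minute resolution
print(fever_burden(t_lo, temp_lo))  # hourly charting misses part of the dose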




New Measures Derived from Old—The Example of PRx


A compelling reason to pursue critical care informatics is the possibility that measurements derived from analysis of single or multiple parameters may be more informative than the raw data of a single parameter alone. The most common measurement routinely used in this fashion, albeit a very simple example, is cerebral perfusion pressure (CPP). CPP is not measured directly but rather is a derived measure calculated as the difference between the mean arterial pressure (MAP) and the ICP. CPP is routinely calculated by monitors, by charting systems, or by hand and is used in real time at the bedside in neurocritical care. However, current systems and bedside charting methods are poorly equipped to derive measures that involve anything more complex than simple arithmetic. Even so, some early neurocritical care informatics systems can compute new parameters derived from relationships between existing parameters. PRx, the pressure reactivity index, is one example.


Cerebral autoregulation is a critical component in ensuring adequate blood flow to normal and injured brain tissue. Impaired autoregulation is associated with worse outcome after TBI, and management of CPP based on an individual patient's autoregulation status may improve outcome. However, autoregulation status is difficult to assess from the usual monitored parameters. PRx, a moving correlation coefficient between ICP and MAP, is one measure of whether autoregulation is intact or impaired. PRx ranges between −1 and 1; a positive value indicates that ICP passively follows MAP, whereas a value near zero or below indicates absent or inverse correlation. If ICP and MAP are positively correlated, the ICP increases as the MAP increases; this occurs when the brain's autoregulation is impaired or the MAP is shifted off the normal autoregulation curve. If there is no correlation, then ICP and MAP are independent and autoregulation is presumably intact. A PRx below 0.3 is often used as a threshold for intact autoregulation. PRx has been used in studies of both TBI and nontraumatic intracerebral hemorrhage to define the optimal CPP, the CPP at which PRx is brought within the noncorrelating range. Importantly, PRx is determined not as a simple arithmetic function of single current values of existing parameters but rather as a statistically analyzed relationship between time-series data from existing parameters (ICP and MAP); that is, it is a form of real-time multivariable modeling. Although a seemingly straightforward example of a single derived parameter, PRx represents a leap forward in applying real-time analytics to raw data, which then can be used to drive clinical care. It does, however, require time synchronization and integrated digital data acquisition to allow real-time analysis; systems such as ICM+ now exist for that purpose.
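A minimal sketch of a PRx-style computation appears below, assuming 10-second averages of ICP and MAP and a 30-sample (roughly 5-minute) moving window, which approximates published PRx recipes; the synthetic data and function name are illustrative and do not reproduce the ICM+ implementation.

import numpy as np
import pandas as pd

def prx(icp, map_, window=30):
    # Moving Pearson correlation between 10-second averages of ICP
    # and MAP; a 30-sample window spans roughly 5 minutes.
    return pd.Series(icp).rolling(window).corr(pd.Series(map_))

# Synthetic 10-second averages with impaired reactivity: ICP passively
# tracks slow MAP fluctuations, plus measurement noise.
rng = np.random.default_rng(0)
map_vals = 90 + 10 * np.sin(np.linspace(0, 20, 600)) + rng.normal(0, 1, 600)
icp_vals = 15 + 0.3 * (map_vals - 90) + rng.normal(0, 1, 600)

prx_series = prx(icp_vals, map_vals)
print(prx_series.dropna().mean())  # positive PRx suggests impaired autoregulation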




Advanced Critical Care Informatics I: Data-Driven Methods for Prediction


The previous examples represent fairly straightforward applications of medical informatics that become possible only when data are collected electronically in a high-resolution format. The methods of analysis (e.g., area under the curve, correlation coefficients) are conventional in statistics and provide targeted summaries of measurements collected over an extended period for a single patient. Far more is possible, however. This section goes beyond summarization of measurements into the realm of data-driven methods that interpret measurements and predict outcomes, or, better yet, events that may occur in the ICU, based on experience represented by repositories of data collected from many patients. The next section considers model-based methods that combine repository data with expert knowledge of physiology to reason about underlying patient states and their anticipated evolution over time.


It has been said that critical care is "the art of managing extreme complexity." Informatics should thus be seen as a potential tool to help manage this complexity. Both data-driven and model-based methods do so by analyzing raw data and generating outputs that are much closer to actionable decisions. Of course, expert bedside clinicians do this already. Through accumulated experience, they have learned that certain patterns in the data, often patterns across time and across multiple parameters and monitoring tools (hereafter referred to as sensors), may indicate future problems that can be avoided with preemptive intervention. Advanced informatics techniques can support this process through data visualization and by extracting features not readily available from raw data and clinical charting, and to some extent they can automate it.


The fields of statistics and machine learning have produced a huge range of methods to analyze data and make predictions. The methods are typically trained on previously collected data and can be tested on unseen data to evaluate the accuracy of their predictions. Most of the methods used in medical informatics are supervised learning methods, which means that the training data contain ground-truth or expert-supplied answers for each of the instances. (Unsupervised methods such as clustering algorithms are useful for exploratory analysis—“making sense” of a large data collection or discovering unsuspected regularities through “data mining.”) Within the class of supervised learning methods, many different prediction methods are used. Three are considered here: linear models (which include most indices), neural networks, and decision trees.
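The train-and-test workflow itself is straightforward. The sketch below is a hypothetical example with fabricated features and outcomes: it fits a supervised predictor on one group of patients and evaluates its discrimination on held-out patients. A real application would draw both sets from an ICU data repository.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Fabricated patient-level features (e.g., age, admission score, SBI dose)
# and a binary outcome loosely driven by them.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([0.8, -1.2, 0.5]) + rng.normal(0, 1, 500)) > 0

# Train on one cohort; test predictive accuracy on unseen patients.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))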


Linear Models and Indices


The simplest way to build a predictor from multivariate ICU data is to combine the data linearly, that is, by taking a weighted sum. Linear regression sets the weights to give the best linear fit to a continuous output variable such as length of ICU stay, whereas logistic regression passes the sum through a soft threshold function to give an estimated probability for a binary output variable such as mortality. The Glasgow Coma Score (GCS) can be thought of as a linear model that sums three terms, the eye, verbal, and motor responses, with maximum scores of 4, 5, and 6, respectively. Although the GCS is not an explicit outcome predictor, thresholds often are applied to it to predict outcome. As the GCS example illustrates, selecting or designing highly predictive indices that aggregate or summarize the raw data is a crucial contribution.
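The arithmetic is simple enough to write out. In the hypothetical sketch below, the GCS is computed as an unweighted sum of its three components, and a logistic model passes a weighted sum of inputs through the sigmoid soft threshold; the weights are hand-picked for illustration, not fitted values from any study.

import numpy as np

def gcs(eye, verbal, motor):
    # GCS as the simplest linear model: an unweighted sum of three
    # component scores with maxima 4, 5, and 6 (total 3 to 15).
    return eye + verbal + motor

def logistic_prediction(features, weights, bias):
    # Weighted sum passed through the sigmoid soft threshold to
    # yield an estimated probability for a binary outcome.
    z = float(np.dot(weights, features)) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical inputs and hand-picked weights, for illustration only.
p_poor = logistic_prediction(
    features=np.array([gcs(3, 2, 4), 65.0]),  # admission GCS, age in years
    weights=np.array([-0.35, 0.04]),          # lower GCS, older age -> worse
    bias=1.0,
)
print(p_poor)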


Recent techniques have included the development of indices, such as total fever burden, elevated ICP dose, or burden of brain hypoxia, based on area-under-the-curve calculations. For example, Van Santbrink et al. investigated the prognostic value of a derived index known as the brain tissue oxygen response (TOR). Patients with an unfavorable outcome had a higher TOR during the first 24 hours, and logistic regression analysis supported the independent predictive value of TOR for outcome. Oddo et al. examined the burden of brain hypoxia and intracranial hypertension after TBI and found that brain hypoxia was associated with poor short-term outcome after severe TBI independently of elevated ICP, low CPP, and injury severity. Amin and Kulkarni used the opinions of clinical experts to develop a "fuzzy" GCS to predict cognitive recovery and found that it improved the information content for prediction. Chambers et al. developed pressure-time indices and identified age-related thresholds for ICP and CPP; these indices also were predictive of Glasgow Outcome Scale (GOS) score and could be compared independently of age. Hornero et al. studied changes in ICP complexity, estimated by the approximate entropy of the ICP signal, as subjects progressed from normal to elevated ICP. They concluded that decreased complexity coincides with episodes of intracranial hypertension, suggesting that the regulatory mechanisms that govern ICP are disrupted during acute rises. These examples all represent early efforts to develop informatics methods that aid outcome prediction. Regression analysis, however, assumes that all data are of value and usually assumes a linear relationship between the parameters and outcome. In addition, it is difficult to include time-series data in this form of analysis, so other tools may be better suited to bioinformatics in the NCCU.


Neural Networks


Real-world systems rarely behave in a simple, linear fashion throughout their entire dynamic range, and physiologic systems are no exception. For example, if MAP is used as an input variable in an outcome predictor, poor outcomes would be expected for very low and very high MAP with better outcomes in between. A linear model cannot generate this behavior; instead, a nonlinear predictor such as a neural network is needed. A neural network is essentially a nonlinear input-output function with many tunable weights. (The “network” simply displays the internal structure of the nonlinear function, with weights on the links and intermediate values computed at the internal nodes.) The training process iteratively modifies the weights to improve the degree of fit between the network’s predictions and the actual training data.
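The MAP example can be reproduced with a small network. In the sketch below, a two-hidden-layer network (scikit-learn's MLPRegressor, used here only for brevity) is trained on a fabricated U-shaped risk curve that no linear model could fit; the data and the risk function are assumptions for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Fabricated U-shaped relationship: risk is high at very low and very
# high MAP and lower in between.
rng = np.random.default_rng(2)
map_vals = rng.uniform(40, 140, 1000).reshape(-1, 1)
risk = ((map_vals.ravel() - 90.0) / 30.0) ** 2 + rng.normal(0, 0.1, 1000)

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(map_vals, risk)

for m in (50.0, 90.0, 130.0):
    print(m, net.predict([[m]])[0])  # expect high, low, high risk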


Several recent studies have described the use of neural networks in critical care patients, in particular those with acute neurologic insults. For example, Vath et al. used neural networks to predict outcome after TBI for different combinations of clinical and neuromonitoring parameters; different network models were constructed using clinical data plus additional inputs for the number of events related to ICP and PbtO2 levels. Lang et al. attempted to determine whether neural network modeling would improve outcome prediction compared with logistic regression analysis. They confirmed the importance of age, GCS, and hypotension in outcome prediction, and model performance was similar. Becalick and Coats developed a neural network using anatomic and physiologic predictors; head injury, age, and chest injury were the most important predictors. Li et al. compared three different models to predict a particular TBI decision, whether to perform open-skull surgery: logistic regression, a multilayer perceptron neural network, and a radial basis function neural network. Sensitivity and specificity for the regression model were lower than for the network models in predicting a neurosurgeon's decision, confirming the need for a nonlinear predictor. Recently Nelson et al. used mixed-effects models and nonlinear (artificial neural network) analyses to examine microdialysis data in TBI. This analysis showed multiple perturbations in metabolism and that long-term metabolic patterns were only weakly correlated with ICP and CPP; that is, factors other than pressure or flow likely influence the microdialysis findings.


Decision Trees


Whereas linear models and neural networks generate smoothly varying functions of the input variables, decision trees use hard thresholds (or discrete inputs, such as gender or injury type) to split the input space into distinct regions, each of which can be treated separately from the others. The training algorithm starts with a “stump” and adds branches to create subsets of the training data that are homogeneous with respect to the output variable. Decision trees are essentially flowcharts that can be used to categorize items such as patients, as shown in Figure 45.3 . For example, a set of patients in a study might all be grouped together at the “top” of a tree. At each branch, the set of patients can be divided into two (e.g., “Is a patient female or male?”) or more (e.g., “Did a patient receive 5 mg, 10 mg, or 25 mg of drug?”) subsets. These subsets can further be subdivided. The nature and sequence of the rules can be optimized according to various algorithms. Ultimately, the final subsets are evaluated based on an endpoint (e.g., females who received 5 mg of drug showed 70% improvement, females who received 10 mg showed 80% improvement). This type of evaluation might suggest a decision such as increasing drug dosage for poor responders.
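A brief sketch of tree induction on fabricated data follows; the features, the synthetic outcome rule, and the depth limit are all assumptions for illustration. The printed output is the flowchart of threshold splits described earlier.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Fabricated cohort: age (years), admission GCS, and maximum ICP (mm Hg),
# with a synthetic rule standing in for a real outcome label.
rng = np.random.default_rng(3)
X = np.column_stack([
    rng.uniform(18, 85, 300),   # age
    rng.integers(3, 16, 300),   # GCS
    rng.uniform(5, 40, 300),    # max ICP
])
y = ((X[:, 1] < 8) & (X[:, 2] > 20)).astype(int)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["age", "gcs", "max_icp"]))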

