Measuring Outcomes for Neurosurgical Procedures




Health care evolution has led to focused attention on the clinical outcomes of care. Surgical disciplines are increasingly asked to provide evidence of treatment efficacy. As technological advances push the surgical envelope further, it becomes imperative that postoperative outcomes be studied prospectively to assess the quality of care provided. The authors present their experience from a multiyear implementation of an outcomes initiative and share lessons learned, emphasizing the important structural elements of such an endeavor.


Key points








  • Outcomes must be measured before they can be improved.



  • Improving outcomes involves understanding current levels of performance, identifying areas in need of potential improvement, initiating changes in clinical care provided, and finally, measuring the change in performance achieved.



  • Secondary data (such as claims data) can be used to track outcomes and quality to some extent, although there are significant problems with data validity and completeness.



  • Primary data are the most accurate source of outcomes data, but they require a large investment of time and money, are subject to reporter bias, and raise privacy concerns.






Introduction


Surgical procedures account for a large portion of health care expenditures. As such, their cost and indications have come under scrutiny at a policy level as well as in the news media. Studies of the effectiveness and efficacy of surgical interventions have been argued to be necessary to accurately assess surgical outcomes and potentially avoid preventable complications, as has been shown in other aspects of inpatient medical care. The Centers for Medicare and Medicaid Services (CMS) has in the past initiated a program for the assessment of hospital performance on measures of care ( www.hospitalcompare.hhs.gov ), but no such program yet exists for the assessment of surgical care.


Measuring surgical outcomes has been indirectly associated with improving outcomes of care, and a focus on outcomes is advocated as the best way to improve them. Such a process involves understanding current levels of performance, identifying areas in need of potential improvement, initiating changes in the clinical care provided, and, finally, measuring the change in performance achieved, a cycle that has been argued to be a powerful ally in the quest to improve quality of care.


The recent explosion of Web sites dedicated to the assessment of purported physician quality indicates an increasing public interest in the quality of one’s physician of choice. Because most of these Internet sources rely on haphazard and potentially inaccurate data, misinformation abounds. To date, however, the lack of any official data sanctioned by the CMS, private payers, or physician professional societies has left a gap in available information, often filled by subpar surrogate venues.


The recently implemented Affordable Care Act promises a focus on quality and accountability of care. Although these concepts are traditionally embedded in medical and surgical practice, few attempts have been made at quantifying them as they relate directly to surgical procedures in general, and neurosurgical procedures in particular, outside the realm of research.


Part of the difficulty in addressing quality of surgical care is the absence of a universally agreed-on definition of quality. In addition, for any definition of quality used, there are limited instruments to measure it accurately and reproducibly. Although the neurosurgical literature is replete with disease-specific outcome studies using narrowly applicable scales, a more generic methodology that can be applied uniformly to assess surgical outcomes does not exist.


In this article, the important components of such a process are presented, and the possible application of a quality assessment initiative in the clinical setting is discussed. The article draws on the experience of a similar large-scale process at Mayfield Clinic and the University of Cincinnati’s Department of Neurosurgery that was designed and implemented over the past decade and has been reported elsewhere. This process has been shown to accurately record procedure-specific and disease-specific outcomes following surgical intervention in a large mixed academic and private practice setting, incorporating the entire gamut of neurosurgical interventions, both cranial and spinal.








Discussion


The lack of a universally accepted definition of quality of surgical care has led to the use of several surrogates for quality, including population-derived/patient-centered data (rates of mortality, major postoperative complications, and discharge disposition, as well as length of stay) and systemic indicators (data extracted from deidentified databases of inpatient diagnoses, malpractice claims, and adherence to accepted standards). Such data, although not traceable to individual patients, are nonetheless powerful because they are derived from large cohorts of patients with specific diagnoses or who have undergone specific procedures. Much of the early clinical outcomes work was performed by analyzing such data.


As powerful and relatively accessible as secondary data may be, several limitations are associated with their analysis. The deidentified data that generally populate such databases do not allow careful analysis of individual patient charts to assess comorbidities, gather additional information, or audit the accuracy of the data. Databases based on coding data are susceptible to inaccurate coding, particularly of secondary diagnoses, because the primary providers are rarely the people responsible for the coding. The almost universal adoption of electronic medical records (EMR) in most clinical settings promises to reduce coding and transcription inaccuracies, but such potential benefits for studying clinical outcomes remain unproven.


Systemic indexes, usually related to adherence to certain processes found to be effective in clinical settings, are another standard that can be used in a fairly straightforward way to assess quality of care. Early data on such processes informed the CMS program for hospital performance evaluation ( www.hospitalcompare.hhs.gov ). Nonetheless, a study by Nicholas and colleagues assessing the accuracy of outcomes reporting in 2000 US hospitals found a low correlation between rates of compliance with CMS preoperative process-of-care measures and perioperative outcomes.


With all the limitations of secondary data analysis, the importance of assessing primary clinical data becomes evident. Before the components of such an analysis are addressed, however, the limitations of studying primary outcome data should be discussed. Primary data can be difficult to measure accurately (eg, reported postoperative pain levels), may have limited clinical utility and relevance (eg, mortality rates for most neurosurgical procedures), can be prohibitively time-consuming to collect (eg, postoperative neuropsychological testing), are overwhelmingly operator-dependent (eg, radiologic determination of bony fusion), are often multifactorial and potentially biased by unknowable factors (eg, return-to-work date depending on patient social circumstances and potential for secondary gains), and are difficult to manage in a secure way that guards against breaches of patient privacy.


Lessons from the design and implementation of an organization-wide quality improvement process that Mayfield Clinic and the Department of Neurosurgery at the University of Cincinnati undertook over a period of several years are discussed. In doing so, it is hoped that the important components of such a process are assessed and the lessons learned throughout the design, trial, and implementation of the project are shared. Because the practice setting is a large mixed academic and private practice environment with more than 5000 neurosurgical procedures spanning the spectrum of cranial and spinal surgery, it is likely that parts of the authors’ experience will be applicable to most neurosurgical settings.


Timeline


During the initial discussions regarding this initiative, it was decided to proceed in a graduated fashion. Once the process was designed, 3 physician champions were asked to implement a pilot trial for 3 months; this was used to identify systemic problems as well as opportunities for improving the entire process. Data generated during that time helped improve the EMR forms used to capture data, shape the process, and identify training methods for the ancillary staff involved in the initiative. The redesigned process was then implemented system-wide, using lessons learned during the pilot phase. The entire process, from the beginning of design to full implementation, took 6 years.


The lengthy implementation was the result of multiple factors: no comparable process had previously been described in the literature that could be emulated; the initiative began early in the overall adoption of EMR for data capture, with the limitations in clinical practice that this entailed; and the authors took a purposefully cautious approach to win system-wide buy-in and trust. Based on these lessons, a full-scale implementation in a large practice might now reasonably take as little as half of the time reported.


Components


Setting up the process for outcomes assessment is critical. Building the culture that will support and nourish the process, and developing a nuanced understanding of the level, detail, and kind of information to be collected, avoids future structural revisions that can be costly and time-consuming. In the current health care environment, such a process should be EMR-based. Early design of custom forms that include a comprehensive list of the quality indicators to be collected is important. Compliance with the process can be facilitated by including the additional data fields in EMR forms already in use, such as charge sheets for procedures and template postoperative visit forms. Appropriate design of the additional fields should allow information to be easily compiled and used in summative ways; therefore, free-text fields should be avoided, and numeric formats or preselected drop-down menus are preferable.
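The "structured fields, no free text" principle above can be sketched as a small data model. This is an illustrative example only, not the authors' actual EMR form; all field and menu names here are hypothetical.

```python
# Hypothetical sketch of a structured postoperative outcome form:
# enumerated (drop-down) and numeric fields instead of free text,
# so entries can be compiled and summarized automatically.
from dataclasses import dataclass
from enum import Enum


class Disposition(Enum):
    """Preselected drop-down menu values (illustrative)."""
    HOME = "home"
    REHAB = "rehabilitation"
    SNF = "skilled nursing facility"


@dataclass
class PostopVisitForm:
    patient_id: str
    length_of_stay_days: int   # numeric field, not free text
    disposition: Disposition   # drop-down menu, not free text
    ctcae_grade: int           # 0 (no complication) through 5


form = PostopVisitForm("pt001", 3, Disposition.HOME, 0)
print(form.disposition.value)  # home
```

Because every value is either numeric or drawn from a fixed menu, quarterly summaries reduce to simple counting and averaging rather than text parsing.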


In general, most similar databases should include data on procedure-associated complications, length of stay, discharge disposition, and posttreatment symptom resolution. In the authors’ process, they chose the CTCAE grading system (National Cancer Institute Common Terminology Criteria for Adverse Events, version 4.0, ctep.cancer.gov ) because it is a proven complication-reporting algorithm, with minor complications defined as grade 1 or 2 and major complications as grade 3 or 4. In addition, using disease-specific validated scales (eg, Karnofsky Performance Scale or Oswestry Disability Index) can add significant clinical information that may not be easily discernible from the general clinical data obtained during a physical examination.
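The CTCAE-based minor/major classification described above can be expressed as a one-line mapping. This is a sketch, not the authors' code; the function name is hypothetical.

```python
# Map a CTCAE v4.0 adverse-event grade to the minor/major categories
# used in the text: grades 1-2 are minor, grades 3-4 are major.
# In CTCAE, grade 5 denotes death related to the adverse event.
def classify_complication(ctcae_grade: int) -> str:
    if ctcae_grade in (1, 2):
        return "minor"
    if ctcae_grade in (3, 4):
        return "major"
    if ctcae_grade == 5:
        return "death"
    raise ValueError(f"invalid CTCAE grade: {ctcae_grade}")


print(classify_complication(2))  # minor
print(classify_complication(4))  # major
```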


It is universally accepted that any such process should be done in a prospective way to minimize biases. Furthermore, a point-of-service timing of information gathering is advisable, including data recorded at the time of patient contact, every time the patient is evaluated in an inpatient or outpatient setting. Although this may seem like an ambitious goal in most busy clinical practices, in the authors’ extensive experience over several years, it is surprisingly easy to implement. This way, completeness of data collection is ensured and the end results are more robust compared with randomly chosen entry criteria that several large studies have used in other settings.


Empowering multiple members of the health care team to record outcomes data, as appropriate for their level of training and expertise, can be an important safeguard against missed information and input inaccuracies. A case in point is the accurate recording of postoperative major and minor complications, which can be missed if an outpatient or inpatient evaluation occurs outside the primary health care system. Medical assistants, nurses, and nurse practitioners have, in the authors’ experience, proven better than surgeons at ensuring that such information is collected during postoperative phone inquiries or brief office assessments. Contrary to the initial concern that such open access to the outcome information by all members of the health care team might create more inaccuracies, the opposite was observed. It should be noted that whenever different assessments disagree, the system should allow for adjudication by the primary health practitioner.


Accuracy of data is a primary goal of such a process. Increasing daily experience with the data collection system leads to improvement in accuracy, with fewer missed data as well as fewer incorrectly coded data. Nonetheless, in most clinical practice settings, a mature outcome data collection process involves hundreds if not thousands of data fields recorded daily. As with any such large-scale process, errors are intrinsic to it and a systematic approach should be used to address this. Auditing is one of the most powerful ways to both correct errors and identify, record, and prospectively follow the rate of errors in the entire process.


Auditing can be performed in many different ways. As a safeguard for optimizing data accuracy, auditing can be performed on all data entered; in the authors’ experience, however, this is expensive, impractical, and potentially unnecessary. The exact portion of patient records assessed is a function of manpower and the level of error allowed. The authors implemented a limited audit review of 5% of the charts of patients treated in each given time period and attempted to adjudicate all incongruent information recorded. Auditing consisted of generating a random list of 5% of all patients for each surgeon; for those patients, a dedicated nurse performed a complete review of the EMR (both outpatient and inpatient) to assess for any missed data or inaccuracies in the data reported.
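Drawing the per-surgeon 5% random audit list described above is straightforward to sketch. This is an illustrative example, not the authors' implementation; the rounding-up choice (so that every surgeon with at least one case is audited) is an assumption on our part.

```python
# Hypothetical sketch: select a random 5% audit sample per surgeon.
import math
import random


def audit_sample(patients_by_surgeon, fraction=0.05, seed=None):
    """Return a dict of surgeon -> randomly sampled patient IDs for chart audit."""
    rng = random.Random(seed)
    sample = {}
    for surgeon, patients in patients_by_surgeon.items():
        # Round up (an assumption) so low-volume surgeons are still audited.
        n = min(len(patients), math.ceil(fraction * len(patients)))
        sample[surgeon] = rng.sample(patients, n)
    return sample


quarter = {"surgeon_a": [f"pt{i}" for i in range(100)],
           "surgeon_b": [f"pt{i}" for i in range(10)]}
picks = audit_sample(quarter, seed=42)
print(len(picks["surgeon_a"]), len(picks["surgeon_b"]))  # 5 1
```

Fixing the random seed makes a given quarter's audit list reproducible, which helps when the audit itself must be auditable.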


By using one nurse full-time equivalent (FTE) for the ongoing audit of the entire record, the authors not only developed a fair grasp of the amount of inaccuracy within the data but also identified systemic problems with data gathering, which in turn led to significant systemic improvements in minimizing erroneous information capture and missed data. An example of this improvement process was the early identification through auditing that length-of-stay information was often missing or inaccurate: when the primary surgeons completed the immediate postoperative outcome collection forms, the exact date of discharge was not always readily available. The authors therefore shifted the date-of-discharge field to the hospital billing form completed after discharge by the coders, who always had the discharge date available, and length of stay became one of the most accurately recorded data fields.


Missing data found by auditing were noted, and a rate of missed data was extrapolated from them. It should be noted that auditing results helped galvanize organization-wide support for the process and provided direct feedback along multiple vectors of performance for all involved. More generally, sharing such information among all health care providers creates an environment of positive competitive spirit that results in universal buy-in. As a result, the rate of missed information identified by the process dropped in almost every successive quarter. Inaccuracies identified during the audit were adjudicated by the physician chair of the outcomes committee or, if need be, by the entire outcomes committee.


At the conclusion of each quarter, a report was generated that included the top 5 procedures by frequency per surgeon per quarter, with the number of procedures, complications, dispositions, and lengths of stay for the procedures the individual surgeon performed compared with the relevant aggregate numbers for the surgeon’s entire career (as reported in the system) as well as for the entire organization. These reports were presented confidentially to each physician and were not otherwise shared. The chair of the committee reviewed the reports in a process designed to identify outliers (individual surgeons with outcomes more than 2 standard deviations from the mean). No such outliers were identified throughout the process.
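The 2-standard-deviation outlier screen described above can be sketched as follows. This is an assumed, simplified version (it treats each surgeon's rate as one observation and uses the sample standard deviation); the actual process may have adjusted for case mix and volume.

```python
# Illustrative sketch: flag surgeons whose outcome rate lies more than
# k standard deviations from the group mean (k = 2 in the text).
from statistics import mean, stdev


def find_outliers(rates, k=2.0):
    """rates: dict of surgeon -> rate (eg, complication rate). Returns outliers."""
    values = list(rates.values())
    mu, sigma = mean(values), stdev(values)
    return [s for s, r in rates.items() if abs(r - mu) > k * sigma]


rates = {"a": 0.04, "b": 0.05, "c": 0.06, "d": 0.05}
print(find_outliers(rates))  # []
```

Note that with only a handful of surgeons, a single extreme value inflates the standard deviation enough to mask itself, so this screen is most meaningful across a larger group.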


Future Directions


Creating a prospective database in which data on all patients are recorded in an auditable fashion is a valuable process, but not the ultimate goal of quality improvement. Developing ways to channel the collected information back to providers and system process managers is imperative if change is to be implemented. Although this last part of the process may be difficult to carry out, the information gathered is nonetheless valuable in allowing every surgeon to appreciate their outcomes and follow them in real time throughout their practice. The Mayfield Clinic / University of Cincinnati process has not yet completed an entire cycle of implementing practice changes based on lessons learned, yet the outcomes initiative has already affected patient outcomes through the focused attention on the detailed outcome measures collected. The entire organization thus acquired a culture focused on patient-centered outcomes, and that in itself is a major improvement over the anecdotal memories of complications and overall postoperative patient outcomes that most surgeons are accustomed to.


Specific improvements derived from lessons learned, particularly those related to specific procedures and/or comparisons of individual surgeons’ preferences and choices, may make for a continual quality improvement process. Innovative treatment ideas as well as new surgical technologies and techniques can in this way be tested against traditional approaches in a more nuanced fashion, allowing the identification of possible opportunities for improved outcomes.


Disclosures: None.


Oct 12, 2017 | Posted in NEUROSURGERY