Although it is intuitive that any neurosurgeon would seek to apply the best available evidence consistently to patient management, the application of evidence-based medicine (EBM) principles and clinical practice guidelines (CPGs) remains variable. This article reviews the origin and process of EBM, and the development, assessment, and applicability of EBM and CPGs in neurosurgical care, aiming to demonstrate that CPGs are one valid option among those available to improve quality of care. CPGs are not intended to define the standard of care but to compile dynamic advisory statements, which must be updated as new evidence emerges.
Key points
- Clinical practice guidelines (CPGs), which involve (1) systematic review, selection, and ranking of studies as evidence for each therapeutic option, followed by (2) a multidisciplinary panel agreement based on analysis of the strength of that evidence, offer a more reliable approach to achieving quality and effectiveness than expert opinion, which is derived principally from past experience.
- CPGs are one of several tools available to improve health care delivery by assisting the decision-making process.
- All physicians should critically appraise CPGs and determine whether the recommendations are applicable and appropriate for any individual patient.
- CPGs are subject to change as new evidence emerges, and thus are intended to be advisory statements and not standards of care.
Introduction
There is still considerable variability in therapeutic approaches to numerous neurosurgical conditions. In addition, the rising costs of health care delivery, partly arising from the increasing use of sophisticated but expensive technology, represent significant societal financial challenges. Consequently, many reform efforts around the globe, such as the development and implementation of clinical practice guidelines (CPGs), aim at improving clinical outcomes in a cost-efficient manner. However, despite being endorsed by the American Association of Neurological Surgeons (AANS) and the Congress of Neurological Surgeons (CNS), CPGs appear not to be widely integrated as a means of assisting the clinical decision-making process in neurosurgery. Both evidence-based medicine (EBM) and CPGs have often been the source of misinterpretations and misconceptions, which could in part explain the reluctance of some neurosurgeons to apply EBM and adhere to CPGs. Indeed, some neurosurgeons believe that it is inappropriate to apply data obtained from large clinical studies to an individual patient, or that adequate EBM should rely exclusively on high-quality (level I) evidence, or, indeed, that surgery is as much an art as it is a science. Consequently, any attempt at standardization is seen not only as a violation of surgeons’ autonomy but also as a potential sword of Damocles in terms of medicolegal issues.
The purpose of this article is to emphasize the necessity for neurosurgeons to critically appraise their clinical and surgical decisions in light of the best currently available evidence. By reviewing the definition, purpose, elaboration, and appraisal of EBM and CPGs, as well as their advantages, limitations, and applicability, the authors hope to clarify their intended use and demonstrate that adequately constructed CPGs represent another tool to optimize patient care and, therefore, should be integrated into neurosurgical teaching and practice.
Definition and general purpose
CPGs constitute a broad topic that intertwines a plethora of concepts. The Institute of Medicine recently revised its definition of CPGs as “statements that include recommendations intended to optimize patient care that are informed by systematic review of evidence and an assessment of the benefits and harms of alternative care options.” (p4) Similarly, the World Health Organization defines CPGs as “systematically developed evidence-based statements which assist providers, recipients and other stakeholders to make informed decisions about appropriate health interventions.” (p2) These definitions use several key words that need to be considered carefully within their background and context to fully appreciate their meaning and implications.
Evidence-based medicine and clinical practice guidelines: context of creation and approach
To begin with, CPGs involve critically appraised evidence, which cannot be dissociated from the concept of EBM. In fact, the 1970s marked the convergence of pivotal movements in modern medicine from which EBM and CPGs conjointly emerged. Historically and traditionally, medical teaching and practice essentially depended on knowledge provided by medical leaders. Hence, clinical decisions were based chiefly on past experience, were thus prone to subjectivity biases, and resulted in wide variability in clinical practice. The term Evidence-Based Medicine itself was coined in 1991 by clinical epidemiologists from McMaster University, Canada, and made its first appearance in print in 1992. EBM describes “the application of scientific method in determining the optimal management of the individual patient.” (p89) In 1992, an EBM Working Group emphasized that “[while] clinical experience and the development of clinical instincts (particularly with respect to diagnosis) are a crucial and necessary part of becoming a competent physician […] systematic attempts to record observations in a reproducible and unbiased fashion markedly increase the confidence one can have in knowledge about patient prognosis, the value of diagnostic tests, and the efficacy of treatment” and that “the study and understanding of basic mechanisms of disease are necessary but insufficient guides for clinical practice.” The goal of rigorous experimentation and observation is to minimize error (bias). Clinical studies are vulnerable to 5 principal sources of bias: subject selection, allocation of subjects to different interventions, assessment of the effect of each intervention, analysis of the results, and reporting of those results. Randomization and appropriate, meticulous study design help control both random and systematic error. Hence, randomized controlled trials (RCTs) are the gold standard of clinical research.
Limiting bias allows greater confidence in the validity and accuracy of the results obtained and, consequently, in their interpretations and conclusions in answering specific clinical questions. Sackett and colleagues defined EBM as “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” [which involves] “integrating individual clinical expertise with the best available external clinical evidence from systematic research.”
EBM can be considered the philosophic approach from which CPGs are developed; CPGs may thus be perceived as a materialization of EBM. EBM involves 5 steps: (1) defining the question or problem; (2) searching for the evidence; (3) critically appraising the literature; (4) applying the results; and (5) auditing the outcome. Critically appraising the literature (the third step) essentially involves the concepts of “level of evidence” and “grade of recommendation.” EBM establishes a hierarchy of strength of evidence (level of evidence) that classifies published studies based on analysis of their design and methodological rigor, and favors data from studies that constitute a higher level of evidence (RCTs and meta-analyses of RCTs) in making clinical, guideline, and health care policy decisions regarding therapy. In 1979, the Canadian Task Force on the Periodic Health Examination presented a system that organized clinical studies based solely on their study design and derived the strength of recommendations accordingly. Although the simplicity and ease of application of this approach made it attractive and popular, it also generated significant criticism ( Table 1 ). Since then, several systems for classifying quality of evidence and strength of recommendations (see Table 1 ; Tables 2 and 3 ) have been elaborated to address a wider spectrum of aspects, such as research design and methodological rigor, consistency of effects, clinical relevance, generalizability, audience, and clinical foci. The GRADE Working Group recently elaborated a grading system that assesses 4 principal aspects: study design, study quality, consistency of effects across studies, and directness. These systems, based on the level of evidence and the consistency of findings, allow the grading of studies, which distills down to a strategy for rating evidence and identifying the strength of recommendations for a specific treatment according to an ordinal scoring scheme.
| | Canadian Task Force on the Periodic Health Examination | | American Medical Association (AMA, 1990) and Surgical Management of TBI Author Group (2006) |
|---|---|---|---|
| | Effectiveness of Intervention | | Class of Evidence |
| I | Evidence obtained from at least one properly randomized controlled trial | I | Evidence from one or more well-designed RCTs, including overviews of such trials; prospective RCTs |
| II-1 | Evidence obtained from well-designed cohort or case-control analytic studies, preferably from more than one center or research group | II | Evidence from one or more well-designed comparative clinical studies, such as nonrandomized cohort studies, case-control studies, and other comparable studies; studies in which the data were collected prospectively, and retrospective studies based on clearly reliable data (eg, certain observational studies, cohort studies, prevalence studies, and case-control studies) |
| II-2 | Evidence obtained from comparisons between times or places with or without the intervention. Dramatic results in uncontrolled experiments (such as the results of the introduction of penicillin in the 1940s) could also be regarded as this type of evidence | | |
| III | Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees | III | Evidence from case series, comparative studies with historical controls, case reports, and expert opinion; most studies with retrospectively collected data (eg, clinical series, case reports, and expert opinion) |
| | Certainty of Recommendation | | Certainty of Recommendation |
| A | There is good evidence to support the recommendation that the condition be specifically considered in a periodic health examination | Standard | Represents accepted principles of patient management that reflect a high degree of clinical certainty |
| B | There is fair evidence to support the recommendation that the condition be specifically considered in a periodic health examination | Guideline | Represents a particular strategy or range of management strategies that reflect a moderate degree of clinical certainty |
| C | There is poor evidence regarding the inclusion or exclusion of the condition in a periodic health examination | Option | Remaining strategies for patient management for which there is unclear clinical certainty |
| D | There is fair evidence to support the recommendation that the condition be excluded from consideration in a periodic health examination | | |
| E | There is good evidence to support the recommendation that the condition be excluded from consideration in a periodic health examination | | |
| Grade of Recommendation | Level | Therapy/Prevention, Etiology/Harm (Treatment Outcome) | Prognosis (Disease Outcome) | Diagnosis (Diagnostic Test) | Differential Diagnosis/Studies on Prevalence of Symptoms | Economic and Decision Analyses (Economic/Decision Model) |
|---|---|---|---|---|---|---|
| A | 1a | Homogeneous SR^a of RCTs | Homogeneous SR^a of inception cohort studies; CDR^b validated in different populations | Homogeneous SR^a of level 1 diagnostic studies; CDR^b with 1b studies from different clinical centers | Homogeneous SR^a of prospective cohort studies | Homogeneous SR^a of level 1 economic studies |
| | 1b | Individual RCT with narrow confidence interval | Individual inception cohort study with more than 80% follow-up; CDR^b validated in a single population | Validating^c cohort study with good^d reference standards; CDR^b tested within one clinical center | Prospective cohort study with good follow-up (ie, >80%) and adequate time for alternative diagnoses to emerge | Analysis based on clinically sensible costs or alternatives, with multiway sensitivity analyses; SR of the evidence with multiway sensitivity analyses |
| | 1c | All or none^e | All-or-none^e case series | Absolute SpPins and SnNouts^f | All-or-none^e case series | Cost-effectiveness studies |
| B | 2a | Homogeneous SR^a of cohort studies | Homogeneous SR^a of either retrospective cohort studies or untreated control groups in RCTs | Homogeneous SR^a of diagnostic studies superior to level 2 | Homogeneous SR^a of 2b and better studies | Homogeneous SR^a of economic studies superior to level 2 |
| | 2b | Individual cohort study (including low-quality RCT; eg, <80% follow-up) | Retrospective cohort study or follow-up of untreated control patients in an RCT; derivation of CDR^b, or validated on split sample^g only | Exploratory^h cohort study with good^d reference standards; CDR^b after derivation, or validated only on split sample^g or databases | Retrospective cohort study, or poor follow-up | Analysis based on clinically sensible costs or alternatives, with multiway sensitivity analyses; limited review(s) of the evidence, or single studies with multiway sensitivity analyses |
| | 2c | Outcomes research; ecologic studies | Outcomes research | | Ecologic studies | Audit or outcomes research |
| | 3a | Homogeneous SR^a of case-control studies | | Homogeneous SR^a of 3b and better studies | Homogeneous SR^a of 3b and better studies | Homogeneous SR^a of 3b and better studies |
| | 3b | Individual case-control study | | Nonconsecutive study, or without consistently applied reference standards | Nonconsecutive cohort study, or very limited population | Analysis based on limited alternatives or costs, poor-quality estimates of data, but including sensitivity analyses incorporating clinically sensible variations |
| C | 4 | Case series (and poor-quality cohort and case-control studies^i) | Case series (and poor-quality prognostic cohort studies^j) | Case-control study; poor or nonindependent reference standard | Case series or superseded reference standards | Analysis with no sensitivity analysis |
| D | 5 | Expert opinion without explicit critical appraisal, or based on physiology or bench research | Expert opinion without explicit critical appraisal, or based on physiology or bench research | Expert opinion without explicit critical appraisal, or based on physiology or bench research | Expert opinion without explicit critical appraisal, or based on physiology or bench research | Expert opinion without explicit critical appraisal, or based on economic theory |
