1 The Role of Evidence-Based Medicine in Bleeding and Coagulation Management in the Neurosurgical Patient

Adopting Evidence-Based Medicine and Its Challenges

Evidence-based medicine (EBM) has been widely promoted as an ideal in clinical practice.1,2 We are called to judiciously combine the best available evidence with clinical expertise and our patient’s values to deliver the best care possible.2,3 Without incorporating current evidence, clinical practice is at risk of becoming stale and outdated. However, without also drawing from clinical expertise, evidence cannot be safely applied, as even the best research may be inappropriate for an individual patient or clinical scenario.2 Thus, proper practice requires the integration of external evidence, clinical experience, and patient factors into informed decision making.

The modern EBM movement was born from legitimate concern that the ongoing and unchallenged use of unproven therapies would result in undue harm and missed opportunities to adopt more effective interventions.4 As such, early proponents of EBM called for evaluation of treatments through systematic and unbiased methods, and appealed to medical practitioners to continually update and appraise their own knowledge. Although the philosophical origins of EBM are not new,2,5 it has only been since the late 20th century that evidence-guided practice has been widely promoted in medicine as an ideal over authoritative teaching alone.2,4,6 Over the last two decades, EBM has become the cornerstone of many undergraduate, postgraduate, and continuing medical education programs.7–10 Of note, the deliberate introduction of EBM into medical curricula and practice has been associated with improvements in knowledge acquisition, clinical care, and patient outcomes.7,10–13 Indeed, the evolution of EBM has had a profound impact on medicine today.

The practice of EBM involves several interrelated steps, which broadly include asking an appropriate clinical question, retrieving the pertinent evidence, appraising its quality, translating evidence into practice, and evaluating the resulting outcome (Table 1.1).

Admittedly, however, the adoption of EBM has faced some resistance.14 Some critics have voiced concerns that EBM is impractical given (1) the severe time constraints in typical busy clinical practices; (2) the often unreliable or discredited external evidence; and (3) the difficulties in applying discrete, one-dimensional studies to patients with complex clinical conditions. This chapter systematically addresses these perceived challenges and offers practical solutions to help practitioners effectively access, appraise, and apply evidence to patient care.

Table 1.1 The Process of Practicing Evidence-Based Medicine
Step | Action |
1 | Asking a clearly focused clinical question, considering the patient (or problem), intervention (or exposure), comparator, and outcome |
2 | Searching for the best available evidence |
3 | Critically appraising the evidence for its validity, importance, and applicability |
4 | Integrating the evidence with clinical expertise and patient values into clinical practice |
5 | Evaluating the outcome |
Source: Adapted from Straus SE, McAlister FA. Evidence-based medicine: a commentary on common criticisms. CMAJ 2000;163:837–841.
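To make step 1 concrete, the following is a minimal, hypothetical sketch in Python of how a clinical question might be structured around its four PICO elements and composed into a simple boolean search string. The example question and the query format are illustrative assumptions, not the syntax of any particular literature database.

```python
# A minimal sketch of step 1 from Table 1.1: structuring a clinical
# question around its PICO elements and composing a simple boolean
# search string. The example question and query format are assumptions.
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    patient: str       # patient or problem
    intervention: str  # intervention or exposure
    comparator: str
    outcome: str

    def search_string(self) -> str:
        # Combine the four elements into one boolean query.
        terms = (self.patient, self.intervention, self.comparator, self.outcome)
        return " AND ".join(f'("{t}")' for t in terms)

question = PICOQuestion(
    patient="adults undergoing elective neurosurgery",
    intervention="preoperative smoking cessation",
    comparator="continued smoking",
    outcome="postoperative pulmonary complications",
)
print(question.search_string())
```

In practice, a focused question of this form maps directly onto the search terms used in step 2, which is why step 1 repays the effort of being explicit.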
Information Management
Perhaps the greatest challenge to pursuing EBM is the time pressure that most clinicians face in a busy clinical practice.15–17 Further, the volume of information presented to clinicians is increasing at an extraordinary pace amid unprecedented growth in scientific discovery.18,19 It is therefore of utmost importance to identify efficient and reliable sources of evidence that can quickly provide relevant answers to clinical questions.
As opposed to traditional textbooks (which are quickly outdated), optimal resources should combine current evidence-based content with easy accessibility.3 Examples of these resources include systematic reviews (e.g., Cochrane Reviews), evidence-based synopses of the literature (e.g., American College of Physicians [ACP] Journal Club), and systems-based resources (e.g., clinical practice guidelines and updated evidence-based handbooks that explicitly cite evidence to support claims; this handbook is one such example).3 Although it is occasionally necessary to refer to original research articles for findings that are “hot off the press” or for particularly specialized information, doing so sacrifices convenience. In this respect, the value of librarians and medical informaticians in assisting with the retrieval of primary data cannot be overstated. Library services facilitate information retrieval, the integration of evidence into practice, and decision making, and thus have a positive impact on patient outcomes.20–22
However, for most clinicians, routinely seeking out primary evidence to answer every clinical question, even when provided with research assistance, is impractical. Most full-time clinicians, even those who are enthusiastic about EBM, rarely have time to search and review relevant, original research.23–25 Advocates and leading enthusiasts of EBM have long acknowledged this barrier.23,26 Although it may be unrealistic to expect everyone to be “evidence-based practitioners” (i.e., those who are able to seek out and appraise raw evidence from scratch), all care providers should at least be taught to be “evidence users” (i.e., practitioners equipped with tools to flag important studies and trained to incorporate evidence into practice).23 Accordingly, information management is paramount to achieving this latter goal.23–26 Reassuringly, “evidence users” who refer to secondary sources for pre-appraised evidence can still become highly competent, up-to-date practitioners who are able to deliver evidence-based care.23
Critical Appraisal
Another challenge raised by critics of EBM is that research discoveries may be unreliable and sometimes even misleading. Indeed, there are numerous examples of studies that were later discredited and of reputedly evidence-based guidelines that, over time and unexpectedly, needed to be changed. For example, based on a prospective cohort study conducted decades ago, there was a long-held belief that abrupt smoking cessation within 8 weeks of surgery was harmful and increased the risk of postoperative pulmonary complications,27 presumably because of decreased coughing and increased sputum production.28 Physicians were cautioned against advising patients to stop smoking shortly before surgery.27,28 However, subsequent studies have yielded contrary results. Two randomized controlled trials (RCTs) showed that smoking cessation within 8 weeks of surgery did not increase the rate of pulmonary complications but rather reduced overall perioperative morbidity.29,30 Recent systematic reviews and meta-analyses have similarly concluded that short-term preoperative smoking cessation is likely beneficial, with no indication of harm.31–33 This example illustrates how some clinicians have become understandably frustrated when “best evidence” changes, leaving them to wonder what to believe or trust.
Indeed, these research discrepancies may be related to problems inherent to study design or analysis (e.g., confounding and bias in observational studies) or attributed to the inappropriate interpretation of results (e.g., inferring causation from data that merely show association). Fortunately, these discrepancies are rarely the result of deliberate scientific misconduct or fraud. Nonetheless, we should not allow these problems to dissuade us from keeping up-to-date. Rather, they simply highlight the importance of critical appraisal in evaluating study importance and validity, that is, in understanding the parameters and limitations of each study and interpreting the implications of the results. Specifically, critical appraisal involves grading studies according to the strength of the underlying study design, identifying possible sources of bias, and determining whether the resulting conclusions are appropriate.34
Many clinicians have difficulty grasping the complexities of the different study designs reported in the literature. As an overview, primary evidence can broadly come from experimental studies (e.g., RCTs), observational studies (e.g., cohort, case-control studies, quasi-experimental designs), and nonsystematic observations (e.g., case reports and case series). All observational studies are inherently subject to confounding and possible bias because of baseline differences in the characteristics of comparison groups, which may, in turn, threaten a study’s validity and potentially result in incorrect inferences.35 To address this issue, statisticians have developed increasingly sophisticated analytical techniques to account for imbalances between groups, including matching, multivariable regression analysis, propensity scores, and instrumental variables (see the sketch below). Nonetheless, no extent of statistical adjustment can absolutely guarantee that an observational study is free from all potential confounding and bias. It is for this reason that RCTs are commonly heralded as the gold standard of study designs, as they are the least susceptible to confounding and bias when properly designed and adequately randomized. However, a common weakness of the RCT is that patient selection criteria are often intentionally narrow (so as to minimize variability and maximize statistical efficiency), thus limiting the generalizability of the results. Still, it is widely accepted that a properly conducted RCT is more valid than observational studies or nonsystematic clinical observations.
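As a brief, hedged illustration of why such adjustment matters, the following Python sketch simulates an observational comparison in which a single confounder (an assumed variable labeled “severity”) creates a spurious treatment effect; a simple multivariable regression that includes the confounder recovers the true (null) effect. This is a teaching toy under made-up data, not a recipe for real analyses, where unmeasured confounding can never be ruled out.

```python
# A minimal simulation with made-up data: an unadjusted observational
# comparison suggests a treatment effect, but adjusting for the
# confounder ("severity") recovers the true effect of zero.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# The confounder influences both treatment assignment and the outcome.
severity = rng.normal(size=n)
treated = (severity + rng.normal(size=n) > 0).astype(float)  # sicker patients treated more often
outcome = 1.5 * severity + rng.normal(size=n)                # true treatment effect is zero

# Naive (unadjusted) comparison: difference in mean outcome between groups.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Multivariable adjustment: ordinary least squares of the outcome on
# treatment plus the confounder (with an intercept).
X = np.column_stack([np.ones(n), treated, severity])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive estimate:    {naive:.2f}")    # substantially biased away from zero
print(f"adjusted estimate: {beta[1]:.2f}")  # close to the true effect of zero
```

Propensity scores and instrumental variables pursue the same goal of balancing differences between groups, each resting on its own assumptions.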
Table 1.2 Possible Hierarchy of Evidence for Therapeutic Studies
Strength | Design |
Strongest | N of 1 randomized trial |
↓ | Systematic reviews of randomized trials |
↓ | Single randomized trial |
↓ | Systematic review of observational studies addressing patient-important outcomes |
↓ | Single observational study addressing patient-important outcomes |
↓ | Physiological studies |
Weakest | Unsystematic clinical observations |
Source: Adapted from Guyatt GH, Haynes RB, Jaeschke RZ, et al; Evidence-Based Medicine Working Group. Users’ Guides to the Medical Literature: XXV. Evidence-based medicine: principles for applying the Users’ Guides to patient care. JAMA 2000;284:1290–1296.
With this in mind, many clinicians commonly refer to a hierarchical pyramid of evidence to determine the strength of a study. First introduced in 1979,35 this concept has been refined over the years and is presently used in a wide variety of guidelines.36,37 These systems commonly place RCTs in a pyramidal hierarchy above observational studies (Table 1.2). It should be noted, however, that although RCTs are a critical component of the evidence base for questions of therapeutic efficacy, the RCT design may be inappropriate for other questions (e.g., questions of prognosis or natural history, where exposures cannot be controlled or where RCTs cannot be ethically conducted).38 In situations where RCTs are neither feasible nor practical, observational designs may be superior. In this regard, dedicated research registries with rich clinical data have greatly facilitated quality improvement, post-trial evaluations in “real-world” settings, evaluations of the process of care, and surveillance for rare outcomes. Thus, it is increasingly apparent that registries and databases are indispensable tools that, in some circumstances, offer unique advantages over even well-designed trials.
Furthermore, although RCTs may potentially provide the highest quality of evidence, not all RCTs are properly conducted or analyzed. Grading the quality of evidence, therefore, should not be based on study design alone.34,39–41 In this regard, the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) Working Group has identified the shortcomings of a rigid evidence pyramid and takes into account other important study factors in addition to design.42,43 For RCTs, study limitations, inconsistency of results, indirectness of evidence, imprecision, and reporting bias are considered; for observational studies, the magnitude of effect, the presence of a dose–response relationship, and the impact of potential biases are considered. Accordingly, the grading of a study may shift down or up given the presence or absence of these factors.42,43 Of note, the current American College of Chest Physicians (ACCP) antithrombotic therapy and prevention of thrombosis guidelines present clinical recommendations based on the described GRADE methodology (Table 1.3).43,44 These ACCP guidelines, a mature clinical practice initiative, are particularly relevant to the content of this handbook.44
Thus, this book follows a similar template, where appropriate; statements relating to prognosis, diagnosis, harm, and therapy are accompanied by a clear articulation and grading of the evidence. Understanding how to correctly assess the quality and strength of evidence is foundational for the practice of EBM.45
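To make the up-and-down movement described above concrete, here is a toy Python sketch of the GRADE idea: evidence enters at a level set by study design (RCTs high, observational studies low) and is shifted down or up by the factors listed. The category strings and the one-step-per-factor rule are simplifications for illustration, not the official GRADE procedure.

```python
# A toy sketch of the GRADE idea, not the official GRADE procedure:
# evidence enters at a level set by study design and is moved down or
# up by study-level factors. Category names are simplified assumptions.
from dataclasses import dataclass, field

LEVELS = ["very low", "low", "moderate", "high"]

@dataclass
class EvidenceBody:
    randomized: bool                                 # RCT vs. observational design
    downgrades: list = field(default_factory=list)   # e.g., "imprecision", "indirectness"
    upgrades: list = field(default_factory=list)     # e.g., "large effect", "dose-response"

    def grade(self) -> str:
        # RCTs enter at "high"; observational studies enter at "low".
        level = LEVELS.index("high") if self.randomized else LEVELS.index("low")
        level -= len(self.downgrades)  # each concern shifts the grade down one step
        level += len(self.upgrades)    # each strengthening factor shifts it up one step
        return LEVELS[max(0, min(level, len(LEVELS) - 1))]

# An RCT with serious imprecision drops from "high" to "moderate";
# an observational study with a large effect rises from "low" to "moderate".
print(EvidenceBody(randomized=True, downgrades=["imprecision"]).grade())   # moderate
print(EvidenceBody(randomized=False, upgrades=["large effect"]).grade())   # moderate
```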