The application of the principles of evidence-based medicine to neurosurgical practice is intended to improve patient outcomes. Quality and safety improvement efforts measure actual quality and safety metrics in practice in order to assess and incentivize actions intended to improve care. Evidence-based neurosurgery and formal quality and safety projects are complementary: evidence provides the scientific basis for optimizing care, and quality improvement and safety projects provide real-time feedback on whether the goal of improved outcomes is being achieved.
In this chapter, the basic principles of evidence-based practice applied to neurosurgery are reviewed and their connection to formal quality and safety efforts is highlighted.
Keywords: Evidence-based medicine, Quality improvement, Patient safety
Understanding Variation: Noise and Bias
The Role of Process Improvement, Quality, and Safety Projects in Evidence and Evidence-Based Practice
Decisions, Decisions
Problems in an Evidence-Based Practice
Readings on Evidence-Based Practice
The deepest sin against the human mind is to believe things without evidence.
The fundamental assumption of evidence-based practice is that basing clinical decisions on the best available evidence maximizes the likelihood of correct diagnosis, effective treatment, and minimal complications. Evidence-based practice should therefore result in high-quality, safe neurosurgical practice, and it is sensible to review the evidence-based practice of neurosurgery as a way to support efforts to improve quality and safety. In this brief chapter, we review the principles of evidence-based practice in neurosurgery and the role that formal quality and safety efforts play in continuously improving it.
It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.
The definition of evidence-based neurosurgery with which we have worked since 2006 is “a paradigm of practicing neurosurgery in which:
the best available evidence is consistently consulted first to establish principles of diagnosis and treatment that are,
artfully applied in light of the neurosurgeons’ training and experience informed by the patient’s individual circumstances and preferences,
to regularly produce the best possible health outcomes.”
Encapsulated in this definition is the idea that the evidence-based practitioner begins with generalizable knowledge about the condition to be treated, interprets that knowledge in light of the individual circumstances of the patient being treated, and is consistent in the application of these principles. The definition also recognizes that generalizable evidence cannot answer every question or deal with every individual circumstance the practitioner may confront, and therefore allows for the artful application of principles to individual circumstances.
Understanding Variation: Noise and Bias
“On est aisément dupé par ce qu’on aime.” (One is easily fooled by that which one loves.)
This definition raises the question “what is evidence?” Evidence comes in many forms. A single observation can make the difference between knowing that something can happen and believing that it is impossible. A series of observations in apparently similar circumstances can lead to a tentative conclusion (or hypothesis) about a predictable phenomenon.
Moving beyond simple sets of observations, it was not until the 1800s that physicians began to analyze collected series of observations in more rigorous ways. This began with counting (e.g., the number of patients at risk and the number of patients actually becoming ill) and tabulating (the same counts cross-tabulated with, for example, gender). This allowed a more sophisticated analysis of collected observations.
By the early 1900s, it became clear that such collections of observations were subject to some natural forms of variation. When an individual physician published his tabulated results, it was with the hope that other physicians could use those results to predict outcomes. It was recognized then that the physician who collected the results from his practice considered his patients to be a “sample” of the larger population of people susceptible to this disease. The study of sampling, and of the variation in tabulated results that could arise from observing only a subset of the patients to whom the results might be applied, led to the development of biostatistics. Statistics were developed to allow an analysis and understanding of the natural variation, or “noise,” in the data. This was a major intellectual advance in understanding the observational evidence of medicine: it allowed quantification of the accuracy of estimates obtained from samples of larger populations, and estimation of the likelihood that differences found between sample populations were simply the result of chance variation, or “noise,” rather than real biological phenomena.
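The effect of sampling “noise” described above can be illustrated with a minimal simulation. This sketch is not neurosurgical data; the 10% complication rate and the series sizes are invented purely to show why small series scatter widely around a true population rate while large series cluster tightly around it.

```python
import random

random.seed(42)

# Hypothetical illustration: assume a "true" complication rate of 10%
# in the full patient population.  Any one physician's series is only
# a sample, so the rate observed in that series varies by chance.
TRUE_RATE = 0.10

def observed_rate(n_patients):
    """Complication rate seen in one simulated series of n_patients."""
    complications = sum(1 for _ in range(n_patients)
                        if random.random() < TRUE_RATE)
    return complications / n_patients

# Small series scatter widely around the true rate; large series
# cluster tightly -- the "noise" biostatistics was developed to quantify.
small = [observed_rate(20) for _ in range(5)]
large = [observed_rate(2000) for _ in range(5)]
print("series of 20 patients:  ", [f"{r:.0%}" for r in small])
print("series of 2000 patients:", [f"{r:.0%}" for r in large])
```

Running the sketch shows the point directly: repeated small series can report rates far from 10% purely by chance, which is exactly the variation that standard errors and significance tests were invented to quantify.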
The next major intellectual advancement in understanding collected clinical evidence was the realization that the methods of data collection could lead to important distortions in results, or “bias.” For example, collecting data from patients who came to see a famous consultant physician in a large city might produce results very different from those of patients with similar complaints who came to see a general practitioner in a small town. The simple recognition of this possibility led to more careful interpretation of results. Methods for controlling bias then began to be developed, and the discipline of experimental design arose from the need to ensure unbiased collection of data. By the middle of the 20th century, additional methods such as blinding of observers to the applied experimental techniques and randomization of subjects were developed to minimize the likelihood that bias would lead to incorrect results. While the fields of clinical epidemiology, clinical biostatistics, and trial design continue to evolve, these techniques for controlling noise and bias in the collection, analysis, and interpretation of clinical data provide much greater assurance that the results accurately reflect reality.
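The referral-bias example above can also be sketched as a small simulation. All quantities here are invented for illustration: each simulated patient has a severity score, sicker patients do worse regardless of treatment, and a hypothetical referral center sees mostly severe cases, so its outcome rate differs from that of a random sample of the same population.

```python
import random

random.seed(0)

# Hypothetical illustration of selection bias: each simulated patient
# has a severity score in [0, 1); severe patients do worse.
def make_patients(n):
    return [random.random() for _ in range(n)]

def outcome_good(severity):
    # Higher severity -> lower chance of a good outcome.
    return random.random() > severity

population = make_patients(10_000)

# Biased sample: a referral center that sees mostly severe cases.
referred = [s for s in population if s > 0.6]
# Random sample of the same size drawn from the whole population.
random_sample = random.sample(population, len(referred))

biased_rate = sum(outcome_good(s) for s in referred) / len(referred)
unbiased_rate = sum(outcome_good(s) for s in random_sample) / len(random_sample)
print(f"referral-center series: {biased_rate:.0%} good outcomes")
print(f"random sample:          {unbiased_rate:.0%} good outcomes")
```

The referral-center series reports markedly worse outcomes even though the “treatment” is identical in both groups, which is why the paragraph above emphasizes how data collection methods alone can distort results, and why random sampling and randomized allocation were developed.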