Chapter 8 Clinical reasoning: the core of clinical practice
One of the most difficult aspects of creating a successful clinical practice is developing efficient clinical reasoning skills. Proficiency in assessment directly determines which modalities and strategies to use and which to avoid, how long and how often to apply them, which communication skills are effective for each person, when to refer to another practitioner, and a number of other critical elements of the treatment plan. The less experienced practitioner needs these skills right away, yet by nature they develop primarily with experience and practice. Such a practitioner might usefully take a course in the concepts of clinical reasoning (or read a chapter such as this); however, true clinical reasoning skills develop over time and, while they need to contain a number of essential features, are uniquely individual to each practitioner. A component of clinical reasoning involves achieving a balance between what has been termed evidence-based practice and practice that is based on the practitioner’s own experience.


The first portion of this chapter looks at some of the components of clinical reasoning skills; reviewing these may help the practitioner begin to develop them effectively. The remainder of the chapter discusses evidence of effectiveness of manual techniques in the presence of known pathologies. For concepts in addressing specific conditions and discussion of ‘red flag’ warning signs, the reader is directed to Clinical application of neuromuscular techniques: Case study exercises (Chaitow & DeLany 2005, Churchill Livingstone).


A thorough discussion of clinical reasoning can be found in Clinical reasoning in the health professions, 3rd edn (Higgs et al 2008, Elsevier). The authors of this text gratefully acknowledge it as a primary source of some of the following information. An additional source of information derives from the chapter The role of clinical reasoning in the differential diagnosis and management of chronic pelvic pain by Diane Lee & Linda-Joy Lee (Chaitow and Lovegrove 2011).



Evidence vs experience


An important question arises, which impacts directly on clinical reasoning: should evidence change the way manual therapy is practiced?


Therapists and practitioners might be justifiably concerned as to whether techniques and approaches that have previously been found to be useful might need to be abandoned, due to lack of supporting research evidence or on the basis of studies that fail to provide concrete evidence of efficacy. When considering this, it is important to retain a perspective that recognizes that lack of proof does not equal disproof. It is also possible that ‘evidence-based practice’ – in the strictest sense – is not possible to achieve in manual therapy since so much of what is done clinically has not been specifically researched – and when assessment and treatment methods have been studied, results have frequently been equivocal (Seffinger et al 2004, Hsieh et al 2000).


Scientific evidence should inform practitioners about clinical work and influence assessment approaches and treatment choices. However, it is not possible to base practice only on manual palpation and treatment techniques that have been supported by high-quality evidence – such evidence simply does not exist. It is possible that there will never be sufficient supportive evidence for all manual techniques used, due to the cost and difficulty of the research that would be required.


Another consideration is that many studies examine single modalities (Ballantyne et al 2003, Lenehan et al 2003, Wilson et al 2003), while in a manual therapy practice, techniques are usually used in combination (for example, myofascial release and muscle energy techniques), rather than in isolation, as described later in the chapter (Noll et al 2000). As a result, the contribution to outcomes of individual modalities remains difficult to judge.


With these thoughts in mind, it is important to emphasize that it would be unwise to abandon techniques that have a long history of anecdotal evidence of efficacy, but which currently lack scientific support. Hence, clinical practice might benefit significantly from a combination of evidence-based research and clinical experience. See Box 8.1 for a brief evaluation of these issues.



Box 8.1 Evidence based practice (EBP)


There is unquestionably a need for research to validate the efficacy of manual treatment. To a large extent, this derives from demands from governments, health authorities, and insurers to prove that what is done in manual clinical practice actually works and is safe. There is also an academic, intellectual desire to understand the nature of the dysfunctions being treated and why (or whether) manual treatment can effectively modify or remove these.


Research should, therefore, enable practitioners and therapists to objectively examine and determine the most effective ways to treat their patients, feeding into clinical reasoning and evidence-based practice (EBP).


And, of course, when actual evidence emerges from research that a particular approach or modality has no therapeutic benefit, or that it lacks reliability and validity as a diagnostic or therapeutic approach, or that it poses potential dangers to the patient, practitioners have an intellectual and ethical duty to reconsider their practice of that approach.



Origins


The term ‘evidence based’ was first promoted by Eddy (1991), while the expression ‘evidence based medicine’ (EBM) was coined by Guyatt et al (1991). The natural evolution of these ideas pointed towards evidence based practice (EBP). Subsequently the methodologies used to determine ‘best evidence’ were largely established by a Canadian McMaster University research group, led by David Sackett and Gordon Guyatt (Jaeschke et al 1994, Sackett et al 2000, Guyatt et al 2004).


Archie Cochrane (1972), a Scottish epidemiologist, has been credited with increasing the acceptance of the principles behind evidence-based practice. His work led to the naming of centers of evidence-based medical research as Cochrane Centers, which together form the Cochrane Collaboration. Evidence-based medicine categorizes and ranks different types of clinical evidence, using terms such as ‘levels of evidence’ and ‘strength of evidence’ to refer to the protocols for ranking evidence that emerges from research studies, based on the quality of the study being examined and its relative freedom from bias.


The highest level of evidence for therapeutic interventions is considered to be a systematic review, or meta-analysis, that includes only randomized, double-blind, placebo-controlled trials that involve a homogeneous patient population and condition.


In the EBM/EBP model, evidence that derives from expert opinion is considered to have little value, being ranked lowest due to the placebo effect, the biases inherent in both the observation and reporting of cases, and difficulties in discerning who is actually an ‘expert’.




The place for experience


Sackett et al (2000) defined evidence-based practice as ‘the integration of best research evidence, with clinical expertise and patient values’.


They note that, ‘External clinical evidence can inform, but can never replace individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the patient at all, and if so, how it should be integrated into a clinical decision.’


Clinical expertise, therefore, comprises both such evidence as exists and skills and experience – with the practitioner/therapist knowing what to do, and how, at the right time (i.e., a combination of clinical reasoning and skill).



Recently, the term ‘evidence-informed’ has surfaced, the intent being to suggest that since there is not enough research evidence for every situation met in clinical practice, the clinician should be informed of what is known and make their clinical decisions accordingly. However, if we adopt Sackett et al’s definition of EBP, there is no need to modify the term, since clinical expertise (reasoning and skill) is seen to form a major part of the definition of best practice.


The implications of the discussion and evidence found in Box 8.1 suggest that clinical reasoning – at least in part – depends on the practitioner’s grasp of the balance between evidence, information and clinical experience.


An aspect of the process of clinical reasoning, therefore, involves empiricism – the use of knowledge deriving from evidence gathered via the senses and the interpretation of what the clinician sees, hears and feels, in relation to the patient. A major feature of the gathering of evidence in manual therapy settings, therefore, derives from palpation and observation.



Bullock-Saxton (2002) has observed that: ‘Strategies taught to enhance clinical reasoning should ensure a high level of knowledge and organization of that knowledge; the development of a capacity to accurately perform technical and manual skill … and encouragement to understand any [clinical] problem at a deeper level.’


In relation to palpation and observation evidence, Lee & Lee (2002) suggest that clinical reasoning – for example, in a setting that involves assessment of the mobility of particular joints – requires that a number of different tests be employed to reach a conclusion of hypomobility, hypermobility or instability. They insist that such a conclusion cannot be reached from one test alone.


The importance of not relying on only one test, or assessment method, when attempting to devise a therapeutic plan is highlighted by May et al (2010) in their systematic review of the reliability of physical examination procedures used in 36 separate studies of the clinical examination of patients with shoulder pain. May et al found that ‘Overall, the evidence regarding reliability was contradictory’ and that ‘There is no consistent evidence that any examination procedure used in shoulder assessments has acceptable levels of reliability.’ They add that, because tests are frequently used in conjunction with each other to support clinical decision-making, it may be that the tests are more reliable when used in this way. Clinical reasoning, in relation to any patient with a shoulder problem (and by implication, almost all musculoskeletal conditions), demands that such cautionary analysis of the validity of individual assessment tests be kept well in mind. To repeat the caution of Lee & Lee: conclusions cannot be reached from one test alone.


Discussing chronic pain, Dommerholt (2009) notes that the initial clinical task relates to obtaining information regarding the causes of any problem, as well as the nature of the patient’s local and global tissue status and stress adaptability. Clinicians are urged to strive for completeness in their observations and, to that end, a thorough examination is required, involving a detailed patient history, observation, functional evaluation, palpation and the drawing of relevant conclusions (Materson & Dommerholt 1996). It is also important, as part of the ongoing process of clinical reasoning, that new data be collected at each encounter, with a flexible attitude maintained regarding the initial clinical hypotheses, which may need to be modified to facilitate efficient and effective patient management (Jones 1994). An important, obvious, but often neglected observation is that a diagnosis – for example, of myofascial pain syndrome – does not exclude the involvement of other possible problems, such as joint dysfunction or metabolic insufficiency. The process of clinical reasoning demands – at all times – that all possible contributing factors to the pain syndrome be considered.


Dommerholt (2009) also notes that, in the context of all possible contributing factors being considered as part of the process of clinical reasoning, particular importance should be given to evaluation of the emotional and psychological aspects of individuals with chronic pain. Attempts should be made to gain insights into cultural, familial and interpersonal dynamics, coping skills and the presence of fear avoidance (Bennett 2002, Vlaeyen & Linton 2000). With these thoughts in mind it is useful to remember that the chronicity of a pain problem may be related to specific stressful conditions or situations, and these need identifying.


The authors of this text would add other considerations to the clinical reasoning reflections required. The following are examples of strategic questions that can be used to provide comprehensive data for the assessment.



The issue of what has become known as clinical prediction rules, which is explored in Box 8.2, might also be considered. While it remains unclear as to the degree of value clinical prediction rules will provide, it is obvious to the authors of this text that clinical decision-making benefits from sound development of practical skills, from clinical experience, and from evidence-based research, when it is available.



Box 8.2 Clinical prediction rules (Chaitow 2010)


A trend in manual therapy has been the development of clinical prediction rules (CPR).


These are ‘rules’ that are derived statistically – literally ‘translated’ – from research evidence, with the aim of identifying the combinations of clinical examination findings that can predict a condition or outcome (Fritz et al 2003, Fritz 2009, Cook 2008).


Falk and Fahy (2009) have summarized the key element of CPR as follows:



In manual and movement therapies, this might translate into a focus on particular problems, such as nonspecific low back pain (LBP), as well as those patients enduring this condition. This allows for a degree of categorization – so predicting which forms of treatment would be most likely to be of benefit to LBP in general, and/or which specific subgroups of patients with LBP should be targeted with particular therapeutic approaches.


If such prediction were reliable, this would make clinical reasoning a lot easier. All that would be needed would be to slot a particular patient, with a particular set of symptoms, into a category, and to treat according to the prediction rules.


The obvious question arises as to the reliability of the tests involved in producing such categorization.



Reliability?


Paatelma et al (2009) examined inter-tester reliability in classifying sub-acute low back pain patients, comparing specialist and non-specialist examiners. They observed:



Not surprisingly, the better trained the individual practitioners, the more accurate the findings.


As to the reliability of tests used for placing types of low back pain into separate groupings, the evidence is variable.


Paatelma et al (2009) summarize the current situation as follows:



A further obvious question is whether classification of different types of low back pain actually improves clinical outcomes. In a study involving over 2000 patients with ‘mechanical low back pain’, in which there was no direct reference to anatomical site or pathological process, Hall et al (2009) observed that:

