Having a foot in both camps and generally loving both groups, I sense an unhealthy straining of relationships between clinicians and scientists in rehabilitation. Where once clinicians dominated conferences and the literature, the field is now led almost entirely by scientists. Both scenarios are unhealthy and out of balance.
Let’s be one-sided for the moment and have a dig at the scientists:
It has been said that “research is like motherhood – there is no such thing as an ugly baby” (Greer 1987). Of course, researchers want to validate their choice of subject and world view, but aware clinicians see scientific arrogance emerging in some quarters: proclamations about what to do in the clinic from clinically immature researchers, research niche protection at conferences at any cost, or researchers becoming experts on all matters. Many scientists openly state they hate talking to clinicians. I see dangers in research protocols driven by data and PhD factories rather than by ideas and experimentation emerging from clinics, and I see the ghastly and paradoxical situation where some clinicians feel they have to do a PhD to be someone!
Clinicians see the methodological bastardry of their tenuous clinical ideas, even when research based (e.g. Johnson et al 2012), and they do sometimes wonder “why on earth was that researched?”. Reports of scientific misconduct, such as those in the LA Times, don’t help, and as clinicians often can’t access journal articles unless they are open access, a kind of firewall has been placed between them and science (as an aside, see The Guardian).
In medicine the decline of the clinician scientist is often lamented – the percentage of MDs gaining NIH research funding compared to PhDs has decreased sharply, and there are worries that few scientists have a deep understanding of clinical problems.
In the rehab professions, however, I believe and hope that the clinical scientist is just emerging. The features of a clinical scientist range from active and wide reading, through integration of science into clinical behaviours and guideline following, to data collection and research collaboration. But the clinician–researcher divide could limit this. The divide exists in all health professions, and one result is that research translation problems remain: much beaut research sits in the “valley of death” (Roberts, Fischhoff et al. 2012), never or belatedly to see the light of day, and the use and uptake of clinical guidelines is minimal. And how many clinicians read even one paper in their association journal?
Of course, it is not just the scientist or their system which may be at fault here, despite my one-way attack – I know that many do it tough. We’ll discuss clinicians later; we tend to have a go at them all the time anyway. The issues could be dealt with by recognising the potential of the growing clinical scientist group in rehabilitation, encouraging them and including them. It has to be a win-win situation.
Some suggestions for scientists:
- Interview at least three clinicians who are using the techniques or strategies being researched. Ideally, interview the originator of the idea if it came from the clinic. This may have an impact on methodology and outcome measurements.
- Be involved in translational research. Did your paper have an effect? What is the basic knowledge of the target audience? Advise those who construct guidelines.
- Include a clinician in the team if appropriate – they like names on papers too, can be very helpful in the discussion section, and have often worked extremely hard at reasoning through an idea or technique.
- Remember that clinicians love review papers.
- Never be frightened to talk to clinicians. Clinicians and scientists – ultimately, one won’t exist without the other.
Finally, a big thanks to all scientists for your contributions to date.
David Butler
www.noigroup.com
References
Johnson, S., et al. (2012). “Using graded motor imagery for complex regional pain syndrome in clinical practice: failure to improve pain.” Eur J Pain 16: 550-561.
Greer, A. L. (1987). “The two cultures of biomedicine: Can there be consensus.” The Journal of the American Medical Association 258: 2739-2740.
Roberts, S. F., M. A. Fischhoff, et al. (2012). “Transforming science into medicine: how clinician scientists can build bridges across research’s ‘valley of death’.” Acad Med 87: 266-270.
“Of course, researchers want to validate their choice of subject and world view, but aware clinicians see scientific arrogance emerging in some quarters, of proclamations of what to do in the clinic by clinically immature researchers.”
Thanks for an interesting blog David. I would consider myself both a researcher and an experienced clinician. I wonder whether in many cases “scientific arrogance” might be a label thrown by clinicians at scientists who produce displeasing results? The evidence-based project has often not been kind to therapies, and such results can be hard to take. In my experience of therapy research, many researchers in the field, like their clinical counterparts, strain against the more negative connotations of their findings, avoid the more damning conclusions (even where they are the most appropriate ones), and look for reasons to continue to love that therapy (for example, sub-grouping). Ted Kaptchuk once referred to this as “rescue bias” – find an excuse that the results are wrong, rescue the therapy. Most therapy researchers arose from the field, love the field and, like clinicians, want to defend the field.
With my researcher’s hat on, I am acutely aware of an alternative source of arrogance: the arrogance of “knowing” through “expertise” and observation which treatments have value. This position requires an active denial (or sometimes ignorance) of the vast range of confounders that corrupt this process – regression to the mean, natural recovery, cognitive biases, “placebo” (whatever that is). I think all clinicians, when rating the primacy of clinical experience, should reflect on the compelling story of Bill Silverman and his premature babies: http://www.jameslindlibrary.org/essays/cautionary/silverman.html
The prophecy of knowing that a treatment works in this way is self-fulfilling. We wouldn’t continue with it if we didn’t believe. On this basis every treatment is useful and forever will be, from guanethidine blocks for CRPS (not useful, likely harmful) to homeopathy.
There is much bad and ill-conceived research. There are some frankly ridiculous “innovations” arising from the clinic. I mean, who knew that a stretchy, brightly coloured sticking plaster would be so wonderful once we gave it a funky name (kinesiotape) and charged a premium for the courses?
But moaning aside, I agree that the best solution is for clinicians and researchers to be connected, to discuss, to debate. Research should be driven by clinically relevant questions after all, but clinicians should perhaps recognise the limitations of their experiential knowledge. For this to happen, though, I think that we all need to up our game in one area particularly – not physiology, not neuroscience, not anatomy, but critical thinking and constructive skepticism. I think this is often missing in the research arena as well.
Cheers
Neil O’Connell
“Critical thinking and constructive skepticism” – ahh, but there is (some of) the rub. I think it was Lorimer in a recent BiM post who characterised (good) science as a process of developing hypotheses and then going about trying to disprove them.
Arguments about p values, null hypotheses and asking the right questions aside, is it easier to maintain critical thinking and constructive skepticism in a scientific/experimental context than in the clinical setting?
I think it may have been BiMster Laura Gallagher who posted over there at one stage about the powerful effect of the clinician’s belief in what they do and the positive impact that this can have on outcome.
Anecdotal and experiential most definitely, but I’ve met many a clinician who I think provides absolute horse$&@t treatment but has clients that swear by them and keep going back. These clinicians truly, deeply believe that the “ultrasound therapy” helps to warm up soft tissues prior to mobilization so as to enable them to loosen those facet joints that much more effectively. Ditto for interferential, pulsed shortwave, magnetic pads, dryaccuneedlingpuncture and so on.
I can also vividly recall the moment I stopped reading the Australian Physio Journal and ceased membership in the APA so I didn’t receive it anymore. It was after reading yet another article suggesting that physiotherapy had no benefit over advice and home exercise post Colles’ fracture ORIF.
I had *very* different ideas about Physio back then (long time ago!!) and I recall thinking that we were a profession that was trying to self-destruct by proving our worthlessness in our own journals.
As an aside, I cannot ever recall reading an article in a chiropractic journal that concluded that chiropractic treatment isn’t effective… for anything…
Now? Well now I’m all for some deconstruction of the profession. Deconstruction of the fallacies of trigger points, core stability, biomechanical dysfunctions, pain “caused” by muscles, joints, fascia, ligaments that can be “fixed” by a therapist heating, cooling, electrocuting, stretching, releasing, strengthening, manipulating, relocating, adjusting, re-balancing, correcting, puncturing, pressing, mobilising, cavitating, heating and mobilising, strengthening and lengthening, cooling and stretching and so on and so on and so on.
And I think it is the scientists who have provided, via their rigorous scientific method (as impenetrable as it can be at times to outsiders), the wrecking ball and dynamite that have been (and will continue to be) necessary for this knock down and rebuild.
As well as the tools required to reconstruct with an emphasis on understanding, educating, explaining, learning, biopsychosocialising, reinhibiting and empowering.
And what’s a good rant without some colloquial analogies? Because you can’t make an omelette without breaking some eggs, maybe this tension is a good thing: yes, there’s cognitive dissonance to deal with (over at a popular science-based PT forum they call this “crossing the chasm”) and the pain of letting go of quaint ideas and deeply held beliefs, but there’s also the opportunity to come out of this tussle and tension stronger and better off overall.
It’s been said that when it comes to bacon and eggs, the chicken contributed, but the pig *committed*. I’m in the camp that thinks the scientists already have curly tails. I reckon it’s time for clinicians to grow a snout and commit to keeping up to date, to commit to being challenged and at the same time challenging – themselves, each other and the scientists.
Last year, when Dr Mick Thacker (the world’s nicest smart person) was in Australia talking about the neuroimmune system, a few physios openly had a bit of a moan about how complex it all was. Mick pulled no punches in his response: he simply stated that if, as a therapist, you were out there putting your hands on people, claiming to “treat” them, then you had an obligation to work towards understanding the latest research, no matter how complex. A challenge that has stuck with me since.
Thanks for the response to the noijam post Lorimer – I agree with most of it, but I am not convinced by your assertion that “precision and honesty is at the core of scientific practice”. To use a good Australian term – “you’re dreamin’ my friend”. You have just copped it yourself, in my view, and it ultimately affects us at the retail end of science, i.e. the patient battlefront.
I am an avid consumer of as much of the CRPS based literature as I can and I have just read a review paper (Bailey J, Nelson S, Lewis J, McCabe C 2012. “Imaging and clinical evidence of sensorimotor problems in CRPS: utilising novel treatment approaches” J Neuroimmune Pharmacol DOI 10.1007/s11481-012-9495-9) that provides an incredibly selective review of CRPS literature. This is a review that would leave the reader who does not know the field with a very inaccurate picture of the state of the science, and a reader who does know the field (and that includes a growing number of good clinical scientists), baffled that several of the most important studies are conspicuous by their absence. That a review on sensorimotor findings in CRPS could mention no more than one of your many landmark studies is intriguing. How can one discuss graded motor imagery without mentioning it? How can a ‘review’ cite the only published paper with a null result and miss the several with good results, better methods and better reporting? How can one ‘review’ neglect-like findings in CRPS without mentioning you, Gallace or Spence? How can one review tactile discrimination training without mentioning the only rigorous studies on that topic? How can one ‘review’ mirror therapy without mentioning any study that is not overwhelmingly positive?
You argue that precision and honesty are integral to scientific training and practice, yet here is a paper (and there are others too) that clearly does not survive your challenge to not compromise precision or honesty in the pursuit of impact. Is this an example of the “research niche protection” that I referred to in my blog? It is certainly an example of science being held back from the clinic and its clinicians and patients who ultimately suffer.
It’s ironic to me that, in this case, your work is that which is most obviously missing in action. The authors seem to bend over backwards to avoid citing you. This is surely a failure of scientific process that extends to the review process as well, and I am not convinced that all scientists hold precision and honesty at the core of what they do. Maybe we need clinical scientists to monitor the review process as well, but at the moment we’re too busy at the clinical battlefront.
David Butler
http://www.bodyinmind.org/finding-the-love-between-scientists-and-clinicians-a-response-to-dr-butler-on-noijam/#comment-142274
Wouldn’t it be interesting to have researchers think about the commercialisation aspects of their research – can they not only research something that is relevant but also work with others to package it into something that is day-to-day useful for the clinician? This takes a certain entrepreneurial thinking and business focus, but it would be exciting!
What an excellent post; thank you! All the more so coming from David Butler (for it is he…)
I am but a humble clinician, so I feel I have climbed a lofty ladder to reply to this blog.
So, is this not a question of synthesis? Clinicians on the one hand with their practice-based evidence; scientists (clinical or otherwise) with their evidence-based practice.
Personally, the most valuable thing about learning the research methods taught to me at undergraduate and postgraduate level is having a toolbox with which I can dismantle (sometimes destroy!) research articles to assess them for their quality; assess their potential application to the people who come to me for treatment; see how they may challenge my thought-habits and (after a healthy dose of ‘constructive skepticism’) change my practice. In spite of what David writes (“And how many clinicians read even one paper in their association journal?”), I do read the research literature, and I don’t think I’m an exception. I will look for the ‘best available evidence’, down to baseline physiology, to find a framework for what I am doing. Experiences with researchers, however, have left me with the impression that sometimes their attitudes (and perhaps even their thinking) are inflexible. As a clinician I feel the weight of duty in the need to integrate scientific evidence with my propositional knowledge. Once again, thank you David for suggesting that clinical researchers ought to be doing the same by integrating what clinicians are doing or thinking about.
Of course, as Lorimer Moseley points out in his reply, there is an honesty and rigour in scientific method. But it is about continuing to ask the right questions, not simply saying, “we have answered / disproved [insert ‘unscientific’ treatment technique here], no further questions need to be asked”. I wonder why more researchers are not interested in why patients keep coming back for those treatments that are deemed to be ‘unsound’ based on the current research evidence. Is healthcare in general, first and foremost, just about human interaction? The rest is probably finery. It is interesting (to me at least) that the Hierarchy of Evidence has become rather like the Tablets of Stone. While I understand this hierarchy, I am in agreement with Anne Bruton’s blog entry: there are some parts of physiotherapy clinical practice for which an RCT will never be suitable. Even more interesting, I find that if you flip the pyramid and stand it on its apex of SRs and meta-analyses of RCTs, you find something that more accurately reflects the attitudes and beliefs of the patient who is sitting in front of you in the clinic room. What some bloke or their Aunty Ethel swears by for treatment will hold more sway for them than high-falutin’ RCT evidence.
Still, we all have our furrows to plough. What would be lovely would be a little more cross-fertilisation of the fruits of our respective labours.
All the best
Nice to hear from David; as always, his comments are thoughtful and thought-provoking. I have only one line to take to task, and it is his final one: that scientists and clinicians need each other to survive. As I said in my conclusion at the Great Debate at the APA Conference in Brisbane a couple of years ago, imagine a world without physio researchers. What impact would that have on the people seeking physiotherapists for treatment? Now imagine a world without physio clinicians – what impact would that have on physio researchers? With no one to implement research findings, what would be the point?
Good, clear-thinking, science-based clinicians will always be in demand, irrespective of the swings in the evidence base.
Just a brief comment for now – thanks everyone for the balanced and thoughtful replies so far to the first noijam post, both here and on Body in Mind. Good reading.
I am taking 8 researchers out for drinks next week.
David
The posts here, and over at BiM, remind me of another aspect of the straining clinician–scientist relationship. I work in private practice in New Zealand, and on a daily basis I experience ‘conflict’ between the part of me that wants to be evidence based and the part of me that works in the real world.
Let me try to explain:
I endeavour to work from within a biopsychosocial framework, using a modern conceptual model of pain. However, the prevailing culture of our clinic, and I would guess of the physiotherapy profession in NZ more generally, is to use the biomedical/biomechanical model which we were taught as undergraduates. I am in a minority, conceptualising pain as an output of the brain rather than an input from the tissues.
People come to our clinic often with an expectation of what we do as physiotherapists and what they ‘want’ in terms of treatment. This is especially true if they have seen a physiotherapist in the past. I don’t have too much of a problem with working within these expectations, knowing that doing so probably increases the odds of “success” of the treatment.
Until (here’s where the strain comes in) the person asks for something that isn’t evidence based.
Take, for example, acupuncture: 5 years ago I trotted off excitedly to learn acupuncture, thinking it would transform my practice. Now, a few short years later, after critically examining the evidence base for acupuncture, I barely use it – unless someone specifically asks for it. And then I will, but with the awareness that I’m probably just treating them with a placebo.
However, giving the patient what they want, when it’s at odds with the evidence, sits a bit uncomfortably with me, and (here’s the rub!) also probably reduces the effectiveness of the treatment!
Lately I’ve been wondering:
• Is it okay to deliver a treatment that isn’t supported by the evidence, largely because it offers a ‘meaning response’/works via placebo, when a patient requests it?
• And if the client asks for some background on the efficacy of the treatment, or an explanation of how it works, how do I answer them without “lying” to them? Do I disclose that I’m really only giving them this treatment because they believe it will work, thereby probably negating the mechanism by which it might? Or do I keep my fingers crossed and just hope they don’t ask?
Sometimes I think my professional life would be easier if I retained the naivety I had as a newly graduated therapist, truly believed that sticking needles in someone, or mobilising their right L4 facet joint, would fix them, and ‘sold’ my treatments with conviction!
Any words of wisdom on how to ease the straining of this relationship gratefully received!
Louise asks, “is it okay to be delivering a treatment, that isn’t supported by the evidence, largely because it offers a ‘meaning response’/works via placebo, when a patient requests it?”
Well, considering that the majority of what is currently offered has not been through scientific scrutiny, I guess we have been doing it anyway.
Here is the extreme example – an elderly man comes to your clinic with back pain. He says that his wife was there with back pain last year and ultrasound really helped her. He comments that she passed away recently, and he is wondering about a bit of ultrasound for his back pain. You have just read the guidelines suggesting ultrasound won’t help. However, I daresay there would be few clinicians out there who wouldn’t dust off the old ultrasound machine, warm it up and almost lovingly rub it on. Some may even probe a bit deeper and offer some knowledge therapy at the right time.
Our scientist friends are a bit slow helping us out here! A Pat Wall quote that has been with me for years goes something like this: “in the end, if much of what you do is shown to be placebo, don’t panic – just try and work out what it was in this thing called placebo that helped them, and enhance that”.
I think good clinicians can integrate a meaning response into biopsychosocially driven, evidence-based practice.
Cheers all
David
Some more (frivolous) food for thought:
http://pps.sagepub.com/content/7/6/643.full
(Well, reduced my serum cortisol for a few minutes at least…)
I arrive late to the party! I’ve recently started an MSc in manual therapy combined with MACP. Part of our journey includes blogging about clinical issues – my topic for development being “the hands-on : hands-off debate”. Thank you for the posts above, which have added fuel to my discussion. From a postgrad student’s point of view, the challenge of finding more and more questions rather than answers is all part of the process – but I hope that the answers are out there?!
Please feel free to contribute postgradphysio.wordpress.com