At first I thought that I’d potentially been a bit of a dullard, since both sessions sent me reading to do as homework ahead of the session – had I inadvertently signed up for two critical appraisal sessions? Well, there was a small degree of overlap, but I’m pleased to say that I got a lot of different things from both.
Running a journal club is something that many librarians do from a supportive point of view – setting up current awareness alerts for clinical teams so that selecting articles to read is easier, or facilitating critical appraisal workshops to increase confidence in reading the research literature. But this session was more about running journal clubs for our own professional benefit – we need to read the evidence to be able to apply it to our practice, and reading it critically (i.e. constructively, rather than just ripping it apart) can often be a more interesting process in a group.
One of the issues raised, and one I’ve noticed from my own experience, is that maintaining momentum is difficult – enthusiasm wanes fairly quickly, and there are always conflicting demands on time. Running journal clubs online can allow for a more extended conversation that fits in round other demands, but there were a few comments that the 140-character limit meant that Twitter journal clubs didn’t really allow for in-depth or complex thoughts.
Sheila has blogged about this session too. She has liveblogged the opening plenary from Dr Hazel Hall, which was inspiring stuff about how to incorporate research into our practice. And Hazel has put up her slides too.
The critical appraisal session with Wendy and Douglas was a modified version of the sessions that CASP run, and the overwhelming sense in the room was one of dread with regard to the S word – statistics!
Douglas did a sterling job of taking us through odds ratios, relative risk, relative and absolute risk reduction, confidence intervals and the rest. As much as a session on critical appraisal, I think what I need is to work through a set of exercises with some papers and try to calculate these figures for myself. As well as being an exercise in serious self-abuse, it would help make it all more real, more practical. But that’s something I can try to do myself later.
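For anyone else who fancies that same self-abuse, a short sketch of what those calculations actually involve may help. This is my own illustration, not anything from Douglas’s session, and the 15/100 vs 30/100 figures are invented for the example (a, b, c, d are the four cells of the usual 2×2 table):

```python
from math import sqrt, log, exp

def risk_stats(a, b, c, d):
    """Appraisal statistics from a 2x2 table.

    a = events in treatment group,  b = non-events in treatment group
    c = events in control group,    d = non-events in control group
    """
    eer = a / (a + b)              # experimental event rate
    cer = c / (c + d)              # control event rate
    rr = eer / cer                 # relative risk
    odds_ratio = (a * d) / (b * c)
    arr = cer - eer                # absolute risk reduction
    rrr = arr / cer                # relative risk reduction
    nnt = 1 / arr                  # number needed to treat
    # 95% confidence interval for RR, via the log method
    se = sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    ci = (exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se))
    return {"RR": rr, "OR": odds_ratio, "ARR": arr, "RRR": rrr,
            "NNT": nnt, "RR 95% CI": ci}

# Invented trial: 15/100 events on treatment vs 30/100 on control
stats = risk_stats(15, 85, 30, 70)
print(stats["RR"], stats["RR 95% CI"])  # relative risk and its CI
```

With those made-up numbers the relative risk comes out at 0.5 (the treatment halves the risk), and the confidence interval shows how precise that estimate is – exactly the sort of figure the checklist asks you to interpret.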
What the session did very well was to give practice in actually reading a paper and working through the checklist to answer two key questions – did the intervention work, and would it be worth paying for? This led on to an interesting discussion about the value of carrying out a systematic review that establishes that the evidence is poor, or that the evidence shows a poor effect. Douglas and Wendy provided great examples where trials of a drug established its effectiveness 20 years before it was prescribed as standard practice. That is the point of systematic reviews: to show where work can stop (for the time being) because we do actually know the answer, and also (just as importantly) to show where we really don’t know the answer, or not to a sufficiently convincing degree, and more work is required. We can’t assume we know the answer just because it seems self-evident.
This underlines again how important the search in a systematic review is – the amount of bias that can be inadvertently built into a search strategy can be huge, simply because people put every element of their research question into the strategy, e.g. “does broccoli prevent cancer”, to borrow Wichor’s example. As soon as the “prevention” element is included your search is biased, unless you’ve taken care to balance it with “cause” synonyms.
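As a toy illustration (my own invented strategies, not Wichor’s actual slides), the difference looks something like this:

```
Biased – only retrieves studies framed around prevention:
    broccoli AND cancer AND (prevent* OR protect*)

Balanced – admits the opposite framing too:
    broccoli AND cancer AND (prevent* OR protect* OR cause* OR risk*)
```

The first strategy quietly excludes any study that asked whether broccoli might increase cancer risk – which is exactly the evidence you’d need to see before concluding it prevents it.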