For the first time in 10 years I attended the Health Libraries Group conference. I contributed to the programme in 3 ways:
- co-presenting with NHS colleagues Helen Else and Deborah Lepley about Can Do Cafes, an initiative we set up with Leanne Kendrick and Laura Wilkes (aspects of which I’ve shared before)
- poster presentation of collaborative work with NHS colleague Mary Smith, and clinicians Richard Williams and Matt Smith, resulting in publication of multiple guidelines on epistaxis
- the honour of delivering the Bishop LeFanu lecture, on the voluntary work I do with Evidence Aid (aspects of which I’ve shared before)
An important aspect of any conference is the opportunity to meet up with colleagues, and I had the pleasure of chairing 2 sessions too, which is always fun.
There were 2 standout presentations, one of which had been inspired by a presentation at EAHIL 2016, which I was lucky enough to also attend. Alicia F. Gómez-Sánchez, Mar González-Cantalejo, Gaétan Kerdelhué, Pablo Iriarte and Rebeca Isabel-Gómez presented their work assessing the quality of reporting of a certain subset of systematic reviews. It’s great reading, if you like reading about just how badly reported most systematic reviews are.
Jane Falconer referenced this paper when she reported on her survey of the systematic reviews produced by her own institution, the London School of Hygiene and Tropical Medicine.
— Isla Kuhn (@ilk21) June 14, 2018
Key points from her talk:
- great respect to LSHTM for their willingness to share Jane’s finding that a large proportion of their publications failed aspects of assessment against PRESS, PRISMA and AMSTAR criteria
- the frustration that all the teaching and guidance Jane had provided over the years still hadn’t resulted in better outcomes in this audit (though of course we don’t know how much more poorly papers would be reported without her influence)
- that carrying out this audit and presenting the results to the leadership at LSHTM raised the profile of the library
If the statistic that 85% of research is wasted still holds, then poorly reported SRs must contribute massively to that waste, and peer reviewers must take a share of the responsibility: how do these papers get accepted?
I was already looking forward to hearing Kate Miso’s paper, but after Jane’s I was really fired up!
Echoing aspects of that blog by Iain Chalmers and Paul Glasziou, Kate quoted Doug Altman:
Again, there was a sorry tale of schoolboy errors in the reporting, and a catalogue of other increasingly tooth-grinding mistakes, culminating in significant errors that would affect the interpretation of the results. Any poor method will bias the results, and any unrecognised bias will cause problems if the results are applied in practice.
One of the take-homes for me was that if a review doesn’t actually have “systematic review” in the title, then start looking for more flaws.
Kate was kind enough to share her slides with me after the event, but clearly they’re not for me to share here.
As with Jane’s presentation, I have to wonder about auditing local SRs in a bid to check that my own house is in order. I’m also minded to start a process of local PRESS activity on the reviews my librarian colleagues and I produce, to ensure that the search strategies we put together in collaboration with researchers are the best they can be. Watch this space for how both these projects progress.
Librarians can make a difference to improving the reporting of SRs, but perhaps we also need to do some work to give our colleagues a wake-up call that they need some help.