Reduce / Reuse / Recycle / Review / Respect / Report / Robust
These are the themes that came through to me during and since the EAHIL 2019 workshop in beautiful Basel.
It’s always great to meet colleagues at a conference, and to take time out to think about particular topics and challenges with them. I particularly love EAHIL for giving that space. Check out all the tweets at #eahil19.
Inspired by the Rhine, brace yourself – this is a long one…
Reduce / Reuse / Recycle
The amount of waste in research is a classic quote now. Where we, as librarians, contribute to that waste is in how much time we spend creating search strategies from scratch, rather than reusing or adapting existing ones.
Now there’s something of a circular argument here: research that is reported badly probably won’t have the search strategy included, which means it’s not available to reuse. Fair point, and one I’ll come back to.
I’m sure most of us do try to find Cochrane and other reviews which overlap with our topic, and we make use of portions of the strategies reported there. And there are other ways to solve this problem: Jane Falconer from LSHTM uses her institutional repository, loading the search strategies into it. This has the double benefit of not having to rely on appendices in online articles which might be behind a paywall, and it also gives the strategies DOIs of their own. She’s even been so fabulous as to create a guide on how to do this! What a star!
There are also other banks of search strategies, created by Dutch colleagues, and an EAHIL site too. This might raise the question of credit – at what point is a search strategy intellectual property which should be cited in the paper, and how many tweaks make it your own? I reckon that a search filter used without tweaks should be cited.
On this note, the quality of search filters was discussed at a session facilitated by Alison Bethel and Morwenna Rogers, in particular how difficult it is to develop a filter for qualitative studies. What struck us most was how different the quality of indexing for qualitative studies is between Medline and CINAHL – enough to influence @wichor's future practice.
I also learned (via Lina Gulhane’s session and more recently tweets from @srobalino) about OVID Search Launcher – so long as you remove the line numbers from your strategy, you can upload a massive long search string into the OVID interface without lots of copy/paste! Wonderful (though it’s a bit of a drag having to remove the numbers…). @srobalino’s twitter thread also has a suggestion about using OVID jumpstart – suggested by @v_woolf – which I’ll need to play with. I think that @v_woolf's blog will be required reading from now on, for expert hints and tips like these.
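Stripping those line numbers by hand from a long strategy is tedious. Here’s a minimal, purely illustrative Python sketch of automating it – the function name and the numbering patterns it handles are my own assumptions, so adjust the regex to match however your strategies are actually numbered:

```python
import re

def strip_line_numbers(strategy: str) -> str:
    """Remove leading line numbers (e.g. '1. ', '12 ', '#3') from each
    line of a search strategy, so the whole thing can be pasted as one
    long string."""
    cleaned = []
    for line in strategy.splitlines():
        # Drop a leading number, with optional '#' prefix and '.' or ')' suffix
        cleaned.append(re.sub(r"^\s*#?\d+[.)]?\s*", "", line))
    return "\n".join(cleaned)

strategy = "1. exp Neoplasms/\n2. cancer*.ti,ab."
print(strip_line_numbers(strategy))
```

Note that lines which combine earlier sets by number (e.g. “1 or 2”) would still need rewriting by hand, since the numbers they refer to are gone.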
Added to this list of new toys is SR-Accelerator, with its Polyglot section, which can help translate search strategies between databases.
The issue of librarians being involved in peer review was raised at EAHIL in Seville in 2016, and was the subject of much debate in Basel. Ideally, poor-quality work should be improved much earlier in the research life cycle than at the point of publication, but if librarians were more involved at the peer review stage, it might help stop some flawed papers from making it into print. Watch the combined EAHIL/MLA/CHLA/UHMLG space for some action on this front.
I know there are some librarians who are already regularly involved in peer review – Wichor Bramer and Dean Giustini (see Dean’s thread about open peer review) are just two examples, and I’ve tried it once or twice myself. Personally, I had to take a very deep breath before I did it, but I focused on the bits I knew and ignored the rest.
But wouldn’t it be so much better if the quality of the search strategy, and search methodology was improved before screening took place? This is where PRESS comes in, and I enjoyed another very engaging session with Alison Bethel and Morwenna Rogers, where we worked through a PRESS checklist. Another set of eyes to check over a strategy is always valuable, spotting gaps and typos, questioning the decisions we’ve made about databases etc.
My colleagues in the Medical Library and I are going to try to be more diligent about doing this in future (we’ve tried it once or twice, more as a training exercise than anything else – note to self: could this be used in teaching with students?). And for those without willing colleagues close by, there’s the PRESS-Forum, which you can register for to ask the community to review your searches.
But what I particularly liked was the suggestion that the fact that a strategy has been PRESS’d could go in the methods section, as evidence of rigour. Since word count is always a problem, perhaps a short narrative could go into an appendix to explain some of the decision-making around the strategy. This could help with replication too. Nice!
Sandy Campbell was very clear: “We don’t work for acknowledgements”. The issue of respect for the contribution that librarians make to systematic reviews, and their right to be co-authors, was discussed in several sessions. I really liked the form that Sandy and her team use at the University of Alberta to set out, right at the start, the expectations of workload.
Librarian as co-author?
They also list explicitly on their libguide “who is an author” – just to make it absolutely clear.
If you don’t ask, you don’t get, and I will be reviewing local practice (i.e. nicking that form!).
I also had a great conversation with @sandyiverson about her team’s recording of “billable hours”. Even if you don’t charge for a service, it can be a powerful tool to show how long a piece of work actually took (and possibly to add a £ figure which will not be charged). We do a lot “for free”, but there is a cost.
This also raises the issue of how you set up a systematic review service – what are the ground rules? What’s a free service, and what’s not? I wasn’t able to attend Hannah Ewald’s session, but it’s definitely something I need to explore further.
The fabulous team at KSR, Caro and Shelley, gave a great session on how librarians can improve the conduct and reporting of systematic reviews. PRISMA and PRISMA-P, and, still in draft, PRISMA-S were all part of the conversation. If methods aren’t reported well, then that raises doubts about the validity of the findings.
We should definitely use our place as co-authors to comment on the whole paper that our name is attached to before it is published. We could even acknowledge the limitations (doesn’t every paper have some?) – if not in the methods, then perhaps in the discussion section, or in the narrative appendix that I mentioned earlier.
In the session we used ROBIS, the risk-of-bias tool for systematic reviews, to evaluate papers. As co-authors, we could advocate for this tool (and use it ourselves) before submitting for publication, to help us see the research as others might see it, and maybe spot gaps or flaws before it’s sent for peer review.
All these steps make research more robust, and therefore more worthwhile.
Better research = less waste. What’s not to like?