Does the science publishing system need to be saved? A radical idea from our guest blogger Dr. Ram Blur
Scientists live in fear under the mantra ‘publish or perish’. This simple phrase has far-reaching ramifications that many believe are harming both the reputation and integrity of science. The culture most scientists find themselves in is one where their ‘value’ is measured by both the volume and ‘impact’ of their published work. Herein lie two problems. The first begins when researchers ask themselves, ‘how many papers can I get out of this?’ Most researchers, especially those in ‘un-sexy’ fields, are unlikely ever to land ‘high-impact’ papers and so go for volume. At its worst, this means work is split up and spread thin. I’m sure many researchers are also familiar with trying to whip and fluff an otherwise stray dataset into a small paper.

The second of our problems relates to ‘impact’, with high-impact papers used as an inappropriate proxy for good science. The allure of high-impact journals such as Nature and Science can prove all too tempting a carrot: it has led to high-profile fraud scandals, but its effects can be more subtle. One major problem is the urge to overstate claims in a bid to make findings more attention-grabbing and sensational. While the liberal use of bold language in abstracts and discussions may not bring about the downfall of science, it is symptomatic of an unhealthy culture in which results are analysed with one eye on positive findings and the other firmly shut to anything that might mess up the ground-breaking paper. Instead of considered caution we have a culture of bold claims and superlatives.
These issues, among others not discussed, have negative knock-on effects, most notably on robustness and integrity – two words I feel make for a far more suitable mantra. Science is suffering from a well-documented reproducibility crisis, as highlighted by one survey which found that 70% of researchers have failed to replicate others’ results, while half have also failed to reproduce their own. In less anecdotal fashion, scientists from a biotech company tried to reproduce results from 53 ‘landmark’ papers in oncology, succeeding with only six (Begley and Ellis, 2012). There are a number of other such examples. While there are many reasons for low reproducibility, these findings are deeply worrying because they erode trust in the scientific process. They also have very tangible effects, for example by contributing to rising cancer drug trial costs as a high proportion of pre-clinical results turn out to be no more than flashes of fool’s gold. The current system of publication, and of evaluating a scientist’s worth, no doubt contributes to this worrying problem: papers are rushed out meeting only minimum requirements (insufficient replicates and inappropriate statistics), built on cherry-picked data, and topped off with a not-so-healthy dollop of bluster and exaggeration.
There has been no shortage of suggestions to remedy the current state of affairs, but action has been lacking because the current system works just well enough and discoveries are still being made (and at least some are real!). But just well enough is not good enough: science risks eroding its cornerstones of robustness and integrity. I’d also like to introduce another aspect of science which has long been in short supply: transparency. The current publishing system goes some way towards promoting secrecy and rivalry which, while not always a bad thing, can be to the detriment of science. Collaborations have long been an integral part of science, bringing together complementary teams to maximise productivity, spread ideas and even unite otherwise political enemies. To remedy the issues outlined here, many possible changes both big and small have been suggested, but here I’d like to outline the skeleton of an admittedly radical alternative: a three-tier system which could foster an all-round healthier, more connected scientific community.
The first tier involves the publication of progress reports every few months, much as most (good!) scientists maintain a lab book: essentially methods and data (with judicious censorship where appropriate). The second tier involves the publication of proto-papers every year or so, when certain landmarks in a project are reached. These would present results in an easily digestible form, with clear figures framed by an introduction and discussion. Such proto-papers would not be traditionally peer reviewed but published in the way pre-prints are now, in repositories such as arXiv, bioRxiv, ChemRxiv and engrXiv. The final tier would be a more traditional peer-reviewed publication. All three tiers would be linked, using the magic of the internet, under a project name, with all associated members linked to their contributions. The three tiers would also allow interested readers to drill down into increasingly detailed levels depending on their interest.

First, such a system would greatly increase transparency, reducing fraud, cherry-picking of data and improper statistical analysis. The first tier could also greatly reduce redundancy, as lab groups with overlapping interests might be encouraged either to collaborate more often or to steer clear of too much overlap. The second tier of proto-papers allows the rapid dissemination of findings, avoiding the frustrations of the peer review system in this regard. The proto-papers could also (again through the magic of the internet) be live documents, with versions updated as new data roll in and as suggestions and comments arrive from peers. Such peer review-lite could be a powerful tool for increasing the robustness and integrity of research, as interested readers might very well look into the first-tier data and methods and spot inconsistencies or suggest improvements.
The lack of any significant gap between data analysis and publication in proto-papers also means that people worried about being scooped need not be: the timestamps of the work are there for all to see, and any ‘cheats’ can be quickly called out. All this should mean that the final published peer-reviewed papers are both fewer in number and higher in quality.
A very important and intended side effect of all this is to alter the way we assess scientists. This system could be used to more accurately assess the contributions of individual researchers through their work output and their ability to develop ideas and projects, all easily accessed at the click of a button. Potential employers would then have a tool to measure far more than just the number of papers a candidate has either sweated, or ‘networked’, their way onto. Of course this proposed system is far from perfect, but science is in need of a shake-up, or it risks becoming irrelevant through a decline in what it can achieve and, importantly, a loss of the general public’s trust. A dynamic and open system would also be greatly conducive to collaboration, allowing a greater flow of ideas and knowledge – a science environment I know I want to be part of.