With the integrity of scientific research under more scrutiny than ever, there is growing demand for publishers to be accountable for the quality of the science they share. In the era of digital platforms and real-time publishing, the case for post-publication peer review is stronger than ever.
In the early days of modern science, the size of the academic community was manageable enough that local meetings were sufficient for researchers to pitch and share their findings with one another for feedback.
This review of work by one's peers provided an effective, network-based accountability system that ensured accuracy and propelled the research forward.
Now with the rise of the online open source movement, an informal shared feedback system is again possible. Referred to as ‘post-publication peer review’ (PPPR), this relatively new, additional stage in the process permits the scientific community to buffer itself against flawed, damaging or dishonest research.
The trouble is that the traditional journal-based peer review system is demonstrably flawed, failing to uphold the integrity of the science at several steps in the publishing process.
These problems include conflicts of interest between authors and reviewers; increasingly complicated, and often ineffective, bureaucratic measures taken by publishers to try to avoid those conflicts; and the failure of new plagiarism-detection and publishing tools to reliably catch duplicate submissions. Journals have also been reluctant to print errata because of the financial and reputational costs involved.
And while it's true that PPPR is bringing more retractions and errors to light, this only highlights the need for an additional layer of review to prevent bogus studies and false data from circulating and harming science.
For example, one recent scandal that proved particularly damaging to the public view of science revealed that scientists themselves had been fudging results and misrepresenting their methodology, and that peer review had repeatedly failed to pick up on this.
PPPR could help flag such papers shortly after publication so that they can be contained and retracted.
Early forms of PPPR kicked off with the rise of informal blogs like Retraction Watch, where other scientists would critique a paper's methods and results. However, because bloggers often didn't link these posts to the research papers themselves, the studies' authors weren't always aware of the community's feedback, which limited its impact.
Similarly, as these niche blogs grew in popularity, so did critique via user comments, stimulating discussion within the scientific community. Even social media posts are now valuable tools for PPPR, although they are often missed by more official channels.
This is why, as part of the open source movement, researchers are encouraged to share such critiques publicly, solidifying their value as PPPR.
It’s not as simple as it seems, however. There are certain dilemmas to overcome for PPPR to prevail.
One issue surfaced when publishers like BioMed Central and PLOS caught onto this trend and enabled user comments: in practice, many scientists were reluctant to directly criticize the work of others in the field under their own names.
We value privacy and anonymity, particularly when we are the “bringer of bad news”. This is understandable, especially in scientific fields that are small, tight-knit communities of professionals.
But with anonymity come legal ramifications: abusive or defamatory comments complicate things for the platform or blog hosting them. For example, PubPeer, a PPPR platform that allows users to comment anonymously, faced a defamation lawsuit after a scientist claimed that a comment on the platform had cost him a tenured $350,000-a-year research position.
For these reasons, to avoid libelous or abusive behavior, particularly on more controversial scientific topics, many digital scientific publications have since opted to disable user-based commenting altogether.
Banning user comments just creates further barriers to the discussion of science, and the quality of shared research is the first casualty. As a compromise, limiting who can comment restricts the discussion to those with expertise in the relevant fields. For this reason, other PPPR platforms that have cropped up in recent years require registration or additional layers of identification before users are permitted to comment.
For example, the academic network ResearchGate has launched Open Review, which prompts contributors and authors to offer feedback, while PubMed Commons invites authors of PubMed papers to comment on other papers in the database.
But if commenting is exclusive and no longer anonymous, how do we motivate researchers, often overwhelmed with long hours and other duties, to commit the time it takes to write up their views on the appropriate PPPR channels?
Well, PPPR doesn't just allow the policing of bad science. Crowdsourcing the verification of research also lets constructive feedback reach authors, with specialists around the world directly sharing their experience and advising them on how to avoid and correct errors in their analyses.
For many scientists, access to high-quality, niche expertise that can propel their current research and future studies forward is a strong form of motivation.
With multiple publishing platforms developing their own solutions to each of these transparency barriers, competition between PPPR platforms has resulted in a lack of standardization in their approaches. So how do we prevent valuable PPPR submissions from being lost, once again, in the noise?
What is needed, then, is a cross-platform tool that gathers all the comments and opinion pieces attached to each paper, bringing some standardization to the process so that PPPR can become the new norm.
This is what networks like ScienceOpen are trying to address: by cross-referencing each PPPR submission and assigning it a DOI, they curate the feedback and make it more visible.
But as these networks refine and connect their feedback platforms, another key requirement for PPPR to work is wider recognition by professional networks of its legitimacy as a tool for improving science. The more awareness there is of its different forms and how to contribute to the system, the more its accuracy and value to researchers will grow.
So if more funding bodies and research institutions accept and encourage its use, PPPR can eventually become a new gold standard in scientific publishing.