FORCE2015 £1k Challenge Winner

THE GROUP HAS COMPLETED ITS WORK AND IS NO LONGER ACTIVE.

FOR QUERIES, PLEASE WRITE TO INFO@FORCE11.ORG.

 

Crowdreviewing: The Sharing Economy at its Finest

Submitted by:  Werner Liebregts


 

The current review process for scientific articles is outdated. Only a few reviewers assess the quality of papers before publication. Many more experts read them after publication and have strong opinions about their contents, but are hardly able to share those opinions with a wide audience. Academics would greatly benefit from crowdreviewing, a post-publication peer review (PPPR) process in which anyone who has read a scientific article is asked to review it according to a standardized set of questions. Today’s consumers decide for themselves whether a good meets their quality standards, and they share that judgment quickly and easily with the rest of the world; others can then let their demand for the reviewed good depend on the reviews. In the same way, interested readers could see an article’s quality (and the reasons behind it) at a glance, and the visibility of the reviews, including comments and remarks, should even push up the quality of future research. The sharing economy at its finest, applied to academia.

 

Project Goal

In short: to successfully run a pilot project on crowdreviewing scientific articles.

ScienceOpen has offered me the opportunity to run a pilot project on their existing platform for open access (OA) publishing, for which I am very grateful.

A major goal of ScienceOpen is to foster the OA movement by raising awareness of high-quality OA and by involving the entire scientific community in a transparent and open evaluation process. The ScienceOpen platform currently comprises nearly 1.5 million OA articles and preprints, aggregated from various sources.

The associated ScienceOpen network enables authors, readers and reviewers to connect and stay informed about recently published OA articles in their field(s) of interest. ScienceOpen explicitly aims to provide services to the whole scientific community. That is:

  • To authors, by enabling them to publish their results immediately, and to receive transparent feedback;
  • To readers, by offering them a free-to-use database of OA articles;
  • To reviewers, by giving them the opportunity to receive recognition and credit for their valuable voluntary work.

More specifically, I am going to set up a collection of scientific OA articles in economics. A collection should start with a minimum of ten articles. Applying a complete yet accessible review process to it should result in a high number of people reviewing each article in a consistent way. If successful, a new metric for the quality of an individual scientific article might be derived.
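To make the idea of such a metric concrete, here is a minimal sketch (in Python) of how answers to a standardized set of review questions might be aggregated into a single per-article quality score. The criteria, the 1–5 rating scale and all function names are hypothetical illustrations, not part of the ScienceOpen platform or of any agreed-upon review process.

```python
from statistics import mean

# Hypothetical standardized review form: every reviewer answers the
# same rating questions on a 1-5 scale (criteria are illustrative only).
CRITERIA = ["originality", "methodology", "clarity", "relevance"]

def article_score(reviews):
    """Aggregate crowd reviews into one per-article quality score.

    `reviews` is a list of dicts mapping each criterion to a 1-5 rating.
    Returns the mean over all criteria and reviewers, plus the number of
    reviews, so readers can judge how much weight to give the score.
    """
    if not reviews:
        return None, 0
    per_review = [mean(review[c] for c in CRITERIA) for review in reviews]
    return round(mean(per_review), 2), len(per_review)

# Example: three crowd reviews of the same article.
reviews = [
    {"originality": 4, "methodology": 3, "clarity": 5, "relevance": 4},
    {"originality": 3, "methodology": 4, "clarity": 4, "relevance": 4},
    {"originality": 5, "methodology": 3, "clarity": 4, "relevance": 5},
]
score, n = article_score(reviews)
print(f"Article score: {score} (based on {n} reviews)")
```

Reporting the number of reviews alongside the score keeps such an indicator honest: a 4.0 based on three reviews means something quite different from a 4.0 based on three hundred.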

 

Project Outcomes

The outcome of the project is threefold:

  1. Agreement on what constitutes a good way to review scientific articles;
  2. A standardized scientific review process suitable for crowdreviewing;
  3. A first proof of concept.

The first project outcome will be obtained by facilitating and moderating online discussions concerning the scientific review process. The conclusions drawn from the discussions should then be incorporated into the new standard way of reviewing scientific articles. Applying this process to a collection at ScienceOpen will (hopefully) lead to the third project outcome. The success of this project also depends on your cooperation and input!

 

Project Timeline

Below, you can find a rough timeline of the project, starting with idea generation in November 2014 and ending with a presentation at the next FORCE11 conference.

  • November 2014: Idea generation
  • November 2014 onwards: Idea promotion (incl. personal and Skype meetings with partners)
  • February 2015: The £1k Challenge
  • February 2015 onwards: Contact with ScienceOpen
  • March 2015: Editor status at ScienceOpen
  • May 2015: Collection set up at ScienceOpen
  • May 2015 – August 2015: Online discussions about the scientific review process
  • September 2015 onwards: Idea implementation
  • April 2016: Presentation of the project outcomes (or intermediate results) at the FORCE2016 meeting in Portland

The process of idea implementation comprises the application of an agreed-upon review process to the collection of articles set up at ScienceOpen, inviting readers to review at least one of those articles, monitoring the progress (and making adjustments if necessary), and evaluating the entire process. Short updates about the progress will be published on a regular basis.

 

Discussions

You are cordially invited to actively participate in online discussions about how scientific articles should best be reviewed. Several discussion forums will be opened one by one over the coming months. Any comments and/or suggestions are highly appreciated. Examples of questions that will be addressed are:

  • What hinders and what encourages people to do a review?
  • Can we come up with a standardized set of questions (or assessment criteria) that is applicable to any type of scientific article from any field of research, or should we have multiple ways of reviewing?
  • Is it possible to create a one-dimensional indicator of an article’s quality based on reviews?
  • What is the minimum number of reviewers needed before we can speak of a ‘crowd’? In other words, when can the results from crowdreviewing be considered reliable? (A small numerical illustration follows this list.)
  • Should the reviewing process be fully open? Why (not)?
  • Should reviewing be rewarded? If so, how?
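As a purely illustrative aid to the crowd-size question above, the following sketch (again in Python, with invented numbers) shows how the standard error of an averaged rating shrinks as the number of independent reviewers grows. It is a statistical toy example under the assumption of independent, honest 1–5 ratings, not a proposal from the project itself.

```python
import math

def mean_and_stderr(ratings):
    """Mean rating and its standard error for one article.

    A smaller standard error means the averaged score is a more reliable
    summary of the crowd's opinion (assuming independent, honest ratings).
    """
    n = len(ratings)
    m = sum(ratings) / n
    if n < 2:
        return m, float("inf")
    variance = sum((r - m) ** 2 for r in ratings) / (n - 1)
    return m, math.sqrt(variance / n)

# Toy example: the same average rating becomes more trustworthy as the
# crowd grows, because the standard error shrinks roughly as 1/sqrt(n).
small_crowd = [4, 3, 5, 4, 4]
large_crowd = small_crowd * 10  # 50 ratings with the same distribution
for label, ratings in [("5 reviewers", small_crowd), ("50 reviewers", large_crowd)]:
    m, se = mean_and_stderr(ratings)
    print(f"{label}: mean = {m:.2f}, standard error = {se:.2f}")
```

Roughly speaking, the uncertainty of the averaged rating falls as one over the square root of the number of reviewers, which is one way to begin answering the reliability question posed above.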

Comments

Mr. Liebregts, one of the winners of the £1k Challenge, proposes PPPR as a model to 'fix' the peer reviewing process. There are several flaws with this model and with the way it is proposed.

We have all felt the sting of disappointment when our (oh so excellent) submission was rejected by incompetent or just shoddy reviewers. And sometimes we’ve also felt the compensatory (although somehow never quite adequate) satisfaction when the same submission is accepted elsewhere.

How to fix the reviewing process? 

One can take at least four distinct approaches:

  1. Simply not review at all.  Accept everything that is submitted.  But then the filtering and/or quality control one expects from a journal or conference is lost.  Leaving it to the reader or attendee to wade through thousands of submissions in order to select what to see can be extremely inefficient.  Some organizations (like the Society for Neuroinformatics) routinely accept close to 1000 abstracts at each conference, making it a daunting proposition to decide what to actually look at. 
  2. Hire professional, highly informed, reviewers.  This is a nice ideal but not practical: it is expensive, time-consuming, and simply not an option for smaller conferences and other meetings.  Even if the budget were available, the appropriate reviewers are probably not; they are doing the actual research! 
  3. Convene committees of experts as needed for the purpose, drawing from the relevant community.  This is the current situation, which sometimes results in uninformed or sloppy reviewing. 
  4. PPPR: Accept and publish everything, but institute a procedure of post-publication review.  This is the position Mr. Liebregts espouses: “Only a few reviewers assess the quality of papers before publication. Many more experts read them after publication, have a strong opinion about their contents, but are hardly able to share it to a wide audience. Academics would greatly benefit from crowdreviewing, a post-publication peer review (PPPR) process in which anyone who has read a scientific article is asked to review it according to a standardized set of questions. Today’s consumers themselves decide whether the consumed good meets their quality standards, and this is quickly and easily shared with the rest of the world.”

Is PPPR a valid option?  While it sounds appealingly democratic, there are at least three problems that make this unworkable and rather naive:

  1. Organization: How is the PPPR process managed?  When do reviews get made, and who collects them?  Where are the reviews maintained?  By whom?  What is the process of organizing, balancing, and somehow standardizing them so that some sort of informative collective ‘wisdom’ can emerge?  Is there only one repository of reviews for a published paper?  If so, who enforces this?  If not, how does one find what is most informative?  One might respond: “Oh, this is easy.  The publisher of the journal or proceedings is responsible for these functions”.  But who would pay them?  Is there some sort of central editor or board who tries to organize the reviews to make them readable, check for ad hominem and objectionable reviews, etc.?  Who appoints and pays for this?  For how many years?  Without a clearly thought-out procedure, and a model of the organization and finances, the PPPR idea is an unworkable dream.
  2. Quality: The reason that academic communities appoint reviewers is to perform quality control.  If Science or Nature started next month publishing literally everything they received, from high school papers to crank science about 7-dimensional aliens with anti-gravity spaceships, their value as a source of informed and responsible information would drop to zero.  Not even Mr. Liebregts would read them.  One can respond: “Oh, I don’t mean completely remove reviewers!  Just add a post-publication review board as well!”.  But then we have returned to the central objection.  The problem Mr. Liebregts is trying to solve is with current reviewers, and this response does not remove them or curtail their influence.  In fact, it is already possible for a paper that was rejected in one place and published somewhere else to garner additional critical attention and then to be re-published, for example in a collected volume of the most influential papers.  This has occurred for two centuries already.  PPPR adds nothing new here.
  3. Integrity: If, as the PPPR idea advocates, literally anybody can be a [post-publication] reviewer, what prevents unscrupulous authors from canvassing reviews in their favour?  Even worse, an author might hire dozens or hundreds of people to fill in some quasi-review commentary template and submit this to whatever PPPR management process exists, in order to boost his or her academic standing artificially.  While the present-day problem of possible uninformed or sloppy reviewers is real, surely this situation is a lot worse.  This sort of canvassing subverts the goals of academic review, namely to provide some sort of (at least ideally) semi-objective judgment plus suggestions for improvement.  It turns reviewing into a political popularity contest.  One can stipulate (as Mr. Liebregts does): “Oh, these post-publication reviews have to be written by peers, not by anyone!  And in fact they have to review according to a standardized set of questions!”.  But the same sorts of management questions arise: who determines who is a ‘real’ peer?  Do people have to authenticate themselves?  Can they simply join a professional organization and thereby self-declare to be legitimate peers?  What happens if they do not follow the standardized questions?  Who edits their reviews?  In practice, how many academics will actually voluntarily fill in and submit such reviews?  (Most academics I know are already suffering from a reviewing burden.)  Again, it will be up to the author to canvass for voluntary (or paid) reviewers.  And in fact, doing so is already possible: nothing stops a researcher from canvassing fake reviews for his or her paper or position today.  So PPPR does not in fact suggest anything revolutionary; we simply don’t do it today because it adds no value.

This discussion is about policy, and is political, not scientific.  Mr. Liebregts obtained his £1k Challenge prize by a process of canvassing, and is exercising his right to argue his point.  He is providing us, as academics, with a chance to exercise clear thinking and responsible reflection on one of our time-honoured practices.

Dear Eduard, I'm about to set up the first discussion forum about some of the issues that you've raised. Will you join the discussions and help answer the questions that you have posed?

It depends; if the discussion is thoughtful and not self-promoting or stupid, then I would be happy to contribute.  But if it is simply an exercise in sloppy wishful thinking then I will not participate. 

So far I see no valid answers to the following problems with PPPR:

- why peers would take time to create new reviews post-publication (real peers, not the author's own graduate students and friends and family)

- who would select these peers and check over their reviews to remove problematic content

- where the reviews would be hosted, for how long, and how they would be organized

- who would pay attention to these reviews post-publication, since traditional citation counts are already giving much of the information we want

 

Let's try to find these answers together, Eduard! I already have my own thoughts (of course), but I do not want to bias the discussion by giving my own opinion in advance. You are right in saying that it should not be a self-promoting discussion, but a fully objective and open one instead. At the same time, this also means that I expect none of the discussants to be narrow-minded or prejudiced.