Peer Review, Port 25 and Good Things to Come

by admin on June 12, 2006 07:12pm

The peer review system is much revered and much reviled within academic circles. Tim O’Reilly’s recent comment on Nature magazine’s fresh look at peer review piqued my interest.
Interest piquing is always bad, because it sets off the oft-dangerous process of “thinking”!

There is a great description of peer review on Wikipedia. I encourage you to read the complete description, if only to get a sense of the issues involved in peer review.

In short, the peer review process:

    • ensures that submitted work is commented on by experts in the field
    • exposes the possibilities, or the mistakes, in an intellectual approach
    • improves research quality in most cases, even where the experts misunderstand the research!

Pre-history of Peer Review

I did all my research in the Internet/ARPANET era, but I am old enough to have seen the tail end of the paper-based peer review process. Talk about snail mail!

First you send off requests for reprints of all the papers you think you will need for your research. Each one costs real money in reprint fees, and of course you order a lot of them, since you don’t know which ones you will find useful. Then you write your paper (it takes a couple of months for all those reprints to arrive, by the way!) and send it to the editors of journals, referees of conferences and so on, in those nice brown paper envelopes. The reviewers send you their comments (a couple of months later still). You modify your manuscript and repeat the process. I am surprised the world made any progress at all!

The predecessors to the World Wide Web include FTP, WAIS and Gopher, and the first wide use of all these technologies was the distribution of research papers. (That may just be my bias speaking, since I was a computer science graduate student in the 1990s and that’s all I could use these technologies for!) After all, Tim Berners-Lee invented the World Wide Web based on a vision of peer collaboration between scientists!

With the advent of the Internet, peer review became potentially much more efficient: research papers and reviews could now be retrieved in near real time. However, it has taken a little while for the academic greybeards to catch on. My personal opinion is that the way forward on peer review was shown by the Open Source community.

Here comes Open Source

The first thing that comes to mind when talking about peer review in the OSS world is development itself. You know the usual story: person develops code, throws the code out for peer review, corrections and comments come back, code is unstoppable! My friend and colleague Hank Janssen (see his handsome visage) does say, “OSS world peer review delivery can be very brutal. The social aspect of peer review is not very becoming many times.” But that is what it is!

However, there is another peer review system within the OSS framework, and that is OSS Intelligence: the application of the collaborative principles developed by the Open Source Software movement to the gathering and analysis of information. Wikipedia is an outstanding example, but so are any of the major discussion boards supporting OSS users.

Now academia is catching up – and Nature magazine is trying a new peer review process in which a manuscript is thrown open for public peer review and the authors are invited by the editors to reply to comments from the public.

Peer Review and You

That got me thinking (remember the dangerous activity I alluded to before?). Aren’t our Port 25 readers our peers in ensuring that we have great, interoperable, easy-to-manage, easy-to-install, easy-to-maintain software that changes the world? Why not use this forum for peer review of what we do in our lab?

The proposal is that we open the Open Source Lab’s project planning to the Port 25 community. The research methodology for a project would be proposed on the site, with methods, experiments, hypotheses and data sets all laid out for “peer review”. Please be gentle - on second thought, this is the Open Source community we are dealing with, so just be natural! We will keep the proposal open for a period that makes sense. The Open Source Lab will then carry out the research, shaped by what we hear back from our peers right here on Port 25, and bring the conclusions and feedback back to the community for review. We will then publish this “peer reviewed” content in its final form.

Of course you are always welcome to suggest what those projects should be!

As an example of the kind of information you will see, take a project that the Open Source Software Lab recently executed: simulating Red Hat and SUSE in a production environment and evaluating the patch management experience.

The research plan will describe (see the sketch after this list for how it might be posted):

    • the hardware used
    • the topology of the hardware (e.g. network configuration, the distribution of applications across servers)
    • the workloads simulated (e.g. web search, database access, benchmarks) and parameters of the simulation (number of searches, number and type of accesses)
    • the software used to download patches
    • the duration of the experiment
    • the assumed distribution of the patches (in terms of frequency and time) 
    • how failures will be classified and measured
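
As a concrete illustration, here is a rough sketch in Python of how such a plan might be laid out as plain data for comment. Every name and value in it is an illustrative placeholder of mine, not the lab’s actual configuration:

    # Hypothetical research plan for the patch management study, expressed as
    # plain data so peers can comment on individual fields. All values are
    # illustrative placeholders, not the lab's real setup.
    import json

    research_plan = {
        "hardware": ["commodity x86 servers"],
        "topology": {
            "network": "single switched LAN",
            "app_placement": "one application per server",
        },
        "workloads": {
            "web_search": {"searches_per_hour": 10000},
            "database": {"accesses_per_hour": 50000, "access_mix": "read/write"},
        },
        "patch_tooling": ["up2date", "YaST Online Update"],
        "duration_days": 90,
        "patch_distribution": {"model": "poisson", "mean_patches_per_week": 3},
        "failure_classes": ["patch failed to apply", "service outage after patch"],
    }

    # Print the plan so it can be pasted into a Port 25 post for comment.
    print(json.dumps(research_plan, indent=2))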

The peer review feedback could let us know (just a sample):

    • whether the hardware used was the kind that would be used in real-world situations
    • whether the topology made sense, or whether we need to evaluate different topologies
    • whether the workloads were realistic
    • what some common variances in workload are
    • whether we were using the software to manage and download patches that our peers would use
    • whether factors like quarterly financial report generation mean that a realistic experiment would need to span more than the period specified
    • whether the assumed distribution of patches made sense, or whether there would be fewer or more patches
    • whether our peers actually care about the failures we would measure
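
And if, say, peers told us the assumed distribution of patches was unrealistic, folding that feedback into the experiment could be as simple as changing one parameter. A minimal sketch, assuming (my assumption, purely for illustration) that patch arrivals follow a Poisson process:

    import random

    def simulate_patch_arrivals(mean_per_week, weeks, seed=42):
        """Count hypothetical patch arrivals in each week of the experiment.

        Arrivals are drawn from a Poisson process built from exponential
        inter-arrival times, so feedback like "expect five patches a week,
        not three" becomes a one-parameter change to mean_per_week.
        """
        rng = random.Random(seed)
        counts = []
        for _ in range(weeks):
            t, n = 0.0, 0
            while True:
                t += rng.expovariate(mean_per_week)  # inter-arrival time, in weeks
                if t > 1.0:
                    break
                n += 1
            counts.append(n)
        return counts

    # Our original assumption vs. a reviewer-suggested heavier patch rate.
    print(simulate_patch_arrivals(mean_per_week=3, weeks=12))
    print(simulate_patch_arrivals(mean_per_week=5, weeks=12))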

Now in the best traditions of peer review – what do you think of this idea?