Saudi Journal of Gastroenterology


 
Year: 1997  |  Volume: 3  |  Issue: 3  |  Page: 107-112
Peer review and refereeing in medicine and medical sciences


Oluwole Gbolagunte Ajao
Department of Surgery, College of Medicine and Asir Central Hospital, Abha, Saudi Arabia


Date of Submission: 14-Apr-1997
Date of Acceptance: 02-Jul-1997
 

   Abstract 

Every academic scientist will come into contact with the peer review process, either as a reviewer of the work of others, as an author whose work is being reviewed, or as an applicant for a research grant. Historically, peer review came to different journals at different times, for a variety of reasons and in a haphazard manner. Many criticisms have been made of the peer review process, and some critics do not believe that it prevents the publication of flawed articles and fraudulent research. Despite these criticisms, peer review has clear advantages, and it is at present an indispensable part of medical journalism for assessing manuscripts intended for publication.

How to cite this article:
Ajao OG. Peer review and refereeing in medicine and medical sciences. Saudi J Gastroenterol 1997;3:107-12

How to cite this URL:
Ajao OG. Peer review and refereeing in medicine and medical sciences. Saudi J Gastroenterol [serial online] 1997 [cited 2019 Aug 19];3:107-12. Available from: http://www.saudijgastro.com/text.asp?1997/3/3/107/33917


Sooner or later, everyone in an academic environment will come into contact with peer review or refereeing, either as a reviewer of others' work, as an author whose work is being reviewed, or as an applicant for a research grant. It is, however, surprising that our formal education, whether as scientists or as physicians, does very little to prepare us for the peer review process. It is therefore not surprising that many academic scientists know very little about it and, when confronted with the situation, act only by intuition, hoping for the best!

A well-known unwritten law in practically all academic environments is that of "publish or perish", because the duties of a university teacher, in order of priority, are teaching, research and service. Some claim that research takes priority over teaching, but there seems to be little disagreement about service coming last in medical schools. An academic scientist therefore needs to publish in peer-reviewed journals to get promoted, to obtain grants and to continuously improve his knowledge.

The purpose of this write-up is to give a brief overview of the peer review process.


   Historical aspect


What is surprising about the history of peer reviewing is that it has no clear-cut history! The only serious attempt to bring together the various irregular and ad hoc practices of the older journals into something resembling a history of peer review was made by John C. Burnham [1].

These practices varied from journal to journal and depended on the whims and caprices of the editors.

However, David A. Kronick [2], who also attempted to document the history of peer review, believed that the practice closest to peer review was started in 1752 by the Royal Society of London. Its publication, "Philosophical Transactions", had a "committee on papers" whose function was to review all articles; five members of that committee constituted a quorum.

Right from the beginning, two types of reviewing became apparent: peer review of research grant applications and awards, and editorial peer review of manuscripts. Each evolved independently, with no influence on the other.

In the 19th century medical journals served as personal mouthpieces and, like newspapers, carried opinion as well as news. Many physicians during that period were interested in publishing their viewpoints, and it was quantity that mattered rather than quality, as each editor tried to fill his pages [3]. Among the early founders of medical journals were Thomas Wakley of the "Lancet" and Henry Maunsell, a co-founder of the "Dublin Medical Press" in 1839. What could again be considered the closest thing to peer review was that medical journals, like newspapers, started copying each other's good articles. Plagiarism evidently did not seem much of a problem at that time: good articles were copied by many journals and poor articles were ignored [4].

Even when specialization was making inroads into medicine, peer review was still not considered necessary by many journals. For example, the official publication of a continental research institute, such as a "German Academic Institute", specialized in only one kind of research. It therefore had no need of an outside referee, since the journal existed to publish the research done in the institute, and the editor, who was also the director of the institute, was the expert in the subject matter. This type of institute publication appeared only when there were enough articles to publish.

As specialization progressed, editors lacked the specialized knowledge to judge highly technical articles, and peer review resurfaced out of necessity. In 1905, the founding editor of "Surgery, Gynecology and Obstetrics", now called the "Journal of the American College of Surgeons", decided that specialists should supervise the contributions relating to their areas of specialization [5]. In spite of this, many editors still did not think much of the peer review process. This attitude was reinforced by an incident narrated by Burnham [1]: in 1871 a Yale University geologist, James Dwight Dana, editor of the "American Journal of Science", sent an article on physics by Henry A. Rowland to two colleagues at Yale. The reviewers recommended against publication, but James Clerk Maxwell published the article in the prestigious "Philosophical Magazine", to Dana's embarrassment.

There were other reasons why early editors did not accept peer reviewing. Generally, there were not enough articles to fill the journals for regular appearance [7], and there was opposition to specialization; it was felt that anyone with a medical degree was as good as the next, specialization or no specialization [8]. A third reason was that editors of journals regarded themselves as educators not only of their readers but also of their contributors, so the need for any other assessor, or for peer review, did not arise.

However, this was not the policy of all journals. By 1893, for example, Ernest Hart, editor of the "British Medical Journal", had started referring every article received for his journal to an expert with special knowledge of the subject matter. He also recommended the practice to his colleagues, with very little success. There were occasions when "committees" or "referees" were used by other journals, but usually for articles appearing in "proceedings" or "transactions". In technical book publishing, however, since most publishers were lay persons, the use of "committees" or "referees" for assessment was quite common [9].

During that period, many journals formed what can now be regarded as editorial boards, though under different titles such as "consulting editors" and "advisory committee". Their functions bore very little relevance to the peer review process: they were to give moral support to the journal and to contribute articles, editorials, clippings, book reviews and abstracts for the editor to publish. In fact, the selection of these people was based on their geographical locations, and their main function was to gather articles from those locations for their journal [10].

Peer review similar to what we know today became entrenched accidentally and out of necessity. Before specialization, editors had to scramble for articles, but as specialization became entrenched, editors were overwhelmed with more articles than they had space for. Assessment therefore had to be made to select the best of the articles submitted. As Burnham [1] pointed out, between 1913 and 1925 "The Journal of the American Medical Association" received between 1500 and 2000 manuscripts [11],[12].

The second reason for the entrenchment of peer review was the growth of expertise. A classic example was the embarrassment caused when a journal published an article in which a worker claimed to have discovered a substance called "Ureine", which he further claimed constituted four percent of the urine volume and caused uremia [13]. Independent workers found this to be false, much to the embarrassment of the editor. The peer review process thus gave conflicting signals as to its usefulness: the "Yale episode" discredited it, whereas the "Ureine episode" credited it. Peer review therefore came to various journals at various times and for a wide variety of reasons. It became standard practice in English-speaking and some European countries by the middle of the 20th century. The earliest journals to practise the process were the "British Medical Journal" in the 19th century and the "Lancet" in the early 20th century.


   Pitfalls in peer review and refereeing


Some still use the term "peer review" for the processing of grant applications and the term "refereeing" for determining the suitability of an article for publication in a scientific journal [14]; however, many use the two terms interchangeably. In peer review, the assessment of a medical article intended for publication is entrusted to another scientist, usually a peer of the author with the same interests, who is expected to act in the interest of the scientific community and to police the work proposed for dissemination.

This assessor, who is to determine the suitability or otherwise of the work, is more often than not the author's professional rival and academic competitor, who may even be competing with the author for journal space [15]! In academic medicine, competition among peers for recognition and promotion may not be cut-throat, but it is not passive either!

There may also be bias inherent in the selection of the reviewers themselves [16]. Most editors tend to pick as reviewers people known to them, those in the same institution, or those in their "inner circle" [12], many of whom tend to hold views similar to the editor's.

Some reviewers are unqualified or biased, and some are stricter than others. Experienced reviewers give stricter assessments of manuscripts than less experienced ones, and young referees give stricter assessments than their older colleagues [16]. Sometimes the euphoria, and the sudden "power" acquired in being asked to judge a colleague through the assessment of his manuscript, can cloud the judgment of a young, immature academic upstart.

In some cases, unfair or inadequate reviews result in the rejection of manuscripts that deserve publication. Sometimes the hostility of the reviewer to the author indicates that it is not the manuscript being assessed but the author, as shown by crude language such as "This is a useless paper", "This paper is a disgrace", "The author has no knowledge of what he is talking about" or "This work lacks credibility", offered without proof!

What I consider the most unfair comment is one that calls the credibility of the author into question, usually without any basis whatsoever. To falsify data is like committing murder in day-to-day life; because of the serious nature of the offence, it is a great disservice to accuse someone wrongly of data falsification.

On some occasions negative comments are based not only on professional jealousy and envy, but also on ignorance. A piece of information that seems insignificant and useless now may be the basis of a future scientific breakthrough. Some critics believe that peer review tends to allow only conventional work into journals and discourages innovators and original thinkers [17].

It is not unusual to find an article rejected by one journal accepted by an even more reputable journal, or to have a previously rejected article accepted by the same journal after re-review by an entirely new reviewer. It has been suggested that seven to 12 reviewers would guard against the bias and incompetence of one or two reviewers [18]; however, this is usually not feasible.

It has also been suggested that if a referee is in doubt about a paper, he or she should act in favor of recommending it for publication, or at least suggest a second opinion [14],[19]. An experienced reviewer has been quoted as stating, "A borderline paper published is not a sin; but a reasonable paper rejected is a shame, as the purpose of medical journals is to convey information, not to block it".

Others have claimed that peer review is unnecessary and slows the communication of information to the scientific community, and yet it does not seem to prevent the publication of flawed and fraudulent research.

Some journals forward selected comments from the reviewers back to the authors for a response. While this is a good idea, the number of articles received by many well-established journals makes the practice impossible in all cases.

The practice of a reviewer "sitting on" an article or "misplacing" it to prevent or delay its publication is usually not tolerated by editors, as a deadline is normally set for the reviewer's recommendations, although the deadline is not always adhered to. Some of these criticisms of the peer review process may not be valid, since medical journals are not established to suppress information [20]. The mere fact that they exist, however, is a cause for concern.


   Advantages of peer review


The obvious advantage of peer review is that, since the reviewer is usually familiar with the author's subject, the author cannot hoodwink the scientific community; this provides adequate policing. Reviewers are usually authors themselves whose work is frequently cited by others [21]. It has also been shown that peer review is an effective screening process for evaluating medical manuscripts: about 62% of manuscripts rejected by one major medical journal were also turned down by the other good indexed medical journals to which they were submitted [22].

The quality of many papers has been improved tremendously by good comments and suggestions from reviewers [23]. Most readers of medical journals are generally satisfied with the quality of the papers published [24], and this, to some extent, can be regarded as a vindication of the peer review process; however, those polled would prefer research articles to be more relevant to their clinical practice.

Reviewers also benefit from the peer review process. Even though they contribute a substantial amount of financially uncompensated labor, the exercise contributes to their own academic growth and critical insight, and this can be regarded as adequate compensation [21].


   Comments


Academic scientists who have made their mark, or are in the process of making their mark, in their field of specialization will sooner or later have to review their peers. In the interest of the scientific community, a balance must therefore be struck between killing good articles and recommending worthless articles for publication. This can be achieved by the reviewer being meticulously objective and fair in his or her assessment.

It is not fool-proof, but blinding the reviewer is successful in about 73% of cases [25], and blinding improves the quality of reviews. It has been shown that, when blinding of the reviewers is successful, major manuscripts from institutions with greater prestige are no more likely to be recommended or accepted for publication than those from institutions with lesser prestige; however, the selection of "brief reports" for publication does appear to correlate with the prestige of the institution [26].

It has been argued that "manuscript assessment, like a sophisticated diagnostic test, has a certain 'sensitivity and specificity'. Therefore it must yield a certain complement of false-positive and false-negative results" [28]. Some believe that peer review lacks objectivity, but Kassirer and Campion [28] hold that even though the process is probably crude, it is not totally unscientific, arbitrary or subjective.
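To make the diagnostic-test analogy concrete, consider a purely illustrative calculation; the figures below are assumed for the sake of example and are not drawn from this article or its references. If sensitivity is taken as the proportion of publishable papers that reviewers recommend for acceptance, and specificity as the proportion of unpublishable papers that they recommend for rejection, then for $G$ publishable and $B$ unpublishable submissions,

\[
\text{false negatives} = (1 - \text{sensitivity}) \times G, \qquad
\text{false positives} = (1 - \text{specificity}) \times B .
\]

With an assumed sensitivity of 0.9 and specificity of 0.8 applied to 200 publishable and 800 unpublishable submissions, about 20 sound papers (10% of 200) would be rejected and about 160 flawed papers (20% of 800) would be accepted, which is exactly the "complement of false-positive and false-negative results" that the analogy predicts.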

Useful guidelines for reviewers, suggested by Pyke [19] and popularized by Lore [14], are summarized below with some modifications.

  1. Do not lose the manuscript.
  2. Do not keep the manuscript for more than two weeks.
  3. Stick to the instructions of the editor.
  4. If in doubt, err in favor of recommending the paper for publication, or seek another opinion.
  5. Do not reject a paper because of insignificant matters (such as format) that do not detract from the message being conveyed; these can be corrected to conform to the journal's format before publication.
  6. Do not be influenced positively or negatively by the institution where the work was done or by the authors. Editors usually blind their reviewers, but many reviewers can still work out who the authors of a particular manuscript are [5].
  7. Limit your assessment to the objective of the paper.
  8. The usually accepted format for publication is the SIMRAD structure, that is, Summary, Introduction, Materials and Methods, Results and Discussion. For case reports, however, the SICDC structure (Summary, Introduction, Case Report, Discussion and Conclusion) is also acceptable.
  9. Resist the temptation to be favorably disposed toward a poor paper because you are cited in its references.
  10. Do not communicate with the authors about the manuscript.
  11. Do not use rude language, and do not accuse the authors of fraud, mediocrity, etc., without grounds.
  12. Referee the manuscript, not the authors, and keep your opinion of the authors out of the review.


A checklist (listed below), designed by Gardner and Bond [29] for assessing certain groups of manuscripts, can help in peer review.

Checklist designed by Gardner and Bond [29] for certain groups of manuscripts:

  1. Was the objective of the study sufficiently described?
  2. Was an appropriate study design used to achieve the objective?
  3. Was there a satisfactory statement of the source of subjects?
  4. Was a prestudy calculation of the required sample size reported?
  5. In the conduct of the study, was a satisfactory response rate achieved?
  6. Was there a statement adequately describing all statistical procedures used?
  7. Were the statistical analyses used appropriate?
  8. Was the presentation of statistical material satisfactory?
  9. Were confidence intervals given for the main results?
  10. Was the conclusion drawn from the statistical analysis justified?
  11. Is the paper of acceptable statistical standard for publication?
  12. If "No" to question 11, could it become acceptable with suitable revision?



   Conclusion


In spite of the flaws in peer review and refereeing, the system does work. It will work even better if mature, experienced and older academicians are used more often as reviewers. Until someone comes up with a better system, we are stuck with it.

 
   References

1. Burnham JC. The evolution of editorial peer review. JAMA 1990;263:1323-9.
2. Kronick DA. Peer review in 18th century scientific journalism. JAMA 1990;263:1321-2.
3. Billings JS. The medical journals of the United States. Boston Med Surg J 1879;100-2.
4. Fishbein M. A History of the American Medical Association 1847-1947. Philadelphia, PA: WB Saunders Co; 1947:46.
5. Martin FH. Surgery, Gynecology and Obstetrics 1905;1:62.
6. Reingold N, ed. Science in Nineteenth-Century America: A Documentary History. New York, NY: Hill & Wang Inc; 1964:262-6.
7. Billings JS. Literature and institutions. Am J Med Sci 1876;72:460.
8. Konold DE. A History of American Medical Ethics 1847-1912. Madison: University of Wisconsin Press; 1962.
9. Fritschner LM. Publishers' readers, publishers and their authors. Publishing History 1980;7:45-100.
10. Bunker WH. The Maine Medical Journal. Maine Med J 1934;25:32.
11. Garland J. Medicine as a social instrument. N Engl J Med 1951;244:43.
12. Simmons GH, Fishbein M. The art and practice of medical writing. JAMA 1925;84:892.
13. Moyer HN. The relation of the medical editor to original communications. Ann Gynecol Pediatry 1901;14:694-5.
14. Lore W. Peer review and refereeing in science. East Afr Med J 1995;72:335-7.
15. Gordon M. Evaluating the evaluators. New Scientist 1977;73:342-3.
16. Nylenna M, Riis P, Karlsson Y. Multiple blinded reviews of the same two manuscripts: effects of referee characteristics and publication language. JAMA 1994;272:149-51.
17. Knoll E. Communities of scientists and journal peer review. JAMA 1990;263:1330-2.
18. Belshaw C. Peer review and the current anthropology experience. Behav Brain Sci 1982;5:200-1.
19. Pyke DA. How I referee. Br Med J 1976;2:117.
20. Kassirer JP. Do medical journals suppress information? N Engl J Med 1992;327:1238.
21. Yankauer A. Who are the peer reviewers and how much do they review? JAMA 1990;263:1338-40.
22. Abby M, Massey MD, Galandiuk S, Polk HC. Peer review is an effective screening process to evaluate medical manuscripts. JAMA 1994;272:105-7.
23. Lee ST. Annals editorial: review process-improving the quality of publication (Editorial). Ann Acad Med Singapore 1992;21:301-3.
24. Justice AC, Berlin JA, Fletcher SW, Fletcher RH, Goodman SN. Do readers and peer reviewers agree on manuscript quality? JAMA 1994;272:117-9.
25. McNutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review. JAMA 1990;263:1371-6.
26. Garfunkel JM, Ulshen MH, Hamrick HJ, Lawson EE. Effect of institutional prestige on reviewers' recommendations and editorial decisions. JAMA 1994;272:137-8.
27. Lock S. A Difficult Balance: Editorial Peer Review in Medicine. Philadelphia, PA: ISI Press; 1986:1-38,122-32.
28. Kassirer JP, Campion EW. Peer review: crude and understudied, but indispensable. JAMA 1994;272:96-7.
29. Gardner MJ, Bond J. An exploratory study of statistical assessment of papers published in the British Medical Journal. JAMA 1990;263:1355-7.

Correspondence Address:
Oluwole Gbolagunte Ajao
Department of Surgery, College of Medicine, P.O. Box 641, Abha
Saudi Arabia

Source of Support: None, Conflict of Interest: None


PMID: 19864786
