INTRODUCTION

The peer review process is a cornerstone for maintaining the standards of leading scholarly journals such as the Journal of International Business Studies (JIBS). We acknowledge the many criticisms of the process (e.g., Bedeian, 2004; Suls & Martin, 2009; Tsang & Frey, 2007) but agree with Miller (2006: 425), who noted that “although this system of peer review is well accepted, it is far from perfect. Many would characterize it just as Winston Churchill characterized democracy – ‘it is the worst possible system except for all others.’ ” While perhaps not ideal, peer review is a good system and one that can be improved through the quality of the reviews written. It is in this spirit that we investigated the characteristics of the best JIBS reviews in order to offer specific guidance on how to write high-quality reviews.

JIBS is similar to other scholarly journals in its reliance on reviewer anonymity, in our case through a double-blind peer review process. The procedure at JIBS is to place into the review process those submitted manuscripts that meet the “minimum JIBS norms for fit, quality, and contribution to IB” (JIBS Statement of Editorial Policy) (http://www.palgrave-journals.com/jibs/jibs_statement.html). Specifically, all manuscripts are read by the Reviewing Editor (RE), who forwards each paper to either the Editor-in-Chief (EIC) or Deputy Editor (DE), depending on the topic area. This central editorial team (RE, EIC and DE) evaluates whether a manuscript is well developed enough to be sent out for review, in particular whether it lies within the scope of the journal – the domain of the international business field as set out in our Statement of Editorial Policy – and whether it meets a checklist of minimum standards. Based on this initial evaluation, the EIC or DE either desk rejects the manuscript or assigns it to one of the Area Editors (or in some cases a Consulting Editor) as the action editor. Note that JIBS has a policy of desk reject and resubmit, which allows authors to resubmit previously desk-rejected manuscripts if they have been substantially revised according to the editor's recommendations. The initial review process is concerned not just with screening manuscripts for their fit and suitability for the journal; more importantly, it aims to assess the likely impact of a paper on the field as a whole, from the perspective of international business scholarship in general.
A key aim we have for the journal review process is that it combine constructive, topic-specific suggestions for revising a particular study, offered from the viewpoint of specialists in the relevant research area, with the broader objective of developing articles that can attract interest and attention among international business scholars more widely.

Action editors review the manuscript and can either desk reject it or send it out for review. Thus, before a manuscript is seen by reviewers it has already been read by three JIBS editors, both to check whether it meets the journal's minimum standards and to form an overview of where its potential contribution to the field might lie, with the goal that the nature of this contribution will emerge more clearly through the review process. The typical number of reviewers varies across the disciplines that contribute to JIBS. At JIBS, two or three reviewers are selected by the action editor based on their expertise in the submitted manuscript's topic and method, and are then sent the manuscript (from which author-identifying information has been removed). JIBS asks reviewers to rate the manuscript on its overall contribution, theory development, literature review, methods (if applicable), integration and style of presentation, and to suggest to the action editor whether the manuscript should be accepted, rejected or invited to revise and resubmit. We also ask that reviewers provide detailed comments to authors, and we give reviewers the opportunity to make confidential comments to the action editor, which are not shared with authors. When the reviews are received, the action editors use this information as guidance and advice in reaching their editorial decision. When the action editors receive reviews, they also rate each review's quality on a 1–5 scale according to the following:

5 – Outstanding review (exceptionally high quality): This is reserved for cases where the review is of such high quality that the individual should be a candidate for JIBS Best Reviewer Award.

4 – Very good review: This review is insightful and truly developmental.

3 – Good review (average review): This review is critical but fair, constructive and reasonably comprehensive.

2 – Acceptable (but below average) review: This review is sketchy and below average.

1 – Poor review: This rating is reserved for cases where the review is of such low quality that the individual should no longer be used as a JIBS reviewer.

These scores, in part, comprise the data for determining the annual JIBS best reviewer awards. For this study, we used these scores to create a database of the best-rated and not so highly rated reviews over the past 2 years. An analysis of these reviews along with a survey of JIBS Editors provided the data for this editorial. With it we hope to provide guidance for current and future reviewers on how to write helpful, developmental and possibly even award-winning peer reviews.

THE FEATURES OF THE MOST HELPFUL REVIEWS: A SURVEY OF EDITORS

To better understand the features of the most helpful reviews, we surveyed the current JIBS action editors (EIC, DE, Consulting Editors, RE, Special Issue Editors and Area Editors), asking them to provide ratings on the helpfulness of 20 features of peer reviews. These features, listed in Appendix A, were adapted from the JIBS guidelines for peer reviews (http://www.palgrave-journals.com/jibs/reviewer_guidelines.html). Seventeen action editors responded.

The action editors surveyed were asked to rate each feature's helpfulness or usefulness to them as an editor. The scale ranged from 1=useless/very unimportant to 5=critical/extremely important. With the exception of one feature (“The review corrects grammatical and typographical mistakes”), all of the features of peer reviews were considered “helpful,” receiving scores around 4. The most helpful features – those with an average score over 4 – include:

  • The reviewer discloses any potential conflict of interest (usefulness mean=4.53).

  • The review makes plausible suggestions for improving the manuscript (usefulness mean=4.47).

  • The review offers advice on how problems in the manuscript could be addressed (usefulness mean=4.41).

  • The reviewer declines if he or she feels unqualified to judge (usefulness mean=4.35).

  • The review indicates strengths (as well as weaknesses) of the manuscript (usefulness mean=4.35).

  • The review provides comments on the manuscript's overall contribution to the field (usefulness mean=4.24).

  • The review suggests alternate ways to analyze the data (usefulness mean=4.00).

Next, the action editors were asked to rate the frequency with which they observed each feature in the peer reviews they have received as editors. This scale ranged from 1=very rarely see this in reviews to 5=always see this in reviews. Across the key features, the editors report that these features “occasionally” appear in the reviews, scoring them around 3. The mean frequency scores for these most helpful features are as follows:

  • The review indicates strengths (as well as weaknesses) of the manuscript (frequency mean=3.44).

  • The review makes plausible suggestions for improving the manuscript (frequency mean=3.38).

  • The review provides comments on the manuscript's overall contribution to the field (frequency mean=3.37).

  • The review offers advice on how problems in the manuscript could be addressed (frequency mean=3.37).

  • The reviewer declines if he or she feels unqualified to judge (frequency mean=3.08).

  • The review suggests alternate ways to analyze the data (frequency mean=2.94).

  • The reviewer discloses any potential conflict of interest (frequency mean=2.44).
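The gap between how helpful editors find a feature and how often they see it is a simple difference of the two means reported above. A minimal Python sketch of that arithmetic, with feature labels abbreviated here for readability:

```python
# Usefulness and frequency means for the most helpful review features,
# as reported in the editor survey (usefulness, frequency).
features = {
    "indicates strengths as well as weaknesses": (4.35, 3.44),
    "makes plausible suggestions for improvement": (4.47, 3.38),
    "comments on overall contribution to the field": (4.24, 3.37),
    "offers advice on addressing problems": (4.41, 3.37),
    "declines if unqualified to judge": (4.35, 3.08),
    "suggests alternate ways to analyze the data": (4.00, 2.94),
    "discloses any potential conflict of interest": (4.53, 2.44),
}

# Gap = usefulness - frequency: a large positive gap marks a feature that
# editors value highly but rarely see in the reviews they receive.
gaps = {name: round(u - f, 2) for name, (u, f) in features.items()}

for name, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{gap:+.2f}  {name}")
```

By this measure, disclosing conflicts of interest shows the largest shortfall between what editors want and what reviews deliver.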

The most helpful features were echoed in the final, open-ended question asking the action editors, “What is the most important thing a review should contain in order to be helpful to you in making an editorial decision?” Some editors listed more than one feature for a total of 30 comments. Many of the comments (11 out of 30) focused on describing the potential contribution or added value of the paper. Sample responses around this theme included:

  • An assessment of the contribution to the field and theory … how important are the ideas?

  • Assess the significance of the manuscript to the literature

  • An assessment of the potential contribution to the field

This indicates to us that editors are at least as concerned with introducing innovative concepts to the field as they are with quality control.

Consistent with the importance of evaluating the potential impact of the article, a number of comments (6 out of 30) focused on offering constructive and specific suggestions for improvement on theory and analyses. Sample responses around this theme included:

  • Information about directions for improving the manuscript

  • A sense of what can be done to improve the paper and what cannot be done

  • Make constructive suggestions for improving the manuscript

Offering ways of improving the manuscript was also reflected by the editors whose comments (7 out of 30) mentioned that it was helpful for the reviewers to point out the strengths/appropriateness and weaknesses/inappropriateness of particular analytic strategies or theoretical arguments. Sample responses around this theme included:

  • Identify and articulate the main strengths and weaknesses of a manuscript, both in the conceptual part and empirical part

  • Identification of the key shortcomings

  • Identify fatal errors in research design/methods

In summary, the open-ended comments echoed the results of the survey, which focused on the importance of peer reviews identifying and helping to bring to the surface the most important contribution of the manuscript as opposed to simply erecting barriers that authors must overcome in order to see their work published. That is, while quality control is important, it is not the overriding factor that editors are looking for in a review.

THE FEATURES OF THE BEST PEER REVIEWS: A CONTENT ANALYSIS

As a complementary means of assessing the characteristics of high-quality reviews, we content analyzed the comments-to-authors sections of reviews received from July 2010 through June 2012 that met two criteria: (1) they had quality ratings of either 4 or 5, and (2) they were written by 1 of the 17 individuals who won the JIBS best reviewer award in 2011 or 2012 (10 awarded each year; 3 people won in both years). Based on these criteria, the JIBS Managing Editor removed identifying information and extracted the 78 reviews in this category, which comprised the best reviews received during this period. From the same 2-year period, a random sample of 78 reviews with quality ratings of either 1 or 2 was extracted. Identifying information was removed and these reviews were included in the content analysis.

The two sets of reviews were analyzed with the aid of content analysis software (NVivo). Key terms used in coding were derived from the review features in the Appendix, in particular a gap analysis of the desired vs provided features reported by editors, and from an exploratory word frequency analysis of the reviews. Once themes were established, segments of text ranging from a few words to several sentences were categorized as fitting each theme. The basic differences between the two sets of reviews, and the main themes that emerged, are the following:
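The exploratory word-frequency step can be illustrated with a short Python sketch. This is only an illustration of the general technique; the actual analysis was conducted in NVivo, and the stopword list and sample text below are hypothetical:

```python
from collections import Counter
import re

# Hypothetical minimal stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "this", "that", "be"}

def word_frequencies(review_text, top_n=10):
    """Count non-stopword tokens in a review's comments-to-authors text."""
    tokens = re.findall(r"[a-z']+", review_text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(top_n)

# Hypothetical snippet of reviewer comments, for illustration only.
sample = ("The contribution of the paper is promising, but the theory "
          "section needs work; the contribution should be stated up front.")
print(word_frequencies(sample, top_n=3))
```

Frequent content words surfaced this way (e.g., “contribution”) can then seed the coding themes applied to full review texts.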

  • The best reviews were longer. Best reviews averaged 1403 words, compared with 438 words for the less effective reviews. It is not length per se that matters here, but that longer reviews were indicative of more complete and in-depth coverage of issues. Shorter reviews often covered as many points, but in a fairly cryptic manner with little explanation. The longer reviews also tended to include full citations for the authors to consider referencing, rather than offering a passing suggestion about other references (without specifics).

  • The best reviews did not make an obvious recommendation in the comments to authors (e.g., paper should be rejected). None of the best reviews did this, compared with 6.5% of the less effective reviews. Reviewers are provided an opportunity to make a recommendation as to the disposition of the manuscript in the evaluation form and in their confidential comments to the action editor. More often than not, reviewers do not agree on the appropriate outcome of the editorial process, and making this opinion known in comments to authors is not helpful.

  • The best reviews gave complete references to sources cited 17.9% of the time. The less effective reviews did this 6.5% of the time. This is simply good form, and it indicates the kind of collegial and conversational style, discussed below, that is prevalent in the best reviews.

  • The best reviews focused on the contribution or potential contribution 85% of the time. The less effective reviews did this 49% of the time. Again, this aspect of the review is most important to review quality. The best examples of this were very clear and concise statements of how the manuscript could potentially influence the field and what the reviewer found interesting or unique about the potential contribution. It is important to note that this focus was evident even in reviews where the reviewer was recommending that the manuscript be rejected.

  • The best reviews used a numbered or indexed format 89% of the time. The less effective reviews did this 38% of the time. Editors like this because it makes it easier to refer to specific comments in the reviews, and it allows authors to be systematic in their responses to reviewer comments.

  • The best reviews used a more positive tone 55% of the time and a more neutral tone 43% of the time. They used a negative tone only 2% of the time. The less effective reviews used a more positive tone 45% of the time and a more neutral tone 31% of the time. They used a negative tone 24% of the time. Some of the less effective reviews were inconsistent in tone at times (e.g., positive at outset and overly negative comments at the end). Perhaps the best way to describe the tone of the best reviews is collegial. That is, they read like a discussion between colleagues who respect each other as opposed to a restaurant review where the critic did not like the meal.

From the perspective of decisions, the opinions of the better reviews tended to align more closely with the editors’ opinions. The editors agreed with the best reviews 89% of the time, and with the less effective reviews 49% of the time. This alignment may result from the better reviewers having taken the time to become more familiar with the JIBS Statement of Editorial Policy. Certainly every reviewer should be very familiar with the journal's policies before conducting a review. The quality of the reviews does not appear to have affected rejection rates, which were comparable in both sets of reviews.

The gap analysis of desired vs provided features of peer reviews indicated that the key areas on which editors focused were comments on the overall contribution to the field, advice on how problems in the manuscript could be addressed, plausible suggestions for improving the manuscript and suggestions for alternate ways to analyze the data. Content analysis of reviews based on these themes, as well as themes derived from word frequency, indicated that the best reviewers were much more focused on providing information that would be useful in surfacing the unique contribution of the manuscript as opposed to simply identifying deficiencies. And of course this is consistent with the goal of JIBS to publish research that is insightful, innovative and impactful.

GUIDELINES FOR WRITING AN EXCELLENT REVIEW

The following advice is derived from the qualitative assessment of top reviews and the survey responses of action editors on specific elements of excellent reviews. The top five suggestions are offered below:

Focus the review on the potential contribution: The primary issue for peer reviewers should be the extent to which the manuscript has the potential to make a meaningful contribution to the field of interest. In the case of JIBS, based on your knowledge of the field, identify for the editor (and author) where the potential contribution lies and what must be done to realize that potential. That is, how does or might the paper advance our collective understanding in international business from the lens of your field (e.g., marketing, finance, management)?

Offer details about the strengths and the weaknesses of the paper: As guardians of knowledge creation in our respective fields, we are trained to have critical and questioning eyes. Thus, identifying weaknesses is the easiest part of the reviewer's job. However, it is the strengths of the manuscript that provide the platform for development, so it is helpful to identify the strengths of the manuscript along with the weaknesses.

Offer specific and constructive feedback for ways to address problems: The abilities to critique, find fatal flaws, suggest alternative explanations and identify problems with methodology and theory are, collectively, half the skill of great reviewers. The other, and often more difficult, half is to offer constructive suggestions on realistic ways to address a given concern (Kohli, 2011).

Evaluate your objectivity and ability to review before agreeing: For obvious reasons, it is important for potential reviewers to disclose any conflict of interest that might impair their objective judgment, such as familiarity with the manuscript or a personal relationship with the authors. It is often difficult for editors to know about professional relationships between, say, former advisors and advisees, or among co-authors and colleagues. And being overly familiar with a manuscript or its authors has the potential to impair objectivity. To preserve this objectivity, reviewers are encouraged not to try to identify authors by conducting an electronic search of their posted working papers, curriculum vitae and conference presentations (Hillman & Rynes, 2007). At the same time, it is also acceptable to decline to review because you do not feel as though you have the expertise to make a fair judgment on the contribution.

Improve the mechanics of your review: Your review should follow a logical order. Often, although not always, the points are written in the same order as the manuscript's sections. It is helpful for authors (and the editors) if points are numbered, allowing responses to flow more logically. If you are suggesting additional references, add the complete reference to the review. And, please, do not indicate an editorial decision in the comments to authors.

Of course these suggestions alone are insufficient, as writing an excellent review is as much art as science and requires striking a balance between being positive and constructive yet critical and challenging. It also involves offering ways of resolving research problems while respecting the objectives and goals of the author.

IMPLICATIONS FOR GRADUATE SCHOOL TRAINING

Writing high-quality peer reviews, while critical for JIBS and other scholarly journals, is a skill rarely trained, coached or developed. Senior scholars are encouraged to mentor future potential referees “to develop and reinforce peer review skills and capabilities” (Carpenter, 2009: 191). Curious to learn how senior scholars learned to write reviews, we asked the action editors the open-ended question, “How did you learn to write peer reviews?”

If this group of action editors is representative of the larger body of reviewers, it seems that most of us learn from experience, one review at a time. Most of the editors wrote that they learned how to write reviews “by doing them”; some further refined this “learning by doing,” describing that they reviewed for conferences to gain experience. Others refined “learning by doing” by reflecting on how their reviews compared with the editor's letter and other reviewers’ comments on the papers they reviewed. Many reviewers learn how to conduct reviews by mimicking the style of the most helpful reviews they have received. About two-thirds of the editors learned how to review by reading the reviews of their own papers. Some of the action editors (5 of the 17) credited their graduate school training or their advisors in preparing them for how to write a review. They recalled having sessions on how to write reviews in their PhD seminars, having to write mock reviews and receiving feedback on their early efforts.

While experience as reviewers might increase over time, it seems that we rarely receive constructive feedback on our reviews (outside of graduate school or when we request such feedback). Feedback is limited to “best reviewer” awards at one extreme and not being asked to review by the same journal twice at the other. In both cases, we might get the message, but without ever understanding what made our reviews great (or not so great). To prepare the next generation of scholars to be able to write high-quality peer reviews, we recommend the following for doctoral programs:

  • Offer formal training on how to write a peer review.

  • Offer feedback on first reviews, perhaps comparing actual reviews with practice reviews.

  • Encourage graduate students to review for conferences – and offer feedback on their comments.

  • Teach an approach for continually developing reviewer skills (how to evaluate the reviews received on one's own work, how to compare one's review with other reviews for the same manuscript, how to compare one's review with the editor's comments on the review conducted).

  • Mentor young scholars on the mechanics of how journals are managed from an editorial perspective (e.g., getting on editorial boards, the importance of conducting ad hoc reviews, the way in which reviews are rated).

CONCLUSION

Excellent reviewers are a “scarce and valuable resource” (Marchionini, 2008). Northcraft (2001) found that more senior scholars with the greatest levels of expertise are less likely to agree to ad hoc reviewing. However, Rynes (2006) found a negative relationship between reviewers’ professional age and review quality and suggested that the mix of more professionally junior and senior reviewers would balance innovation with established wisdom. Whether the reviewers are senior or junior, encouraging excellence in reviews is of critical importance to continuing to advance our field with breakthrough ideas along with attention to theoretical and methodological rigor. A list of 20 review features is offered in Appendix A. These could provide a guide for the training of junior scholars to build skills as effective reviewers.

In writing this editorial we drew on the expertise of the current set of JIBS editors as well as an analysis of some of the best (and not so good) reviews by JIBS reviewers over the past 2 years. Reviewing papers for JIBS is a voluntary activity but an activity that goes to the heart of maintaining and enhancing the field of international business. It was a privilege to read some of the best reviews conducted by JIBS reviewers over the past few years and to contrast them with their more run-of-the-mill cousins. We hope our synthesis and report of this activity will be helpful as we all try to perfect our skills in this important endeavor.