The objective analysis of ISI (Web of Science) journals reveals, alongside its benefits, some shortcomings that put valuable authors at a disadvantage. Unfortunately, the medical literature is difficult to influence, because to some extent it has become a business. Although rigidity of thought has caused many dramas and periods of stagnation in the evolution of mankind, research, and its exposure through the publication of papers, is affected by the parti pris that only journals with an impact factor represent value and that one should publish only in such journals. This presentation is not a plea against journals with an impact factor; rather, it emphasizes that an article resulting from a study or a review can be valuable even when it is published in a journal that does not have an impact factor. I consider that a journal without an impact factor, but indexed in an international database, can be a real school for beginning authors and a means of rapid expression for experienced authors. Evaluating scientific quality is a notoriously difficult problem with no standard solution. Ideally, published scientific results should be scrutinized by true experts in the field and given scores for quality and quantity according to pre-established rules(1). In practice, such experts would form an evaluation committee.

Committees tend, therefore, to resort to secondary criteria such as crude publication counts, journal prestige, the reputation of authors and institutions, and the estimated importance and relevance of the research field(1), making peer review as much a lottery as a rational process.

Eugene Garfield, the founder of the Journal Impact Factor (JIF), had originally designed it as a means to help choose journals(1). Unfortunately, the JIF is now often used inappropriately – for example, to evaluate the influence of individual pieces of research or even the prestige of researchers. This metric has recently come under considerable criticism owing to its inherent limitations and misuse. 

The impact factor of a journal is a simple average obtained by considering the number of citations that articles in the journal have received within a specific time frame. A previous article, “The impact factor and other measures of journal prestige”, touched upon its calculation and features(1).
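The two-year calculation alluded to above can be sketched as follows. This is a minimal illustration of the classic formula, not Clarivate's (formerly Thomson Reuters') actual pipeline, and the function name and figures are hypothetical:

```python
def journal_impact_factor(citations_in_year, citable_items, year):
    """Classic two-year JIF for `year`: citations received in `year`
    to items the journal published in the two preceding years,
    divided by the number of citable items published in those years.

    citations_in_year: dict publication_year -> citations received
        during `year` to items published in that publication year.
    citable_items: dict publication_year -> number of citable items
        the journal published in that year.
    """
    window = (year - 1, year - 2)
    cites = sum(citations_in_year.get(y, 0) for y in window)
    items = sum(citable_items.get(y, 0) for y in window)
    return cites / items if items else 0.0

# Hypothetical journal: 48 citations in 2023 to its 2021-2022 output,
# which comprised 40 citable items, giving a JIF of 48/40 = 1.2.
jif = journal_impact_factor({2021: 30, 2022: 18}, {2021: 22, 2022: 18}, 2023)
print(round(jif, 2))  # 1.2
```

Note that, as a simple ratio, the metric says nothing about the citation distribution: a handful of highly cited papers can lift the average for every article in the journal.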

This article goes a little deeper into the fallacies of the impact factor and the points you should consider when using it. Below, I present the ways in which the impact factor of a medical journal can be misused, drawing on an article published on the editage.com website(2):

1. The JIF is a measure of journal quality, not article quality. The JIF measures the number of citations accrued to all the articles in a journal, not to individual articles.

2. Only citations within a two-year time frame are considered. Thus, the true impact of papers cited later than the two-year window goes unnoticed.

3. The nature of the citation is ignored. As long as a paper in a journal has been cited, the citation contributes to the journal’s impact factor, regardless of whether the cited paper is being credited or criticized.

4. Only journals indexed in the source database are ranked. Thomson Reuters’ Web of Science®, the source database for the calculation of the JIF, contains more than 12,000 titles. Although this figure is reasonably large and is updated annually, several journals, especially those not published in English, are left out. Thus, journals that are not indexed in Web of Science do not have an impact factor and cannot be compared with indexed journals.

5. The JIF varies depending on the article types within a journal. Review articles are generally cited more often than other types of articles, because the former present a compilation of all earlier research. Thus, journals that publish review articles tend to have a higher impact factor.

6. Journal impact factor is discipline depen­dent. The JIF should only be used to compare journals within a discipline, not across disciplines, as citation patterns vary widely across disciplines. For example, even the best journals in mathematics tend to have low impact factors, whereas molecular biology journals have high impact factors.

7. The data used for JIF calculations are not publicly available. The JIF is a product of Thomson Reuters®, a private company that is not obliged to disclose the underlying data and analytical methods. In general, other groups have not been able to predict or replicate the impact factor reports released by Thomson Reuters.

8. Journal impact factor can be manipulated. Editors can manipulate their journals’ impact factor in various ways. To increase their JIF, they may publish more review articles, which attract a large number of citations, and stop publishing case reports, which are infrequently cited. Worse still, cases have come to light wherein journal editors have returned papers to authors, asking that more citations to articles within their journal – referred to as self-citations – be added(2).

Conclusions. No numerical measure can replace actually reading a paper and/or trying to replicate an experiment to determine its true value.


Conflict of interest: none declared.

Financial support: none declared.

This work is permanently accessible online free of charge and published under the CC-BY licence.