The Significance of Journal Impact Factor in Academic Publishing

Most academics who have published papers in scientific journals are familiar with the term “impact factor.” So, what exactly is the impact factor of a journal? It is a metric by which scientific journals are ranked. Thomson Reuters maintains a database of journal impact factors. Although this database is primarily used as a library resource, journals also use their impact factors to attract submissions.

Impact factor is a crucial yet controversial metric in scientific publishing. Based on the impact factor of a journal, scientists decide whether it is a suitable venue for their work. The impact factor describes how visible a journal is: in general, journals with a high impact factor are considered prestigious in their field.

How did journal impact factor gain significance in academic publishing?  

Academic publishers felt that journals should be ranked according to their impact or significance. To address this concern, they devised a metric known as the “journal impact factor,” which reflects how frequently a journal’s papers are cited.

The origins of the impact factor date back to 1955, when, in an issue of the journal Science, the information scientist Eugene Garfield first expressed the need for a metric that ranks journals on the basis of their impact on research. Garfield later worked with Irving Sher, his colleague in information science, and together they introduced the impact factor in 1960.

The journal impact factor ranked all scientific journals irrespective of differences in their size and circulation. The impact factors of all scientific journals were presented in a database termed the “Science Citation Index (SCI).” This database was first published by the Institute for Scientific Information, which Garfield founded. Later, the database was rechristened “Journal Citation Reports (JCR)” and was published by Thomson Reuters.

How is the impact factor of a journal calculated?

Eugene Garfield counted the citations received in a given year by the papers a scientific journal had published over the preceding two years. He then divided this number by the total number of citable items that journal had published in the same two-year period.
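The calculation above can be sketched in a few lines of Python; the journal and all numbers below are invented purely for illustration:

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year impact factor for year Y: citations received in Y
    by items the journal published in Y-1 and Y-2, divided by the
    number of citable items it published in Y-1 and Y-2."""
    if citable_items_prev_two_years == 0:
        raise ValueError("no citable items in the two-year window")
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2016 to its 2014-2015
# papers, of which 300 were citable items.
print(impact_factor(1200, 300))  # 4.0
```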

As research proceeds at a different pace in different fields of study, Garfield compared a journal only with other journals in the same field. In other words, a medical journal was compared with other medical journals, and a journal on ecology was grouped with other ecology journals.

Although the impact factor is a property of an individual journal, it is also strongly affected by the pace of research in the field. In 2009, the highest impact factor for a scientific journal was 87.925, whereas the next highest was only about 50. Thus, the field of study and its associated research activity significantly affect a journal’s impact factor.

The JCR is published by Thomson Reuters every year in June. For example, the database published in 2016 presents the journal impact factors for 2015. About 9,000 journals were included in the JCR database of 2009. However, this database covers only about 25% of all published journals, and it mainly comprises English-language journals.

Why it is necessary to know about a journal’s impact factor

According to Eugene Garfield, impact factor is a metric that reflects a journal’s prestige in the scientific community. Scientists often check a journal’s impact factor to decide whether it is a suitable venue for their work. By publishing their papers in journals with a high impact factor, scientists can gain more respect in their community, along with other benefits such as better access to research funding, tenure, recruitment to prestigious institutions, and promotions at universities. Nevertheless, the journal impact factor cannot be considered the sole criterion for the integrity of a journal or a research study.

Journal editors naturally try to increase the impact factor of their journals. Sometimes, editors ask authors to add citations to the journal in papers submitted for publication. This is an unethical practice and should not be condoned under any circumstances. The impact factor is a metric from information science: it does not measure the quality of an individual research work.

The controversies and problems of journal impact factor 

The indiscriminate use of the journal impact factor in academic hiring and evaluation has been severely criticized by many information scientists, including Garfield himself. The significance of an author’s research work cannot be estimated solely from the journal’s impact factor. Impact factor should always be considered along with other parameters of evaluation, such as the peer review process.

It should be noted that smaller fields of study attract fewer citations, so journals in these niche fields have lower impact factors, even though they may contain path-breaking research. The impact factor of a journal should therefore be compared only with that of journals in the same field of study, and it is not in itself an indication of the significance of a research work.

At this stage, it is also necessary to point out a problem associated with prestigious journals. Because these journals have a high impact factor, it is very difficult to get a paper accepted: their rejection rates can be as high as 75%. Remember, the main aim of a researcher is to get their work published in a peer-reviewed journal. Therefore, researchers must not limit their efforts to high-impact journals; they should consider all other factors while deciding which journal is most suitable for their work.

Which metrics can be considered as good alternatives to journal impact factor?

Since the significance of the impact factor has been controversial, researchers are advised to consider alternative metrics, such as the SCImago Journal & Country Rank, the h index, Scopus, and the Eigenfactor. In 2005, the physicist Jorge E. Hirsch developed the h index: an author has an h index of h if h of their published papers have each received at least h citations. In other words, it evaluates both the productivity and the citation impact of an author.
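A minimal sketch of the h-index calculation, with invented citation counts:

```python
def h_index(citations):
    """Largest h such that the author has h papers with
    at least h citations each."""
    h = 0
    # Rank papers from most to least cited; h grows while the
    # paper at rank r still has at least r citations.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# An author with papers cited 10, 8, 5, 4, and 3 times has h = 4:
# four papers each have at least four citations.
print(h_index([10, 8, 5, 4, 3]))  # 4
```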

The Eigenfactor score is computed from Web of Science citation data. It measures how often the articles a journal published over the preceding five years are cited, and thus estimates how influential the journal is in its field of study. The SCImago Journal & Country Rank provides a database of journals ranked by their visibility and organized by type, with comprehensive coverage of international publications. Finally, Scopus is an abstract and citation database published by the scientific publisher Elsevier.

How should journal impact factor be used by researchers in academia?

Although journal impact factor is an important metric to be considered before publishing a paper in a scientific journal, it should never be considered as the sole criterion for evaluating the quality of a journal. The decision to submit and publish a paper should never be made on the basis of the journal’s impact factor. It is always essential to assess the scope and objectives of a journal and then determine the possibility of your paper getting published in that journal. Harrisco is a company that provides complete publication support to authors and can help authors in journal selection, peer review, language editing, and translation. Harrisco is a name to reckon with in the academic publishing industry as it has been in business since 1997.


What is real research impact: downloads or citations?

The world of scientific publishing has undergone a metamorphosis, with most scientific articles now being published online. Many concerted efforts have been made to develop new tools that measure the impact of scientific work. Rather than waiting for citations to accumulate in print, these tools help us gauge the impact of articles in the online medium.

One of the most prominent of these metrics is the “download impact factor,” defined, by analogy with the journal impact factor, as the rate at which articles are downloaded from a journal. Another prominent tool is the Journal Usage Factor, which is calculated from the median rather than the mean of article downloads. Many social media metrics also exist, but they estimate attention through clicks rather than through download logs.

Both citation counts and download logs have been used to measure journal impact; no single indicator can capture it alone. Most researchers now believe that download-based indicators carry greater weight today, given the firm hold of online media.

A journal’s download frequency is not directly determined by its impact factor. In absolute terms, there is a strong correlation between a journal’s citations and its download frequency, but only a moderate correlation between download numbers and the journal impact factor.
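As a rough illustration of how such a correlation can be checked, the snippet below computes a Pearson correlation over per-journal download and citation totals; all numbers are invented for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical totals for five journals: downloads vs. citations.
downloads = [12000, 8500, 3000, 1500, 900]
citations = [450, 300, 120, 60, 40]
print(pearson(downloads, citations))  # close to 1: strong correlation
```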

Scopus is a very useful tool for measuring citation data, whereas ScienceDirect can be used to measure the number of downloads. Together, the two tools are used to understand the relationship between downloads and citations, and hence their influence on publication output. In such comparisons, the Scopus citation counts exclude conference papers and abstracts, whereas the ScienceDirect download counts cover all kinds of papers.

Citation data from Scopus reflect the time it takes for a paper to be cited, whereas download counts reflect the innovative value of a paper more immediately. In each subject area, “excellent” papers were those with a large number of mean downloads.

In both English and non-English journals, there was a strong correlation between downloads and citations. However, there were journals whose papers were downloaded in great numbers without those downloads translating into citations.

For individual papers, the correlations are weaker than those for journals; however, they remain statistically significant given the large sample size. The number of downloads depends on how widely a journal circulates, not on the novelty of a paper; today, the quality of a paper is reflected in its citations. Journals with wide circulation and diffusion attract many downloads, but this does not necessarily correspond to citations.

Papers published in low-impact journals tend to receive fewer downloads, regardless of whether they receive many citations later. This implies that download data cannot be considered a predictor of citations, especially when the journal has little visibility in its early years.

In English journals, the number of downloads per paper is slightly lower than the number of citations, whereas in non-English journals it is slightly higher. In non-English journals, the correlation between citations and downloads is also much weaker.