Thanks to advances in computer algorithms and data science, several metrics and indicators are now available for evaluating the quality of research papers. The impact of a research paper can be evaluated with metrics designed to suit the needs of an individual researcher, a team of researchers, a department of an institution, and so on.
Peer review is a qualitative process, whereas metrics are quantitative parameters, and the choice of metrics depends on the goal and subject of a research study. Despite receiving funding for metrics research, scientists still do not have high-quality metrics to evaluate the findings of a research study that presents global data. Despite their shortcomings, journal-level indicators are still used as parameters for evaluating research quality.
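To make the idea of a journal-level indicator concrete, here is a minimal sketch of one well-known example, the classic two-year impact factor. The text does not name a specific indicator, so this example, its function name, and its numbers are illustrative assumptions, not part of the source.

```python
def two_year_impact_factor(citations_this_year, citable_items_prior_two_years):
    """Citations received this year to items the journal published in the
    previous two years, divided by the number of citable items in those years."""
    if citable_items_prior_two_years == 0:
        raise ValueError("no citable items in the two-year window")
    return citations_this_year / citable_items_prior_two_years

# Hypothetical journal: 480 citations in 2023 to papers from 2021-2022,
# which together contained 160 citable items.
print(round(two_year_impact_factor(480, 160), 2))  # 3.0
```

The sketch also illustrates a shortcoming mentioned above: a single journal-level number says nothing about the quality of any individual paper in that journal.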
A 2014 conference on Science and Technology Indicators (STI) was held in Leiden, the Netherlands. The panel consisted of eminent professors and professionals from Elsevier, a leading STM publisher based in the Netherlands. They felt there was an urgent need to standardize the quality of the metrics and indicators used to evaluate the content and data presented in research studies.
For this purpose, they launched the “Snowball Metrics” initiative: international universities agreed to collaborate and develop a set of standards for evaluating the quality of research in terms of both output and research methodology. This is a collaborative effort between the Director of Research Metrics at Elsevier and eminent professors from leading universities around the world. The main points of Elsevier’s vision are as follows:
- The entire workflow of a research study should be evaluated on various aspects with the help of several metrics.
- Peers in the subject area of a research study must be allowed to select their own set of metrics.
- It should be possible to generate and use metrics in an automated fashion and at a scalable level.
- In general, the focus of a research study is to answer a question related to science, technology, or medicine. Metrics evaluate the study’s data quantitatively, but the data should also be validated qualitatively with evidence to ensure that the findings of the research study are complete.
- Quantitative input from multiple metrics is the most reliable.
- Metrics are affected by characteristics such as discipline; these characteristics must be taken into account even though they do not reflect the performance of a research study.
- Metrics can be used transparently only when researchers avoid manipulative “gaming.” Nevertheless, some researchers still use metrics in irresponsible and incorrect ways.
- Researchers who use metrics on a daily basis should be given the responsibility of defining the set of metrics they need. Only this community of researchers should come up with the definitions of the various metrics.
- There should never be any black boxes in research methodologies.
- Aggregations of metrics should never be used; composite metrics never reflect the true value of the data presented in a research study.
- The methodologies used to evaluate the quality of metrics should be independent of the data sources and of the tools used to generate the data. Moreover, the business models and access codes used to gather the underlying data should never be affected by metrics.
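One of the principles above calls for metrics that can be generated automatically and at scale, with no black boxes. As a minimal sketch of what that might look like, here is a fully transparent computation of the h-index, chosen only as a familiar example; the metric and the citation counts are assumptions, not drawn from the text.

```python
def h_index(citation_counts):
    """Largest h such that the author has h papers with at least h citations
    each. The whole computation is open to inspection: no black box."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

# Hypothetical author with five papers and these citation counts:
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Because the definition is explicit and the input data is plain, such a metric can be recomputed by anyone over any number of researchers, which is what automation and scalability require.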