Challenges facing research scientists in academia

With the "March for Science" rally held on April 22, 2017, in Washington, DC, the Donald Trump administration must have felt pressure to concede to the demands of research scientists. The administration has received severe criticism for reversing climate change policies and reducing funding for academic research projects. This article summarizes the challenges facing research scientists in academia as follows:

1)     Reduction in government grants toward scientific research

Scientists need money to perform research studies. With Donald Trump mulling a further reduction in financial grants for scientific research, scientists will grapple with various issues, as research projects are already struggling at multiple levels. Moreover, research funding has been drying up over the last few decades. Most path-breaking research discoveries happen in projects that last over a decade, while grants allotted by successive governments in the USA last for just three to four years.

In such a scenario, scientists have to seek grants from external sources to cover lab costs, research assistants' salaries, and experimental procedures. The funding received from universities covers only the salaries of the scientists working on projects. The sources of external funding are limited, and most researchers depend primarily on the federal grants provided by the US government.

As funding dwindles, the process of grant approval is becoming stricter. In the year 2000, more than 30% of NIH research proposals received federal grants. Today, the situation is grim, with only 17% of NIH research proposals receiving federal grants.

All these cost-cutting measures have led to a dismal state of affairs: researchers shy away from unconventional subjects today and stick to publishing short papers with a faster turnaround period. Thus, mediocre science is the current state of academia.

2)     Conflict of interest from external sources

As federal grants become highly competitive and meager, scientists turn to industries and commercial establishments to fund their research work. This ultimately leads to conflicts of interest, with many reviewers questioning the authenticity of results. These industries (FMCG, pharma, food, etc.) compel scientists to produce results that favor the commercial prospects of the sponsoring agencies.

3)     The study design of most experiments is biased, all thanks to poor incentives

Most research scientists are compelled to design experimental studies that produce "novel" results, which ensure publication in prestigious journals. "Path-breaking discoveries" do not occur often, so scientists introduce bias early on, in the experimental study design, to embellish results. Pressured to produce "significant results" for publication, scientists feel helpless, as their research careers are at stake. Many scientists manipulate the analysis of results rather than providing an honest assessment of their findings. For example, many biomedical researchers run extensive batteries of statistical significance tests of their results against alternative hypotheses and publish only the "statistically significant" ones, a practice known as "p-hacking."
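To make the mechanics of p-hacking concrete, here is a minimal, purely illustrative sketch in Python (all numbers are hypothetical): it simulates 100 experiments on a fair coin, so there is no real effect at all, yet a few runs still clear the conventional p < 0.05 bar by chance. Reporting only those runs would look like a string of "significant" discoveries.

```python
import math
import random

random.seed(42)  # reproducible illustration

def binom_two_sided_p(k, n, p=0.5):
    """Two-sided exact binomial p-value: probability of an outcome
    at least as unlikely as observing k successes in n trials."""
    def pmf(i):
        return math.comb(n, i) * p**i * (1 - p)**(n - i)
    observed = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= observed + 1e-12)

n_experiments, n_flips, alpha = 100, 100, 0.05
false_positives = 0
for _ in range(n_experiments):
    # A fair coin: the null hypothesis is true, there is nothing to discover
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if binom_two_sided_p(heads, n_flips) < alpha:
        false_positives += 1

# A handful of null experiments still look "significant" purely by chance
print(f"{false_positives} of {n_experiments} null experiments had p < {alpha}")
```

Run enough tests on noise and some will always cross the threshold; the bias comes from publishing only those.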

Can you believe how poor incentives have jeopardized research integrity? More than 30 percent of so-called high-quality medical research papers have now been found to contain exaggerated or wrong results. In monetary terms, this translates to a wastage of $200 billion, that is, 85% of the money spent on scientific research globally.

4)     Peer review process is faulty

Although most journals have a peer review process to improve the quality of manuscripts and to prevent flawed studies from being published, the process seems to be losing its sheen. As peer reviewers are NOT paid by the journals for providing constructive feedback on manuscripts, they do it out of obligation. Many systematic reviews have now found peer review to be a faulty process: it fails to ensure that bad science is NOT published. Time and again, manuscripts with faulty results and plagiarized content have been published. Because the editor and peer reviewers know the authors of a study while the authors do not know the editors and peer reviewers, there can be bias toward researchers from certain institutions and countries.

5)     Scientific research is inaccessible to the public owing to high subscription prices of journals

Publishing a research study in a journal is not enough to disseminate science. Most journals are extremely costly, as leading companies like Elsevier acquire numerous journals and sustain the print model in their vested interest. Most journal articles can be accessed by readers only at a hefty fee. For example, a yearly subscription to the journal Cell costs around $279. If an educational institution subscribes to the 2,000 Elsevier journals for a year, the cost soars to anything between $10,000 and $20,000. Most US universities pay for these journals so their students can access them whenever they want; however, PhD scholars in developing countries like Iran need to shell out of their own pockets, which means they would need at least $1,000 a week to read some novel research papers.

It is indeed a sad story that the common man's taxes fund research studies at universities and government labs, yet the common man has to pay again, a hefty sum, to access this work in scientific journals. Can you believe the annual revenue of Elsevier was pegged at around $3 billion in 2014?


Science is not yet doomed, and there are ways to fix these issues. The process needs to be modified to include more scrutiny and to mitigate bias: this can be achieved by rectifying the peer review process and by ensuring better allocation of federal grants. With more federal grants processed at regular intervals, scientists would be happier to pursue unconventional subjects. The tendency to suppress non-significant results would diminish, leading to better transparency. Thus, the most frequent sources of bias in academic publications and scientific research would be eliminated.

The impact of Open Access Publishing on scientific research

Scientists often rue the fact that scientific research studies have limited viewership due to the high subscription charges of journals. As the world moves from the print to the digital medium, science policy makers have been advocating that "science should be freely available to the common man."

Open Access Model of Scientific Publishing

With this perspective, the Open Access model of Publishing has changed the dynamics of the industry. In the Open Access model, the author usually pays a hefty fee to the publisher to make the article freely accessible to all through web portals.

As journals are digitized, the costs associated with print publication disappear. So, how much does an author really need to pay to get their work published in an Open Access journal? PLOS is the most noted Open Access publisher, and its high costs are primarily associated with technology and labor. In PLOS One, an author usually pays $1,350 for publication. In another Open Access journal, PeerJ, authors are charged a one-time fee of $299 and can publish unlimited papers in the same journal.

Highly selective Open Access journals from BioMed Central and PLOS stipulate fees of $2,700 to $2,900 for authors intending to publish their work. According to a recent survey by researchers at the University of Helsinki and the University of Michigan, Open Access journal publishing is a grey area, with journals charging anything between $8 and $3,900.

According to a leading source working at Hindawi Publishing, an Open Access publisher in Cairo, Egypt, the cost of publishing a single article turned out to be just $290, with 22,000 articles successfully published in a single year. On the other hand, a marketing source at PeerJ concedes that the cost of publishing an article in their journal is about a hundred dollars.

All editors and reviewers working for an Open Access publishing house are volunteers who are NOT paid. The estimated cost of operation for the Open Library of Humanities, a non-profit organization that publishes seven peer-reviewed journals in the Open Access model, is approximately $320,000. There are certain "free" Open Access journals, but their operating costs are borne out of grants received by a university, and their staff are primarily volunteers. In the Open Access model, costs are associated with everything related to online publishing.

Sustainability of Open Access Journals

Traditional publishers lobby for the print format of subscription journals, arguing that Open Access publishing sacrifices the quality of science at the altar of "free dissemination to the public." Elsevier has more than 2,000 journals under the subscription or hybrid model of publishing, and it earned revenue of $1.1 billion in the year 2010, with profit margins of about 36% in the same period.

Open Access publishers seek to cover their costs, and any additional money is kept as a reserve against unforeseen expenses. PLOS keeps some profit margin on its journals, but unlike subscription publishers, it is not bound to share its profits with shareholders. The primary sources of funding for Open Access publishers are university grants and the "article-processing" fees charged to authors.

In the subscription model, universities sign non-disclosure agreements before making bulk subscriptions to journals. In the Open Access model, the author is required to pay an appreciable amount to initiate the publication process and make the work available for free viewership. The Open Library of Humanities, a non-profit organization, sustains itself not only on grants from external foundations but also on fees paid by the libraries availing of its work. The money provided by libraries is more a form of endorsement of the novel process.

As of 2013, there were 8,847 Open Access journals listed in the Directory of Open Access Journals (DOAJ). This number has risen sharply within five years, from only about 5,000 in 2009. According to PLOS, the Open Access model is flourishing, with 12% of peer-reviewed articles in STEM disciplines being published in Open Access journals. The NIH has drafted a policy stating that the results of scientific studies must be freely available on the internet within one year of publication.

The lure of the subscription model still exists due to the "high impact factor" of its journals

Although the real cost of publishing is low and peer reviewers are not paid for their academic editing work even at subscription journals, it is the "high impact factor" that attracts researchers to the subscription model. For example, the impact factor of the subscription journal Science is 34.661, whereas that of PLOS One, the most noted Open Access journal, is just 3.234.

With most universities not considering new scientists whose publications are in journals with an impact factor below 5, the threat to their budding research careers discourages young scientists from pursuing the Open Access movement. At this juncture, Open Access journals are favored mostly by seasoned scientists, who are propagating a shift in science policy.

Changes in the scientific publishing industry

Today, most subscription journals are drifting toward the hybrid model, an offshoot of Open Access publishing in which authors pay a large sum of money to the subscription journal to make their article open access. For example, the subscription journal Cell launched its hybrid journal Cell Reports in 2014; authors are charged $5,000 to make their work freely accessible to all. With the Open Access movement, the role of traditional scientific publishers is being reduced to that of middlemen.


The impact of Open Access on scientific publishing can be quantified with the latest sales data: the subscription model of journals was previously a $100 billion industry in terms of revenue. As of 2010, the Open Access model of publishing had eaten up 3% of this market share and was worth $100 million in revenue. This is because of the drift from print to the internet (digital media): as of 2010, the print vs. digital split for scientific papers stood at a 40:60 ratio.

Has SCI publishing really benefitted the masses from medical research?

The cat is out of the bag with a recent report presented by Dr. John Ioannidis on how medical ethics have been grossly compromised over the last century. Medical research results are manipulated to favor the sponsoring pharmaceutical companies, which raises the most important question: do the lives of common people not matter at all to government and state agencies?

Clinical Trials sponsored by pharma companies cause conflict of interest

In clinical trials sponsored by drug companies, the clinical outcome is NOT measured in terms of "survival vs. death"; emphasis is laid only on symptoms reported by subjects, such as "chest pain," "fever," or "vomiting." While reporting improvements in patients' conditions, these research studies do not explain whether the administered drug actually had an effect on the patient's condition. In other words, statistical analyses are not conducted to show whether the novel drug indeed produces a prognostic effect that is more than marginal.

All these findings were reported by Georgia Salanti, a biostatistician assisting Prof. John Ioannidis, who practices and teaches at the medical school of the University of Ioannina. How did drug companies so successfully manage to introduce their novel drugs with successful clinical trial results? What was the secret of their magic formula?

Manipulation begins NOT at the statistical analysis but at the experimental study design

Even before data crunching and statistical analyses begin, drug companies carefully choose their hypotheses. For example, the experimental study is designed so that their novel drug is pitted against drugs already proven to be less effective in previous studies. It is the questions posed in the analyses, not the answers, that introduce the biases, says Prof. Ioannidis. The moot point now is: can medical research studies really be trusted?

How has Prof. Ioannidis grappled with this topic throughout his career in medical research? He specializes in conducting meta-analyses of research studies, and his expertise in this kind of work has made him a global name in medical research.

Physicians providing misleading advice to patients thanks to these studies

Much of what biomedical scientists report in their published work is fabricated to suit the needs of the sponsoring agencies. These studies provide misleading information to physicians, and most physicians are well aware of the drug lobby, quite possibly hand-in-glove with these commercial agencies.

So, there may be cases where a patient simply had ordinary chest pain but had to undergo angioplasty because the physician diagnosed it as myocardial infarction (heart attack). There may also be instances where a simple medication could have cured an annual flu attack, but the physician prescribed expensive antibiotics instead. According to the noted meta-researcher Prof. Ioannidis, almost 90% of the results published in medical journals are either misleading or simply amplified to suit the drug lobby.

Prof. Ioannidis, a noted medical researcher with expertise in meta-analyses

What are the real credentials of Prof. Ioannidis in the medical research community? His findings have mostly been published, and highly cited, in the most noted medical journals, and he is a leading speaker at medical conferences all over the world. Nevertheless, medical ethics have been tampered with so rampantly in these studies that the results are mere embellishments, not innovations. "Conflict of interest" is declared merely as a formality to enter medical journals, when the fact is that most studies do have conflicts of interest, as they are sponsored by pharma companies, which are commercial establishments.

Prof. Ioannidis first came across rampant malpractice in research studies as early as the 1990s, while working as a young medical researcher at the prestigious Harvard Medical School in the USA. In that era, studies focusing on rare diseases had limited data from previous work, and most medical researchers preferred rules of thumb to statistical analyses. However, most medical researchers investigating common diseases, such as cancer, diabetes, and heart illness, also followed the same principle. The "hard data" illustrating the probability of "survival vs. death" should actually govern the medical diagnosis of patients; however, this data was NOT reported in most studies.

The novel arena of "evidence-based research" looked promising to young researchers in the 1990s, and Prof. Ioannidis joined the fray, working at the following prestigious medical institutions: Tufts University, Johns Hopkins University, and the National Institutes of Health. Although he was a math genius in school in Greece, Prof. Ioannidis decided to emulate his illustrious parents, who were themselves renowned medical researchers. "Contradictory results" are not an uncommon phenomenon in medical research studies. For example, recent studies have indicated that mammograms, colonoscopies, and PSA tests are not really useful in detecting cancer, unlike studies of a previous era that reported otherwise. Furthermore, the efficacy of antidepressant drugs like Prozac has been questioned in recent research studies, which found it no greater than that of a placebo. In the previous era, most doctors recommended constantly replenishing the body with fluids during intense workouts; the current generation of medical researchers is questioning the health outcomes of this advice.

Prof. Ioannidis is today spellbound at how peer-reviewed studies employing "randomized clinical trials" produce absolutely antagonistic results: a case in point is whether the extensive use of cell phones causes brain cancer. Thus, the "randomized clinical trial," previously considered the gold standard of medical research, is today being questioned for its ability to produce reproducible results across independent research studies.

Plausible causes for studies with antagonistic results on the same topic

So, how are so many studies on the same topic or condition coming up with conflicting results? The answer lies in the errors introduced by researchers at various levels: i) the questions the researchers asked while examining the subjects; ii) the study design used to address an objective; iii) the inclusion criteria laid down for the subjects; iv) the various medical parameters examined during the study; v) the statistical tests used for data analyses; vi) the results reported in these studies; and finally, vii) the publication of these studies in medical journals of various impact factors.

The extreme pressure of noted medical journals: only “novel results” are published

What makes medical researchers compromise their ethics? It is the extreme pressure to receive funding for their work, so the data is easily manipulated to suit the vested interests of the funding agencies. Manipulation of results may be deliberate or inadvertent. Why is the pressure so extreme? Because it is NOT enough just to publish medical research in journals: the impact factor of the journal decides the prospects of the researcher. Most noted medical journals have a rejection rate of more than 90%; thus, only "novel studies with innovative results" make the cut in these journals.

Though Prof. Ioannidis had to carry out his research work in the form of meta-analyses for many years, he continued, and his efforts finally bore fruit in the Open Access journal PLOS Medicine. The journal publishes all methodologically sound medical papers, regardless of whether the results are "innovative."

Final remark: SCI journals have failed medical research completely in their quest to publish "innovative results" rather than "true results."

According to the model put forth by Prof. Ioannidis, medical research studies are grossly flawed, with the rates at which so-called "novel findings" are substantially proved wrong being alarmingly high. Consider his astounding statistical report: the most common type of study design, the non-randomized clinical trial, ultimately proves wrong in terms of results in 80% of cases. Furthermore, the randomized trials that serve as the gold standard prove wrong in as many as 25% of instances. Strange but true, even the highest-quality "platinum standard" studies involving large randomized trials have a 10% chance of being wrong.
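Numbers like these follow from a simple positive-predictive-value calculation, the reasoning behind Prof. Ioannidis' well-known 2005 essay "Why Most Published Research Findings Are False": the share of "significant" findings that are actually true depends on the prior odds that a tested hypothesis is true, the study's statistical power, and the significance threshold. A minimal sketch in Python, with illustrative inputs of our own choosing rather than figures from the article:

```python
def ppv(prior_odds, power, alpha):
    """Positive predictive value: the fraction of statistically
    'significant' findings that reflect a true effect."""
    true_positives = power * prior_odds   # true hypotheses correctly detected
    false_positives = alpha               # false hypotheses passing the threshold
    return true_positives / (true_positives + false_positives)

# Illustrative numbers: 1 in 10 tested hypotheses is true, 80% power, alpha = 0.05
print(round(ppv(0.1, 0.8, 0.05), 2))  # -> 0.62
```

Under these assumed inputs, nearly four in ten published "discoveries" would be false even without any deliberate manipulation; lower prior odds or lower power push the figure higher still.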

Altmetrics: an important tool for measuring research impact

With the social media wave gripping scientific publishing, the impact a research article has on future studies is today being measured by an innovative tool: altmetrics. The conventional tools for assessing the impact of scholarly publications, such as the journal impact factor, the peer review process, and the h-index, are now being considered redundant amid the metamorphosis of academic publishing.

With most scientists propagating online channels for publication, academic social networks have gained significance. Mendeley is one such academic social network cum reference manager; it has become a repository of 40 million research articles, thereby exceeding PubMed, the US government's repository of biomedical articles. With this novel approach, previously uncited articles are now gaining effective visibility and are being shared by collaborators all across the world.

Definition and scope of altmetrics

In the new-age world of academic social networks, altmetrics is the tool that defines the impact of a research article across various online channels. On many collaborative platforms, scientists now share "raw datasets" and "experimental study designs" before preparing manuscripts for journals. In recent times, we have also seen a number of "semantic publication units," which contain just a citable passage of an article rather than the entire article. Altmetrics constitutes the impact created by all these composite traces across online channels.

Impact of altmetrics on peer review

Previously, peer review was a slow process that relied on overburdened researchers from advanced countries. Today, we can gauge the impact of a research article simply by collecting the number of shares, reads, and bookmarks it receives on an academic social network or repository. This means a crowd-sourced review process can be completed within just one week. Many Open Access journals, such as PLOS, PeerJ, and BMJ Open, are now considering this innovative approach to accelerate the peer review process.

A comparison of altmetrics with conventional tools

Altmetrics is a truer measure of the impact created by an article. The Journal Impact Factor only indicates a journal's average citations per article, thereby restricting its meaning to the reference frame of a single journal. In contrast, altmetrics summarizes the impact created by an individual article across various online platforms: academic and non-academic, including uncited articles and articles published without peer review. Although traditional researchers argue that altmetrics cannot reflect the quality of an article in terms of novelty, we argue that the JIF is a tool that can be manipulated very extensively.
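For context, the Journal Impact Factor mentioned above is just an average: the citations a journal receives in a given year to items it published in the previous two years, divided by the number of citable items from those two years. A sketch with made-up counts (Python):

```python
def impact_factor(citations_to_prev_two_years, items_prev_two_years):
    """Journal Impact Factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by citable items from Y-1 and Y-2."""
    return citations_to_prev_two_years / items_prev_two_years

# Hypothetical journal: 1,200 citations in 2017 to 350 articles from 2015-2016
print(round(impact_factor(1200, 350), 3))  # -> 3.429
```

Because it is a journal-level average, a handful of heavily cited articles (or citation-boosting editorial practices) can lift the figure for every article the journal publishes, which is exactly the manipulability the paragraph above refers to.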

What do altmetrics truly reflect in various categories?

1) The attention received by the article on online channels: Complex in-built algorithmic tools determine the reach, shares, and popularity of the article. For example, the metric tool will show you its shares or mentions on news websites, Twitter, etc., through the "impressions" tab. Page views and downloads can help you understand whether the article is well received.

2) Dissemination of the article in quantitative terms: These tools let a researcher know whether the article is being shared or discussed in a community of researchers or in the public sphere. For example, they will show you mentions of the article on news websites and authoritative blogs.

3) The impact and influence created by the article: Altmetrics tools gather data about the article to gauge its impact. With qualitative analysis of this data, one can understand the following:

·        General comments from various researchers on the article, constituting constructive feedback.

·        The various journals, magazines, and academic networks around the world in which the article is being cited.

·        How many people have read the article on various online channels.

·        Whether or not the article is being reused in other research publications.
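Conceptually, an altmetric attention score boils these traces down to a weighted count of mentions per channel. The sketch below is purely illustrative: the channel names and weights are invented for this example, as real providers use their own (often proprietary) weighting schemes.

```python
# Hypothetical weights; real altmetrics providers define their own schemes
WEIGHTS = {"news": 8, "blog": 5, "twitter": 1, "reader_bookmark": 0.25}

def altmetric_score(mentions):
    """Weighted sum of an article's mentions across online channels.
    Unknown channels contribute nothing."""
    return sum(WEIGHTS.get(channel, 0) * count for channel, count in mentions.items())

article = {"news": 2, "blog": 3, "twitter": 40, "reader_bookmark": 120}
print(altmetric_score(article))  # -> 101.0 (2*8 + 3*5 + 40*1 + 120*0.25)
```

The weighting reflects the idea that a news story signals broader impact than a tweet, while bookmarks capture scholarly readership.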

In summary, altmetrics enables qualitative data analysis of research publications. It is also faster than the conventional citation-based metrics, befitting the perennial shift of researchers from print media to online channels.