The Significance of Journal Impact Factor in Academic Publishing

Most academics who have published papers in scientific journals are familiar with the term “impact factor.” So, what exactly is the impact factor of a journal? It is a metric by which scientific journals are ranked. Thomson Reuters, an information and analytics company, maintains a database of journal impact factors. Although this database is primarily used as a library resource, journals also use their impact factors to attract submissions.

Impact factor is a crucial yet controversial metric in scientific publishing. Scientists often decide whether a journal is a suitable outlet for their work based on its impact factor, which broadly reflects the journal’s visibility. In general, journals with a high impact factor are considered prestigious in their field.

How did journal impact factor gain significance in academic publishing?  

Academic publishers felt that journals should be ranked according to their impact or significance. To address this concern, they devised a metric known as the “journal impact factor,” which indicates how frequently a journal’s papers are cited.

The origins of the impact factor date back to 1955, when the information scientist Eugene Garfield, in an issue of the journal Science, first expressed the need for a metric that ranks journals on the basis of their impact on research. Garfield worked with Irving H. Sher, his colleague in information science, and together they introduced the impact factor in the early 1960s.

The “journal impact factor” ranked all scientific journals while adjusting for differences in their size and circulation. The impact factors of all scientific journals were presented in a database termed the “Science Citation Index (SCI),” first published by the Institute for Scientific Information, which Garfield founded. Later, the database was rechristened the “Journal Citation Reports (JCR)” and was published by Thomson Reuters.

How is the impact factor of a journal calculated?

Eugene Garfield counted the citations received in a given year by the papers a scientific journal had published over the preceding two years. He then divided this number by the total number of papers the journal published in those two years.
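
Expressed as a formula, using the 2016 reporting year from the JCR example below:

\[
\mathrm{IF}_{2016} = \frac{\text{citations received in 2016 by papers published in 2014 and 2015}}{\text{number of citable papers published in 2014 and 2015}}
\]

For example, with hypothetical numbers: if a journal published 200 papers in 2014–2015, and those papers were cited 500 times in 2016, its 2016 impact factor would be 500/200 = 2.5.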

As research is carried out at a different pace in different fields of study, Garfield compared a journal only with other journals in the same field. In other words, a medical journal was compared with other medical journals, and a journal on ecology was grouped with other ecology journals.

Although the impact factor largely depends on the journal itself, it is also affected by the pace of research in its field. In 2009, the highest impact factor for a scientific journal was 87.925, while the next highest was only about 50. Thus, the field of study and its research activity significantly affect the impact factor of a journal.

JCR is published by Thomson Reuters every year in June. For example, the database published in 2016 presents journal impact factors for the period 2014–2015. About 9,000 journals were included in the JCR database of 2009. However, this database covers only about 25% of all published journals, and it mainly comprises English-language journals.

Why it is necessary to know about a journal’s impact factor

According to Eugene Garfield, the impact factor reflects a journal’s prestige in the scientific community. Scientists often consult a journal’s impact factor to decide whether it is a suitable outlet for publication. By publishing their papers in high-impact journals, scientists can gain more respect in their community, along with other benefits such as better access to research funding, tenure, recruitment to prestigious institutions, and promotions at universities. Nevertheless, the journal impact factor cannot be considered the sole criterion of the integrity of a journal or a research study.

Journal editors naturally strive to increase the impact factor of their journals. Sometimes, editors ask authors to add citations to the journal in papers submitted for publication. This is an unethical practice and should not be condoned under any circumstances. The impact factor is a metric from information science: it does not measure the quality of an individual piece of research.

The controversies and problems of journal impact factor 

The indiscriminate use of the journal impact factor in academic hiring and evaluation has been severely criticized by many information scientists, including Garfield himself. The significance of an author’s research cannot be estimated solely from the journal’s impact factor; the metric should always be considered alongside other parameters of evaluation, such as the peer review process.

It should be noted that smaller fields of study attract fewer citations, so journals in these niche fields have lower impact factors, even though they may publish path-breaking research. The impact factor of a journal should always be compared only with those of journals in the same field, and it is not in itself an indication of the significance of a piece of research.

At this stage, it is also necessary to point out a problem associated with prestigious journals. Because these journals have a high impact factor, it is very difficult to get published in them: their rejection rates can be as high as 75%. Remember, the main aim of a researcher is to get their work published in a peer-reviewed journal. Therefore, researchers should not limit their efforts to high-impact journals; they should weigh all other factors when deciding which journal is most suitable for their work.

Which metrics can be considered as good alternatives to journal impact factor?

Since the significance of the impact factor is controversial, researchers are advised to consider alternative metrics, such as the SCImago Journal & Country Rank, the h-index, Scopus, and the Eigenfactor. In 2005, the physicist Jorge Hirsch developed the h-index, which combines an author’s number of published papers with the citations those papers have received: an author has an h-index of h if h of their papers have each been cited at least h times. In other words, it evaluates the productivity and impact of an author in academia.
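
As a minimal sketch of how the h-index is computed (the citation counts below are hypothetical):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers with at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 4,
# because four papers have been cited at least 4 times each.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```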

The Eigenfactor is computed from Web of Science citation data. It measures how often a journal’s papers are cited over a period of five years and thus gauges how influential the journal is in its field. The SCImago Journal & Country Rank is a database of journal rankings, based on citation visibility and organized by subject category, that is derived from Scopus data and comprehensively covers international publications. Scopus itself is an abstract and citation database published by Elsevier.

How should journal impact factor be used by researchers in academia?

Although the journal impact factor is an important metric to consider before submitting a paper to a scientific journal, it should never be treated as the sole criterion of a journal’s quality, and the decision to submit should never be made on the basis of impact factor alone. It is always essential to assess the scope and objectives of a journal and then gauge the likelihood of your paper being accepted there. Harrisco is a company that provides complete publication support, helping authors with journal selection, peer review, language editing, and translation. Harrisco is a name to reckon with in the academic publishing industry, having been in business since 1997.

 

Tips for being a good peer reviewer

Peer review is an integral part of scholarly communication, and it is a matter of pride for a researcher to receive an invitation to peer review. The process is carried out for every manuscript intended for publication in a journal or a book.

To perform the peer review of an article, journal editors invite only researchers who have done valuable and commendable work in the relevant field of expertise. These distinguished academics are entrusted with evaluating the manuscript of another researcher in the same field.

What is peer review in scholarly publishing?

Although it is honorable to be a peer reviewer, there are a lot of responsibilities associated with this position. The main goal of peer review is to determine whether the work of another researcher is good enough to be published in a scientific journal.

According to the editor of the journal Biochemia Medica, a peer reviewer is expected to analyze the manuscript of another researcher objectively. After thoroughly examining the manuscript, the peer reviewer has to provide constructive feedback to the author in the form of comments.

Depending on the quality of the research, peer reviewers may consider a manuscript commendable enough for publication, or they may reject it for poor presentation of the scientific facts.

Tips for being a good peer reviewer of journal articles

1. Acceptance or rejection of invitation: A peer reviewer has to consider many factors while deciding whether to accept or reject the invitation sent by a journal editor. Among these factors, subject matter expertise is of prime importance.

A peer reviewer must first go through the abstract of the article to gauge whether it falls within his or her expertise. Although journal editors extend their invitations to distinguished academics, a researcher may still not be an expert on that particular topic. In such instances, the reviewer may decline the journal’s invitation.

Another important factor is time constraints. Researchers are normally busy people, working about 50 hours a week: they have to carry out experiments, collaborate with other laboratories, and work on their own manuscripts.

However, they may spend some hours on weekends exploring the work of other researchers. In general, journal editors give reviewers about three weeks to complete a peer review. If a researcher is pressed for time, they should politely decline the invitation.

2. Academic misconduct: In a manuscript, researchers often have to cite related studies. ESL authors can find it hard to paraphrase the findings of previous research and are therefore sometimes accused of plagiarism by journal editors.

Although the academic community may trust its peers, reviewers should always check a manuscript for plagiarism; in this way they can detect academic misconduct. Peer reviewers have the authority to reject a paper over issues of plagiarism.

Sometimes, authors may present exemplary results in their manuscript. A peer reviewer should double-check such results by repeating the experiment in his or her own laboratory with the same equipment; in this way, reviewers can catch authors who present false data.

3. Scope and objective of the journal: A peer reviewer should always look at the scope and objectives of the journal, and the journal’s target audience should also be considered. If a researcher has received a journal editor’s invitation for the first time, he or she should make it a point to read a few of the journal’s published papers. The author information presented on the journal’s website should also be consulted.

4. Title of the article: A peer reviewer should carefully judge the suitability of the article’s title. The title should be written in lucid language, should avoid unnecessary jargon, and should clearly reflect the content of the article. Although a peer reviewer may suggest improvements to the title, the author should not be compelled to adopt the reviewer’s own style.

5. Review the article’s content: The main objective of the peer review process is to determine the novelty of the results presented in the manuscript. A reviewer has to peruse the document to understand whether the content adds something new to the field.

The viewpoints of peer reviewers may be subjective, but they can certainly make the process more transparent. For this purpose, reviewers must check whether the manuscript is concisely summarized in the abstract. Moreover, the references in the bibliography must be precise, reliable, and sufficient to support the claims made in the text.

A peer reviewer should thoroughly check whether any citations are missing from the reference list and point out such errors in the feedback to the author. A reviewer should also check whether the author has justified all claims with adequate data and results; if not, the reviewer must suggest ways to justify the arguments and claims.

The author has to provide sufficient data for the results to be reproducible. A peer reviewer is not required to point out English-language errors or inconsistencies in citation style; however, the reviewer should mention the need for copy-editing in the comments to the author.

6. Accept or reject decision: A peer reviewer will rarely come across a paper that needs no suggestions for improvement. If a paper has only minor issues to correct, the reviewer’s decision should be “Accept with minor revisions.” This is a favorable outcome for most authors, and the reviewer’s job is then complete.

If the author has presented novel results but has not provided sufficient evidence, a peer reviewer may suggest major rewriting of the paper. The decision would then be “Accept with major revisions”; some journals prefer to call this decision “Revise and resubmit.” In such cases, the paper may be submitted for a second round of peer review.

If the paper is poorly written and offers no novelty, the reviewer would not recommend it for publication, and the decision would be an outright “Reject.” There are also instances where the content of the paper does not match the scope and objectives of the journal.

In such cases of mismatch, the authors must carefully consider another journal; peer reviewers, for their part, should not “reject” the paper outright but should suggest the names of more appropriate peer-reviewed journals.

 

 


How ESL researchers can overcome the obstacles of English journal publishing

Today, most scientific research papers are authored by scientists who are not native speakers of English; countries such as China, Japan, South Korea, Italy, France, and Germany are aggressively promoting scientific research. These researchers have to publish their work in international peer-reviewed English journals, because English is the lingua franca of scientific publishing. However, as non-native speakers, they face many difficulties in writing their manuscripts in English, along with tremendous obstacles in the publication process.

Obstacles that ESL researchers face in English journal publishing

Linguistic issues: Most scholars in ESL (English as a second language) countries face problems with the English language. For example, non-native speakers often draft a manuscript in their native language and translate it into English, in which case they need the help of a native English-speaking researcher to polish the text. The process of writing a manuscript thus becomes more tedious, lengthy, and costly, and all this effort is expended to meet the rigorous demands of the peer-review process.

Plagiarism issues: In a manuscript, researchers often have to cite previous studies, which involves summarizing related work. ESL researchers are often caught up in plagiarism issues because they find it difficult to express these statements in their own words. Moreover, ESL researchers are often unfamiliar with the style guides of English journals.

Publication bias, scarcity of funding, and lack of international collaborations: Most ESL researchers have no connections with the key members of a journal, and journal editors can be biased when reviewing their work. Moreover, researchers from developing countries often lack sufficient research funding, so their work is limited to their own countries. Sometimes they may collaborate with neighboring countries, but international collaborations are rare in such studies.

Non-conducive environment in sub-Saharan Africa: Scholars in the sub-Saharan region of Africa face tremendous challenges, as the environment is simply not conducive to scholarly publication. Besides an economic crunch, they have to overcome socio-political barriers and technological issues. Academic conferences are rarely held in the region, and the problem of “brain drain” is acute in these countries.

Lack of Internet access: Internet access is unavailable in some African countries, so it is difficult for researchers to access related studies electronically. This also hampers the electronic submission of manuscripts, and such researchers cannot access the electronic systems that provide lists of peer reviewers. Owing to these difficulties, articles published by African authors are very few in number.

Solutions to the obstacles faced by ESL researchers

Although ESL researchers face many obstacles in publishing their work, every dark cloud has a silver lining. In this section, we discuss the efforts that can be made to overcome the barriers faced by ESL authors. Some steps for improving publication success are as follows:

1) Be persistent and calm: ESL researchers should not feel hopeless and dejected when their paper is rejected by a peer-reviewed English journal. The editors and reviewers will point out the flaws in the manuscript, and these flaws can be corrected; the quality of a rejected paper can thus certainly be improved.

2) Collaborate with senior researchers: Young researchers should always explore possibilities of collaboration with researchers who are more experienced in their field of study. By developing contacts with senior researchers, they can certainly improve the quality of their manuscripts.

3) Familiarize yourself with English journal styles: Researchers should regularly read papers published in internationally acclaimed English journals. In this way, they can emulate the rhetorical style of the journal. They should also strive to paraphrase the work of previous studies in their own words, thereby overcoming the plagiarism issue faced by many ESL researchers.

4) Comply with journal guidelines: Before submitting their work, researchers must read the journal’s submission guidelines very carefully so that they can prepare their manuscript in strict adherence to them.

5) English editing: ESL researchers seldom command all the linguistic nuances of English, so it is very important for them to have their work checked by a native English speaker before submitting it to an internationally acclaimed English journal. Although English editing services are offered by many companies across the world, they are usually expensive and may not fit the budget of ESL authors from low-income countries. In such situations, authors should seek the help of a colleague who is a native speaker of English.

6) Identify the right journal: Some journals have no bias against authors who are not native English speakers. Check the websites of many English journals and identify the one most appropriate for your work; ESL researchers are advised to peruse articles already published in these journals. With this strategy, they can certainly improve their chances of getting published in English journals.

7) Make your work more visible: Open access journals are generally more visible because their articles can be read freely by everyone. Before submitting a paper to an open access journal, ESL researchers should try to make their research findings more visible to a wider audience: they can develop a website for their research team, use social media to propagate their work, and publish their findings on authoritative blogs.

 

A promising clinical trial developed a novel immunotherapy for lung cancer

A recent clinical trial in lung cancer has shown promising results that could be considered groundbreaking. In this trial, a novel immunotherapy combination was very effective in controlling the progression of lung cancer. The results of this innovative study were published in the journal The Lancet Oncology. The trial focused on non-small cell lung cancer, the most common form of lung cancer.

The trial was conducted under the supervision of John Wrangle, M.D., a prominent immunologist at the Hollings Cancer Center, which is affiliated with the Medical University of South Carolina. According to Dr. Wrangle, the trial’s results are promising enough to confirm that the novel therapy can be delivered effectively in an outpatient setting.

To date, metastatic lung cancer has generally been “incurable,” but the results of immunotherapy have been promising enough to offer these patients a ray of hope. The disease-free survival of patients improved drastically when they were treated with the novel immunotherapy.

To put it simply, metastatic lung cancer patients cannot yet be “cured,” but this novel immunotherapy has certainly increased their chances of survival. Dr. Wrangle designed the clinical trial with his colleague Mark Rubinstein, Ph.D.; the two work together at the Hollings Cancer Center, and the trial was started in 2016.

Despite receiving chemotherapy at regular intervals, most patients with metastatic non-small cell lung cancer show signs of disease progression. Therefore, these patients are also treated with immunotherapy to combat their deteriorating condition.

Immunotherapy is a recent development in cancer treatment; its principle is that the immune system of the human body can be programmed to fight cancer cells. Checkpoint inhibitors are the most common class of immunotherapeutic drugs, and white blood cells constitute the most important component of the body’s natural defenses.

White blood cells can effectively target cancer cells when checkpoint inhibitor drugs block the checkpoints that regulate the immune system. According to Rubinstein, the mechanism of checkpoint inhibitors is as follows: the drugs cut the brake cables of white blood cells, which are very effective at killing cancerous cells.

Tumor cells have their own mechanism for proliferation and progression: they produce suppressive factors that apply the brakes on white blood cells, preventing them from triggering the death of tumor cells.

Rubinstein further states that the novel immunotherapy is more effective at killing lung cancer cells because of the following principle: apart from cutting the brake cables of the white blood cells, it also provides them fuel, so that cancer cells can be killed far more effectively.

The novel immunotherapy developed by Wrangle and Rubinstein combined the checkpoint drug nivolumab with ALT-803, a novel and powerful drug for stimulating the immune system.

The clinical trial was path-breaking because, although the two drugs are completely different from each other, they were combined and administered to humans for the first time. Moreover, the results indicate that the drugs can be administered safely, and the evidence is compelling enough to suggest that this immunotherapy can also be successful in patients who did not respond well to checkpoint therapy.

Rubinstein and Wrangle reiterate the significance of this novel immunotherapy: checkpoint therapy is normally discontinued when lung cancer patients stop responding to it, yet the survival of these patients can be improved significantly with the addition of ALT-803.

This is because many studies have established that ALT-803 activates the immune system, so the lymphocytes of the immune system may be effectively coaxed into combating tumor cells. In such a scenario, combination treatments that include ALT-803 may be good enough.

In the clinical trial, the researchers carefully monitored 21 patients with metastatic lung cancer. Of these, 9 had become resistant to single-agent immunotherapy after a certain period; all nine either had stable disease or had responded only partially to the single-agent treatment. The novel combination therapy is therefore a step in the right direction in combating cancer.

Surgery, chemotherapy, and radiation have been the conventional modes of cancer treatment for several decades. The last decade, however, has seen prominent strides in cancer treatment, with promising results from targeted therapy and immunotherapy. These innovative approaches have tilted the balance of power between cancer and the human immune system.

Better patient care with new online tool launched by FDA

The Food and Drug Administration (FDA) is the regulatory body for the pharmaceutical and healthcare industries in the USA. The FDA has developed a new strategy to provide real-time information and updates on the manufacture, sale, and approval of novel antibiotic and antifungal medications. This information is available to all healthcare providers (doctors, nurses, and pharmacists), and the FDA’s main objective is to combat the growing menace of antimicrobial resistance.

The FDA has created a special website that provides real-time information about how a specific drug can be used to combat specific bacterial or fungal infections. This information is needed to tackle the growing problem of medical negligence and non-optimized medications, whose consequences burden the current healthcare system by more than a billion dollars annually. With this real-time information, healthcare professionals can effectively tackle the proliferation of resistant bacteria, thereby providing better patient outcomes.

One of the biggest problems of modern medicine is growing antibiotic resistance. While concerted efforts are being made to develop new therapeutic drugs, the use of antibiotics cannot be halted at this stage; doctors do, however, now limit the doses of antibiotics given to livestock, as the problem of antimicrobial resistance is especially severe in these animals. The FDA has also implemented new guidelines on antibiotic use to improve patient care.

In a candid interview, Scott Gottlieb, M.D. (then FDA Commissioner), offered the following perspective: doctors frequently have to treat patients with critical ailments. To cure such patients, the doctor must identify exactly which pathogen is causing the illness and comprehensively assess how resistant that pathogen is to various treatments.

With only a general diagnosis, a doctor may prescribe a medication that the bacterial or fungal pathogen strongly resists. Such a situation does nothing to improve the patient’s condition, and its broader consequences cannot be ignored, as they can grow into public health problems.

Under the conventional treatment modality, each individual drug’s labeling had to be combined with the results of susceptibility testing; the process was lengthy and required a battery of tests for identification and confirmation.

To tackle this issue of poor diagnosis and prognosis, FDA authorities have come up with a more centralized approach. The process has improved tremendously with the new tool: the efficiency and accuracy of diagnosis and prognosis have increased remarkably, as healthcare providers are kept abreast of real-time information about the latest drugs and medications.

To identify the antibacterial or antifungal drug that will most effectively treat a patient’s infection, the FDA requires physicians to perform antimicrobial susceptibility testing (AST), and AST results must be considered before any drug is prescribed.

The interpretive criteria for these tests are known as “breakpoints” or “susceptibility test interpretive criteria.” Using these criteria, a physician evaluates the susceptibility of specific bacteria or fungi to antibacterial or antifungal drugs. Bacteria and fungi in patients change over time, and their susceptibility to certain drugs decreases accordingly, so breakpoints should be updated to take these changes into account.
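
As an illustration of how such interpretive criteria work in practice, here is a minimal sketch: an AST result, such as a minimum inhibitory concentration (MIC), is compared against the recognized breakpoints for a drug–organism pair. All names and threshold values below are hypothetical, not actual FDA-recognized criteria.

```python
# Hypothetical breakpoints (MIC in µg/mL) for a drug-organism pair.
# Real values come from FDA-recognized interpretive criteria.
BREAKPOINTS = {
    ("drug_x", "organism_y"): {"susceptible": 2.0, "resistant": 8.0},
}

def interpret_ast(drug, organism, mic):
    """Classify an AST result against the recognized breakpoints."""
    bp = BREAKPOINTS[(drug, organism)]
    if mic <= bp["susceptible"]:
        return "susceptible"   # the drug is expected to work
    if mic >= bp["resistant"]:
        return "resistant"     # the drug is expected to fail
    return "intermediate"      # efficacy is uncertain

print(interpret_ast("drug_x", "organism_y", 1.0))   # susceptible
print(interpret_ast("drug_x", "organism_y", 16.0))  # resistant
```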

The conventional approach was as follows: new breakpoint information was provided by the manufacturer of each drug in the drug’s label, and each label was reviewed and introduced to the market only after receiving FDA approval. This had to be done case by case, and after a revised label was approved, the AST results also had to be updated and incorporated into it. This process unnecessarily delayed the dissemination of information to healthcare providers, and in each case the drug and device labeling had to be changed whenever there was a sharp change in breakpoints.

Because the US Congress passed the 21st Century Cures Act, the FDA could adopt this new approach: breakpoints can now be updated at once for multiple drugs with the same active ingredient, and the information can be shared widely through a dedicated FDA website. Healthcare providers can thus access all FDA-recognized breakpoints online. Although the breakpoints are determined by standards development organizations, the FDA remains the final regulatory authority that reviews and leverages their work, deciding whether the breakpoints are appropriate for use; based on its review, a standard may be accepted partially or completely. Furthermore, alternative breakpoints can be established under FDA review, and if companies strongly disagree with any recognized standard, they have full authority to supply data supporting alternative breakpoints.

The breakpoint information is presented on the FDA webpage, and all drug manufacturers now have to update each drug’s labeling in line with the breakpoint updates. Because the process has shifted online and become automated, the previous time-consuming cycle of continuous labeling updates has been replaced; drug and device labeling is now more efficient and less time-consuming, and the burden on drug manufacturers and AST device developers has been reduced tremendously.

 

Daily doses of ibuprofen can prevent the onset of Alzheimer’s disease: Canadian neuroscientists

A team of neuroscientists has conducted research studies to understand how the onset of Alzheimer’s disease can be prevented in the general population. These studies were supervised by the renowned Canadian neuroscientist Dr. Patrick McGeer.

These comprehensive studies were carried out by a research team in Vancouver, Canada, and they have produced some startling revelations: the researchers suggest that ibuprofen, a non-steroidal anti-inflammatory drug (NSAID), should be taken in daily doses from an early stage to prevent the onset of Alzheimer’s disease.

Ibuprofen is an over-the-counter medication, so this seems a simple strategy to ward off a debilitating condition. According to estimates by the World Health Organization, Alzheimer’s disease affected about 47 million people across the globe in 2016.

This has placed an additional burden on major healthcare systems across the world, with the medical cost of treatment pegged at US$818 billion per year. In fact, Alzheimer’s disease is considered the fifth most common cause of death among senior citizens (65 years and older).

According to the Alzheimer’s Association, the United States accounts for more than 5 million cases; the disease is so common in the USA that a new case is recorded every 66 seconds. The burden on the healthcare system due to Alzheimer’s disease was estimated at $259 billion in 2017, and the economic burden is projected to rise to $1.1 trillion by 2050.

The revelations of the study are path-breaking, and the fact that it was conducted by noted Canadian neuroscientists (Dr. Patrick McGeer and Dr. Edith McGeer) only lends further weight to this renewed hope of prevention.

The study was conducted in the laboratory owned by Dr. Patrick McGeer and Dr. Edith McGeer (his wife). In this laboratory, they have conducted several research studies to understand the pathophysiology, prognosis, epidemiology, and prevention of several neurological diseases, with a special focus on Alzheimer’s disease.

They have devoted the past 30 years of their careers to devising novel therapies in the neurosciences. The esteemed Journal of Alzheimer’s Disease has published a paper detailing Dr. McGeer’s most recent discoveries. Dr. McGeer and his team made an important announcement in 2016: they had devised a simple saliva test for the diagnosis of Alzheimer’s disease, and the test could also be performed on healthy subjects to predict the future onset of the disease.

The test measures the salivary concentration of the peptide amyloid beta protein 42 (Abeta42). Regardless of age and gender, the rate of Abeta42 production is almost constant in healthy subjects; if a person’s rate of Abeta42 production is two or three times the normal rate, that individual may develop Alzheimer’s disease in the future.

It is important to note that Abeta42 is produced throughout the body and is relatively insoluble in bodily fluids; however, deposits of Abeta42 occur only in the human brain, where they cause the neuroinflammation that destroys the neurons of patients with Alzheimer’s disease.

Dr. McGeer and his team of neuroscientists made a path-breaking discovery in this study: they showed that Abeta42 is secreted into the saliva from the submandibular gland and went on to demonstrate that a person’s susceptibility to Alzheimer’s disease can be predicted by analyzing just a teaspoon of saliva.

Because the saliva test is a predictive marker of Alzheimer’s disease, preventive measures can be prescribed at an early stage. These include daily doses of ibuprofen, a non-steroidal anti-inflammatory drug (NSAID).

The more startling findings of the study are as follows: the secretion of the Abeta42 peptide is the same in patients as in healthy individuals who are susceptible to developing the condition. Even more reassuring, elevated Abeta42 levels are exhibited by susceptible healthy subjects at all times of day, so the saliva test requires no special conditions or restrictions.

The saliva test can be performed on subjects at any given time of the day. In clinical practice, most patients are diagnosed with Alzheimer’s disease at the age of 65. Therefore, Dr. McGeer and his team suggest that individuals must get tested for Alzheimer’s disease at the age of 55.

The early signs of Alzheimer’s disease typically develop around the age of 55, although subjects may appear completely healthy. If the levels of the Abeta42 peptide are elevated at the age of 55, then a daily dose of ibuprofen is recommended for preventing the disease.

In most clinical trials, neuroscientists have included patients who already showed mild to severe cognitive impairment. Once the disease progresses to a late stage, therapeutic options are limited.

Unfortunately, the progression of the disease could not be halted in any of those clinical trials. McGeer’s discovery is path-breaking, innovative, and a true game changer: the saliva test accurately predicts whether a healthy individual will develop Alzheimer’s disease in the future.

The team has proposed the use of ibuprofen to prevent the onset of Alzheimer’s in such healthy individuals. Given that ibuprofen is a mild NSAID available over the counter, it is truly a simple solution that does not even require a visit to the doctor. This is an innovative strategy against a crippling condition of the human brain.

 

Promising pilot trial for tumor vaccine

The University of Pennsylvania has conducted a promising clinical trial of a new type of cancer vaccine. Although the trial was at an early stage, promising results have emerged, and researchers are hopeful of a breakthrough.

The trial was a collaboration between researchers at the Perelman School of Medicine and the Abramson Cancer Center, both part of the University of Pennsylvania. The vaccine is truly innovative in that it incorporates the patient’s own immune cells, which are directly exposed to the patient’s tumor cells.

This exposure was carried out in a laboratory under simulated conditions. Following treatment, the immune cells were injected back into the patient to elicit a stronger immune response. The experimental trial was performed on patients diagnosed with advanced ovarian cancer.

This was a pilot trial whose sole purpose was to determine the feasibility and safety of the novel vaccine; nevertheless, the results were promising enough to suggest that it is effective. Anti-tumor T-cell responses were elicited in more than half of the patients who participated.

Patients who responded to the treatment had a higher life expectancy, even in the face of tumor progression, than patients who did not elicit any response. In fact, one patient remained disease-free for five years after being treated with the vaccine for two years. The promising results of this trial have been published in the journal Science Translational Medicine.

The lead author of the study was Janos L. Tanyi, MD, an assistant professor of obstetrics and gynecology at Penn Medicine. The researchers concluded that the novel vaccine was safe and elicited broad anti-tumor immunity; however, they strongly recommend larger clinical trials.

The other researchers who worked with the lead author include Lana Kandalaft, PharmD, PhD, George Coukos, MD, PhD, and Alexandre Harari, PhD. The conventional approach of cancer vaccines can be summarized as follows: most cancer vaccines to date attack a single specific molecule, such as a cell-surface receptor.

This molecule is generally found on cancerous cells in any kind of tumor. The Lausanne–Penn team, however, devised a far more aggressive approach: they developed a personalized vaccine that took each individual patient’s condition into consideration. For this purpose, they comprehensively analyzed the tumor of each cancer patient.

The set of mutations is unique to each tumor, presenting a unique pathology to the impaired immune system. With this information, the team developed a whole-tumor vaccine that elicited an immune response against not just a single target in the tumor but hundreds or thousands of targets. This is a truly innovative strategy that outshines the efficacy of conventional vaccines.

The basic objective of the trial was to elicit a strong immune response that targets the tumor comprehensively, and the researchers succeeded in eliciting a response that hits all kinds of markers, including those unique to a particular tumor.

The formidable defenses of tumors were overcome by harnessing T-cell immunity with the vaccine. To prepare a personalized vaccine for each patient, the researchers sifted through peripheral blood mononuclear cells obtained from that patient.

They identified precursor cells suitable for use in the experiment and grew them in culture under carefully controlled laboratory conditions, producing a large number of dendritic cells. A T-cell immune response can be effectively elicited with the use of dendritic cells.

Dendritic cells engulf infectious pathogens, tumor cells, and anything else considered “foreign.” A specific response is then elicited by the patient’s immune system when T-cells and other immune components are exposed to pieces of the invader cells.

The patients’ tumor cells were obtained, and a special extract was prepared from them. This tumor extract was then exposed to the dendritic cells, which were activated with interferon gamma. Finally, the activated dendritic cells were injected into the patients’ lymph nodes to generate a T-cell response.

The researchers successfully carried out this strategy on 25 patients in total. Every three weeks, each patient was administered a dose of dendritic cells that had been treated with the tumor extract by the process described above.

These periodic doses of dendritic cells were administered for six months. A large increase in the number of T-cells was reported in more than half of the patients in the trial, and, even more fascinating, the generated T-cells were specifically reactive to the tumor cells. In other words, the personalized cancer vaccine was hugely successful.

The patients who responded to this treatment showed 100 percent survival over a two-year period, whereas those who failed to respond showed an overall two-year survival rate of just 25 percent.

The trial included a 46-year-old patient with stage 4 ovarian cancer, for whom the prognosis under conventional treatment, including five courses of chemotherapy, is generally very poor. Interestingly, this patient remained disease-free for five years after receiving 28 doses of the personalized vaccine over a two-year period.

In conclusion, the researchers hope that the efficacy of this personalized vaccine would be doubled if it were combined with chemotherapeutic drugs that suppress the anti-immune responses of the tumor.

 

 

Effects of non-optimized medications

Although drug prices have risen steeply in recent times, prescription drugs in reality cost much more than what is charged in dollars and cents at retail pharmacies, as a recent study by pharmaceutical researchers at the University of California San Diego, USA, shows.

The impact of non-optimized medications is surely dangerous: it can lead to illness and death. What is frightening is that the cost of non-optimized medications has now grown to an estimated $528.4 billion annually, about 16% of US medical expenditure, according to 2016 figures.

The analysis was published online in the journal Annals of Pharmacotherapy, and the news created a ripple effect in the US pharmaceutical and healthcare sector. The research study was supervised by Dr. Jonathan Watanabe, PharmD, associate professor of clinical pharmacy at the Skaggs School of Pharmacy, University of California San Diego. In an ideal situation, patients visit a doctor whenever they are sick or battling a chronic, progressive disease.

The healthcare professional prescribes medications, and the patient feels better upon completing the due course of medication as instructed. Many times, however, the prescribed dosage is not in line with the severity of the patient’s illness, meaning the medication is not optimized to the patient’s needs; alternatively, the patient may not take the prescription drug as indicated. In such scenarios, the patient may well develop an adverse reaction or succumb to a new health problem.

The other collaborators in this research study are Dr. Jan Hirsch, PhD, professor of clinical pharmacy at the Skaggs School of Pharmacy, and Terry McInnis, MD, of Laboratory Corporation of America. These researchers explain the impact of non-optimized medications with a real-life scenario: suppose a patient comes down with flu and visits the emergency department of a local hospital. A doctor would normally prescribe Tamiflu, but the patient does not take the requisite dosage because it is too expensive.

The patient’s symptoms worsen over time, and he or she ultimately lands in the intensive care unit (ICU). This translates into a huge financial setback for the patient and the hospital. If the patient has been paying regular premiums, it can also mean a large financial drain on the medical insurance company; even so, the patient still has to go through a lot of paperwork to receive reimbursement. In other words, a small problem of improper dosage and expensive medication has snowballed into a big one.

The problem of non-optimized medications is not restricted to improper dosage. Watanabe has also analyzed instances where a medication itself causes other health issues. For example, a patient may be administered a steroidal drug for two years to combat epileptic fits, yet the steroid’s long-term impact is detrimental to the patient’s health.

The patient develops diabetes due to the steroidal drug. Similarly, an ACE inhibitor is the preferred drug for combating high blood pressure, but its most common side effect is a persistent cough. The patient takes an over-the-counter cough-and-cold medication, which causes a steep rise in blood pressure; moreover, the patient feels drowsy during the daytime and ultimately suffers a fall.

In both scenarios, a drug treatment used to combat a chronic ailment leads to further complications. The researchers developed decision-outcome models to estimate the financial impact of such situations, including emergency department visits, intensive care stays, additional medications, long-term medical treatment, and so on.
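
The paper’s models are not spelled out here; as a minimal sketch of how a decision-outcome model of this kind can roll up costs, the outcome probabilities and dollar figures below are entirely hypothetical:

```python
# Hypothetical downstream outcomes of one non-optimized prescription:
# (probability of the outcome, cost in dollars if it occurs)
outcomes = {
    "emergency_department_visit": (0.10, 1_500),
    "icu_admission":              (0.02, 25_000),
    "additional_medication":      (0.30, 400),
    "long_term_treatment":        (0.05, 12_000),
}

# Expected cost = sum of probability-weighted costs across outcomes.
expected_cost = sum(p * cost for p, cost in outcomes.values())
print(f"Expected downstream cost per prescription: ${expected_cost:,.0f}")
# -> $1,370 with these illustrative numbers
```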

The consequences have been shocking enough to compel changes in medical interventions: non-optimized medications cause other illnesses, and their annual cost is in the range of 490 to 670 billion dollars. At an individual level, the annual cost can be as high as $2,500, and it is important to note that this covers only medical treatment, excluding transportation and the loss of productivity due to illness.

The previous estimates, presented in 2008, seem relatively modest at $290 billion annually, when the impact of non-optimized medications amounted to just 13 percent of US healthcare expenditure. This implies that a phenomenal rise occurred over the eight years to 2016.

In a capitalistic economy, healthcare costs are at an all-time high; however, under the Affordable Care Act (“Obamacare”), more than 20 million additional people gained health coverage. Thus, more than 20 million people could now access prescription drugs, a scenario not visible in 2008.

Consequently, instances of non-adherence to prescribed dosages have increased phenomenally, leading to secondary health issues caused by the adverse effects of long-term medication. The problem is not just non-adherence; there are also instances where healthcare professionals fail to prescribe an accurate medication regimen in accordance with the presented symptoms.

Each case is different, so doctors should take all factors into account to provide optimized dosages to each patient, rather than considering external symptoms alone. For example, a diabetic patient with an attack of flu may need a more aggressive treatment regimen than an otherwise healthy patient, because a diabetic patient’s immunity is compromised even when the diabetes is well controlled with medications.

To counter these disturbing trends, Watanabe and his associates have proposed a novel model to improve patient outcomes. Currently, pharmacists do not play an important role in analyzing each patient’s case; the direct contact is between the patient, the nurse, and a trained medical doctor.

They have proposed a comprehensive healthcare management plan in which trained pharmacists work alongside the doctor to analyze each patient’s illness. Pharmacists are better trained in medications, pharmacology, and adverse effects, while medical doctors are better trained in diagnosis and the analysis of the human body.

 

 

Pancreatic stem cells can regenerate beta cells and respond to glucose

Scientists have stimulated progenitor cells within the human pancreas and developed them into beta cells that respond to glucose. These findings, published in the journal Cell Reports, pave the way for novel cell therapies, an important breakthrough for type 1 diabetes patients, and address a major obstacle on the road to a complete cure for type 1 diabetes.

The hypothesis that the pancreas contains progenitor cells with the potential to regenerate islets has existed for decades, but it had not been proven conclusively. The scientists identified the exact anatomical location of these stem cells and validated their ability to proliferate and transform into glucose-responsive beta cells.

A detailed study of stem cells in the human pancreas was conducted, and the results were used to tap into the pancreas’s endogenous cell supply “bank” of beta-cell precursors for regeneration purposes. In the years to come, these stem cells could be used in therapeutic applications for type 1 diabetes patients.

Earlier studies found that bone morphogenetic protein 7 (BMP-7), which is already used in clinical applications, can stimulate progenitor-like cells that occur within the non-endocrine portions of human pancreatic tissue; BMP-7 was reported to stimulate their growth and induce their transformation into functional islets.

In the recent study, the researchers further demonstrated that the stem cells responding to BMP-7 reside within the network of ducts and glands of the human pancreas and are characterized by the expression of PDX1 and ALK3. The protein PDX1 is required for the development of beta cells, whereas ALK3 is a cell-surface receptor associated with the regeneration of several tissues.

With the help of “molecular fishing” techniques, the researchers selectively extracted cells that expressed PDX1 and ALK3. These cells were grown in a Petri dish, where they proliferated in response to BMP-7 and later differentiated into beta cells. The combined results of the study could be used to develop regenerative cell therapies for both type 1 and type 2 diabetes patients.

In patients with type 1 diabetes, the insulin-producing cells of the pancreas are attacked and destroyed by the immune system, so patients have to control their blood glucose levels with a daily regimen of insulin therapy. In patients with type 2 diabetes, insulin is produced to some extent, but the beta cells become dysfunctional over time.

With islet transplantation, some type 1 diabetes patients can live without insulin injections because donor cells are infused into them; however, there are not enough donor cells to treat the many patients with type 1 diabetes.

Current research has primarily focused on producing transplantable pancreatic cells from human embryonic stem cells (hESCs), human pluripotent stem cells (hPSCs), adult stem cells, and porcine (pig) islets. It would be better to regenerate insulin-producing cells in the patient’s own pancreas, which would remove the need to transplant donor tissue and eliminate associated immune-related roadblocks.

Regenerative medicine strategies must be developed to restore insulin production in the native pancreas, which would replace the need to transplant the pancreas or other insulin-producing cells. In patients with type 1 diabetes, autoimmunity must also be abrogated to prevent the immune system from destroying newly produced insulin cells; to this end, efforts are converging on inducing immune tolerance without long-term anti-rejection drugs.

 

The risks and advantages of phase I clinical trials in kids with cancer

On average, one out of ten children enrolled in pediatric phase I cancer trials improves after treatment, but one out of fifty dies of drug-related complications. These figures come from a meta-analysis published in PLOS Medicine. In phase I clinical trials, researchers determine the safety and appropriate dosage of drugs used to fight cancer.

US regulatory guidelines set limits on the permissible risk to minors. The researchers systematically reviewed phase I pediatric cancer trials published from 2004 to 2015, finding 170 studies that together included 4,604 patients. They determined the objective response rate among the pediatric patients as well as the rate of drug-related adverse events of grade 3, 4, or 5 (fatal).

Across all trials, the overall response rate was 10.29% (95% CI 8.33 to 12.25). The response rate for solid tumors (3.17%, 95% CI 2.62 to 3.72) was significantly lower than that for hematological malignancies (27.90%, 95% CI 20.53 to 35.27).

The overall rate of fatal (grade 5) adverse events was 2.09% (95% CI 1.45 to 2.72), and patients experienced an average of 1.32 grade 3 or 4 drug-related adverse events each. These response and adverse-event rates are similar to those observed in adult phase I oncology trials.
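
The study pooled these rates across trials with meta-analytic methods; as a simplified illustration of how a 95% confidence interval for a single trial’s response rate can be formed, here is a normal-approximation (Wald) interval with hypothetical counts:

```python
import math

def response_rate_ci(responders, patients, z=1.96):
    """Wald 95% CI for a response rate: p +/- z * sqrt(p(1-p)/n)."""
    p = responders / patients
    half_width = z * math.sqrt(p * (1 - p) / patients)
    return p, max(0.0, p - half_width), p + half_width

# Hypothetical trial: 10 responders out of 100 enrolled children.
p, lo, hi = response_rate_ci(10, 100)
print(f"response rate {p:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
# -> response rate 10.0%, 95% CI 4.1% to 15.9%
```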

The study has the following limitations: the cancers and treatments evaluated in the included trials were heterogeneous; only published data were relied upon; and the outcomes of some low-quality or incompletely reported trials were included.

The data were combined with the findings of an ethical analysis, providing an empirical platform for further investigation of the therapeutic value of phase I trials in pediatric cancer patients. The study provides evidence for improving the risk/benefit profile of phase I trials and for identifying studies that pose greater challenges to meeting the standards of tolerable risk in children.