Summary: A new AI system developed by computer scientists automatically screens open access journals to identify potentially predatory publications. These journals often charge high fees to publish work without proper peer review, undermining scientific credibility.
The AI has analyzed more than 15,000 journals and flagged more than 1,000 as questionable, offering researchers a scalable way to identify risks. Although the system is not perfect, it serves as a crucial first filter, with human experts making the final calls.
Key facts
- Predatory publishing: These journals exploit researchers by charging fees without quality peer review.
- AI screening: The system flagged more than 1,000 suspect journals out of 15,200 analyzed.
- Science firewall: The tool helps preserve trust in research by protecting against bad data.
Source: University of Colorado
A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically screens for “questionable” scientific journals.
The study, published on August 27 in the journal Science Advances, tackles an alarming trend in the world of research.
Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, sees them several times a week in his email inbox: spam messages from people claiming to be editors at scientific journals, usually ones unfamiliar to Acuña, offering to publish his papers – for a high fee.
These publications are sometimes called “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.
“There has been an increasing effort among scientists and organizations to examine these journals,” said Acuña. “But it’s like Whack-A-Mole. You catch one, then another appears, generally from the same company. They just create a new website and provide a new name.”
His group’s new AI tool automatically screens scientific journals, evaluating their websites and other online data against certain criteria: Do the journals have an editorial board with established researchers? Do their websites contain many grammatical errors?
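Checks like those can be illustrated with a minimal sketch. The field names, the journal record, and the hand-picked typo list below are hypothetical; the study's actual feature extraction is far more sophisticated.

```python
import re

# Hypothetical journal record; the field names are illustrative,
# not the schema used in the study.
journal = {
    "editorial_board": ["A. Researcher", "B. Scholar"],
    "homepage_text": "We publishes high quality reserch papers fastly.",
}

def has_editorial_board(record):
    """Flag journals that list no editorial board at all."""
    return len(record.get("editorial_board", [])) > 0

def rough_typo_count(text, known_typos=("reserch", "fastly", "we publishes")):
    """Crude proxy for writing quality: count hits from a fixed typo list."""
    lower = text.lower()
    return sum(len(re.findall(t, lower)) for t in known_typos)

signals = {
    "board_present": has_editorial_board(journal),
    "typo_hits": rough_typo_count(journal["homepage_text"]),
}
print(signals)  # {'board_present': True, 'typo_hits': 3}
```

A real classifier would combine many such signals rather than rely on any single one.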
Acuña stresses that the tool is not perfect. Ultimately, he believes, human experts, not machines, should make the final call on whether a journal is reputable.
But at a time when prominent figures are questioning the legitimacy of science, stemming the spread of questionable publications has become more important than ever, he said.
“In science, you don’t start from scratch. You build on top of the research of others,” said Acuña. “So if the foundation of that tower crumbles, then everything collapses.”
Shaking down scientists
When scientists submit a new study to a reputable publication, that study typically undergoes a practice called peer review. Outside experts read the study and evaluate it for quality – or, at least, that’s the goal.
A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the term “predatory” journals to describe these publications.
Often, they target researchers outside the United States and Europe, such as in China, India and Iran – countries where scientific institutions may be young, and the pressures and incentives for researchers to publish are high.
“They will say: ‘If you pay $500 or $1,000, we will review your article,’” said Acuña. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”
Various groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ).
Since 2003, DOAJ volunteers have flagged thousands of suspect journals based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)
But keeping pace with the spread of these publications has been daunting for humans.
To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ data, then asked the AI to scrutinize a list of nearly 15,200 open access journals on the internet.
Of those journals, the AI initially flagged more than 1,400 as potentially problematic.
Acuña and his colleagues asked human experts to review a subset of the suspect journals. The AI did make mistakes, according to the reviewers, flagging an estimated 350 publications as questionable that were likely legitimate. That left more than 1,000 journals that the researchers identified as questionable.
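A rough sense of the tool's hit rate follows directly from those figures. This is illustrative arithmetic based on the rounded numbers in the article, not a metric reported by the study itself.

```python
# Rounded figures from the article (illustrative, not the study's exact counts).
flagged = 1400          # journals the AI initially flagged
false_positives = 350   # flagged journals experts judged likely legitimate

remaining = flagged - false_positives        # journals still deemed questionable
precision_estimate = remaining / flagged     # fraction of flags that held up

print(remaining)                   # 1050
print(round(precision_estimate, 2))  # 0.75
```

In other words, roughly three out of four flags survived expert review, which is consistent with the article's framing of the AI as a prescreening filter rather than a final judge.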
“I think it should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”
A science firewall
Acuña added that the researchers did not want their system to be a “black box” like some other AI platforms.
“With ChatGPT, for example, you often don’t understand why it suggests something,” said Acuña. “We tried to make ours as interpretable as possible.”
The team discovered, for example, that questionable journals published an unusually high number of articles. Their articles also listed authors with more affiliations than those in more legitimate journals, and those authors cited their own research, rather than the work of other scientists, at unusually high rates.
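One of those interpretable features, a self-citation rate, can be sketched in a few lines. The data, function name, and 60% figure below are toy values for illustration; the study's actual feature definitions differ in detail.

```python
def self_citation_rate(cited_authors, journal_authors):
    """Fraction of a journal's outgoing citations that point back to its own authors."""
    if not cited_authors:
        return 0.0
    self_cites = sum(1 for a in cited_authors if a in journal_authors)
    return self_cites / len(cited_authors)

# Toy example: 3 of 5 citations point back to the journal's own authors.
authors = {"alice", "bob"}
cites = ["alice", "carol", "bob", "alice", "dana"]

rate = self_citation_rate(cites, authors)
print(round(rate, 2))  # 0.6
```

Because each feature is a simple, inspectable quantity like this, a human reviewer can see why a given journal was flagged, which is what makes the system more interpretable than a black-box model.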
The new AI system is not publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as a way for researchers to protect their fields from bad data – what he calls a “firewall for science.”
“As a computer scientist, I often give the example of when a new smartphone comes out,” he said.
“We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”
About this AI and science research news
Author: Daniel Strain
Source: University of Colorado
Contact: Daniel Strain – University of Colorado
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Estimating the predictability of questionable open-access journals” by Daniel Acuña et al. Science Advances
Abstract
Estimating the predictability of questionable open-access journals
Questionable journals threaten the integrity of global research, but manual vetting can be slow and difficult to scale.
Here, we explore the potential of artificial intelligence (AI) to systematically identify these venues by analyzing website design, content, and metadata.
Evaluated against extensive human-annotated datasets, our method achieves practical accuracy and surfaces previously overlooked indicators of journal legitimacy.
By adjusting the decision threshold, our method can prioritize either comprehensive screening or precise, low-noise identification.
At a balanced threshold, we flag more than 1,000 suspect journals, which collectively publish hundreds of thousands of articles, receive millions of citations, acknowledge funding from major agencies, and attract authors from developing countries.
Error analysis reveals challenges involving discontinued titles, book series misclassified as journals, and small outlets with limited online presence – problems that can be resolved with better data quality.
Our results demonstrate the potential of AI for scalable integrity checks, while highlighting the need to pair automated triage with expert review.