I’d like to invite you to an online demo of a plugin we made, and to hear your feedback!
Over 600 scientific papers have been retracted in recent years due to fraudulent peer review using puppet reviewer accounts [1]. In such cases, authors manipulated the editorial process to review their own work. Other forms of identity-related academic fraud are also on the rise. Besides fraud, there are also mundane problems in simply identifying people correctly. The GÉANT [2] project’s Trust & Identity work can offer meaningful solutions here. We will present experimental methods for researcher vetting, including integration with the widely used Open Journal Systems (OJS) editorial platform.
The demo will take place on Zoom on Thursday, 5 June 2025, from 16:00 to 16:30 CEST.
(Start times: San Francisco – 7:00 AM, New York – 10:00 AM, London – 3:00 PM, Athens – 5:00 PM)
[1] As reported by Oransky, I., 2020. Retraction Watch: What we’ve learned and how metrics play a role. In Gaming the Metrics, MIT Press. https://ieeexplore.ieee.org/document/9085626
[2] GÉANT is a flagship EU project, a collaboration of European National Research and Education Networks (NRENs) on infrastructure and services. Users may know us by our services like EDUROAM, EDUGAIN and high-speed networks.
The scoring is subject to change (we are at the halfway point of our incubator cycle, meaning the team has 3.5 months left to work on this). That is why I’m reaching out to the community: so that your opinions can shape it. Right now we are thinking about having multiple dimensions instead of a single score, to avoid reducing people to one number (so the screenshot might be outdated). Moreover, I’m giving a lot of thought to the wording: it’s the information that is trustworthy, not the person, and it is crucial to convey that difference.
That being said, here is what we do right now:
- We rely mainly on Crossref (DOI), ORCID, RoR and EOSC databases, and we have done a lot of experiments with other sources (a Crossref lookup sketch follows this list).
- From RoR we get domain names, and we investigate the email-related DNS records (such as MX). If the author happens to have an institutional email address, we may be able to match it, contributing to the score with a weight yet to be defined (see the MX-check sketch below).
- We also try to match the name the person provides with names from the databases above. A lot of work goes into disambiguation: different name orders, special characters, etc. (see the name-matching sketch below).
- The same goes for ORCID, which is trivial, given that it is a globally unique identifier.
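To give a feel for the Crossref part, here is a rough Python sketch of pulling authors, ORCID iDs and affiliations for one DOI from the public Crossref REST API (api.crossref.org). It is illustrative only; the exact fields we use, and how we use them, are still in flux, and the requests library is just an assumption for the example.

# Illustrative sketch only: fetch author names, ORCID iDs and affiliations
# for one DOI from the public Crossref REST API.
import requests

def crossref_authors(doi: str) -> list[dict]:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    message = resp.json()["message"]
    authors = []
    for a in message.get("author", []):
        authors.append({
            "given": a.get("given", ""),
            "family": a.get("family", ""),
            "orcid": a.get("ORCID"),  # present only if the author registered one
            "affiliations": [aff.get("name") for aff in a.get("affiliation", [])],
        })
    return authors

# usage: crossref_authors("<some DOI>")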
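Here is a minimal sketch of the MX-based check, assuming the dnspython package and a pre-built set of institutional domains taken from RoR; as said above, the actual weighting is still undecided, and this is not our production code.

# Minimal sketch: check whether the domain of the author's email address
# actually receives mail (has MX records) and whether it matches one of the
# institutional domains we collected from RoR.
import dns.exception
import dns.resolver  # pip install dnspython

def email_domain_signal(email: str, institutional_domains: set[str]) -> dict:
    domain = email.rsplit("@", 1)[-1].lower()
    try:
        dns.resolver.resolve(domain, "MX")
        has_mx = True
    except dns.exception.DNSException:  # NXDOMAIN, NoAnswer, timeouts, ...
        has_mx = False
    return {
        "domain": domain,
        "has_mx": has_mx,
        "matches_institution": domain in institutional_domains,
    }

# e.g. email_domain_signal("a.author@example.edu", {"example.edu"})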
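The name matching is conceptually simple but messy in practice; here is a toy sketch (standard library only) of the kind of normalisation involved, ignoring the many edge cases we actually have to handle.

# Toy sketch of name matching across sources: strip accents, lower-case,
# and compare the sets of name tokens, so that "Kovács, Péter" still matches
# "Peter Kovacs". Real disambiguation needs much more than this.
import unicodedata

def normalise(name: str) -> frozenset[str]:
    ascii_name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return frozenset(tok for tok in ascii_name.lower().replace(",", " ").split() if tok)

def names_match(provided: str, from_database: str) -> bool:
    return normalise(provided) == normalise(from_database)

# names_match("Kovács, Péter", "Peter Kovacs")  -> True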
In this way, the person’s identity can be matched to an authoring history in Crossref/DOI, for instance. We can also verify institutional affiliation (usually part of the Crossref metadata), a big achievement in itself. Then you see the fields the person is active in (a good indicator when deciding who would make a good reviewer or guest editor). Finally, co-authorship graphs can be built. All of these signals might then get a weight, perhaps configured by the journal itself (a toy weighting example follows).
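Purely to illustrate the direction with multiple dimensions and journal-configured weights, here is a small Python example; the dimension names, signals and numbers are made up for the illustration and are not our actual model.

# Illustration only: combine individual signals into named dimensions with
# journal-configurable weights, instead of collapsing everything to one number.
signals = {
    "orcid_match": 1.0,
    "institutional_email": 0.0,    # no match found in this example
    "authoring_history": 0.8,      # e.g. share of claimed papers found in Crossref
    "affiliation_confirmed": 1.0,
}

journal_weights = {  # each journal could tune these
    "identity":     {"orcid_match": 0.7, "institutional_email": 0.3},
    "track_record": {"authoring_history": 0.6, "affiliation_confirmed": 0.4},
}

dimensions = {
    dim: sum(signals[s] * w for s, w in parts.items())
    for dim, parts in journal_weights.items()
}
# -> {"identity": 0.7, "track_record": 0.88}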
Obviously, all of the above will fail for a first-time author. That is why, in line with the Leiden Manifesto, we emphasize that the metric should only be used in the proper context. Apart from OJS, other uses are envisioned, such as verifying external members of examination panels (MSc, PhD), reviewers of funding applications, etc.