#bibliometrics


ResearchFish Again

One of the things I definitely don’t miss about working in the UK university system is the dreaded Researchfish. If you’ve never heard of this bit of software, it’s intended to collect data relating to the outputs of research grants funded by the various Research Councils. That’s not an unreasonable thing to want to do, of course, but the interface is – or at least was when I last used it several years ago – extremely clunky and user-unfriendly. That meant that, once a year, along with other academics with research grants (in my case from STFC), I had to waste hours uploading bibliometric and other data by hand. A sensible system would have harvested this automatically, as most of it is available online in various locations, or allowed users simply to upload their own publication list as a file; most of us keep an up-to-date list of publications for various reasons (including vanity!) anyway. Institutions also keep track of all this stuff independently. All this duplication seemed utterly pointless.

I always wondered what happened to the information I uploaded every year, which seemed to disappear without trace into the bowels of RCUK. I assume it was used for something, but mere researchers were never told to what purpose. I guess it was used to assess the performance of researchers in some way.

When I left the UK in 2018 to work full-time in Ireland, I took great pleasure in ignoring the multiple emails demanding that I do yet another Researchfish upload. The automated reminders turned into individual emails threatening that I would never again be eligible for funding if I didn’t do it, to which I eventually replied that I wouldn’t be applying for UK research grants anymore anyway. So there. Eventually the emails stopped.

Then, about three years ago, ResearchFish went from being merely pointless to downright sinister as a scandal erupted about the company that operates it (called Infotech), involving the abuse of data and the bullying of academics. I wrote about this here. It then transpired that UKRI, the umbrella organization governing the UK’s research councils, had been actively conniving with Infotech to target critics. An inquiry was promised, but I don’t know what became of that.

Anyway, all that was a while ago and I no longer live or work in the UK, so why mention ResearchFish again now?

The reason is something that shocked me when I found out about it a few days ago. Researchfish is now operated by commercial publishing house Elsevier.

Words fail. I can’t be the only person to see a gigantic conflict of interest. How can a government agency allow the assessment of its research outputs to be outsourced to a company that profits hugely from the publication of those outputs? There’s a phrase in British English which I think is in fairly common usage: marking your own homework. This relates to individuals or organizations who have been given the responsibility for regulating their own products. It is very apt here.

The acquisition of Researchfish isn’t the only example of Elsevier getting its talons stuck into academic life. Elsevier also “runs” the bibliometric service Scopus, which it markets as a sort of quality indicator for academic articles. I put “runs” in inverted commas because Scopus is hopelessly inaccurate and unreliable. I can certainly speak from experience on that. Nevertheless, Elsevier has managed to dupe research managers – clearly not the brightest people in the world – into thinking that Scopus is a quality product. I suppose the more you pay for something the less inclined you are to doubt its worth, because if you do find you have paid for worthless junk you look like an idiot.

A few days ago I posted a piece that included this excerpt from an article in Wired:

Every industry has certain problems universally acknowledged as broken: insurance in health care, licensing in music, standardized testing in education, tipping in the restaurant business. In academia, it’s publishing. Academic publishing is dominated by for-profit giants like Elsevier and Springer. Calling their practice a form of thuggery isn’t so much an insult as an economic observation. 

With the steady encroachment of the likes of Elsevier into research assessment, it is clear that as well as raking in huge profits, the thugs are now also assuming the role of the police. The academic publishing industry is a monstrous juggernaut that is doing untold damage to research and is set to do more. It has to stop.

In the Dark · The Researchfish Scandal

An entertaining, informative and overall really well done video about the h-index and why you shouldn't use it, by @stefhaustein, @carey_mlchen et al.

I really will be sharing this video a lot: "What is the h-index and what are its limitations? Or: Stop using the h-index"

youtube.com/watch?v=HSf79S3XkJw

#hIndex #bibliometrics #researchEvaluation #researchAssessment #publishOrPerish

@academicchatter
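
For context: the h-index is the largest number h such that an author has h papers each cited at least h times. A minimal sketch of the calculation (Python, with made-up citation counts) also shows one limitation the video discusses: a single landmark paper barely moves the index.

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts, purely illustrative:
print(h_index([1000, 3, 2, 1]))  # 2: one landmark paper, low h-index
print(h_index([4, 4, 4, 4]))     # 4: four modestly cited papers, higher h-index
```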

Finally, our paper has an official issue number and page range! Our study Ukrainian Arts & Humanities Research in #Scopus is now officially published in Library Hi Tech!

🔗 doi.org/10.1108/LHT-05-2023-01

Key findings:
📈 Ukrainian A&H research is growing but struggles with impact
📉 Over half of publications remain uncited
🌍 International collaboration & English boost visibility
📰 Many papers appear in local or Russian-oriented journals

Recently published on #Zenodo -- an excellent primer on the history of evaluative bibliometrics!

Also offers solutions for adapting evaluative #bibliometrics for a new, more responsible, scholarly world.

Principles of Evaluative Bibliometrics in a DORA/CoARA Context doi.org/10.5281/zenodo.1467206 #research #altmetrics #metrics #scientometrics

Zenodo · Principles of Evaluative Bibliometrics in a DORA/CoARA Context – The document “Principles of Evaluative Bibliometrics in a DORA/CoARA Context” provides a comprehensive examination of evaluative bibliometrics, exploring its role within research evaluation. It begins with an overview of bibliometrics, its evolution from the early 20th century, and the theoretical and practical frameworks that have shaped its development. Notably, the work emphasizes the integration of bibliometrics with emerging trends in science evaluation, offering solutions for adapting these methodologies to contemporary evaluative systems. The text outlines five key principles that guide evaluative bibliometrics: (1) the Principle of Support for Decision-Making, (2) the Principle of Collaboration with Experts, (3) the Principle of Respect for Contexts, (4) the Principle of Metric Multidimensionality, and (5) the Principle of Data Verifiability and Openness. Each principle is framed as a flexible and evolving concept that can be adapted to various evaluative contexts. Furthermore, the work underscores the importance of combining quantitative methods with qualitative expertise, advocating for a more inclusive and collaborative approach to scientific evaluation.

Contents: 1. Preface; 2. Introduction (Definition and origins; The legacy of CWTS; Professional perspectives; Bibliometric indicators; Approach to the principles); 3. Fundamental Principles (1: Support for decision-making; 2: Collaboration with experts; 3: Respect for contexts; 4: Metric multidimensionality; 5: Verifiability and openness of data); 4. Epilogue; 5. Bibliography.
Replied in thread

@egonw @dingemansemark Hidden gems is another term people have used. When I was on the PLOS article-level metrics team, I proposed using the PDF downloads to HTML views ratio to find articles that weren't necessarily the most highly viewed but were more deeply engaged with - the assumption being that downloading the PDF indicated greater interest from that reader.

everyone.plos.org/2014/12/23/l
#Altmetrics #Bibliometrics

EveryONE · Let Me Count the Ways: Top 20 PLOS ONE Articles Based on Article-Level Metrics for 2014 – At PLOS ONE, we’ve been compiling year-end lists to reflect on the most popular articles and research videos published in our journal…
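
As an illustration of the heuristic described above – not the actual PLOS implementation – here is a minimal Python sketch that ranks articles by their PDF-downloads-to-HTML-views ratio; the article data and field names are made up.

```python
# Hypothetical article-level metrics; titles, numbers and field names are illustrative only.
articles = [
    {"title": "Article A", "html_views": 50_000, "pdf_downloads": 2_500},
    {"title": "Article B", "html_views": 4_000, "pdf_downloads": 1_600},
    {"title": "Article C", "html_views": 12_000, "pdf_downloads": 900},
]

def engagement_ratio(article, min_views=1000):
    """PDF downloads per HTML view; skip low-traffic articles where the ratio is noisy."""
    if article["html_views"] < min_views:
        return 0.0
    return article["pdf_downloads"] / article["html_views"]

# Highest ratio first: Article B (0.40) outranks the far more viewed Article A (0.05).
for article in sorted(articles, key=engagement_ratio, reverse=True):
    print(f"{article['title']}: {engagement_ratio(article):.2f}")
```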

Nilsson RH, et al. (20 Nov 2024):

20 years of bibliometric data illustrates a lack of concordance between journal impact factor and fungal species discovery in systematic mycology.

MycoKeys 110: 273-285.

doi.org/10.3897/mycokeys.110.1

colleague Alice Retter involved

MycoKeys · 20 years of bibliometric data illustrates a lack of concordance between journal impact factor and fungal species discovery in systematic mycology – Journal impact factors were devised to qualify and compare university library holdings but are frequently repurposed for use in ranking applications, research papers, and even individual applicants in mycology and beyond. The widely held assumption that mycological studies published in journals with high impact factors add more to systematic mycology than studies published in journals without high impact factors nevertheless lacks evidential underpinning. The present study uses the species hypothesis system of the UNITE database for molecular identification of fungi and other eukaryotes to trace the publication history and impact factor of sequences uncovering new fungal species hypotheses. The data show that journal impact factors are poor predictors of discovery potential in systematic mycology. There is no clear relationship between journal impact factor and the discovery of new species hypotheses for the years 2000–2021. On the contrary, we found journals with low, and even no, impact factor to account for substantial parts of the species hypothesis landscape, often discovering new fungal taxa that are only later picked up by journals with high impact factors. Funding agencies and hiring committees that insist on upholding journal impact factors as a central funding and recruitment criterion in systematic mycology should consider using indicators such as research quality, productivity, outreach activities, review services for scientific journals, and teaching ability directly rather than using publication in high impact factor journals as a proxy for these indicators.

The results of this study show that the most productive Ukrainian researchers in the social sciences and humanities have chosen to bravely face and endure the "winter" on their native land:

arxiv.org/abs/2412.11719

64% of productive researchers since 2021 have managed to maintain or even improve their annual publication output.

In 2023, the number of 🇺🇦 authors with high publication activity even increased compared to 2021.

arXiv.org · The publication activity and migration trends of Ukrainian scientists in the social sciences and humanities during the first two years of the Russo-Ukrainian war – This study analyses the publication activity and migration patterns of Ukrainian scholars in the social sciences and humanities (SSH) during the initial two years of the Russo-Ukrainian war. Focusing on scholars who published at least three papers, the study underscores the resilience of these scholars, who continued their academic endeavours within their homeland despite the conflict. The research utilizes data from the Social Sciences Citation Index (SSCI) and the Arts & Humanities Citation Index (AHCI) to illustrate their continued scientific contributions under adverse conditions. It also highlights the crucial role of international collaboration in supporting Ukrainian SSH research, emphasizing that such collaborations primarily manifest through joint research projects rather than relocation of scholars to foreign institutions.