DailyDirt: Measuring Scientific Impact Is Far From Simple
How do you measure the impact of a scientist’s research? Some common metrics include the number of publications in peer-reviewed and high-impact journals, the number of citations, etc. But it’s more complicated than just using the quantity and quality of a scientist’s peer-reviewed publications to determine their significance in the scientific community. Here are a few more things to consider.
- Researchers interested in an academic career, beware! In recent years, it has become popular for universities to evaluate prospective hires based on their “h-index”: the largest number h such that a researcher has published h papers that have each been cited at least h times, so it reflects both productivity and citation impact. However, a recent study has shown that current mathematical models that predict a scientist’s future performance from their past performance aren’t reliable and shouldn’t be used in career advancement decisions. [url]
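For readers curious how the h-index is actually computed, here is a minimal sketch. The list of per-paper citation counts is made up for illustration; the definition (largest h with h papers cited at least h times) is the standard one.

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each (the standard h-index definition).
    `citations` is a list of per-paper citation counts."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # this paper still clears the bar
            h = rank
        else:               # counts are sorted, so no later paper can
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4, and 3 times.
# Four papers have at least 4 citations, but not five with at least 5,
# so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))
```

Note that the metric ignores author order and co-author count entirely, which is exactly the shortcoming the third item below is about.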
- Getting depressed because you can’t get funding? Don’t despair: a study has found that grant size doesn’t strongly predict a researcher’s scientific impact. [url]
- Traditional metrics used to gauge a researcher’s scientific impact are inadequate, since they typically assume that all co-authors of a paper contribute equally to the work. Now researchers are proposing a new metric that takes into account the relative contributions of all co-authors to establish a more rational way of determining a researcher’s scientific impact. [url]
- This takes the cake: A new study has found that scientists are terrible at judging the importance of other researchers’ publications. Apparently, scientists rarely agree on the importance of a paper and are strongly biased by what journal the paper is published in. Also, the number of times a paper is cited has little relation to its actual scientific merit. [url]
If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.
Filed Under: academics, citations, co-authors, h-index, merit, metrics, peer review, publication, science, scientific impact, tenure
Comments on “DailyDirt: Measuring Scientific Impact Is Far From Simple”
There are probably tons of obscure math papers that we won’t know the “true” impact of for centuries. Maybe we shouldn’t be giving tenure in the first place.
What does tenure have to do with obscure mathematics and potential future impact thereof?
I do not understand this war on education. Clearly it is in our best interest to become more educated, not less. I understand that an uneducated populace is easier to rule, but to what end? A frivolous free rein of unbridled opulence is hardly a worthwhile endeavor. And why do those with massive wealth fear an educated populace? It seems it would be to their benefit, not a hindrance.
“However, a recent study has shown that current mathematical models that predict a scientist’s future performance based on their past performance aren’t reliable and shouldn’t be used in career advancement decision processes.”
I imagine they will be like the entertainment industry and ignore the data. Better to make up whatever the hell they want and tout it as the truth.
“A study has found that grant size doesn’t strongly predict a researcher’s scientific impact.”
I bet the author of the study has a small grant…
It is virtually impossible to assess what impact scientific research will eventually have, especially in the short term.
Consider, for example, the battery: it was the product of scientific research, and it was developed over 100 years before it found any real application. 100 years!
What impact has the battery had on the world?
Bibliometric follies in the Heartland
At my university, a large state university in a Great Plains state, our administration (and, appallingly, a faculty “task force”) had the incredibly stupid idea of using bibliometric data (number of citations over a ten-year period) to make comparisons between fields. It turns out the average number of citations a paper receives in the short run strongly correlates with the average length of the bibliographies of papers in the same discipline. The typical bibliography in microbiology and immunology runs for several pages (in small print), while the typical bibliography in mathematics might have from two to ten citations (there are longer ones, but they aren’t typical — in mathematics it is an honor to have your result become “classical” and be used without any citation to the original paper in the bibliography).
Add to the folly that they proposed using data from commercial publishers whose database excludes the major house journals of some professional societies (notably the Association for Symbolic Logic) and the journals (some published by professional societies, others online only) set up by academicians in protest against abusive practices of commercial publishers.
This baleful trend is part of the rise of the all-administrative university in which management types, whatever lip-service they provide to the actual purpose of universities, behave as if university administration is the core function of a university, rather than research, scholarship or education.
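The commenter’s point about cross-field comparisons can be made concrete with a toy calculation. The field averages below are invented for illustration, but they capture the asymmetry described above: citation-dense fields like immunology versus citation-sparse fields like mathematics. Dividing a paper’s count by its own field’s average (a simple form of field normalization) is one common way to put them on a comparable scale.

```python
# Hypothetical per-field average citation counts (made-up numbers,
# chosen only to mirror the long-bibliography vs. short-bibliography
# contrast described in the comment above).
FIELD_AVG = {"immunology": 30.0, "mathematics": 4.0}

def normalized_impact(citations, field):
    """Citations relative to the paper's own field average:
    1.0 means exactly average for that field."""
    return citations / FIELD_AVG[field]

# An immunology paper with 45 citations and a math paper with 6 citations
# look wildly different in raw counts, yet both sit 50% above their
# field's average once normalized.
print(normalized_impact(45, "immunology"))
print(normalized_impact(6, "mathematics"))
```

Raw-count comparisons, by contrast, would rank essentially every immunology paper above essentially every mathematics paper, which is the folly the commenter is objecting to.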
How Science Goes Wrong
Here is another good article, which argues that most published science is probably wrong.