Judging scientist performance by impact factors is like judging CEO performance by short-term share prices.
And both produce perverse incentives.
Let me add, as a final note: if you scan people's CVs for Cell, Nature, and Science papers, then you're judging by impact factor.
3 comments:
What's funny about it is that when I PubMed somebody with a lot of fancy publications, I get all intimidated. But in journal clubs etc., when I read the fancy publications, I often end up thinking, "How the heck did they convince the editor on this one?"
It's just hard to maintain that skepticism when I'm looking at a daunting PubMed line-up.
The only problem that I have with this type of argument is that the more we leave out all attempts at "quantitative" measures (imperfect as they undoubtedly are), the more we are left with basing every decision on the so-called "intangibles" (e.g., where one got his/her PhD, who the advisor was, how well-connected the person is...). If I have to choose, I would rather go with impact factor; otherwise 90% of all scientists out there will never have a chance.
Just my opinion...
@DrJ - Exactly. Probably we all have this same kind of feeling, which makes me wonder why people still rely on it. It must be because of the huge number of papers out there, so we initially substitute other people's judgement for our own.
@Massimo - I'm definitely not against attempts to apply quantitative measures to assess scientists and scientific work. But the impact factor is particularly problematic because of the skewed distribution of citation counts within a given journal: a randomly selected paper from that journal isn't well described by that average (a toy sketch at the end of this comment illustrates the point). And that doesn't even account for the ways journals manipulate the numerator and denominator to massage their impact factor.
I'd also argue that the ability to get into those C/N/S journals is strongly related to who you know, and forms another mechanism of the "Matthew Effect".
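To make that skewness point concrete, here's a minimal, purely illustrative Python sketch. The citation counts are made up (drawn from a lognormal distribution, a common stand-in for skewed citation data), not taken from any real journal; it just shows how the mean, which is what an impact factor reports, can sit well above what a typical paper in the journal actually gets.

import random
import statistics

random.seed(0)

# Hypothetical journal: most papers get few citations, a handful get many.
# (Illustrative lognormal draw, not real bibliometric data.)
citations = [int(random.lognormvariate(mu=1.0, sigma=1.2)) for _ in range(1000)]

mean_cites = statistics.mean(citations)          # analogous to an impact factor
median_cites = statistics.median(citations)      # the "typical" paper

print("mean (impact-factor-like):", round(mean_cites, 1))
print("median paper:", median_cites)
print("fraction of papers below the mean:",
      sum(c < mean_cites for c in citations) / len(citations))

With these made-up numbers, well over half the papers fall below the journal-wide mean, which is exactly why the journal's average tells you little about any single paper you pull from it.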