I left off last time with a promise of opinions, and today I shall not disappoint; I’m following on from my previous post by sharing the opinions of some established scientists I talked to, and walking through how they have affected my view of Impact Factors and their use as figures of merit for scientists. A reminder of last time: I started out extremely upset that I might be judged by such a simplistic number, and worried that this would affect my academic career…
Due to my lack of experience, I’m in no position to start ragging on about IFs myself. However, I do know people who are qualified to give a considered, nuanced opinion. Let’s ask them and look for a bigger sample size – statistical power, if you will.
My initial wave of questions went out via Twitter, and one lecturer in high energy physics responded to my question about how problematic IFs really are. He had this to say:
“[The problem varies] depending on employers, but for some it’s a big problem. Look up the Queen Mary Physics Department – they have a problem with it.”
This is a reasonable point – with any kind of figure of merit, there will always be someone who abuses it, misinterprets it, or simply over-relies on it for judgement. Other people (and the implication was that his own department counted among these) do not use them in such a way – indeed, possibly not at all. So then the question became: how important are they? Would there be a real price for giving them up?
“There’s no reason we couldn’t give up IF’s, but the forces from above like IF.”
That’s a fairly clear “no”, then. We could, if we wanted to, get by without IFs altogether – that is not to say they are good or bad, just that IFs are not the only way of representing the importance, or impact, of one’s work. Notice that the response is largely free of criticism of IFs themselves; this was to become a pattern in the responses I got.
At this point, my suspicion of IFs and their application had actually deepened – I was fairly convinced that IFs were simply a bad statistic to use no matter what. I suppose I agreed with the original challenge on statistical literacy. However, a sample of two does not a dataset make, so I continued bugging anyone I could think to ask. Next to respond was a senior academic (previously associate dean for research). He had some interesting things to say:
“You [Steve] are at such an early stage that a publication anywhere will be thought of as good, even conference proceedings, but when I am looking at the CV of anyone more established, like a potential postdoc, then I would be much more interested in someone with one high (or very high) impact paper than several low impact publications.”
Aha! Damning evidence that IFs are evil – at a cursory glance, at least. Taken from a slightly calmer standpoint, this again doesn’t really confirm or deny anything – sure, IFs do make a difference. On the other hand, according to our dean, a good set of IFs doesn’t guarantee you a job, and having less impact doesn’t automatically preclude you from one either. This is important, as it backs up my suspicion that most places don’t rely on IFs to the exclusion of all else. Probably. There was more:
“I would want to know how much influence over the paper the person had – i.e. did you just act like a technician doing what you were told, did you just do a minor part of the experiment, did you just write a computer code using someone else’s theory, or did you have a ‘thinking’ role.”
So in the very next sentence I discover that I was right – there is significantly more depth to this than I initially saw. We can see that (some of) the people in a position to make decisions on funding applications aren’t completely reliant on one specific measure. In fact, since those people are often experienced researchers themselves, they often have the wisdom to tell the difference between a bunch of IFs and an actual researcher. It’s not all rosy – the reality is evidently that IFs are used to differentiate between candidates. Thankfully, they aren’t usually the sole deciding factor.
The next contact was – in comparison – a lowly post-doctoral research associate hailing from the other side of the Atlantic. His response was something of an essay (a welcome one), so I’ll summarise a little: firstly, there’s nothing outright bad about the IF as a metric – it’s the application that’s at fault (IFs do what they say on the tin; it’s just that some people haven’t read the tin). However, he did confirm that a lot of universities base promotions on the number and impact of papers a researcher publishes each year. He had this, in particular, to say:
“That’s where I think it’s an egregious error, because often Science or Nature isn’t the right place to publish a particular article, even if it’s fairly groundbreaking in a particular sub-field.”
That’s the first real criticism of IFs I’ve seen from somebody I’m acquainted with – and it’s certainly an appropriate one. Partly, I took this as further evidence that IFs can be misread, but it also implies that it’s not unknown for people to deliberately publish somewhere that looks good, even if it’s not the best home for the science itself. I find that distasteful on principle; a researcher should always do what is best for the science where possible, and not doing so reduces their credibility (in my opinion). It’s just not in the spirit of the scientific method, and that depresses me somewhat.
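Coming back to the “does what it says on the tin” point, it’s worth a quick reminder of what the tin actually says. Roughly speaking, a journal’s standard two-year impact factor for a year $Y$ is

$$\mathrm{IF}_Y \;=\; \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}.$$

Nothing in that ratio says anything about an individual paper, let alone an individual researcher – it’s a journal-level average, which is precisely why using it as a personal figure of merit is the misapplication being criticised here.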
I’ve gone on for long enough now. Next time, in Impactful – Part 3, I’ll finish my exploration of Impact Factors with some more opinions and set out my final perspective on their utility and use; we’ll also look at actions you can take to reduce the likelihood of being judged solely by Impact Factors. Please comment and share your experiences – I’m genuinely fascinated by this discussion.