Impactful – Part 2

I left off last time with a promise of opinions, and today I shall not disappoint: following on from my previous post, I’ll show you the opinions of some established scientists I talked to, and walk through how they have affected my view of Impact Factors and their use as figures of merit for scientists. A reminder of where we left off: I started out extremely upset that I might be judged by such a simplistic number, and worried that this would affect my academic career…

The Investigation

Due to my lack of experience, I’m in no position to start ragging on about IFs myself. However, I do know people who are qualified to give a nuanced and reasonable opinion. Let’s ask them and look for a bigger sample size – statistical power, if you will.

My initial wave of questions went out via Twitter, and one lecturer in high energy physics responded to my question about how problematic IFs really are. He had this to say:

“[The problem varies] depending on employers, but for some it’s a big problem. Look up the Queen Mary Physics Department – they have a problem with it.”

This is a reasonable point – with any kind of figure of merit, there will always be someone who abuses it, misinterprets it, or simply over-relies on it for judgement. Other people (and the implication was that his own department counted among these) do not use IFs in such a way – indeed, possibly not at all. So then the question became: how important are they? Would there be a real price for giving them up?

“There’s no reason we couldn’t give up IF’s, but the forces from above like IF. “

That’s a fairly clear “no”, then. We could, if we wanted to, get by without IFs at all. That is not necessarily to say they are good or bad, just that IFs are not the only way of representing the importance, or impact, of one’s work. Notice that the response is largely free of criticism of IFs specifically; this was to become a pattern in the responses I got.

At this point, my suspicion of IFs and their application had actually deepened – I was becoming convinced that IFs were simply a bad statistic to use no matter what. I suppose I agreed with the original challenge on statistical literacy. However, a sample of two does not a dataset make, so I continued bugging anyone I could think to ask. Next to respond was a senior academic (previously an associate dean for research). He had some interesting things to say:

“You [Steve] are at such an early stage that a publication anywhere will be thought of as good, even conference proceedings, but when I am looking at the CV of anyone more established, like a potential postdoc, then I would be much more interested in someone with one high (or very high) impact paper than several low impact publications.”

Aha! Damning evidence that IFs are evil – at a cursory glance, at least. Taken from a calmer standpoint, this again doesn’t really confirm or deny anything. Sure, IFs make a difference; on the other hand, according to our dean, a good set of IFs doesn’t guarantee you a job, and having less impact doesn’t preclude you from one either. This is important, as it backs up my suspicion that most places don’t worry solely about IFs to the exclusion of all else. Probably. There was more:

“I would want to know how much influence over the paper the person had – i.e. did you just act like a technician doing what you were told, did you just do a minor part of the experiment, did you just write a computer code using someone else’s theory, or did you have a ‘thinking’ role.”

So in the very next sentence I discovered that I was right – there is significantly more depth to this than I initially saw. We can easily see that (some) people in a position to make decisions on applications for funding aren’t completely reliant on one specific measure. In fact, since those people are often experienced researchers themselves, they often have the wisdom to recognise the difference between a bunch of IFs and an actual researcher. It’s not all rosy – the reality is evidently that IFs are used to differentiate between candidates. Thankfully, they aren’t usually the sole judgement factor.

The next contact was – in comparison – a lowly post-doctoral research associate hailing from the other side of the Atlantic. His response was something of an essay (a welcome one), so I’ll summarise a little. Firstly, there’s nothing outright bad about the IF as a metric – it’s the application which is at fault (IFs do what they say on the tin; it’s just that some people haven’t read the tin). However, he did confirm that a lot of universities base promotions on the number and impact of papers a researcher publishes each year. He had this, in particular, to say:

“That’s where I think it’s an egregious error, because often Science or Nature isn’t the right place to publish a particular article, even if it’s fairly groundbreaking in a particular sub-field.”

That’s the first real criticism of IFs that I’ve seen from somebody I’m acquainted with – and it’s certainly an appropriate one. Partly, I took this as further evidence that IFs can be misread, but it also implies that it’s not unknown for people to deliberately publish in a place that looks good, even if it’s not the best place for the science itself. I find that distasteful on principle; a researcher should always do what is best for the science where possible, and not doing so reduces the credibility of said researcher (in my opinion). It’s just not in the spirit of the scientific method, and that depresses me somewhat.

I’ve gone on for long enough now. Next time, in Impactful – Part 3, I finish my exploration of Impact Factors with some more opinions and examine my final perspective on their utility and use; we’ll also look at actions you can take to reduce the likelihood of being judged solely by Impact Factors. Please comment and share your experiences – I’m genuinely fascinated by this discussion.


About stoove

A physicist, researcher, and gamesman. Likes to think about the mathematics and mechanics behind all sorts of different things, and writing up the thoughts for you to read. A competent programmer, enjoys public speaking and mechanical keyboards. Has opinions which might even change from time to time.
This entry was posted in General Science, Opinion, Physics.

9 Responses to Impactful – Part 2

  1. Ken says:

    The issue is, I believe, how IFs are determined. It’s similar to the problem of using an increased average salary to suggest that typical salaries have increased. If the richest few people in a country make a lot of money in a particular year, the average salary may rise but the majority of the people may not have had a pay rise. I believe that IFs are essentially the average number of citations a paper in a particular journal will get in a certain time period. This appears to be determined largely by a few very highly cited papers. Using IFs to then determine the perceived quality of a typical paper in the journal would – in my view at least – be meaningless because the average number of citations to a paper in this journal does not, in any way, tell you the number of citations you would typically expect a paper, published in this journal, to get. It is just – in my opinion – a very poor way to estimate the quality of papers in a particular journal.

    • stoove says:

      Well, I think you’re right that there is likely a problem with the way IFs are applied, in the translation between *journal* and *paper*. Your analogy makes sense. I’m told that the calculation of IFs is a tad more complicated than a simple mean (but also very possible to exploit). You’re right – IFs describe the likelihood of seeing a widely cited paper in a journal, **not** the number of citations one should expect when publishing there.

      However, this isn’t really what got me so angry. If I may extend your analogy a little:

      Judging a **person** on the IF’s of the **journals** they publish in is somewhat like judging a **person** on the average wage of the **countries** they have worked in. It assumes that high IF’s necessarily imply that your research is good, which is simply not true. That is particularly what offends me about this issue, and it’s what I’m investigating – how prevalent is this assumption?

      The causal relationship runs the other way: good researchers produce highly cited papers, which in turn make high-IF journals. It’s debatable to what extent a good researcher will necessarily publish in a high-IF journal (as somewhat evidenced by the last quote in the article), and so it is rather naive to simply assume that someone who targets high-IF journals is necessarily a good researcher.
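      To make Ken’s analogy concrete, here is a minimal sketch (with entirely hypothetical citation counts) of how a single blockbuster paper drags a journal’s mean citation count – the quantity an IF-style average is built on – far away from what a typical paper achieves:

```python
from statistics import mean, median

# Hypothetical citation counts for 20 papers in one journal:
# 19 modestly cited papers plus one blockbuster.
citations = [2, 1, 0, 3, 2, 1, 0, 4, 2, 1, 0, 3, 1, 2, 0, 1, 3, 2, 1, 250]

print(mean(citations))    # 13.95 – inflated by the single outlier
print(median(citations))  # 1.5  – closer to what a typical paper gets
```

      The mean (the IF-like figure) suggests a typical paper here earns around fourteen citations; the median shows the reality is one or two.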

      • Ken says:

        If I understand what you’re asking, I don’t believe that anyone I know believes that the Impact Factor of the Journal they publish in is important. If any individual was to be judged (say when short-listing for a job or a Fellowship), what is typically used is their own h-index (i.e., the number of their papers n that have at least n citations). This has its own problems, but we’re not really discussing that here. I’ve never (for example) heard anyone suggest that we shouldn’t include a certain paper because it was published in a journal with a low IF. Sometimes people will publish a conference paper that gets a reasonable number of citations. In my experience, even this would be regarded as good. If people are citing it, it must be having some impact. What I believe has happened, is that University Administrators (in some universities) have decided to judge researchers in their universities and are using the IFs of the Journals to make this judgement. I don’t believe any active researcher believes that the IF of the Journal is a good indicator of the quality of a paper.

      • stoove says:

        Well, we shall see in the conclusion of this series that this is not necessarily the case. In fact, we’ve already seen (admittedly not the strongest) evidence that some people **do** look at higher-IF journals first when publishing, even if it’s “not the best place for the paper” in terms of the science. This shows that, whatever people might say, IF does register in people’s considerations. I maintain that there is an issue here: IFs are misused on a personal scale. It might not be to the extent implied in the first part (and in the quote I started with there), but it certainly exists to an extent.

        EDIT: Perhaps I’m getting ahead of myself a little. That final quotation only implies what I said, rather than states it specifically.
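        As an aside, the h-index Ken mentioned above is simple enough to sketch; this is a toy implementation over hypothetical citation counts, not any official tool:

```python
def h_index(citations):
    """Largest n such that the author has n papers with >= n citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i  # the i most-cited papers all have at least i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

        Note the second example: a single very highly cited paper barely moves the h-index, which is part of why it behaves differently from journal-level averages like the IF.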

  2. Ken says:

    If the quote you are referring to is the one about being “statistically illiterate” if you use Impact Factors, then I think the quote is essentially correct. IFs give you virtually no information. It could be one paper with all the citations (and all the rest with none), or it could be all the papers with the same number of citations. As a statistical indicator, it is meaningless. If you then use it to judge the papers in a journal (i.e., you are using it as a statistical indicator) then you are being, essentially, statistically illiterate.

    I should clarify something with regard to my last comment. Even though I’m unaware of anyone who would claim a paper was good because it had been published in a journal with a high IF, there is definitely a sense that for the forthcoming REF (which will judge the submitted papers as 4*, 3*, 2*, 1*) papers in Nature or Science will be automatically 4*. There is therefore a drive to publish in Nature or Science.

    • stoove says:

      Well, I would personally debate the claim that using such a statistic makes you illiterate – if I use a mean in my research, it doesn’t make me statistically illiterate. What would make me statistically illiterate is not acknowledging the variation, distribution, causal relationships, etc. Someone crosses the line when they apply a statistic blindly, and this is no less true with IFs. I suppose the third part (published Friday) will clear this up a bit.

      However, I will say that I’m not attempting to persuade anyone here. Document, discuss, contribute? Yes. Establish one view as truth? No. I think we should all agree that this subject is worth more than blindly creating another polarising debate which gets us nowhere. That would be very sad. You’re certainly welcome to your opinion, and I hope you’ll re-evaluate it given the opinions I provide; even if your opinion is unchanged, getting the reader to think again is all I’m aiming for here.

      I think your point about REFs is a good one.

      • Ken says:

        I think I do agree with your first point. If you use IFs together with other measures of impact, that might be fine (although I’m not entirely convinced that even this is true). I was referring to a case (as appears to be happening at Queen Mary) where you use IFs alone to try and determine the quality of a paper or a researcher. If someone thought that was an appropriate and acceptable thing to do, I would argue that using the term “statistically illiterate” would then be appropriate.

  3. stoove says:

    Well, in that case it may well be so, but the evidence I’ve presented so far hasn’t been able to conclude one way or the other whether that is a typical state or an extreme.

  4. Pingback: Impactful – Part 3 | UNconstant
