Impactful – Part 3

This is the concluding part of my series on Impact Factors and their use as figures of merit for researchers. Starting with little more than one person’s anger that researchers could be judged on a single simplistic figure, I began my own small investigation into the subject. I got in contact with some established scientists and analysed what they had to say. We pick up having heard some balancing evidence about IFs and their use…

The Investigation Continues

So far I’d found evidence that IFs can be misused, but sometimes aren’t – indeed, they are often used as part of a range of tools. On the other hand, I also knew that they are sometimes used contrary to their intention. The conflict was beginning to trouble me. Finally, we get to the real core of the problem: a statistic misleading people. Here’s our post-doc again:

“The problem is that people misinterpret IF as a judge of how influential a piece of work is. There’s certainly a correlation, but it’s not one-to-one. I’ve seen absolutely terrible papers in high-IF journals and amazing papers in low-IF journals.

“Everyone wants to do great work and discover neat things, but I think IF has led some people to chase the number rather than tackle the interesting problems. Often they go hand-in-hand, of course, but I’m always skeptical of a researcher that’s more interested in the publicity than the project.”

I think that sums up nicely what the root cause seems to be – IF is just a statistic. I can’t really say “blame IF”; rather, the problem is people’s misinterpretation of what an IF means and how it should be applied. To quote from a lovely little popular statistics book, The Tiger That Isn’t: “Whenever you see an average, think: white rainbow, and imagine the vibrancy it conceals”. Do researchers, scientists, the scientific community really need reminding about the pitfalls of using an average value? It seems that perhaps they do.
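To make that concrete, here is a toy sketch in Python (with entirely invented citation counts, not real journal data) of how a mean – which is essentially all an IF is – can hide the skew that citation distributions typically show:

# Toy illustration, hypothetical numbers only: two imaginary journals whose
# papers have the same mean citations per paper (i.e. the same "impact factor")
# but very different citation distributions.
journal_a = [2, 3, 2, 4, 3, 2, 3, 2, 3, 2]    # every paper picks up a few citations
journal_b = [0, 0, 0, 0, 0, 0, 0, 0, 1, 25]   # one blockbuster, the rest ignored

def mean(counts):
    return sum(counts) / len(counts)

print(mean(journal_a), mean(journal_b))    # 2.6 2.6 – indistinguishable by the average

Both imaginary journals score 2.6, yet in the second one the typical paper is never cited at all. That, in miniature, is the white rainbow.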

Academics (thankfully) aren’t stupid – you don’t get to the level where IFs matter without understanding statistics. In that sense, I don’t think the statement “if you use IFs, you are statistically illiterate” is correct – it’s more likely that you’re just not thinking about what you’re doing*. I don’t think my initial anger at the use of IFs was justified, and I don’t think we should disown anybody who misuses them. They should be woken up, reminded that an IF is just another statistic that doesn’t exist in a vacuum, and told squarely that they have been silly. Perhaps if they persist, as in the case of Queen Mary’s, more outrage is in order – still, let’s not characterize everybody who uses IFs as stupid**.

* – although, arguably, an academic who isn’t thinking is an academic who isn’t doing their job properly; that’s debatable and, in any case, slightly off-point.
** – given the nature of IFs as a simplified method used for generalising, I think grand statements about IFs which generalise in their conclusions are somewhat ironic.

Further Reading (between the lines)

There’s more to talk about, but I’ve waffled on for quite a while now, so we’ll keep this quick. An important question is: if we don’t use IFs, what do we do instead?

The obvious answer would be just to think more carefully about who we want to interview, and to spend more effort on researching candidates. Our post-doc did have something to say along these lines:

“My university goes the time-consuming route – basically, when someone comes up for tenure, they send out letters to prominent scientists in that person’s sub-field, and ask for evaluations of that person’s work, contributions to the field, etc.  Basically to show that they are an expert within that field, and among the top 10 or 15 names you’d rattle off when listing the influential work done within that field.”

That’s a very sensible thing to do, but there are problems with expecting institutions simply to make things harder for themselves; exercises like this can suffer from serious time restrictions, participation issues, and money or manpower problems – it’s no small undertaking. Indeed, a large part of the reason that IFs have become so popular in the first place seems to be that they reduce the amount of work and risk that institutions have to take on:

“A smaller school often can’t take the time, nor can they set the bar as high.”

“Joe who? Never heard of him.” isn’t going to get anybody a job, so it makes some sense* to use IFs sometimes, perhaps in many circumstances. Perhaps we can’t expect institutions to give up IFs simply on request. With that in mind, what can we do to encourage a shift in the climate towards a less single-minded approach? Our ex-dean** had an insight to share:

“In my CV, after each publication (actually now since there are so many, only after the ones I think are important), I write a few lines about why the paper should be thought of as important (i.e. who cares?) and what was my role. Here is an example:
The first of a series of papers on dilute nitrides for mid-infrared optoelectronics.  This work introduced a record long wavelength bandgap III-V alloy for that time and a competitor to …  It also showed that band structure engineering could be used to …  I had the idea for the project, and my role was to perform or supervise all experiments. It led to a large sponsored project …” [contracted]

Personally, I like that idea very much (I’m already working on how I might talk about my MPhys year in an analogous way). I would suggest that the scientific community in general would do well to adopt systems like this as standard [rather than as a tip for people who enquire] – taking every opportunity to reduce the need for IFs to be used to judge a person should be a sensible measure for anyone who’s even a little uncomfortable with the idea. Someone could judge you on a single number and you wouldn’t even know. To me, that seems like ample reason to adopt new ways of edging out the demand for IFs.

Indeed, if everyone were to take up multiple (practical) methods of reducing said demand for IFs, the practice would become less prevalent. If one is convinced that IFs are a bad thing, then they should take conscious action to change their approach (in my opinion).

* – indeed, some might argue that it’s fairer to people in very large sub-fields or in their early careers.
** –  not in the sense of “ex-parrot”. Fortunately.

quod erat demonstrandum

I set out (over two thousand words ago now) to show you, dear reader, how applying rational thought and the core idea of science – that every idea should stand up to evidence – can lead one not only to change their opinion but to be happy with the result. I’m not ashamed to say that I initially jumped to conclusions (all humans are guilty of that). I am proud to say that I applied myself and came to what seems a sensible conclusion. I’m still happy to change my attitude based on new evidence [this should go without saying].

To conclude the piece, I’d like to share with you a statement that particularly demonstrates how I think science should be done: on principle, regardless of fashion or potential personal gain, to find the correct answer, truthfully. I have tried my hardest to write this series in the same spirit of intellectual honesty. Take it away, Mr. Post-Doc:

“I wasn’t so happy about [the high-impact research first] approach, because it seems like putting the cart before the horse. My approach to science is to attack neat problems, and then publish the result wherever it seems most appropriate. Not to determine which project is most likely to get into $BigNameJournal and work on those.”


Now that I’ve finished, one clarification: I am not by any means attempting to persuade you (the reader) into one viewpoint or another. Instead, I’m presenting the evidence I have seen and explaining what I make of it. I don’t begrudge disagreement, as long as you have taken the time to incorporate the information I have provided into your opinion. My goal is to inform, contribute, and discuss; not to debate or declare grand truths.
I think that providing information and discussion is more in the spirit of the scientific method than any argument; one should always be ready to change their opinion in the face of new evidence. Often I run into people who seem to disagree on both counts. This saddens me.
And yes, here is the Dead Parrot sketch.
