Since I last posted about my PhD, I’ve been hard at work on the project itself: understanding how my new algorithm works, how to control it, and how to get the best out of it. You’d think that because I made it, it would be simple (you’d be wrong).
One of the interesting things I’ve been doing is applying this “new” (actually very old, very simple) method to some of my group’s data, to see what it does and how much improvement it gives. I honestly would have taken the smallest of improvements, just to show that it does something. Measuring the improvement thoroughly matters because (to quote my supervisor) “the rest of the field will be initially horrified by this method”, so we need to prove that it’s advantageous. And just in case that’s not enough pressure to perform, we’re looking at using the data my method cleans up in an actual publication.
Holy crap, help me. That’s wonderful!
Of course, I went and messed it up.
I was supposed to be measuring the ratio of signal (peak height) to noise (random squiggle height) in a set of data I was smoothing, before and after processing, so that I could show an improvement. I put in many hours of work and eventually came out with some astonishing results: one data set was improved by a factor of 80 (eight thousand percent)! This was somewhat incredible, but I was too tired at the time to question it. I finished a brief internal report on the work and moved on.
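For the record, the measurement itself is simple. Here’s a minimal sketch of how one might estimate that SNR improvement, assuming a peak-height-over-noise-spread definition of SNR; the window positions and the moving-average smoother are placeholders I’ve made up for illustration, not my group’s actual pipeline:

```python
import numpy as np

def snr(trace, signal_window, noise_window):
    """Crude SNR: peak height in a signal window divided by the
    standard deviation of a signal-free noise window.
    (Window choices are assumptions; pick them for your own data.)"""
    peak_height = np.max(trace[signal_window])
    noise_height = np.std(trace[noise_window])
    return peak_height / noise_height

# Toy data: one Gaussian peak sitting on random noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 1000)
raw = np.exp(-((x - 5.0) ** 2) / 0.1) + rng.normal(0, 0.2, x.size)

# Stand-in for the smoothing step: a simple moving average.
smoothed = np.convolve(raw, np.ones(25) / 25, mode="same")

before = snr(raw, slice(400, 600), slice(0, 300))
after = snr(smoothed, slice(400, 600), slice(0, 300))
print(f"SNR before: {before:.1f}, after: {after:.1f}, "
      f"improvement: {after / before:.1f}x")
```

The trap, of course, is in details like these: measure the “before” and “after” with different windows, or let the smoother eat the peak as well as the noise, and the ratio you report can be wildly off.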
Next thing I know, I realized I’d screwed up. Big time. An all-measurements-wrong kind of screw-up. Cue a panicked weekend trying to figure out what had gone wrong. Just today, I finished fixing my mistakes. On the one hand, it would have been nice not to have had to compensate for my own incompetence. On the other, I learned some lessons about best practice in this kind of analysis.
As with many things, my mental notes on this debacle read “must do better”.