Outscoring old science
Rich posted a nice quote the other day on the introduction of the forward pass in football some 100 years ago, and linked that to science. I commented with the remark that the outscoring is the problem:
The big question is: how do we measure our outscore? The other football teams would not have switched either if the success of the St. Louis team, the outscore, had been obscured.
For open-access publications there is a slight outscore: a higher impact for open-access papers. But I do not feel this effect is as pronounced as in the football example.
Do you have good statistics to impress people new to the forward pass in science?
Just after that, I read this blog by Antony on the survival of the fittest chemical search engine. Even though measuring the score is easy, such statistics can easily be obfuscated. Independent rankings, like Google PageRank and Alexa Rank, may help.
However, what we really need is a direct competition. Us against them, old against new. I don’t mind being in either group, as long as it is the fittest. But we urgently need to define what fittest is. Agreeing with Timo’s statement (e.g. “It therefore troubled me that the initial counterattacks on PRISM were themselves often lacking in nuance and discrimination.”), we need exact measures to do the discrimination. Each team prepares for the game, plays the competition, there is independent scoring, and there is your 142-11 outscore. PRISM versus PLoS, Modgraph versus ACD/Labs, CDK versus OpenBabel, KNIME versus Taverna, JOELib versus Dragon, microformats versus RDFa, open science versus patents, PLS versus SVR, gemini versus single-tail surfactants, … Bring on those competitions! Let the score be clear (open), fair, and discriminating!
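To make the independent-scoring idea a bit more concrete, here is a minimal sketch of what such a head-to-head tally could look like. Everything in it is hypothetical: the two “tools”, the test sets, and the reference values are placeholders; the only point is that the test set is shared and blind, and the metric and the tally are agreed on beforehand.

```python
# Minimal sketch of a head-to-head scoring harness (all names and data are hypothetical).
# Two "teams" (prediction tools) run on the same blind test set; an agreed metric
# (here mean absolute error) decides each category, and the tally of category wins
# gives the football-style outscore.

def tool_old(structure):
    # Placeholder for the established tool's prediction.
    return len(structure) * 0.1

def tool_new(structure):
    # Placeholder for the challenger's prediction.
    return len(structure) * 0.12

# Shared, blind test sets: structure (placeholder string) -> reference value.
test_sets = {
    "logP": {"CCO": 0.2, "c1ccccc1": 2.1, "CCCCCC": 3.9},
    "boiling point": {"CCO": 78.4, "c1ccccc1": 80.1, "CCCCCC": 68.7},
}

def mean_absolute_error(tool, test_set):
    errors = [abs(tool(s) - ref) for s, ref in test_set.items()]
    return sum(errors) / len(errors)

score = {"old": 0, "new": 0}
for category, test_set in test_sets.items():
    mae_old = mean_absolute_error(tool_old, test_set)
    mae_new = mean_absolute_error(tool_new, test_set)
    winner = "old" if mae_old < mae_new else "new"
    score[winner] += 1
    print(f"{category}: old MAE={mae_old:.2f}, new MAE={mae_new:.2f} -> {winner} wins")

print(f"Final outscore: old {score['old']} - new {score['new']}")
```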
Maybe this is something we should set up with the Blue Obelisk: a yearly competition, with various categories (think: databases, prediction, modeling, …), and scientifically relevant judging.
May the best team win!