As a young kid I collected almost anything one can think of. Natural objects such as rocks, (dried) plants, animal skeletons and insects held my particular interest. I spent hours looking at insect wings and drawing them as I saw them through the objective of my kiddy microscope, or growing salt crystals with my home chemistry set. In hindsight it should come as no surprise to my parents that I ended up doing a PhD in the natural sciences, because at our Kavli Institute of Nanoscience people are pursuing exactly what I was practicing back in the day: purely curiosity-driven research – fundamental research, often without any apparent practical application. Take the viral polymerase molecular motors I’m observing through my microscope-for-grown-ups these days: in the far future this work may lead to novel anti-viral vaccines, but for now all we really want to do is understand how nature works. People in the lab are not in it for the money; they are driven by curiosity.
As an editor of The Lancet once stated: “We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.” Though stated quite bluntly, those familiar with the process would likely agree to some extent – or at least admit that, though useful, peer review (PR) is a time-consuming and inefficient process, to say the least.
Since my entire professional life is conducted in English, my writing here may somewhat reflect that fact. As a member of the Kavli Institute of Nanoscience here in Delft, I took the opportunity to contribute a piece of semi-colloquial writing of my choosing to the quarterly newsletter. The entire newsletter can be found here; my column starts here!
Success is 1 percent inspiration and 99 percent perspiration – almost a Thomas Edison quote, were it not that this American Idol avant la lettre was referring to (his own) genius rather than success. But it is success most of us are aiming for in science. And everyone – from students to professors – has experienced first-hand that the better part of being successful is just plain hard work. So far nothing new.
But how does one assess success? A recent report shows the future is not that unpredictable after all. Using the h-index as a measure of success, a large database of predominantly neuroscientists, and machine-learning techniques, the authors came up with an equation that predicts the future reasonably well. The Hirsch or h-index is – as most readers of this purple periodical undoubtedly know – the largest number h such that a scientist has h papers with at least h citations each. As Hirsch reasoned in 2005, its main advantage over other single-number measures is that h combines both productivity (the number of papers) and the impact of that productivity (the number of citations) into a single number. In 2007 this California-based-physicist-gone-sociologist empirically demonstrated the predictive power his index could have. The new-and-improved formula also takes into account other factors, such as the total number of papers, the number of distinct journals, the number of active years in research, and the number of publications in top journals.
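The h-index definition above translates directly into a few lines of code. A minimal sketch (the citation counts below are made up purely for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    # Sort citation counts in descending order, then find the last
    # rank at which the count still meets or exceeds that rank.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical author with six papers:
print(h_index([25, 8, 5, 3, 3, 0]))  # -> 3
```

Note that the index rewards a sustained body of cited work: a single blockbuster paper with thousands of citations still yields h = 1.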