IgNobel 2008

This year’s IgNobel prizes were awarded last Thursday. Each year, they reward scientists for truly thought-provoking discoveries. For instance, last year J.M. Toro, J.B. Trobalon and N. Sebastian-Galles won the linguistics prize for demonstrating that rats cannot differentiate between a person speaking Japanese backwards and a person speaking Dutch backwards. (You can download their report here.)

My personal all-time favourite, however, is the 2003 physics prize, which went to J. Harvey, J. Culvenor, W. Payne, S. Cowley, M. Lawrance, D. Stuart, and R. Williams, for their work in analysing what it takes to drag sheep over various surfaces. (Their highly technical report can be downloaded from here.)

This year’s winners include:

M. Zampini and C. Spence, who won a nutrition prize for electronically modifying the sound of a potato chip to make it appear crisper and fresher than it really is.

T. Nakagaki, H. Yamada, R. Kobayashi, A. Tero, A. Ishiguro, and A. Toth, who won a cognitive science prize for "discovering that slime molds can solve puzzles".

G. Miller, J. Tybur and B. Jordan, who discovered, after extensive field work one would assume, "that a professional lap dancer’s ovulatory cycle affects her tip earnings".

And the people of Switzerland, apparently, who won this year’s peace prize for "adopting the legal principle that plants have dignity". (I’ll keep that in mind the next time I eat vegetarian.)

The full list of this year’s IgNobel winners is here.

Academia and ethics

Not long ago, Kingston University staff were caught trying to pressure students into giving dishonest replies to a nationwide survey about student satisfaction. The BBC and Wikileaks have several informative postings on the subject. In a world dominated by marketing and PR, the underlying motto has become: what looks good must be good. It’s the same logic that makes (some) university departments spice up their activity reports with dead projects listed as ongoing, non-active students listed as active, non-refereed articles passed off as refereed, and so on. Dishonesty and deceit are seen as short-cuts to better-looking results, which in turn make the institution more attractive to prospective students and sponsors.

Individual researchers do this, too, in the hope of increasing the apparent value of their own CVs. There is a whole website devoted to famous plagiarists, for instance, many of whom are scientists. I would suspect, though, that plagiarism as a simple copy-and-paste procedure is less common than the practice of "forgetting" to credit one’s sources and properly acknowledge the origin of ideas. I must hasten to point out, however, that while such "forgetfulness" is often easy to suspect, it is difficult to prove.

Ideally, a scientist should be a kind of guardian of truth. At the very least, s/he should be trusted to be honest and ethical about data, methods, etc., and be nothing like the South Korean scientist behind the much-publicised affair of the faked cloning research. With the advent of high-tech computers and digital imaging, deceptions are becoming more and more advanced. See, for instance, this article about an exposed attempt to use faked images in a medical research article.

There are also other types of deceitful behaviour in academia, some of which I’ve witnessed myself. For instance, I have been asked to forge signatures on applications and to fake receipts for travel accounts, both with the claimed intent of simplifying paperwork. (Don’t ask. I don’t understand it either.) I’ve even been asked to use my private bank account to "store" project funds in order to make them easier to access. Needless to say, I’ve refused to participate in each case. It does make you wonder, though. Can people who rationalise shady behaviour be trusted to produce honest research? Personally, I doubt it. People who lack moral standards in one area generally lack them in other areas, too. The simple truth is that con men are con men 24 hours a day.

Anyway, this may all sound a bit depressing, but it really isn’t. There are, generally speaking, good people in academia, just like there are elsewhere.

Does science need theories anymore?

I just read a somewhat flawed essay at the Edge entitled "The End of Theory", written by Chris Anderson.

In short, the essay argues that we will no longer need theories in science because we have Google. We can now set computers loose on massive amounts of data and let them detect patterns for us. From this, Anderson draws the slightly irrational conclusion that we need no theories.

Says Anderson:

Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot …

There’s no reason to cling to our old ways. It’s time to ask: What can science learn from Google?

Anderson’s somewhat fallacious conclusion is that we don’t need scientific theories or models, since Google will give us all our answers anyway. However, what Anderson is talking about is nothing new. He is merely describing the first step of a long-established scientific method called induction, or "data-to-explanation". In its modern form it has been around since at least Francis Bacon (1561–1626), who is sometimes referred to as the father of scientific induction.

Amputating the inductive method by removing the explanation part (the model, the theory) is not the way to go; that would effectively usher in a period of scientific stagnation. It would be a job half-done (to some degree even pointless) for a scientific endeavour to collect data and establish patterns without trying to explain why the patterns are there. The explanation part is essential if we want to understand *why* the patterns exist, and for that we need models. The models need not be established beforehand, of course (even though analysing data without some prior theory is virtually impossible). Finding patterns in data can be, and often is, a perfectly valid impetus for developing new explanatory models.

What Google and the like offer is new methods for handling far larger amounts of data than has been possible before. With Google, we can find new, previously undetected patterns, some of which our existing theories cannot predict. These, in turn, will create a need for new explanations and new theories. Hence it is more likely that Google will foster even more theories and models, not fewer.
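
To see why "correlation is enough" cannot stand on its own, here is a small illustration of my own (a Python sketch, not anything from Anderson’s piece). Feed a pattern-hunting algorithm nothing but random noise, and it will still happily report "patterns"; only a model of where the data came from can tell us that those patterns mean nothing.

```python
import numpy as np

# Purely illustrative: given enough variables, a pattern-finding
# algorithm will always "discover" correlations, even in pure noise.
rng = np.random.default_rng(seed=42)
n_vars, n_obs = 200, 50
data = rng.normal(size=(n_vars, n_obs))  # 200 unrelated random series

# Let the "statistical algorithm" scan every pair of variables.
corr = np.corrcoef(data)
np.fill_diagonal(corr, 0.0)  # ignore trivial self-correlations
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)

print(f"Strongest 'pattern': variables {i} and {j}, r = {corr[i, j]:.2f}")
# Typically reports r of around 0.5 -- impressive-looking, yet pure chance.
```

With 200 variables there are nearly 20,000 pairs to compare, so a correlation of that size turns up by chance alone. Without an explanatory model, there is no way to tell such coincidences apart from genuine discoveries.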