Sunday, January 8, 2017

The reproducibility crisis in science

American scientists pour forth a formidable Niagara of scientific papers every year.  Generous federal research money, grants, and the pressure on university professors to publish or perish all help to swell the flow of papers.
  Unfortunately, a high percentage of this flood of papers cannot be reproduced.  When other scientists attempt to obtain the same results in their own labs, they cannot do it.  In science, if the results cannot be reproduced, they must be considered quackery.  Not science but B***S***.
   I experienced the reproducibility problem myself some years ago.  Working on a new medical device product, I consulted the literature looking for ways to do what we needed to do.  I found a promising method, coded it up, and it worked.  It just didn't work as well as the author claimed: my implementation of the process came in exactly 50% low, half the performance the author had reported.  Eventually I telephoned the author to ask for advice.  After a few minutes of conversation, he somewhat sheepishly admitted that he had left out a factor in his computations, and that yes, the algorithm only delivered half the claimed performance.  Damn.  After wasting a good deal of time, I would have done better using the standard Huffman coding algorithm.
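   For readers unfamiliar with the baseline I mention above: Huffman coding is the textbook method for building an optimal prefix code from symbol frequencies.  A minimal sketch in Python (my own illustrative names here, not the code from that project) looks roughly like this:

    import heapq
    from collections import Counter

    def huffman_code(data):
        # Build a {symbol: bitstring} table for the input sequence.
        # Illustrative sketch of the textbook algorithm, not production code.
        freq = Counter(data)
        # Heap entries are (frequency, tiebreaker, node); a node is either a
        # symbol (leaf) or a (left, right) pair (internal node).
        heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:
            # Degenerate input with a single distinct symbol.
            return {heap[0][2]: "0"}
        tiebreak = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, tiebreak, (left, right)))
            tiebreak += 1
        codes = {}
        def walk(node, prefix):
            # Assign 0 to the left branch and 1 to the right branch.
            if isinstance(node, tuple):
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:
                codes[node] = prefix
        walk(heap[0][2], "")
        return codes

    if __name__ == "__main__":
        text = "abracadabra"
        table = huffman_code(text)
        bits = "".join(table[ch] for ch in text)
        print(table)
        print(len(bits), "bits versus", 8 * len(text), "uncompressed")

   The point of citing it here is only that a well-understood, easily reproduced baseline would have saved me the trouble.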
    And just the other day, the Wall Street Journal ran an op-ed claiming that all the important medical advances have been made by privately funded research at the big drug companies.  National Institutes of Health funding, although ample, had not produced anything of clinical use, the op-ed claimed.
   Somebody ought to do a study of the effectiveness of federally funded research.  Go back a good many years, tot up the amount of money spent and the number of papers published, and count the number of products based on those papers that actually made it to market.
