Note: Everything in this post is true, but I have omitted the name of the professor and the research institution to protect everyone involved. I do not have a degree from this university.
As bad as the work environment in my advisor’s lab was, I tried to stick with it because I really wanted to get my master’s degree and PhD in plant science. Then, after I’d been there a whole month, my advisor finally gave me a copy of the research proposal that had funded my assistantship. It was a USDA grant for research on the possible influence of rootstocks on bitter pit, an abiotic apple disease.
The proposal was well-written and persuasive. It framed the research as essential for apple growers, potentially saving them thousands of dollars in postharvest fruit losses. Here was the kind of language that had gotten me interested in science: noble research that would help the struggling farmer and save the apple industry. Certainly a worthwhile goal to devote two years of my life to. And yes, it was full of neat and tidy statistics: some rootstocks had only 5 percent bitter pit, and others had 95 percent! A significant difference if I ever saw one!
Until I looked at the actual data and ran some basic stats. Uh-oh. For most of the rootstocks, the standard deviation was larger than the mean. That's really bad. It means there was so much variation that no meaningful statistical analysis was possible. Even worse, some of the rootstocks had only five trees, which is too small a sample size for good statistics.
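To see why that red flag matters, here is a minimal sketch in Python. The numbers are invented purely for illustration (they are not the orchard data): when the standard deviation exceeds the mean, the coefficient of variation is above 1, and the reported "average" is swamped by tree-to-tree noise.

```python
import statistics

# Hypothetical bitter-pit incidence (%) for five trees of one rootstock.
# These values are made up for illustration; they are not the real data.
incidence = [2, 0, 95, 10, 48]

mean = statistics.mean(incidence)   # 31.0
sd = statistics.stdev(incidence)    # ~40.7

# Coefficient of variation > 1 means the spread dwarfs the average,
# so comparing rootstocks by their means alone is misleading.
print(f"mean = {mean:.1f}, sd = {sd:.1f}, sd > mean: {sd > mean}")
```

With only five trees per rootstock and this much spread, a single outlier tree can move the mean by double digits, which is exactly the problem described above.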
As I thought a good graduate student should, I immediately started reviewing the literature. Typically, scientists only cite articles that are less than ten years old, but I wasn't finding much on bitter pit, so I ended up going all the way back to the 1980s. Back then, someone had done some research to see if there was a correlation between rootstock and bitter pit, and their results were statistically insignificant. That's why nobody had researched it since.
My literature review showed that there were lots of other factors affecting bitter pit that my advisor hadn't even mentioned. He hadn't even taken the samples at the right time: they were taken at harvest, and since bitter pit is a storage disease, most researchers took monthly samples from stored apples. Also, most apple growers routinely spray their orchards with calcium chloride in the spring to prevent bitter pit. For all I knew, the farmer was spraying the research orchard right while I was reviewing the literature, which would totally mess up any data I collected on calcium concentrations in the fruit.
This was bad. Not only was it a poorly designed research project, but my advisor had misrepresented the data in his grant proposal. He hadn’t reviewed the literature, he had reported averages without mentioning that the standard deviation was greater than the mean, and he had made it sound like this research was going to help apple growers when it almost certainly wouldn’t. He had convinced the USDA to give him tens of thousands of dollars of taxpayer money to fund a graduate student, keep his lab going, and write another scientific publication.
I hoped that this was unusual. I hoped that I had accidentally picked an especially unscrupulous advisor. I sure wouldn’t have known from looking at his CV, which was full of awards, honors, prestigious society memberships, international conference addresses, and dozens of peer-reviewed journal publications. His lab sure looked a lot different on the inside than it did on the outside. And I wondered, as I walked through the building and passed dozens of other labs, were similar things going on in there? Or was I just extremely unfortunate in my choice of labs?