From Daniel Șterbuleac on FEE.org (Nov. 27, 2022):
Scientific research concerns discovering facts about the natural world and, to the largest extent possible, putting them to good use for mankind. It seems like a noble endeavor, right? You get to choose which of the four main branches of STEM you wish to study: science, technology, engineering, or mathematics. Science itself usually divides, by the nature of the object studied, into computer science, physics, astronomy, chemistry, and biology, to mention the most common disciplines. Well, I have chosen the last of these, the biological sciences where they intertwine with medicine.
Which seems like a great choice in our times.
However, working in this field is not a walk in the park. Unless you are blessed with the right environment and the right motivation, such a choice may do you and others more harm than good.
Research Often Is a Wild Goose Chase
The next time you read a piece of scientific evidence, keep in mind that it is most likely false. This was formally argued in a 2005 paper and led to a stirring debate. Let me explain the theory as simply as possible; it applies to all research fields, STEM and non-STEM.
In order to make a scientific discovery, you (usually) formulate a hypothesis and then put it to the test. But you cannot test it on the whole statistical population (all the possible objects of study). You obviously cannot put every cancer patient in the world on an experimental pill. You cannot analyze the effects of an economic change in your country and assume it applies in the same way everywhere and at all times. Rather, you design an experiment with a limited sample of individuals and then extrapolate (with a certain amount of uncertainty) to the whole population. For a hypothesis to be considered highly likely to be true, the conventional requirement is a p-value below 0.05 (5 percent): less than a 5 percent chance of seeing such a result if there were no real effect.
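To make this concrete, here is a minimal sketch of such a test, assuming a made-up trial that compares a treated group against a control group; the group sizes, the simulated scores, and the choice of a two-sample t-test are illustrative assumptions, not details from any real study.

```python
# A toy illustration of testing a hypothesis on a limited sample and judging
# it against the conventional 0.05 threshold. The data are simulated, not
# taken from any real trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome scores for 30 treated patients and 30 controls.
treated = rng.normal(loc=5.5, scale=2.0, size=30)
control = rng.normal(loc=5.0, scale=2.0, size=30)

# Two-sample t-test: is the observed difference larger than chance alone
# would plausibly explain?
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Declared 'statistically significant' at the 5 percent threshold.")
else:
    print("Inconclusive: the difference could plausibly be a coincidence.")
```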
Experiments may lead to inconclusive results due to chance (coincidence) or other factors, and there are many factors that can influence them. A researcher may be biased (willingly or not) in his choice of subjects, statistical methods, or experimental protocols. Models may not replicate real-world scenarios (e.g. animal models in cancer research, statistical models in economics research). An association between two events can be mistaken for causation when, in fact, a third, confounding variable influences both. Another difficulty arises when the initial hypothesis is split into smaller, more specific ones that are tested individually, which invites cherry-picking and taints the results.
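The cherry-picking point can be seen with some back-of-the-envelope arithmetic: if each sub-hypothesis is tested at the 5 percent level and none of them is actually true, the chance of at least one spurious "finding" grows quickly with the number of tests (assuming, for simplicity, that the tests are independent).

```python
# If one broad hypothesis is split into k sub-hypotheses, each tested at the
# 5 percent significance level, and none of them is actually true, the chance
# of at least one false positive grows quickly (independent tests assumed).
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:2d} sub-hypotheses -> {p_any_false_positive:.0%} chance of a spurious 'finding'")
```

With 20 sub-hypotheses, the chance of at least one spurious result is already around 64 percent, even though every individual test looks respectable on its own.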
In addition to the aforementioned factors, there stands yet another hindrance: researchers are inclined to test an unusually high number of fallacious hypotheses (as opposed to actually true ones) and to erroneously "prove" at least some of them true. The incentive to show that something apparently false is actually true is enormous. Who would not want his research ("inflation lowers unemployment," "dark chocolate prevents cancer") to make the headlines and secure further funding?
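A rough sketch of the arithmetic behind this point: of all the hypotheses that come out "significant," the share that are genuinely true depends on how plausible the tested hypotheses were to begin with. The statistical power, significance level, and prior odds in the sketch below are assumed for illustration only.

```python
# Of all hypotheses that test "positive" (p < alpha), what share are truly
# correct? If most hypotheses researchers try are false to begin with, even
# well-run studies yield mostly false positives. Power, alpha, and the prior
# odds used below are illustrative assumptions.
def positive_predictive_value(prior_odds, power=0.8, alpha=0.05):
    """Share of 'significant' results that reflect a true effect."""
    true_positives = power * prior_odds
    false_positives = alpha  # rate at which false hypotheses still pass the test
    return true_positives / (true_positives + false_positives)

# If only 1 in 20 tested hypotheses is actually true (odds of 1 to 19)...
print(f"{positive_predictive_value(prior_odds=1 / 19):.0%} of positive findings are real")
# ...versus a field where half the tested hypotheses are true (odds of 1 to 1).
print(f"{positive_predictive_value(prior_odds=1.0):.0%} of positive findings are real")
```

Under these illustrative numbers, fewer than half of the "positive" findings in the low-plausibility scenario reflect a real effect, even though every individual study clears the 5 percent bar.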
This problem has been repeatedly confirmed by simply having different researchers redo the initial analyses using the same methodology; such attempts take the form of "replication studies." In an extensive effort to replicate previous studies in cancer research, a mere 11 percent (6 out of 53) reached conclusions similar to the original ones. Comparatively, replication surveys are scarce in economics; one report showed that 39 percent (7 out of 18) of studies failed replication. This area certainly requires further attention.
Choosing the Right Direction Is Difficult
To do research, you must base your assumptions on previous knowledge. Yet when previous knowledge can be flawed, how can one be sure of following a fruitful path?
During my work, I had to spend a considerable amount of time not only retrieving past discoveries related to my research, but also assessing their scientific quality. An assumption found in the scientific literature can easily be taken for granted—it has been proven, so it seems. But its proof may be flawed. I recall finding a handful of high-level studies that completely crushed previous conclusions and made me switch my research strategy. Ingenuity can help in taking advantage of scientific discrepancies, but could also end up adding even more “debris” to an overloaded scientific literature.
In extreme situations, the research may be so shoddy that a particular study presents manipulated results stemming from malevolent conduct (falsified numbers, figures, charts, or tables). Further investigation often mandates the retraction of such published papers. The number of retracted papers is increasing steadily, but they may be only the tip of the iceberg. This amalgam traps researchers in a never-ending struggle to pursue the truth.
Publish or Perish
Unless you work as a researcher in a private company, whose purpose is to compete and devise concrete solutions for its clients, you may find your work judged not by its quality but by the volume of its scientific output. The aphorism "publish or perish" refers to the pressure to publish research in renowned journals; its negative connotations resonate throughout academia and research facilities.
Journals compete to attain a higher scientific status. In this way, they can attract higher-quality papers and charge more. Unfortunately, the "scientific status" of a journal is routinely measured through scores reflecting the number of citations its published papers received in the past two to three years. Such an abstract number poorly reflects the quality of the published research – citations can be made even to unrelated papers, artificially increasing the scores. Publishing houses may take advantage of self-citations to boost their ratings.
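Such scores (the well-known journal impact factor is the typical example) work roughly like the sketch below: citations received this year to what the journal published in the preceding two years, divided by the number of citable items published in those years. The figures are invented for illustration.

```python
# A sketch of a two-year citation score: citations received this year to items
# the journal published in the previous two years, divided by the number of
# citable items it published in those two years. All numbers are invented.
citations_to_recent_items = 1200   # includes self-citations
citable_items_published = 400

score = citations_to_recent_items / citable_items_published
print(f"Citation score: {score:.1f}")  # a single number standing in for "quality"
```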
Similarly, an article bearing a pompous title or a high-sounding (probably false) result attracts citations and makes it easier to get published in a top journal. In extreme cases, journals have been found to not even check the paper or author before publishing it.
Freedom and Mental Health
A scientific career begins with pursuing a PhD. Although it may seem like a privilege, getting into a PhD has multiple downsides, and they all come down to the supervisor. There are accounts of PhD students being forced to do "quasi-slave" work, being bullied, or working overtime (over three quarters of respondents in one large-scale study reported working overtime). Later in one's career, academic research revolves around high expectations, job insecurity (research funding is highly dynamic), and a push for productivity – recurring themes throughout surveys.
Such a highly competitive work culture, combined with a narrow path to success, frequently leads to stress and anxiety. A large-scale study revealed that 36 percent of PhD students have sought help for anxiety or depression; the same problem was shared by 34 percent of academics worldwide. Even more researchers show probable signs of depression: an overwhelming 53 percent of UK research staff.
Conclusion
Despite its barriers and pitfalls, a scientific career can be fulfilling. Worldwide efforts are being made to create more precise standards for statistical analysis, improve journal ratings, and improve the doctoral experience. Through smart work, in the right place (surrounded by like-minded people) and at the right time (for the research to find its use), the best minds in the world make our lives easier and unravel what has been waiting to be revealed.
Yet, such right places and moments are rare; finding one’s place in the scientific world can be challenging. Anyone wanting to pursue such a career in scientific research should do their “research” first. [source]
Good information for any potential researcher. Something to keep in mind.