Science & Technology

What can Frankenstein teach us about bad science?

At its core, Frankenstein is a story about understanding that the ethical and practical limits of science are not the same. Just because you can does not mean you should. But revisiting the story in the nightmarish year we find ourselves in could uncover lessons we previously overlooked about how to stop bad science.

The world is in no short supply of conspiracy theorists. From governments advocating herd immunity as a viable strategy, despite evidence that the disease can reinfect people, to whatever the latest trendy way of blaming marginalised groups happens to be, unfounded myths are becoming increasingly normalised. The emergence of these theories in mainstream culture has made clear what makes such lies believable: a basis in actual science.

Frankenstein author Mary Shelley was initially inspired by overhearing two of her travelling companions poorly explain the work of Erasmus Darwin, grandfather of Charles Darwin, who studied the apparent emergence of life in rotting flesh. The theory, known at the time as ‘spontaneous generation’, held that living flesh could emerge from non-living matter. Although the theory was eventually disproven, the movement spurred great interest in the discovery of the role of electrical impulses in the functioning of animal bodies. These widely discussed and exaggerated theories made their way into Shelley’s horror classic, and into its later reinterpretations, as the mysterious method by which Frankenstein’s monster is brought to life.

Although the concepts at the basis of Frankenstein were deeply rooted in the science of the late Enlightenment, they had little basis in reality to support their extreme claims. With the benefit of hindsight, we now know that these researchers gained insight into interesting phenomena but failed to keep a level head throughout. What led scientists to extremes was the opportunity to benefit from radical claims without any need to verify them beyond an initial discovery.

Today, things are sadly not too different. Numerous scholars studying the development of science and technology have noticed a rise in papers claiming revolutionary discoveries, while papers verifying others’ findings are in decline. Journals such as Nature have been raising awareness for years of the shortage of published negative results, reporting a 22 per cent increase between 1990 and 2007 in papers presenting a positive conclusion. Setting aside the unlikely possibility that scientists magically got better at being right the first time around, this points to a push away from verifying research and towards presenting new and exciting topics more likely to attract funding.

Despite a greater understanding of the world around us, we still fall prey to the same flaws as the entertainers who shocked frog legs for crowds two centuries ago. We fail to reward thorough research, especially when it disproves ideas we take for granted. Instead, we privilege science that overpromises in order to stand out from the crowd. In a way, this is the natural consequence of an academic machine more competitive than it has ever been. The number of papers cited has more than doubled in the past decade and appears to be growing exponentially. Individual bad apples cannot be blamed for such drastic changes in culture; we must understand the structural root causes.

‘Publish or perish’ is, by this point, a well-documented part of academic culture. Researchers must constantly produce new and noteworthy publications to keep their positions as scholars. While this may seem reasonable, it is a relatively recent phenomenon: the earliest recorded use of the aphorism dates from the 1920s, and accelerating citation rates worldwide show a significant shift in how academics are incentivised to spend their time. If scientists are expected to have a eureka moment several times a year, few will have the time and resources to replicate other people’s findings.

It is easy to imagine that disinformation grows out of sheer denial of science, with fault lying only with those who reject empirical evidence, but this ignores the responsibilities of scientific institutions. More research is not necessarily better research, especially when bad actors actively spin whatever research benefits their claims. A scientific consensus is only worth acknowledging when it follows a scientific method, so failures to replicate or reproduce data could permanently damage the authority of experts. To ignore the structural flaws in contemporary science is to give up on the very ideas that made scientific advancement possible.

Modern interpretations of Frankenstein play heavily on the angry mob that eventually discovers the monster and burns him alive. This Halloween, I would like to focus on older interpretations of the story that centre on Victor Frankenstein: a scientist whose fault was not only hubris, but also his refusal to take responsibility for his creation. Junk science and the widespread rejection of experts is a monster brought to life by a distorted version of the scientific institutions we idealise. Nothing would be scarier than letting it die before understanding it and reforming it into something we can be proud of.

Illustration: Eve Miller

By Alice Spaccasassi

Science and technology editor at The Student.