In today’s increasingly interdisciplinary world of research, a field known as digital humanities has emerged. Simply put, it is the integration of computing technology into humanities research.
Yet digital humanities encompasses a long, and difficult to delineate, list of techniques. These include geospatial mapping, data visualisation, digital forensics, digital pedagogy, 3D modelling, and literary text analysis. In understanding the premise of this nascent field, however, it is helpful to focus on how it is used to analyse literary texts, an application that by no means encompasses the field but often serves as its synecdoche.
Franco Moretti, one of the pioneers of digital humanities, describes the field as a transformative force in literary studies. Rather than examining individual texts in isolation, it takes in the whole of literature at once, on the premise that only at that scale can literature be understood as a system. He writes: “A novel a day every day of the year would take a century or so … and then, a field this large cannot be understood by stitching together separate bits of knowledge about individual cases, because it is not a sum of individual cases: it is a collective system, that should be grasped as such, as a whole…”
“Digital humanities encompasses […] geospatial mapping, data visualisation, digital forensics, digital pedagogy, 3D modelling, and literary text analysis”
Taking a step back and examining corpora of literature involves scientific strategies such as statistics, computational modelling, and quantitative analysis. That is right: it seems odd, but the field of digital humanities is applying these hard-science methods to the study of literature. Its proponents argue that this approach can help “uncover previously unknown, invisible, or under-remarked-upon patterns in texts across broad swathes of time”. It gives literary scholars a means to do research that is testable, and to reach conclusions that are repeatable.
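To make the idea of quantitative text analysis concrete, here is a minimal sketch of the kind of measurement such studies start from: turning a text into a series of sentence lengths and summarising it statistically. The crude regex-based sentence splitter and the toy excerpt are illustrative assumptions, not the method of any particular study; real corpus work uses far more careful segmentation and full books, not snippets.

```python
import re
import statistics

def sentence_lengths(text):
    """Split text into rough sentences and count the words in each.
    A crude tokeniser for illustration only: it splits on ., !, ?
    and counts whitespace-separated tokens as words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

# Toy excerpt (opening of Moby-Dick) used purely as sample input.
sample = ("Call me Ishmael. Some years ago, never mind how long precisely, "
          "I thought I would sail about a little. It is a way I have of "
          "driving off the spleen.")

lengths = sentence_lengths(sample)
print(lengths)                    # → [3, 16, 11]
print(statistics.mean(lengths))   # → 10  (average sentence length in words)
```

A whole-corpus study would run this kind of count over thousands of novels, then look for patterns in how the numbers vary, which is where the statistical machinery comes in.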
An interesting recent study in this field received quite a bit of press. Researchers at Poland’s Institute of Nuclear Physics used statistics to study classic texts, examining the lengths of their sentences. They found that the variation in those lengths produced a pattern known in maths as a fractal. Fractals are self-similar repeating patterns that also, intriguingly, appear in nature, from the structure of minuscule snowflakes to that of galaxies. The comparison to the mathematical structure is not perfect, as true fractals repeat to infinity, whereas books are clearly finite.
In the genre of stream-of-consciousness writing, the researchers found an even more complex pattern: fractals of fractals, known as multifractals. While the jargon in this research can be understood on a surface level, really getting at the implications and deeper meaning of these findings is quite difficult for a lay audience.
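To give a flavour of how one tests a series of numbers for fractal-like variation, here is a minimal sketch of ordinary detrended fluctuation analysis (DFA), a standard technique in this area. The Polish team's actual method (multifractal DFA applied to sentence-length series from real books) is considerably more elaborate; the scale choices, the toy white-noise input, and the function name below are our own assumptions for illustration.

```python
import numpy as np

def dfa_exponent(series, scales=(4, 8, 16, 32)):
    """Detrended fluctuation analysis, a common way to probe a series
    (e.g. consecutive sentence lengths) for self-similar structure.
    Returns the scaling exponent alpha: near 0.5 suggests no long-range
    correlation; notably higher values suggest fractal-like persistence."""
    x = np.cumsum(series - np.mean(series))     # integrated profile
    flucts = []
    for s in scales:
        rms = []
        for i in range(len(x) // s):
            seg = x[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # The slope of log F(s) versus log s is the DFA exponent alpha.
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(0)
print(round(dfa_exponent(rng.standard_normal(1024)), 2))  # roughly 0.5 for white noise
```

Applied to a book, the input series would be the word count of sentence 1, sentence 2, and so on; a multifractal analysis then asks whether a single exponent suffices or a whole spectrum of exponents is needed, the latter being what the researchers reported for stream-of-consciousness texts.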
Beyond this intriguing finding about fractals, where does the field of digital humanities seem to be heading? One view is that the current state of the field is comparable to natural philosophy of the 1700s, when “experiments with microscopes, air pumps, and electrical machines were, at first, perceived as nothing more than parlor tricks before they were revealed as useful in what we would now call scientific experimentation”. In other words, it will take time for the field to clarify its tools, and it needs to do so before it can answer questions and provide a clearer sense of where it is headed.
Image: Quinn Dombrowski