Like Vianna, I am interested in the range of reading practices necessary to interpret the descriptions of the literary field that the algorithms produced. I was also surprised by how frequently the authors acknowledged the limitations of their own arguments. Underwood even admits that his methodology “has a significant weak spot” (93) and that “turning those models into fully satisfying stories could take several more decades” (109). In my experience, this kind of frankness is rare in literary studies, where the task of the essay is to produce a clear argument that appears to have few (if any) weaknesses. There seems to be a certain liberty in this scientific approach to literature, which takes description as its primary task. Specific arguments, however, still seem to rely on close readings of texts.
In acknowledging the limitations of digitization, Algee-Hewitt et al. and Underwood checked my unrealistic hope that distant reading and digitization would provide a more inclusive or expansive vision of the literary field. As Algee-Hewitt et al. note, “with digital technology the relationship between the three layers has changed; the corpus of a project can now easily be (almost) as large as the archive, while the archive itself is becoming—in modern times—(almost) as large as all of published literature” (2). For me, the notion of an expanding archive could be a remedy to the historical violences and silences (to borrow from Saidiya Hartman) of the archive, but as both critics observe, the archive remains a significant limitation on that kind of work. Despite the grand scale of studies like these, “libraries don’t buy books for representative samples; they want books they consider worth preserving; good books; good, according to the principles that are likely to be similar to those that lead to the formations of canons” (2). As Underwood notes, finding “some way to measure the effects of imbalances” when “sheer underrepresentation in the data set, by itself, is an eloquent fact” remains a challenge (94, 95).