The debate about computational literary studies (CLS) is stuck. Forceful arguments are repeatedly made as to why literary studies must now—or could never—involve quantification, statistics, and algorithms (not least in this journal), with little sense of either side convincing the other of their case. Surveying this debate over the past decade, I propose that what seems a complete divergence of opinion obscures a fundamental agreement: that computation is separate from literary phenomena. For the field’s critics, this distinction makes CLS an oxymoron; for its proponents, both ways of knowing can contribute to literary studies, and there is critical potential in working across the divide. Yet the perception of a divide remains, and it forecloses both effective critiques of reductive uses of computation (in literary studies and beyond) and productive engagements with computation’s constitutive effects (including for literary textuality and subjectivity). In charting this divide as it characterizes and limits apparently very different arguments, I connect claims about technology and subjectivity made in critiques and defenses of CLS to the separation of matter and meaning commonly referred to as Cartesian dualism. With both sides maintaining this arrangement, the debate about CLS is sealed off from technocultural inquiries in multiple fields (including literary studies) and from much of what matters in and as contemporary literary phenomena. The performative approaches to scientific and literary materiality that I use to elucidate problems with the existing debate also help to characterize a different, performative CLS: to explain the need for it and to make it legible where it already exists. Attuned to the coconstitution of computational methods and objects, with each other, and with literary subjectivities and textualities, this CLS builds on and extends existing critical paradigms to enable literary studies in the postprint era.
#COVID, Crisis, and the Search for Story in the Platform Age
Wattpad is a popular online writing website on which individuals write, upload, and comment on original stories. In 2020, the platform had more than a hundred million registered users. In this article, we use a mixture of close and distant reading methods to study how lay authors wrote about the COVID-19 global pandemic during its first year. We examine some of the formal and generic norms these authors used to narrativize this event; how such norms evolved over time as the pandemic dragged on; and how these online COVID stories differ from more established online genres, such as mystery and romance. Overall, this article explores how a large reading and writing public, leveraging the novel affordances of user-generated content, came to respond to a massive social crisis in real time, before they knew how it would end. This exploration allows us to accomplish two things. First, we are able to situate real-time pandemic stories against the retrospective narratives that we expect from literary fiction. How does writing crisis in real time and in a collaborative mode produce its own unique plot and narrative structures, and how do stories written in the immediate wake of the pandemic anticipate later mainstream cultural productions (fiction, film, television)? Second, we gain a broader understanding of how new genres of writing emerge within a cultural ecosystem increasingly defined by generic predictability and the recycling of familiar cultural intellectual property (IP), such as The Avengers and Harry Potter. COVID-19 dramatically disrupted global economic, political, and health systems. How did it also disrupt cultural systems?
Lines Left to Cross: Deglobalization and the Domestic Western in Bong Joon-ho’s Parasite
This article situates Bong Joon-ho’s Parasite (2019) within the historical logics of the Washington Consensus. In this broad context, we might think of the film’s much-heralded class critique as not quite so domestically contained as may initially appear in a film staged primarily in the confines of a single household. Instead, it opens onto a global political-economic framework, which it explores through a nested structure in which class dynamics are also mobilized to explore cold-war and trade-war logics, both of which are revealed to be radically interconnected with domestic concerns. Parasite thus reveals the inherent dissensus in the Washington Consensus, a dissensus that was always latent but eventually became more explicit. We might say more generally that stories of class difference take on a pointedly different tenor during periods of stagnation; the specific anxiety in Parasite, then, is not just over the moral fact of social inequality but also specifically about the material distribution of wealth in the face of diminishing resources. The fact of brutal competition emerges from a milieu that seems ostensibly defined by plenitude.
This article defends the theoretical centrality of Michel Foucault’s account of subjectivation for critical responses to neoliberalism against those Marxist critics who claim that his focus on the subject pushed the Left into the fraught terrain of identity politics. A key contention is that a theoretically sophisticated account of subjectivation is a prerequisite for any philosophically coherent and politically effective theorization of resistance against neoliberalism. Critical accounts of neoliberal subjectivation must be recognized as indispensable for understanding the conditions of possibility for class struggle and not as an alternative theoretical frame to it. The article also relates Foucault’s and Karl Marx’s thought on the question of power and its relationship to the subject. It presents a reading of Foucault as a post-Marxist who modified, rejuvenated, and extended Marx’s views on power and subjectivation. While Foucault developed his understanding of subjectivation as a critical response to the problems he identified in the French Marxist accounts of his time—particularly in the work of Louis Althusser—his appropriation of Marx should be recognized as decisive for the development of his account of productive power and the concomitant understanding of the subject.
Kant’s essay on the question of literary piracy has so far been read as a foundational text in the history of literary property. When Kant refers to the book as a “mute instrument,” scholars of intellectual property already know how to interpret that formulation because they presume the distinction that the contemporary jurisprudence of intellectual property makes between matter and form and its concomitant assumption that print is just an inert, nonagentive medium. In fact, Kant begins his analysis of unauthorized publication not with the question What is an author? but with the question What is publication? His insight was that unauthorized publication revealed a structural feature of publication in general. The effect of the interposition of print, or the fact that speech begins in and with print, is that the author is structurally alienated from his or her speech. Kant was not an Enlightenment media theorist, but he recognized that speech had to be mediated and therefore delivered to the reader and that it was the medium itself that determined the conditions under which authors and readers could exist and be present to one another and under which speech could become authorship. In order to create a language channel for public reason, Kant had to take a detour through the legal fiction of agency. Speech might have been an action that could have its existence “only in a person,” but this supposedly innate bond was made by a legal fiction of representation, according to which one person could truly speak “as if” they were another. Kant’s answer to the question What is a book? developed a law that was not so much a law of literary property as a law that sought to suspend the alienating effects of print so as to restore the Enlightenment ideal of communication.
In 1890, the famous Jena Glass Works of Carl Zeiss released the Anastigmat photographic lens to great fanfare. The nearly faultless realism it generated seemed to conclude a chapter in optical technology that had progressed in a predetermined manner since photography’s origins. But why exactly had Zeiss developed its expensive mechanism, and what drove photographers to buy it? This article proposes that the consistent focus and varied depth of field that the Anastigmat provided were not in and of themselves the desired goals of the new corrected lens, but that they were instead visible signals of a pictorial model that makers and consumers had been circling since the public introduction of photography in 1839. The goal was a strict verisimilitude that remained stubbornly external to the medium, an illusionistic standard that had largely been mediated by painting and was now apparently possible in photography as well. But this history of pictorial perfection and the Anastigmat was not inevitable. Other lenses developed around the same time answered to dramatically different technological and aesthetic imperatives. They tell an alternative story of photography’s identity that is less tethered to mimetic fidelity and the idealized human vision with which photography was increasingly associated.