The history of modern aesthetic thought is usually traced to Immanuel Kant and his Critique of the Power of Judgment, with an obligatory nod to Alexander Gottlieb Baumgarten, who had first used the term “aesthetics” in 1735 to identify judgments of taste. Kant’s place in modern aesthetic thought is so secure that it commands acknowledgment: even writers who oppose his aesthetics root and branch feel the need to frame their work as a response to it. Bentham, by contrast, has scarcely figured in discussions of aesthetics, in spite of his avowed interest in measuring actions and objects in terms of their capacity to produce gains in pleasure and losses of it.
Combining statistics, disciplinary knowledge, and common sense, this essay works at the empirical level to isolate a series of technical problems, logical fallacies, and conceptual flaws in an increasingly popular methodology in literary studies variously known as cultural analytics, literary data mining, quantitative formalism, literary text mining, computational textual analysis, computational criticism, algorithmic literary studies, social computing for literary studies, and computational literary studies. While machine learning, pattern mining, neural networks, and much simpler statistical tools certainly have their uses for textual analysis, their usefulness stops with literature and literary studies. The essay gives overviews of a handful of computational literary studies papers and discusses these examples alongside text mining’s established uses and applications and the situations in which these tools would actually be warranted. The nature of my critique is very simple. The problem with computational literary analysis is that the insights it produces are either robust and obvious or not obvious and not robust, a situation not easily overcome given the nature of literary data and the nature of statistical inquiry, despite appeals to the “explorative,” the “speculative,” or the nascency of the subfield. I explain what it is about the nature of the data and the statistical tools that leads to such outcomes and why, in computational literary studies, there is a fundamental mismatch between the statistical tools that are used and the objects to which they are applied.
Everyone knows that “practical letters” as opposed to “fine arts” or literature shaped “the earliest phase of [American] national life.” That utile focus prevailed from the first years of colonial settlement to the founding years of the republic and “perhaps,” as Constance Rourke once surmised, even into “later phases” of American history as well. Indeed, given the length and complexity of colonial history and culture, it is difficult to think otherwise. The colonial feeding source of what would become “classic American literature” and a library of America was a corpus of functional writings firmly oriented to the world of social, religious, and economic purpose. In such a context one comes to understand how artfulness is not the prerogative of belles lettres or imaginative literature. I say “how” because the perspective entails notable consequences for critical and interpretive method. Engaging with practical letters shifts the theoretical framework for assessing cultural value from aesthetics to axiology. Works like Cotton Mather’s Magnalia Christi Americana, Thomas Jefferson’s Notes on the State of Virginia, Indian treaties, or the Bay Psalm Book are organized according to complex intentions designed toward various public purposes. Like Greek drama (as Aristotle showed), these works are artful to public ends. Unlike Greek drama, however, they have an agency and field of action that extend beyond their immediate ethos and textual address. They engage contextual fields that are diverse and conflicted. Because that is the explicit focus of these American practical textualities, to understand what they mean requires understanding what they set out to do in saying what they say.
The public discourse about the state of the planet is currently in a paradoxical situation: on the one hand, everyone involved in the politics of climate accepts the idea that Earth behaves as a regulated system that has been dangerously pushed by human action out of its normal conditions of operation; on the other hand, the hypothesis that Earth is indeed a self-regulating system remains highly controversial—and most people do not connect the idea of Earth regulation with Lovelock’s and Margulis’s “discovery” of Gaia. Thus, the common horizon of political action and moral commitment—Earth is a system put out of whack that should be brought back inside some form of order through the regulation of human activity—remains a local and disputed intellectual and scientific idea.
The Swiss pastor Johann Caspar Lavater promoted the discipline of physiognomics in the 1770s as a scientific method to gain a better understanding of humankind. He considered the case of Socrates the physiognomic scandal: Why did this philosopher, the wisest and noblest of men, look like a satyr and thus subhuman? Today not many would consider physiognomics a scientific approach; still, what Lavater considered a scandal remains a puzzle, even though his question should be asked in slightly different terms. The physiognomy of Socrates—as both described in Plato’s and Xenophon’s Symposia and depicted in his sculptured portraits—is an artifact, not a product of nature; therefore, the pertinent question is not why Socrates looked like a satyr but rather why he was made to look like one. From this perspective further questions arise: who made this choice (because it must have been a deliberate choice)? Under what circumstances and with what purpose? These questions are precisely what we will try to answer in this paper.
This essay offers a genealogy of the media concept in the work of Foucault that focuses on his adoption and development of the language of the dispositif in his studies of modern systems of government. My focus is on Foucault’s development of this language, but my interest extends beyond a scholium on Foucaultian terminology. My larger concern is how we might develop an account of media that looks to their dispositional powers. Dispositional powers are those potential powers of distributive arrangement of peoples, spaces, and times that may be available in the operational logics of technical objects but that do not determine how and why they function as they do at any given point in time.
Protocols are strategies designed to anticipate and manage emergent contingencies; they originate in the key transitional period in the institutional literacy of post-Roman Europe, the twelfth century. This essay aims to account for a crucial but rarely discussed attribute of protocols: that they contain within them processes of critical self-historicization that are fundamental to their basic authorizing procedures. The primary obstacle to, and also the primary motive for, the analysis of protocols is that any comment on the history of a protocol must either defend or critique its current configuration. Protocols are techniques, latent in the nature of things, and so their authority is constantly expiring. This means that when one criticizes protocols relentlessly and even rewrites them drastically, one will not subvert their original authority but will instead serve as a defender of their principles and a mechanism of their persistence. This fact about protocols is enormously important for their analysis and for thinking about the ways in which protocols have shaped the evolution of societies and cultures in the past and in the present.
Critical studies has come to sing a chorus of collective disavowal of the computer’s visuality. Nicholas Mirzoeff writes, for instance, that computers are not “inherently visual tools,” and Jacob Gaboury has made the case recently even more emphatically: “The computer is not a visual medium.” The reasons for these statements seem relatively straightforward when taking into account the authors’ subsequent explanations. Mirzoeff goes on to say: “The machines process data using a binary system of ones and zeros, while the software makes the results comprehensible to a human user.” Gaboury refines his point by arguing that the computer is “primarily mathematical, or perhaps electrical, but it is not in the first instance concerned with questions of vision or image.” Indeed, given these explanations, there would appear to be no surer illustration of W. J. T. Mitchell’s argument that “there are no visual media,” that all media are instead “‘mixed media,’” comprising multiple sensory modalities, than computer hardware, those rarely seen guts of electronic architecture, the ground-level materiality that undergirds the vibrant colors and sleek displays of the interface.
Care has always been there, yet somehow it has remained invisible. This is the founding lament of the sociology of care. Its mission as a scientific endeavor is to dedicate more attention to a critical infrastructure of social reproduction that needs to be rescued from the corrosive damage of systematic neglect. Care needs care is the mantra of a sociology of care that fashions itself as a progressive project of devotion, conversion, and protection. As Annemarie Mol and her colleagues note, “If care practices are not carefully attended to, there is a risk that they will be eroded.” In this appeal to care about care with care, the object has become the method. But what are the stakes beyond devotion, conversion, and protection?