Critical Inquiry

Summer 2020


Volume 46 Issue 4
    • 719 Grace Lavery
    • I argue that, in George Eliot’s early, definitive statement of realism in the seventeenth chapter of Adam Bede, realism will only have been accomplished when readers have learned not merely to respect, but to desire, the dysphorically sexed bodies of others. In this sense, I argue, realism shares a central tenet with two of the more controversial and, frankly, neglected dimensions of Freudian thinking—which Sigmund Freud himself took to be indispensable components in the treatment of neurotics—the castration complex and penis envy. Though post-Freudian analysts have frequently found these dimensions of libidinal embodiment distasteful, to trans people they are central and in certain respects definitive aspects of social participation. Hence, while trans studies tends to eschew psychoanalysis altogether, and the only psychoanalysts to write about trans people tend to be Lacanians for whom Freud’s therapeutic ambitions were frequently mystified, reappraising the realist dimension of psychoanalytic practice can reveal the trans logic at the core of both Freud’s project and Eliot’s.

    • 745 Slavoj Žižek
    • When the threat posed by the digitalization of our lives is debated in our media, the focus is usually on the new phase of capitalism called surveillance capitalism: a total digital control over our lives exerted by state agencies and private corporations. However, important as this surveillance capitalism is, it is not yet the true game changer; there is a much greater potential for new forms of domination in the prospect of a direct brain-machine interface (the “wired brain”). First, when our brain is connected to digital machines, we can cause things to happen in reality just by thinking about them; then, when my brain is directly connected to another brain, another individual can directly share my experience. Extrapolated to its extreme, the wired brain opens up the prospect of what Ray Kurzweil called Singularity, the divine-like global space of shared awareness. Whatever the (dubious, for the time being) scientific status of this idea, it is clear that its realization will affect the basic features of humans as thinking/speaking beings. The eventual rise of Singularity will be apocalyptic in the complex meaning of the term: it will imply the encounter with a truth hidden in our ordinary human existence—that is, the entrance into a new posthuman dimension, which cannot but be experienced as catastrophic, as the end of our world. But will we still be here to experience our immersion into Singularity in any human sense of the term?

    • 764 Christopher Grobe
    • Throughout the 2016 US presidential election, pundits repeatedly described Donald Trump as a performance artist and his campaign as performance art. Meanwhile, his alt-right supporters were mounting performance art shows, debating the meaning of Marina Abramović’s work, and developing their own theories of political performance. For experts in performance theory, such punditry and provocation are like images in a funhouse mirror. It’s hard to make sense of such bizarre, distorted images—let alone to recognize ourselves in them. This article insists that, nonetheless, we should try. Trump and his movement pose special challenges to American political culture—and also to academic performance theory. His rise has revealed the limitations of a politics (and performance theory) based on norms and their transgression. It has also given the lie to politicians’ belief that they (and only they) are not performing their politics. The challenge now—for academics, activists, citizens, and journalists alike—is to articulate how performance works and how it provides models of cultural power.

    • 806 James Evans and Adrian Johns
    • Introducing this issue’s triptych on algorithms and culture, this article argues that prevailing modes of analysis focused on the prospects for algorithms “taking over” are no longer useful. It advocates a new conceptual vocabulary, one that recognizes that algorithmic and cultural reasoning processes are already enmeshed with each other. The introduction suggests the need for an enterprise of algorithmic epistemology attuned to the fine structure of the ways in which culture and code have interacted in the past and continue to interact today.

    • 813 Jeffrey M. Binder
    • Scholars in both digital humanities and media studies have noted an apparent disconnect between computation and the interpretive methods of the humanities. Alan Liu has argued that literary scholars employing digital methods encounter a “meaning problem” due to the difficulty of reconciling algorithmic methods with interpretive ones. Conversely, the media scholar Friedrich Kittler has questioned the adequacy of hermeneutics as a means of studying computers. This paper argues that this disconnect results from a set of contingent decisions made in both humanistic and mathematical disciplines in the first half of the nineteenth century that delineated, with implications that continue to resonate in the present day, which aspects of human activity would come to be formalized in algorithms and which would not. I begin with a discussion of Nicolas de Condorcet, who attempted, at the height of the 1789 revolution, to turn algebra into a universal language; his work, I argue, exemplifies the form of algorithmic thinking that existed before the Romantic turn. Next, I discuss William Wordsworth’s arguments about the relationship of poetry and science. While Wordsworth is sometimes viewed as a critic of science, I argue that his polemic is specifically targeted at highly politicized projects like Condorcet’s that sought to supplant existing modes of thought with scientific rationality. Finally, I demonstrate the importance of Romantic thought for George Boole, creator of the logic system that would eventually form the basis of digital electronics. The reason Boole was able to succeed where Condorcet had failed, I argue, was that Romantic notions of culture enabled him to reconcile a mechanical view of mathematical reasoning with an organic view of the development of meaning—a dichotomy that remains a key assumption of computer interfaces in the twenty-first century.

    • 835 Michael D. Gordin
    • This paper takes three distinct passes through the history of machine translation (MT) in the Soviet Union, which is typically understood as concentrated in a single boom period that lasted from roughly 1955 to 1965. In both the Soviet Union and the United States—in explicit competition with each other—there was a tremendous wave of investment in adapting computers to nonnumerical tasks, a wave that has only recently drawn the attention of historians, who have focused primarily on the American example. The Soviet Union, however, quickly came to assume prominence in the field in terms of both the scale and the diversity of its approaches. At the same moment, Soviet linguists excavated a forgotten precursor, P. P. Smirnov-Troianskii, who had designed a translating machine in the early 1930s. Juxtaposing the multiple contexts in which Smirnov-Troianskii’s machine was reconceptualized and reappropriated for various ends, the article demonstrates how fundamentally embodied the algorithm was in the early days of MT and how the proliferation of narratives about Soviet MT exposes fault lines in contemporary historiography.

    • 867 Daniel Navon
    • Human genetics has uncovered a vast trove of medically relevant changes in our genomes—variants and mutations that are both far more common and far more difficult to interpret than experts anticipated. What will this mean as we move into an era of genomic or “precision” medicine? For over a century the overriding goal of human genetics was to explain the inheritance of traits and conditions that hailed from disciplines like medicine, psychology, and criminology. Yet today, genomics research is calling prevailing categories of human illness and difference into question. Genetic mutations are increasingly used to reclassify disease, disability, and developmental difference—a process I call genomic designation. In recent decades, this has led to the formation of support groups, foundations, specialist clinics, and dedicated literatures for genomically designated conditions like the XXX, NGLY1, Fragile X, and 1p36 Deletion Syndromes. Drawing heavily on the case of 22q11.2 Deletion Syndrome, this paper explains how a genetic test result can radically alter the way a patient is understood and treated. Finding a 22q11.2 microdeletion can lead patients, parents, and caregivers to recast other diagnoses as mere symptoms of an underlying genetic disorder. A 22q11.2DS diagnosis can also redirect medical judgment and practice toward evaluations and even interventions that were not clinically indicated. Finally, a genomically designated diagnosis like 22q11.2DS can realign the very boundary between the normal and the pathological, leading experts and caregivers to reframe clinically nonsignificant findings like an IQ of eighty-seven as symptoms of a genetic disorder. In this way, the growing avalanche of positive genetic test results is disrupting classification and practice in a wide range of disciplines, bringing new populations under the gaze of medical genetics in the process. I conclude by discussing a few salient implications for bioethics and the social studies of science and medicine.