When the threat posed by the digitalization of our lives is debated in our media, the focus is usually on the new phase of capitalism called “surveillance capitalism”: a total digital control over our lives exerted by state agencies and private corporations. However, important as this “surveillance capitalism” is, it is not yet the true game changer; there is a much greater potential for new forms of domination in the prospect of a direct brain-machine interface (the “wired brain”). First, when our brains are connected to digital machines, we can cause things to happen in reality just by thinking about them; then, when my brain is directly connected to another brain, another individual can directly share my experience. Extrapolated to its extreme, the wired brain opens up the prospect of what Ray Kurzweil called Singularity, the divine-like global space of shared awareness. . . . Whatever the (dubious, for the time being) scientific status of this idea, it is clear that its realization will affect the basic features of humans as thinking/speaking beings. The eventual rise of Singularity will be apocalyptic in the complex meaning of the term: it will imply an encounter with a truth hidden in our ordinary human existence, an entrance into a new posthuman dimension, which cannot but be experienced as catastrophic, as the end of our world. But will we still be here to experience our immersion into Singularity in any human sense of the term?
Whatever else literary realism has in common with psychoanalysis, they share at least this: they are too often assessed purely on the basis of their depictions of objects, and too rarely understood as practices of self-care. Within realism, the objects that detain readers consist of individual characters or character types, historical situations or themes, and poignant little details. Within psychoanalysis, they can include luridly contrived pathologies, theories of psychological development, and vivid symptoms. Yet for their creators, realism and psychoanalysis were both also techniques to be evaluated not just on the basis of their elegance, but on the basis of their efficacy. George Eliot and Sigmund Freud both claimed for their writing a therapeutic power that could help readers and patients lead happier and more fulfilling lives. These descriptive and normative goals sometimes conflicted.
Whether you watch CNN or InfoWars, read Slate or Breitbart News, the Washington Post or the Washington Times, you’ve likely seen two terms side by side a lot in recent years: “politics” and “performance art.” As a scholar of performance, I wondered why; in my confusion, I turned to Raymond Williams’s Keywords. In moments of historical crisis (real or perceived), words have a way of jumping their tracks. For Williams, culture was one such word: it went rogue in the mid-twentieth century. And no sooner had he focused a bright light on culture than other words started casting strange shadows: not just culture but also class, art, industry, and democracy. “I could feel these five words as a kind of structure,” Williams recalled. “The relations between them became more complex the more I considered them.” I think I understand now what he means. As someone trying to do performance theory today, I’ve found myself reconsidering terms I know well. Performance art was the first to look strange, but others followed: performance, liminality, and more.
There can be few people in the developed world who remain unaware of the central role that algorithms play in all our lives. Machine learning systems and other algorithmic devices are inescapable elements in tools we use routinely to get through each day. We navigate cities by appealing to route-mapping apps, rate our dietetic performances by reference to fitness wearables, and orientate our emotional lives around the recommendations of social media. When The New York Review of Books warns that encoded protocols are “taking over,” the tense, at least, is surely wrong: algorithms have already taken over. And for good reason. Such software seems to predict human preferences better than we ourselves can, and to offer us the fastest, most efficient ways to realize them. It guides us efficiently through cities and introduces us in seconds to information and arguments that we might otherwise spend weeks corralling. Algorithms like these command a credibility that extends to devotion, and even occasionally to a kind of implicit faith, as witness the unfortunate drivers who have found themselves stranded in the desert, submerged in a lake, or wedged on a narrow bridge after trusting their mapping software more than their own eyes. The traditional media have enjoyed making such cases into comedic cautionary tales. But, as Al Capone put it in The Untouchables, “you laugh because it’s funny, and you laugh because it’s true.” The fact is that all of us who depend on smartphones are more vulnerable to humiliation at their hands than we would like to admit.
When a genetic test uncovers a mutation, it can radically alter how a person is understood and treated. The reason, it turns out, has very little to do with the vision of molecularly targeted miracle therapies that helped animate the Human Genome Project. Despite a handful of exciting developments over the last couple of years in fields like genomic oncology and rare-disease research, the pharmaceutical vision of precision medicine remains mostly promissory.
What comes to mind when you think of the word “algorithm”? This is not a rhetorical question: I am asking you to actually picture the word and what comes along with it. Judging by the assumptions that populate the mushrooming scholarship on the history and sociology of algorithms, as well as the buzzwords of the blogosphere, “algorithms” seem to have two dominant characteristics. First, they are intimately tied to machines and divorced from people. To be sure, they are coded by people and (in many cases) act on data about people, but the domain of the algorithm is in silicon. The second assumption is related: the algorithm lives in Silicon Valley, or in one of the many other hypertechnological hotspots of the Information Age. What the algorithm is not is embodied in a person, and to the extent that it circulates around Eurasia, it is most definitely not Soviet. This essay explores a slice of an alternative history of an algorithmic problem, Machine Translation (MT), in order to show the multiple recastings of a specific set of algorithms in the Soviet Union. Since what it meant to be Soviet and what it meant to machine translate are rather unstable signifiers, the accounts here will remain unsettled and non-monolithic.
In the past decade, with the rise of social media, artificial intelligence, and new forms of automation, the word algorithm has rapidly ascended from an obscure technical term to a media buzzword. But algorithms themselves have existed since antiquity. As the OED puts it, an algorithm is “a procedure or set of rules used in calculation and problem-solving”; a familiar example is long division. Commentators over the centuries were at times awed and at times disturbed by the fact that one could apparently produce real knowledge about the world by arranging symbols on paper or slate according to mechanical rules. These changing attitudes toward algorithms were a bellwether for the broader epistemological shifts that intellectual historians, such as Michel Foucault in The Order of Things, have detected in the histories of scientific discourses.
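The long-division example makes the point concrete: the whole procedure is a fixed set of rules that can be followed mechanically, digit by digit, without any understanding of why it works. A minimal sketch of the schoolbook procedure in Python (the function name and structure are illustrative, not drawn from the source):

```python
def long_division(dividend: int, divisor: int) -> tuple[str, int]:
    """Schoolbook long division: bring down one digit of the dividend at a
    time, record the quotient digit, and carry the remainder forward."""
    if dividend < 0 or divisor <= 0:
        raise ValueError("expects a non-negative dividend and a positive divisor")
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):                   # work left to right
        remainder = remainder * 10 + int(digit)   # "bring down" the next digit
        quotient_digits.append(str(remainder // divisor))
        remainder = remainder % divisor           # carry the remainder forward
    quotient = "".join(quotient_digits).lstrip("0") or "0"
    return quotient, remainder

# For example, 1234 divided by 7:
# long_division(1234, 7) returns ("176", 2), i.e. 7 * 176 + 2 = 1234.
```

Every step is determined by the symbols on the page and the rule to apply next; nothing depends on the judgment of whoever (or whatever) executes it, which is precisely what made such procedures both awe-inspiring and disturbing to the commentators described above.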