Digital and analog: What do these terms mean today? The use and meaning of such terms change through time. The analog, in particular, seems to go through various phases of popularity and disuse, its appeal pegged most frequently to nostalgic longings for nontechnical or romantic modes of art and culture. The definition of the digital vacillates as well, its precise meaning often eclipsed by a kind of fever-pitched industrial bonanza around the latest technologies and the latest commercial ventures. One common response to the question of the digital is to make reference to things like Twitter, PlayStation, or computers in general. Here one might be correct, but only coincidentally, for the basic order of digitality (the digitality of digitality) has not yet been demonstrated through mere denotation. And the second question, the question of the analog, is harder still, with responses often also consisting of mere denotations of things: sound waves, the phonograph needle, magnetic tape, a sundial. At least denotation itself is analogical. This article aims to define the analog explicitly and argues, perhaps counterintuitively, that the golden age of analog thinking was not a few decades past, prior to the widespread adoption of the computer. In fact, the fields of structuralism and poststructuralism that emerged in the middle to late twentieth century, concurrent with the deployment of the digital computer, were characteristically digital, with their focus on the symbolic order and logical economies. If anything, the golden age of the analog is happening today, all around us, as evidenced by the proliferation of characteristically analog concerns: sensation, materiality, experience, affect, ethics, and aesthetics.
The article investigates the mathematical and philosophical backdrop of the digital ocean as contemporary model, moving from the digitalized ocean of Georg Cantor’s set theory to that of Alan Turing’s computation theory. It examines in Cantor what is arguably the most rigorous historical attempt to think the structural essence of the continuum, in order to clarify what disappears from the computational paradigm once Turing begins to advocate for the structural irrelevance of this ancient ground.
Data as Symbolic Form: Datafication and the Imaginary Media of W. E. B. Du Bois
This article explores datafication as a speculative discourse that fundamentally and instrumentally misunderstands data, treating it not as a representational system but as an ontology. Taking a semiotic and media-archaeological approach, the analysis understands datafication as an imaginary media system and looks to supplementary discourses in data visualization and big data to clarify and expand an understanding of datafication as a prescriptive and speculative idea. This critique is sharpened through a detailed study of the early visualizations and unpublished fictions of W. E. B. Du Bois, whose imagined technology of the “megascope” provides a remarkable blueprint for contemporary discourses in data visualization and data science. Du Bois’s liberatory perspective is contrasted with a more paranoid example from the self-documented delusional system of Daniel Paul Schreber, who offers a much different vision of a totalizing media system in the form of an Aufschreibesystem (writing-down system), which seems to presage another side of datafication as a surveillance system. Taken together, datafication, the megascope, and the Aufschreibesystem work as three speculative media that imagine themselves as different instances or visions of a totalizing media system, a notion that has become especially significant in the technological, cultural, and economic systems of the twenty-first century.
Artificial Antisemitism: Critical Theory in the Age of Datafication
This article is a critical genealogy of Tay, an artificial-intelligence chatbot that Microsoft released on Twitter in 2016, which was quickly hijacked by internet trolls to reproduce racist, misogynist, and antisemitic language. Tay’s repetition and production of hate speech call for an approach that draws on both media and cultural theory, in particular the Frankfurt School’s dialectical analyses of language and ideology. Revisiting the Frankfurt School in the age of algorithmic reason shows that, contrary to views foundational to computing, a neural-network chatbot like Tay does not sidestep meaning but rather carries and alters it, with unforeseen social and political consequences. A return to the work of Max Horkheimer and Theodor W. Adorno thus locates ideology in the digital world at the nexus of language’s ability to mean, language and meaning’s susceptibility to computation, and the design of a machine to compute both. Coming to critical terms with the antisemitism produced in Tay’s human-computer synthesis requires, as this article contends, addressing the uncanny embodiment and reflection of thought that is digital computation.
What has philosophy become after computation? Critical positions about what counts as intelligence, reason, and thinking have addressed this question by reenvisioning and pushing debates about the modern question of technology toward new radical visions. Artificial intelligence, it is argued, is replacing transcendental metaphysics with aggregates of data that yield predictive modes of decision-making, and conceptual reflection with probabilities. This article discusses two main positions: on the one hand, the fear that philosophy has been replaced by cybernetic metaphysics; on the other, the claim that only the coconstitution of philosophy and automation challenges modern transcendental reason and colonial capital. As Donna Haraway pointed out, cybernetic circuits of communication have constructed counterfactual ontologies against the master narrative of the human. The article addresses recent attempts at redeveloping a critique of natural philosophy that explains cybernetic metaphysics beyond the universal master narratives of mechanicism and vitalism, and it suggests that the recursive system of nature and technology also needs to account for the problem of philosophical decision, explored here in terms of mediatic thinking, instrumentality, and automation. Following insights from the movie Get Out (2017), the article argues that the model of philosophical decision relies on a servomechanic understanding of the medium of thought, based on the colonial abstraction of value in the prosthetic extension of the slave-machine. It explores the nonphilosophical envisioning of automation in terms of a counterfactual theorization of the negative negation of the medium and argues for a nonoptical darkness entering the space of thinking. Philosophy and automation are neither coupled nor set in opposition as if in a ceaseless mirroring. Instead, both must transform their self-determining axiomatics in order to host the heretic activities of machine propositions.
The Future Will Not Be Calculated: Neural Nets, Neoliberalism, and Reactionary Politics
This article traces the relationship between neoliberal thought and neural networks through the work of Friedrich Hayek, Donald O. Hebb, and Frank Rosenblatt. For all three, networked systems could accomplish acts of evolution, change, and learning impossible for individual neurons or subjects: minds, machines, and economies could therefore all autonomously evolve and adapt without government. These three figures, I argue, were also symptoms of a broader reconceptualization of reason, decision-making, and “freedom” in relation to the state and technology that occurred from the 1950s through the 1970s. I also argue that this genealogy of decision-making underpins contemporary relations between machine learning, reactionary politics, and neoliberal economics.
From Work to Proof of Work: Meaning and Value after Blockchain
The price of Bitcoin is once more soaring. From early October 2020 to early January 2021, the price of a single Bitcoin token went from roughly $10,000 to around $40,000, reinspiring the hopes of the crypto-faithful in the inevitability of a future beyond centralized banking and leaving the rest to dread the jargon of computational libertarianism. The speculative betting driving this recent price action, however, belies a more rudimentary and overlooked shift in the digital economy signaled by cryptocurrencies and Bitcoin in particular. Unlike an earlier industrial logic that sought to reduce heat loss and improve efficiency to maximize surplus value, Bitcoin’s proof-of-work system shifts the basis of value production from efficiency to inefficiency. Moreover, it does so by using a cryptographic algorithm whose purpose is to destroy the meaning of its inputs. Through an exploration of Bitcoin’s proof-of-work technics and its inversion of traditional models of value extraction, the article argues that Bitcoin reveals a profound transformation in the nature of surplus represented by computational capitalism.
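The proof-of-work technics at issue here can be sketched in a few lines of Python. This is a simplified illustration, not Bitcoin’s actual protocol (which double-hashes a structured block header against a floating-point-encoded difficulty target): a miner hashes arbitrary data together with a trial nonce, over and over, until the digest happens to fall below a threshold. The cryptographic hash preserves nothing of the input’s meaning, and the only path to a valid result is brute, wasteful repetition, which is precisely the inefficiency that the abstract identifies as the new basis of value.

```python
import hashlib

def proof_of_work(data: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(data + nonce) has at least
    `difficulty_bits` leading zero bits. The search is pure trial
    and error: the hash output bears no recoverable relation to
    its input, so no shortcut exists."""
    target = 1 << (256 - difficulty_bits)  # valid digests lie below this
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # the "work" is every failed guess before this one
        nonce += 1

# With 16 difficulty bits, roughly 65,000 hashes are expended on average
# to find one valid nonce; Bitcoin's real difficulty is astronomically higher.
winning_nonce = proof_of_work(b"block header", 16)
```

Verifying the result takes a single hash, while finding it takes tens of thousands: that asymmetry between cheap verification and expensive discovery is what makes the expended (and otherwise meaningless) computation function as proof.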
This article argues that the algorithms known as neural nets underlie a new form of artificial intelligence that we call indexical AI. In contrast to the once-dominant symbolic AI, large-scale learning systems have become a semiotic infrastructure underlying global capitalism. Their achievements are based on a digital version of the indexical sign-function, which points rather than describes. As these algorithms spread to parse the ever-growing volumes of data on platforms, it becomes harder to remain skeptical of their results. We call social faith in these systems the naive iconic interpretation of AI and position their indexical function between heuristic symbol use and real intelligence, opening the black box to reveal semiotic function.