It is no longer enough to say that data is big. Data is now in a state of surplus. As we have progressed from the megabyte to the terabyte, the petabyte, and now, in 2022, arguably to the zettabyte era, all within the span of a mere two decades, we have witnessed a quantitative increase manifesting itself as a qualitative change. In a well-known 2008 provocation, the editor in chief of Wired, Chris Anderson, announced that the ability to produce and analyze enormous data sets using AI was rendering the bedrock of human knowledge systems, the scientific method itself, obsolete. For the first time in history, correlation began to supersede causation, and science advanced “without coherent models, unified theories, or really any mechanistic explanation at all.” It supposedly spelled the end of theory. But theory has faked its own death many times.
What has philosophy become after computation? Critical positions on what counts as intelligence, reason, and thinking have addressed this question by reenvisioning the modern question of technology and pushing its debates towards new radical visions. Artificial intelligence, it is argued, is replacing transcendental metaphysics with aggregates of data that yield predictive modes of decision-making, substituting probabilities for conceptual reflection. This article discusses two main positions. On the one hand, it is feared that philosophy has been replaced by a cybernetic metaphysics; on the other, it is claimed that only the co-constitution of philosophy and automation can challenge modern transcendental reason and colonial capital. As Donna Haraway pointed out, cybernetic circuits of communication have constructed counterfactual ontologies against the master narrative of the human. This article addresses recent attempts at redeveloping a critique of natural philosophy that explains cybernetic metaphysics beyond the universal master narratives of mechanicism and vitalism. It suggests that the recursive system of nature and technology also needs to account for the problem of philosophical decision, which will be explored in terms of mediatic thinking, instrumentality, and automation. Following insights from the film Get Out (2017), the article argues that the model of philosophical decision relies on a servo-mechanic understanding of the medium of thought, based on the colonial abstraction of value in the prosthetic extension of the slave-machine. It explores the nonphilosophical envisioning of automation in terms of a counterfactual theorisation of the negative negation of the medium and argues for a nonoptical darkness entering the space of thinking. Philosophy and automation are neither coupled nor set in opposition as if in a ceaseless mirroring.
Instead, both philosophy and automation must transform their self-determining axiomatics in order to host the heretic activities of machine propositions.
This article argues that the algorithms known as neural nets underlie a new form of artificial intelligence we call "indexical AI." In contrast to the once-dominant symbolic AI, large-scale learning systems have become a semiotic infrastructure underlying global capitalism. Their achievements are based on a digital version of the sign-function index, which points rather than describes. As these algorithms spread to parse the ever-larger data volumes on platforms, it becomes harder to remain skeptical of their results. We call social faith in these systems the "naive iconic interpretation" of AI and position their indexical function between heuristic symbol use and real intelligence, opening the "black box" to reveal semiotic function.
The price of Bitcoin is once more soaring. From early October 2020 to early January 2021, the price of a single Bitcoin token went from roughly $10,000 to over $40,000, reinspiring the hopes of the crypto-faithful in the inevitability of a future beyond centralized banking, and leaving the rest to dread the jargon of computational libertarianism. The speculative betting driving this recent price action, however, obscures a more rudimentary and overlooked shift in the digital economy signaled by cryptocurrencies, and Bitcoin in particular. Unlike an earlier industrial logic that sought to reduce heat loss and improve efficiency to maximize surplus value, Bitcoin’s proof-of-work system shifts the basis of value production from efficiency to inefficiency. Moreover, it does so by using a cryptographic algorithm whose purpose is to destroy the meaning of its inputs. Through an exploration of Bitcoin’s proof-of-work technics and its inversion of traditional models of value extraction, this article argues that Bitcoin reveals a profound transformation in the nature of surplus represented by computational capitalism.
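The proof-of-work mechanics at issue here can be sketched in a few lines. The following is a minimal illustration, not Bitcoin's actual protocol: real mining applies double SHA-256 to an 80-byte block header and compares the digest against a 256-bit target, whereas this toy version simply searches for a nonce whose hash begins with a run of zero hex digits. All names and parameters are illustrative. The point the abstract makes is visible in the loop itself: the work consists of deliberately wasted hashing, and the hash function destroys any meaningful relation between input and output.

```python
import hashlib

def proof_of_work(data: bytes, difficulty: int) -> int:
    """Find a nonce such that SHA-256(data + nonce) starts with
    `difficulty` zero hex digits. A toy sketch of the principle:
    Bitcoin itself uses double SHA-256 over a block header and a
    256-bit numeric target, not a hex-prefix check."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            # The only "product" is evidence that wasted computation
            # occurred: on average 16**difficulty hashes were tried.
            return nonce
        nonce += 1

# Each additional zero digit multiplies the expected work by 16.
nonce = proof_of_work(b"block-header", 4)
print(nonce)
```

Verifying the result takes a single hash, while finding it takes exponentially many attempts; that asymmetry between costly production and cheap verification is what the article identifies as value grounded in inefficiency.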
Digital and analog, what do these terms mean today? The use and meaning of such terms change through time. The analog, in particular, seems to go through various phases of popularity and disuse, its appeal pegged most frequently to nostalgic longings for nontechnical or romantic modes of art and culture. The meaning of the digital vacillates as well, its precise definition often eclipsed by a kind of fever-pitched industrial bonanza around the latest technologies and the latest commercial ventures. One common response to the question of the digital is to make reference to things like Twitter, PlayStation, or computers in general. Here one might be correct, but only coincidentally, for the basic order of digitality (the digitality of digitality) has not yet been demonstrated through mere denotation. And the second question—the question of the analog—is harder still, with responses often also consisting of mere denotations of things: sound waves, the phonograph needle, magnetic tape, a sundial. At least denotation itself is analogical. This article will aim to define the analog explicitly, and argue, perhaps counterintuitively, that the golden age of analog thinking was not a few decades past, prior to the widespread adoption of the computer. In fact, the fields of structuralism and poststructuralism that emerged in the middle to late twentieth century, concurrent with the deployment of the digital computer, were characteristically digital, what with their focus on the symbolic order and logical economies. If anything, the golden age of the analog is happening today, all around us, as evidenced by the proliferation of characteristically analog concerns: sensation, materiality, experience, affect, ethics, and aesthetics.
This essay is a critical genealogy of Tay, an artificial intelligence chatbot that Microsoft released on Twitter in 2016 and that internet trolls quickly hijacked to reproduce racist, misogynist, and anti-Semitic language. Tay’s repetition and especially production of hate speech call for an approach that draws on both media and cultural theory—the Frankfurt School’s analyses of language and ideology, in particular. Revisiting the Frankfurt School in the age of algorithmic reason shows that, contrary to views foundational to computing, a neural network chatbot like Tay does not sidestep meaning, but rather carries and alters it, with unforeseen social and political consequences. A return to the work of Max Horkheimer and Theodor W. Adorno thus locates ideology in the digital world at the nexus of language’s ability to mean, language and meaning’s susceptibility to computation, and the design of a machine to compute both. Coming to critical terms with the anti-Semitism produced in Tay’s human-computer synthesis requires, as this essay contends, addressing the uncanny embodiment and reflection of thought that is digital computation.
This article explores datafication as a speculative discourse that fundamentally and instrumentally misunderstands data: not as a representational system but as an ontology. The analysis takes a semiotic and media-archaeological approach, understanding datafication as an imaginary media system, and looks to supplementary discourses in data visualization and big data to clarify and expand an understanding of datafication as a prescriptive and speculative idea. This critique is sharpened through a detailed study of the early visualizations and unpublished fictions of W. E. B. Du Bois, whose imagined technology of the “megascope” gives a remarkable blueprint for contemporary discourses in data visualization and data science. Du Bois’s liberatory perspective is contrasted with a more paranoid example: the meticulously documented delusional system of Daniel Paul Schreber, which provides a much different vision of a totalizing media system in the form of an Aufschreibesystem, seeming to presage another side of datafication as a surveillance system. Taken together, datafication, the “megascope,” and the Aufschreibesystem work as three speculative media that imagine themselves as different instances or visions of a totalizing media system, a notion that has become especially significant in the technological, cultural, and economic systems of the twenty-first century.
This article traces the relationship between neoliberal thought and neural networks through the work of Friedrich Hayek, Donald O. Hebb, and Frank Rosenblatt. For all three, networked systems could accomplish acts of evolution, change, and learning impossible for individual neurons or subjects—minds, machines, and economies could therefore all autonomously evolve and adapt without government. These three figures, I argue, were also symptoms of a broader reconceptualization of reason, decision making, and “freedom” in relation to the state and technology that occurred throughout the 1950s–1970s. I also argue that this genealogy of decision making underpins contemporary relations between machine learning, reactionary politics, and neoliberal economics.
This is an investigation of the theory of the digital ocean that moves from the digitalized ocean of Georg Cantor’s set theory to that of Alan Turing’s computation theory. The article examines, in Cantor, what is arguably the most rigorous historical attempt to think the structural essence of the continuum, in order to make clear what actually disappears when Turing advocates for the structural irrelevance of this ancient ground.