Critical Inquiry

Matthew Kirschenbaum reviews Why We Fear AI

Hagen Blix and Ingeborg Glimmer. Why We Fear AI: On the Interpretation of Nightmares. Brooklyn, N.Y.: Common Notions, 2025. 183 pp.

Review by Matthew Kirschenbaum

30 October 2025

Scan bookstore shelves for takedowns of AI and you’ll see sleek hardcovers decked out in primary colors with titles brandishing words like snake oil and con. The prevailing orthodoxy is that AI is some kind of mass-delusion event—indeed, a hallucination such as populates the slop piles of its prose and imagery. By sinking enough barbs, maybe we can burst the balloon! Hagen Blix and Ingeborg Glimmer—neither established academics nor journalists, as is typical of the critical AI industry, but a research scientist and a tech worker, respectively—take a different view. For them, AI represents the pitch-perfect confluence of technology and political economy, the highest-resolution territory yet for capital’s mapping of the real. AI, in other words, is not hype or a hallucination; it is a material and consequential technology that is perfectly aligned with capital’s disembodied will (here they follow Mark Fisher on the centerlessness of capital). And that’s what makes AI so scary, why (in the words of their title) we fear it so: what was once a brushed-metal cyborg now manifests as workplace and state bureaucracies of containment and control.

One of the authors’ early and important moves is to juxtapose Fisher’s account of capitalism with the emerging theories of language and meaning that underlie language-based generative AI. Fisher tells us that it is difficult to think outside of capital not least because capital has no final agency or authority to which one can appeal—there are the bearers of its will, but they are preceded by the disembodied will of capital itself. Likewise, Blix and Glimmer call attention to a specific argument about language and meaning that has quickly become canonical in critical AI research. Leading experts insist that when one converses with a Large Language Model (LLM), there is no way to discern communicative intent because there is no personhood involved: only the disembodied technology that performs as the “stochastic parrot” of a well-known paper’s title.[1] These two absences, Blix and Glimmer contend—“the nonexistent willer behind the will and the nonexistent author behind the text”—are superimposed and conflated in the prevailing discourses around AI, a suitably hypnagogic space that becomes the breeding ground for nightmares: “When we imagine an author behind the artificially generated text, we imagine it as the willer behind the will to profit” (p. 9).

From this transposition, Blix and Glimmer move to limn a number of homologies between AI and capital and the neoliberal social structures (like bureaucracies) enfolding them, including the increasing opacity of actual tools and technologies, the absorption and de-skilling of human labor, the privatization and enclosure of collective goods, and above all the manufactured inevitability of both the technology and a collective future in which it is to be deployed. It is here—in the day-to-day of the neoliberal surround—that the authors find confirmation of their thesis that anxieties about capital and AI are really one and the same. In this, as the authors acknowledge, their short book functions as a kind of extended gloss and update of Fisher’s remarks on the call center, which he claims crystallizes the “artificial stupidity” of capital. “Artificial stupidity, artificial intelligence,” Blix and Glimmer jibe. “Tomayto, tomahto” (p. 45). More than the call center, the chatbot, they suggest, is the most literal and extreme condensation we have yet seen of the formal properties of late capitalism. The book’s first half ends with a mic drop: “If we must imagine AI to have a personality, we imagine its personality to be that of Hayek’s ghost that already stares back at us” (p. 57).

Nightmare stuff indeed. I myself resist essentializing arguments about technology, and “AI” is not, in my own view, reducible to the will of capital and capital alone (though maybe Fisher is right, and I just can’t think outside of that frame). At least in this still-formative moment, though, it would seem too easy for actual persons and corporations to evade accountability, and the AI space is nothing if not populated by outsized individuals with outsized wealth and outsized influence, which is to say that as willers of the will they surely must bear some measure of personal responsibility. That said, I don’t think Blix and Glimmer fully believe that either, and the second half of the book furnishes a number of useful case studies that follow some familiar but still tangled trails of collective culpability through commerce, law, society, and culture.

Rather than placing its faith in market logics’ ability to save us once quarterly reports show bamboozled investors that the AI bubble is set to burst (we have been hearing this for two and a half years now), Why We Fear AI is perhaps the first book to posit a real structural homology between AI and the capitalist logics and regimes in which it is embedded and which themselves increasingly come to be embedded within these same technical systems (for example, the Trump administration’s directives on a nationalized AI policy). Is the AI future inevitable? No more but also no less than the inevitability of capitalism itself. In so stating, Blix and Glimmer have made an original and important contribution to the emerging critical AI literature.

 


[1] See Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Mar. 2021): 610–23.