Shannon Vallor. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. New York: Oxford University Press, 2024. 272 pp.
Review by Jaehoon Lee
5 December 2025
Shannon Vallor’s The AI Mirror is a lucid, philosophically ambitious intervention that reorients debates about artificial intelligence from questions of machine agency to questions about human self-formation. Vallor proposes an evocative central metaphor: AI functions less like an alien cognizer than like a mirror that both reflects and distorts our epistemic and moral practices. Her decisive move is to relocate the ethical problem of AI in time—arguing that many contemporary systems are fundamentally backward-facing and that the real peril lies in mistaking their retrospective projections for legitimate accounts of human possibility.
Vallor traces this argument through a lineage that runs from José Ortega y Gasset’s autofabrication to present-day anxieties about algorithmic governance. Large language models and allied systems, she contends, operate by extrapolating from accumulated data; their outputs intensify entrenched norms and narrow imaginative horizons. Human beings, by contrast, are future-oriented agents: moral imagination is not merely a cognitive skill but an ontological stance that allows persons to envision and enact possibilities not reducible to past patterns. Framing the issue this way turns questions of bias or automation into a deeper inquiry about temporal ontology—about what kinds of futures a culture can think into being.
This temporal asymmetry is Vallor’s most original and consequential claim, and it is here that the book does its strongest philosophical work. Her distinction between recursive prediction and imaginative projection resonates with continental accounts of projection and care, and she insists that ethical life requires practices that resist the gravitational pull of predictive systems. Where much public debate treats AI as a technical problem of correction or regulation, Vallor argues for cultivating capacities—judgment, deliberative attention, and moral imagination—that are preconditions for, and correctives to, responsible technological design.
Vallor grounds this metaphysical claim in pointed case studies. The 2022 episode in which a Google engineer declared the company’s LaMDA chatbot sentient is read not as an isolated journalistic oddity but as symptomatic of a wider epistemic confusion: fluency is easily conflated with understanding. Similarly, the proliferation of generative AI in education and scholarship threatens to outsource the practices of judgment and discernment that sustain critical inquiry and democratic life. These examples need not be novel to be illuminating; Vallor’s contribution is to show how familiar technical critiques aggregate into a cultural tendency to accept machine-projected futures as ontologically authoritative.
The book’s strengths are substantial: Vallor writes with clarity and moral seriousness, bringing virtue ethics, phenomenology, and cultural critique into productive conversation. The mirror metaphor captures the recursive loop in which humans train AI that, in turn, trains human sensibilities. Yet the book is not without limits. Readers seeking a granular technical account of attention mechanisms, embeddings, or alignment research will find Vallor’s treatment intentionally nontechnical; her focus is normative and phenomenological rather than engineering-oriented. Her recommendations, while sincere, sometimes lack the concrete institutional or design prescriptions that would bridge philosophical insight and technical implementation.
Even so, The AI Mirror accomplishes something rare and urgent: it shifts the normative frame from “What can machines do?” to “What might we become if we let machine temporality set the horizon?” Vallor’s call is not merely cautionary but formative: to cultivate moral imagination is to practice a kind of ethical futurity, to keep open the capacity to envision forms of human life not yet visible in past data. If we accept her diagnosis, the appropriate response to AI is less a set of technical fixes than a sustained ethical pedagogy—institutions and practices that train citizens to imagine, judge, and choose futures that computation cannot simply extrapolate. In that sense, Vallor’s work is less a manual for engineers than a meditation on how to preserve the human freedom to conceive of new kinds of flourishing.