First #16: The AI Con
The AI Con, despite what its name suggests, is not solely about AI; rather, it is an assiduous exercise in excoriating the runaway hype engulfing the technology industry.
Co-written by computational linguist Emily M. Bender and sociologist Alex Hanna, the book is a wonderful riposte to the magical thinking propelling the indiscriminate deployment of generative AI. The roots of The AI Con can be traced back to Bender & Hanna’s magnificently titled podcast, Mystery AI Hype Theater 3000, which in its three-year run has become a vehicle for honing their “ridicule as praxis” approach to technology criticism. A similarly light-hearted tone is evident throughout The AI Con, under which a far more serious intention unfolds: to furnish the reader with the critical tools to spot, ridicule and reject current and future manifestations of technology hype.
AI has limits
AI has become part of everyday vocabulary so suddenly that few of us have taken time to interrogate its origins. Bender & Hanna use the opening chapters to pin down what such a nebulous term actually means and where it came from.
Artificial intelligence is not a new idea. Students of computing history may be familiar with the Dartmouth Summer Research Project in 1956, a workshop where computer scientists met to discuss their research in the burgeoning field of thinking machines. Propelled by Cold War anxiety, the converging fields of military and computational engineering research produced many of the technologies that power the computing we take for granted today. It was at this Dartmouth conference that the term artificial intelligence was first used.
AI is unlikely to have escaped your notice. You might already have felt its disruptive effects on how we find work, discover music, or seek companionship. But what is AI?
First of all, Bender & Hanna advise us to stop using the term altogether. The capabilities of the computing landscape have increased hugely since 1956. We all have access to computational power that academic or military institutions could only dream of six decades ago. Technologies falling under the umbrella of AI have proliferated. Your text messaging autocomplete was once considered AI, until its reliability improved to the point where we ceased to pay it any attention.
“To put it bluntly, ‘AI’ is a marketing term. It doesn't refer to a coherent set of technologies. Instead, the phrase ‘artificial intelligence’ is deployed when the people building or selling a particular set of technologies will profit from getting others to believe that their technology is similar to humans, able to do things that, in fact, intrinsically require human judgment, perception, or creativity.” — The AI Con, pg. 5
To help us navigate this confusing landscape, Bender & Hanna define AI as computational technologies falling into five categories: decision making (e.g. automated loan approvals), classification (e.g. image or facial recognition), recommendation engines (e.g. what to watch next on streaming sites), transcription or translation (e.g. automated speech recognition and translators), and text and image generation (e.g. ChatGPT and DALL-E). [1]
With co-authors whose research interests span linguistics and sociology, The AI Con has a natural sociotechnical and qualitative concern. Bender & Hanna’s narrative takes the reader on a journey to witness how AI’s untrammelled hype is already wreaking havoc: in how we perceive and measure intelligence (and the eugenicist roots so closely associated with that pursuit); in how the relentless pursuit of automation most punitively devalues and exploits the labour of underage, low-income, female, incarcerated, racialised and Global South workers; and in the wholesale theft and exploitation of the work of visual and audio artists, writers and journalists. [2]
With these present and continuing harms in mind, Bender & Hanna urge us to redirect our critical gaze “… away from speculative risks—no matter how exciting the action-movie sequences they conjure up might be—to the actual harms being done now in the name of AI.” (pg. 162). The contention put forward by The AI Con is that AI is a technology emerging from a deeply problematic political ideology: wasteful beyond imagination, steered by a TESCREAL-infused vision, and embodied in an architecture that exists to extract as much as possible from the digital realm and from human creativity, while centralising its infrastructural dominance into an empire of sorts, to borrow a metaphor from Karen Hao.
“The danger is not from some hypothetical extinction-level event. The danger emerges from rampant financial speculation, the degradation of informational trust and environments, the normalization of data theft and exploitation, and the data harmonization systems that punish the people who have the least power in our society...” — The AI Con, pg. 152
Bender & Hanna fiendishly anoint generative AI systems as “synthetic text extruding machines” (pg. 72), even as GPT-5 is lauded as possessing PhD levels of intelligence, a meaningless phrase if you ever take a moment to pause and think about what it means. No standardised test exists to measure such intelligence.
At the root of a large language model’s functionality is the system’s (in)ability to reliably infer information from its data. There is no such thing as truth in a large language model; these are systems which, Dr. Dan McQuillan argues, have an “adversarial relation to facts”. These statistical text generation engines create sequences of text that are statistically most likely to correspond to a prompt. Combined with people’s natural propensity to assign agency and sentience to automatons, flawed reasoning that humanity has repeated since Friar Bacon’s brazen head captivated onlookers in the thirteenth century, we find ourselves in a situation where astronomical amounts of money are being sprayed in pursuit of an all-seeing, all-knowing artificial general intelligence which, so far, has evaded a consistent or coherent definition.
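The mechanics are easier to feel than to describe. Here is a deliberately tiny sketch of my own (not from the book): a bigram model that, like an LLM at vastly greater scale, chains each next word on because it often follows the previous one in the training data, never because the resulting sentence is true.

```python
import random
from collections import defaultdict

# A toy "statistical text extruder". The corpus is invented for
# illustration; a real LLM does the same trick over trillions of tokens.
corpus = (
    "the model predicts the next word "
    "the model predicts likely text "
    "likely text is not true text"
).split()

# Record which words follow which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def extrude(start: str, length: int = 6, seed: int = 0) -> str:
    """Chain statistically plausible words; truth plays no part."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(extrude("the"))
```

Every continuation this produces is locally plausible, and that is the entire objective of the computation; whether the emitted string corresponds to anything in the world simply never enters into it.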
Bullshit and hype
The danger of generative AI’s uncritical deployment across campuses and corporations is that its principal use case is the one thing the technology is least suited to fulfilling. There is a swelling constellation of apps, plugins and features that ask large language models to infer information from data, while the data scientists, product designers and engineers behind them seem unaware of, or unbothered by, the technology’s inability to ground its responses in truth. Without that grounding in a shared understanding of truth, all the technology can do is give us compelling and persuasive strings of text, and it is up to the human operator to determine their veracity.
We have been seduced by interfaces deliberately designed to act as if they are deep in thought, a performance that close inspection reveals to be riddled with flaws. The yawning chasm between a technology sold to us with promises of magical solutionism and the prosaic reality of its statistical word soup makes generative AI, following in the path of that heedless emperor, the latter-day gimmick of Sianne Ngai’s imagining, shot through with contradictions:
“The gimmick saves us labor.
The gimmick does not save labor (in fact, it intensifies or even eliminates it).
The gimmick is a device that strikes us as working too hard.
The gimmick is a device that strikes us as working too little.
The gimmick is outdated, backwards.
The gimmick is newfangled, futuristic.
...
The gimmick makes something about capitalist production transparent.
The gimmick makes something about capitalist production obscure.” — Theory of the Gimmick, pg. 72
Differentiating between lies and bullshit is at the root of ChatGPT is bullshit, a paper co-authored by Michael Townsen Hicks, James Humphries, and Joe Slater in 2024. They establish that prior knowledge of truth is key: if an actor (human or machine) knowingly offers untrue information, it has lied. But what if the actor is incapable of being aware of truth? The false information it offers is now firmly in bullshit territory, “that is, speech or text produced without concern for its truth–that is produced without any intent to mislead the audience about the utterer’s attitude towards truth.” And this is where we find ourselves with large language models.
Technologies we are vigorously encouraged to use for interpreting what is true, correct or present in large and complex datasets are at best giving their users a credible string of text that may or may not semantically align with the prompt. It is the reason that LLMs cannot do arithmetic, or reliably and consistently infer facts from data, such as which American states contain the letter R. As Hicks, Humphries and Slater observe, “the problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.” [3]
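The states question underlines the point, because the ground truth is a deterministic computation that any system genuinely designed to represent the world could perform. A minimal sketch of my own (not from the book or the paper) settles it in a few lines of Python:

```python
# All fifty US state names; the answer to "which states contain the
# letter R?" is a deterministic filter, not a statistical guess.
US_STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

with_r = sorted(s for s in US_STATES if "r" in s.lower())
print(len(with_r))  # 21 states contain the letter R
```

Run once or a thousand times, the answer is the same 21 states, and neither South Dakota nor Washington is among them.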
How have we arrived at a situation where the claims made for a technology are so divorced from the realities of what it can do? The opening chapter of The AI Con is concerned with breaking down the role of hype in propping up the excitement and investment needed to fund the development of these technologies. Their desire is for us, as readers, to “resist the urge to be impressed… spot AI hype in the wild and… take back some ownership in our technological future” (pg. 20).
Hype Studies

Hype is a critical factor for promoting any new technology. I had never taken time to critically study hype’s social, cultural, or memetic role until I went to the Hype Studies conference at Universitat Oberta de Catalunya (UOC) in Barcelona last September. Over three days, 100 academics, practitioners, artists, technologists and researchers gathered in-person with about three times as many delegates joining online to share papers, presentations and open-format workshops examining hype as a performative force across multiple disciplines.
I met Johannes Klingebiel, a design researcher based in Munich who has written and published Hype: A Critical Field Guide. This pocket-sized reference book gives much-needed clarity on how we can identify and puncture hype in its many forms — a skill we all need to get much better at cultivating, because the digital technologies of the future can and will change.
Andreu Belsunces Gonçalves and Jascha Bareis, two members of the Hype Studies conference organising committee, examine how tech companies deliberately and strategically use hype to turn “fictional visions into plausible trajectories.” They call for a concerted effort to build hype literacy, beginning with practitioners in the technology industry, so we can use our influence to correct the narrative of what these technologies can and cannot do, because “assessing hype as a political instrument can help policy makers and regulators, journalists and citizenry to be less vulnerable to the mandates of eugenic future visions that are presented as natural and inevitable by some tech-gurus.” If we are concerned with halting runaway technology hype, we must urge those around us to practise discerning the gaping holes between the claims made for a technology and the realities of what it can do.
The next hype?
Bender & Hanna begin The AI Con by telling us “[their] primary goal is to inhibit the next tech bubble” (pg. xi). They argue that we must all contribute to this effort, and not outsource such responsibilities to a tiny cadre of technology CEOs with selfish motives.
Technologies change. Fast forward to 2036, and it is very likely the subject dominating technology discourse will be another set of frameworks, three letter acronyms, or architectures.
For an industry so reliant on regular injections of venture capital, there is increasing pressure for the eye-watering sums pumped into generative AI startups to produce financial returns exceeding the GDP of some nation states. This aching maw opening up between investment and financial return is not sustainable; personalised ads or a $20 recurring monthly subscription will not fill the gap, and the lofty promises of productivity gains are slowly being revealed as nothing more than a damp squib.
The AI Con is a welcome counter-narrative in an ocean of naively optimistic technology literature that cannot or will not engage with limits and flaws that are plainly evident. If we take nothing else from the many case studies, we should at least bookmark Bender & Hanna’s suggested strategies for popping hype bubbles when presented with a magical AI solution (pgs. 164–170). We should be asking: What is being automated? Are these systems being described as if they were human (a framing that often goes hand-in-hand with overstated capabilities)? Who benefits from the technology, who is harmed, and what recourse do they have if things go wrong?
The skill we are being asked to (re)train ourselves in is that of critical discernment: can we identify the contradictions between the hype around a technology and what it might actually be able to do? The AI Con is a manual for keeping ourselves grounded while the wider technology industry seems determined to be carried away on a cloud of empty promises, blissfully unaware of the ease with which these fragile systems will drop us back down to Earth and shatter our dreams once we realise the threadbare nature of runaway hype.
The AI Con: How To Fight Big Tech’s Hype and Create The Future We Want by Emily M. Bender and Alex Hanna (The Bodley Head, 2025, 274 pages)
Bender & Hanna’s classification both overlaps with and differs from the categorisation in Narayanan & Kapoor’s AI Snake Oil (Princeton University Press, 2024), whose authors classify AI technologies into three buckets: generative AI, predictive AI, and content moderation. ↩︎
It is important to note that these factors are far from mutually exclusive; in fact, these effects and harms to labour can and often do intersect. ↩︎
This little test is a good way of demonstrating how LLMs work: they give you a response calculated from probability rather than by parsing and responding to the content of a prompt (a Wikipedia lookup would have been even easier). Responding to my prompt “Which states in the USA contain the letter R?” at the time of writing (29 October 2025), Notion’s AI running on Anthropic’s Claude responded with 21 states that included South Dakota; Microsoft’s Copilot responded with a list of 26 states including Massachusetts, Minnesota, South Dakota, Texas, Washington, Wisconsin, and Wyoming. ChatGPT’s free version responded with 16 states that included Washington. Repeating the prompt for each LLM gives different results, so there is no consistency in these statistically calculated “true” answers! ↩︎
More from Emily M. Bender and Alex Hanna
Aside from Bender & Hanna’s previous academic work (Bender’s contribution to Stochastic Parrots and Hanna’s work on Against Scale have helped me most), you can follow the ongoing work of “ridicule as praxis” that Bender & Hanna host each week on their podcast, Mystery AI Hype Theater 3000.
The Wizard of AI (2023)
I think this short documentary is a great companion piece to The AI Con. In The Wizard of AI, applied media theorist and academic researcher Alan Warburton examines the feeling of “wonderpanic” engulfing our contemplation of generative AI technologies, while grappling with the contradictions of critiquing the very tools used to make the film.
The Wizard of AI (2023, 20 mins) by Alan Warburton
Explore further
Transcripts are available for The UK's Misplaced Enthusiasm (with Gina Neff).
Further reading
- What’s Behind Technological Hype? by Jeffrey Funk (Issues In Science and Technology vol. 36, no. 1, 2019)
- Smoke & Mirrors: How Hype Obscures the Future and How to See Past It by Gemma Milne (Robinson, 2020, 336 pages)
- Words Matter: How Tech Media Helped Write Gig Companies into Existence by Sam Harnett (Working paper, available at SSRN, 2020)
- Eighteen pitfalls to beware of in AI journalism by Sayash Kapoor and Arvind Narayanan (AI as Normal Technology, 2022)
- Theory of the Gimmick: Aesthetic Judgment and Capitalist Form by Sianne Ngai (Harvard University Press, 2022, 416 pages)
- AI and the automation of work by Benedict Evans (2023)
- AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor (Princeton University Press, 2024, 320 pages)
- ChatGPT is bullshit by Michael Townsen Hicks, James Humphries, and Joe Slater (Ethics and Information Technology vol. 26, no. 38, 2024)
- Destroy AI by Ali Alkhatib (2024)
- Arenas of Artificial General Intelligence Hype: A visual database of AGI hype discourse by Hailey Hannigan, Sofia Mari Surkau and Katarina Vrablova; based on research by Andreu Belsunces (2025)
- Eleven Theses on Technological Hype as Capital by Vassilis Galanos (Interregnum, 2025)
- Expanding Hype Literacy to Protect Democracy by Andreu Belsunces Gonçalves and Jascha Bareis (Tech Policy Press, 2025)
- Forty percent of ‘AI startups’ in Europe don’t actually use AI, claims report by James Vincent (The Verge, 2019)
- Hype: A Critical Field Guide by Johannes Klingebiel (2025)
- Silicon Valley’s abundance of hype over abundance by Cristina Criddle (Financial Times, 2025)
- Trapped in the Maw of a Stillborn God by Edward Ongweso Jr. (The Tech Bubble, 2025)
And finally...
🇳🇱 Last month, I presented a talk called “The Spirit of Bartleby: In defence of refusal” at ‘No & ...’: A Forum on Technological Refusal at Maastricht University. My talk built on one I first gave at Research by the Sea last year, but offered a more up-to-date reflection of where my thinking on this subject is at.
🏴 In June, I will be opening day 2 of UX Scotland, an international conference for anyone working in user experience, human-centred or service design. I have called my keynote “a firmament inside”, and it will take a conceptual art and poetics-led approach to examining the effects of self-imposed constraints on our practice. Tickets are available now (with discounts for freelancers, and scholarships available).
📚 You can buy mentioned books from my page on Bookshop.org (affiliate link)
🗄️ Editions #1–15 of First & Fifteenth were published from 2023 until 2025.