[Image: Sea of Marmara, Türkiye, on October 3, 2025]
For example, a British undergraduate who’s been discussing AI with her cohort told me, “Everyone is asking why am I at university? AI can do my homework and it’s already taken my job. So what are we doing here?” The AI industry has saturated even its users with this mixture of resignation and dread.
This week, I gave a talk about this to an interesting Turkish engineering firm. I’ll reproduce parts of it that are especially relevant here.
There’s a fight between two different pathways for so-called AI. The first is AI as a powerful tool, an übertool that allegedly multiplies the intelligence of its human users like nothing that has come before.
The second pathway is the corporate road to “superintelligence,” just over the horizon. This second path remains the founding mission of the AI industry, especially of its dominant company, OpenAI. It rests on the mistaken assumption that history is secondary to technology, but that mistake has made its narratives extremely strong.
Both pathways intensify our pre-existing knowledge crisis, which threatens experts like the engineers in my audience as well as the well-being and democratic systems of our societies.
I raise a couple of questions.
The “knowledge society” was a limited Cold War concept with lots of problems, including its structural preference for technical over cultural knowledge. But it was better than imperial society or Jim Crow society, and it had conceptual space in which legitimate prosperity would come not from conquest or exclusion but from ideas, invention, and creativity, all of which could, with much help from social movements, be visible and supported across the entire society. It was liberal capitalism with social democratic characteristics. It was at least not technofeudalism.
My working hypotheses are that (1) AI is damaging knowledge in society (as opposed to having specific utilities at work), and (2) it is lowering the status of professional knowledge, including that of the engineers I was speaking with, so that they can’t set the standards for technology (as required to achieve social benefits).
My standard for AI is that it support advanced intelligence widely spread in the human population: across each society, across the world, global North and global South. It also has to leave education undamaged (I’ll leave that whole section of the talk for another time).
I searched the internet for images of “AI for the masses.” The visuals are atomizing and infantilizing.
More worryingly, AI arrives at a bad time for human intelligence. Some international data led John Burn-Murdoch to ask, “Have humans passed peak brain power?”
I take standardized test data with a grain of salt, but these international declines aren’t good. They correlate with smartphone use.
I'm especially concerned about the results to questions about people "having trouble learning new things."
It has been almost three years since the launch of ChatGPT, the first time a GPT model was attached to a chatbot. The big shift from 2024 to 2025 is that there’s much more research on the cognitive effects of AI use.
One study, MIT Media Lab’s Kosmyna et al., “Your Brain on ChatGPT,” compared three groups of subjects who wrote an essay: a Brain Only group, a Search Engine group, and a Large Language Model group, which used an LLM to write the essay. The researchers used physical monitoring of brain activity to “assess their cognitive engagement and cognitive load,” and also interviewed the participants. The best use of LLMs came when the Brain Only group, who had already written the essay on their own, used an LLM to rewrite it. The reverse process, starting with an LLM, was rather disastrous. Really a lot is going on in this 206-page paper, but here’s the result of one question: “Can you quote any sentence from your essay without looking at it? If yes, please, provide the quote.” This slide caused some squirming in the auditorium.
“In the LLM‐assisted group, 83.3 % of participants (15/18) failed to provide a correct quotation, whereas only 11.1 % (2/18) in both the Search‐Engine and Brain‐Only groups encountered the same difficulty.”
The simplest explanation for these results is that LLM writers aren’t really writing the essay with their brains: too much is being outsourced at every stage (even when they aren’t cutting and pasting). Columbia University undergraduate Owen Kichizo Terry very clearly explained (and lamented) the GPT-structuring (non-plagiarizing) process only a few months after ChatGPT’s release; the piece remains a good refresher.
A certain brain rot occurs through “cognitive offloading.” A study by Michael Gerlich, who works at SBS Swiss Business School, found “a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading.”
If you offload thinking, and aren’t actually thinking for yourself, you’ve flatlined your learning.
This gets us to LLMs themselves. 2025 has seen a massive amount of work on fixes for high error rates (I see about three new papers a day on this in my Twitter feed), but with success rates that remain ambiguous. GPT-5 was greeted like this.
You need users who can think for themselves, because models that regularly fabricate results need to be monitored constantly.
I do read a ton of contented coder talk—these are high-skill people whose job is to monitor code. AI Pathway 1 seems to be working towards a functionalist standoff.
Similarly, business engineer Andrew Ng, whom you may remember from his MOOC boosterism, is telling coders to just get over 2022 and use AI assistance to do it all at the new sped-up rate. If you do, your human capital will keep you in work; the old learning = earning bargain is intact. AI Pathway 1 is where AI doesn’t replace you but turbocharges you.
This narrative hit big bumps in 2025. Another MIT study got a lot of attention for concluding that, “Despite $30–40 billion in enterprise investment into GenAI, . . . 95% of organizations are getting zero return.” The reason got less attention: “The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.” Getting your AI to learn means people learning and then following up with system interventions, even as developers keep trying to automate the automation. Monitoring your AI takes time, and obviously limits productivity gains.
Becker et al. of METR in Berkeley, CA, bore this out. They studied experienced coders working with and without AI assistance, using tools like Cursor Pro and Claude 3.5/3.7 Sonnet. They asked the coders beforehand how much time AI would save them, asked afterwards how much time it had saved them, and in the meantime measured the actual time. It’s a bit alarming.
It’s not too surprising, then, that corporations are taking note: the second half of 2025 has seen AI adoption fade.
The productivity blast-off just isn’t happening.
An exasperated engineer explains why, if you would like further detail.
Thousands of AI industry folks are trying to engineer around the engineering limits, but there are likely intrinsic reasons, known to philosophers if not to entrepreneurs, why these limits will remain in place.
However, as AI Pathway 1 falters, the industry is putting even more emphasis on Pathway 2. There’s of course Sam Altman’s “gentle singularity” propaganda that I was too embarrassed to put on a slide.
More concretely, the entire point of Pathway 2 is to replace people with models. This is clear in a graphic from an OpenAI paper (Patwardhan et al., n.d.; covered by Victor Tangermann in Futurism, September 30, 2025) that runs a model through 44 occupations, all of them what we used to regard as pillars of the middle class. Take a particular look at the boxes for Manufacturing; Professional, Scientific, and Technical Services; and Information.
It has of course always been a primal urge under capitalism to replace labor with technology. The traditional mode has been to replace routine labor (assumed, often wrongly, to be unskilled) with technology. Now OpenAI in particular is seeking to replace non-routine labor, the highest-skilled labor, with AI.
The agonies of high-skill workers in culture (translators, commercial artists, designers, etc.) have become widely known. OpenAI and its partners have comparable designs on technical labor. Andrew Ng’s benign Human Capital Theory looks like its fig leaf.
My goal was to convince at least a few engineers that tech and art workers are in this together.
OpenAI’s intention is to show that models are smarter than experts and can replace them. This remains their promise to capital. It’s the real Great Replacement that should inspire much more public attention than it has—not as a reality but as a goal.
Any kind of democratization of the workplace will involve a much more egalitarian approach to “intelligence” than we’ve ever had under capitalism. The AI industry is pushing in the opposite direction. It is implicitly promising frenzied investors that it will first control “high-skill” labor in the way manual labor was controlled as it was industrialized, on the way to replacing it. The industry seems to have fixated on replacing medical radiologists, though even one of my preferred AI boosters accepts that this isn’t going well.
What I see is that as progress on Pathway 1 slows down to “good tool” speed, Pathway 2 hype has escalated. My summary:
So I asked whether we really know that Pathway 2 won’t (and shouldn’t!) work, so that we must make do with Pathway 1, while also chilling it out to “good tool” status. Here’s a slide on Pathway 2’s foundational term.
The AI industry has mainly tried to brush these off, or “ask for forgiveness” with small payments to media companies, presses, et al., now that it has already assimilated all their stuff.
I listed three tests that “superintelligence” has been failing.
One is that it skips over the ambiguities of “high-skill” jobs. I cited a paper by the economist Daron Acemoglu, which has become well known for its low estimate of AI’s impact on GDP: “the GDP boost within the next 10 years should also be modest, in the range of 0.93%–1.16% over 10 years in total.” He also concludes that AI will further increase inequality between capital and labor income.
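To see just how modest that range is, here is a minimal back-of-the-envelope sketch (my own annualization for illustration, not a figure from Acemoglu’s paper) converting the decade total into an implied yearly contribution to GDP growth:

```python
# Annualize Acemoglu's estimated 10-year total GDP boost (0.93%–1.16%).
# This conversion is an illustration of the scale, not a number from the paper.

low_total, high_total = 0.0093, 0.0116   # total boost over the decade
years = 10

low_annual = (1 + low_total) ** (1 / years) - 1
high_annual = (1 + high_total) ** (1 / years) - 1

print(f"Implied annual contribution: {low_annual:.3%} to {high_annual:.3%}")
# Roughly 0.09% to 0.12% of GDP growth per year.
```

That works out to about a tenth of a percentage point of growth per year, which is why “modest” is the operative word.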
I didn’t read through the slide below, you’ll be relieved to hear, but it contained what I thought was the most important idea in the paper: the difficulty, if not impossibility, of formalizing the not-yet-known in tasks that therefore require (expert) judgment.
I rushed through the philosophical problems with claims to AI-based judgment, cribbing from Brian Cantwell Smith’s The Promise of Artificial Intelligence and my review-essay in Critical AI. (James Meek has a good comparison of AI to human intelligence in LRB.)
Second, AI Pathway 2 fails the economics test of rational investment. A quick summary slide of the main issues (the first two h/t Ed Zitron).
OpenAI’s high burn rate means it really cannot honor the terms of any of the new mega-billion deals it has been striking with Nvidia, Oracle, et al.
For example, Ed Zitron notes that “The OpenAI and NVIDIA’s deal requires OpenAI to build 10 Gigawatts of data center capacity to unlock the remaining $90 billion in funding.” Going through some detail on costs and quantities, he concludes, “So, each gigawatt is about $32.5 billion. For OpenAI to actually receive its $100 billion in funding from NVIDIA will require them to spend roughly $325 billion — consisting of $125 billion in data center infrastructure costs and $200 billion in GPUs.”
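To make Zitron’s arithmetic concrete, here is a minimal sketch (in Python, using only the figures quoted above; the infrastructure/GPU split is the one he reports) of how the build-out cost compares to the NVIDIA commitment:

```python
# Back-of-the-envelope check of Zitron's figures (all amounts in billions of USD).
# Assumptions: 10 GW of capacity unlocks the NVIDIA funding, split into
# $125B of data center infrastructure and $200B of GPUs, per Zitron.

gigawatts = 10
infra_total = 125        # data center infrastructure, $B
gpu_total = 200          # GPUs, $B
nvidia_funding = 100     # total NVIDIA commitment, $B

build_total = infra_total + gpu_total            # 325 ($B)
cost_per_gw = build_total / gigawatts            # 32.5 ($B per gigawatt)
spend_ratio = build_total / nvidia_funding       # 3.25

print(f"Total build-out: ${build_total}B, or ${cost_per_gw}B per gigawatt")
print(f"Spending required per dollar of NVIDIA funding: ${spend_ratio:.2f}")
```

In other words, on these figures OpenAI would have to spend more than three dollars for every dollar NVIDIA puts in.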
The company doing these mad deals is running major losses. (I do grasp the VC business model of high burn rates today for monopoly in a gigantic market tomorrow.) But these wild ratios have even Sam Altman and Jeff Bezos agreeing this is a bubble, while insisting it’s a “good” bubble. It will be saved only if the tech-profitability miracle, The Singularity, actually comes to pass, and long before 2030.
Meanwhile, circular investing is starting to cause mainstream concern about systemic risk. This slide came from Bloomberg via Peter Atwater, who has a nice comparison slide with the industry structure behind the subprime mortgage crisis of 2007-09.
The FT’s Richard Waters, generally boosterish, has a good explainer on the risk.
“There’s not a lot of visibility into what’s going on,” says [Tomasz] Tunguz, the venture capitalist, of private credit arrangements like these. This type of lending “is leveraged, and it’s one step removed from the banks”, adds [Bill] Janeway. In the event that a data centre project cannot generate the cash flow to support its debt load, the losses could feed back into the banking system, he says.
OpenAI’s CEO, meanwhile, shows little concern about the scale of the spending that lies ahead — even though his company’s revenue, which has reached an annualised run rate of $13bn, is dwarfed by the $1tn of investment it is planning.
In short, AI Pathway 2 fails tests of culture, intelligence, and economics. I’m leaving out its failed social tests, meaning this pathway’s refusal to acknowledge, much less mitigate, its negative effects on the wider world. I limited myself to one slide on this, still very economic.
AI has sucked up capital investment from utterly crucial arenas for the 2020s, like transportation and climate. We’ll be paying for this distortion for a long time, even if the AI bubble doesn’t explode and vaporize big parts of the global economy.
So, in conclusion, here is my noir hypothesis about the dominant industry narrative, Pathway 2:
At the end of the lecture version, I argued that one important response, which engineers and artists can make equally, is to elaborate a framework for the capabilities we want in people and to turn it into a set of visible narratives that the rest of society can add to. These capabilities will include popular powers of programming, as Alan Blackwell argues (see my review linked below).
It’s complicated, so I’ll leave that for another post.
But there’s some good news—if and only if we can drive a stake through superintelligence.
That last box needs the same scale of effort now being spent on AI.
APPENDIX: MY RECENT AI PAPERS
“How Do We Escape Today’s AI?” (review of Karen Hao, The Empire of AI), ISRF Director’s Note, September 2025.
Review of Alan F. Blackwell, Moral Codes: Designing Alternatives to AI (MIT Press, 2024).
“The Fight about AI,” ISRF Director’s Note, March 2025.
“How to Make ‘AI’ Intelligent; or, The Question of Epistemic Equality,” Critical AI, vol. 1, nos. 1–2, October 2023.