Saturday, May 2, 2026

Liner Note 53. AI and the Public Good

Wayne State, Detroit on April 12, 2019  

This is the corrected text of a talk I gave online to the Wayne State University conference, “Public Budgets, Public Good,” on April 30, 2026.  Many thanks to the audience, whose questions about theory and practice were excellent. Thanks also to the sponsors: Labor@Wayne, AAUP, HELU, and Public Good U. I’m still sorry I wasn’t there in person.

∞∞∞

I’ve always seen the university as a force for the general development of society, having been influenced by a tradition that includes Humboldt, Fichte, Kant, Hegel, Marx, Douglass, Ida B. Wells, Du Bois, John Dewey, C.L.R. James, and many thinkers since.  This has made it easier to grasp the fact that the university’s largest effects are at once non-monetary and public.  These public effects have been rendered “dark matter” by the political and business worlds, which have steered people exclusively toward the private pecuniary effect of the B.A. wage increment over high school. College presidents and other officials have simply echoed them.  This is overwhelmingly true in the US and the UK, and amounts to a mass miseducation about education. But it is also true elsewhere, and apparently in China.

Figure 1. What People are told to see in a college degree


The dominant discourse makes the three other general effects almost invisible, even though economists fully acknowledge them while mainstream methods downplay them as not readily quantifiable.

Education in reality produces private non-pecuniary effects, and then two kinds of external or public effects: pecuniary and non-pecuniary. 

Figure 2. The four types of educational effects: private and public, pecuniary and non-pecuniary


(Humanities disciplines have a richer taxonomy that we don't formulate and circulate: I'll develop this point elsewhere.) Ironically, given their invisibility, public, non-monetary, and general social effects are two-thirds of the total (McMahon 2009).

Now, the most intelligent aspect of AI hype is that its proponents recognize the importance (and popularity) of public effects and have been leading with them.  Advocates like OpenAI’s Sam Altman and Anthropic’s Dario Amodei have promised that Large Language Models (LLMs) and related technologies will imminently produce superintelligence, which will transform human life for the greater good. 

The advocate discourse ignores or actively marginalizes nine known problems with AI. The first seven: cultural bias (racism), opacity, coercion, violations of privacy, mass (ongoing) theft of everyone’s IP, energy use, and environmental damage. 

The hype also rejects an eighth issue: AI’s reinforcement of plutocracy and global inequality through (a) capital concentration, (b) geoeconomic rivalry, and (c) destruction of good (skilled, creative) work. Hyperscaler AI points toward the end of the professional-managerial class as a rival to our platform masters.

Hype refuses a ninth AI issue: not only are the models themselves not intelligent in any rigorous sense, but AI use may damage the intelligence we already have by lowering learning in educational systems.  

These nine factors suggest on their face that the net public effects of AI are strongly negative. Majorities of the public agree: in June 2025, only 10% of Americans were “more excited than concerned” about AI. “About half say AI will worsen people's ability to think creatively.” (About half are also “using AI at least a few times a month for their job (55% among employed voters) and in their personal life (51%)”.) We need to hash all this out in a global debate, but we’re not.

To flip it around, how could AI be good for the public? Briefly, it would have to avoid all nine of those harms and, where they have already occurred, reverse them. Here I’ll just look at #9. 

But before I do that, I have a comment about the eighth issue, the future of good jobs.  For AI to reverse its intended destruction of knowledge jobs—especially of the pathway to them—it would have to support the expansion of human access to interesting, self-directed, creative, challenging tasks by doing at least two things: performing boring, debilitating tasks for people, including full, representative information retrieval; and enabling what we can call contestatory dialogics, through which one’s thinking is confronted, tested, rejected, restructured, and thus advanced. (See Vivienne Ming’s overview of this use, which I take as an enlightened baseline for the first half of 2026.)

This cyborg control of AI by humans isn’t quite what we’re seeing, even in the Rosy Scenario coming from economists. The Rosy Scenario holds that AI will have the greatest effects on the most adaptable workers (Manning & Aguirre 2026).  As one paper puts it, “many occupations highly exposed to AI contain workers with relatively strong means to manage a job transition.”  AI kills creative work, but creative workers are good at adapting—the catch being that they will likely be “adapting to something less creative.”  These economists offer a deregulated approach in which good AI deployment flows from individual adaptive capacities that workers are supposed to possess already—including brand-new college graduates.

I think governments are obviously going to need to play a major regulatory role.  But it’s more likely that they will make workers fend for themselves, especially in the U.S. So let’s posit weak regulation combined with the economists’ Rosy Scenario: limits on AI for “hard tasks,” which humans will still need to be hired to do, and in which employability in good jobs—the professional-managerial-creative kinds—will be contingent on being able to “exercise [complex] reasoning in AI-enabled environments” (Sonnenfeld et al. 2026).

Back to the ninth problem, lowered intelligence.  For AI to reverse this, it would need to increase learning and knowledge creation rather than hurt them (even just to maintain skilled-worker adaptability).  We’d find “cognitive gain” in AI-driven education rather than “limited learning.”   

Universities, in generating public benefits, do two big things: knowledge creation, associated with research, and intellectual development, associated with teaching.  Through both, universities expand and refine consciousness.  

First, teaching. This isn’t about job training. It’s about a contemporary version of Bildung, focusing on intensive learning for capability development.

What role does AI play on campuses in this kind of intellectual development?  

There is plenty of evidence that people are using AI tools for the creation of outputs—like writing their papers for them (Terry 2023).

I have seen no evidence that LLMs offer people unique help with the process of learning—by which I mean the internalization of capabilities and knowledge.  This happens when you can reproduce in your own head the structure of the mortgage payment calculation your calculator just performed. It happens when, as an English speaker, you know how to say the word for “enlightenment” in Korean (계발). It is “in your head,” and not retrieved externally from an online translator, as I just did. 
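To make the mortgage example concrete: internalizing the calculation means being able to reconstruct its structure yourself, not just read a result off a tool. Here is a minimal sketch of the standard fixed-rate amortization formula (my illustration, not part of the talk; the function name and figures are hypothetical):

```python
# Standard amortization formula: M = P * r(1+r)^n / ((1+r)^n - 1).
# "Internalizing" it means being able to rebuild this structure in
# your own head, not just accepting the calculator's output.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12        # monthly interest rate
    n = years * 12              # number of monthly payments
    if r == 0:
        return principal / n    # zero-interest edge case
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# e.g., a $300,000 loan at 6% over 30 years
print(round(monthly_payment(300_000, 0.06, 30), 2))  # → 1798.65
```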

People certainly do learn things during a process in which they use AI at various points. In the Vivienne Ming article I cited above, her intellectually successful group was able to engage in contestatory dialogics with their AI tools because they already possessed “two important qualities: perspective-taking and intellectual humility.”  These qualities enable, among other things, mobility through heterogeneous epistemic spaces. They, rather than the AI itself, enabled the humans to maintain an active relation to the AI such that it pushed their learning rather than replacing it.  

The evidence suggests, to generalize, that AI supports learning as an ancillary source of information and of dialogic interaction—that is, when people prevent AI from doing the learning for them. Learning involves getting knowledge and thoughts into one’s head through active engagement with the incompleteness of and problems in one’s understanding as these keep becoming apparent.  In contrast, the well-understood danger is that people let LLMs substitute for learning by getting them to generate outputs without going through the process themselves.  This substitution for learning, or bypass of learning as a process, is likely the main reason why we’ve seen findings of “cognitive loss,” “cognitive offloading,” and ensuing “brain rot” among AI users (Kosmyna et al. 2025; Gerlich 2025). If you offload thinking, and aren’t actually thinking for yourself, you’ve flatlined your learning.

In other words, AI complements human skill in the creation of outputs. By default, it substitutes for human skill if it dominates the process of learning. When AI does the latter, it suppresses skill development.

This is obvious in other contexts: AI can’t teach you how to play basketball or the violin really well. You have to practice yourself, enacting the whole range of capabilities endlessly, so that it’s you who is actually doing them when the time comes. Evidence so far suggests that the moments of learning—the internalization or absorption process—mainly need to be brain-only. 

My first conclusion is that university officials need to do a much better job of distinguishing between helpful and harmful uses of AI.  The official policies that I have seen hide these necessary distinctions behind the veil of job training in AI. (After the talk, Bruce Simon kindly sent me the SUNY framework.) 

Second, research. Universities are central to society’s ability to create knowledge, organize it, circulate it, retrieve it, and use it.  Each of these five knowledge processes is difficult and collective, involving the coordination of people and institutions.  A practice like “team science” is just the tip of the iceberg of global and local knowledge collaboration.  It’s massively multiplayer, massively multicultural—the dynamics of knowledge systems boggle the mind and tax our vocabularies.

In addition, knowledge systems are radically heterogeneous, mixing types of data, methods, disciplines, disparate mental schemas, frameworks, and whole cultures. Understanding any problem requires repeated acts of synthesis across knowledge types, methods, and more.

Universities are primary sites of this gathering of the knowledges. Teaching prepares people to do this gathering, that is, this thinking, throughout their lives.  Research is an essential support for the deepening and the gathering of all knowledges for society.  Universities through this teaching and research are first and foremost public goods.

AI can be a public good, but its public effects depend radically on carefully controlled use. As an (impressive) tool for collecting, orchestrating, and presenting knowledge interactively, where humans are actively engaged with the tool’s powers and limits, AI is a public good. AI as a tool for extending knowledge—AlphaFold for predicting 3D protein structure—is a public good. AI can complement universities.

Can sloppy AI degrade universities? Yes: this is already happening. Can AI substitute for universities? Absolutely not. I’ve suggested that where it substitutes for learning it blocks learning, which clearly damages this prime university good. 

In research, AI can’t (or shouldn’t) replace the processes of reasoning via the embodied gathering of knowledges taking place in the brain.  It might seem that LLMs won’t damage research in the way that they may damage learning because researchers have already developed their brains as learners. It seems true that advanced learners or experienced professionals do better with AI than novices (hence experienced programmers seem less vulnerable to the AI job freeze than entrants). But cognitive offloading can happen in research as well.  Researching and learning are indissociable, and we don’t yet really understand how research may develop rigidity, narrowness, or invisible potholes by incorporating AI into processes of design, analysis, and also written exposition in papers.

Ming defines an “Information-Exploration Paradox”: “As the cost of information approaches zero, human exploration collapses.”  This problem clearly applies to research as much as to learning.

Similarly, when three researchers—economists and computer scientists—recently modeled the impact of Agentic AI on “learning incentives,” they found what they call “knowledge collapse” (Acemoglu, Kong, and Ozdaglar 2026).  To cite them:

Agentic AI delivers context-specific recommendations that substitute for human effort. By contrast, a richer stock of general knowledge complements human effort by raising its marginal return. [Our] model highlights a sharp dynamic tension: while agentic AI can improve contemporaneous decision quality, it can also erode learning incentives that sustain long-run collective knowledge. . . . [T]he economy can [then] tip into a knowledge collapse steady state in which general knowledge vanishes ultimately, despite high-quality personalized advice.

Knowledge collapse would be very bad. The model is narrow—there’s evidence that the quality of specific advice will also degrade—and yet even this model, built on a restricted definition of knowledge use through “incentives,” finds knowledge collapse.
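The tipping dynamic the authors describe can be sketched as a toy simulation. To be clear, this is my own loudly hypothetical construction, not the Acemoglu, Kong, and Ozdaglar model: a knowledge stock grows with human learning effort and decays otherwise, and agents exert effort only when its return (which rises with the existing stock, the complementarity in the quote) beats the free AI recommendation:

```python
# Toy illustration (NOT the Acemoglu-Kong-Ozdaglar model): a knowledge
# stock K grows with human learning effort and decays otherwise.
# Effort pays only when its marginal return, which rises with K
# (knowledge complements effort), beats the AI substitute, so
# sufficiently good AI advice switches effort off and K collapses.

def simulate(ai_quality: float, steps: int = 200, K: float = 1.0) -> float:
    decay, gain = 0.05, 0.08
    for _ in range(steps):
        # exert effort only if its return exceeds the AI's free advice
        effort = 1.0 if gain * K > ai_quality else 0.0
        K = max(0.0, K + gain * effort - decay * K)
    return K

print(round(simulate(ai_quality=0.05), 3))  # → 1.6 (knowledge sustained)
print(round(simulate(ai_quality=0.20), 3))  # → 0.0 (knowledge collapse)
```

The point of the sketch is only the discontinuity: below a threshold of AI advice quality, the knowledge stock converges to a healthy steady state; above it, learning effort never turns on and the stock decays toward zero, matching the paper's "knowledge collapse steady state" in spirit.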

I end by coming back to the fact that universities are mainly public goods. These goods come in two very broad kinds—personal intellectual development with clear collective or public benefits, and knowledge creation, organization, and use, where knowledge is heterogeneous, radically diverse, and simultaneously specific and general.  AI will function as a public good only as a complement to these processes, and as a destructive agent when it substitutes for them.

My sense is that most colleges and universities are now seeing the indiscriminate application of AI tools in each of these two domains. This is, over time, an existential threat both to universities and to their primary public goods of intelligence and knowledge.  Use of AI tools needs to be precisely regulated through a process governed by policy developed through—really controlled by—the frontline experience of faculty, graduate student employees, research staff, and students.  We’ll need to work very hard to keep AI use subservient to the public missions of intellectual development and the creation of knowledge.