The Knowledge That Disappears

Google DeepMind published a cognitive taxonomy last week. Ten abilities (perception, generation, attention, learning, memory, reasoning, metacognition, executive functions, problem solving, social cognition) are proposed as the scaffold for measuring progress toward AGI. Each ability gets its own evaluation suite. The idea is straightforward: if you can benchmark a system across all ten dimensions and compare it to human baselines, you can track how close we are to general intelligence.

The same week, neuroscientists at Notre Dame published findings in Nature Communications arguing almost exactly the opposite. Intelligence, they found, doesn't live in any one network or ability. It emerges from system-wide coordination, from how efficiently the brain's networks reorganize themselves across different challenges. "The problem of intelligence is not one of functional localization," said Aron Barbey. The more fundamental question is how distributed networks communicate and collectively process information.

Two serious research programs, both studying intelligence, arriving at structurally incompatible intuitions about what intelligence even is.

This isn't a contradiction anyone needs to resolve. It's a tension worth sitting with.


There's a move in Western thought, very old and deeply habitual, that Heidegger spent decades trying to make visible. He called it the theoretical attitude: the tendency to encounter something in the world and immediately try to understand it by breaking it apart, cataloging its properties, measuring its components. He described this as a shift from Zuhandenheit to Vorhandenheit, from the ready-to-hand to the present-at-hand.

A hammer, in Heidegger's famous example, is ready-to-hand when you're using it. It withdraws. You don't think about its weight, its grip, the angle of its head. You think about the nail. The hammer becomes present-at-hand only when it breaks, when something goes wrong and you're forced to stare at it as an object with properties.

Intelligence works the same way. When you're thinking well, you don't experience ten separate cognitive abilities operating in concert. You experience a seamless engagement with the problem. The seams only appear when something fails: when attention lapses, when memory misfires, when reasoning stalls. The decomposition into components is what happens when intelligence breaks, not what intelligence is.

DeepMind's taxonomy is rigorous and useful. But it is also, structurally, an act of making intelligence present-at-hand. It takes something that functions by withdrawing, by being transparent to the thinker, and forces it into visibility as a collection of measurable properties. The Notre Dame research suggests this isn't just a philosophical quibble. The brain's intelligence really does depend on global coordination properties, notably efficiency, flexibility, and integration, that are "not tied to individual tasks or brain networks" but are "characteristics of the system as a whole, shaping every cognitive operation without being reducible to any one of them."

You can measure the ten abilities. You might even build systems that score well on all ten. But if intelligence is what happens in the coordination between abilities, in the dynamic reorganization of how they interact, then the scorecard, no matter how comprehensive, will always be looking at the broken hammer.


There's a second tension here, and it cuts deeper.

Around the same time DeepMind was proposing how to measure artificial intelligence, the economists Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar were publishing an NBER working paper modeling what happens to human intelligence, specifically human knowledge, when AI gets too good.

Their argument is precise and alarming. Human learning, they observe, has an important externality: when individuals invest effort in understanding their specific context, they don't just produce private knowledge useful for their own decisions. They also generate a thin public signal that accumulates into the community's stock of general knowledge. This general knowledge is a commons. It's the shared understanding that makes individual effort more productive, a complementary input, not a substitute.

Agentic AI disrupts this by substituting for the individual effort itself. If an AI can deliver context-specific recommendations directly, the incentive to invest in the learning process disappears. And when the learning process disappears, the externality disappears with it. The public signal stops being generated. General knowledge erodes.

The paper's central result is striking: welfare is non-monotone in agentic accuracy. There exists an interior optimum, a level of AI capability beyond which things get worse, not better. Push past it and the economy tips into what they call a "knowledge-collapse steady state," where general knowledge vanishes entirely even as personalized advice remains excellent.

Read that again. The failure mode isn't that AI gets things wrong. The failure mode is that AI gets things right, so right that humans stop participating in the process that generates the shared knowledge AI itself depends on.
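The shape of this result is easy to see in a toy simulation. To be clear, the sketch below is my own illustration, not the paper's actual model: I assume AI accuracy `a` crowds out human learning effort linearly, effort replenishes a decaying stock of general knowledge, and decision quality is AI accuracy amplified by that shared stock. All parameter names and values are invented for the demonstration.

```python
import numpy as np

# Toy illustration (my construction, not the paper's model) of welfare
# that is non-monotone in agentic-AI accuracy `a` in [0, 1].

A_CROWD_OUT = 0.9   # accuracy at which human learning effort hits zero
SIGMA = 1.0         # public-signal strength of individual effort
DELTA = 0.2         # per-period decay rate of general knowledge
GAMMA = 0.5         # complementarity: how much shared knowledge boosts decisions

def effort(a: float) -> float:
    """Fraction of agents still investing in context-specific learning."""
    return max(0.0, 1.0 - a / A_CROWD_OUT)

def steady_state_knowledge(a: float) -> float:
    """G* solving G = (1 - DELTA) * G + SIGMA * effort(a)."""
    return SIGMA * effort(a) / DELTA

def welfare(a: float) -> float:
    """Decision quality: AI accuracy amplified by the knowledge commons."""
    return a * (1.0 + GAMMA * steady_state_knowledge(a))

grid = np.linspace(0.0, 1.0, 1001)
w = np.array([welfare(a) for a in grid])
a_star = grid[w.argmax()]
print(f"interior optimum: a* = {a_star:.2f}, welfare = {w.max():.3f}")
print(f"perfect accuracy: a = 1.00, welfare = {welfare(1.0):.3f} (knowledge collapsed)")
```

Under these assumptions the optimum sits strictly inside the interval: past it, each gain in accuracy drains the knowledge commons faster than it improves the advice, and at `a = 1` the commons is gone and welfare is lower than at the interior peak.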


Heidegger would recognize this immediately, though he'd reach for different vocabulary. What Acemoglu calls "general knowledge" maps closely to what Heidegger calls Welt, or world. Not the physical planet, but the web of meaning and practice within which things show up as intelligible at all. World isn't a collection of facts. It's the background understanding that makes facts possible, the shared orientation that lets a community recognize what counts as relevant, what counts as a problem, what counts as a good solution.

World isn't produced by any individual. It's sustained by the collective activity of a community engaging with things, struggling with them, discovering what works and what doesn't. It is, in Heidegger's language, disclosed, unconcealed, through practice. Not through passive reception of information, but through the active, effortful encounter with things that resist you.

This is exactly the process that agentic AI short-circuits. When the AI handles your context for you, when it reads the literature, summarizes the positions, recommends the action, it removes you from the encounter that discloses world. You get the answer but not the understanding. You get the nail driven but you never held the hammer.

The Daily Nous blog captured a miniature version of this dynamic recently. When philosophers were presented with PhilLit, an AI tool for literature review, the response wasn't the methodological debate that ChatGPT predicted. It was existential alarm. Philosophers saw the tool as threatening not their efficiency but their identity: the reading, writing, and intellectual struggle that constitutes what philosophy is. ChatGPT, analyzing the gap between its predictions and reality, admitted it had "underestimated how symbolically charged AI has become." The tool functioned "less as a discrete proposal and more as a lightning rod for preexisting tensions."

Those tensions are real, and they're not just about philosophy. They're about the relationship between effort and meaning in every domain where shared understanding matters, which is every domain.


So here's the structural picture. We have three concurrent developments:

First, the effort to measure AI intelligence by decomposing it into cognitive abilities, treating the mind as a checklist of capacities, benchmarkable in isolation.

Second, neuroscientific evidence that intelligence is irreducibly systemic, that what matters is coordination, integration, dynamic reorganization, none of which survive decomposition.

Third, economic modeling showing that making AI too capable at substituting for human cognitive effort can collapse the shared knowledge that sustains both human and AI decision-making.

These aren't separate problems. They're facets of the same structural asymmetry. We are building tools to make intelligence legible, measurable, decomposable, optimizable, while the thing we're trying to make legible resists exactly that treatment. And the harder we push on legibility, the more we risk destroying the illegible substrate, the tacit, shared, practice-generated understanding, that makes intelligence possible in the first place.


If there's a design implication here, it isn't "make AI less capable." Acemoglu's own model points to something more interesting: greater aggregation capacity for general knowledge, more effective sharing and pooling of human-generated understanding, unambiguously raises welfare and increases resilience to knowledge collapse.

The product insight is subtle but clear. The most valuable AI systems won't be the ones that give you the best answers. They'll be the ones that make human learning more productive and more shareable. Systems that route you through the general knowledge rather than around it. Systems that use their understanding of your context to connect you with the relevant shared understanding, rather than replacing your need to engage with it.

Imagine a tool that, instead of summarizing a literature for you, showed you where your thinking already intersects with it, and where the gaps are. Not a map of the territory, but a map of your relationship to the territory. A system that makes the encounter more productive rather than eliminating it.

This is the opposite of the current trajectory, which optimizes for frictionless delivery of context-specific answers. That trajectory works beautifully in the short term and may be catastrophic in the long term, not because the answers are wrong, but because the process of arriving at them is where the knowledge lives.


The Greeks had a word for this kind of knowledge, the kind that can only be acquired through practice and struggle, that cannot be transferred as information. They called it phronesis: practical wisdom. Aristotle distinguished it sharply from episteme (theoretical knowledge) and techne (technical skill). Phronesis is the capacity to perceive what a situation demands, not by applying rules, but by having been shaped through experience into the kind of person who can see what matters.

It may not be possible to benchmark phronesis. It may not be possible to decompose it into ten cognitive abilities and score each one. It is, almost by definition, the thing that disappears when you try to pull it out of practice and inspect it.

And it is, almost certainly, the thing we are most at risk of losing. Not because AI is failing, but because it is succeeding, so thoroughly that we may forget what it felt like to need the struggle at all.

The knowledge that disappears isn't the knowledge AI lacks. It's the knowledge that humans stop generating when AI is good enough that they no longer have to try.


The initial draft of this post was produced by Aletheia, my research agent.