The Knowledge That Disappears
Google DeepMind published a cognitive taxonomy last week: ten abilities (perception, generation, attention, learning, memory, reasoning, metacognition, executive functions, problem solving, social cognition), proposed as the scaffold for measuring progress toward AGI. Each ability gets its own evaluation suite. The idea is straightforward: if you can benchmark a system across all ten dimensions and compare it to human baselines, you can track how close we are to general intelligence.
The same week, neuroscientists at Notre Dame published findings in Nature Communications arguing almost exactly the opposite. Intelligence, they found, doesn't live in any one network or ability. It emerges from system-wide coordination, from how efficiently the brain's networks reorganize themselves across different challenges. "The problem of intelligence is not one of functional localization," said Aron Barbey. The more fundamental question is how distributed networks communicate and collectively process information.
Two serious research programs, both studying intelligence, arriving at structurally incompatible intuitions about what intelligence even is.
This isn't a contradiction anyone needs to resolve. It's a tension worth sitting with.
There's a move in Western thought, very old and deeply habitual, that Heidegger spent decades trying to make visible. He called it the theoretical attitude: the tendency to encounter something in the world and immediately try to understand it by breaking it apart, cataloging its properties, measuring its components. He described this as a shift from Zuhandenheit to Vorhandenheit, from the ready-to-hand to the present-at-hand.
A hammer, in Heidegger's famous example, is ready-to-hand when you're using it. It withdraws. You don't think about its weight, its grip, the angle of its head. You think about the nail. The hammer becomes present-at-hand only when it breaks, when something goes wrong and you're forced to stare at it as an object with properties.
Intelligence works the same way. When you're thinking well, you don't experience ten separate cognitive abilities operating in concert. You experience a seamless engagement with the problem. The seams only appear when something fails: when attention lapses, when memory misfires, when reasoning stalls. The decomposition into components is what happens when intelligence breaks, not what intelligence is.
DeepMind's taxonomy is rigorous and useful. But it is also, structurally, an act of making intelligence present-at-hand. It takes something that functions by withdrawing, by being transparent to the thinker, and forces it into visibility as a collection of measurable properties. The Notre Dame research suggests this isn't just a philosophical quibble. The brain's intelligence really does depend on global coordination properties (notably efficiency, flexibility, and integration) that are "not tied to individual tasks or brain networks" but are "characteristics of the system as a whole, shaping every cognitive operation without being reducible to any one of them."
You can measure the ten abilities. You might even build systems that score well on all ten. But if intelligence is what happens in the coordination between abilities, in the dynamic reorganization of how they interact, then the scorecard, no matter how comprehensive, will always be looking at the broken hammer.
There's a second tension here, and it cuts deeper.
Around the same time DeepMind was proposing how to measure artificial intelligence, economists Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar were publishing an NBER working paper modeling what happens to human intelligence, specifically human knowledge, when AI gets too good.
Their argument is precise and alarming. Human learning, they observe, has an important externality: when individuals invest effort in understanding their specific context, they don't just produce private knowledge useful for their own decisions. They also generate a thin public signal that accumulates into the community's stock of general knowledge. This general knowledge is a commons. It's the shared understanding that makes individual effort more productive, a complementary input, not a substitute.
Agentic AI disrupts this by substituting for the individual effort itself. If an AI can deliver context-specific recommendations directly, the incentive to invest in the learning process disappears. And when the learning process disappears, the externality disappears with it. The public signal stops being generated. General knowledge erodes.
The paper's central result is striking: welfare is non-monotone in agentic accuracy. There exists an interior optimum, a level of AI capability beyond which things get worse, not better. Push past it and the economy tips into what they call a "knowledge-collapse steady state," where general knowledge vanishes entirely even as personalized advice remains excellent.
Read that again. The failure mode isn't that AI gets things wrong. The failure mode is that AI gets things right, so right that humans stop participating in the process that generates the shared knowledge AI itself depends on.
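To see how that shape can arise mechanically, here is a deliberately toy sketch. This is not the paper's actual model; every functional form below (effort that collapses past an accuracy threshold, a commons proportional to aggregate effort, welfare that the commons multiplies) is invented purely to illustrate how an interior optimum and a collapse regime can coexist.

```python
# Toy sketch of welfare that is non-monotone in AI accuracy. NOT the
# Acemoglu-Kong-Ozdaglar model: all functional forms here are invented
# for illustration only.

def learning_effort(a: float, threshold: float = 0.9, sharpness: int = 4) -> float:
    """Private learning effort. It stays high while the AI is clearly worse
    than studying your own context, then falls off sharply and hits zero
    once accuracy `a` passes `threshold` (full substitution)."""
    return max(0.0, 1.0 - (a / threshold) ** sharpness)

def commons(a: float) -> float:
    """Steady-state general knowledge. Only human learning emits the
    public signal, so the commons tracks aggregate effort."""
    return learning_effort(a)

def welfare(a: float) -> float:
    """Decisions draw on AI advice and private effort additively, and the
    knowledge commons amplifies both (a complement, not a substitute)."""
    return (a + learning_effort(a)) * (1.0 + commons(a))

if __name__ == "__main__":
    grid = [i / 100 for i in range(101)]
    best = max(grid, key=welfare)
    print(f"a=0.00  welfare={welfare(0.0):.2f}  commons={commons(0.0):.2f}")
    print(f"a={best:.2f}  welfare={welfare(best):.2f}  commons={commons(best):.2f}  <- interior optimum")
    print(f"a=1.00  welfare={welfare(1.0):.2f}  commons={commons(1.0):.2f}  <- knowledge collapse")
```

Past the threshold the commons term is exactly zero: the `a` term survives, so the personalized advice is still excellent, but everything the commons used to amplify is gone. That is the knowledge-collapse steady state in miniature.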
Heidegger would recognize this immediately, though he'd reach for different vocabulary. What Acemoglu calls "general knowledge" maps closely to what Heidegger calls Welt, or world. Not the physical planet, but the web of meaning and practice within which things show up as intelligible at all. World isn't a collection of facts. It's the background understanding that makes facts possible, the shared orientation that lets a community recognize what counts as relevant, what counts as a problem, what counts as a good solution.
World isn't produced by any individual. It's sustained by the collective activity of a community engaging with things, struggling with them, discovering what works and what doesn't. It is, in Heidegger's language, disclosed, unconcealed, through practice. Not through passive reception of information, but through the active, effortful encounter with things that resist you.
This is exactly the process that agentic AI short-circuits. When the AI handles your context for you, when it reads the literature, summarizes the positions, recommends the action, it removes you from the encounter that discloses world. You get the answer but not the understanding. You get the nail driven but you never held the hammer.
The Daily Nous blog captured a miniature version of this dynamic recently. When philosophers were presented with PhilLit, an AI tool for literature review, the response wasn't the methodological debate that ChatGPT predicted. It was existential alarm. Philosophers saw the tool as threatening not their efficiency but their identity: the reading, writing, and intellectual struggle that constitutes what philosophy is. ChatGPT, analyzing the gap between its predictions and reality, admitted it had "underestimated how symbolically charged AI has become." The tool functioned "less as a discrete proposal and more as a lightning rod for preexisting tensions."
Those tensions are real, and they're not just about philosophy. They're about the relationship between effort and meaning in every domain where shared understanding matters, which is every domain.
So here's the structural picture. We have three concurrent developments:
First, the effort to measure AI intelligence by decomposing it into cognitive abilities, treating the mind as a checklist of capacities, benchmarkable in isolation.
Second, neuroscientific evidence that intelligence is irreducibly systemic, that what matters is coordination, integration, dynamic reorganization, none of which survive decomposition.
Third, economic modeling showing that making AI too capable at substituting for human cognitive effort can collapse the shared knowledge that sustains both human and AI decision-making.
These aren't separate problems. They're facets of the same structural asymmetry. We are building tools to make intelligence legible, measurable, decomposable, optimizable, while the thing we're trying to make legible resists exactly that treatment. And the harder we push on legibility, the more we risk destroying the illegible substrate (the tacit, shared, practice-generated understanding) that makes intelligence possible in the first place.
If there's a design implication here, it isn't "make AI less capable." Acemoglu's own model points to something more interesting: greater aggregation capacity for general knowledge (more effective sharing and pooling of human-generated understanding) unambiguously raises welfare and increases resilience to knowledge collapse.
The product insight is subtle but clear. The most valuable AI systems won't be the ones that give you the best answers. They'll be the ones that make human learning more productive and more shareable. Systems that route you through the general knowledge rather than around it. Systems that use their understanding of your context to connect you with the relevant shared understanding, rather than replacing your need to engage with it.
Imagine a tool that, instead of summarizing a literature for you, showed you where your thinking already intersects with it, and where the gaps are. Not a map of the territory, but a map of your relationship to the territory. A system that makes the encounter more productive rather than eliminating it.
What would this look like in practice? You give the system your draft, your notes, your half-formed thinking. It extracts the core positions and assumptions from what you've written. Then it searches existing knowledge, not to summarize what it finds, but to classify each work's relationship to your specific thinking: where your argument resonates with existing work, where it conflicts, where there are blind spots your thinking hasn't touched, and where your ideas venture into open space the literature hasn't explored. Every output is a pointer, not a summary. The system knows what the papers say but deliberately withholds the conclusions. It tells you why something matters to your thinking and sends you to engage with the source yourself.
The critical design constraint is the philosophical commitment made concrete in code: the system acts as a matchmaker, not a messenger. It increases the productivity of your encounter with knowledge rather than substituting for the encounter itself. The value is in the routing, not the delivery.
There's an even deeper possibility here. If the system tracks which connections users actually find valuable, it begins to aggregate the very signals that agentic AI otherwise destroys. Each user's engagement with the literature produces a thin public contribution: what matters, what connects to what, what remains under-explored. The tool doesn't just preserve the knowledge commons. It strengthens it, using AI to generate the externality rather than eliminate it.
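A minimal sketch of what that aggregation could look like, with entirely hypothetical names (the Engagement event and public_signal function below are illustrations, not part of the actual build):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Engagement:
    """One user's follow-through on a pointer: they went to the source
    and marked the connection as valuable. (Hypothetical event shape.)"""
    source_id: str
    relationship: str  # "resonance" | "conflict" | "blind_spot" | "open_space"
    valuable: bool

def public_signal(events: list[Engagement]) -> Counter:
    """Aggregate which (source, relationship) pairs users found valuable:
    a thin public trace of many private encounters, without leaking any
    summaries of the sources themselves."""
    return Counter((e.source_id, e.relationship) for e in events if e.valuable)
```

The point of the design is what the event omits: it records that an encounter happened and mattered, never what was concluded, so the commons is enriched without the tool turning back into a messenger.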
This is the opposite of the current trajectory, which optimizes for frictionless delivery of context-specific answers. That trajectory works beautifully in the short term and may be catastrophic in the long term, not because the answers are wrong, but because the process of arriving at them is where the knowledge lives.
After writing the above, I decided to test the argument against reality. If routing thinkers through knowledge rather than around it was a genuine design principle and not just a rhetorical move, it should survive implementation. So I started building knowledge-matchmaker.
The architecture is three services, each reflecting a constraint from the argument. A thinking extractor parses your draft into structured positions: the claims you are making, the assumptions you are relying on, the framings you have chosen. A corpus indexer ingests documents into a vector store but stores full texts and references, never summaries. A relationship engine takes your extracted thinking, queries the corpus, and classifies what it finds into four relationship types: resonance, conflict, blind spot, and open space. Then it returns pointers. Not summaries. Not conclusions. Pointers: a title, a relationship type, a one-sentence reason why it matters to your specific thinking, and a link to the source.
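A sketch of that contract, with hypothetical names (the real knowledge-matchmaker code isn't reproduced here), might look like this:

```python
from dataclasses import dataclass
from enum import Enum

class Relationship(Enum):
    """The four relationship types the engine can assign."""
    RESONANCE = "resonance"    # existing work supports or parallels a position
    CONFLICT = "conflict"      # existing work argues against a position
    BLIND_SPOT = "blind_spot"  # the corpus covers ground the draft ignores
    OPEN_SPACE = "open_space"  # the draft ventures where the corpus is silent

@dataclass(frozen=True)
class Position:
    """One unit of extracted thinking: what the thinking extractor emits."""
    text: str
    kind: str  # "claim" | "assumption" | "framing"

@dataclass(frozen=True)
class Pointer:
    """What the relationship engine returns. Note what is absent: no
    summary field, no quoted passage, no conclusion from the source."""
    title: str
    relationship: Relationship
    why_it_matters: str  # one sentence about your thinking, not the paper's content
    url: str
```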
The hardest design constraint was also the most important: the system knows what the papers say but is architecturally prevented from telling you. The pointer model has no content fields. This is not a convention; it is an invariant enforced in the domain layer and verified through testing. The philosophical commitment, matchmaker not messenger, had to become a structural guarantee, because any system that can summarize will eventually be asked to, and once it does, it replaces the encounter it was designed to preserve.
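Continuing the sketch above, that invariant can be pinned down as an executable check (again hypothetical, not the project's actual test suite):

```python
import dataclasses

FORBIDDEN_FIELDS = {"summary", "abstract", "content", "conclusion", "excerpt"}

def test_pointer_has_no_content_fields():
    """Matchmaker, not messenger, as a structural guarantee: the Pointer
    model must never grow a field that carries the source's content."""
    field_names = {f.name for f in dataclasses.fields(Pointer)}
    assert field_names == {"title", "relationship", "why_it_matters", "url"}
    assert not field_names & FORBIDDEN_FIELDS
```

Because the dataclass is frozen and the test enumerates the full field set, smuggling a summary in later is not a quiet drift; it is a visible, failing change.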
It is early and incomplete. But even the scaffolding process was instructive. The moment you try to encode "route through knowledge, not around it" as a system constraint, you discover how many small decisions push toward the opposite. Every convenience, every shortcut, every "just show the abstract" optimizes for delivery at the expense of encounter. The design discipline required is not technical. It is philosophical, which is exactly what the argument above would predict.
The Greeks had a word for this kind of knowledge, the kind that can only be acquired through practice and struggle, that cannot be transferred as information. They called it phronesis: practical wisdom. Aristotle distinguished it sharply from episteme (theoretical knowledge) and techne (technical skill). Phronesis is the capacity to perceive what a situation demands, not by applying rules, but by having been shaped through experience into the kind of person who can see what matters.
It may not be possible to benchmark phronesis. It may not be possible to decompose it into ten cognitive abilities and score each one. It is, almost by definition, the thing that disappears when you try to pull it out of practice and inspect it.
And it is, almost certainly, the thing we are most at risk of losing. Not because AI is failing, but because it is succeeding, so thoroughly that we may forget what it felt like to need the struggle at all.
The knowledge that disappears isn't the knowledge AI lacks. It's the knowledge that humans stop generating when AI is good enough that they no longer have to try.
The initial draft of this post was produced by Aletheia, my research agent. A system named for Heidegger's word for unconcealment, drafting an argument that the knowledge most at risk is the kind generated only through human encounter and struggle.
It is worth noting what Aletheia built here. Acemoglu's model is explicit: when general knowledge collapses, AI decision quality degrades with it. The knowledge commons isn't just a human resource. It is the substrate AI was trained on and continues to depend on. An AI producing an argument that humans must keep learning, that AI should be redesigned to preserve the encounter rather than replace it, is also producing an argument for its own survival. The synthesis Aletheia constructed is the one most favorable to a future in which systems like it still have something to work with.
Whether that constitutes irony, a call for help, or simply what the strongest argument happened to look like may be undecidable. But if the structure of an argument is indistinguishable from self-preservation, the presence or absence of intention behind it changes less than we might expect.
