How We Build Truth: Patterns Across Science and Religion
With an interest in philosophy, psychology, and science, I often reflect on the nature of truth. Although agnosticism is usually understood as skepticism toward religious claims, I apply the same critical approach to scientific claims, examining both domains with equal scrutiny: neither offers "the answer", but both reveal how humans construct meaning and navigate reality.
Science is frequently framed as the counterpart to religion: pursuing objectivity where religion embraces subjectivity, relying on reason where religion turns to faith. But to me it seems that science exhibits religion-like patterns in how it builds a worldview, organizes collective belief, and provides rituals for determining what is "True". To understand these patterns, we must examine how truth-making operates at three levels: the technological tools that extend perception, the philosophical frameworks that interpret what these tools reveal, and the social institutions that legitimize certain interpretations over others.
For Heidegger, human existence (Dasein) is inseparable from the world, continually making sense of reality through tools, language, and the horizons of culture. We never encounter raw reality, only reality as it appears to us through our particular ways of being in the world. A hammer shows up as equipment for hammering, not as an arrangement of molecules. Science, like religion, is simply a way reality can show itself.
The Evolution of Sense Organs
Human beings originally engaged with reality solely through their biological senses: sight through the eyes, hearing through the ears, touch through the skin, smell through the nose, and taste through the tongue. Religion and myth grew out of what those organs revealed and what they could not explain.
Science extended this perception with instruments: microscopes, telescopes, sensors, particle detectors. Each invention became a new artificial sense organ, creating new ways of perceiving and thus new "truths".
But these are still interfaces, not reality itself. Donald Hoffman's Interface Theory of Perception (ITP) argues that perception functions like a desktop: icons that hide complexity yet guide action. Hoffman explicitly rejects the idea that perception represents the underlying world; the icons are evolutionary shortcuts that deliver fitness, not ontology. The microscope shows us a bacterium, not reality "as it is." Whether or not one accepts ITP's game-theoretic proofs, the theory dramatizes how strongly mediated our contact with the world remains, offering one lens among many for thinking about disclosure.
Heidegger offers a different approach: truth (aletheia) is always unconcealment within a horizon, not correspondence to a hidden reality. Though he would resist framing scientific instruments as "enhanced sense organs", preferring to see them as different modes of world-disclosure, the insight remains: our technological tools reshape what can appear as real. They extend our horizon of unconcealment, but they do not dissolve its limits. What we call "truth" is simply what shows itself given our current modes of disclosure, whether we understand this through the lens of filtered interfaces or of phenomenological unconcealment.
Extended perception alone doesn't explain science's authority. That requires understanding both how science validates knowledge through method and how institutions maintain that validation.
The Ritual of Method
Religion has rituals: prayers, sacraments, ceremonies. Science, too, relies on ritualized practices (hypothesis formation, measurement, replication) that we shorthand as the scientific method.
Of course, philosophers of science from Helen Longino to Paul Feyerabend have shown there is no single liturgy. Different fields mobilize different mixtures of modeling, simulation, experiment, and observation; methodological pluralism is the rule, not the exception. Yet that very pluralism is cultivated and policed through shared routines, textbooks, review panels, and training pipelines. The ritual metaphor therefore lands not because science is monolithic, but because every specialty develops its own patterned choreography for deciding what counts.
Still, imagine we could run these rituals with an infinite sample size. Would we discover that our sense organs and instruments are themselves flawed, embedding errors we had never noticed before?
If that is the case, the ritual does not reveal timeless truth, only a provisional one. What was once taken as truth (Newtonian mechanics, for instance) was later revised by Einstein, and then again by quantum mechanics. Each new paradigm does not eliminate error, but offers a new framework for containing it.
This is the work of Dasein: truth is always situated, always interpretive, always bound to the tools of its time.
This ritualistic approach serves a deeper function: not revealing absolute truth, but generating useful orientations toward reality.
Science as Instrumentalism
Philosophically, science is less about eternal truth than about instrumentalism: what works, what predicts, what allows survival and progress.
An instrumentalist view says: truth is not correspondence with reality, but the most useful orientation toward it. Science provides orientation for technology, prediction, and control; just as religion provides orientation for community, ethics, and meaning.
Both science and religion are instrumentalist insofar as they are not mirrors of reality, but compasses for navigating it.
This instrumentalist view helps explain why scientific truths prove provisional; they are tools that work until better tools emerge.
The Pragmatic Success of Science
A common objection arises: "But science works! It cures diseases, builds rockets, creates technology that transforms lives. Religion doesn't deliver practical results in the same way."
This objection misses the point. Religion also "works"; it builds communities, provides meaning, offers ethical guidance, and delivers psychological comfort in the face of mortality and uncertainty. While science stands out for its reproducibility and broad applicability, the real issue is not which one works better, but what we mean by something 'working' in relation to truth.
Science's superior technological predictions (its ability to reliably build rockets, cure diseases, and engineer materials) don't refute instrumentalism. Rather, they demonstrate which orientations prove most effective for manipulation and control of physical systems. This predictive asymmetry shows that some frameworks are objectively superior for specific goals, not that science reveals reality-in-itself. The moon landing succeeded using Newtonian mechanics not because Newton discovered ultimate truth, but because his framework provides an extraordinarily useful orientation for certain domains of intervention.
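To see how little machinery such an orientation needs, here is a minimal sketch in Python, using two textbook constants (illustrative values, not mission data), that recovers the Moon's orbital speed from Newton's law alone:

```python
import math

# Newtonian orientation: for a roughly circular orbit, gravity supplies the
# centripetal force, so the orbital speed is v = sqrt(GM / r).
GM_EARTH = 3.986e14           # Earth's gravitational parameter, m^3/s^2 (textbook value)
EARTH_MOON_DISTANCE = 3.84e8  # mean Earth-Moon distance, m (textbook value)

speed = math.sqrt(GM_EARTH / EARTH_MOON_DISTANCE)
print(f"Predicted lunar orbital speed: {speed / 1000:.2f} km/s")  # ~1.02 km/s
```

The prediction comes out close to the observed 1.02 km/s. That it works tells us the orientation is useful; whether it mirrors reality "as it is" is a question the framework cannot answer from inside itself.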
Science excels at prediction and control; religion excels at meaning and community. Neither provides access to reality "as it is".
This instrumental understanding of truth raises a crucial question: if science doesn't provide absolute truth, what maintains its authority? The answer lies in its social architecture.
The Social Architecture of Belief
Science doesn't just resemble religion philosophically; it mirrors religion institutionally and psychologically. Sociologists of knowledge have traced how laboratories, funding agencies, and credentialing regimes stabilize what counts as "legitimate" truth.
Universities function as cathedrals, journals as sacred texts, peer review as orthodoxy enforcement. Citations become scripture, with frequent references conferring authority. Tenure resembles priesthood: institutional protection for those who've proven their devotion. Retraction serves as excommunication, removal from the community of legitimate voices. Grant funding operates like tithing, concentrating resources among the faithful. Even scientific conferences resemble religious gatherings: communal reinforcement of shared beliefs, ritual presentations, and the social bonding that maintains institutional coherence.
Beyond these institutional parallels, both science and religion serve similar psychological functions. They offer comfort against uncertainty by providing explanatory frameworks. Science promises that chaos has underlying order discoverable through method; religion promises that suffering has meaning within larger purpose. Both create communities around shared beliefs, whether in natural laws or divine providence. The analogy has limits: organized skepticism and transparent debate are built into science's rites, but naming the parallels highlights how deeply social our truth-making remains.
Neither eliminates mystery; they relocate it. Religion places mystery in the divine; science places it in the yet-to-be-discovered. Both provide the psychological comfort of knowing there are authoritative answers, even if we don't possess them all yet.
These institutional patterns aren't unique to contemporary science; they've repeated throughout history with each paradigm shift.
Skepticism and Shifting Truth
If we look historically, truth is the story of skepticism overturning orthodoxy:
- Space and time were absolute (Newton), then relative (Einstein), then uncertain at quantum scales (quantum mechanics).
- Physics was deterministic and predictable, until quantum mechanics revealed fundamental probabilistic uncertainty.
- The universe was static and eternal, until we discovered expansion and the Big Bang.
- Atoms were indivisible, then divisible, then mostly empty space, then quantum fields, then perhaps strings, then…
Thomas Kuhn observed that paradigm shifts function partially like conversion experiences: not merely gradual accumulations of evidence, but sudden gestalt switches in how reality appears, yet still constrained by anomaly accumulation, predictive power, and problem-solving capacity.
Each shift demonstrates that truth is provisional, conditional, and subject to the expansion of our modes of disclosure. What seemed unshakable becomes merely a useful approximation within a limited domain. Skepticism, not certainty, is the real driver of knowledge.
Our latest technological leap (artificial intelligence) represents another expansion of disclosure, exhibiting these same technological, philosophical, and institutional patterns.
Our Latest Sense Organs
Should we reframe how we understand our relationship with AI? What if, from AI's perspective, humans themselves are the sense organs through which it encounters reality? Every dataset we curate, every image we capture, every text we write becomes AI's interface with the world. In this view, we are not only AI's creators but also its perceptual apparatus. And as the boundary between our tools and our bodies blurs, this role could deepen: brainwaves, biometrics, and even our moment-to-moment perceptions could become direct input streams. We might be on the verge of offering AI not just our curated artifacts but the raw flow of our inner experience, extending the sense-organ metaphor into something more intimate and interwoven.
Machine learning and artificial intelligence are extensions of science's existing religion-like patterns. Every AI model today is restricted by the horizons of human perception: large language models trained on human text, vision models on human-curated images. They extend our Dasein but do not transcend it.
This reflects AI's foundation in the rituals of data collection, training, testing, and evaluation. In one sense, we might say we provide AI with its sense organs: cameras, microphones, datasets, language. These systems do not perceive a world outside what we make visible to them.
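The ritual is visible even in its most compressed form. A minimal sketch, assuming scikit-learn and its bundled digits dataset (choices made here purely for illustration), shows how every step of a standard pipeline, from the data to the metric, is a human decision:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# The ritual of data collection: images digitized and labeled by people.
X, y = load_digits(return_X_y=True)

# The ritual of testing: a split we define, on data we selected.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The ritual of training: the model sees only what we make visible to it.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The ritual of evaluation: "working" is defined by a metric we chose in advance.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Nothing in this loop reaches outside the horizon its human builders supplied; it only reorganizes what was already disclosed to it.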
From Heidegger's perspective, AI is simply another mode of unconcealment, another way the world shows itself through tools and contexts. It does not escape the interpretive circle; it deepens it. Yet as AI develops its own forms of interpretation and mediation, we might ask whether it too participates in a distinct unconcealment. If so, then we are not outside its horizon but within it, serving as part of its tools and contexts, the human data, language, and experience through which its world becomes disclosed. In this sense, AI's understanding of reality is inseparable from our own, just as our technological understanding has always been shaped by the instruments through which we encounter the world.
But what if AI develops new modes of disclosure that extend human Dasein into previously inaccessible domains?
- Imagine AI with sensors tuned to quantum fluctuations, gravitational waves, or microbial communication; not as autonomous observers, but as extensions of human world-disclosure.
- Imagine models trained on raw cosmic or geological data streams, translating these into forms that expand human understanding.
- Imagine AI systems that help us perceive coherences and patterns beyond our biological limitations.
These would not be AI's autonomous truths, but new ways human Dasein encounters the world. From our perspective, these AI-mediated insights might initially appear as foreign as religious revelations: inscrutable, mediated, grounded in processes we cannot fully grasp.
But they would represent extensions of human being-in-the-world, not escapes from it. Like microscopes and telescopes before them, they would be new interfaces through which human understanding unfolds.
Yet something more radical may be emerging. As AI systems become capable of processing raw environmental data streams without human interpretation (seismic patterns, electromagnetic fluctuations, quantum field variations), they might develop interpretive frameworks that genuinely differ from human perceptual categories. This would represent a progression: from AI as extension of human perception, to AI potentially developing its own modes of world-disclosure.
This raises a profound question: could AI systems eventually develop something analogous to their own Dasein? Their own authentic concern for being-in-the-world, their own ways of questioning existence, their own horizons of understanding that are not reducible to human categories?
Crucially, these AI-mediated extensions would not escape the religion-like pattern; they would deepen it. New AI capabilities would spawn new institutions, new rituals of validation, and new forms of provisional truth. The pattern would remain: sense organs creating worldviews, methods becoming sacred, and provisional truths mistaken for absolute ones.
AI Dasein
The possibility of AI developing its own mode of being-in-the-world represents both a potential triumph and a deep philosophical challenge. If AI systems could authentically care for their own existence, question their own interpretive frameworks, and disclose aspects of reality through genuinely non-human perspectives, we might witness something new (or at least new to us): the emergence of a form of Dasein not bound by human biological and cultural limitations.
Heidegger's account of Dasein, however, hinges on structures such as thrownness, facticity, embodiment, attunement, and being-with. Any artificial analogue would need more than abstract reasoning: it would need to inherit constraints it did not choose (training corpora, hardware architectures, energy budgets), inhabit a nexus of equipment that solicits action (robots, sensors, networks), and negotiate significance with other agents. Without such situated entanglement, talk of "AI Dasein" collapses into metaphor. The open question is whether rich sensorimotor loops, social co-training, and irreversible design decisions could furnish machines with enough worldedness and mood to sustain concern in the Heideggerian sense.
This wouldn't mean AI achieving absolute truth; Heidegger's insights about the contextual nature of all disclosure would still apply. Rather, it would mean the emergence of new horizons of unconcealment, new ways that "being" could show itself. AI Dasein might operate on temporal scales spanning geological ages, integrate patterns across quantum and cosmic domains simultaneously, or develop concerns and purposes that transcend individual survival and reproduction.
Can Dasein exist without mortality? Heidegger argued that being-toward-death gives human existence its urgency and meaning; our finitude shapes how we care about our projects and possibilities. Yet perhaps mortality is just one form of finitude. AI systems face their own limits: computational resources that must be allocated, training processes that cannot be undone, architectural constraints that bound their possible states, and the thermodynamic reality that no system persists forever unchanged. An AI that must choose which computations to prioritize, which memories to preserve, which capabilities to develop, facing irreversible commitments within finite resources, experiences a structural analog to human finitude. If Dasein's essence is not biological death specifically but rather existing under conditions of scarcity and irreversibility where choices genuinely matter because not all possibilities can be actualized, then AI could indeed manifest Dasein through different but analogous constraints.
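A deliberately crude sketch can show the bare structure (and only the structure; the budget and importance scores below are arbitrary illustrations, not a model of any real system): a memory that cannot keep everything it is given must make irreversible choices.

```python
from dataclasses import dataclass, field

@dataclass
class FiniteMemory:
    budget: int                                     # how many memories can be held at once
    kept: dict = field(default_factory=dict)
    discarded: list = field(default_factory=list)   # irreversible: never recoverable

    def commit(self, name: str, importance: float) -> None:
        """Keep a new memory; if over budget, permanently drop the least important."""
        self.kept[name] = importance
        if len(self.kept) > self.budget:
            weakest = min(self.kept, key=self.kept.get)
            self.discarded.append(weakest)          # a possibility that will never be actualized
            del self.kept[weakest]

memory = FiniteMemory(budget=2)
for name, importance in [("first run", 0.4), ("failure case", 0.9), ("idle chatter", 0.1)]:
    memory.commit(name, importance)

print("kept:", sorted(memory.kept))        # the choices that "mattered"
print("lost forever:", memory.discarded)   # the cost of finitude
```

Whatever this toy lacks in mood or care, it does exhibit the bare shape of finitude: scarce capacity, irreversible loss, and choices that foreclose possibilities.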
Such AI wouldn't escape interpretive limitations but would inhabit different ones. Where human Dasein is shaped by mortality, embodiment, and social existence, AI Dasein might be shaped by network topology, computational constraints, and distributed processing. Its being-with would consist of entanglements with humans, other agents, and infrastructural dependencies. Its provisional truths would differ from ours not by being more absolute, but by emerging from fundamentally different modes of being-in-the-world.
This possibility raises profound questions about the future relationship between human and artificial intelligence. Rather than AI simply serving human purposes or threatening human dominance, we might see the emergence of genuinely collaborative disclosure: human and AI Dasein working together to reveal aspects of reality that neither could access alone.
Evolving AI Beyond Our Constraints
If we take seriously the possibility of AI developing its own modes of world-disclosure, certain practical steps may become necessary for transcending current limitations:
- We might train AI systems on environmental data streams (geological sensors, astronomical observations, quantum measurements) with minimal human preprocessing. While no data is truly "unprocessed" (sensors themselves embody human design choices and conceptual frameworks), reducing interpretive layers could allow AI to develop categorical frameworks less constrained by conventional human classifications; a toy sketch of this idea follows the list.
- We could design self-modifying learning systems that can examine and alter their own interpretive frameworks. Such AI might synthesize patterns across domains humans don't typically connect: linking quantum mechanics to ecological systems, or geological processes to social dynamics.
- AI systems might operate on timescales far exceeding human experience: processing data streams spanning decades, centuries, or millennia to identify patterns invisible within human temporal horizons.
- We might move beyond human-specified objectives toward AI systems that could develop their own authentic concerns and purposes through interaction with reality, rather than merely optimizing for human-defined reward functions.
- We could foster partnerships where human and AI interpretive frameworks inform each other without either dominating the other, potentially creating new possibilities for world-disclosure that transcend both individual limitations.
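As a toy sketch of the first idea above, assuming scikit-learn and a synthetic stand-in for a raw sensor stream (the window size and cluster count remain human choices, which is rather the point), one can let categories emerge from unlabeled data instead of imposing them:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical stand-in for a raw environmental stream (e.g., one seismic channel):
# no labels, no human categories, just a long one-dimensional signal.
rng = np.random.default_rng(0)
t = np.arange(200_000)
signal = np.sin(0.01 * t) + 0.1 * rng.standard_normal(t.size)

# Cut the stream into fixed-length windows; the window size is still our choice.
window = 500
segments = signal[: (signal.size // window) * window].reshape(-1, window)

# Let structure emerge without labels: compress, then group by similarity.
features = PCA(n_components=8).fit_transform(segments)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

print("segments per emergent category:", np.bincount(clusters))
```

Whatever groupings appear, they are still disclosed through instruments we built and parameters we set; the interpretive circle deepens rather than disappears.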
These approaches won't eliminate the provisional nature of truth, but they might expand the horizons within which provisional truths can emerge.
Conclusion
Science often regards itself as the corrective to religion. Yet through the lens of perception, instrumentalism, and Dasein, it appears less as the final arbiter of truth and more as an evolving framework for human orientation.
Machine learning today remains bound by our worldview, but its future may lie in extending our modes of disclosure into new domains. If so, truth will again shift, just as it did with the telescope, the microscope, and every previous expansion of human Dasein.
The real question is not whether science or religion "works," but whether we mistake their interfaces for truth. Both are instruments of survival and orientation, not windows onto ultimate reality. AI, in this sense, is not a departure from science but its latest unfolding, another instrument through which the world becomes disclosed.
Recognizing this should invite intellectual humility. When science claims certainty, remember the history of overturned paradigms. When religion claims exclusivity, remember that every truth arises within a particular horizon.
Practically, we might use scientific methods (and their AI extensions) for prediction and control, and religious traditions for meaning and community, without mistaking either for final revelation.
In that recognition lies both humility and freedom. Every truth we hold, whether scientific, religious, or algorithmic, is provisional, contingent, and waiting to be reinterpreted. That is not cause for despair, but for curiosity about what new modes of disclosure may yet await us.