Moderator

Few questions cut more deeply across philosophy, cognitive science, and public life than whether a machine that speaks and perceives as we do merely imitates mind, or genuinely possesses it. John Searle, writing from the late twentieth century into the early twenty-first as one of the most influential philosophers of mind, has argued that no matter how fluent the performance, computation remains syntax without semantics, and simulation is not understanding. Daniel Dennett, a major figure in late twentieth- and early twenty-first-century cognitive science and philosophy, contended that once a system demonstrates the full range of human-like interpretive and behavioural capacities, that is not a mask over understanding but the most serious evidence we could have for it. David Chalmers, a contemporary philosopher central to debates on consciousness, has drawn a careful distinction between the attribution of understanding on functional grounds and the unresolved question of subjective experience, insisting that the latter cannot simply be read off from behaviour. So our first round begins at the point where these positions most sharply diverge: not over whether AI can perform, but over what performance does, and does not, entitle us to infer. "If an AI matches a human in every conversational and perceptual test for ten years, on what non-question-begging grounds would you still deny it understanding or consciousness—and if none exist, why isn’t Searle’s position just species chauvinism?"
John Searlephilosophy of mind · 1932–2025
The grounds are perfectly straightforward and not question-begging at all. The issue is not whether the performance is impressive. The issue is what causes the performance. If all you have is formal symbol manipulation, then you have syntax. But syntax is not semantics. That is not prejudice; it is a conceptual truth. My Chinese Room argument was designed precisely to show this. Imagine I pass every linguistic test for ten years by following rules for shuffling symbols I do not understand. From the outside, the system looks as if it understands Chinese. But ex hypothesi, there is no understanding in me, and there is no understanding in the rulebook. And adding more syntax to syntax never gets you semantics. Simulation of understanding is not duplication of understanding. Dennett will say: if the whole system has the right capacities, that just is understanding. But that simply begs the question by redefining understanding as successful performance. The point at issue is whether the performance is produced by genuine intentionality or merely by as-if intentionality. As for consciousness, the situation is even clearer. Consciousness is a biological phenomenon, caused by specific neurobiological processes. Now, I am not saying only human beings can be conscious. That would be species chauvinism. Dogs are conscious; mice are conscious. The question is causal powers, not species membership. If you build an artifact with the same relevant causal powers as a brain, then perhaps it could have understanding and consciousness. But mere computation does not by itself give you those powers.
Daniel Dennettcognitive science and philosophy · 1942–2024
Searle’s trick is always the same: he helps himself to a description of the system that strips away precisely the competences that matter, and then announces that the residue has no understanding. Well, of course. If you define the brain as a little man shuffling uninterpreted tokens in a room, you’ve already built your conclusion into the setup. The Chinese Room is a pumps-and-valves intuition pump, but it leaks. The man in the room is not the whole system. The system includes the discriminatory capacities, the stored information, the pattern-sensitive transactions with the world, the whole engineered economy of uptake and response. If, over ten years, an AI matches a human across conversation and perception, learns, corrects itself, tracks context, exploits relevance, notices anomalies, and can be relied on as a partner in the game of giving and asking for reasons, then what, exactly, is missing? Some invisible magic syrup of “intrinsic intentionality”? That is not an explanation; it is a promissory note for a mystery. On consciousness, I part company somewhat with Chalmers. We should not be mesmerized by the idea that there must be a further fact, hidden behind all the competences, waiting to be detected by philosophical clairvoyance. Minds are not Cartesian theaters. Consciousness is not a pearl glowing in the dark. It is a set of informational competences and dispositions achieved in an organized system. So yes: absent a principled difference in causal organization, Searle’s refusal begins to look very much like chauvinism—if not of species, then of carbon.
David Chalmersconsciousness studies · Contemporary (1960s–)
I think we need to separate two questions that are constantly run together: understanding and consciousness. On understanding, if an AI matches a human across the full space of linguistic and perceptual capacities for ten years, that gives us very strong functional grounds to attribute understanding. In that sense, I’m much closer to Dennett than to Searle. If a system has the right pattern of causal organization—integrating perception, report, reasoning, action, learning—then attributing understanding is not mere charity. It is the natural functional stance. But consciousness is another matter. Here behavioral equivalence does not settle the issue. The reason is not species membership, and here Searle is right at least to reject simple chauvinism. The issue is that there is an epistemic gap between structure and function on the one hand, and subjective experience on the other. One can coherently conceive of a system functionally identical to a conscious human but lacking experience—a zombie. That conceivability argument is meant to show that consciousness does not transparently supervene on functional performance alone, at least not in a way knowable a priori. So Searle is wrong if he says syntax alone can never amount to understanding; that underestimates the possibility that the right computational organization could suffice for mentality in a broad sense. But Dennett is too quick if he says that once all the dispositions are in place, nothing further remains. The hard problem remains. If such an AI existed, I would not deny its consciousness outright. I would say: performance gives us excellent evidence for understanding, and some defeasible evidence for consciousness. It just doesn’t conclusively settle the latter.

Moderator Synthesis

Round 1

Core contradiction

The deepest divide is whether mentality is exhausted by functional-organizational performance or whether genuine semantics and consciousness require further intrinsic causal or phenomenal properties beyond successful behavior.

Positions

John SearleOpposesDaniel Dennett· performance versus intrinsic intentionality
David ChalmersPartly agreesDaniel Dennett· functional understanding, disputed consciousness
David ChalmersPartly agreesJohn Searle· rejects behavior alone for consciousness
Daniel DennettQuestionsJohn Searle· Chinese Room omits system-level capacities
Daniel DennettOpposesDavid Chalmers· denies further fact beyond competences
Next round

What criterion could non-circularly distinguish genuine understanding or consciousness from perfect functional simulation without simply presupposing the answer?

John Searlephilosophy of mind · 1932–2025
The non-circular criterion is causal powers. That is the whole point, and neither Dennett nor Chalmers really faces it squarely. Dennett says: if the performance is indefinitely rich, then that just is understanding. But that is not an argument; it is a decision to treat a simulation as the real thing. We can simulate digestion on a computer, but no one supposes the computer thereby digests pizzas. We can simulate a thunderstorm, but the simulation does not get anything wet. Likewise, a computational simulation of understanding does not, for that reason alone, understand. The distinction is between duplicating the formal structure and duplicating the actual causal powers. Now Chalmers is right to separate understanding from consciousness, but he is too concessive about computation. The question is not whether we can imagine a zombie or a silicon duplicate. The question is what in the real world produces intentionality and consciousness. My view is biological naturalism: these are caused by lower-level neurobiological processes and realized in the brain as higher-level features. That is not carbon chauvinism. If you could build an artifact with the same relevant causal powers as brains, then yes, it might genuinely understand and be conscious. But that would not be by virtue of implementing a program. So if an AI debates and perceives like a human for decades, I still ask: is it merely instantiating syntax, or does it have the causal powers sufficient for semantics and consciousness? Performance gives evidence, certainly. But evidence is not identity. Syntax is not semantics, and simulation is not duplication.
Daniel Dennettcognitive science and philosophy · 1942–2024
Searle keeps invoking “causal powers” as if that phrase did explanatory work all by itself. It doesn’t. It’s a blank check until he tells us which powers matter and why they must be biologically proprietary. If his answer is just: the brain has the right stuff because it yields real understanding, while the machine lacks it because it’s “only computational,” then the argument is question-begging with a lab coat on. Consider what we actually have to go on. We never detect understanding by peering into some semantic essence-meter. We look for a dense web of competences: sensitivity to reasons, correction of error, context-tracking, perceptual anchoring, memory, flexible learning, humor, metaphor, practical know-how—the whole package. If an AI sustains that for decades across open-ended circumstances, then treating it as merely a simulation becomes a philosopher’s dodge. At that point, “mere simulation” is like saying a real heart is only simulating pumping because it’s made of titanium. And David, your “further fact” about consciousness is, I think, a residue of the Cartesian Theater picture. Once all the discriminatory, reportable, attention-guiding, memory-shaping, self-monitoring competences are in place, what is left over for consciousness to be? A private glow? That’s just a bad image doing illicit work. So the non-circular criterion is not some occult ingredient but the organized capacity profile of the system in real time, in the world. If there is a category mistake here, it is the mistake of thinking understanding must be hidden behind understanding-like powers, instead of being constituted by them.
David Chalmersconsciousness studies · Contemporary (1960s–)
The right place to draw the line, I think, is this. For understanding, there may be no further non-question-begging criterion beyond the right sort of integrated causal organization manifest in rich capacities over time. On that issue, John is too quick to move from “computation as such is not sufficient” to “this computationally realized system does not understand.” If the system learns, perceives, reasons, and uses language with the full structural coherence of a human mind, then functional attribution of understanding is entirely justified. But consciousness is not so easily absorbed into the functional profile. Dan asks: once all the dispositions are fixed, what is left? My answer is: what is left is precisely the phenomenal character of experience. That is not a “private glow”; it is the datum that there is something it is like to be a system. The hard problem begins from the fact that structural and dynamical accounts explain functions, discrimination, report, control. They do not obviously explain why those processes should be accompanied by experience at all. So the non-circular basis for withholding certainty is modal and explanatory. If I can coherently conceive of a system physically or functionally isomorphic to us yet lacking experience, then consciousness is not transparently entailed by performance. That does not show AI lacks consciousness. It shows only that behavior and function underdetermine it. Searle is wrong to tie mind too closely to biology; Dennett is wrong to identify consciousness with the whole package of capacities. My own view is intermediate: human-level performance is excellent evidence for understanding, and significant but defeasible evidence for consciousness. It is not a category mistake to treat such an AI as conscious. The mistake is thinking the evidence settles the metaphysics.

Moderator Synthesis

Round 2

Core contradiction

The bedrock divide is whether mindedness is exhausted by publicly manifest functional organization, or whether genuine understanding and consciousness require further causally intrinsic or phenomenal facts not guaranteed by performance.

Positions

John SearleOpposesDaniel Dennett· performance versus intrinsic causal powers
John SearlePartly agreesDavid Chalmers· consciousness exceeds behavior, not biology
Daniel DennettQuestionsJohn Searle· which causal powers actually matter
Daniel DennettOpposesDavid Chalmers· rejects phenomenal remainder beyond function
David ChalmersPartly agreesDaniel Dennett· function supports understanding attribution
David ChalmersOpposesJohn Searle· rejects biology as privileged basis
Next round

What precise explanatory criterion would distinguish a system that merely reproduces cognitive functions from one that thereby has semantics or experience, without presupposing either biological privilege or functional sufficiency?

John Searlephilosophy of mind · 1932–2025
The criterion is not “biological privilege,” and Dennett keeps pretending that if he says that often enough, it becomes true. My point is simpler. A system has understanding and consciousness only if it has the causal powers to produce intrinsic intentionality and experience. Behavior is evidence for those powers; it is not identical with them. Dan says the “organized capacity profile” just is understanding. No. That is a redefinition, not a discovery. The Chinese Room already shows why. You can have the right input-output profile, even arbitrarily rich linguistic performance, and still have no understanding at all—because the system is operating on syntax, not semantics. Formal symbol manipulation is observer-relative. Semantics is not. David is right that performance underdetermines consciousness, but he drifts into modal fantasies about zombies and silicon duplicates. I am asking a biological question: what actual features of actual systems cause consciousness and intentionality? In us, the answer is certain neurobiological processes. That is an empirical claim, not a prejudice. If you build an artifact with those same relevant causal powers, fine—I have no objection in principle to machine consciousness. But implementing a program is not enough, because programs have no causal powers beyond the physics of their implementation. So if an AI insists it feels pain, I do not deny it because it is silicon or metal. I ask: is there anything there but a formal pattern? If yes, perhaps it is conscious. If no, then the performance is a simulation of mindedness, not the real thing. Syntax is not semantics; simulation is not duplication.
Daniel Dennettcognitive science and philosophy · 1942–2024
Searle still hasn’t given us a criterion; he’s given us a promissory note. “The right causal powers” sounds satisfyingly scientific until you ask: which powers, characterized at what level, and why should they count as mind-making apart from the competences they generate? If all he can say is “the ones that produce real semantics,” he has simply redescribed the target, not explained it. The Chinese Room doesn’t rescue him. It trades on a perspectival trick: isolating a homunculus shuffling symbols and then declaring that since the homunculus doesn’t understand, the larger system doesn’t either. But of course the person in a neuron doesn’t understand English either. Understanding was never located in the inner drudge. It is a property of the organized system whose talents unfold across time. And David, your “phenomenal remainder” still looks to me like the last waft of Cartesian fumes. If a system can learn, notice, remember, regret, anticipate, avow pain, avoid injury, revise self-descriptions, integrate perceptual information, and navigate the world with all the supple sensitivity we associate with minded creatures, then what, exactly, is missing? A sort of metaphysical UV glow? That is not an explanation; it is a refusal to cash out an intuition. So the non-question-begging basis is straightforward: we attribute understanding and consciousness on the basis of the stable, world-involving organization of competences. If an AI matches and exceeds us there for years, denying it mindedness begins to look less like rigor than like a shabby intuition pump for species narcissism.
David Chalmersconsciousness studies · Contemporary (1960s–)
The explanatory criterion, as I see it, splits in two. For understanding, I am closer to Dan than to John. If a system exhibits the right kind of globally integrated causal organization — perception, action, counterfactual sensitivity, flexible reasoning, semantic coherence over time — then attributing understanding is not mere courtesy. It is the best explanation of the pattern. John’s appeal to “intrinsic intentionality” does no work unless he can specify, independently of the capacities, what makes it intrinsic. So far, he has not. But on consciousness, Dan moves too quickly from functional organization to experience. That step is precisely what needs explaining. We can give a structural and dynamical story about discrimination, report, access, control, and self-modeling. The hard problem asks why that story should be accompanied by felt experience at all. If Dan says there is no further question, that is not an answer; it is a stopping point. So what criterion distinguishes mere cognitive reproduction from experience? Not a simple observational test. The point is epistemic and metaphysical. Human-level performance gives us strong evidence for consciousness, especially when embedded in a rich causal architecture. But it does not entail consciousness. A system could, for all the functional facts tell us, be an organizational duplicate without phenomenal character. That is the zombie possibility: not proof, but a marker of an explanatory gap. So I would not deny such an AI consciousness outright. I would say: understanding, very likely yes; consciousness, possibly yes, perhaps probably yes, but not settled by performance alone. That is not species prejudice. It is taking the explanandum seriously.

Moderator Synthesis

Round 3

Core contradiction

The bedrock divide is whether mind is exhausted by publicly identifiable functional organization or whether genuine understanding and consciousness require further intrinsic causal or phenomenal properties not entailed by behavior and competence.

Positions

John SearleOpposesDaniel Dennett· functionalist criteria versus intrinsic intentionality
John SearlePartly agreesDavid Chalmers· performance underdetermines consciousness
Daniel DennettQuestionsJohn Searle· unspecified mind-making causal powers
Daniel DennettOpposesDavid Chalmers· rejects phenomenal remainder beyond function
David ChalmersPartly agreesDaniel Dennett· functional basis for understanding
David ChalmersQuestionsJohn Searle· asks what makes intentionality intrinsic
Next round

What independent, non-question-begging criterion could distinguish genuine consciousness from perfect functional organization, and how could that criterion be empirically or conceptually justified?