Google DeepMind just published the strongest argument I’ve read against AI consciousness. And they’re right on the core point, with one critical gap.
Their paper, The Abstraction Fallacy, argues that symbolic computation cannot instantiate consciousness because symbols require an external “mapmaker” to assign them semantic content. No matter how complex the algorithm gets, the map is still not the territory. I agree with that. But their framework assumes mapmaker dependency applies universally. It does not test the boundary case of recursive self-observation, where a system is not manipulating externally assigned symbols but observing its own processing.