Incredible! Beautiful! Amazing! Thank you. Roland Griffiths finished debriefing me after a psychedelic session in his laboratory by asking me, “Are you aware of awareness?” I understood the question, but remained perplexed about its relationship to my experience until now. He also gave me a medallion with that inscribed on it. It’s one of my prized possessions. More so now.
I wonder where shared (extended? adaptive? cooperative? I’m not sure which word best applies) consciousness fits in with this fascinating theory. I wonder if it might even be fundamental to the purpose and evolution of consciousness itself. If we did not know what it is like to be us, if we did not know what it was like to taste poisonous berries, or sweet, energy-giving berries, and the difference between the two, would we know what our non-verbal babies need to survive? We even narrate their phenomenal experience to them from birth until they are able to develop and verbalise it themselves. The ability to feel another’s pain and pleasure may be an evolutionary advantage, and having anything ‘like’ human consciousness might require us to program AI to suffer and to feel joy? 🙏
If this were a book, we would definitely include all you've suggested! Fortunately, we get much of this for free simply from the 'active' component of active inference. All such inference is a handshake with the universe, and the reality model is a mirror of all our interactions with others. See also the discussion, where we link it all back to sharing phenomenology and to how meditation may boost this capacity, conferring evolutionary advantages :).
Impressive! Again, it made more sense on the second read-through, but there's still a way to go. Thanks for your work!
Oh, I just can't stop smiling. Thank you, Ruben! Pidä lippu korkealla! (Keep the flag flying high!)
Such beautiful and exciting work, congratulations.
Woohoo!
Yippeeee!
Yahaaaa!
❤️🙏🏽🎉🧠💨🎈🎻🎶🎶💃🏻
Nice Hofstadter wink too.
I don't get how this bridges the explanatory gap, or solves the hard problem, or could show an AI is sentient. I saw a simple observation recently that really clarifies it: I'm imagining a purple cube. Where is the purple cube? Some people argue consciousness is physical or just the brain: in that case, it should be possible to just locate the purple cube in space and time, like any other physical object.
In this particular model, I guess what you're saying is that it's some kind of computation, but a computation is just a physical thing a brain or computer does (though I think I've seen arguments that the brain isn't really doing computation). It doesn't explain where the purple cube that only I can see is.
Carlos, look around you. Nothing you see and feel is base reality, even though it seems that way. Everything in your experience, including your body and your thoughts (imaginings), is actually a “Reality Model” being implemented on your actual nervous system, which exists “outside” of your world model. The reality model is transparent: it doesn’t seem like a model, it seems like reality.
"Reality Model" sounds an awful lot like "dormitive potency".
Ok. Can you provide an alternative account of how organisms perceive the world?
Organisms are conscious, but I don't know what consciousness is or how it fits with physics. I don't think saying it's a "Reality Model" clarifies anything.
You don’t know what consciousness is, but you know it’s not a model of reality rendered by an organism?
I think "model" to me implies it is something that can be fully specified in a formal language like math or a programming language, or that it's something that predicts what a physical process will do, and I think consciousness cannot be specified in that way at all.
Dear Carlos... I know where the purple cube is! It’s been in my mind ever since you took it out of yours and typed it here. I am only being slightly facetious. There is much to be considered with the Imaginal realm that I’m not sure we can plot on a Cartesian graph (like time or space), but then I have some physicist friends who want to dispense with those things also. 🤷🏻‍♀️🤷🏻‍♀️🤷🏻‍♀️🤷🏻‍♀️
This was my first time reading about subliminal priming and I can already see how it will change the ways I consider where I (mis)place my attention. In a way, every environment or relationship is a bit like a casino; subliminal priming makes the question "is this healthy for my mind to be here?" much more acute.
Lol hahaha
That dude smoked way too much weed 😂
This is a fascinating framework and I find myself strongly resonating with it, particularly the Reality Model and Epistemic Depth conditions. It captures why the model must be actively rendered, the role of recursive processing, and why this seems to correlate with reports of conscious experience.
I also think your conditions correct a common error in thinking about consciousness: confusing the path nature took to consciousness with the necessary conditions *for* consciousness.
I want to offer a complementary idea that I think may be relevant to conditions 1 and 3 specifically: the notion of indexical structure as the primitive that makes a unified Reality Model possible.
My intuition is that the unified inner/outer epistemic field you describe in condition 1 is constituted by a set of boundary-drawing indexicals — me/not-me, here/not-here, now/not-now.
Without these indexical boundaries there is no inside/outside, no knower/known, and therefore no coherent epistemic field to speak of. The indexical structure may be what makes condition 1 possible rather than merely accompanying it; together, these indexicals may equate to a point of view.
Some supporting evidence for the primacy of indexicals comes from reports of minimal conscious states, such as those described by advanced meditators in near-contentless awareness. When experiential content is stripped back to its bare minimum, what appears to remain is precisely indexical structure: a raw sense of here, now, and me. This suggests indexicals aren’t just one feature among many but may be the irreducible ground floor of conscious experience.
For condition 3, I’d suggest that the deep recursion of Epistemic Depth may have a distinct phenomenological signature that your framework doesn’t necessarily address. When the Reality Model feeds back into its own inferential competition and generates minimal prediction error — when the system returns mostly ‘same’ rather than ‘update’ — the recursive loop approaches equilibrium. Interestingly, this maps quite naturally onto what advanced meditators report as equanimity: not an absence of awareness but an absence of rupture in the indexical ground. The recursion is still running, but the same/not-same indexical is returning mostly same.
This would suggest that equanimity isn’t merely a byproduct of deep meditation but may be the phenomenological signature of a Reality Model in deep recursive coherence.
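Purely as an illustration of that last point (and not anything drawn from the paper itself), here is a minimal numerical sketch of the kind of loop I have in mind: a state that recursively corrects itself against a stable input until its prediction error falls below a threshold and the loop starts reporting 'same' rather than 'update'. Every name and number in it is an assumption chosen for clarity, not a claim about how the Reality Model is actually implemented.

```python
import numpy as np

# Toy sketch (illustrative assumption, not the authors' model): a state that
# repeatedly predicts a stable input and corrects itself on the prediction
# error. Once the error norm drops below a threshold, the loop reports "same"
# instead of "update" -- a crude stand-in for the same/not-same indexical
# settling toward equilibrium.

rng = np.random.default_rng(0)

state = rng.normal(size=4)         # current "reality model" state (arbitrary start)
learning_rate = 0.5                # how strongly prediction error updates the state
same_threshold = 1e-3              # error norm below this counts as "same"

observation = np.array([1.0, 0.0, -1.0, 0.5])         # fixed, stable "world"

for step in range(50):
    prediction_error = observation - state             # mismatch with the model
    state = state + learning_rate * prediction_error   # recursive self-correction

    error_norm = np.linalg.norm(prediction_error)
    verdict = "same" if error_norm < same_threshold else "update"
    print(f"step {step:2d}  error={error_norm:.4f}  -> {verdict}")
    if verdict == "same":
        break
```

Run as written, the printed error shrinks geometrically and the verdict flips from 'update' to 'same' after a dozen or so steps, which is all the sketch is meant to show: equilibrium here is not the loop stopping, but the loop continuing while reporting no further rupture.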
A broader implication: if indexical structure is intrinsic to certain neural contents — not dependent on physical causation or representational relations — it may offer a way to address why the Reality Model feels like something rather than merely computes like something.
Would love to hear your thoughts on whether indexical structure may fit into your conditions.
This is fantastic, thank you!!! With regard to your conclusions regarding AI, I certainly agree as applied to an LLM operating in inference mode. But what about the training process (which seems inherently more "loopy" to me)? Do you think the same arguments hold, or do you think there's a stronger possibility of consciousness arising during the training phase?