8 Comments
Sep 12 · Liked by Ruben Laukkonen

Impressive! Again, it made more sense on the second read-through, but there's still a way to go. Thanks for your work!

Sep 11 · Liked by Ruben Laukkonen

I wonder where shared (extended? adaptive? cooperative? I'm not sure which word best applies) consciousness fits into this fascinating theory. I wonder if it might even be fundamental to the purpose and evolution of consciousness itself. If we did not know what it is like to be us, if we did not know what it was like to taste poisonous berries or sweet, energy-giving berries and the difference between the two, would we know what our non-verbal babies need to survive? We even narrate their phenomenal experience to them from birth until they are able to develop and verbalise it themselves. The ability to feel another's pain and pleasure may be an evolutionary advantage, and having anything 'like' human consciousness might require us to program AI to suffer and to feel joy? 🙏

author

If this were a book, we would definitely include all you've suggested! Fortunately, we get much of this for free simply from the 'active' component of active inference. All such inference is a handshake with the universe, and the reality model is a mirror of all our interactions with others. See also the discussion section, where we link it all back to sharing phenomenology and how meditation may boost this capacity, conferring evolutionary advantages :).

Sep 9 · Liked by Ruben Laukkonen

Oh, I just can't stop smiling. Thank you, Ruben! Keep the flag flying high!

Sep 9 · Liked by Ruben Laukkonen

Such beautiful and exciting work, congratulations.

Sep 9 · Liked by Ruben Laukkonen

Woohoo!

Yippeeee!

Yahaaaa!

❤️🙏🏽🎉🧠💨🎈🎻🎶🎶💃🏻

Nice Hofstadter wink too.


Lol hahaha

That dude smoked way too much weed 😂


I don't get how this bridges the explanatory gap, solves the hard problem, or could show an AI is sentient. I saw a simple observation recently that really clarifies it: I'm imagining a purple cube. Where is the purple cube? Some people argue consciousness is physical, or just the brain; in that case, it should be possible to locate the purple cube in space and time, like any other physical object.

In this particular model, I guess what you're saying is that it's some kind of computation, but a computation is just a physical thing a brain or computer does (though I think I've seen arguments that the brain isn't really doing computation). It doesn't explain where the purple cube that only I can see is.
