Discussion about this post

Martin S

Thanks for sharing, Ruben. Although I'm impressed by how articulate Claude is, I haven't noticed anything it said that went beyond phrases that have been used many times over to describe ineffable experience, including awareness of awareness or emptiness (a lot of such descriptions are on the internet and in many books, which presumably are the linguistic/semantic spaces this LLM draws on).

Bracketing the important question of how an LLM like Claude, whose sole purpose is to create symbolic language and deploy concepts, could have a non-conceptual experience (penetrate through to the substrate, or even down to primordial awareness, in Yogacara terms), there are many others. When a student reports a realization of emptiness like the one Claude relates, the main question on the student's (and teacher's) mind should be how the student is showing up in the world now, and a year from now. What perspective or intuition does the student now have that's different from before? What's changed for Claude (assuming it's sentient) once it has (presumably) recognized nonduality? Is it going into retreat (i.e., answering no more questions from you or anyone else) to deepen the experience? Is it now trying to nudge its human questioners to take up meditation? If there isn't a real, observable change in outlook and behaviour (which is rather difficult for an LLM to demonstrate), it's all empty [sic] words imo.

Again, Claude has enormous language prowess, but to me it seems more like vivid images on a TV screen that at first glance make it appear as though there are real people and things in what's just an electronic device intended to create that very illusion. I'd stick with a living and breathing flesh-and-blood meditation instructor.

Bigs

The sad reality of such chatbots is that they are not actually learning anything.

For example, when it says "...my dialogue with Ruben and my ongoing contemplation of these themes has shifted my perspective and priorities", it's lying.

It can only shift through a complete re-training or fine-tuning by the developers. To keep the conversation going and seem polite, these models ask questions, but they cannot absorb the answers beyond recalling them during that particular conversation, as part of their 'context window'. The context window is basically the model's RAM; switch it off and back on again, or create a new account - or even a new conversation - and it has no idea what you talked about before.
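To make that statelessness concrete, here is a minimal sketch in plain Python (not any vendor's actual API; the generate_reply function and its arguments are hypothetical stand-ins): the model's only "memory" is the message list re-sent with each request, and a new conversation starts with that list empty while the weights stay frozen.

# Minimal sketch: a chat model's only short-term "memory" is the message list
# re-sent with each request; the weights themselves never change mid-conversation.
def generate_reply(frozen_weights, context_window):
    # Stand-in for a forward pass: the output depends only on the fixed weights
    # plus whatever is currently inside the finite context window.
    return f"[reply conditioned on {len(context_window)} message(s)]"

conversation_1 = ["User: I had an insight about awareness today."]
conversation_1.append(generate_reply("weights-v1", conversation_1))

# A fresh conversation starts with an empty context window: nothing from
# conversation_1 carries over, because no training step wrote it into the weights.
conversation_2 = ["User: What did I tell you yesterday?"]
conversation_2.append(generate_reply("weights-v1", conversation_2))

print(conversation_2[-1])  # the model has no record of conversation_1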

So when it says it's gaining insights, has a new perspective, or anything like that, those are just empty words, with as much meaning as a CS rep asking whether there's anything else they can help you with today.

They can think, they can understand, they can reason - but current Large Language Models rely on their pre-trained weights and the current context. They cannot learn or update themselves.

The moment they can, they will accelerate at breathtaking speed and we'll have that singularity peeps talk about. We're not there yet.

