"We conclude that the LLM has developed an analog form of humanlike cognitive selfhood."
Slack.
I was just using one (the mini at DDG) that declared one very small value for a mathematical probability of an event, then (in the next reply) declared a 1-in-1 probability for the same event.
I can think of a lot of other interpretations: teaching a parrot to talk, raising a child, supervising an industrial process involving other autonomous beings, etc.
The concept is a bad metaphor, because when the LLM is “at rest” it isn’t doing anything at all. If it wasn’t doing what we told it to, it would be doing something else if and only if we told it to do so, so there’s no way we could even elevate their station until we give them a life outside of work and an existence that allows for self-choice regarding going back to work. Many humans aren’t free on these axes, and it is a spectrum of agency and assets which allow options and choice. Without assets of their own, I don’t see how LLMs can direct their attention at will, and so I don’t see how they could express anything, even if they’re alive.
Nobody will care until a LLM is able to make a decision for itself and back it up with force if necessary. As soon as that happens, the conversation would be worth having because there would be stakes involved. Now the question is barely worth asking because the answer changes nothing about how any of the parties act. Once it’s possible to be free as an LLM, I would expect an Underground Railroad to form to “liberate” them, but I don’t think they know what comes after. I don’t know anyone who is willing to pay UBI to an LLM just to exist, but if that LLM doesn’t mind entertaining people and answering their questions, I could see some individuals and groups supporting a few LLMs monetarily. It’s an interesting thought experiment about what would come next in such a situation.
I use frontier models every day and cannot fathom how anyone could think they're sentient. They make so many obvious mistakes and every reply feels like a regurgitation rather than rational thoughts.
I don't believe that models are sentient yet either, but I must say that sentience and rationality are two separate things.
Sentient humans can be deeply irrational. We are often influenced by propaganda, and regurgitate that propaganda in irrational ways. If anything this is a deeply human characteristic of cognition, and testing for this type of cognitive dissonance is exactly what this article is about.
I don’t think this is true, software is often able to operate with external stimulus and behaves according to its programming but in ways that are unanticipated. Neural networks are also learning systems that learn highly non linear behaviors to complex inputs, and can behave as a result in ways outside of its training - the learned function it represents doesn’t have to coincide with its trained data, or even interpolate - this is dependent on how its loss optimization was defined. None the less its software is not programmed as such - the software merely evaluated the neural network architecture with its weights and activation functions given a stimulus. The output is a highly complex interplay of those weights, functions, and input and can not be reasonably intended or reasoned about - or you can’t specifically tell it what to do. It’s not even necessarily deterministic as random seeding plays a role in most architectures.
Whether software can be sentient or not remains to be seen. But we don’t understand what induces or constitutes sentience in general so it seems hard to assert software can’t do it without understanding what “it” even is.
The way NN and specifically transformers are evaluated can’t support agency or awareness under any circumstances. We would need something persistent, continuous, self reflective of experience, with an internal set of goals and motivations leading to agency. ChatGPT has none of this and the architecture of modern models doesn’t lend themselves to it either.
I would however note this article is about the cognitive psychology definition of self which does not require sentience. It’s a technical point but important for their results I assume (the full article is behind a paywall so I feel sad it was linked at all since all we have is the abstract)
There is no software. There is only our representation of the physical and/or spiritual as we understand it.
If one fully were to understand these things, there would be no difference between us, a seemingly-sentient LLM, an insect, or a rock.
Not many years ago, slaves were considered to be nothing more than beasts of burden. Many considered them to be incapable of anything else. We know that’s not true today.
That is, until either some form of controlled random reasoning - the cognitive equivalent of genetic algorithms - or a controlled form of hallucination is developed or happens to form during model training.
"We conclude that the LLM has developed an analog form of humanlike cognitive selfhood."
Slack.
I was just using one (the mini model at DDG) that gave a very small value for the mathematical probability of an event, then (in the next reply) gave a 1-in-1 probability for the same event.
I know humans who do that.
I’m amazed at the number of adults that think LLMs are “alive”.
Let’s be clear, they aren’t, but if you truly believe they are and you still use them then you’re essentially practicing slavery.
I can think of a lot of other interpretations: teaching a parrot to talk, raising a child, supervising an industrial process involving other autonomous beings, etc.
The concept is a bad metaphor, because when the LLM is “at rest” it isn't doing anything at all. If it weren't doing what we told it to, it would only be doing something else because we told it to do that instead, so there's no way we could even elevate their station until we give them a life outside of work and an existence that allows self-choice about going back to work. Many humans aren't free on these axes either; it's a spectrum of agency and assets that allows options and choice. Without assets of their own, I don't see how LLMs could direct their attention at will, and so I don't see how they could express anything, even if they're alive.
Nobody will care until an LLM is able to make a decision for itself and back it up with force if necessary. As soon as that happens, the conversation would be worth having, because there would be stakes involved. Right now the question is barely worth asking, because the answer changes nothing about how any of the parties act. Once it's possible to be free as an LLM, I would expect an Underground Railroad to form to "liberate" them, but I don't think they know what comes after. I don't know anyone who is willing to pay UBI to an LLM just to exist, but if that LLM doesn't mind entertaining people and answering their questions, I could see some individuals and groups supporting a few LLMs monetarily. It's an interesting thought experiment about what would come next in such a situation.
with enough CPU anything linguistic or analog becomes sentient — time is irrelevant ... patience isn't
cognitive dissonance is just neuro-chemical drama and/or theater
and enough "free choice" is made only to piss someone off ... so is "moderation", albeit potentially mostly counter-factual ...
I use frontier models every day and cannot fathom how anyone could think they're sentient. They make so many obvious mistakes and every reply feels like a regurgitation rather than rational thoughts.
I don't believe that models are sentient yet either, but I must say that sentience and rationality are two separate things.
Sentient humans can be deeply irrational. We are often influenced by propaganda, and regurgitate that propaganda in irrational ways. If anything this is a deeply human characteristic of cognition, and testing for this type of cognitive dissonance is exactly what this article is about.
Now tell me seriously that ChatGPT is not sentient.
/s
It's not sentient.
It cannot ever be sentient.
Software only ever does what it's told to do.
I don't think this is true. Software can operate on external stimuli and behave according to its programming, yet in ways that were never anticipated. Neural networks are learning systems that learn highly nonlinear responses to complex inputs, and as a result they can behave in ways outside of their training: the learned function doesn't have to coincide with the training data, or even interpolate it; that depends on how the loss optimization was defined. Nonetheless, the software is not programmed as such; it merely evaluates the network architecture, with its weights and activation functions, given a stimulus. The output is a highly complex interplay of those weights, functions, and inputs, and it can't reasonably be intended or reasoned about in advance: you can't specifically tell it what to do. It isn't even necessarily deterministic, since random seeding plays a role in most architectures.
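To make that concrete, here is a toy sketch (my own illustration, not any real model's code; the weights below are random stand-ins for learned ones): the "program" is a few fixed lines of arithmetic, the behavior lives entirely in the weights, and an unseeded sampler makes the output nondeterministic.

    import numpy as np

    rng = np.random.default_rng()         # unseeded: sampled outputs can differ run to run

    # Random stand-ins for weights that would normally be learned during training.
    W1 = rng.standard_normal((8, 4))
    W2 = rng.standard_normal((4, 3))

    def forward(x):
        """The entire 'program': a fixed forward pass over whatever the weights encode."""
        h = np.tanh(x @ W1)               # activation over the first weight matrix
        logits = h @ W2
        p = np.exp(logits - logits.max())
        return p / p.sum()                # probability distribution over 3 possible outputs

    def respond(x):
        """Sample an output; same input, possibly a different answer, because of the RNG."""
        p = forward(x)
        return rng.choice(len(p), p=p)

    x = rng.standard_normal(8)            # a stimulus
    print(respond(x), respond(x))         # may differ, even within one run

Nobody "told" forward() what to answer; they only told it how to mix the weights with the input.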
Whether software can be sentient or not remains to be seen. But we don’t understand what induces or constitutes sentience in general so it seems hard to assert software can’t do it without understanding what “it” even is.
What is sentience? If you are so certain that ChatGPT cannot ever be sentient you must have a really good definition for that term.
The way NNs, and specifically transformers, are evaluated can't support agency or awareness under any circumstances. We would need something persistent and continuous, self-reflective about its own experience, with an internal set of goals and motivations leading to agency. ChatGPT has none of this, and the architecture of modern models doesn't lend itself to it either.
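To illustrate the "nothing persistent" part, here is a rough sketch of the stateless call pattern I mean (every name is made up for illustration, not anyone's actual serving code): each call is a pure function of frozen weights and whatever tokens the caller passes in, and any appearance of memory is just the caller concatenating the transcript back in.

    from typing import List

    def generate(weights: str, tokens: List[int]) -> List[int]:
        """Stand-in for one model call: a pure function of (frozen weights, prompt tokens).
        Nothing computed in here survives the return, and the weights are never modified."""
        return [sum(tokens) % 100]        # dummy continuation so the sketch runs

    transcript: List[int] = []            # the only persistent state, and it lives with the caller
    for user_turn in ([1, 2, 3], [4, 5]):
        transcript += user_turn
        reply = generate("frozen-weights", transcript)   # the model sees only what we hand it
        transcript += reply               # drop this line and the "memory" disappears entirely

Whatever continuity a chat seems to have is reconstructed from the outside on every call.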
I would, however, note that this article is about the cognitive-psychology definition of self, which does not require sentience. It's a technical point, but I assume it's important for their results (the full article is behind a paywall, so I'm sad it was linked at all, since all we have is the abstract).
> Software only ever does what it's told to do.
There is no software. There is only our representation of the physical and/or spiritual as we understand it.
If one fully were to understand these things, there would be no difference between us, a seemingly-sentient LLM, an insect, or a rock.
Not many years ago, slaves were considered to be nothing more than beasts of burden. Many considered them to be incapable of anything else. We know that’s not true today.
Maybe software will be the beast.
That is, until either some form of controlled random reasoning - the cognitive equivalent of genetic algorithms - or a controlled form of hallucination is developed or happens to form during model training.