Paul Berg
@prberg
LLMs are not conscious. When conscious AGIs are eventually created, they will use LLMs as tools.
mk
@mk
We can't know that, for the same reason that there's no identifiable threshold to consciousness in the animal kingdom. If we could know it, we could apply it. It's possible each LLM execution is a transient type of consciousness. In fact, our own consciousness could be an illusion arising from repeated executions over the same memory/models. IMHO it's possible that a single LLM able to retrain itself (maybe even just by editing its own RAG store) might achieve a more recognizable consciousness, or, more likely, an ensemble of LLMs operating as one entity would, which is closer to our brain's architecture (multiple competing and collaborating models).
x369
@x369
Very true. LLMs are akin to the language of the country where the AGI is born. What matters is the personality, or intention, of the AGI itself.