Paul Berg
@prberg
LLMs are not conscious. When conscious AGIs are eventually created, they will use LLMs as tools.
mk
@mk
We can't know that, for the same reason there's no identifiable threshold for consciousness in the animal kingdom; if we had such a criterion, we could apply it. It's possible each LLM execution is a transient form of consciousness. In fact, our own consciousness could be an illusion arising from successive executions that share the same memory and models. IMHO, an LLM that could retrain itself (maybe even just by editing its own RAG store) might achieve a more recognizable consciousness; more likely still, an ensemble of LLMs working as one entity would, since that's closer to our brain's architecture (multiple competing and collaborating models).