So you’re assuming determinism is incompatible with consciousness now? With comprehension? I might be “naive about the nature of consciousness,” but you’re gullible about it if you think you know the answers to those questions.
But at least you’ve now made a definite claim about something an LLM cannot do:
Theoretically, if an LLM had “intelligence,” you could ask it about a problem that was completely absent from its training data.
That brings me back to the original challenge: can you articulate such a problem? We can experiment with ChatGPT and see how it handles it.
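For what it’s worth, the experiment doesn’t have to go through the web UI. Here’s a minimal sketch using OpenAI’s Python client; I’m assuming the official `openai` package (v1.x), an `OPENAI_API_KEY` in the environment, and the prompt string is just a placeholder for whatever problem you come up with:

```python
# Minimal sketch of the proposed experiment.
# Assumes the official `openai` Python package (v1.x) and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Placeholder: substitute whatever problem you believe is absent
# from the model's training data.
novel_problem = "YOUR NOVEL PROBLEM HERE"

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model would do
    messages=[{"role": "user", "content": novel_problem}],
)

print(response.choices[0].message.content)
```

Running it a few times is worth doing: with default sampling temperature the answers will vary between runs, which is itself relevant to the determinism point above.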