Depends a bit on perspective and nuance. GPT-4 pretty much always returns text relevant to the prompt: the neural net sees A and predicts that B comes next. That's a form of understanding. Not understanding would look like being unable to connect A to anything and outputting something irrelevant.
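To make the "sees A, predicts B" point concrete, here's a minimal sketch of next-token prediction. It assumes the Hugging Face `transformers` library and uses the small open GPT-2 model as a stand-in, since GPT-4's weights aren't public:

```python
# Minimal sketch of next-token prediction: the model assigns a probability
# to every possible "B" that could follow the prompt "A".
# Assumes `pip install torch transformers`; GPT-2 stands in for GPT-4.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the token right after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {prob:.3f}")
```

The top candidates are all relevant continuations ("Paris" and friends), which is exactly the "relevant text" behavior described above.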
For reasoning, which I believe is actually step-by-step logic, it needs a good handholding prompt, but then it can consistently produce grade-school-level solutions to logical problems.
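A hedged sketch of what that "handholding" looks like in practice: the chain-of-thought style instruction below asks the model to work step by step before answering. It assumes the official `openai` Python client (v1+) and an `OPENAI_API_KEY` in the environment; the question is just an illustrative example:

```python
# Sketch of a "handholding" step-by-step (chain-of-thought) prompt.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

question = (
    "A farmer has 17 sheep. All but 9 run away. "
    "How many sheep does the farmer have left?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system instruction is the handholding: without it the model
        # is more likely to jump straight to a (sometimes wrong) answer.
        {"role": "system",
         "content": "Solve the problem step by step, then state the final answer."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

With the step-by-step instruction the model typically walks through "all but 9 ran away, so 9 remain" instead of pattern-matching on "17 minus something."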
Neither is what humans would call true understanding or true reasoning, but it's way too early to judge AI by human standards.