Although I’m using AI more and more for writing-related tasks, I still find it constantly making rudimentary errors of logic. If it’s advancing as this research paper claims, why are we still seeing so many of these hallucination errors?
I mean, the research could be true while the AI is merely achieving the reasoning level of a houseplant or a bug, streaming randomized garbage the rest of the time.
That would still be a promising sign of progress, even if it’s of no current practical use.
(Edit: which I guess is pretty often what scientific progress looks like.)