I’m talking without knowing much, but it seems like LLMs aren’t orthogonal to reasoning, just insufficient on their own. That is, much like our consciousness has a library of information to draw on, organized by references, the LLM could be the library that another software component draws on for the actual reasoning.
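Something like this toy sketch of the idea (everything here is hypothetical and purely illustrative; query_llm is a stubbed lookup standing in for a real model, not any actual API):

    # A separate "reasoner" uses the LLM only as a reference library of facts,
    # while the actual inference logic lives outside the model.

    def query_llm(prompt: str) -> str:
        """Stub for an LLM lookup -- pretend this returns a relevant fact."""
        knowledge = {
            "boiling point of water at sea level": "100 degrees Celsius",
            "freezing point of water": "0 degrees Celsius",
        }
        return knowledge.get(prompt, "unknown")

    def reason_about_temperature(temp_c: float) -> str:
        """The reasoning component: it pulls facts from the 'library'
        and applies its own logic to them."""
        boiling = float(query_llm("boiling point of water at sea level").split()[0])
        freezing = float(query_llm("freezing point of water").split()[0])
        if temp_c >= boiling:
            return "water boils"
        if temp_c <= freezing:
            return "water freezes"
        return "water stays liquid"

    print(reason_about_temperature(120))  # -> water boils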
That’s part of what Deepseek has been trying to do. They put a bunch of induction logic for different categories in front of the LLM.
I agree, although this seems like an unpopular opinion in this thread.
LLMs are really good at organizing and abstracting information, and it would make a lot of sense for an AGI to incorporate them for that purpose. It’s just that there’s no actual thought process happening, and in my opinion, “reasoning models” like Deepseek are entirely insufficient and a poor substitute for true reasoning capabilities.