• Blue_Morpho@lemmy.world · 2 days ago

      I’m talking without knowing anything, but it seems like LLMs aren’t orthogonal, just insufficient on their own. That is, just as our consciousness has a library of information to draw on, organized by references, the LLM could be the library that another software component draws upon for actual reasoning.

      That’s part of what Deepseek has been trying to do. They put a bunch of induction logic for different categories in front of the LLM.
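      A minimal sketch of the architecture described above, with the LLM treated as a passive associative library and a separate controller doing the multi-step reasoning. All names here are hypothetical stand-ins, and the "LLM" is just a dictionary lookup; this only illustrates the division of labor, not a real implementation.

      ```python
      def llm_lookup(query: str) -> str:
          """Stand-in for an LLM used purely as an associative knowledge library."""
          knowledge = {
              "capital of France": "Paris",
              "population of Paris": "about 2.1 million",
          }
          return knowledge.get(query, "unknown")

      def reasoner(goal: str) -> str:
          """Separate component that plans the lookups and combines the results.

          In this picture, the decomposition below is the "actual reasoning"
          that the LLM itself is not doing.
          """
          steps = ["capital of France", "population of Paris"]
          facts = [llm_lookup(step) for step in steps]
          return f"{facts[0]} has a population of {facts[1]}."

      print(reasoner("How many people live in the capital of France?"))
      # → Paris has a population of about 2.1 million.
      ```

      The point of the toy example is that swapping in a better "library" doesn’t change the controller; the hard, unsolved part is a controller that can produce the decomposition itself.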

      • moonlight@fedia.io · 1 day ago

        I agree, although this seems like an unpopular opinion in this thread.

        LLMs are really good at organizing and abstracting information, and it would make a lot of sense for an AGI to incorporate them for that purpose. It’s just that there’s no actual thought process happening, and in my opinion, “reasoning models” like Deepseek are entirely insufficient and a poor substitute for true reasoning capabilities.

    • moonlight@fedia.io · 2 days ago

      I don’t think so, or rather, we don’t know yet. LLMs are not the full picture, but they might be part of it. I could envision a future AGI that has something similar to a modern LLM as the “language / visual centers of the brain”. To continue that metaphor, the part that’s going to be really difficult is the frontal lobe.

      edit: Orthogonal to actual reasoning? Sure. But not to “general AI”.

    • MartianSands@sh.itjust.works · 2 days ago

      That’s not obviously the case. I don’t think anyone has a sufficient understanding of general AI, or of consciousness, to say with any confidence what is or is not relevant.

      We can agree that LLMs are not going to be turned into general AI, though.