• Imalostmerchant@lemmy.world
    7 months ago

    I hear you. You make very good points.

    I’m tempted to argue that many humans aren’t generally intelligent based on your definition of requiring original thought/solving things they haven’t been told/trained on, but we don’t have to go there. Lol

Can you expand on your last paragraph? Are you saying that if the model were trained on more theory and fewer examples of solved problems, it might improve?

    • itsralC@lemm.ee
      7 months ago

      If I’m being completely honest, now that I’ve woken up with a fresh mind, I have no idea where I was going with that last part. Giving LLMs access to tools, like the ability to run code so they can fact-check themselves, is a really good idea (and one that is already being tried), but I don’t think it has anything to do with the problem at hand.
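      To make the "run code to fact-check" idea concrete, here is a minimal sketch of a tool-use loop. The model here is a stub (`fake_model`) standing in for a real LLM API call; the names and structure are illustrative assumptions, not any particular vendor's interface. The point is the loop: generate, execute the tool, feed the result back.

```python
def run_python(expr):
    """Tool: evaluate a Python expression and return the result as text.
    Builtins are stripped as a crude sandbox; a real system would need
    proper isolation."""
    try:
        return str(eval(expr, {"__builtins__": {}}, {}))
    except Exception as e:
        return f"error: {e}"

def fake_model(prompt, tool_result=None):
    """Stand-in for an LLM call (hypothetical). On the first turn it asks
    to verify its arithmetic with the tool; once it has the tool's output,
    it answers using that verified result."""
    if tool_result is None:
        return {"type": "tool_call", "expr": "17 * 23"}
    return {"type": "answer", "text": f"17 * 23 = {tool_result}"}

def answer_with_tools(prompt):
    """Generic loop: keep satisfying tool calls until the model answers."""
    reply = fake_model(prompt)
    while reply["type"] == "tool_call":
        result = run_python(reply["expr"])  # the fact-check step
        reply = fake_model(prompt, tool_result=result)
    return reply["text"]

print(answer_with_tools("What is 17 * 23?"))  # 17 * 23 = 391
```

      The model never has to "know" the arithmetic; it only has to know when to delegate to the tool, which is why this helps with factual reliability but not with learning after training.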

      The real key issue (I think) is getting AI to keep learning and iterating on itself past the training stage, which is actually what many people mean by AGI or the "singularity".