• LughOPMA · 8 months ago

    No one seems much nearer to fixing LLMs’ problems with hallucinations and errors. A recent DeepMind attempt to tackle the problem, called SAFE, merely gets an LLM to check its factual claims more carefully against external sources. No one seems to have any solution to the deeper problem of giving AI genuine logic and reasoning abilities. Even if Microsoft builds its $100 billion Stargate AI supercomputer, will it be of much use without them?
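To be clear about what SAFE does and doesn’t do: it breaks a model’s answer into individual factual claims and rates each one against an external source, rather than improving the model’s reasoning. A toy sketch of that pipeline, with a stub dictionary standing in for the web search and LLM rater that SAFE actually uses (all names here are made up for illustration):

```python
# Toy sketch of a SAFE-style fact-checking pipeline.
# Real SAFE uses an LLM to split claims and a web search + LLM to rate them;
# here a naive sentence split and a lookup table stand in for both.

def split_into_claims(response: str) -> list[str]:
    # Naive split on sentence boundaries (SAFE uses an LLM for this step).
    return [s.strip() for s in response.split(".") if s.strip()]

def check_claim(claim: str, knowledge: dict[str, bool]) -> str:
    # Stand-in for "search an external source, then rate support".
    if claim in knowledge:
        return "supported" if knowledge[claim] else "contradicted"
    return "unverifiable"

def rate_response(response: str, knowledge: dict[str, bool]) -> dict[str, str]:
    # Rate every atomic claim in the response independently.
    return {c: check_claim(c, knowledge) for c in split_into_claims(response)}

knowledge = {
    "Paris is the capital of France": True,
    "The Moon is made of cheese": False,
}
verdicts = rate_response(
    "Paris is the capital of France. The Moon is made of cheese. Cats can fly",
    knowledge,
)
```

Note that nothing in this loop makes the model reason better; it only flags which claims survive an external check, which is why it doesn’t touch the underlying problem.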

    The likelihood is AGI will come via a different route.

    So many people are building robots that the idea these researchers talk about - embodied cognition - will be widely tested. But it may be just as likely that the path to AGI is something else, as yet undiscovered.