• hperrin@lemmy.ca · 4 hours ago

    I would say this right here should be enough to not do business with them, but they sell AI slop, so I wouldn’t be doing business with them anyway.

  • samus12345@lemm.ee · 20 hours ago

    Instead of admitting uncertainty, AI models often prioritize creating plausible, confident responses, even when that means manufacturing information from scratch.

    This is an example of human behavior we DON’T want AIs to emulate.

  • snooggums@lemmy.world · 1 day ago

    The AI mashed information together that didn’t go together in that context and returned something that was not correct. It was wrong, but did not invent anything.

  • Lost_My_Mind@lemmy.world · 24 hours ago

    To summarize: an AI bot, which isn’t smart enough to think for itself, decided to think for itself. It then created a new policy that when programmers switch between machines, they get logged out. Why? Because. Just because.

    This is what the AI decided. A new policy led by no one, and the only reason it gets called out is that THIS change is instantly noticeable. If the new policy only affected you over time, it might never be called out, because that’s been the policy for 6 months now.

    But the fact remains that AI just decided to lead humans. The decision was made by no one. THIS change was a low stakes change. By that I mean nobody was hurt. Nobody died. Nobody was in danger. Nobody had medications altered.

    But this is the road we’re traveling. Maybe one day the AI decides that green lights make traffic flow better, so now without warning, all the lights in a city are just green.

    Or maybe an AI is in charge of saving a company money, and decides that paying for patients’ insulin costs the company a lot of money without any direct profit. So it cancels that coverage.

    There’s a near-infinite number of things that an AI can logically conclude make sense, because it has only a limited set of data on the human experience.

    AI will NEVER know what it’s like to be human. It can only cobble together an outcome based on what little data we feed it. What comes next is just an educated guess from an uneducated, unempathetic machine.

    • GreenMartian@lemmy.dbzer0.com · 6 hours ago

      AI just decided to lead humans. The decision was made by no one. THIS change was a low stakes change.

      AI didn’t make the change. AI made no policy changes. The logout thing was a backend bug. The only thing the AI did was hallucinate that the bug was actual policy.

      That said, I agree with your sentiment regarding where the world is heading. If it weren’t for pesky regulations and optics, the military would already be flying 100% AI killer drones.

      • aesthelete@lemmy.world · 16 hours ago

        AI didn’t make the change. AI made no policy changes. The logout thing was a backend bug. The only thing the AI did was hallucinate that the bug was actual policy.

        And honestly, it’s completely fair that it would behave this way if its training data contained actual interactions with support agents or developers apologizing for shitty software. I don’t even know how many times in my career I’ve encountered people who insisted that – to quote 30 Rock – they had built the bookshelf that way on purpose, and that they wanted the books to slide off.

    • zqps@sh.itjust.works · 14 hours ago

      Unless this is a draft for a sci-fi short story, you should look into how current AI models actually work. They cannot “decide” or logically think about anything. You’re humanizing an algorithm because it can produce text that sounds like it came out of a brain, but that doesn’t make it a form of cognition.