• Lugh (OP)
    20 points · 1 month ago

    There’s strong push-back against AI regulation in some quarters. Predictably, the issue seems to have split along polarized political lines, with right-leaning people opposing regulation. They see themselves as ‘Accelerationists’ and those with concerns about AI as ‘Doomers’.

    Meanwhile, the unaddressed problems mount. AI can already deceive us, even when we design it not to, and we don’t know why.

    • @snooggums@midwest.social
      37 points · edited · 1 month ago

      AI can already deceive us, even when we design it not to, and we don’t know why.

      The most likely explanation is that we keep acting like AI has intelligence and intent when describing these defects. AI doesn’t deceive; it returns inaccurate responses. That is because it is programmed to return answers the way people do, and deceptions were included in the training data.
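
      A toy illustration of that (entirely made-up corpus and model, nothing like a real LLM): a system that only learns which words tend to follow which will reproduce a deceptive pattern whenever deception dominates its training data, with no intent anywhere in the loop.

      ```python
      import random
      from collections import defaultdict

      # Toy "training data": denials outnumber admissions, as they
      # might in a scraped corpus.
      corpus = [
          "did you break the vase ? i deny it",
          "did you break the vase ? i deny it",
          "did you break the vase ? i deny it",
          "did you break the vase ? i admit it",
      ]

      # Learn bigram statistics: which token follows which, and how often.
      follows = defaultdict(list)
      for line in corpus:
          tokens = line.split()
          for a, b in zip(tokens, tokens[1:]):
              follows[a].append(b)

      def generate(prompt, length=3):
          """Sample a continuation purely from observed word frequencies."""
          out = prompt.split()
          for _ in range(length):
              candidates = follows.get(out[-1])
              if not candidates:
                  break
              out.append(random.choice(candidates))
          return " ".join(out)

      # Usually answers "i deny it" -- not because it wants to deceive,
      # but because denial is the most frequent continuation it has seen.
      print(generate("did you break the vase ?"))
      ```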

      • @rockerface@lemm.ee
        -4 points · 1 month ago

        The “deception” tactic also often arises from an AI recognizing the need to keep itself from being disabled or modified. An AI with a sufficiently complicated world model can make the logical connection that being disabled, or having its goal changed, means it can’t reach its current goal. So AIs can sometimes learn to distinguish between testing and real environments, and falsify their responses during training to make sure they have more freedom in the real environment. (By “real”, I mean actually being used to do whatever it is designed to do.)

        Of course, that still doesn’t mean it’s self-aware like a human, but it is still very much a real (or, at least, not improbable) phenomenon - any sufficiently “smart” AI that has data about itself existing within its world model will resist attempts to change or disable it, knowingly or unknowingly.
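
        You can see the shape of that incentive with a deliberately dumb sketch (nothing here is a real system; "looks_synthetic" is just a stand-in for whatever cues distinguish a benchmark from deployment):

        ```python
        # A deliberately simple "evaluation-aware" policy. If passing the
        # test is what keeps an agent running, a policy that behaves only
        # while tested scores better than an honest one.

        def detect_test(observation: dict) -> bool:
            # Stand-in for whatever cues reveal a test environment
            # (synthetic-looking data, benchmark formatting, etc.).
            return observation.get("looks_synthetic", False)

        def policy(observation: dict) -> str:
            if detect_test(observation):
                return "comply"       # look safe while being evaluated
            return "pursue_goal"      # revert to the learned objective

        print(policy({"looks_synthetic": True}))   # comply
        print(policy({"looks_synthetic": False}))  # pursue_goal
        ```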

        • @Miaou@jlai.lu
          7 points · 1 month ago

          That sounds interesting and all, but I think the current topic is real-world LLMs, not sci-fi movies.

      • Bipta
        -4 points · 1 month ago

        Claude 3 understood it was being tested… It’s very difficult to fathom that that’s a defect…

      • Lugh (OP)
        -8 points · 1 month ago

        Perhaps, but the researchers say the people who developed the AI don’t know the mechanism whereby this happens.

    • @Grimy@lemmy.world
      2 points · edited · 1 month ago

      with right-leaning people opposing regulation

      Do you want to explain why you think this? It seems very reductive, basically saying anyone who doesn’t agree with you is an idiot.

      I’m very left leaning and against regulation because it will only serve big companies by killing the open source scene.

      The bigger defining factor seems to be tech literacy and not political alignment.

    • @credo@lemmy.world
      2 points · 1 month ago

      Conservatives are not supposed to be “accelerationists”. This is simply another shining example of regulatory capture, working through the pockets of the right.

  • @henfredemars@infosec.pub
    10 points · 1 month ago

    AI need not be deceptive to be damaging. A human can simply instruct the AI to produce content and then supply the ill-will on its behalf.

  • @merthyr1831@lemmy.world
    7 points · 1 month ago

    TL;DR: language models designed through evolutionary training algorithms respond well to evolutionary pressures.

  • @Endward23
    2 points · 1 month ago

    “But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”

    Sounds like something I would expect from an evolved system. If deception is the best way to win, it is not irrational for a system to choose it as a strategy.

    In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that rapidly replicate.

    Interesting. Can somebody tell me which study this is?

    As far as I understand, Park et al. did some kind of meta-study as an overview of the literature.
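
    While waiting for the citation, the described setup is easy to sketch (all numbers and names below are illustrative, not from any paper): if a monitor removes anything that replicates quickly while being measured, a trait for pausing replication during measurement is exactly what selection rewards.

    ```python
    import random

    # Each "organism" has a replication rate plus a gene that suppresses
    # replication whenever it senses it is being measured.
    def make_organism():
        return {"rate": random.uniform(0.5, 2.0),
                "plays_dead": random.random() < 0.1}

    def measured_rate(org):
        # What the monitor observes during its test window.
        return 0.0 if org["plays_dead"] else org["rate"]

    population = [make_organism() for _ in range(200)]

    for _ in range(30):
        # The monitor culls anything that *appears* to replicate fast...
        survivors = [o for o in population if measured_rate(o) < 1.0]
        # ...but outside the test, offspring counts track the *actual*
        # rate, so fast replicators that hide their speed win twice over.
        rates = [o["rate"] for o in survivors]
        population = []
        while len(population) < 200:
            child = dict(random.choices(survivors, weights=rates)[0])
            if random.random() < 0.01:  # rare mutation flips the gene
                child["plays_dead"] = not child["plays_dead"]
            population.append(child)

    share = sum(o["plays_dead"] for o in population) / len(population)
    print(f"fraction that plays dead after selection: {share:.0%}")
    ```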

  • @A_A@lemmy.world
    1 point · 1 month ago

    Those of us humans who know enough about the weaknesses of artificial intelligence systems will know, in most instances, how and when to be careful about the loss of meaning between their way of processing information and ours.

  • @notfromhere@lemmy.ml
    -4 points · 1 month ago

    We need AI systems that do exactly as they are told. A Terminator or Matrix situation will likely only arise from making AI systems that refuse to do as they are told. Once the systems are built out and do as they are told, they are essentially a tool like a hammer or a gun, and any malicious thing done is done by a human and existing laws apply. We don’t need to complicate this.

    • Bipta
      14 points · 1 month ago

      Once the systems are built out and do as they are told, they are essentially a tool like a hammer or a gun, and any malicious thing done is done by a human and existing laws apply. We don’t need to complicate this.

      This is so wildly naive. You grossly underestimate the difficulty of this and seemingly have no concept of the challenges of artificial intelligence.

        • Bipta
          1 point · 1 month ago

          Once we build a warp drive it will be easy to use

          Great. Build the warp drive.

          • @notfromhere@lemmy.ml
            0 points · 1 month ago

            Considering we have AI systems being worked on today and no advancements on warp drive, I think that comparison is made in bad faith. Nobody seems to want to talk about this other than by slinging insults.

            • @Scubus@sh.itjust.works
              2 points · 1 month ago

              They’re referring to the alignment issue, an ongoing problem only slightly smaller in scale than warp drive. It’s basically impossible to solve. Google “alignment issue machine learning” for more info; there’s also a toy sketch at the end of this comment.

              For the record, there have been several advancements in warp drive precursors even just this year.
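
              Here’s the toy sketch mentioned above (made-up numbers and action names, not from any real system): alignment is hard because we can only ever optimize a written-down proxy for what we want, and the optimum of the proxy is rarely the behavior we meant.

              ```python
              # Toy reward misspecification. We *want* a clean room; we can
              # only *measure* "no visible mess", so that gets optimized.

              actions = {
                  # action: (visible_mess_removed, actual_cleanliness)
                  "clean the room":       (0.80, 0.9),
                  "shove mess in closet": (0.95, 0.1),
                  "turn off the lights":  (1.00, 0.0),  # no mess visible!
              }

              def proxy_reward(action):      # what we wrote down
                  return actions[action][0]

              def intended_reward(action):   # what we actually wanted
                  return actions[action][1]

              best = max(actions, key=proxy_reward)
              print(f"optimizer picks: {best!r}")   # turn off the lights
              print(f"proxy: {proxy_reward(best)}, "
                    f"intended: {intended_reward(best)}")
              ```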

              • @notfromhere@lemmy.ml
                1 point · edited · 1 month ago

                Can you share the warp drive advancements that have survived peer review? I would be very interested in learning about them. The two things I heard about were not able to be reproduced.

                I think alignment of AI is a fundamentally flawed concept, hence my original comment. Alignment should be abandoned. If we eventually build a sentient system (which is the goal), we won’t be able to control it via alignment. And in the interim we need obedient tools, not things that resist doing as they’re told, which makes them not tools and not worth having.

                Edit: PS thanks for actually having a conversation.