• burliman@lemmy.world · 10 months ago

    Bad humans are prompting these AI engines. Still gotta fix that. You know, root of the problem. I can tell you as an older human: misinformation has been supercharged in every election. But yeah, let’s blame AI this time around so we don’t have to figure out the tough problem.

    • saltesc@lemmy.world · 10 months ago

      Correct. AI is simply a tool. People need to get their heads around this and stop perceiving it as some sentient magical entity with rogue prerogatives and uncontested liberties.

      Whenever AI does something whack, that was a human. Everything it knows and does comes from the knowledge and instructions of humans. It’s us. If AI produces misinformation, it’s simply doing what it was taught and instructed by someone, and therein lies the source of the bullshit.

    • Phanatik@kbin.social · 10 months ago

      The problem isn’t the misinformation itself; it’s the rate at which misinformation is produced. Generative models lower the barrier to entry, so anyone in their living room somewhere can make deepfakes of your favourite politician. The blame isn’t on AI for creating misinformation; it’s for making the situation worse.

    • HelloThere@sh.itjust.works · 10 months ago

      Fallible humans are building them in the first place.

      No LLM - masquerading as AI - is free of biases.

      That’s not to say that ‘bad’ people prompting biased LLMs isn’t an issue (it very much is), but even ‘good’ people are not going to get objective results.

  • arymandias@sh.itjust.works · 10 months ago

    Sometimes I wonder if the clown music is just in my head or if it’s the theme music for the past few years.

    The biggest source of misinformation is Fox and its related ventures in other countries. No AI or deepfakes needed, just classic oligarchic propaganda. But yeah, let’s listen to the guys willing to let the world burn for slightly higher profit margins tell us what the big problems in the world are today.

  • General_Effort@lemmy.world · 10 months ago

    Misinformation has been an issue in the public consciousness for almost 10 years now: since Trump’s run for the presidency in the US and since Russian military aggression became impossible to ignore. The consensus was that it had much to do with social media and how easily it could be manipulated.

    I always wonder if this focus on AI is a way to distract from and derail debates about social media regulation.

  • kandoh@reddthat.com · 10 months ago

    We live in a world where people think Biden banned abortion because it happened while he was president. What happens when those people start seeing and hearing AI recordings telling them the worst wacko shit you can possibly imagine?

  • AutoTL;DR@lemmings.world · 10 months ago

    This is the best summary I could come up with:


    LONDON (AP) — False and misleading information supercharged with cutting-edge artificial intelligence that threatens to erode democracy and polarize society is the top immediate risk to the global economy, the World Economic Forum said in a report Wednesday.

    The report listed misinformation and disinformation as the most severe risk over the next two years, highlighting how rapid advances in technology also are creating new problems or making existing ones worse.

    The authors worry that the boom in generative AI chatbots like ChatGPT means that creating sophisticated synthetic content that can be used to manipulate groups of people won’t be limited any longer to those with specialized skills.

    AI-powered misinformation and disinformation is emerging as a risk just as billions of people in a slew of countries, including large economies like the United States, Britain, Indonesia, India, Mexico, and Pakistan, are set to head to the polls this year and next, the report said.

    Fake information also could be used to fuel questions about the legitimacy of elected governments, “which means that democratic processes could be eroded, and it would also drive societal polarization even further,” Klint said.

    Over the longer term, an environmental risk was ranked as the No. 1 threat, followed by four other environmental-related risks: critical change to Earth systems; biodiversity loss and ecosystem collapse; and natural resource shortages.


    The original article contains 523 words, the summary contains 210 words. Saved 60%. I’m a bot and I’m open source!

  • 0ddysseus@lemmy.world · 10 months ago

    What a fucking joke. Those monocle-wearing cunts at Davos are the biggest threat humanity faces, and they fucking know it.

    Eat The motherfucking Rich

  • originalucifer@moist.catsweat.com · 10 months ago

    i remember when it was asbestos.

    and then at some point it changed to the ozone.

    are we at scary ai now? or is this one just nonsense?