• LughOPMA · 5 months ago

    Palantir’s panel at a recent military conference where they joked and patted themselves on the back about the work their AI tools are doing in Gaza was like a scene with human ghouls in the darkest of horror movies.

    Estimates vary as to how many of the 30,000-40,000 dead in Gaza are military combatants, but the estimates seem to average around 20%. That seems like a terrible record of failure for an AI tool that touts its precision.

    Why does the US government want to reward and endorse this tech? Why aren’t people more alarmed? By any measure, Palantir’s demonstrated track record is one of failure. The Israel-Hamas war is the first time the world has seen AI used in significant warfare, and it’s a grim indication for the future.

    • jmcs@discuss.tchncs.de · 5 months ago

      Palantir’s “AI” is crap, and it clearly produces tons of false positives. But even if it were 100% accurate, it wouldn’t prevent civilian deaths if the military receiving the report carpet-bombs everything around the identified terrorists without caring whether civilians are nearby. And when the Israeli spokesperson has a very precise estimate of how many Hamas members they killed, but not even a ballpark figure for civilian deaths, it’s clear that’s what’s happening here.

    • threelonmusketeers@sh.itjust.works · 5 months ago

      but they seem to average about 20%. This seems like a terrible record of failure for an AI tool that touts its precision.

      That does seem pretty bad.

      To play devil’s advocate for a moment, what systems were they using before implementing the AI tool? Were those systems better? Seems like a low bar to beat…