• PlantPowerPhysicist@discuss.tchncs.de
      8 hours ago

      I have a (rather different!) application that’s released as a Flatpak, and GPU acceleration is CUDA-only there, too. It supports ROCm when compiled locally, but ROCm just can’t work through the sandbox at this point, unfortunately. Not for lack of trying.

      If you have an example of a Flatpak where it does work, I’d love to see its manifest so I can learn from it.
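
      For context, the sort of sandbox permissions involved look roughly like the fragment below. This is only a sketch under my own assumptions, not a working recipe; exposing the devices is the easy part, and the real blocker is that the ROCm user-space libraries also have to exist inside the sandbox.

          # Hypothetical fragment of a flatpak-builder manifest (YAML)
          finish-args:
            # ROCm needs /dev/kfd and /dev/dri; --device=all exposes all of /dev
            # (plain --device=dri may not cover /dev/kfd on older Flatpak releases)
            - --device=all

          # Unlike Mesa or the NVIDIA driver, the ROCm user-space stack (HIP, the
          # HSA runtime, etc.) is not shipped via the runtime's GL extension, so it
          # would have to be built and bundled as ordinary modules in the manifest.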

      • tiramichu@lemm.ee
        10 hours ago

        I did buy a (secondhand) nvidia card specifically for AI workloads because yes, I realised that this is what the AI dev community has settled on, and if I try to avoid nvidia I will be making life very hard for myself.

        But that doesn’t change the fact that it still absolutely sucks that nvidia have this dominance in the space, and that it is largely due to what tooling the community has decided to use, rather than any unique hardware capability which nvidia have.

        • theunknownmuncher@lemmy.world
          10 hours ago

          Huh? I can’t speak to “video processing”, but “NVIDIA dominance” isn’t applicable at all for AI, at least generative AI. Pretty much every LLM framework either officially supports AMD or has an AMD fork, and literally every image-gen framework I’m aware of officially supports AMD.

          I typically see people recommending to buy an AMD GPU over NVIDIA in AI communities…

          What AI workloads do you run?

          EDIT: Even the app in the OP restricts GPU support to NVIDIA only for the prepackaged Flathub installation; you can still run this software with an AMD GPU.