• ysjet@lemmy.world · 3 days ago

    My source is the commercial and academic uses I’ve personally seen as an academic-adjacent professional who’s had to deal with this sort of thing at my job.

    • KeenFlame@feddit.nu · 2 days ago

      What was the data you saw on the volume of requests to non-LLM models and how they relate to utility? I can’t figure out what profession would have access to that kind of statistic. It would be very useful to know, thx.

      • ysjet@lemmy.world · 2 days ago (edited)

        I think you’ve misunderstood what I was saying: I don’t have spreadsheets of statistics on requests for LLM AIs vs non-LLM AIs. What I have is exposure to a significant number of AI users, each running different kinds of AI, so I see what kind of AI they’re using, for what purposes, and how well it works or doesn’t.

        Generally, LLM-based stuff only returns ‘useful’ results for language-based statistical analysis, which classical NLP handles better, faster, and vastly cheaper. For everything else, they don’t really seem to return useful results at all; I typically see a LOT of frustration.
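
        To make that concrete, here’s a minimal sketch of the kind of cheap, CPU-bound statistical NLP I mean. It assumes scikit-learn, and the corpus and task are made up purely for illustration:

        ```python
        # Classical statistical NLP: TF-IDF term weighting over a toy corpus.
        # This runs in milliseconds on a laptop CPU; no GPU or LLM inference involved.
        from sklearn.feature_extraction.text import TfidfVectorizer

        corpus = [
            "the cluster scheduler dropped my job again",
            "the scheduler queue is full of training jobs",
            "inference jobs keep starving on the shared queue",
        ]

        vectorizer = TfidfVectorizer(stop_words="english")
        tfidf = vectorizer.fit_transform(corpus)

        # Report each document's highest-weighted term.
        terms = vectorizer.get_feature_names_out()
        for i, row in enumerate(tfidf.toarray()):
            print(f"doc {i}: top term = {terms[row.argmax()]!r}")
        ```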

        I’m not about to give any information that could doxx me, but the reason I see so much of this is that I’m professionally adjacent to some supercomputers. As you can imagine, those tend to be useful for AI research :P

        • KeenFlame@feddit.nu · 9 hours ago

          Ah, OK, that’s too bad. Supercomputers typically don’t have tensor cores, though, and most LLM use is presumably client-side use of already-trained models, which desktop or mobile CPUs can manage now, so it will be impossible to know then.
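
          For what it’s worth, client-side use of a ready-trained model on a plain CPU looks roughly like the sketch below. It assumes the Hugging Face transformers library and uses a small DistilBERT sentiment model purely as an example:

          ```python
          # Client-side inference with an already-trained model, CPU only.
          # device=-1 tells the transformers pipeline to stay on the CPU.
          from transformers import pipeline

          classifier = pipeline(
              "sentiment-analysis",
              model="distilbert-base-uncased-finetuned-sst-2-english",
              device=-1,  # an ordinary desktop/laptop CPU, no tensor cores needed
          )

          print(classifier("Supercomputer queue times are getting ridiculous."))
          ```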