• errer@lemmy.world · 14 hours ago

    Run your LLMs locally if you really want a therapist; you don’t need any of the extra crap the companies offer in their online versions.

      • chicken@lemmy.dbzer0.com · 7 hours ago

        A 3090 is great because it has about the same amount of VRAM as newer cards, and VRAM is what determines which models you can run on it.
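The VRAM point can be made concrete with some back-of-the-envelope math (a rough sketch I'm adding for illustration; `approx_vram_gb` is a hypothetical helper, and real usage also needs headroom for the KV cache and runtime overhead):

```python
# Rough estimate of the VRAM needed just to hold an LLM's weights.
def approx_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Weight memory in decimal gigabytes: params * (bits / 8) bytes each."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 13B model quantized to 4 bits per weight: about 6.5 GB of weights,
# which fits comfortably in a 3090's 24 GB of VRAM.
print(round(approx_vram_gb(13, 4), 1))  # 6.5
```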

      • Ech@lemm.ee · edited · 10 hours ago

        I was able to run LLMs on a 1080. They were admittedly small ones, but a 3090 is enough to be usable. That said, I’m not convinced it’s a good idea to use one for therapy. I expect it’s about as useful as talking into a mirror.

        • DancingBear@midwest.social · 8 hours ago

          I don’t know what to do with myself anymore I just want to be able to be with you and be with you I don’t know what to do about that I don’t know what to do I don’t know what to do I’m not gonna do that I don’t know what to do and I don’t know what to do but I don’t know what to do so I’m just trying I know that you don’t know how much you have and you know what you know but I’m just not trying and you can’t tell I’m not gonna tell anybody else how much I know I don’t know what I don’t even care what you do I don’t care I just know that I don’t know what to do with me and I’m just saying what do what do I

          This is how an LLM works.

          • Ech@lemm.ee · 8 hours ago

            I mean, it can be pretty coherent and impressive with the right LLM, but even then it’s mostly just functioning to generate what you want it to, and it’s certainly not going to provide professional insight for those that need it.

            • DancingBear@midwest.social · 7 hours ago

              Yeah, it can be a lot more coherent; the above was just the autocorrect options on my phone, pressed again and again.

              But the LLM is not “listening” to you. It is just providing the next likely words in a conversation based on what is in its database, I believe…

              AI therapy sounds like a terrible idea…
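The “next likely words” intuition can be sketched with a toy bigram model (a deliberately tiny illustration added here for clarity; real LLMs are neural networks trained on huge corpora, not lookup tables, but the predict-the-next-token loop has the same shape):

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count which word follows which in the training text."""
    words = text.split()
    nxt = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def autocomplete(nxt: dict, word: str, length: int = 5) -> str:
    """Repeatedly append the most frequent next word, like phone autocorrect."""
    out = [word]
    for _ in range(length):
        if word not in nxt:
            break
        word = nxt[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

model = train_bigrams("i do not know what to do i do not know what to say")
print(autocomplete(model, "i", 2))  # i do not
```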

          • Ech@lemm.ee · 7 hours ago

            Talking things out to ourselves can often be useful. It could certainly be good for that.

      • Monkey With A Shell@lemmy.socdojo.com · 9 hours ago

        I think I’m on a 3060 or so, and it works decently depending on the model. I can generally get away with around 13B, or some 20B+ at Q4 or so, but they get really slow by that point.

        It’s a lot of messing around to find something that performs decently while not being so limited that it gets crazy repetitive or says loony things.