• Phen@lemmy.eco.br · 21 hours ago

    I once asked ChatGPT for the name of an anime that I couldn’t remember. I described the whole premise of the anime and then some details of the final episode. ChatGPT says “Steins;Gate”. I say “no, it’s not as famous as that”. It then says “Erased”, and proceeds to describe it, showing that my description had actually been very effective.

    I asked “why did you say Steins;Gate if I described Erased so well?”: “you mentioned time travel and Steins;Gate is a popular anime about time travel”

    “but I also mentioned a lot of other stuff that doesn’t match anything in Steins;Gate, like the details about the villain or how the time traveling works”

    “yeah my bad I just went by popularity”

    “next time, how can I phrase my questions in a way that would make you consider the whole input instead of just using some key information in it?”

    “you could have mentioned details about the villain or how the time traveling works and I would have used that to rule out Steins;Gate”

    “but I did”

    “yeah sorry about that, next time try giving me details about the villain or how the time traveling works”

    • solsangraal@lemmy.zip · 20 hours ago

      hilarious. just the other day i had a whole conversation with it about how to send feedback to openai, and it gave me bogus instructions the whole time before it finally said something like “they keep changing everything, just email them”

      • Phen@lemmy.eco.br · 19 hours ago

        I’ve recently been working with a niche tool that has very little documentation on the web but is open source and has a ton of discussions on public email groups. ChatGPT is sometimes able to figure out what param I need to send for specific stuff in that tool even when there are zero Google matches for the param name, but more often than not it just hallucinates stuff or mentions things that no longer exist. I’ve gotten into the habit of always asking things like “is that right?” or “is that answer up to date?” before even reading its first response, and it often replies with things like “no, that param only exists in some other similar tool” or “no, that API has been deprecated” and shit like that.

        If it were up to me I wouldn’t be using ChatGPT at all, given all the time it’s wasted with random stuff it makes up. But whatever training data OpenAI used, it clearly had more information about the niche stuff I’m working with than the web does at this point - so sometimes it can still save me time too.

        • solsangraal@lemmy.zip · 18 hours ago

          i use it exclusively to condense 1000-word text blocks down to 100 words, and it’s good enough at that that the time it saves me is worth the (sometimes) slightly lower quality than what i could have done myself