• @knightly@pawb.social · 14 points · 3 months ago (edited)

    I’m a former singularitarian, and sadly, we live in a universe that will not be seeing a technological singularity.

    Moore’s Law has been dead for over a decade, tech isn’t advancing like it did when we were kids, and we’ve reached the hard physical limits of electronic transistor technology. Even if we manage to get one of the proposed alternatives to work (photonics, spintronics, plasmonics, etc), the most we’ll see is one or two more price-performance doublings before those hit a wall too.

    The technological curve isn’t exponential, it’s sigmoid. Those economists know what they’re talking about because they’ve internalized Alvin Toffler’s “Limits to Growth” as a prerequisite for futures studies.
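
    The exponential-versus-sigmoid distinction can be made concrete with a toy comparison. This is a quick Python sketch, not a model of real technology curves; the growth rate, ceiling, and midpoint are arbitrary illustration values. The point it shows: a logistic (sigmoid) curve is nearly indistinguishable from an exponential early on, then flattens against its ceiling.

```python
import math

def exponential(t, rate=0.35):
    """Unbounded exponential growth: keeps doubling forever."""
    return math.exp(rate * t)

def sigmoid(t, rate=0.35, ceiling=100.0, midpoint=20.0):
    """Logistic (sigmoid) growth: looks exponential early on,
    then saturates as it approaches the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Early on, the two curves grow at nearly the same relative rate:
early_exp = exponential(5) / exponential(0)
early_sig = sigmoid(5) / sigmoid(0)

# Late, the exponential still multiplies while the sigmoid has flatlined:
late_exp = exponential(40) / exponential(35)
late_sig = sigmoid(40) / sigmoid(35)

print(f"early growth: exp x{early_exp:.2f}, sigmoid x{early_sig:.2f}")
print(f"late growth:  exp x{late_exp:.2f}, sigmoid x{late_sig:.2f}")
```

    Sampled early, the two are practically the same curve, which is why a saturating trend can pass for an exponential one until it hits the wall.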

    • @Phoenix5869 · 8 points · 3 months ago

      Holy shit, finally someone else who gets what I’m saying!

      I completely agree. Moore’s law is dead, photonic computing and graphene transistors (which I’ve heard are set to replace it) probably won’t be here for a while, tech has slowed down, and overall, things are not looking good.

      I am very scared of the possibility of a long period of slow, incremental growth. But unfortunately, I think deep down I know it’s a very real possibility. The world of 2030 may look pretty much the same as today, with 2040 not looking much different than that.

      I’m a former singularitarian,

      I’m glad to see that a former singularitarian has seen the truth. While I wasn’t too deep into the Kurzweil Kool-Aid, I did at one point think that we were getting AGI within a couple of decades. With the slowdown of computing progress, that clearly isn’t happening.

        • @Wanderer@lemm.ee · 4 points · 3 months ago

        The thing is, the human brain is very small and very efficient, and it has limits that come from being biological in nature.

        Since the human brain exists, we know it is possible to build something like it. So if we make something equally functional, we can then make a new version ten times as big.

        The problem is making that first artificial brain, but once we make it, I don’t see how we couldn’t have an explosion in intelligence.

        • @Phoenix5869 · 2 points · 3 months ago

          How exactly are we supposed to replicate the human brain, when we barely understand it?

              • @knightly@pawb.social · 0 points · 3 months ago

                Which is why neural network computer science needs psychologists and sociologists to regulate it.

                It’s only a matter of time before corps start trying to simulate human brains, but even the smaller models deserve at least the same level of consideration that we give to animals.

            • @Phoenix5869 · 1 point · 3 months ago

              I get what you’re trying to say, but making fire and understanding the human brain are not even remotely on the same level.

          • MxM111 · 1 point · 3 months ago

            We do not have the task of replicating the brain, only intelligence. And even there, it is not replication that we want or are doing.

        • MxM111 · 1 point · 3 months ago

          The human brain is not very efficient. It is just barely efficient enough to start a civilization - it has not had time to evolve within civilization to become more intelligent. Think about how much more intelligent we would be if we continued evolving in the same direction, as smart civilization-builders, for another million years.

          • @Wanderer@lemm.ee · 1 point · 3 months ago

            Okay. If anything, that makes it more likely we will have some huge intelligence jump.

    • @randomsnark@lemmy.ml · 3 points · 3 months ago

      I can’t find a book called Limits to Growth by Alvin Toffler. Were you thinking of the Donella Meadows et al book of that title, or some other book by Toffler? Or has my google-fu just failed me? If the latter I’d love a link or something so I can check it out.

      • @knightly@pawb.social · 3 points · 3 months ago

        No, that’s my bad. For some reason I was also thinking of Alvin Toffler’s “Future Shock” and got the authors mixed up. XD

    • @EspiritdescaliMA · 2 points · 3 months ago

      Limits to Growth predicts collapse, though, so I rather hope it’s not accurate.

    • MxM111 · 2 points · 3 months ago

      Interesting to see this statement when LLMs today are so powerful, when just 3 years ago nobody had even heard of ChatGPT.

      • @knightly@pawb.social · 3 points · 3 months ago

        If we were on the singularity timeline we’d have actual AI at this point, not just Big Autocomplete.

        • MxM111 · 0 points · 3 months ago

          It is actual AI, and a very good one. It is just not AGI, yet.

          People make the mistake of associating the method of training with the final result. Plus, are you sure that a big part of your intelligence is not autocomplete?
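
          The “autocomplete” framing in this exchange can be made concrete. LLMs are trained on next-token prediction, and the crudest possible next-token predictor is a bigram counter. The sketch below is purely illustrative (the corpus and function names are made up, and a real LLM shares only the prediction objective with this, not the mechanism), but it shows what the training target actually is.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def autocomplete(follows, word):
    """Predict the continuation seen most often in training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Toy corpus, purely illustrative.
corpus = "the brain is small the brain is efficient the model is large"
model = train_bigrams(corpus)
print(autocomplete(model, "the"))  # "brain" follows "the" most often here
```

          Whether scaling this objective up by many orders of magnitude yields something deserving the word “intelligence” is exactly what the two sides of this thread disagree about.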