• Lugh (OP) · 3 months ago

    It amazes me how little the implications of the tech singularity enter most people’s thoughts, as famous as the concept has become. When most people talk about the future, they do it without any regard for those implications. Even more amazingly, when it comes to academics and intellectuals paid to think about the future, almost none of them ever do either. I’ve yet to see an economist who seems to know about the concept. When economists make predictions about the effect of technology on our economic future, they are far more likely to reference trends from the early 20th, or even the 19th, century.

    I suspect all the problems and opportunities the tech singularity will create won’t be dealt with in advance in a planned, orderly fashion. Rather, it will be like March 2020 with Covid: suddenly we’ll be scrambling for emergency responses to a brand-new reality.

    • @knightly@pawb.social · 3 months ago

      I’m a former singularitarian, and sadly, we live in a universe that will not be seeing a technological singularity.

      Moore’s Law has been dead for over a decade, tech isn’t advancing like it did when we were kids, and we’ve reached the hard physical limits of electronic transistor technology. Even if we manage to get one of the proposed alternatives to work (photonics, spintronics, plasmonics, etc.), the most we’ll see is one or two more price-performance doublings before those hit a wall too.

      The technological curve isn’t exponential, it’s sigmoid. Those economists know what they’re talking about because they’ve internalized Alvin Toffler’s “Limits to Growth” as a prerequisite for futures studies.
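
      To make the shape concrete, here’s a minimal sketch (with made-up growth-rate and ceiling values, purely illustrative) of why a sigmoid is so easy to mistake for an exponential early on:

      ```python
      import math

      # Illustrative only: hypothetical growth rate and ceiling.
      # An exponential doubles forever; a logistic (sigmoid) curve tracks it
      # closely at first, then flattens as it approaches a hard limit.

      RATE = 0.5       # hypothetical growth rate
      CEILING = 100.0  # hypothetical hard limit (e.g. physical transistor limits)

      def exponential(t):
          return math.exp(RATE * t)

      def sigmoid(t):
          # Logistic curve, normalized so sigmoid(0) == exponential(0) == 1.
          return CEILING / (1.0 + (CEILING - 1.0) * math.exp(-RATE * t))

      for t in range(0, 25, 4):
          print(f"t={t:2d}  exponential={exponential(t):10.1f}  sigmoid={sigmoid(t):6.1f}")
      ```

      Early on the two columns are nearly identical; by the end the exponential has exploded while the sigmoid is pinned near its ceiling.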

      • @Phoenix5869 · 3 months ago

        Holy shit, finally someone else who gets what I’m saying!

        I completely agree. Moore’s Law is dead; photonic computing and graphene transistors (which I’ve heard are set to replace it) probably won’t be here for a while; tech has slowed down; and overall, things are not looking good.

        I am very scared of the possibility of a long period of slow, incremental growth. But unfortunately, I think deep down I know it’s a very real possibility. The world of 2030 may look pretty much the same as today, and 2040 may not look much different than that.

        > I’m a former singularitarian,

        I’m glad to see that a former singularitarian has seen the truth. While I wasn’t too deep into the Kurzweil Kool-Aid, I did at one point think that we were getting AGI within a couple of decades. With the slowdown of computing progress, that clearly isn’t happening.

        • @Wanderer@lemm.ee · 3 months ago

          The thing is, the human brain is very small and very efficient, and it has some limits on what it can be made from, being biological in nature.

          Since the human brain exists, we know it is possible to make. So if we build something equally functional, we can then just build a new version ten times as big.

          The problem is making that first artificial brain, but once we make it, I don’t see how we couldn’t have an explosion in intelligence.

          • @Phoenix5869 · 3 months ago

            How exactly are we supposed to replicate the human brain, when we barely understand it?

                • @knightly@pawb.social · 3 months ago

                  Which is why neural network computer science needs psychologists and sociologists to regulate it.

                  It’s only a matter of time before corps start trying to simulate human brains, but even the smaller models deserve at least the same level of consideration that we give to animals.

              • @Phoenix5869 · 3 months ago

                I get what you’re trying to say, but making fire and understanding the human brain are not even remotely on the same level.

            • MxM111 · 3 months ago

              We do not have the task of replicating the brain, only intelligence. And even there, it is not replication that we want, nor what we are doing.

          • MxM111 · 3 months ago

            The human brain is not very efficient. It only just became efficient enough to start civilization; it has not had time to evolve within civilization to become more intelligent. Think about how much more intelligent we would be if we continued evolving in the same direction, as smart civilization-builders, for another million years.

            • @Wanderer@lemm.ee · 3 months ago

              Okay. If anything, that makes it more likely we will have some huge intelligence jump.

      • @randomsnark@lemmy.ml · 3 months ago

        I can’t find a book called Limits to Growth by Alvin Toffler. Were you thinking of the Donella Meadows et al. book of that title, or some other book by Toffler? Or has my google-fu just failed me? If it’s the latter, I’d love a link or something so I can check it out.

        • @knightly@pawb.social · 3 months ago

          No, that’s my bad. For some reason I was also thinking of Alvin Toffler’s “Future Shock” and got the authors mixed up. XD

      • MxM111 · 3 months ago

        Interesting to see this statement when LLMs today are so powerful, and just three years ago nobody had even heard of ChatGPT.

        • @knightly@pawb.social · 3 months ago

          If we were on the singularity timeline we’d have actual AI at this point, not just Big Autocomplete.

          • MxM111 · 3 months ago

            It is actual AI, and a very good one. It is just not AGI, yet.

            People make the mistake of associating the method of training with the final result. Plus, are you sure that a big part of your intelligence is not autocomplete?

      • @Espiritdescali · 3 months ago

        Limits to Growth predicts collapse, though, so I rather hope it’s not accurate.

    • MxM111 · 3 months ago

      From the linked paper published in 1993:

      > Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

      Thirty years was last year. So he died because he was upset that his most famous prediction did not happen :(

  • AFK BRB Chocolate · 3 months ago

    I really loved both A Fire Upon the Deep and Rainbows End. The latter was particularly fascinating.

  • @Septimaeus@infosec.pub · 3 months ago

    His theory will be increasingly relevant in the coming years, I think. It won’t look the same, and it will be more of a period or epoch than a discrete event, but the kernel of it was true, to the point of statistical inevitability.

    Unfortunately, the way singularity theory is handled by pop culture, and honestly by most of the community, especially those of the EA crowd, is terribly flawed. In particular, the fear of so-called AGI, murderous robots, etc. is mostly unfounded, while the real dangers and their current relevance remain mostly unaddressed.

    I’ll write up an explanation if enough people are interested, but in summary:

    1. Runaway AGI is highly improbable due to (a) conditional probability and (b) thermodynamics, and it has nothing to do with the heat wall.
    2. Estimating the arrival of the singularity is silly, because it has already begun.
    3. The critical technological safeguard against negative outcomes isn’t redundant kill switches and black-box barriers. It’s just machine ethics, a nascent research space that currently has little profit incentive and thus has barely made it off the ground.
  • LanternEverywhere · 3 months ago

    He missed seeing it by just a few years. We’re clearly in the early stages of it starting to happen for real.

    • bane_killgrind · 3 months ago

      There’s an optimistic and a cynical perspective here.

      The optimist says yes.

      The cynic says these LLMs are just statistical generation models that create outputs statistically similar to the training data, conditioned on a prompt.

      That’s not AI; that’s derivative work, automated.
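
      To make “statistical generation” concrete, here’s a toy sketch: a hypothetical bigram model over a made-up corpus. Real LLMs are enormously larger and use learned weights rather than raw counts, but the generate-by-sampling principle is similar:

      ```python
      import random
      from collections import defaultdict

      # Toy bigram "language model": for each word, record which words
      # followed it in the training text, then generate output by repeatedly
      # sampling a statistically plausible next word.

      training_text = "the cat sat on the mat and the dog sat on the rug"

      follows = defaultdict(list)
      words = training_text.split()
      for prev, nxt in zip(words, words[1:]):
          follows[prev].append(nxt)

      def generate(start, max_words=8):
          out = [start]
          for _ in range(max_words):
              options = follows.get(out[-1])
              if not options:
                  break  # dead end: this word never had a successor
              out.append(random.choice(options))
          return " ".join(out)

      print(generate("the"))  # e.g. "the cat sat on the mat and the dog"
      ```

      The output looks fluent because it is statistically similar to the training data, not because anything understood it; on the cynical view, an LLM is this idea scaled up by many orders of magnitude.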

      • @meyotch@slrpnk.net · 3 months ago

        But it’s really good at spitting out JavaScript code that works the first time you run it. Of all the languages I have tried an LLM assistant with, the JavaScript output is the best. I’m guessing that’s because it had almost every working webpage on the internet to learn from.

        I mention this because: how is being able to construct working code from a plain-language description not a type of intelligence? Perhaps a narrow form, but the proof is in the pudding: it outputs working code that fits an arbitrary purpose.

        Just bringing that up for discussion. I don’t really care whether LLMs are ‘intelligent’ or not, but the utility is obvious. Even if the LLM isn’t smart, it still speeds progress by acting as an extension of my own so-called intelligence.

        • bane_killgrind · 3 months ago

          It’s just another grammar. It’s telling a story about variables.

  • @Espiritdescali · 3 months ago

    Very sad to hear this. I loved his books, and his ideas were visionary at the time and are mainstream now.