• Blue_Morpho@lemmy.world · 1 day ago

    He seems right about everything. It’s weird, though, that you can’t say LLMs are useful without being downvoted.

    Like, if tech doesn’t achieve fully conscious superhuman intelligence, it’s useless.

    • knightly the Sneptaur@pawb.social · 1 day ago

      Any real utility they might have had is wholly overshadowed by the massive capital overinvestment, and by folks shoehorning them into everything while grifting on that overpromise.

      • PattyMcB@lemmy.world · 1 day ago

        Absolutely. AI in everything is counterproductive, especially if it’s bad, or if it has nothing to do with the function of the system into which it’s shoehorned.

      • Scratch@sh.itjust.works · 1 day ago

        And the environmental impact of training and running LLMs, just so I can ask GPT why my code no work?!

      • Blue_Morpho@lemmy.world · 1 day ago

        Capital overinvestment, then a crash, is just how capitalism rolls. It’s been that way forever.

        The Internet had overinvestment and a crash in 2000; game consoles and home computers before that. Decades ago, when I was looking into more office space for my ISP, the real estate agent talked about how the Internet was just the latest in a long chain of tech bubbles he had seen, going back to the minicomputer bubble of the late 1960s.

        Even within the submarket of AI there have been hype cycles and crashes, like neural nets 30 years ago. Today voice recognition and image-to-text are in everything, yet no one complains, “Why is AI shoehorned into my camera app?” That’s because it’s no longer seen as AI, but as a feature.

    • Ogmios@sh.itjust.works · 1 day ago

      Way back, people attempted to make automatons from cogs and gears; while that didn’t work, the basic technology was still extremely useful in appropriate applications.

      • veroxii@aussie.zone · 1 day ago

        I saw a quote the other day, and I’m paraphrasing: “AI is not going to replace your job. You’re going to be replaced by someone who knows how to properly use and leverage AI.”

      • Blue_Morpho@lemmy.world · 1 day ago

        I’m talking without knowing anything, but it seems like LLMs aren’t orthogonal, just insufficient on their own. That is, the way our consciousness has a library of information to draw on, organized by references, the LLM could be the library that another software component draws upon for actual reasoning.

        That’s part of what DeepSeek has been trying to do: they put a bunch of induction logic for different categories in front of the LLM.
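
        Roughly the shape I have in mind, as a toy sketch (all names here are made up; this is not DeepSeek’s actual design, just the “LLM as library, reasoning elsewhere” split):

        ```python
        # Toy sketch only: ask_llm() stands in for any chat-completion call.

        def ask_llm(prompt: str) -> str:
            """Hypothetical wrapper around some LLM API; returns plain text."""
            raise NotImplementedError("plug in a real model here")

        def reason(question: str) -> str:
            # 1. A non-LLM planner decides which lookups it needs (trivial here).
            lookups = [f"State one concise, relevant fact about: {question}"]

            # 2. The LLM acts only as the "library": fetch candidate facts.
            facts = [ask_llm(q) for q in lookups]

            # 3. Ordinary code (the separate "reasoning" component) filters
            #    and combines them, instead of trusting the LLM's own logic.
            vetted = [f for f in facts if f.strip()]
            return " ".join(vetted) if vetted else "no usable facts retrieved"
        ```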

        • moonlight@fedia.io · 1 day ago

          I agree, although this seems like an unpopular opinion in this thread.

          LLMs are really good at organizing and abstracting information, and it would make a lot of sense for an AGI to incorporate them for that purpose. It’s just that there’s no actual thought process happening, and in my opinion, “reasoning models” like DeepSeek are entirely insufficient and a poor substitute for true reasoning capabilities.

      • moonlight@fedia.io · 1 day ago

        I don’t think so, or rather, we don’t know yet. LLMs are not the full picture, but they might be part of it. I could envision a future AGI that has something similar to a modern LLM as the “language / visual centers of the brain”. To continue that metaphor, the part that’s going to be really difficult is the frontal lobe.

        edit: Orthogonal to actual reasoning? Sure. But not to “general AI”.

      • MartianSands@sh.itjust.works · 1 day ago

        That’s not obviously the case. I don’t think anyone has a sufficient understanding of general AI, or of consciousness, to say with any confidence what is or is not relevant.

        We can agree that LLMs are not going to be turned into general AI, though.

  • Comtief@lemm.ee · 1 day ago

    Idk if it’s about to burst; I find LLMs quite helpful. Agree about the rest, though.

      • Alteon@lemmy.world · 18 hours ago

        - Drafting emails and cleaning them up.
        - Summarizing PDFs.
        - Assisting with more complex math or engineering concepts, to at least give me some ideas of where I can start looking (i.e., like a wiki, but faster).
        - Materials analysis: I can feed it a spectrograph, and it can output several AISI or AMS specification numbers to look at.
        - Cleaning up papers or reports that I’ve written.
        - Helping with my resume.

        It’s extremely helpful, and a great starting point on a lot of things.

      • Comtief@lemm.ee · 23 hours ago

        Summarizing or analyzing longer texts. For example, I struggle with attention when listening to an audiobook, so I let it summarize the chapter for me to get caught up. Translating words or sentences between languages; it’s much better at that than DeepL or Google Translate. Also translating subtitles: it can translate whole episodes within minutes these days. They’ve also helped me out of a few tight spots with coding and scripts at work.

        In cases where I’d normally spend hours troubleshooting and googling because it’s above my pay grade, I can solve the problem much more easily with an LLM. It’s still just a tool, though.

  • PattyMcB@lemmy.world · 1 day ago

    Just like “THE CLOUD,” AI has its uses, but just like the cloud, I think we’ll see a big pulling back from its use in everything and anything. I would definitely consider this trend a bubble.

  • NigelFrobisher@aussie.zone · 23 hours ago

    People around me keep talking about exponential improvements to models being guaranteed, even though that seems to have stopped happening a while ago now.

  • casmael@lemm.ee · 1 day ago

    ‘AI’ is a sack of shit, and it should get back in the bin where it belongs.

  • Rowan Brad Quni · 22 hours ago

    It seems difficult for anyone to make that statement anywhere near definitively until we have quantum computers in our pockets. The real issue with scaling is the binary digital computing paradigm, not the limits of AI/neural-network intelligence. In fact, it really depends on how you define “intelligence”; my own research into a unified “theory of everything” indicates that ours as humans is fundamentally repetitive imitation (mimicry). No different from AI learning algorithms, simply more advanced: we have far more neurons than any AI model.

  • sunzu2@thebrainbin.org · 1 day ago

    An LLM is a useful tool, but it is just that: a tool. It makes white-collar workers more productive.

    The worker still has to have the skills, knowledge, and experience to make professional decisions. An LLM can’t replace that, and I doubt it will any time soon.

  • CanadaPlus@lemmy.sdf.org · 7 hours ago

    A lot of the critics were right about it for all the wrong reasons. You don’t get to claim victory on that; it’s actually better to make a good argument that later turns out to have a fatal flaw.

    I don’t know this exact guy’s history, though. He might not be in the picture. Here’s the article he mentioned from 3 years ago. He makes some pretty good points, and predicts something like chain-of-thought, although I have a feeling radiology AI has come a long way. The post shown about the market trajectory from about a year ago was also spot on.

    Tangentially, I fully believe a human White House staffer chose to use top-level internet domains to break the tariffs down. An LLM would have done something more bespoke. If you have shitty resources and an unclear request, it’s not even a terrible approach.

  • cronenthal@discuss.tchncs.de · 1 day ago

    I just wrote down my thoughts on this topic today… https://discuss.tchncs.de/post/33798614

    First, let’s get illusions out of the way: LLMs are very fancy database queries that yield complex and often useful results with very simple input. That is impressive technology by any measure, but it’s about as “intelligent” as SQL.

    • hendrik@palaver.p3x.de · 1 day ago

      I don’t think I’d support that parallel. With SQL, I always get the correct result back (as long as the DB keeps running). I’ve tried a bit of programming and question-asking lately, and I must say AI isn’t really 100% accurate. A lot depends on the complexity of the query and whether there are any traps along the way, because in contrast to databases, AI will make up an answer to most questions, even if it’s wrong, and it’ll sprinkle in some inaccuracies here and there. I personally struggle a bit with that kind of behaviour. It’s super useful to be able to ask expert questions, but I think I like traditional databases, knowledge bases, and documentation better, because with those it’s super clear whether I’m getting 100% accurate information or just reading random answers from Stack Overflow… AI is often not like a knowledge database, but like one that also rolls dice and decides to fool me every now and then.

      • cronenthal@discuss.tchncs.de · 1 day ago

        Trust me, the analogy works if you understand what you’re actually getting back from an LLM. Most people think of it as an answer to a question, but in reality you get a series of tokens in response to your query of tokens. In many cases this will very, very closely resemble something useful, like human language or source code, but that’s really just our human interpretation of the result. To the LLM it’s just a probable series of numbers without any meaning whatsoever. And, given the same inputs and settings, the responses are entirely predictable; there is no intelligence at work here. It’s really a very complex way to query a very complex and large dataset, which incidentally spits out results that have an uncanny resemblance to human language. Somehow this is enough to fool the majority of people into thinking the system is intelligent, and you can see why. But it’s not. And the companies involved do their very best to keep this illusion alive for as long as they can.
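
        To make the “probable series of numbers” concrete, here’s a toy version of a single decoding step (schematic only; a real model scores tens of thousands of tokens with a neural net, but the selection logic is the same idea):

        ```python
        import math, random

        # Schematic of one decoding step: the model has already scored
        # each candidate token ("logits"). Tiny vocabulary, made-up numbers.
        logits = {"cat": 2.0, "dog": 1.5, "the": 0.5}

        def next_token(logits, temperature, seed=None):
            if temperature == 0:
                # Greedy decoding: always the single highest-scoring token.
                # Same input, same settings -> same output, every time.
                return max(logits, key=logits.get)
            # Otherwise: softmax over temperature-scaled scores, then sample.
            scaled = {t: math.exp(s / temperature) for t, s in logits.items()}
            rng = random.Random(seed)
            return rng.choices(list(scaled), weights=list(scaled.values()))[0]

        print(next_token(logits, temperature=0))             # "cat", always
        print(next_token(logits, temperature=1.0, seed=42))  # reproducible sample
        ```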

        • hendrik@palaver.p3x.de · 1 day ago

          I get what you say. I’m still not convinced. With something like SQL, my query is an exact order: fetch me these rows, do these operations on those fields, and return that. And it does exactly that. With LLMs, I put in human language, it translates that into some unknown representation, and it does autocomplete. I think that’s a different mechanism, and in consequence, a different thing gets returned. Think of asking a database an exact question, like counting the number of users and telling me which servers have the most users: you get the answer to that question. If I query an AI, that also gives me an answer, and it may be deterministic once I set the temperature to zero. But I’ve found LLMs tend to “augment” their answers with arbitrary “facts”. Once it knows that Reddit, for example, is a big platform, it won’t really look at the news article I gave it or the numbers in it. If it’s a counter-intuitive finding, it’ll base its answer on its background knowledge and disregard the other numbers, leading to an incorrect answer. That tends to happen to me with more complex things. So I don’t think it’s the correct tool for things like summarization, or half the things databases are concerned with.
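
          For contrast, here’s the kind of exact order I mean, as a tiny self-contained example (toy data, but the mechanism is the real thing):

          ```python
          import sqlite3

          # The answer follows mechanically from the stored rows;
          # nothing gets "augmented" with outside knowledge.
          con = sqlite3.connect(":memory:")
          con.execute("CREATE TABLE users (name TEXT, server TEXT)")
          con.executemany(
              "INSERT INTO users VALUES (?, ?)",
              [("a", "lemmy.world"), ("b", "lemmy.world"), ("c", "fedia.io")],
          )

          # Count users and rank servers by user count.
          rows = con.execute(
              "SELECT server, COUNT(*) AS n FROM users "
              "GROUP BY server ORDER BY n DESC"
          ).fetchall()
          print(rows)  # [('lemmy.world', 2), ('fedia.io', 1)]
          ```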

          With simpler things, I’m completely on your side. It gets simple questions right almost every time, and it has an astounding pile of knowledge available. It seems to be able to connect information and apply it to other things. I’m always amazed by what it can do, and by its shortcomings, a lot of which aren’t very obvious. I’m a bit curious whether we’ll one day be able to improve LLMs to a state where we can steer them into being truthful (or creative), and control what they base their responses on…

          I mean, we kind of want that. I frequently see some GitHub bot or help bot return incorrect answers. At the same time, we want things like Retrieval-Augmented Generation: AI assistants helping workers be more efficient, or doctors avoiding mistreatment by looking through the patient’s medical records… But I think these people often confuse AI with a database that gives a summary, and I don’t think that’s what it is. It will do for the normal case, but you really have to pay attention to what current AI actually is if you use it for critical applications, because it’s knowledgeable, but at the same time not super smart, and it tends to be weird with edge cases.
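
          For what it’s worth, here’s the RAG idea reduced to a toy sketch (crude keyword overlap standing in for real embedding search; ask_llm() is a made-up placeholder, not any specific API):

          ```python
          # Toy Retrieval-Augmented Generation: fetch relevant text first,
          # then let the model phrase an answer from it.

          DOCS = [
              "Patient record: penicillin allergy noted in 2019.",
              "Ops runbook: backups run nightly at 02:00 UTC.",
          ]

          def ask_llm(prompt: str) -> str:
              raise NotImplementedError("plug in a real model here")

          def retrieve(query: str) -> str:
              # Pick the document sharing the most words with the query.
              words = set(query.lower().split())
              return max(DOCS, key=lambda d: len(words & set(d.lower().split())))

          def answer(query: str) -> str:
              context = retrieve(query)
              prompt = f"Answer ONLY from this context:\n{context}\n\nQuestion: {query}"
              return ask_llm(prompt)
          ```

          The retrieval half stays exact; the LLM only phrases an answer from what was fetched. That narrows the making-things-up problem, but doesn’t eliminate it.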

          And I think that’s kind of the difference. “Traditional” computing handles edge cases just as well as the regular stuff: it looks up information and either matches the query or returns nothing. But it can’t answer a lot of questions unless you tell the computer exactly what steps to do.