• IrateAnteater@sh.itjust.works · 1 day ago (+11/−3)

    The only time I disagree with this is when the business is substituting “AI” for “machine learning”. I’ve personally seen that work in applications where traditional methods don’t work very well (vision-guided industrial robot movement in this case).

    • Hotzilla@sopuli.xyz · 24 hours ago (+3/−4)

      These new LLMs and vision models have their place in the software stack. They enable some solutions that were nearly impossible in the past (mandatory xkcd ref: https://xkcd.com/1425/ – this is now a trivial task).
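
      As a concrete illustration of how trivial the xkcd task has become (my sketch, not anything from a real project; it assumes the Hugging Face transformers library and its default pretrained image classifier, and the file name is a placeholder), in Python:

          # Hypothetical sketch: "is there a bird in this photo?" (xkcd 1425)
          from transformers import pipeline

          classifier = pipeline("image-classification")    # downloads a default pretrained vision model
          predictions = classifier("park_photo.jpg")       # placeholder image path
          # crude label check, for illustration only
          print(any("bird" in p["label"].lower() for p in predictions))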

      ML works very well on large data sets and numbers, but it is poor at handling text data. LLMs, on the other hand, are shit with large data and numbers, but they are good at handling small amounts of text. It is a tool, and properly used a very powerful one. And it is not a magic bullet.

      One easy example from real-world requirements: you have five paragraphs of human-written text, and you need to automatically summarize them into a header. Five years ago, if some project owner had requested this feature, I would have said string.substring(100), live with it. Now it is pretty much one line of code.
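
      For example, a minimal Python sketch of what that “one line” can look like today, assuming the OpenAI client library, with the model name, prompt, and input text as placeholders:

          # Hypothetical sketch: summarize a few paragraphs into a short header via a hosted LLM.
          from openai import OpenAI

          client = OpenAI()  # assumes an API key in the environment
          article = "...five paragraphs of human-written text..."  # placeholder input
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[{"role": "user", "content": f"Summarize this as a one-line header:\n{article}"}],
          )
          print(response.choices[0].message.content)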

      • TheTechnician27@lemmy.world · 22 hours ago (+5)

        Even though I understand your sentiment that different types of AI tools have their place, I’m going to try clarifying some points here. LLMs are machine learning models; the ‘P’ in ‘GPT’ – “pretrained” – refers to how it’s already done some learning. Transformer models (GPTs, BERTs, etc.) are a type of deep learning, which is a branch of machine learning, which is a field of artificial intelligence. (edit: so for a specific example of how this looks nested: AI > ML > DL > Transformer architecture > GPT > ChatGPT > ChatGPT 4.0.) The kind of “vision guided industrial robot movement” the original commenter mentions is a type of deep learning (so they’re correct that it’s machine learning, but incorrect that it’s not AI). At this point, it’s downright plausible that the tool they’re describing uses a transformer model instead of traditional deep learning like a CNN or RNN.

        I don’t entirely understand your assertion that “LLMs are shit with large data and numbers”, because LLMs are trained on some of the largest datasets in human history. If you mean you can’t feed a large, structured dataset into ChatGPT and expect it to categorize new information from that dataset, then sure, because: 1) it’s pretrained, not a blank slate that specializes on the new data you give it, and 2) it takes the data in as plaintext rather than in a structured format. If you took a transformer model and trained it on the “large data and numbers”, it would work better than traditional ML.

        Non-transformer machine learning models do work with text data; LSTMs (a type of RNN) do exactly this. The problem is that they’re just way too computationally inefficient to scale well to training on gargantuan datasets (and consequently they don’t generate text well, if you want to use them for generation and not just categorization). In general, transformer models do literally everything better than traditional machine learning models (unless you’re doing binary classification on data which is always cleanly bisected, in which case the perceptron reigns supreme /s). Generally, though, yes, if you’re using “LLMs” to do things like image recognition, taking in large datasets for classification, etc., what you probably have isn’t just an LLM; it’s a series of transformer models working in unison, one of which will be an LLM.
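
        To make the LSTM point concrete, here’s a minimal PyTorch sketch (mine, with made-up names and sizes) of an RNN-family model taking in a variable-length sequence of token IDs and classifying it:

            # Hypothetical sketch: an LSTM (a type of RNN) doing text classification.
            import torch
            import torch.nn as nn

            class LSTMClassifier(nn.Module):
                def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_classes=2):
                    super().__init__()
                    self.embed = nn.Embedding(vocab_size, embed_dim)
                    self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                    self.head = nn.Linear(hidden_dim, num_classes)

                def forward(self, token_ids):                   # (batch, seq_len); seq_len can vary
                    _, (last_hidden, _) = self.lstm(self.embed(token_ids))
                    return self.head(last_hidden[-1])           # class logits

            model = LSTMClassifier()
            logits = model(torch.randint(0, 10000, (4, 37)))    # batch of 4 sequences, length 37
            print(logits.shape)                                 # torch.Size([4, 2])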


        Edit: When I mentioned LSTMs, I should clarify this isn’t just text data: RNNs (which LSTMs are a type of) are designed to work on pieces of data which don’t have a definite length, e.g. a text article, an audio clip, and so forth. The description of the transformer architecture in 2017 catalyzed generative AI so rapidly because it could train so efficiently on data not of a fixed size and then spit out data not of a fixed size. That is: like an RNN, the input data is not of a fixed size, and the transformed output data is not of a fixed size. Unlike an RNN, the data processing is vastly more efficient in a transformer because it can make great use of parallelization. RNNs were our main tool for taking in variable-length, unstructured data and categorizing it (or generating something new from it; these processes are more similar than you’d think), and since that describes most data, suddenly all data was trivially up for grabs.
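
        A rough PyTorch illustration of that efficiency difference (again my sketch, with arbitrary sizes, not anything from the thread): the RNN has to march through the sequence one step at a time, while self-attention handles every position in one batched operation:

            # Hypothetical sketch: sequential RNN processing vs. parallel self-attention.
            import torch
            import torch.nn as nn

            seq = torch.randn(1, 50, 64)   # (batch, sequence length, features); the length is arbitrary

            rnn = nn.RNN(input_size=64, hidden_size=64, batch_first=True)
            out_rnn, _ = rnn(seq)          # internally loops over all 50 time steps in order

            attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
            out_attn, _ = attn(seq, seq, seq)  # every position attends to every other at once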

    • TheTechnician27@lemmy.world · 1 day ago (+3/−8)

      Huh? Deep learning is a subset of machine learning, which is a subset of AI. This is like saying a gardening center is substituting “flowers” in for “chrysanthemums”.

      • IrateAnteater@sh.itjust.works · 1 day ago (+13)

        I don’t control what the vendor marketing guys say.

        If you’re expecting “technically correct” from them, you’ll be doomed to disappointment.

        • TheTechnician27@lemmy.world · 23 hours ago (+3/−10)

          My point, though, is that the scenario you just described is technically correct (edit: whereas you seem to be saying it isn’t technically correct; it’s also colloquially correct). Referring to “machine learning” as “AI” is correct in the same way that referring to “a rectangle” as “a quadrilateral” is correct.


          EDIT: I think some people are interpreting my comment as “b-but it’s technically correct, the best kind of correct!” pedantry. My point is that the comment I’m responding to seems to think they got it technically incorrect, but they didn’t. Not only is it “technically correct”, it’s completely, unambiguously correct in every way. They’re the ones who said “If you’re expecting ‘technically correct’ from them, you’ll be doomed to disappointment”, so I pointed out that I’m not doomed to disappointment, because the usage literally is correct colloquially and correct technically. Please see my comment below where I talk about why what they said about distinguishing AI from machine learning makes literally zero sense.

          • subignition@fedia.io · 1 day ago (+9/−1)

            Language is descriptive, not prescriptive. “AI” has come to be a specific colloquialism, and if you refuse to accept that, you’re going to cause yourself pain when communicating with people who aren’t as pedantic as you.

            • TheTechnician27@lemmy.world · 24 hours ago (+5/−6)

              Okay, at this point, I’m convinced no one in here has even a bare minimum understanding of machine learning. This isn’t a pedantic prescriptivism thing:

              1. “Machine learning” is a major branch of AI. That’s just what it is. Literally every paper and every book ever published on the subject will tell you that. Go to the Wikipedia page right now: “Machine learning (ML) is a field of study in artificial intelligence”. The other type of AI of course means that the machine can’t learn and thus a human has to explicitly program everything; for example, video game AI usually doesn’t learn. Being uninformed is fine; being wrong is fine. There’s calling out pedantry (“reee you called this non-Hemiptera insect a bug”) and then there’s rendering your words immune to criticism under a flimsy excuse that language has changed to be exactly what you want it to be.

              2. Transformers, used in things like GPTs, are a type of machine learning. So even if you say that “AI is just generative AI like LLMs”, then, uh… those are still machine learning. The ‘P’ in GPT literally stands for “pretrained”, indicating it’s already done the learning part of machine learning (see the short sketch after this list). OP’s statement literally self-contradicts.

              3. Meanwhile, deep learning (DNNs, CNNs, RNNs, transformers, etc.) is a branch of machine learning (likewise per every paper, every book, and Wikipedia: “Deep learning is a subset of machine learning that focuses on […]”) wherein the model identifies its own features instead of the human needing to supply them. Notably, the kind of vision detection the original commenter is talking about is deep learning, just as a transformer model is. So “AI when they mean machine learning”, by their own standard that we need to be specific, should be “AI when they mean deep learning”.
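
              To make point 2 above concrete (an illustrative Python sketch of mine, not anything from the thread; the model name and prompt are just examples, assuming the Hugging Face transformers library): a GPT arrives with the learning already done, so you can load a small pretrained one and generate text with no training step at all.

                  # Hypothetical sketch: the 'P' in GPT means the learning has already happened.
                  from transformers import pipeline

                  generator = pipeline("text-generation", model="gpt2")  # small pretrained GPT, no training step
                  out = generator("Machine learning is a field of", max_new_tokens=20)
                  print(out[0]["generated_text"])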

              The reason “AI” is used all the time to refer to things like LLMs etc. is because generative AI is a type of AI. Just like “cars” are used all the time to refer to “sedans”. To be productive about this: for anyone who wants to delve (heh) further into it, Goodfellow et al. have a great 2016 textbook on deep learning*. In a bit of extremely unfortunate timing, transformer models were described in a 2017 paper, so they aren’t included (generative AI still is), but it gives you the framework you need to understand transformers (GPTs, BERTs). After Goodfellow et al., just reading Google’s original 2017 paper gives you sufficient context for transformer models.

              *Goodfellow et al.’s first five chapters cover traditional ML models so you’re not 100% lost, and scikit-learn in Python can help you try these traditional ML techniques to see what they’re like (there’s a quick sketch below).
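
              For instance (my example, not the commenter’s), a traditional, non-deep ML model in scikit-learn, where the features are a fixed set of human-chosen measurements rather than representations the model learns itself:

                  # Hypothetical sketch: a traditional ML classifier in scikit-learn.
                  from sklearn.datasets import load_iris
                  from sklearn.ensemble import RandomForestClassifier
                  from sklearn.model_selection import train_test_split

                  X, y = load_iris(return_X_y=True)             # 4 hand-defined measurements per flower
                  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
                  model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
                  print(model.score(X_test, y_test))            # accuracy on held-out data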


              Edit: TL;DR: You can’t just weasel your way into a position where “AI is all the bad stuff and machine learning is all the good stuff” under the guise of linguistic relativism.

              • petrol_sniff_king@lemmy.blahaj.zone · 22 hours ago (+5)

                Edit: TL;DR: You can’t just weasel your way into a position where “AI is all the bad stuff and machine learning is all the good stuff” under the guise of linguistic relativism.

                You can, actually, because the inverse is exactly what marketers are vying for: AI, a term with immense baggage, is easier for a layman to recognize, and it implies a hell of a lot more than it actually does. It is intentionally leaning on the very cool futurism of AI to sell itself as the next evolutionary stage of human society, and in doing so it has consumed all conversation about AI entirely. It is Hannibal Lecter wearing the skin of decades of sci-fi movies.

                “Machine learning” is not a term used by sycophants (as often), and so it implies different things about the person saying it. For one, they may have actually seen a college with their eyes.

                So, you seem to be implying there isn’t a difference, but there is: people who suck say one, people who don’t say the other. No amount of academic rigor can sidestep this problem.

                • TheTechnician27@lemmy.world · 18 hours ago (+3/−3)

                  Quite the opposite: I recognize there’s a difference, and it horrifies me that corporations spin AI as something you – “you” meaning the general public, who don’t understand how to use it – should put your trust in. It similarly horrifies me that, in an attempt to push back on this, people will jump straight to vibes-based, unresearched, and fundamentally nonsensical talking points. I want the general public to be informed, because like the old joke comparing tech enthusiasts to software engineers, learning these things 1) equips you with the tools to know and explain why this is bad, and 2) reveals that it’s worse than you think it is.

                  I would actually prefer specificity when we’re talking about AI models; that’s why instead of “AI slop”, I use “LLM slop” for text. Unfortunately, literally nobody in casual conversation knows what the other foundation models or their acronyms are, so sometimes I just have to call it “AI slop” (e.g. for imagegen). I would love it if more people knew what a transformer model is so we could talk about transformer models instead of the blanket “AI”.

                  By trying to incorrectly differentiate “AI” from “machine learning”, we’re giving dishonest corporations more power by implying that only now do we truly have “artificial intelligence” and that everything that came before was merely “machine learning”. By muddling what is actually a very straightforward hierarchy of terms (as opposed to a murky, nonsensical dichotomy of “AI is anything I don’t like, and ML is anything I do”), we’re misinforming the public and making the problem worse. By showing that “AI” is just a very general field that GPTs live inside, we reduce the power of “AI” as a marketing buzzword.

                • TheTechnician27@lemmy.world · 18 hours ago (+3)

                  “Expert in machine learning”, “has read the literal first sentence of the Wikipedia entry for ‘machine learning’” – same thing. Tomayto, tomahto.

                  Everything else I’m talking about in detail is just gravy; literally just read the first sentence of the Wikipedia article to know that machine learning is a field of AI. That’s the part that got me to say “no one in this thread knows what they’re talking about”: it’s the literal first sentence in the most prominent reference work in the world that everyone reading this can access in two seconds.

                  You can say most people don’t know the atomic weight of oxygen is 16-ish. That’s fine. I didn’t either; I looked it up for this example. What you can’t do is say “the atomic weight of oxygen is 42”, then, when someone corrects you that it’s 16, refuse to concede that you’re wrong and – when they clarify why the atomic weight is 16 – stand there with arms crossed and a smarmy grin, saying: “wow, expert blindness much? geez guys check out this bozo”.

                  We get it; you read xkcd. The point of this story is that you need to know fuck-all about atomic physics to just go on Wikipedia before you confidently claim the atomic weight is 42. Or, when someone calls you out on it, go on Wikipedia to verify that it’s 16. And if you want to dig in your heels and keep saying it’s 42, then you get the technical explanation. Then you get the talk about why it has that weight, because you decided to confidently challenge it instead of just acknowledging this isn’t your area of expertise.