Okay, at this point, I’m convinced no one in here has even a bare minimum understanding of machine learning. This isn’t a pedantic prescriptivism thing:
“Machine learning” is a major branch of AI. That’s just what it is. Literally every paper and every book ever published on the subject will tell you that. Go to the Wikipedia page right now: “Machine learning (ML) is a field of study in artificial intelligence”. The rest of AI, of course, covers systems that don’t learn, where a human has to explicitly program the behavior; video game AI, for example, usually doesn’t learn. Being uninformed is fine; being wrong is fine. But there’s a difference between calling out pedantry (“reee you called this non-Hemiptera insect a bug”) and rendering your words immune to criticism under the flimsy excuse that language has changed to mean exactly what you want it to mean.
Transformers, used in things like GPTs, are a type of machine learning. So even if you say “AI is just generative AI like LLMs”, then, uh… those are still machine learning. The ‘P’ in GPT literally stands for “pretrained”, meaning the model has already done the learning part of machine learning. OP’s statement literally contradicts itself.
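If you want to see what “pretrained” means in practice, here’s a rough sketch (assuming you have Hugging Face’s transformers library and a backend like PyTorch installed; the model name and settings are just illustrative): the weights arrive already trained, and all you do locally is run inference on them.

```python
# Minimal sketch: loading a *pretrained* GPT-2 and generating text.
# Note there is no .fit()/.train() call anywhere; the "learning" part of
# machine learning already happened before the weights were published.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # downloads pretrained weights
out = generator("Machine learning is", max_new_tokens=20)
print(out[0]["generated_text"])
```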
Meanwhile, deep learning (DNNs, CNNs, RNNs, transformers, etc.) is a branch of machine learning (likewise per every paper, every book, and Wikipedia: “Deep learning is a subset of machine learning that focuses on […]”) wherein the model identifies its own features instead of a human needing to supply them. Notably, the kind of vision detection the original commenter is talking about is deep learning, just like a transformer model is. So by their own standard that we need to be specific, “AI when they mean machine learning” should really be “AI when they mean deep learning”.
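If “identifies its own features” sounds abstract, here’s a toy sketch in Python (scikit-learn’s bundled digits dataset; the hand-made features and network size are arbitrary choices of mine, and a one-hidden-layer net isn’t actually “deep”, but it illustrates the idea): in the traditional setup a human decides what the inputs mean, while the neural net gets raw pixels and builds its own internal representation.

```python
# Toy contrast: human-supplied features vs. features the model learns itself.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)              # 8x8 digit images, flattened to 64 pixel values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Traditional" ML: I decide what the features are (crude per-image statistics).
def hand_features(X):
    return np.c_[X.mean(axis=1), X.std(axis=1), (X > 8).sum(axis=1)]

lr = LogisticRegression(max_iter=1000).fit(hand_features(X_tr), y_tr)
print("my hand-picked features:", lr.score(hand_features(X_te), y_te))

# Neural net: gets the raw pixels and learns its own internal features.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("features it learned itself:", net.score(X_te, y_te))
```

The exact scores don’t matter; the point is that the second model is never told what an edge or a loop is, it works that out from the pixels on its own.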
The reason “AI” is used all the time to refer to things like LLMs is that generative AI is a type of AI, just like “car” is used all the time to refer to sedans. To be productive about this: for anyone who wants to delve (heh) further into it, Goodfellow et al. have a great 2016 textbook on the subject, Deep Learning*. In a bit of extremely unfortunate timing, transformer models were first described in a 2017 paper, so they aren’t included (generative AI still is), but the book gives you the framework you need to understand transformers (GPTs, BERTs). After Goodfellow et al., just reading Google’s original 2017 paper (“Attention Is All You Need”) gives you sufficient context for transformer models.
*Goodfellow et al.’s first five chapters cover traditional ML models so you’re not 100% lost, and scikit-learn in Python can help you try these traditional ML techniques to see what they’re like.
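For the absolute quickest taste of what a classic, pre-deep-learning model looks like, something along these lines (scikit-learn’s bundled iris dataset; the particular algorithm is an arbitrary pick) is genuinely all it takes:

```python
# A "traditional" ML model in a few lines: human-chosen inputs (the four iris
# measurements), a classic algorithm, and a held-out test score.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```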
Edit: TL;DR: You can’t just weasel your way into a position where “AI is all the bad stuff and machine learning is all the good stuff” under the guise of linguistic relativism.
You can, actually, because the inverse is exactly what marketers are vying for: “AI”, a term with immense baggage, is easier for a layperson to recognize, and it implies a hell of a lot more than the technology actually does. The marketing intentionally leans on the very cool futurism of AI to sell it as the next evolutionary stage of human society, and in doing so has consumed all conversation about AI entirely. It is Hannibal Lecter wearing the skin of decades of sci-fi movies.
“Machine learning” is not a term used by sycophants (as often), and so it implies different things about the person saying it. For one, they may have actually seen a college with their eyes.
So, you seem to be implying there isn’t a difference, but there is: people who suck say one, and people who don’t say the other. No amount of academic rigor can sidestep this problem.
Quite the opposite: I recognize there’s a difference, and it horrifies me that corporations spin AI as something you – “you” meaning the general public who don’t understand how to use it – should put your trust in. It similarly horrifies me that, in an attempt to push back on this, people will jump straight to vibes-based, unresearched, and fundamentally nonsensical talking points. I want the general public to be informed, because, like in the old joke comparing tech enthusiasts to software engineers, learning these things 1) equips you with the tools to know and explain why this is bad, and 2) reveals that it’s worse than you think it is. I would actually prefer specificity when we’re talking about AI models; that’s why, instead of “AI slop”, I use “LLM slop” for text – and, well, unfortunately, since almost nobody in casual conversation knows the other foundation models or their acronyms, sometimes I just have to call it “AI slop” (e.g. for imagegen). I would love it if more people knew what a transformer model is so we could talk about transformer models instead of the blanket “AI”.
By trying to incorrectly differentiate “AI” from “machine learning”, we’re giving dishonest corporations more power by implying that only now do we truly have “artificial intelligence” and that everything that came before was merely “machine learning”. By muddling what’s actually a very straightforward hierarchy of terms (as opposed to a murky, nonsensical dichotomy of “AI is anything I don’t like, and ML is anything I do”), we’re misinforming the public and making the problem worse. By showing that “AI” is just a very general field that GPTs live inside, we reduce the power of “AI” as a marketing buzzword.
You know, I’ve seen many examples of “Expert Blindness” before, but I don’t think I’ve ever seen a single example that so perfectly encapsulates it.
Bravo! Mind if I use this message in our weekly messaging get-together at work?
“Expert in machine learning”, “has read the literal first sentence of the Wikipedia entry for ‘machine learning’” – same thing. Tomayto, tomahto.
Everything else I’m talking about in detail is just gravy; literally just read the first sentence of the Wikipedia article to know that machine learning is a field of AI. That’s the part that got me to say “no one in this thread knows what they’re talking about”: it’s the literal first sentence in the most prominent reference work in the world that everyone reading this can access in two seconds.
You can say most people don’t know the atomic weight of oxygen is 16-ish. That’s fine. I didn’t either; I looked it up for this example. What you can’t do is say “the atomic weight of oxygen is 42”, then, when someone corrects you that it’s 16, refuse to concede that you’re wrong – and then, when they clarify why the atomic weight is 16, stand there with your arms crossed and a smarmy grin and say: “wow, expert blindness much? geez guys, check out this bozo”.
We get it; you read xkcd. The point of this story is that it takes fuck-all knowledge of atomic physics to check Wikipedia before you confidently claim the atomic weight is 42 – or, when someone calls you out on it, to go on Wikipedia and verify that it’s 16. And if you want to dig in your heels and keep saying it’s 42, then you get the technical explanation. Then you get the talk about why it has that weight, because you decided to confidently challenge it instead of just acknowledging this isn’t your area of expertise.
I’ll be adding this to the presentation.