Reports like this are going to pop the bubble lol
I work in tech at one of those companies that thinks AI can solve literally everything, and what I hear from leadership on these kinds of reports is that we just haven’t figured out how to measure its success yet. So probably it won’t be the trigger.
The writing is on the wall, though. A couple of weeks ago an exec came flying in with this great idea for how we could replace an entire product with AI. Honestly, we probably could have; it wasn’t a bad idea. The issue was the cost… even at the heavily VC-subsidized price, it was going to cost us more per customer than what they pay us, just to do this one small part of the overall workflow with AI. So sure, we could build out this AI that does all this cool shit for us (except when it goes off the rails and does it wrong, which definitely happens). But the cost was prohibitive NOW. Imagine how expensive it will be when the real costs start propagating down to the buyers.
What I’ve learned about AI from working adjacent to it is that it actually can do quite a lot, as long as you don’t need it to be perfect and as long as the real total cost of using it is less than writing real code that does the same thing. That’s not going to apply to the vast majority of dumb things people are trying to do with AI. That’ll be the real trigger: shit goes off the rails because the AI can’t do it predictably, and/or nobody can afford it anymore once the true cost becomes apparent. And all the companies that are reliant on it will collapse entirely.
I just found out that my company has AI Pillars, going from 1 - productivity aid, to [a second thing I can’t remember], to 3 - agentic, to 4 - worker replacement. They want 100% of workers in finance to be using AI at the first pillar. Pillars 3 and 4 are completely hypothetical, because AI doesn’t do math or track numbers correctly or repeatably, and it cannot be allowed to go off the rails with financial info.
It really doesn’t make sense to call these pillars, since they’re more like stage gates for implementation.
Using AI to do math is the funniest thing to me when Python exists
Doing very complex math to fail at very simple math, AI in a nutshell.
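To make the joke above concrete, here’s a minimal sketch (my own example, not from the thread) of the arithmetic in question. The tasks language models are known to fumble are trivial, exact, and repeatable in plain Python:

```python
from decimal import Decimal

# Arbitrary-precision integer arithmetic: Python ints never overflow,
# so big multiplications are exact and identical on every run.
print(123456789 * 987654321)   # 121932631112635269

# Exact decimal arithmetic for money: Decimal avoids binary floating-point
# rounding, so a ledger sums to the same total, to the cent, every time.
entries = [Decimal("19.99"), Decimal("0.01"), Decimal("104.35")]
print(sum(entries))            # 124.35
```

The contrast with a probabilistic model is the repeatability: the same inputs always produce the same outputs, which is exactly the property financial number-tracking needs.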
Yeah, what’s ridiculous is that AI is useful. When used by an expert programmer it is helpful: it can write code quicker, and it can get projects from 0 to like 95% of the way there very, very fast. It’s not perfect, it’s not going to revolutionise the human experience, and it’s not going to be a wide-scale replacement for workers, but it is a helpful tool. Still a huge fucking bubble, though, because you can make money selling AI, just not the kind of money companies like OpenAI are hoping they can make.
It’s not a coincidence that a lot of banks have suddenly been issuing warnings about the AI bubble. They already know.
Looks like this is from January, so I guess it would have if it could have. Sites like HN are full of “but have you tried it THIS week??? Otherwise your opinion is invalid.”
I don’t know what to expect from a correction, since there’s constantly a “new, better model” every day: thousands on Hugging Face alone, let alone the proprietary ones. It makes the effort to prove the negative (i.e. that these models are not useful for most tasks) very high, so I don’t typically argue with people about it. I think the constant treadmill to keep up with the “improvements” is part of the marketing, to be honest.
I’m sure this is typical of every bubble, but to be honest I’m not sure what’s going to pop it: if GPT-5 didn’t, if OpenAI’s P&L statements don’t, if businesses aren’t fatigued yet by all of the “news” about new models, if the economics not making sense hasn’t stopped anyone… it’s a weird time in tech right now.
AI, Tesla, and crypto have got me wondering if bubbles even pop anymore.