I work in tech at one of those companies that thinks AI can solve literally everything, and what I hear from leadership whenever these kinds of reports come up is that we just haven't figured out how to measure the success yet. So probably it won't be the trigger.
The writing is on the wall, though. A couple of weeks ago an exec came flying in with this great idea for how we could replace an entire product with AI. Honestly we probably could, it wasn't a bad idea. The issue was the cost: even at today's heavily venture-capital-subsidized prices, it was going to cost us more per customer than what they pay us, just to do this one small part of the overall workflow with AI. So sure, we could build out this AI that does all this cool shit for us (except when it goes off the rails and does it wrong, which definitely happens). But the cost is prohibitive NOW. Imagine how expensive it will be once the real costs start propagating down to the buyers.
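To put rough numbers on that break-even problem, the arithmetic looks something like the sketch below. Every figure in it is a made-up placeholder, not anything from the actual product; the point is just the shape of the comparison.

```python
# Back-of-envelope unit economics for bolting an LLM onto one workflow step.
# Every number here is a made-up placeholder for illustration only.

revenue_per_customer_month = 30.00   # hypothetical: what the customer pays per month
runs_per_customer_month = 2_000      # hypothetical: how often the AI step fires
tokens_per_run = 4_000               # hypothetical: prompt + completion tokens per run
price_per_1k_tokens = 0.01           # hypothetical: today's subsidized API price

ai_cost_per_customer = runs_per_customer_month * tokens_per_run / 1_000 * price_per_1k_tokens

print(f"AI cost per customer/month: ${ai_cost_per_customer:.2f}")        # $80.00
print(f"Revenue per customer/month: ${revenue_per_customer_month:.2f}")  # $30.00
print("Viable?", ai_cost_per_customer < revenue_per_customer_month)      # False
```

With placeholder numbers like those, the AI step alone is underwater before you pay for anything else, and that's at the subsidized price. Bump price_per_1k_tokens toward what it actually costs to serve and the gap only widens.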
What I've learned working adjacent to AI is that it actually can do quite a lot, as long as you don't need it to be perfect and as long as the real total cost of using it is less than writing real code that does the same thing. That's not going to apply to the vast majority of the dumb things people are trying to do with AI. That'll be the real trigger: shit goes off the rails because the AI can't do it predictably, and/or nobody can afford it anymore once the true cost becomes apparent. And all the companies that are reliant on it will collapse entirely.
I just found out that my company has AI Pillars, going from 1: productivity aid, to [a second thing I can't remember], to 3: agentic, to 4: worker replacement. They want 100% of workers in finance to be using AI under the first pillar.
Pillars 3 and 4 are completely hypothetical, because AI doesn't do math or track numbers correctly or repeatably, and it can't be allowed to go off the rails with financial info.
It really doesn't make sense to call them pillars either; they're more like stage gates for implementation.
Yeah, what's ridiculous is that AI is useful. In the hands of an expert programmer it's helpful: it can write code quicker and get projects from 0 to like 95% of the way there very, very fast. It's not perfect, it's not going to revolutionise the human experience, and it's not going to be a wide-scale replacement for workers, but it is a helpful tool. Still a huge fucking bubble, though, because you can make money selling AI, just not the kind of money companies like OpenAI are hoping they can make.
Using AI to do math is the funniest thing to me when Python exists
Doing very complex math to fail at very simple math, AI in a nutshell.
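And "when Python exists" isn't even an exaggeration; exact, repeatable arithmetic is a solved problem in the standard library. A minimal sketch of what "just let Python do the math" means (the reconcile helper and the invoice values are made up for illustration):

```python
# A minimal sketch of deterministic, repeatable arithmetic, instead of asking
# a language model to generate the digits itself.
# `reconcile` and the sample values are hypothetical; decimal is standard library.

from decimal import Decimal

def reconcile(amounts: list[str]) -> Decimal:
    """Sum monetary amounts exactly, with no floating-point drift."""
    return sum((Decimal(a) for a in amounts), Decimal("0"))

invoices = ["19999.99", "0.01", "1234.56"]

print(reconcile(invoices))  # 21234.56 -- the same answer every single run
```

Same inputs, same output, every time, which is the bar any "pillar 3 or 4" finance use case would have to clear.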