what’s more likely is that OpenAI just lost all their talent to other companies/startups
Talent alone can’t exponentially improve something that has fundamentally peaked. If they’re lucky they’ll conclude “this is the limit of this kind of model, we need a different architecture from the ground up”; otherwise they’ll just keep trying and failing.
It’s like fitting a linear model to points on a parabola: you can keep tweaking and improve the results a bit each time, but there’s a floor you can’t get past unless you change how the model itself works.
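The analogy can be made concrete with a quick sketch (toy data, nothing to do with any actual AI system): no matter how well you optimize a line against parabola points, its error has a floor that only a richer model class removes.

```python
# Fit a line to points sampled from a parabola.
# The best possible line still has a large residual error;
# switching model class (to a quadratic) removes it entirely.
import numpy as np

x = np.linspace(-1, 1, 201)
y = x ** 2  # the "true" data comes from a parabola

# Best-fit line (degree 1) vs. best-fit quadratic (degree 2)
linear_fit = np.polyval(np.polyfit(x, y, 1), x)
quadratic_fit = np.polyval(np.polyfit(x, y, 2), x)

linear_err = np.sqrt(np.mean((y - linear_fit) ** 2))
quadratic_err = np.sqrt(np.mean((y - quadratic_fit) ** 2))

print(f"best line RMSE:      {linear_err:.4f}")
print(f"best quadratic RMSE: {quadratic_err:.4f}")
```

However hard you train the linear model, `linear_err` never drops below its floor; the quadratic fit is essentially exact.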
Current AI is just a statistical model; it isn’t intelligent. They built a model that takes data and generates similar data, they’ve exhausted the resources for training it, and it has hit its limits. Unless it can actually think and extrapolate coherently from the data it has, it isn’t going to grow.
Progress in AI models is logarithmic: the resources needed for each further gain in performance grow rapidly. If you have to double the number of processors and the amount of data to add another few percent, you eventually run out of data and money. This was true of previous systems, even chess engines. It was expected here too, and it will be true of whatever succeeds LLMs.
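A toy sketch of why that runs out of road (the curve and numbers are made up for illustration, not real benchmark data): if score grows with the log of compute, every fixed gain in score costs a doubling of resources.

```python
# Hypothetical diminishing-returns curve: +2 score points per
# doubling of compute. Purely illustrative numbers.
import math

def score(compute):
    return 70.0 + 2.0 * math.log2(compute)

for c in [1, 2, 4, 8, 1024]:
    print(f"compute {c:5d}x -> score {score(c):.1f}")
```

Going from 1x to 2x compute buys the same 2 points as going from 512x to 1024x; the last step costs 512 times as much as the first.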
Yes, I think that’s the problem here. Some people had the idea that more scaling would be enough for reasoning to appear, but that hasn’t happened.
the idea that more scaling would be enough for reasoning to appear
That’s kinda like saying that with bigger and more boilers you can eventually make a steam engine fly. To be fair, somebody did eventually fly a steam-powered plane, but it was never a success story.
Yeah, and part of the problem is expectations. They don’t want to say the product is done, that this is the best it can do, because that would hurt their share price and their source of money.