Artificial intelligence specialists warned the Defense Department on Thursday that China has caught up in the AI arms race and has even pulled ahead of America in some areas.
I think tech like LLMs is fundamentally useful; it's just that it mostly gets applied in really frivolous ways in the West.
They are trying good uses too, like using LLMs in robotics and cars (for both movement and planning), but the chat applications are just more visible for now, for understandable reasons.
I'm somewhat leery about that sort of usage to be honest, because it's nearly impossible to guarantee correctness. The problem is that LLMs don't really have an internal world model sophisticated enough to deal with all the random things that can happen and react appropriately. On top of that, you can't correct behavior by explaining what went wrong the way you would with a human. I think the best application for this sort of tech is where it analyzes data and then helps a human make the final decision. A good recent example of this was China using machine learning to monitor the health of rail lines for proactive maintenance.
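To make the "machine flags, human decides" pattern concrete, here's a purely illustrative sketch (hypothetical data and thresholds, nothing to do with any real rail system): a simple statistical detector surfaces outliers in sensor readings for a human crew to review, rather than acting on them autonomously.

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.5):
    """Flag readings more than `threshold` sample standard deviations
    from the mean. The program only surfaces candidates for a human
    to review -- it never acts on them itself."""
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []  # perfectly flat signal: nothing to flag
    return [(i, r) for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Mostly steady vibration levels with one spike at index 5.
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 9.5, 0.95, 1.0, 1.1, 0.9]
for index, value in flag_anomalies(readings):
    print(f"reading {index} = {value}: escalate to maintenance crew")
```

A real deployment would use far richer models than a z-score, but the division of labor is the point: the system narrows attention, the human makes the call.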
LLMs would probably be best used in composite systems: multiple LLMs and conventional programs, each with their strengths covering the others' weaknesses. And perhaps having programs, or even other LLMs, that shut the system off if anything goes wrong.
Something weird happened to a robot?
The brain, or part of it (since there can be multiple LLMs working together, each trained to do only one or a few things), or a more powerful LLM overseeing many robots, identifies the problem and stops the robot, waiting for a better LLM offsite or a human to weigh in.
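The overseer idea sketched above is essentially a watchdog pattern. Here's a minimal, hypothetical version (the state fields and limits are invented for illustration): a supervisor halts a robot whenever its reported state drifts outside the envelope it was built for, and everything out-of-envelope escalates to a human or offsite system instead of being handled locally.

```python
from enum import Enum, auto

class Verdict(Enum):
    CONTINUE = auto()
    HALT_AND_ESCALATE = auto()

class Watchdog:
    """Hypothetical overseer: it does not try to fix weird situations,
    it only detects them, halts the robot, and hands off the decision."""
    def __init__(self, max_tilt_deg=15.0, max_motor_temp_c=80.0):
        self.max_tilt_deg = max_tilt_deg
        self.max_motor_temp_c = max_motor_temp_c

    def check(self, state):
        # Anything outside the known-safe envelope means "stop and ask".
        if state["tilt_deg"] > self.max_tilt_deg:
            return Verdict.HALT_AND_ESCALATE
        if state["motor_temp_c"] > self.max_motor_temp_c:
            return Verdict.HALT_AND_ESCALATE
        return Verdict.CONTINUE

watchdog = Watchdog()
print(watchdog.check({"tilt_deg": 3.0, "motor_temp_c": 55.0}))   # nominal
print(watchdog.check({"tilt_deg": 40.0, "motor_temp_c": 55.0}))  # something weird happened
```

The checker is deliberately dumb and separate from the "brain": a simple program (or smaller model) judging a bigger one is easier to trust than the big model judging itself.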
I mean, if what's happening is so weird that there's no data about it available, then perhaps not even a human would be able to deal with it well, which means an LLM doesn't need to be perfect to be very useful.
Even if the robots had problems and bugged out, causing a lot of damage, we could still pull a lot of people off that work and let the robots do it, as long as the robots can work and produce enough to replenish their own losses. And with time any problem should be fixable anyway, so we might as well try.
Using a combination of specialized systems is definitely a viable approach, but I think there's a more fundamental issue that needs to be addressed. The main difference between humans and AI when it comes to decision making is that with people you can ask why they made a certain choice in a given situation. This allows for correcting wrong decisions and guiding them towards better ones. With AI it's not that simple, because there is no shared context or intuition for how to interact with the physical world. AIs lack the human intuition about how the physical world behaves that we develop by interacting with it from the day we're born, and that intuition forms the basis of understanding in the human sense. As a result, AI lacks the capacity for genuine understanding of the tasks it's accomplishing, and for making informed decisions about them.
To ensure machines can operate safely in the physical world and interact effectively with humans, we'd need to follow a process similar to human child development. That means training through embodiment and constructing an internal world model that lets the AI develop an intuition for how objects behave in the physical realm. Then we could teach it language within that context. What we're doing with LLMs is completely backwards in my opinion: we just feed them a whole bunch of text, and they figure out relationships within that text, but none of it is anchored to the physical world in any way.
The model needs to be trained to interact with the physical world through reinforcement, so it builds an internal representation of the world that's similar to our own. That would give us a shared context we could use to communicate with the AI, and it would have an actual understanding of the physical world comparable to ours. It's hard to say whether current LLM approaches are flexible enough to support this sort of world model, so we'll have to wait and see what the ceiling for this stuff is. I do think we'll figure it out eventually, but we may need more insights into how the brain works before that happens.