• DamarcusArt@lemmygrad.ml
    7 months ago

    TBH, whenever I hear an American “AI expert” say something, I have a knee-jerk reaction to assume the opposite. Though pulling ahead of the smoke-and-mirrors “AI” that US companies do isn’t exactly a challenge.

      • KrasnaiaZvezda@lemmygrad.ml
        7 months ago

        They’re trying good uses too, like using LLMs in robotics and cars (for both movement and planning), but the chatbots are just more visible for now, for understandable reasons.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
          7 months ago

          I’m somewhat leery about that sort of usage, to be honest, because it’s nearly impossible to guarantee correctness. The problem is that LLMs don’t really have an internal world model sophisticated enough to deal with all the random things that can happen and react appropriately. On top of that, you can’t correct behavior by explaining what went wrong the way you would with a human. I think the best application for this sort of tech is where it analyzes data and then helps a human make the final decision. A good recent example of this was China using machine learning to monitor the health of rail lines to do proactive maintenance.
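          The “AI flags, human decides” pattern can be sketched roughly like this — a hypothetical toy example, where the function name, sensor data, and z-score threshold are all invented for illustration:

```python
# Toy sketch of human-in-the-loop monitoring: a simple z-score detector
# flags unusual sensor readings for a human maintainer to review.
# All names and numbers here are illustrative, not from any real system.
from statistics import mean, stdev

def flag_for_review(readings, threshold=2.0):
    """Return indices of readings a human should inspect."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Vibration amplitudes from track sensors; one obvious outlier.
samples = [0.9, 1.1, 1.0, 0.95, 1.05, 8.7, 1.0, 0.98]
print(flag_for_review(samples))  # -> [5]
```

          The model only narrows attention; the decision to act stays with a person.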

          • KrasnaiaZvezda@lemmygrad.ml
            7 months ago

            LLMs would probably be best used in systems: multiple LLMs and conventional programs, each with their strengths covering the others’ weaknesses. And perhaps there’d be programs, or even other LLMs, that shut a system off if anything goes wrong.

            Something weird happened to a robot?

            The brain, or part of it (as there can be multiple LLMs working together, each trained to do only one or a few things), or a more powerful LLM overseeing many robots, identifies that and stops the robot, waiting for a better LLM offsite or a human to weigh in.

            I mean, if the thing happening is so weird that there’s no data about it available, then perhaps not even a human would deal with it well, meaning an LLM doesn’t need to be perfect to be very useful.
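            The layered setup described above could look something like this sketch — every class and name here is invented, and a real robotics stack would be far more involved:

```python
# Illustrative sketch of specialist modules plus a watchdog: a movement
# module reports when it hits something it can't handle, and the watchdog
# halts the robot and escalates to a human (or a stronger offsite model).
from dataclasses import dataclass, field

@dataclass
class Verdict:
    ok: bool
    reason: str = ""

class MovementModule:
    def step(self, observation):
        if observation == "obstacle_unknown":
            return Verdict(ok=False, reason="unrecognized obstacle")
        return Verdict(ok=True)

@dataclass
class Watchdog:
    halted: bool = False
    escalations: list = field(default_factory=list)

    def check(self, verdict):
        if not verdict.ok:
            self.halted = True                       # stop the robot immediately
            self.escalations.append(verdict.reason)  # wait for human/offsite review

robot, watchdog = MovementModule(), Watchdog()
for obs in ["clear", "clear", "obstacle_unknown"]:
    if watchdog.halted:
        break
    watchdog.check(robot.step(obs))

print(watchdog.halted, watchdog.escalations)  # True ['unrecognized obstacle']
```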

            Even if the robots had problems and bugged out, causing a lot of damage, we could still take a lot of people away from that work and let the robots do it, provided the robots can work and produce enough to replenish their own losses. And with time any problem should be fixable anyway, so we might as well try.

            • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
              7 months ago

              Using a combination of specialized systems is definitely a viable approach, but I think there’s a more fundamental issue that needs to be addressed. The main difference between humans and AI when it comes to decision making is that with people you can ask questions about why they made a certain choice in a given situation. This allows for correcting wrong decisions and guiding them toward better ones. With AI it’s not as simple, because there’s no shared context or intuition for how to interact with the physical world. AI lacks the intuition about how the physical world behaves that humans develop by interacting with it from the day we’re born, and that intuition forms the basis of understanding in the human sense. As a result, AI lacks the capacity for genuinely understanding the tasks it’s accomplishing and for making informed decisions.

              To ensure machines can operate safely in the physical world and effectively interact with humans, we’d need to follow a similar process as with human child development. This involves training through embodiment and constructing an internal world model that allows the AI to develop an intuition about how objects behave in the physical realm. Then we could teach it language within this context. What we’re doing with LLMs is completely backwards in my opinion. We just feed them a whole bunch of text, and then they figure out relationships within that text, but none of that is anchored to the physical world in any way.

              The model needs to be trained to interact with the physical world through reinforcement, to create an internal representation of the world that’s similar to our own. This would give us a shared context we could use to communicate with the AI, and it would have an actual understanding of the physical world similar to ours. It’s hard to say whether current LLM approaches are flexible enough to support this sort of world model, so we’ll have to wait and see what the ceiling for this stuff is. I do think we’ll figure this out eventually, but we may need more insights into how the brain works before that happens.
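              As a toy illustration of the “world model first” idea — entirely hypothetical, and not a claim about how any real system is trained — an agent can build a transition model by acting in a tiny 1-D environment, so that action words are afterwards grounded in predicted outcomes rather than in text alone:

```python
# An agent learns what its actions do by trying them, recording a
# transition model; "right" then *means* a predicted change of state.
# The environment, states, and actions are all invented for illustration.
model = {}  # learned world model: (position, action) -> next position

def env_step(pos, action):
    """Ground-truth physics of a 5-cell corridor, hidden from the agent."""
    return max(0, min(4, pos + (1 if action == "right" else -1)))

# Embodied exploration phase: try every action in every state.
for pos in range(5):
    for action in ("left", "right"):
        model[(pos, action)] = env_step(pos, action)

# Language grounded in the model: the agent predicts without acting.
print(model[(2, "right")])  # -> 3
print(model[(0, "left")])   # -> 0 (wall)
```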

  • Wheaties [she/her]@hexbear.net
    7 months ago

    I can’t believe China has already met the very high bar set by American innovators. Someone check the servers at twitter-dot-com to make sure the advanced code for Grok™ is still secure

  • NothingButBits@lemmygrad.ml
    7 months ago

    Caught up

    When the US manages to use AI to help maintain and detect problems across thousands of kms of railroad, let me know.

  • What_Religion_R_They [none/use name]@hexbear.net
    7 months ago

    Andrew Moore, a former Google executive advising U.S. Central Command on AI, said he’s seen China surpass America in several areas. He also said he respects the seriousness of the communist competitors’ work in the tech world.

    “There are two application areas where they have outperformed hyperscalers in the United States. I’m not going to say where they are right now, what they are right now,” Mr. Moore said at the symposium, which was attended by defense and intelligence officers, tech companies, and other high-level stakeholders.

    che-smile