But in August 2020, after 28 years in the US, Zhu astonished his colleagues and friends by suddenly moving back to China, where he took up professorships at two top Beijing universities and a directorship in a state-sponsored AI institute. The Chinese media feted him as a patriot assisting “the motherland” in its race toward artificial intelligence. US lawmakers would later demand to know how funders such as UCLA and the Pentagon had ignored “concerning signs” of Zhu’s ties to a geopolitical rival. In 2023, Zhu became a member of China’s top political advisory body, where he proposed that China should treat AI with the same strategic urgency as a nuclear weapons programme.

Zhu insists that these ideas are built on sand. A sign of true intelligence, he argues, is the ability to reason towards a goal with minimal inputs – what he calls a “small data, big task” approach, compared with the “big data, small task” approach employed by large language models like ChatGPT. AGI, Zhu’s team has recently said, is characterised by qualities such as resourcefulness in novel situations, social and physical intuition, and an understanding of cause and effect. Large language models, Zhu believes, will never achieve this. Some AI experts in the US have similarly questioned the prevailing orthodoxy in Silicon Valley, and their views have grown louder this year as AI progress has slowed and new releases, like GPT-5, have disappointed. A different path is needed, and that is what Zhu is working on in Beijing.

  • cfgaussian@lemmygrad.ml · 26 days ago

    Large language models, Zhu believes, will never achieve this.

    I’ve been saying this ever since the LLM hype began. What I find particularly worrying is how many educated and supposedly intelligent people fell for the hype. Just because an algorithm can get good at mimicking human language patterns when fed enough data doesn’t mean there is actual intelligence behind it. Machine learning is a very useful tool with a lot of applications, but LLMs are going to hit a hard wall at some point without real AGI.

  • алсааас [she/they]@lemmy.dbzer0.com · 25 days ago (edited)

    Incredible things are happening at BigAI

    Jokes aside, I was really intrigued by this somewhat longer article (though to me it seemed far from a long read, maybe less than 20 minutes? Time went by really quickly, even though I had to pause my Star Trek episode of the day 😸). One of the best articles I have yet read from The Guardian 💀💀

    I do find Zhu’s philosophy interesting, but his dogmatic stance against LLMs, and transformer models in general, may have been a serious shot in his own foot.
    The fact that in 2023 (!) his former mentor Mumford convinced him to shift that stance and give them (more or less) honest consideration – his position having morphed from a categorical refusal to even mention them into “They have their place, but I believe them to be far from a general solution” – seems to me to be a game changer that might have boosted BigAI’s potential tremendously for the future.

    But it seems like the PRC is hedging its bets anyway, so I do not think that BigAI following only Zhu’s former, relatively dogmatic vision would have had much of a negative effect on China’s machine learning development in general.