• h3ndrik@feddit.de
    8 months ago

    I think the recent LLMs are a breakthrough. Back in the 1970s people used expert systems, and you needed to put in lots and lots of if-then rules by hand, doing the tedious job of building knowledge bases and semantic connections… and the results were pretty limited. Nowadays you just feed in all the text from the internet, do some magic, and it’ll autocomplete “This is how you change the oil in your car in 7 easy steps” and actually answer it. They can write emails or roleplay a helpful assistant. Translating text is something that works quite well. All of that was unthinkable 10 years ago and pure sci-fi.

    The transformer architecture from 2017 has been good progress, and I don’t see why capabilities shouldn’t keep improving in the time to come. I think it’ll be incremental improvement though, not a single paper and I wake up tomorrow to find somebody has invented sentient robots. All of those marketing claims are just hype, in my opinion.
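
    To make the contrast concrete, here’s a toy sketch of the old rule-based approach (the rules and facts below are made up for illustration, not taken from any real 1970s system):

    ```python
    # Toy forward-chaining rule engine in the spirit of 1970s expert systems:
    # every rule and every fact has to be written out by hand.
    # Hypothetical example rules, not from any real system.

    RULES = [
        # ({conditions that must all hold}, fact to conclude)
        ({"engine_is_warm", "drain_plug_removed"}, "oil_is_draining"),
        ({"oil_is_draining", "new_filter_installed"}, "ready_to_refill"),
        ({"ready_to_refill", "correct_oil_grade"}, "oil_change_complete"),
    ]

    def forward_chain(facts):
        """Apply rules repeatedly until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    start = {"engine_is_warm", "drain_plug_removed",
             "new_filter_installed", "correct_oil_grade"}
    print(forward_chain(start))  # concludes "oil_change_complete" step by step
    ```

    Every single step has to be spelled out as a rule, and anything you forgot to encode simply isn’t covered. An LLM instead learns to produce that kind of answer from its training text.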