• V @beehaw.org · 1 year ago

    I predict the problem parallels what limited AI efforts from the '80s through the 2010s: a lack of information and of the ability to process it. Knowing you can pull a string but not push one is a common example of reasoning that isn't available from text or static-image parsing alone. Multimodal training helps, but we need to figure out how to train without retraining the entire network, especially for larger datasets like video.
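    One way to avoid full retraining is to freeze the pretrained weights and train only a small adapter on top. A minimal sketch, assuming a PyTorch-style setup; the layer sizes and the residual-adapter layout here are illustrative, not a specific published recipe:

    ```python
    import torch
    import torch.nn as nn

    # Stand-in for a large pretrained (e.g. multimodal) backbone.
    backbone = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

    # Freeze every pretrained weight so new data never overwrites old knowledge.
    for param in backbone.parameters():
        param.requires_grad = False

    # A small trainable adapter absorbs the new task or modality instead.
    adapter = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 512))
    optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    def forward(x):
        h = backbone(x)
        return h + adapter(h)  # frozen output plus a learned correction

    # One toy training step on made-up data.
    x, target = torch.randn(8, 512), torch.randn(8, 512)
    loss = loss_fn(forward(x), target)
    loss.backward()   # gradients reach only the adapter parameters
    optimizer.step()
    ```

    Only the adapter's weights change per update, so adding a new dataset costs a fraction of full retraining.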

    • Lugh (OP) · 1 year ago

      Perhaps. The problem with this line of thought is that it assumes reasoning will arise spontaneously but offers no account of how. That doesn't inspire much confidence as the basis for a hypothesis.

      • V @beehaw.org · 1 year ago

        Reasoning isn’t innate to organic networks either. It’s a byproduct of pattern matching generalizing to wider stimuli and recognizing the differences. Convolutional networks don’t memorize every breed of cat; they recognize the patterns (features) that define cats. Reasoning is an extension of this: “I can’t push a string” and “I can’t unscramble an egg” are also patterns, instances of the broader pattern of non-reciprocal or irreversible relationships. Extending those patterns to new situations is applied reasoning. It’s the same idea as transformer models writing new poems in styles that weren’t common in training: generalizing patterns to new situations. The questions are how we train for generalization without detracting from accuracy, and how we replicate neuroplasticity in a digital network.
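
        A minimal sketch of that feature-space view, assuming an untrained stand-in for a pretrained extractor and random tensors in place of real cat images (names and sizes are illustrative): a new example is judged by its proximity to known examples in feature space, not by lookup in a memorized list.

        ```python
        import torch
        import torch.nn as nn

        # Toy convolutional feature extractor; a real one would be pretrained.
        features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

        def embed(images):
            # Map images to feature vectors without tracking gradients.
            with torch.no_grad():
                return features(images)

        known = embed(torch.randn(4, 3, 32, 32))  # stand-in for seen cat breeds
        novel = embed(torch.randn(1, 3, 32, 32))  # stand-in for an unseen breed

        # Generalization here is proximity in feature space: shared patterns,
        # not a memorized catalogue of every breed.
        print(torch.cosine_similarity(novel, known))
        ```

        The same mechanism is what would have to stretch further for relational patterns like irreversibility, which is where the training question above comes in.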