• Primarily0617 · 16 points · 8 months ago

      multibillion dollar company discovers the memory hierarchy

      could the next big leap be integrating instructions and data in the same memory store?

    • pelya · 3 points · 8 months ago

      The article does not say whether they’ve innovated enough to produce capacitor-based DRAM with the CPU on the same die. I guess it would come in a 1 GB variant if they managed that.

      • @proctonaut@lemmy.world · 2 points · 8 months ago

        I haven’t really kept up with it, but doesn’t the Zen architecture have separate L1 and L2 caches for each core?

        • @MostlyHarmless@programming.dev · 1 point · 8 months ago

          Even if it does, it isn’t the same thing. The article explains everything:

          NorthPole is made of 256 computing units, or cores, each of which contains its own memory… The cores are wired together in a network inspired by the white-matter connections between parts of the human cerebral cortex, Modha says. This and other design principles — most of which existed before but had never been combined in one chip — enable NorthPole to beat existing AI machines

  • @SturgiesYrFase@lemmy.ml · 7 points · 8 months ago

    While this will speed up loads of things, it also feels like it will end up being another way to remove upgradability from devices. Want more RAM in your desktop? Buy a new CPU.

    • @glimse@lemmy.world · 7 points · 8 months ago

      I mean, the physical distance between the RAM and the CPU will eventually be the limiting factor, right? It’s inevitable for more reasons than profit.
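      The distance argument is easy to sanity-check with a back-of-the-envelope calculation (a sketch only; the 4 GHz clock and the ~0.5c signal-propagation speed in PCB traces are illustrative assumptions, not figures from the article):

```python
# How far can an electrical signal travel in one CPU clock cycle?
C = 299_792_458            # speed of light in vacuum, m/s
PROPAGATION_FACTOR = 0.5   # rough signal speed in copper traces, as a fraction of c (assumed)
CLOCK_HZ = 4e9             # an assumed 4 GHz CPU clock

cycle_time_s = 1 / CLOCK_HZ
distance_m = C * PROPAGATION_FACTOR * cycle_time_s
print(f"{distance_m * 100:.1f} cm per clock cycle")  # ~3.7 cm
```

      A few centimetres per cycle is on the order of the physical distance between a CPU socket and a DIMM slot, which is part of why signals can’t make a round trip to off-chip RAM in a single cycle.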

    • Patapon Enjoyer · 7 points · 8 months ago

      The article says the whole CPU has like 200MB of memory, so it’s not really replacing the RAM already in PCs. Plus this seems focused on AI applications, not general computing.

      • @SturgiesYrFase@lemmy.ml · 2 points · 8 months ago

        hits blunt That’s just your opinion, man.

        And that’s fair. At the same time it’s still quite new; once it’s matured a bit I could definitely see this being how things go until… idk, hardlight computing or whatever.

      • @DaPorkchop_@lemmy.ml · 1 point · 8 months ago

        So… they’ll probably add some slower, larger-capacity memory chips on the side, and then they’ll need to copy data back and forth between the slow off-chip memory and the fast on-chip memory… I’m pretty sure they’ve just invented cache.
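        The copy-back-and-forth being described is exactly the classic cache pattern. A toy sketch (nothing here is from the article; `ToyCache` and its LRU/write-back policy are just one common way caches work):

```python
# Toy write-back LRU cache: a small fast store in front of a large slow one.
from collections import OrderedDict

class ToyCache:
    def __init__(self, capacity, backing):
        self.capacity = capacity   # how many entries fit in "on-chip" memory
        self.fast = OrderedDict()  # small, fast memory; insertion order tracks LRU
        self.slow = backing        # large, slow "off-chip" memory

    def read(self, addr):
        if addr in self.fast:              # hit: serve from fast memory
            self.fast.move_to_end(addr)
            return self.fast[addr]
        value = self.slow[addr]            # miss: copy in from slow memory
        self.fast[addr] = value
        if len(self.fast) > self.capacity:
            old_addr, old_val = self.fast.popitem(last=False)  # evict LRU entry
            self.slow[old_addr] = old_val  # write it back to slow memory
        return value

slow_mem = {addr: addr * 2 for addr in range(1000)}
cache = ToyCache(capacity=4, backing=slow_mem)
print(cache.read(7))   # 14 — fetched from slow memory, then cached
print(cache.read(7))   # 14 — now served from fast memory
```

        The point stands either way: bolting side memory onto a chip whose speed comes from keeping data local reintroduces the memory hierarchy the design was meant to sidestep.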