Yesterday Framework unveiled a small form factor desktop based on AMD’s Strix Halo.

Strix Halo seems to require high-bandwidth memory, specifically LPDDR5X on a 256-bit bus, according to the specs.

The company said they tried to find a way to use modular memory (e.g. LPCAMM) but it did not work out signal-integrity-wise (@36:10, from the unveiling video above and here.)

So I’m wondering: why not, exactly?

It seems LPCAMM2 offers a 128-bit bus and can currently scale up to 7500–8500 MT/s.

This would offer 7500 × 128 / 8 = 120 GB/s. Would it not have been possible to simply place two LPCAMM2 modules to cover the full extent of the 256-bit bus, and reach the 256 GB/s by using the 8000 MT/s configuration?
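To make the arithmetic explicit, here’s the back-of-the-envelope calculation I’m using (peak theoretical numbers only; the helper name is my own):

```python
def bandwidth_gbs(mts: int, bus_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s: transfer rate x bytes per transfer."""
    return mts * bus_bits / 8 / 1000  # MT/s x bits, / 8 bits-per-byte, / 1000 MB-per-GB

print(bandwidth_gbs(7500, 128))  # one LPCAMM2 module at 7500 MT/s -> 120.0 GB/s
print(bandwidth_gbs(8000, 256))  # full 256-bit bus at 8000 MT/s -> 256.0 GB/s
```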

Did they run into signal integrity issues because they tried to reach those speeds using only one LPCAMM2 module? That would indeed have been impossible. Or maybe two LPCAMM2 modules cannot be combined (perhaps due to space, as they are using the mini-ITX motherboard format)? Or am I missing something?

  • sp3ctr4l@lemmy.zip
    18 days ago

    I think you may be confused about how bus widths work.

    If you connect two separate memory modules, each with a 128-bit bus… that doesn’t add up to a combined/total bus width of 256 bits.

    It means you have two separate memory modules, and more overall memory, but each of them is still going to top out at 128-bit transfers.

    It’s like… if you have one tunnel with a speed limit of 128 kph, and then you build a second, adjacent tunnel with a 128 kph speed limit… you can now move twice as many vehicles (theoretically), but they’re still driving through the tunnels at the same speed.

    EDIT: Maybe a more accurate example would be: if you have two tunnels, each 128 lanes wide, neither of them is going to be able to fit a 256-lane-wide monster truck.

    Also, the max supported memory of a Strix Halo chip (the Ryzen AI Max+ 395) is 128 GB.

    So… even with your (I think flawed) example, you’d have to use two 64 GB LPDDR5X modules, because adding a second 128 GB module to an existing 128 GB module would be completely useless: the CPU would just ignore, or do undefined things with, the additional 128 GB of memory.

    Also, the Strix Halo CPU’s bandwidth isn’t 100% going to the memory. Some of it is reserved for the L1, L2, and L3 caches, and probably other things as well.

    Beyond that…

    The extremely new ‘hotshit’ LPDDR5X RAM is technically:

    LPDDR5X-8533

    Whereas Framework is using:

    LPDDR5X-8000

    … where the trailing number indicates throughput in MT/s.
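    Concretely, on a 256-bit bus the gap between those two ratings works out like this (peak theoretical numbers, just rate times bus width):

```python
# Peak theoretical bandwidth on a 256-bit bus: MT/s x 256 bits / 8 / 1000.
for mts in (8533, 8000):
    print(f"{mts} MT/s -> {mts * 256 / 8 / 1000} GB/s")
```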

    Why isn’t Framework using the fancier hotshit stuff?

    They were not able to negotiate any of it at a reasonable price from any manufacturer; all the output of the hotshit stuff is only available in prebuilt laptops, and it’s all going to other vendors.

    You can’t just buy LPDDR5X-8533 on the open market right now as a consumer, to the best of my knowledge. It’s B2B only: purchased from Samsung or Micron or whoever, then assembled into laptops by HP or Dell or whoever, then sold to Best Buy or Amazon sellers, and then sold to end consumers.

    The people making laptops with LPDDR5 8533 memory are likely paying premiums and/or making huge volume orders… Framework almost certainly doesn’t have the money or overall organizational size to do something like that.

    Why isn’t Framework using LPCAMM or LPCAMM2 memory?

    The answer is right there in the Linus vid you linked, but maybe you misunderstood it.

    Strix Halo wasn’t designed to work with LPCAMM/LPCAMM2 memory. The 256-bit bus on the CPU wasn’t designed to work with the smaller buses on LPCAMM modules. It would be sending, and expecting to receive, data to and from the memory on lanes that the memory does not possess.

    It would be somewhat analogous to trying to run a 64-bit program on a 32-bit OS. The program won’t work because it is sending, and attempting to receive, data via pathways that do not exist in the OS.

    Except that in that example you can have a software translation/compatibility layer, at least if the analogy is reversed and it’s a 32-bit program on a 64-bit OS.

    But… you can’t do that in hardware without basically a massive chipset driver overhaul, and it might end up just being impossible anyway.

    AMD would be the only people likely capable of developing that as a feature upgrade, and Framework would likely have had to cajole AMD, and pay them a significant amount of money, to attempt to develop it; that would have taken a lot of time, and might have resulted in failure anyway.

    So, Framework decided on pushing out a product that is actually viable now.

    • The Hobbyist@lemmy.zip (OP)
      18 days ago

      I was trying to reason from how GPUs occasionally use a so-called clamshell design where, if I understand correctly, the bus is split so it can reach double the number of memory chips: the chips are paired and respond to the same addresses, but each provides part of the data, which is then combined.
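      Here is a toy sketch of the clamshell idea as I understand it (the names are mine and purely illustrative, not real hardware behavior): both chips in a pair see the same address, each supplies half of the wide data word, and the halves are concatenated:

```python
# Toy model of a clamshell pair: two devices share the same address lines,
# and each drives half of the 256-bit data bus; one access combines both halves.
def clamshell_read(address: int, read_lo, read_hi) -> int:
    lo = read_lo(address)    # lower 128 bits of the word
    hi = read_hi(address)    # upper 128 bits of the word
    return (hi << 128) | lo  # concatenate into one 256-bit word

# Fake "chips" that return fixed 128-bit patterns for demonstration.
lo_chip = lambda addr: (1 << 128) - 1  # all-ones lower half
hi_chip = lambda addr: 0               # all-zeros upper half
assert clamshell_read(0, lo_chip, hi_chip) == (1 << 128) - 1
```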

      Your vehicle example got me confused because, as you point out, if you double the number of lanes while keeping the speed the same, you do effectively double the number of vehicles passing per unit of time, which is exactly the bandwidth doubling we are trying to achieve.

      I’m sorry if I’m missing some important details but I am still rather confused.

      PS: as per the specific Framework memory speed specs, the Strix Halo chip maxes out at 8000 MT/s, so 8533 is not supported, as per the specs I linked in the post.