• Twirrim a day ago

    CXL is going to be really interesting.

    On the positive side, you can scale out memory quite a lot, fill up PCIe slots, even have memory external to your chassis. Memory tiering has a lot of potential.

    On the negative side, you've got latency costs to swallow. You don't get distance from the CPU for free (there's a reason the memory on your motherboard sits as close to the CPU as practical) https://www.nextplatform.com/2022/12/05/just-how-bad-is-cxl-.... The CXL 2.0 spec works out to roughly 200ns of latency added to every access to memory behind it, so you've got to think carefully about how you approach using it, or you'll cripple yourself.
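    For intuition, a rough back-of-envelope (the 100ns/300ns figures are my own assumptions, not from the spec): a dependent pointer chase serializes every miss, so extra latency hits linearly, while a streaming scan can hide much of it behind prefetching.

```python
# Rough model: in a pointer chase every load depends on the previous
# one, so miss latencies serialize and extra latency hurts linearly.
LOCAL_NS = 100  # assumed local DRAM load-to-use latency
CXL_NS = 300    # assumed local latency plus ~200ns CXL 2.0 overhead

def chase_time_ms(nodes, latency_ns):
    """Total time to walk a linked structure of `nodes` elements."""
    return nodes * latency_ns / 1e6

local_ms = chase_time_ms(1_000_000, LOCAL_NS)
cxl_ms = chase_time_ms(1_000_000, CXL_NS)
print(f"1M-node chase: {local_ms:.0f} ms local vs {cxl_ms:.0f} ms on CXL")
```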

    There's been work on the OS side around data locality, but CXL stuff hasn't been widely available, so there's an element of "Well, we'll have to see".

    Azure has some interesting whitepapers out as they've been investigating ways to use CXL with VMs, https://www.microsoft.com/en-us/research/wp-content/uploads/....

    • tanelpoder a day ago

      Yup, for best results you wouldn't just dump your existing pointer-chasing and linked-list data structures onto CXL (like Optane's transparent mode - Memory Mode, I believe it was called - did).

      But CXL-backed memory can use your CPU caches as usual and the PCIe 5.0 lane throughput is still good, assuming that the CXL controller/DRAM side doesn't become a bottleneck. So you could design your engines and data structures to account for these tradeoffs. Like fetching/scanning columnar data structures, prefetching to hide latency etc. You probably don't want to have global shared locks and frequent atomic operations on CXL-backed shared memory (once that becomes possible in theory with CXL3.0).
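      To put a number on the prefetching point, Little's Law (concurrency = bandwidth x latency) gives the in-flight requests needed to keep the link busy; a sketch with assumed figures (~50 GB/s usable x16 throughput, ~300ns access latency - my numbers, not measurements):

```python
# Little's Law: concurrency = bandwidth x latency. This estimates how
# many 64-byte cache-line requests must be in flight to saturate a
# CXL link (all figures are assumptions, not measured values).
def lines_in_flight(bandwidth_gbs, latency_ns, line_bytes=64):
    bytes_per_ns = bandwidth_gbs  # 1 GB/s is 1 byte/ns
    return bytes_per_ns * latency_ns / line_bytes

print(f"{lines_in_flight(50, 300):.0f} cache lines in flight")  # ~234
```

      That's far more concurrency than one core's miss queue provides, which is why wide software prefetching (or many threads) matters.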

      Edit: I'll plug my own article here - if you've wondered whether there were actual large-scale commercial products that used Intel's Optane as intended then Oracle database took good advantage of it (both the Exadata and plain database engines). One use was to have low latency durable (local) commits on Optane:

      https://tanelpoder.com/posts/testing-oracles-use-of-optane-p...

      VMware supports it as well, but using it as a simpler layer for tiered memory.

      • packetlost a day ago

        > You probably don't want to have global shared locks and frequent atomic operations on CXL-backed shared memory (once that becomes possible in theory with CXL3.0).

        I'd bet contested locks spend more time in cache than most other lines of memory so in practice a global lock might not be too bad.

        • tanelpoder a day ago

          Yep agreed, for single-host with CXL scenarios. I wrote this comment thinking about a hypothetical future CXL3.x+ scenario with multi-host fabric coherence where one could in theory put locks and control structures that protect shared access to CXL memory pools into the same shared CXL memory (so, no need for coordination over regular network at least).

        • samus 21 hours ago

          DBMSs have been managing storage with different access times for decades, so it should be pretty easy to adapt an existing engine. Or you could use it as a gigantic swap space. No clue whether additional kernel patches would be required for that.

        • GordonS a day ago

          Huh, 200ns is less than I imagined; even if it is still almost 100x slower than regular RAM, it's still around 100x faster than NVMe storage.

          • Dylan16807 a day ago

            Regular RAM is 50-100ns.

            • jauntywundrkind a day ago

              Most cross-socket traffic is >100ns.

            • temp0826 21 hours ago

              I have never had to go deep into NUMA configuration personally but couldn't it be leveraged here?

              • wmf 21 hours ago

                Yes, if you want your app to be aware of CXL you can configure it as a separate NUMA node.

                • tanelpoder 21 hours ago

                  Optane memory modules also present themselves as separate (memory-only) NUMA nodes. They've given me a chance to play with Linux tiered memory without having to emulate the hardware for a VM.

              • immibis a day ago

                What kind of motherboard, CPU, cables, switches, and end devices would I need to buy to have a CXL network?

                • afr0ck a day ago

                  CXL uses the PCIe physical layer, so you just need to buy hardware that understands the protocol, namely the CPU and the expansion boards. AMD Genoa (e.g. EPYC 9004) supports CXL 1.1, as do Intel Sapphire Rapids and all subsequent models. For CXL memory expansion boards, you can get them from Samsung or Marvell. I got a 128 GB model from Samsung with 25 GB/s read throughput.

                  • wmf a day ago

                    CXL networking is still in the R&D stage.

                  • imtringued 18 hours ago

                    The latency concern is completely overblown because CXL has cache coherence. The moment you do a second request to the same page it will be a cache hit.

                    I would be more worried about memory bandwidth. You can now add so much memory to your servers that it might take minutes to do a full in-memory table scan.
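                    A quick sanity check on that, with assumed figures (a pool behind one x16 link at ~50 GB/s usable):

```python
# Time to stream through a memory pool at a given link bandwidth
# (capacities and bandwidth are illustrative assumptions).
def scan_seconds(capacity_tb, bandwidth_gbs):
    return capacity_tb * 1024 / bandwidth_gbs

print(f"{scan_seconds(4, 50):.0f} s")          # ~82 s for 4 TB
print(f"{scan_seconds(16, 50) / 60:.1f} min")  # ~5.5 min for 16 TB
```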

                    • justincormack 17 hours ago

                      Cache lines are 64 bytes, not page size.

                  • mdaniel a day ago

                    > Buy From One of the Regions Below > Egypt

                    :-/

                    But, because I'm a good sport, I actually chased a couple of those links figuring that I could convert Egyptian Pound into USD but <https://www.sigma-computer.com/en/search?q=CXL%20R5X4> is "No results", and similar for the other ones that I could get to even load

                    • tanelpoder a day ago

                      Yeah I saw the same. I've been keeping an eye on the CXL world for ~5 years and so far it's 99% announcements, unveilings and great predictions. The only CXL cards a consumer/small business can actually buy today are some experimental-ish 64GB/128GB ones. Haven't seen any of my larger clients use it either. Both the Intel Optane and DSSD storage efforts got discontinued after years of fanfare; from a technical point of view, I hope the same doesn't happen to CXL.

                      • afr0ck a day ago

                        I think Meta has already rolled out some CXL hardware for memory tiering. Marvell, Samsung, Xconn and many others have built various memory chips and switching hardware up to CXL 3.0. All recent Intel and AMD CPUs support CXL.

                      • sheepscreek a day ago

                        That is pretty hilarious. I wonder what’s the reason behind this. Maybe they wanted plausible deniability in case someone tried to buy it (“oh the phone lines were down, you’ll have to go there to buy one”).

                        • eqvinox 20 hours ago

                          I think someone just forgot to delete an option somewhere and it "crept in", and it really isn't supposed to have a "buy" link at all at this point.

                          • antonvs 20 hours ago

                            Ok, I rented a camel and went to the specified location, but there was nothing there but some scorpions and an asp. What gives?

                        • bri3d a day ago

                          CXL is a standard for compute and I/O extension over PCIe signaling which has been around for a few years, with a couple of RAM boards available (from SMART and others).

                          I think the main bridge chipsets come from Microchip (this one) and Montage.

                          This Gigabyte product is interesting since it's a little lower end than most CXL solutions - so far CXL memory expansion has mostly appeared in esoteric racked designs like the particularly wild https://www.servethehome.com/cxl-paradigm-shift-asus-rs520qa... .

                          • bobmcnamara a day ago

                            CXL seems so much cleaner than the old AMD way of plumbing an FPGA through the second CPU socket.

                            • undefined a day ago
                              [deleted]
                            • pella a day ago

                              I’m really looking forward to GPU-CXL integration.

                              "CXL-GPU: Pushing GPU Memory Boundaries with the Integration of CXL Technologies" https://arxiv.org/abs/2506.15601

                              • eqvinox 20 hours ago

                                The "AI" marketing on this is positively silly (and a good reflection of how weird everything has gotten in this industry.)

                                Do like the card though, was waiting for someone to make an affordable version (or rather: this looks affordable, I hope it will be both that and actually obtainable. CXL was kinda locked away so far…)

                                • trebligdivad a day ago

                                  My god - a CXL product! It's really surprising anything got that far. I'd been expecting external CXL boxes, not internal stuff.

                                  • alberth a day ago

                                    As someone not well versed in GPUs and CXL, would someone mind explaining the significance of this?

                                    • wmf 21 hours ago

                                      This looks like the first CXL card you could actually buy. It's been coming soon for years. It also confirms that both Intel and AMD workstation CPUs support CXL.

                                    • nmstoker 16 hours ago

                                      Assuming you have the requisite CPU and motherboard with this card, does the memory just appear as normal under Linux/Windows/whatever OS is installed? Or do you need to get special drivers or other particular software to make use of it?

                                      • roscas a day ago

                                        That is amazing. Most consumer boards will only have 32 or 64. To have 512 is great!

                                        • justincormack a day ago

                                          You haven't seen the price of 128GB DDR5 RDIMMs; they are maybe $1300 each.

                                          A lot of the initial use cases of CXL seem to be using up lots of older DDR4 RDIMMs in newer systems to expand memory, e.g. cloud providers have a lot of them.

                                          • kvemkon a day ago

                                            Micron DDR5-5600 for 900 Euro (without VAT, business).

                                          • tanelpoder a day ago

                                            ... and if you have the money, you can use 3 out of 4 PCIe5 slots for CXL expansion. So that could be 2TB DRAM + 1.5TB DRAM-over-CXL, all cache coherent thanks to CXL.mem.

                                            I guess there are some use cases for this for local users, but I think the biggest wins could come from the CXL shared memory arrays in smaller clusters. So you could, for example, cache the entire build-side of a big hash join in the shared CXL memory and let all other nodes performing the join see the single shared dataset. Or build a "coherent global buffer cache" using CPU+PCI+CXL hardware, like Oracle Real Application Clusters has been doing with software+NICs for the last 30 years.

                                            Edit: One example of the CXL shared memory pool devices is Samsung CMM-B. Still just an announcement, haven't seen it in the wild. So, CXL arrays might become something like the SAN arrays in the future - with direct loading to CPU cache (with cache coherence) and being byte-addressable.

                                            https://semiconductor.samsung.com/news-events/tech-blog/cxl-...

                                            • cjensen a day ago

                                              Both of the supported motherboards support installation of 2TB of DRAM.

                                              • reilly3000 a day ago

                                                Presumably this is about adding more memory channels via PCIe lanes. I'm very curious to know what kind of bandwidth one could expect with such a setup, as that is the primary bottleneck for inference speed.

                                                • Dylan16807 a day ago

                                                  The raw speed of PCIe 5.0 x16 is 63 billion bytes per second each way. Assuming we transfer several cache lines at a time the overhead should be pretty small, so expect 50-60GB/s. Which is on par with a single high-clocked channel of DRAM.
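                                                  The arithmetic behind that figure (PCIe 5.0 runs at 32 GT/s per lane with 128b/130b line encoding; TLP/protocol overhead ignored):

```python
# Raw PCIe 5.0 bandwidth: 32 GT/s per lane, 128b/130b encoding,
# 8 bits per byte. Protocol overhead (TLP headers etc.) ignored.
def pcie5_gb_per_s(lanes):
    return 32 * lanes * (128 / 130) / 8

print(f"x16: {pcie5_gb_per_s(16):.1f} GB/s each way")  # ~63.0
```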

                                            • jonhohle a day ago

                                              Why did something like this take so long to exist? I’ve always wanted swap or tmpfs available on old RAM I have lying around.

                                              • gertrunde a day ago

                                                Such things have existed for quite a long time...

                                                For example:

                                                https://en.wikipedia.org/wiki/I-RAM

                                                (Not a unique thing, merely the first one I found).

                                                And then there are the more exotic options, like the stuff that these folk used to make: https://en.wikipedia.org/wiki/Texas_Memory_Systems - iirc - Eve Online used the RamSan product line (apparently starting in 2005: https://www.eveonline.com/news/view/a-history-of-eve-databas... )

                                                • numpad0 16 hours ago

                                                    Yeah. I can't count how many times I've seen descriptions of northbridge links that smell like the author knows it's PCIe under the hood. I've also seen someone explain that it can't be done on most CPUs unless all cache systems are turned off, because the (IO?)MMU doesn't allow caching of MMIO addresses outside the DRAM range.

                                                    The technical explanations for why you can't have extra DRAM controllers on PCIe increasingly sound more like market segmentation than purely technical reasons. x86 is a memory-mapped I/O platform. Why can't we just have RAM sticks at RAM addresses?

                                                    The reverse of this works, btw. NVMe drives can use Host Memory Buffer to cache reads and writes in system RAM - the feature implicated in the recently rumored bad ntfs.sys incident in Windows 11.

                                                  • kvemkon a day ago

                                                    My question would rather be why we had single (or already dual) core CPUs with dual-channel memory controllers, and now we have 16-core CPUs but still only dual-channel RAM.

                                                    • justincormack 17 hours ago

                                                      AMD EPYC has 12 channels, 24 on a dual socket. AMD sells machines with 2 (consumer), 4 (Threadripper), 6 (dense edge), 8 (Threadripper Pro) and 12 memory channels (high-end EPYC). Next generation EPYC will have 16 channels. Roughly, if you look at the AMD options, they give you 2 memory channels per 16 cores. CPUs tend to be somewhat limited in what bandwidth they can use; e.g. on Apple Silicon you can't actually consume all the memory bandwidth of the wider options from the CPUs alone - it's mainly useful for the GPU. DDR5 was double the speed of DDR4, and speeds have been ramping up too, so there have been improvements there.

                                                      • Dylan16807 a day ago

                                                        DDR1 and DDR2 were clocked 20x and 10x slower than DDR5. The CPU cores we have now are faster but not that much faster, and with the typical user having 8 or fewer performance cores 128 bits of memory width has stayed a good balance.

                                                        If you need a lot of memory bandwidth, workstation boards have DDR5 at 256-512 bits wide. Apple Silicon supports that range on Pro and Max, and Ultra is 1024.

                                                        (I'm using bits instead of channels because channels/subchannels can be 16 or 32 or 64 bits wide.)
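                                                        A rough illustration of that balance, using representative parts and core counts I picked myself:

```python
# Peak bandwidth of a 128-bit-wide memory setup: transfers/s times
# 16 bytes per transfer, shown per performance core. Parts and core
# counts below are illustrative assumptions.
def dual_channel_gbs(mt_per_s):
    return mt_per_s * 16 / 1000

for name, mts, cores in [("DDR-400", 400, 1),
                         ("DDR2-800", 800, 2),
                         ("DDR5-6400", 6400, 8)]:
    bw = dual_channel_gbs(mts)
    print(f"{name}: {bw:5.1f} GB/s total, {bw / cores:.1f} GB/s per core")
```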

                                                        • bobmcnamara a day ago

                                                          Intel and AMD I'd reckon. Apple went wide with their busses.

                                                          • to11mtm a day ago

                                                            Well, each channel needs a lot of pins. I don't think all 288/262 pins need to go to the CPU, but a large number of them do, I'd wager; the old LGA 1366 (tri-channel) and LGA 1151 (dual-channel) are probably as close as we can get to a simple reference point [0].

                                                            Apple FBOW, based on a quick and sloppy count of a reballing jig [1], has something on the order of 2500-2700 balls on an M2 CPU.

                                                            I think AMD's FP11 'socket' (it's really just a standard ball grid array) pinout is something on the order of 2000-2100 balls and that gets you four 64 Bit DDR channels (I think Apple works a bit different and uses 16 bit channels, thus the 'channel count' for an M2 is higher.)

                                                            Which is a roundabout way of saying, AMD and Intel probably can match the bandwidth, but to do so would likely require moving to soldered CPUs, which would be a huge paradigm shift for all the existing boardmakers/etc.

                                                            [0] - They do have other tradeoffs; namely that 1151 has built-in PCIe; on the other hand, the link to the PCH is AFAIR a good bit thinner than the QPI link on the 1366.

                                                            [1] - https://www.masterliuonline.com/products/a2179-a1932-cpu-reb... . I counted ~55 rows along the top and ~48 rows on the side...

                                                            • bobmcnamara 10 hours ago

                                                              Completely agree, and this is a bit of a ramble...

                                                              I think part of it might be that Apple recognized that integrated GPUs require a lot of bulk memory bandwidth. I noticed this with their tablet-derivative cores, whose memory bandwidth tended to scale with screen size, but Samsung and Qualcomm didn't bother for ages. And it sucked doing high-speed vision systems on their chips because of it.

                                                              For years Intel had been slowly beefing up the L2/L3/L4.

                                                              The M1 Max is somewhere between an Nvidia 1080 and 1080 Ti in bulk bandwidth. The lowest-end M chips aren't competitive, but nearly everything above that overlaps even the current-gen Nvidia 4050+ offerings.

                                                              • to11mtm an hour ago

                                                                Good ramble though :)

                                                                Yeah, Apple definitely realized that they should do something, and for as much as I don't care for their ecosystem, I think they were very smart in how they handled the need for memory bandwidth. E.g. having more 16-bit channels vs fewer 64-bit channels probably allows for better power management, as far as being able to relocate data on 'sleep'/'wake' and thus leave more of the RAM powered off.

                                                                That plus the good UMA impl has left the rest of the industry 'not playing catchup' i.e.

                                                                - Intel failing to capitalize on the opportunity of a 'VRAM heavy' low end card to gain market share,

                                                                - AMD failing to bite the bullet and meaningfully try to fight Nvidia on memory/bandwidth margin...

                                                                - Nvidia just raking that margin in...

                                                                - By this point you'd think Qualcomm would just do an 'AI Accelerator' reference platform just to try....

                                                                - I'm guessing whatever efforts are happening in China, they are too busy trying to fill internal needs to bother boasting and tipping their hat; better to let outside companies continue to overspend on the current paradigm.

                                                          • christkv a day ago

                                                            Check out the Strix Halo 395+: it's got 8 memory channels, up to 128 GB, and 16 cores.

                                                            • Dylan16807 a day ago

                                                              That's a true but misleading number. It's the equivalent of "quad channel" in normal terms.

                                                            • kmeisthax 21 hours ago

                                                              [dead]

                                                            • aidenn0 a day ago

                                                              (S)ATA or PCI to DRAM adapters were widely available until NAND became cheaper per bit than DRAM, at which point the use for it kind of went away.

                                                              IIRC Intel even made a DRAM card that was drum-memory compatible.

                                                              • undefined a day ago
                                                                [deleted]
                                                                • Dylan16807 a day ago

                                                                  RAM controllers are expensive enough that it's rarely worth pairing them with old RAM lying around.

                                                                • JonChesterfield a day ago

                                                                  I don't get it. The point of (DDRn) memory is latency. If it's on the far side of PCIe, latency is much worse than system memory. In what sense is this better than an SSD on the far side of PCIe?

                                                                  • wmf a day ago

                                                                    It's only ~2x worse latency than main memory but 100x lower than SSD.

                                                                    • JonChesterfield a day ago

                                                                      I'm finding ~50ns best case for PCIe, ~10ns for system. Which is a lot closer than I expected.

                                                                      • adgjlsfhk1 a day ago

                                                                        No system RAM is 10ns; that's closer to L2 cache.

                                                                    • Cr8 a day ago

                                                                      PCIe devices can also do direct transfers to each other - if you have one of these and a GPU, it's relatively quick to move data between them without bouncing through main RAM.

                                                                    • nottorp 19 hours ago

                                                                      For every gold rush, make and sell shovels.

                                                                      • amirhirsch a day ago

                                                                        The i in that logo seems like it’s hurting the A

                                                                        • jauntywundrkind a day ago

                                                                          I wonder whose controller they are using.

                                                                          For a memory controller, that thing looks hot!

                                                                        • fithisux 19 hours ago

                                                                          The Amiga approach resurrected.