• 0xbadcafebee 15 hours ago

    You can already do this with some GPU drivers:

      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdttm.pages_limit=5242880 ttm.pages_limit=5242880"
    
    One downside is your kernel isn't going to reserve that memory away from userland. You will still see all the memory at system level as "free". As the GPU driver starts using it, other apps/the OS will try to use the "free" memory, not knowing how much of it is in use (it may show up as "cache", or not at all). Then OOM killer starts going or programs start crashing, and at some point the OS tips over or GPU driver crashes. You can add loads of swap as a compromise and it works okay, if a bit slow.
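
    For what it's worth, ttm.pages_limit is specified in pages, so the sizing math behind that value (assuming the usual 4 KiB page size) is roughly:

      # Rough sketch, assuming 4 KiB pages: 5242880 pages ~= 20 GiB
      # of system RAM made available to the TTM pool.
      PAGE_SIZE = 4096                         # bytes per page (assumed)
      pages_limit = 5242880                    # value from the cmdline above
      print(pages_limit * PAGE_SIZE / 2**30)   # -> 20.0 (GiB)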

    In any case, loading a gigantic model just to use system RAM is absurdly slow (due to mem bandwidth), like 1-5 t/s, so it's not practical. It'd take a whole day to process one 86k token request. Just pay a cloud provider $0.01 to do it in 10 seconds.

    • adrian_b 2 hours ago

      With discrete GPUs, using system RAM is slow not due to mem bandwidth, but due to PCIe bandwidth, which is the bottleneck.

      For example, 16x PCIe 4.0: 256 Gb/s, 16x PCIe 5.0: 512 Gb/s, while 2x DDR5-6400 DIMMs: 819 Gb/s. The actual throughput is lower for both PCIe and DDR5, due to communication overhead.

      On server/workstation motherboards which may have 4, 8 or 12 DIMMs instead of 2, the ratio between memory bandwidth and PCIe bandwidth becomes proportionally higher, so the memory throughput achievable by the GPU becomes a very small fraction of the system memory bandwidth.
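
      Back-of-the-envelope, using nominal per-lane and per-channel rates (my arithmetic, overheads ignored, so real throughput is lower as noted above):

        pcie4_x16 = 16 * 2.0              # ~2 GB/s per lane x 16 lanes = 32 GB/s   (~256 Gb/s)
        pcie5_x16 = 16 * 4.0              # ~4 GB/s per lane x 16 lanes = 64 GB/s   (~512 Gb/s)
        ddr5_2ch  = 2 * 6400 * 8 / 1000   # 2 ch x 6400 MT/s x 8 B = 102.4 GB/s     (~819 Gb/s)
        ddr5_12ch = 12 * 6400 * 8 / 1000  # 12 ch (one DIMM per channel) = 614.4 GB/s
        print(pcie4_x16, pcie5_x16, ddr5_2ch, ddr5_12ch)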

      • Tsiklon an hour ago

        The difference between DDR4 and 5 is quite substantial. I have a fully loaded Cascade Lake Mac Pro - 6 channels of DDR4-2933 gets me to about 120GB/s or 960Gb/s. PCIe 3.0 is a major Achilles heel of what would be a capable workstation system with modern nvidia GPUs precisely for the reason you document.

        • zozbot234 an hour ago

          > slow not due to mem bandwidth, but due to PCIe bandwidth, which is the bottleneck.

          > On server/workstation motherboards ... the memory throughput [to system RAM] achievable by the GPU becomes a very small fraction of the system memory bandwidth.

          Yes, this is a critical point. It means that this is only realistically useful for prefill, which is compute- and not memory-bandwidth bound.

          • shdudns an hour ago

            Sorry, I'm a bit of a noob on llm. What is "prefill"? As opposed to what?

            • natechlin 44 minutes ago

              Prefill - the model computes the KV cache over the input toks, up to the last token in your input (the 'prompt'), at which point it can then begin -

              Decode - the model chooses a new token to append to the end of the current token list (i.e. it generates a token), then computes the new token's KVs.

              Decode is basically prefill 1 tok -> add 1 tok -> prefill 1 more tok -> ....

              but in the initial prefill stage it doesn't need to do generation, since you've provided the toks.
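
              A minimal sketch of that split (forward_with_kv and sample are made-up placeholder names, not any particular library's API):

                def generate(model, prompt_tokens, max_new_tokens):
                    # Prefill: one pass over the whole prompt, building the KV cache.
                    kv_cache, logits = model.forward_with_kv(prompt_tokens, kv_cache=None)
                    out = list(prompt_tokens)
                    # Decode: one token at a time, each step extending the KV cache.
                    for _ in range(max_new_tokens):
                        next_tok = sample(logits)   # generate a token
                        out.append(next_tok)
                        kv_cache, logits = model.forward_with_kv([next_tok], kv_cache=kv_cache)
                    return out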

              • ghm2199 37 minutes ago

                And incidentally, prefill is also how caching, say, a system prompt saves you some $ on API usage with LLM providers. They only compute the KV cache for the new tokens after the system prompt.

        • jmward01 13 hours ago

          The point is not how fast it is now. The point is that this opens new possibilities that can be built on. Potentially models that are trained with slightly different architectures to optimize for this use case. Possibly others come to improve this path. Possibly HW manufacturers make a few small adjustments that remove bottlenecks. Who knows, the next person may combine CPU compute with this mem sharing to get another token a second. Then the next person does predictive loading into memory to keep that bandwidth 100% maxed and usable. Then the next does, and the next does. Before you know it there is a real thing there that never existed.

          This is a great project. I love the possibilities it hints at. Thanks for building it!

          • smallnamespace 11 hours ago

            It’s architecturally not a good approach. System RAM is much slower so you should put data that doesn’t need to be used often on it. That knowledge is at the application layer. Adding a CUDA shim makes system RAM appear like VRAM, which gets things to run, but it will never run very well.

            The benchmarks at the bottom mention memory tiering and manually controlling where things go, but if your application already does that, then you probably don’t also need a CUDA shim. The application should control the VRAM to system memory transfers with boring normal code.

            • jbverschoor 7 hours ago

              Not true for unified systems. And for Strix Halo you need to dedicate a fixed amount of memory to the GPU, which is annoying.

              You're basically stating that swapping is also a bad idea. And to take it further, any memory or storage is a bad idea because there's L1 cache/SRAM which is faster than the rest.

              • Tuna-Fish 3 hours ago

                On some workloads, swapping is a bad idea.

                The fundamental problem here is that the workload of LLMs is (vastly simplified) a repeated linear read of all the weights, in order. That is, there is no memory locality in time. There is literally anti-locality; When you read a set of weights, you know you will not need them again until you have processed everything else.

                This means that many of the old approaches don't work, because time locality is such a core assumption underlying all of them. The best you can do is really a very large pool of very fast ram.
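
                A toy illustration of that anti-locality (made-up block counts, just to show the pattern): with a repeated linear scan, an LRU cache smaller than the working set gets a 0% hit rate, because every block is evicted just before it is needed again.

                  from collections import OrderedDict

                  def lru_hit_rate(num_blocks, cache_blocks, passes=3):
                      cache, hits, accesses = OrderedDict(), 0, 0
                      for _ in range(passes):
                          for b in range(num_blocks):   # linear read of all weights, in order
                              accesses += 1
                              if b in cache:
                                  hits += 1
                                  cache.move_to_end(b)
                              else:
                                  cache[b] = True
                                  if len(cache) > cache_blocks:
                                      cache.popitem(last=False)   # evict the least recently used block
                      return hits / accesses

                  print(lru_hit_rate(100, 90))   # caching 90% of the blocks still yields 0.0 hits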

                In the long term, compute is probably going to move towards the memory.

                • zozbot234 3 hours ago

                  The main blocker with swapping is not even the limited bandwidth, it's actually the extreme write workload on data elements such as the per-layer model activations - and, to a much lesser extent, the KV-cache. In contrast, there are elements such as inactive experts for highly sparse MoE models, where swapping makes sense since any given expert will probably be unused. You're better off using that VRAM/RAM for something else. So the logic of "reserve VRAM for the highest-value uses, use system RAM as a second tier, finally use storage as a last resort or for read-only data" is still quite valid.

                • dataflow 6 hours ago

                  > You’re basically stating that swapping is also a bad idea.

                  Is that a crazy thing to say? I can't recall the last time I was grateful for swap; it might've been before 2010.

                  • literalAardvark 2 hours ago

                    If you've used any unreserved VM ever you're grateful for swapping.

                    Somewhat indirectly but still.

                  • Tsiklon an hour ago

                    Strix Halo’s unified setup is pretty cool. In systems with 128GB of memory, in BIOS set the dedicated GPU memory to the smallest permitted and the Drivers will use the whole main memory pool appropriately in Linux and Windows

                    • stuaxo an hour ago

                      Does this work on the open source amdgpu drivers ?

                      I've been a bit too busy to turn mine on for a while.

                      • Tsiklon 22 minutes ago

                        I’ve had no issues running GPT-OSS 120b with decent performance on the machine (HP Zbook Ultra G1a). Running on Bluefin/Universal Blue and Windows.

                    • imtringued 6 hours ago

                      It's not true for unified systems, because they have no secondary RAM that could be used to extend the GPU memory.

                      It's pretty weird to insist on a counterargument that has no implications or consequences to the presented argument.

                      Yes, swapping is a bad idea.

                      Your second argument also falls flat, because the standard CUDA hardware setup doesn't use CXL so cache coherence isn't available. You're left with manual memory synchronization. Pretending that GPUs have cache for system RAM when they don't is pretty suspect.

                    • timnetworks 10 hours ago

                      Some people are not concerned with having it run the fastest, just having it run at all may be enough.

                      • m-schuetz 8 hours ago

                        From my experience, accessing system RAM from the GPU is so slow, it might as well count as "does not work". It's orders of magnitude faster to memcpy large swaths of memory that you are going to use to the GPU, rather than accessing system mem from a kernel which then takes ages to wait for that small block/page of memory, then waits again for the next small page/block of memory, etc. Latency hiding doesn't work anymore if the latency is that large.
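
                        A crude latency/bandwidth model of why (purely illustrative numbers, not measurements): pulling system RAM in page-sized chunks pays a round-trip latency per chunk, while one bulk copy pays it once and then runs at link speed.

                          LATENCY = 1e-6          # assumed ~1 us per small remote access
                          BW      = 25e9          # assumed ~25 GB/s effective PCIe 4.0 x16
                          TOTAL   = 1 << 30       # 1 GiB of data the kernel ends up touching

                          def transfer_time(chunk_bytes):
                              chunks = TOTAL / chunk_bytes
                              return chunks * LATENCY + TOTAL / BW   # per-chunk latency + wire time

                          print(transfer_time(4 << 10))   # 4 KiB chunks: ~0.3 s, latency-dominated
                          print(transfer_time(1 << 30))   # one big memcpy: ~0.04 s, bandwidth-bound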

                        • nl 8 hours ago

                          But then you can use CPU/RAM offload, which already allows you to offload without a kernel module.

                    • robotswantdata 2 hours ago

                      12 channel ddr5 5600 ECC is around 500gbs which in real world works very well for large MoE

                      • adrian_b an hour ago

                        You mean 500 GB/s, not Gb/s (actually 537 GB/s).

                        Unfortunately that does not matter. Even on a cheap desktop motherboard the memory bandwidth is higher than that of 16-lane PCIe 5.0.

                        Therefore the memory bandwidth available to a discrete GPU is determined by its PCIe slot, not by the system memory.

                        If you install multiple GPUs, on many motherboards that will halve the bandwidth of the PCIe slots, for an even lower memory throughput.

                      • lelanthran 9 hours ago

                        > In any case, loading a gigantic model just to use system RAM is absurdly slow (due to mem bandwidth), like 1-5 t/s, so it's not practical. It'd take a whole day to process one 86k token request

                        So don't use it for large requests. Ideal for when you just want to categorise things, for example, "does this task need a shell" or "bucket this email into one of help request, bill due or personal comms".

                        • zozbot234 7 hours ago

                          The best use is actually for a layer that "almost fits" into VRAM, such that automated offloading to system RAM will be rare enough that it doesn't impact performance.

                          • usrusr an hour ago

                            As in, when your secondary memory is fast enough, after the first 10% of the model has been processed you can swap that memory out for, say, the 50-60% part, and when that is done swap back so the 0-10% is ready in time for the next iteration?

                            Sounds ambitious for the small improvement in effective capacity, particularly when I start wondering whether, given real-life speed differences, the gain would even reach that 10% increase or come out even smaller. And that's before factoring in the power/cooling cost of saturating another interface.

                        • RobotToaster 6 hours ago

                          Would MoE models work better with this approach?

                        • aruametello an hour ago

                          Post traumatic "nvidia TurboCache" disorder triggered.

                          https://en.wikipedia.org/wiki/TurboCache

                          (Not the same thing 1:1, but worth the joke anyway)

                          • nl 14 hours ago

                            This is really interesting engineering, but I agree with the other commentators that the benchmarking makes it hard to understand the contribution various factors are having.

                            The ExLlamaV3 EXL3 2bpw (8 GB, full VRAM) row is an order of magnitude faster than the baseline - but the baseline seems to be the 32GB model running with the KV cache shared to system memory only (I think?)

                            But if an 8GB model gives sufficient quality then it seems like that would have worked without the shared memory thing?

                            I think the useful apples-to-apples benchmark is currently the Ollama + GreenBoost shim (baseline) (2-5 tps) vs ExLlamaV3 + GreenBoost cache (8–20 tps) comparison.

                            It would be really useful to see this compared with the existing llama CPU/memory offload. There is a note at the start ("Offload layers to CPU — works, but drops token/s by 5–10× because CPU RAM has no CUDA coherence") - but it is unclear if that 5-10x token speed drop is compared to running a model completely in GPU or compared to the greenboost approach.

                            I think it is vs GPU, in which case it seems likely the performance is similar to what greenboost is giving but probably much more stable.

                            • kristianp 12 hours ago

                              ExLlamaV3 EXL3 2bpw is likely the 30b-parameter GLM 4.7 Flash quantised down to 2 bits; the unstated assumption is that you need to check the 2bpw quantisation works well enough for your use case.

                              The reported size of the ModelOpt FP8, 16 GB, sounds wrong to me. If it's 8 bits per parameter it is going to be a similar size to the glm-4.7-flash:q8_0. They repeat this a few times in the readme.
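
                              The rough sizing math in question, using the ~30b parameter count above (just bits per weight times parameters, overhead ignored):

                                params = 30e9        # ~30b parameters (assumed, as above)

                                def size_gb(bpw):
                                    return params * bpw / 8 / 1e9

                                print(size_gb(2))   # ~7.5 GB -> close to the 8 GB EXL3 2bpw row
                                print(size_gb(8))   # ~30 GB  -> in line with q8_0's 31.8 GB, not 16 GB
                                print(size_gb(4))   # ~15 GB  -> 16 GB looks more like 4 bits per weight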

                            • daneel_w 15 hours ago

                              Related, a couple of years ago: https://old.reddit.com/r/Amd/comments/15t0lsm/i_turned_a_95_...

                              "I turned a $95 AMD APU into a 16GB VRAM GPU and it can run stable diffusion!"

                              • 3abiton 15 hours ago

                                > it can generate a 50 steps 512x512 image around 1 minute and 50 seconds.

                                I have the 4650G APU, and the best way to describe it is: lacking in support. This was even more true 3 years ago than now. ROCm was absolutely dogshit then; I know this because I tried to do the same when that post was made. You had to compile everything from scratch, get the relevant patches, and even then, xformers, which is a library that accelerates diffusion model inference, was not supported for Renoir or ROCm back then. Yes, you could generate an image, but it was much slower, and riddled with bugs. You couldn't update ROCm because it broke compatibility, and it was partly the reason I got into NixOS. That being said, those APUs are a powerhouse. Nowadays I can run decent agentic workflows on them (I have 64GB of DDR4 RAM, i.e. the APU can suck up as much as it needs with the latest Linux kernels).

                                Just note, diffusion models are still second-class citizens on AMD APUs and even GPUs. But then again, there's nothing close on the market right now except for what Apple offers.

                                • nl 14 hours ago

                                  The Ryzen AI CPU/GPUs (Ryzen AI 395+ etc) seem to have increasing support - https://lemonade-server.ai/ now has support for the NPU as well as the combined CPU/GPU (which I guess is an APU but is different to the G series of APUs I think?)

                                  But I'm always interested in first hand experiences of how good is it really - I'm pretty cynical about the idea that AMD actually knows what it takes to build good software end-to-end.

                                  • 3abiton 14 hours ago

                                    I also have one, and indeed support is very much frictionless now compared to a year ago. But again, not thanks to AMD, as initially it was purely community driven. Strix Halo was not even supported by ROCm (officially), and we had to deal with TheRock images, then donato made the toolbox, and then lemonade came through. I am really surprised how AMD approached this. They made big promises, they threw the hardware out, it really is an amazing piece of hardware given what you can do with it, but it was left hanging without support for an AI stack for months even though it had AI in its name. Contrast that with the DGX Spark (yes, it had and still has bugs in its kernels, but CUDA worked on day 1) and you can see the difference. Nvidia is selling an ecosystem, AMD is selling hardware. I really hope AMD focuses on the software layer more.

                                    • nl 9 hours ago

                                      I believe Lemonade is the AMD team right?

                                      But yes I agree with you about their lack of prioritization for software!

                                    • zozbot234 7 hours ago

                                      Note that Lemonade Server uses NPU low-level code that's proprietary, not available as open source. It would be nice to work on a fully open alternative, perhaps by exposing the NPU itself as a Vulkan Compute-capable device, that shaders can be auto-compiled to.

                                • bguberfain 32 minutes ago

                                  "A watchdog kernel thread monitors RAM and NVMe pressure and signals userspace before things get dangerous." - which kind of danger this type of solution can have?

                                  • ninjagoo 2 hours ago

                                    This is awesome! Normally, offloading layers to the CPU RAM means that the compute for those layers occurs on the CPU instead of the GPU, generally speaking. The CPU is orders of magnitude slower than the GPU.

                                    With this approach the compute occurs on the GPU, with the tradeoff that layers in RAM have to be moved back-and-forth through PCI-DMA. It seems to me that this should offer a speedup vs compute split between GPU and CPU. The amount of speedup will depend on how many layers would have been on CPU compute, minus the reduction due to moving those layers between RAM and the GPU.

                                    What's slower? Compute on the CPU or moving data from RAM to GPU through PCI-DMA?
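
                                    For the transfer side of that question, a rough upper bound (sizes taken from the readme; a dense model is assumed, so every weight is streamed once per token):

                                      pcie_bw  = 32.0                 # GB/s, PCIe 4.0 x16 per the readme
                                      model_gb = 31.8                 # glm-4.7-flash:q8_0
                                      vram_gb  = 12.0                 # RTX 5070
                                      overflow = model_gb - vram_gb   # ~19.8 GB living in system RAM
                                      print(pcie_bw / overflow)       # ~1.6 tokens/s cap from PCIe alone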

                                    • yjtpesesu2 15 hours ago

                                      How does this differ from anything llama.cpp offers, regarding offloading layers? The repo consistently refers to "DDR4". Is there a reason DDR5 won't work with this?

                                      • svnt 15 hours ago

                                        The readme opens with this:

                                        > I have an RTX 5070 with 12 GB VRAM and I wanted to run glm-4.7-flash:q8_0, which is a 31.8 GB model. The standard options are:

                                          > Offload layers to CPU — works, but drops token/s by 5–10× because CPU RAM has no CUDA coherence. You end up waiting.

                                          > Use a smaller quantization — you lose quality. At q4_0 the model is noticeably worse on reasoning tasks.

                                        > Buy a bigger GPU — not realistic for consumer hardware. A 48 GB card costs more than a complete workstation.

                                        > None of those felt right, so I built an alternative: route the overflow memory to DDR4 via DMA-BUF, which gives the GPU direct access to system RAM over PCIe 4.0 without a CPU copy involved.

                                        And then limps home with this caveat on the closest thing to a benchmark:

                                        > The PCIe 4.0 link (~32 GB/s) is the bottleneck when the model overflows VRAM. The best strategy is to shrink the model until it fits — either with EXL3 quantization or ModelOpt PTQ — and use GreenBoost's DDR4 pool for KV cache only.

                                        I think the reason it refers it to DDR4 is because that is how the user explained it to their coding agent. LLMs are great at perpetuating unnecessary specificity.

                                        • moffkalast 5 hours ago

                                          Given that 32 GB/s is significantly worse than CPU to RAM speeds these days, does the additional compute really make it any faster in practice? The KV cache is always on the GPU anyway unless you're doing something really weird, so it won't affect ingestion, and generation is typically bandwidth bound. With something like ×16 PCIe 6.0 it would actually make sense, but nothing less than that, or maybe for smaller dense models that are more compute bound with 8x PCIe 6.0 or 16x 5.0 but that's already below DDR5 speeds.

                                          • zozbot234 4 hours ago

                                            Additional compute is generally a win for prefill, while memory bandwidth is king for decode. KV cache however is the main blocker for long context, so it should be offloaded to system RAM and even to NVMe swap as context grows. Yes that's slow on an absolute basis but it's faster (and more power efficient, which makes everything else faster) than not having the cache at all, so it's still a huge win.

                                        • segmondy 2 hours ago

                                          I was wondering the same, but llama.cpp was written to offload to system ram. If this really works, then the advantage could be that one could run transformers / sglang, etc or other tools that don't offload to system ram. However, I want to see the numbers. Perhaps I'll give this a try, but I need a throw away box I could trash if something goes wrong, but have none at the moment.

                                          • kcb 15 hours ago

                                            CUDA has had managed memory that pages between VRAM and system RAM for a decade. Problem is doing so is unusably slow for AI purposes. Seems like an unnecessary layer here.

                                            • hrmtst93837 6 hours ago

                                              That slowness is almost useful. It makes the failure mode obvious instead of letting a 'transparent' layer hide it until some sloppy alloc or tensor blowup starts paging through system RAM or NVMe and the whole job turns into a smoke test for your storage stack.

                                              For actual training, explicit sharding and RAM mapping are ugly, but at least you can see where the pressure is and reason about it. 'Transparent' often just means performance falls off a cliff and now debugging it sucks.

                                            • xienze 15 hours ago

                                              Presumably it means that software doesn’t have to write the same sort of layer offloading support. It’ll “just work” as if you had X GB of VRAM all along.

                                              • yjtpesesu2 15 hours ago

                                                so, magic?

                                            • wewewedxfgdf 2 hours ago

                                              Why don't they just put ram slots on the card so you can augment the fast ram

                                              • M95D 2 hours ago

                                                Speed and reliability. A connector of any kind reduces signal quality. Data lines need to be longer, because the memory slot won't fit under the radiator where the memory chips are now, and that adds even more electrical interference and degrades signal.

                                                Also, we had memory slots on '90s cards. They were extremely expensive and proprietary. Ever saw a Matrox VRAM card? I never did.

                                                • HighGoldstein 2 hours ago

                                                  > A connector of any kind reduces signal quality.

                                                  Like the M.2 connector?

                                                  > Data lines need to be longer

                                                  Like the data lines going all the way to an on-motherboard storage device?

                                                  • adrian_b an hour ago

                                                    The current DIMM and SODIMM modules cannot be used for much higher speeds than are available now.

                                                    This is why there are several proposals of improved forms for memory modules, which use different sockets, like LPCAMM2, which should be able to work with faster memories.

                                                    However even LPCAMM2 is unlikely to work at the speeds of soldered GDDR7.

                                                    • varispeed an hour ago

                                                      Can't they make it easier to solder / desolder?

                                                    • literalAardvark 2 hours ago

                                                      Soldered stuff is still dramatically better than the M2 connector (than any connector really). You've never wondered why RAM doesn't use PCI Express?

                                                      • zbentley 2 hours ago

                                                        > Like the M.2 connector?

                                                        Yes, though likely something with a higher pin count since memory access is more likely to be random and can be parallel versus block storage.

                                                        > Like the data lines going all the way to an on-motherboard storage device?

                                                        Yes. Why would a GPU manufacturer/packager take on that cost, if it’s presently served well enough for most people by offloading it onto other parts of the system?

                                                    • VHRanger 2 hours ago

                                                        GDDR7x doesn't come in DIMM form factor?

                                                        In general soldered RAM seems to get much higher bandwidth than removable RAM. See Ryzen AI Max vs 9950X max RAM throughput, for example

                                                      • nic547 39 minutes ago

                                                        Strix Halo uses a 256bit memory interface, the normal desktop processors only have a 128bit interface, that's the biggest difference in bandwidth. For more bandwidth you need to go to a Threadripper.

                                                        Strix Halo seems to use LPDDR with 8000 MT/s, which is a bit faster than the usual 5600 MT/s-6400 MT/s "normal" DDR5-DIMMs (Albeit (expensive) faster ones seem to exist), so there's a slight edge towards soldered memory (not sure about LPCAMM2 and similar tech).

                                                        GDDR7 is a different league, a 5070 Ti also has a 256bit memory interface, but has 896GB/s bandwidth, compared to strix halo with 256GB/s

                                                        • wewewedxfgdf 2 hours ago

                                                          We are talking here about slower ram to augment.

                                                        • timmmmmmay 2 hours ago

                                                          connectors are bad for signal integrity and GDDR is particularly picky about this

                                                          • wewewedxfgdf 2 hours ago

                                                            We're talking about ordinary RAM to augment, like a cache.

                                                            Not as GPU VRAM expansion.

                                                        • Havoc 14 hours ago

                                                          > The best strategy is to shrink the model until it fits — either with EXL3 quantization or ModelOpt PTQ — and use GreenBoost's DDR4 pool for KV cache only.

                                                          Does this make sense? I'd have thought the KV is guaranteed to be used 100% of the time while say in a MoE the same can't be said of the weights.

                                                          Though I suppose if you're shooting for huge context then having that allocation go into RAM makes sense, especially when it's allocated but not used yet

                                                          • alexeldeib 11 hours ago

                                                            KV cache is, well, a cache that can fill up and trigger eviction. You require enough space to execute at least 1 fwd pass of 1 request at your context length. KV cache hits reduce TTFT by avoiding prefill. You don’t get to skip decode.

                                                            MoE is kinda related in terms of lower usage requirements vs a dense model of same total param size, but I think your mental model is a bit off.

                                                            • zozbot234 7 hours ago

                                                              KV cache is also eminently swappable if you have fast storage, since it mostly sees small append-only writes per token - it's not rewritten continuously like the activations. (I believe it's even better if you use cached input tokens across requests, since that portion of KV cache can then be recycled and save a single ~KV-cache sized write per request.) Accessing swapped-out cache may be slow, but it's highly preferable to not having that cache amount at all and recomputing from scratch.

                                                          • ma2kx 14 hours ago

                                                            The physical bottleneck to system memory remains. Therefore, I assume that better results are achieved by manually adjusting which layers are offloaded.

                                                            I would prefer to use system memory to cache different models, focusing on things like embedding, rerankers, and TTS. This is sufficient to run a more complex RAG locally, for example, via Mem0, and then use a larger LLM via the cloud.

                                                            • dr_kretyn 33 minutes ago

                                                              Is there a similar initiative for AMD?

                                                              • angry_octet an hour ago

                                                                I have a system with an ungodly amount of Optane memory and I'm hoping this will work.

                                                                • yjftsjthsd-h 16 hours ago

                                                                  Previously: https://news.ycombinator.com/item?id=47384557

                                                                  (Still cool, still would benefit from better benchmarks)

                                                                  • bhewes 15 hours ago

                                                                    This has been fun; we can task our nemotron-3-super model to run overnight when our desktops are idle. 4070s and 96GB of RAM work fine. Slow, but it does its job.

                                                                    • Insanity 13 hours ago

                                                                      Extend your VRAM using RAM, then extend your RAM using Swap.

                                                                      • lokimoon 2 hours ago

                                                                        I’m gonna build a zfs raidz2 on floppy disks to catch the overflow

                                                                        • system2 11 hours ago

                                                                          And burn the swap pagesys file to a rewritable DVD to complete the cycle. It will be super fast that way.

                                                                          • krige 9 hours ago

                                                                            Extend your RAM using RAM Doubler!

                                                                            • FooBarWidget 8 hours ago

                                                                              Then extend your disk space using DoubleSpace/DriveSpace!

                                                                              • lossyalgo 3 hours ago

                                                                                I did that. It worked, until it didn't, and then I learned how to format my 340MB HDD and re-install DOS 6.22. Fun times!

                                                                                • Datagenerator 8 hours ago

                                                                                  Just to be sure install Stacker (from STAC electronics) too

                                                                              • SV_BubbleTime 12 hours ago

                                                                                If you are doing video models, this is an excellent way to murder your SSD.

                                                                                Do not put swap on an SSD you care about at all.

                                                                                • zozbot234 7 hours ago

                                                                                  You can of course monitor SMART wearout indicators to check whether this is happening. Casual use of swap for non LLM-use is actually fine since "cold" ephemeral data will be swapped out first and that will never get written to; KV cache is mostly fine since it's similarly append-only so writes are tolerably small; but yes, more general LLM inference totally breaks that limited-writes pattern and will wear out/kill your media.

                                                                                  • Insanity 11 hours ago

                                                                                    I was writing it somewhat tongue-in-cheek and not as a serious suggestion. But thanks for adding the disclaimer, that's good advice!

                                                                                    • duskdozer 5 hours ago

                                                                                      zram swap otoh should be relatively 'free'

                                                                                      • zozbot234 5 hours ago

                                                                                        LLM working memory is not compressible so ZRAM doesn't buy you anything.

                                                                                      • rvz 9 hours ago

                                                                                        > Do not put swap on an SSD you care about at all.

                                                                                        This.

                                                                                          Many people are rediscovering what the purpose of swap files is, but will still find a way to abuse them without knowing that they are actually destroying their SSD.

                                                                                    • dwroberts 4 hours ago

                                                                                        The title here needs changing; this is for nvidia cards but it is not an official project and has nothing to do with them

                                                                                        (Feels especially deceptive when there is another top story right now with the headline “nvidia nemoclaw” which is an official project)

                                                                                      • paultendo 15 hours ago

                                                                                        Could be a very useful way to do some overnight tasks using spare RAM. Possibly things like LLM-based categorisation, labelling, data cleansing. That's what comes to mind for me anyway.

                                                                                        • MaxikCZ 5 hours ago

                                                                                          Neat part is every task becomes overnight task when you start offloading to RAM.

                                                                                        • armada651 12 hours ago

                                                                                          Doesn't Windows already do this by default? I can already run models bigger than my GPU VRAM and it will start using up to 50% of my system RAM as "shared memory". This is on a Desktop PC without a shared memory architecture.

                                                                                          • nickjj 3 hours ago

                                                                                            Yep I had a GeForce 750 Ti (2 GB) and I was able to run a ton of things on Windows without any issues at all.

                                                                                            As soon as I switched to Linux I had all sorts of problems on Wayland where as soon as that 2 GB was reached, apps would segfault or act in their own unique ways (opening empty windows) when no GPU memory was available to allocate.

                                                                                            Turns out this is a problem with NVIDIA on Wayland. On X, NVIDIA's drivers act more like Windows. AMD's Linux drivers act more like Windows out of the box on both Wayland and X. System memory gets used when VRAM is full. I know this because I got tired of being unable to use my system after opening 3 browser tabs and a few terminals on Wayland so I bought an AMD RX 480 with 8 GB on eBay. You could say my cost of running Linux on the desktop was $80 + shipping.

                                                                                            A few months ago I wrote a long post going over some of these details at https://nickjanetakis.com/blog/gpu-memory-allocation-bugs-wi.... It even includes videos showing what it's like opening apps both on Wayland and X with that NVIDIA card.

                                                                                            • Yokohiii 11 hours ago

                                                                                              The nvidia windows driver enables RAM swapping by default.

                                                                                              Great way to backstab you if you prefer inference speed.

                                                                                              • 3836293648 11 hours ago

                                                                                                I don't think Windows does this, but Ollama does

                                                                                                • whywhywhywhy 3 hours ago

                                                                                                  It's the drivers but it was a relatively recent addition, think it was added when either the 30xx or 40xx series shipped and the lower cards had pitiful VRAM so they enabled it by default so they'd work with all games.

                                                                                                  Most people who know it does this turns it off because it kicks in too early so if you have 24GB it'll offload to RAM and tank your inference speed when you hit around 22GB use.

                                                                                                  https://nvidia.custhelp.com/app/answers/detail/a_id/5490/~/s...

                                                                                                  • lastdong 3 hours ago

                                                                                                    Nicely linked!

                                                                                                  • nodja 11 hours ago

                                                                                                    NVIDIA's GPU drivers on windows 100% do this

                                                                                                    https://i.imgur.com/c0a3vUy.png

                                                                                                • 152334H 5 hours ago

                                                                                                  Nobody mentioning how this project is vibecoded slop?

                                                                                                    > The code is really bad with completely uneeded parts. The LLM (Qwen 2.5 7B) has hardcoded the i9 14700KF topology, and has variables related to it never used... It's even funnier that the show hardware function always prints the same string. There are even random pip log files. Why did this slop got coverage here?
                                                                                                  
                                                                                                  https://www.phoronix.com/forums/forum/linux-graphics-x-org-d...
                                                                                                  • Berazu 5 hours ago

                                                                                                    I wish there was a way to extend RAM/NVMe with GPU VRAM. :(

                                                                                                    • nuopnu 4 hours ago

                                                                                                      There are vram disks, so at least you can use it for the swap.

                                                                                                    • sabareesh 15 hours ago

                                                                                                      I wish it provided benchmark comparing Direct RAM offload vs CPU offload vs Full VRAM

                                                                                                      • felipe_aramburu 13 hours ago

                                                                                                        How does this relate to cuCascade https://github.com/nvidia/cucascade

                                                                                                        • bandrami 5 hours ago

                                                                                                          Qu'ils mangent de la brioche

                                                                                                          • tandr 4 days ago

                                                                                                            Some simpler benchmark table would be great. May I suggest Ollama on base machine, Ollama with T1, Ollama with T1+T2 etc. on midsize and big models to compare token/sec?

                                                                                                            • pabs3 3 days ago

                                                                                                              Would be great to get this into mainline Linux.

                                                                                                              • brador 5 hours ago

                                                                                                                Could this work on steam deck?

                                                                                                                • NooneAtAll3 9 hours ago

                                                                                                                      nvidia failed to provide a GPU with an actually meaningful amount of VRAM

                                                                                                                      and instead of improving the actual product, it decided to "solve the problem in software"

                                                                                                                      I expect this greenboost to crash and burn, honestly...

                                                                                                                  • cma 9 hours ago

                                                                                                                    > it decided to "solve the problem in software"

                                                                                                                    This isn't made by nvidia

                                                                                                                    • shmeeed 3 hours ago

                                                                                                                      Still kinda true, though. As other commenters have pointed out, their Windows drivers do similar stuff.

                                                                                                                  • holoduke 16 hours ago

                                                                                                                        This is extremely slow and not useful in my opinion.

                                                                                                                    • daneel_w 15 hours ago

                                                                                                                      It makes the difference between being able to run a lot of machine learning tasks, and not being able at all. Pretty useful.

                                                                                                                      • majorchord 16 hours ago

                                                                                                                        I would say it depends entirely on your usecase. I don't think there can be a simple "not useful" generalization that applies to everyone.

                                                                                                                        • jauntywundrkind 15 hours ago

                                                                                                                          Man I wish that was a canned response that could be deployed on demand! Well said.

                                                                                                                          I really appreciate thriftful & resourceful points of view. Exploring what if, looking for use is such a great virtue.

                                                                                                                        • bigwheels 16 hours ago

                                                                                                                          Can you elaborate beyond the shallow/superficial dismissal?

                                                                                                                          • whywhywhywhy 3 hours ago

                                                                                                                            If it takes seconds in VRAM it can take tens of minutes running the same thing offloaded to RAM if it hasn't been designed to do it.

                                                                                                                          • ozgrakkurt 7 hours ago

                                                                                                                            It is about as useful as rtx