• refibrillator 2 hours ago

    vLLM supports MLA for Deepseek models as of 3 weeks ago. 3x higher generation throughput and 10x token memory capacity.

    https://github.com/vllm-project/vllm/releases/tag/v0.7.1

    MHA is still faster in the low-QPS regime, apparently.

    https://neuralmagic.com/blog/enhancing-deepseek-models-with-...

    Also published this month was a theoretical proof showing that, for the same KV cache overhead, MLA consistently offers greater expressive power than GQA. Furthermore, widely used GQA-based pre-trained models (e.g. LLaMA, Qwen, Mixtral) can be converted into MLA-based models.

    https://arxiv.org/pdf/2502.07864
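
    Rough napkin math behind the "10x token memory capacity" claim, a minimal sketch with illustrative, loosely DeepSeek-V3-like dimensions (not exact figures):

        # Per-token, per-layer KV-cache footprint; all dims are illustrative
        n_heads, d_head = 128, 128        # hypothetical MHA config
        n_kv_heads = 8                    # hypothetical GQA config
        d_latent, d_rope = 512, 64        # MLA compressed latent + decoupled RoPE dims
        bytes_per_val = 2                 # bf16

        mha = 2 * n_heads * d_head * bytes_per_val     # K and V for every head   -> 65536 B
        gqa = 2 * n_kv_heads * d_head * bytes_per_val  # K and V for shared heads ->  4096 B
        mla = (d_latent + d_rope) * bytes_per_val      # one compressed latent    ->  1152 B
        print(mha / mla, gqa / mla)                    # ~57x and ~3.6x smaller cache per token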

    • shihab an hour ago

      For future readers, note that those 3x and 10x figures are compared to vLLM's own previous release, and NOT compared to Deepseek's implementation.

      I am very curious to see how well-optimized DeepSeek's code is compared to leading LLM serving frameworks like vLLM or SGLang.

      • menaerus an hour ago

        Pretty significant improvements. However, my back-of-the-napkin math suggests that MLA, FlashAttention, and similar optimizations provide benefits only when memory access time dominates compute in the attention implementation? That would be the prefill phase (or TTFT) and training (when batch_size >> 1), but not the decode phase (inference)?

        • rfoo an hour ago

          You've got it backwards. After FlashAttention, it's the decoding part that is mainly bound by memory access. With FA, as long as you have a large enough batch size, you can push training/prefill to be compute-bound.

          • menaerus 30 minutes ago

            I don't think I got it backwards; I believe what I said is correct: FA does not improve inference time.

            From the authors of FlashAttention:

            > This [decoding] operation has been optimized with FlashAttention (v1 and v2 recently) in the training case, where the bottleneck is the memory bandwidth to read and write the intermediate results

            And then they continue with:

            > However, these optimizations don’t apply directly to the inference case, because the bottlenecks are different. For training, FlashAttention parallelizes across the batch size and query length dimensions. During inference, the query length is typically 1 ... With a batch size of 1, FlashAttention will use less than 1% of the GPU!

            And then they come up with a different proposal, Flash-Decoding, that optimizes for inference time:

            > Our new approach Flash-Decoding is based on FlashAttention, and adds a new parallelization dimension: the keys/values sequence length. It combines the benefits of the 2 approaches from above. Like FlashAttention, it stores very little extra data to global memory, however it fully utilizes the GPU even when the batch size is small, as long as the context length is large enough.

            Link: https://crfm.stanford.edu/2023/10/12/flashdecoding.html
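
            Roughly what that extra parallelization dimension looks like, a toy Python/PyTorch sketch (single head, batch 1, made-up shapes, not their kernel):

                import torch

                d = 128
                q = torch.randn(1, d)                  # decode: a single query token
                k, v = torch.randn(4096, d), torch.randn(4096, d)

                def chunk_attn(q, kc, vc):
                    s = (q @ kc.T) / d ** 0.5          # scores against one KV chunk
                    m = s.max()                        # chunk-local max for numerical stability
                    p = (s - m).exp()
                    return p @ vc, p.sum(), m          # unnormalized output, normalizer, max

                # Each chunk of the KV sequence can be processed independently, even at batch size 1
                outs, sums, maxes = zip(*(chunk_attn(q, kc, vc)
                                          for kc, vc in zip(k.chunk(8), v.chunk(8))))
                g = torch.stack(maxes).max()           # global max across chunks
                w = [(mi - g).exp() for mi in maxes]   # rescale each chunk's contribution
                out = sum(wi * oi for wi, oi in zip(w, outs)) / sum(wi * si for wi, si in zip(w, sums))

                ref = torch.softmax((q @ k.T) / d ** 0.5, dim=-1) @ v
                assert torch.allclose(out, ref, atol=1e-4)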

            • rfoo 11 minutes ago

              That's correct, because FA can't turn inference from memory-access-bound into compute-bound. But your claim that decoding is compute-bound is plainly wrong.

              FA, compared to a naive implementation, made training/prefill (i.e. when you have multiple tokens of the same sequence visible) compute-bound instead of memory-access-bound.

              So, currently, on MHA/GQA, with Flash Attention, training/prefill is compute-bound, whereas decoding is memory-access-bound.

              Before FA, both prefill and decode were bound by memory access. FA solved the problem for training/prefill. But because the KV cache is large, decoding is inherently bound by memory access.

              Our goal is always to make everything compute-bound.
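
              Napkin math on arithmetic intensity shows why (all numbers illustrative; an H100/H800 needs roughly ~300 FLOPs per byte of HBM traffic to be compute-bound):

                  d_head, n_heads, bytes_per_val = 128, 32, 2   # made-up MHA-ish config

                  def attn_flops_per_byte(q_len, kv_len):
                      # two matmuls (scores + weighted sum): ~4 * q_len * kv_len * d_head FLOPs per head
                      flops = 4 * q_len * kv_len * d_head * n_heads
                      # HBM traffic: read Q once and the K/V cache once (dominant term at decode)
                      traffic = (q_len + 2 * kv_len) * n_heads * d_head * bytes_per_val
                      return flops / traffic

                  print(attn_flops_per_byte(4096, 4096))  # prefill: ~2700 FLOPs/byte -> compute-bound
                  print(attn_flops_per_byte(1, 4096))     # decode:  ~1 FLOP/byte     -> memory-bound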

        • albertzeyer an hour ago

          I also just read that paper. But I wonder, even though MLA is strictly more powerful, do you really gain from that in experiments? The paper doesn't really do many experimental comparisons. GQA, on the other hand, should still be faster (no need for an extra linear transformation).
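
          By "extra linear transformation" I mean roughly this (made-up dims; real MLA also has a decoupled RoPE path and weight-absorption tricks, so this is only the shape of it):

              import torch

              d_model, n_heads, d_head = 4096, 32, 128
              n_kv_heads, d_latent = 8, 512            # GQA group count / MLA latent dim (made up)
              x = torch.randn(1, d_model)              # one new token at decode time

              # GQA: project straight to a few shared KV heads and cache that
              W_kv = torch.randn(d_model, 2 * n_kv_heads * d_head)
              kv_cached = x @ W_kv                     # 2048 values cached per token

              # MLA: compress to a latent, cache only the latent, up-project when attending
              W_down = torch.randn(d_model, d_latent)
              W_up = torch.randn(d_latent, 2 * n_heads * d_head)
              c_cached = x @ W_down                    # 512 values cached per token
              kv_full = c_cached @ W_up                # the extra matmul GQA doesn't need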

        • helloericsf 8 hours ago

            X: https://x.com/deepseek_ai/status/1893836827574030466 BF16 support, paged KV cache (block size 64), 3000 GB/s memory-bound and 580 TFLOPS compute-bound on H800.

          • WithinReason 3 hours ago

            That's 90% bandwidth efficiency and 60% compute efficiency

            https://www.nvidia.com/en-us/data-center/h100/
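
              Assuming the H800 SXM keeps the H100 SXM's peaks (~3.35 TB/s HBM3, ~990 dense BF16 TFLOPS without sparsity), the arithmetic is just:

                  peak_bw_gbps, peak_bf16_tflops = 3350, 990   # approximate H100/H800 SXM peaks

                  print(3000 / peak_bw_gbps)      # ~0.90 -> ~90% of peak memory bandwidth
                  print(580 / peak_bf16_tflops)   # ~0.59 -> ~60% of peak BF16 compute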

            • helloericsf 2 hours ago

                They don't have H100s. Wink, wink.

              • rfoo 2 hours ago

                  They have H800s, which have exactly the same memory bandwidth and max FLOPS.

          • eigenvalue 4 hours ago

            Nice, probably saved a bunch of FANG devs a lot of hours of work trying to knock this off.

            • mohsen1 6 hours ago

              I'm confused. Weren't there sanctions against Chinese companies over Hopper GPUs? Are they just admitting that they had access to H100s in violation of the US sanctions?!

              • thot_experiment 6 hours ago

                Just the H100; the H800 is a region-specific version of the card for China with shitty NVLink bandwidth, which makes it rougher to build big clusters. But DeepSeek was able to mitigate the impact of that by being clever (rumored to have made significant use of PTX assembly instead of just CUDA; we'll probably find out in the releases this week).

                • ahofmann 5 hours ago

                  It isn't illegal for Chinese companies to buy H100 cards. It is illegal for US companies to sell them to China. So the "admit" part wouldn't be on China's side.

                  • jofzar 26 minutes ago

                    It's also totally legal to sell H100 cards to a country that is very close to China.

                    Unrelated, but it has always impressed me how Singapore buys 15% of the world's H100s. Really is the AI development capital of the world.

                  • Tiberium 5 hours ago

                    H800 is the export variant that they had access to. They directly reference it in the repo:

                    >Achieving up to 3000 GB/s in memory-bound configuration and 580 TFLOPS in computation-bound configuration on H800 SXM5, using CUDA 12.6.

                    • WiSaGaN 5 hours ago

                      The H20 is a Hopper GPU, and those are allowed to be sold in China.

                      • feverzsj 4 hours ago

                        The secret ingredient is smuggling.

                        • tasuki 2 hours ago

                          I'd be very careful when using that word in this situation. If China wants X, and another country has X, who are you to say they shouldn't trade with each other?

                          • blackeyeblitzar 2 hours ago

                            Why does anyone need to be careful using that word? What a bizarre way to try to intimidate someone over speech.

                            Another country has X because they were expected (under the terms of their purchase) not to sell it to an adversary. So yes, they're supposed to honor that agreement and are not supposed to trade that particular thing X with each other. Not doing so invites sanctions and other consequences. Is it worth the risk just to do business with a dictatorship? Probably not.

                            • defrost 2 hours ago

                              If free citizens in the USofA have {X} and China has sanctioned Germany from having {X}, should the free citizens of the USofA honor the agreement they made with China not to sell to Germany when they acquired {X} from China?

                              How about if they got {X} from Mexico (who got it from Agnes ...)?

                              • Keyframe 41 minutes ago

                                Some purchases come with strict protocols coded into contracts. Try buying an F-35 and selling it to China, for example; see what happens. Others risk you not being able to purchase for yourself anymore, plus possible sanctions. The H100 and others are under export control; I'm just not sure if it's an explicit export control or an automatic one, like what famously made the PowerMac G4 a weapons export. I found a source saying there was an executive order covering hardware exceeding 1e26 floating point operations or 1e23 integer operations. In any case, if an item is under export control, that means paperwork, and if you're eligible to purchase, the paperwork includes signing what you can and cannot do with the item purchased.

                              • 55555 an hour ago

                                Smuggling is normally thought of as hiding something when crossing a border/checkpoint. In this case, it would simply be Nvidia violating US sanctions. The goods would never have entered or exited the USA, so it's a strange or incorrect use of the word smuggling.

                                • sangnoir an hour ago

                                  It's not intimidation, it's merely correcting an inappropriate usage of a word. What exactly do you think smuggling is?

                                • randomNumber7 2 hours ago

                                  Donald Trump?

                            • behnamoh 6 hours ago

                              Open AI is back!

                              • echelon 4 hours ago

                                The real "Open" AI.

                                • fsndz 3 hours ago

                                  DeepSeek is just the gift that keeps on giving. I now agree with people who say open source AI will win: https://open.substack.com/pub/transitions/p/deepseek-is-comi...

                                  • baq 2 hours ago

                                    Open sourcing is the runner-up's way to ensure the current best player doesn't steal the whole market. The elephant in the room is obviously the cluster size required; it hardly matters for normal people that the weights are free. We needed more efficiency breakthroughs.

                                    • PeterStuer 2 hours ago

                                      It matters a lot, even if you never intend to run it yourself or look at the code.

                                      It means that people can and will provide this service, and thousands will build on this and make offerings that you can use, either in a commodity base market or for a specific niche.

                                      It means regulatory capture and control will be much, much harder to execute.

                                      It means AI might continue to be a benefit to you as well, rather than just a way to control, propagandize, and exploit you.

                                      • fsndz an hour ago

                                        absolutely on point!

                                      • helsinkiandrew 2 hours ago

                                        > .. it hardly matters for normal people that the weights are free. We needed more efficiency breakthroughs.

                                        That at least allows other companies/research labs to develop competing cutting-edge LLM technology and come up with efficiency breakthroughs. The alternative is for the tech to be hidden inside OpenAI and the FAANGs, or released only as old versions.

                                        • echelon 27 minutes ago

                                          Today's H100-cluster models are tomorrow's edge-computing models.

                                          With the next wave of investment targeting local on-device robotics, I'm way more bullish about local AI than vertical SaaS AI.

                                  • rvz 5 hours ago

                                    This is the minimum bar that I expect very elite programmers to be striving for in the age of AI. DeepSeek should be studied as an example, and this is only the first of many projects from them.

                                    There is an extremely high chance (in fact, a 99.9% chance) that an AI did not build this, and the ones who are able to build or adapt projects like this, which go deep into hardware systems, will be the most sought after.

                                    Not the horrendous JS or even TS slop across GitHub that is extremely easy for an AI to generate correctly.

                                    You've got until 2030 to decide. And my advice is to study the codebases of pytorch (backends), DeepSeek, tinygrad and ggml.

                                    • jbm 3 hours ago

                                      It's an interesting opinion, but I read the exact same opinions about JS developers in 2008 too.

                                      I do agree that if you are "only" a developer, you will have to be in some sort of tightly defined niche, and how long those niches survive is anyone's guess.

                                      • KeplerBoy an hour ago

                                        What do you mean by "only" a developer? Someone who just knows how to code when given a spec but lacks domain knowledge (in this case, AI math and hardware optimization) and larger context?

                                      • menaerus 2 hours ago

                                        I agree that DeepSeek continues to prove themselves a great example of engineering, but the number of job positions requiring this type of knowledge is, IME, typically very low, so I am not sure this is the right advice to follow. Though I wish it were different.

                                        • PeterStuer 2 hours ago

                                          Honest question:

                                          Do you feel GenAI coding is substantially different from the lineage of 4GL to 'low code' approaches?

                                          The reason I'm asking is that, despite all the promises, all of them suffered from what Spolsky coined the 'leaky abstraction' problem.

                                          Once something goes wrong, the user is left without recourse in a sea of additional complexity created by the very tooling that was meant to spare them from dealing with it in the first place.

                                          My own opinion is that GenAI is different because of (a) its recursive, reflexive potential (you can use the tool itself to help you past the failure) and (b) the way it removes the need for algorithmic/systemic thinking in the input (which may come as a surprise to the audience here, but my experience has taught me that such thinking is alien to, dare I say, the majority of people).

                                          Now don't get me wrong. We have not reached the point where (a)+(b) make it so that you don't need application-layer devs, but we are definitely seeing some progress.

                                          As for going deeper into the stack to "escape" AI, I would venture that is probably a non-starter: the deeper you go, the more constrained the domain is, so your escape strategy relies on AI reasoning making little progress in exactly the kind of smaller, well-defined spaces where AI reasoning has always been most successful.

                                          • beernet 4 hours ago

                                            LLM-generated comments are so 2024

                                            • BoorishBears 3 hours ago

                                              Nothing about that comment implies it's LLM-generated, and it's bizarre how it's being received, since it's a pretty reasonable take.

                                            • WithinReason 3 hours ago

                                              AI is already writing optimized GPU code:

                                              https://sakana.ai/ai-cuda-engineer/

                                              • mirekrusin 3 hours ago

                                                Comments around that page suggest it's more of a facepalm than anything else.

                                          • m3kw9 4 hours ago

                                            MHGA: making Hopper great again

                                            • nokun7 5 hours ago

                                              In my view, FlashMLA’s exclusive targeting of Hopper GPUs restricts its cross-platform use, and the lack of comprehensive documentation, vague compatibility with wider frameworks, and absence of benchmark comparisons or trade-off insights reduce its ease of use and adaptability. While it holds potential for specialists with tailored requirements, its specialized nature and limited community backing indicate it’s not yet a broadly practical tool, requiring more detailed guides and expanded hardware support to unlock its full capabilities.

                                              • deyiao 7 hours ago

                                                I heard their inferencing framework is way lower than typical deployment methods. Can this be verified from that open-source project? How does it stack up against vLLM or llama.cpp?

                                                • reissbaker 6 hours ago

                                                  By "lower" you mean cheaper/better?

                                                  I suspect it's much higher throughput than vLLM, which in turn is much higher throughput than llama.cpp. The MLA kernel they just open-sourced seems to indicate that, although we'll see how it does in third party benchmarks on non-hobbled GPUs vs FlashAttention. They only released the BF16 version — whereas most people, including DeepSeek themselves, serve in FP8 — so it might not be immediately useful to most companies quite yet, although I imagine there'll be FP8 ports soon enough.

                                                  • nialv7 3 hours ago

                                                    I think they meant lower level.

                                                    • bee_rider 3 hours ago

                                                      It seems hard to guess. Could be lower level, lower performance, or lower compute cost.

                                                  • helloericsf 6 hours ago

                                                    What do you mean by "lower"? To my understanding, they will open-source 5 infra-related repos this week. Let's revisit your comparison question on Friday.

                                                    • feverzsj 4 hours ago

                                                      Maybe. Apple ditched them in China because their infra can't handle users at that scale.

                                                      • helloericsf 4 hours ago

                                                        Don't think the decision is based on infra or any technical reasons. It's more on the service-support side: how does a 200-person company support 44M iPhone users in China?

                                                        • chvid 32 minutes ago

                                                          Is that true? I thought Apple was going to use their own infrastructure.

                                                          • tw1984 31 minutes ago

                                                            DeepSeek doesn't have any experience supporting a 50-million-user base. That was the reason cited by Apple a few weeks ago.

                                                          • find0x90 6 hours ago

                                                            I don't see any use of PTX, might be in one of the other repos they plan to release.

                                                            • DesiLurker an hour ago

                                                              Right, I think PTX use is a bigger deal than it's getting coverage for. This opens an opening for other vendors to get a foot in the door with PTX-to-LLVM-IR translation for existing CUDA kernels.