• simgt 4 hours ago

    > I replicated David Ng's RYS method [...] found something I didn't expect.

    > Transformers appear to have discrete "reasoning circuits" — contiguous blocks of 3-4 layers that act as indivisible cognitive units. Duplicate the right block and the model runs its reasoning pipeline twice. No weights change. No training. The model just thinks longer.

    How did you not expect that if you read his post? That's literally what he discovered, two years ago.

    For anyone interested, there's more meat in the post and comments from last week: https://news.ycombinator.com/item?id=47322887
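    For the mechanics: "duplication" here is just a change to the execution schedule, not to any weights. A toy numpy sketch (random stand-in layers, not the author's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Toy stand-in for a transformer stack: each "layer" is a residual update
# x -> x + f_i(x). Real layers are far richer; this only shows the mechanics.
Ws = [rng.normal(scale=0.05, size=(d, d)) for _ in range(8)]

def run(x, order):
    # `order` is the execution schedule; "duplication" = repeated indices.
    # No weights are changed or added, only the flow through them.
    for i in order:
        x = x + np.tanh(x @ Ws[i])
    return x

x = rng.normal(size=d)
base        = run(x, [0, 1, 2, 3, 4, 5, 6, 7])
double_pass = run(x, [0, 1, 2, 3, 3, 4, 4, 5, 6, 7])     # duplicate block (3, 4)
interleaved = run(x, [0, 1, 2, 2, 3, 3, 4, 4, 5, 6, 7])  # interleaved doubling
```

    The interleaved pattern from the post (13,13,14,14,15,15,16) is just such an index list over the real model's layers.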

    • regularfry 3 hours ago

      That's explicitly not the unexpected part. Read the rest of the post.

      • yorwba 3 hours ago

        After reading both the original post and this submission, what do you think is new here?

        • regularfry an hour ago

          > The weird part: different duplication patterns create different cognitive "modes" from the same weights. Double-pass boosts math. Triple-pass boosts emotional reasoning. Interleaved doubling (13,13,14,14,15,15,16) creates a pure math specialist. Same model, same VRAM, different routing.

          As far as I can see that's not implied by the original post.

          But that's beside the point: quoting the bit where the poster says "here's what I'm building on top of" and using that to imply they haven't done anything new is a bit pointless, no?

          • simgt 27 minutes ago

            You're right that my quote was misleading; I overlooked "the weird part" in the post because it didn't seem new to me either.

            Here's the section in the original post that covers it: https://dnhkng.github.io/posts/rys/#the-brain-scanner All heatmaps are split by task and show an optimal point for each. The resulting routing he chose is a trade-off for both tasks; there isn't much else to do unless you intend to train a router anyway.

            > So the ‘math organ’ has boundaries on both sides. Too few layers and you get nothing — you’ve cut into the circuit and it can’t complete its operation. Too many layers and you also get nothing — you’ve included tissue from a neighbouring circuit that doesn’t belong. Pre-training carved these structures out of the layer stack, and they only work whole. It also doesn’t translate to other tasks, as the heatmap for EQ scores doesn’t have this patch.

          • jstanley an hour ago

            It's all new to me.

      • 4bpp 11 hours ago

        Assuming the benchmarks are sound (rather than capturing a fluke), the provided explanation still does not pass the smell test. As far as I can tell, there is nothing about the training process of these models that would encourage them to make the output of any layer apart from (n-1) meaningful as the input of layer n, unless perhaps these layers were initialised as identity and the training process did not get to change them much. (Plausible for middle layers?)

        Considering this, I think (again, assuming the benchmarks themselves are sound) the most plausible explanation for the observations is (1) the layers being duplicated are close to the identity function on most inputs; (2) something happened to the model in training (RLHF?) that forcefully degraded its reasoning performance; (3) the mechanism causing the degradation involves the duplicated layers, so their duplication has the effect of breaking the reasoning-degrading mechanism (e.g. by clobbering a "refusal" "circuit" that emerged in post-training).

        More concisely, I'm positing that this is an approach that can only ever break things, and rather than boosting reasoning, it is selectively breaking things deleterious to reasoning.

        • ACCount37 3 hours ago

          Empirical findings tell a very different tale: all LLM layers use vaguely compatible internal representations. And middle layers in particular can be almost interchangeable - a lot of what they do seems to be "iterative refinement of the same representations". Proven by various probes and ablations, but the most obvious one is probably the good old logit lens.
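          (For concreteness: the logit lens just decodes every layer's residual stream through the unembedding, not only the final layer's. A toy sketch with random stand-in weights and tied embeddings; real implementations roughly apply the final norm before the LM head:)

```python
import numpy as np

rng = np.random.default_rng(6)
vocab, d = 50, 16
E = rng.normal(size=(vocab, d))   # tied embedding/unembedding (toy)
Ws = [rng.normal(scale=0.05, size=(d, d)) for _ in range(6)]

def logit_lens(token_id):
    # Decode the residual stream through the unembedding at EVERY layer,
    # not just the last; in a real model this is roughly lm_head(norm(h)).
    x = E[token_id]
    decoded = []
    for W in Ws:
        x = x + np.tanh(x @ W)                   # residual layer
        decoded.append(int(np.argmax(x @ E.T)))  # project into token space
    return decoded

print(logit_lens(7))  # which token each intermediate state decodes to
```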

          This is likely to be shaped by tied embeddings and skips on one end, and maybe training pressures on the other.

          The very top of FF stack and the very bottom of FF stack both reflect the same token embeddings - and this propagates through the model, setting up a shared identity space. Skip connections propagate that through the layers. No explicit shared identity imposed, but there is an implicit one set by the architecture. Fairly well established.

          (Now: highly speculative! Attention over past tokens creates an implicit "robustness/convergence" pressure? The model can't be "certain" if it'll have access to the right representations at a given layer, because representations depend not just on the past layers, but also on the highly uncertain contents of previous tokens as passed through attention. Which in turn depends on more of the same, increasing variance further. So the training causes: "each layer can't be certain of what it will have access to, so it develops to refine anything it currently has access to in a convergent fashion, because that's what's useful under pressure of attention-induced uncertainty".)

          LLMs are notoriously nonfragile, and robust to perturbations. Far more so if you anneal with SFT/distillation after your model surgery, although this wasn't done here. Plenty of weird franken-LLM experiments prove that empirically.

          So I'm not too surprised to find that someone has managed to improve benchmark performance on a few narrow tasks by duplicating a few middle layers. "Duplicating a few layers that were doing convergent iterative refinement benefits a few tasks that suffered from insufficient depth of convergent iterative refinement" is a fairly reasonable hypothesis, in my eyes.

          The chances of duplication "breaking something somewhere" are high, and I would expect the capability profile of an unannealed franken-LLM like this to have a few gaps in it if evaluated extensively against the original. But "franken-LLM layer duplication can actually improve some things" is far too plausible with what we know to be dismissed pre-emptively.

          • 4bpp 2 hours ago

            That's interesting, could you point me to some source on these findings?

            It seems to me that the difference between "iterative improvement" as you put it and "close to the identity" (as in the output is close to the input for most of the volume of the input space) as I put it is fairly subtle, anyway. One experiment I would like to see is what happens to the reasoning performance if rather than duplicating the selected layers, they are deleted/skipped entirely. If the layers improve reasoning by iterative improvement, this should make the performance worse; but if they contain a mechanism that degrades reasoning and is not robust against unannealed self-composition, it should make the performance similarly better.
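            The control is cheap to express: deletion is just omitting indices from the execution schedule, duplication is repeating them. A toy sketch of the harness logic (random stand-in residual layers, hypothetical target block):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
Ws = [rng.normal(scale=0.05, size=(d, d)) for _ in range(8)]  # random stand-ins

def run(x, order):
    for i in order:
        x = x + np.tanh(x @ Ws[i])  # residual layer i
    return x

x = rng.normal(size=d)
block = {3, 4}  # hypothetical block whose duplication "helped"

base       = run(x, range(8))
skipped    = run(x, [i for i in range(8) if i not in block])  # delete the block
duplicated = run(x, [0, 1, 2, 3, 3, 4, 4, 5, 6, 7])           # duplicate it

# The proposed test: score all three variants on the benchmark. If skipping
# *also* improves scores, the block likely hosts a reasoning-degrading
# mechanism; if skipping hurts while duplicating helps, iterative refinement
# looks more plausible. Here we can only show the variants genuinely differ.
print(float(np.linalg.norm(skipped - base)), float(np.linalg.norm(duplicated - base)))
```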

          • jstanley an hour ago

            > As far as I can tell, there is nothing about the training process of these models that would encourage them to make the output of any layer apart from (n-1) meaningful as the input of layer n

            Right, I had the same thought.

            Even if the output was in the same "format", does the LLM even have any way to know which order the outputs will go in? The ordering of the nodes is part of our representation of the network; it's not fundamental to it.

            It would be like shuffling the bytes in a PNG file and expecting the program still to understand it as a PNG file.

            The more I think about this, the more I don't get this at all.

            • getnormality 14 minutes ago

              > something happened to the model in training (RLHF?) that forcefully degraded its reasoning performance

              I've been seeing more people speculating like this and I don't understand why. What evidence do we have for RLHF degrading performance on a key metric like reasoning? Why would this be tolerated by model developers?

              Can someone point to an example of an AI researcher saying "oops, RLHF forcefully degrades reasoning capabilities, oh well, nothing we can do"?

              It strikes me as conspiracist reasoning, like "there's a car that runs on water but they won't sell it because it would destroy oil profits".

              • zozbot234 2 hours ago

                > far as I can tell, there is nothing about the training process of these models that would encourage them to make the output of any layer apart from (n-1) meaningful as the input of layer n

                Wouldn't "pass-through" identity connections have exactly that effect? These are quite common in transformer models.

                • 4bpp 2 hours ago

                  Yeah, that's what I meant by "initialised as identity and the training process did not get to change them much".

                  • SCLeo 35 minutes ago

                    There are explicit residual connections in a transformer block. Look up "residual connections" in Google images and you will see.

                • kirill5pol 10 hours ago

                  Basically all of them are using residual connections so it’s not that surprising honestly

                • Karuma 12 hours ago

                  Wow, every single word in the original post and on that README.md is pure LLM. How sad.

                  In any case, this has been done at least since the very first public releases of Llama by Meta... It also works for image models. There are even a few ComfyUI nodes that let you pick layers to duplicate on the fly, so you can test as many as you want really quickly.

                  • xlayn 12 hours ago

                    Fair point on the writing style, I used Claude extensively on this project, including drafting. The experiments and ideas are mine though.

                    On the prior art: you're right that layer duplication has been explored before. What I think is new here is the systematic sweep toolkit + validation on standard benchmarks (lm-eval BBH, GSM8K, MBPP) showing exactly which 3 layers matter for which model. The Devstral logical deduction result (0.22→0.76) was a surprise to me.

                    If there are ComfyUI nodes that do this for image models, I'd love links; the "cognitive modes" finding (different duplication patterns leading to different capability profiles from the same weights) might be even more interesting for diffusion models.

                • kgeist 11 hours ago

                  Heh, for the last couple of days I've been doing this exact kind of "neuroanatomy" on Qwen2.5/Qwen3 too. Fascinating stuff. To make it easier to fiddle with the network, I created a small inference engine stripped of all the framework magic, just raw matmuls and all (the main inference loop is just 50 lines of code!). For example, it's trivial to remove a layer: I just skip it in code with a simple "if". I've found that removing some layers doesn't appear to change anything (based on the vibes, at least). If you remove some later layers, the model forgets how to insert the EOS token and keeps chatting ad infinitum (still coherently). Removing the earliest layers makes the model generate random garbage. It turns out abliteration is not hard to do: 10 examples were enough to find the refusal vector and cancel most refusals. Interestingly, I've found that refusal happens in the middle layers too (layer 12 out of 26, I think).

                  From what I understand, transformers are resistant to network corruption (without complete collapse) thanks to residual connections.

                  I tried to repeat some layers too but got garbage results. I guess I need to automate finding the reasoning layers too, instead of just guessing.
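                  (For anyone curious, the abliteration step above reduces to a mean-difference direction projected out of the residual stream. A toy sketch with random stand-in activations, not real model states:)

```python
import numpy as np

rng = np.random.default_rng(2)
d = 64

# Toy abliteration: the "refusal vector" is the mean difference between
# hidden states on refusing vs. complying prompts. These are random
# stand-ins with an artificial shift along one axis, not real activations.
h_refuse = rng.normal(size=(10, d)) + 3.0 * np.eye(d)[0]
h_comply = rng.normal(size=(10, d))

r = h_refuse.mean(axis=0) - h_comply.mean(axis=0)
r /= np.linalg.norm(r)

def ablate(h):
    # project the refusal component out of the residual stream
    return h - np.outer(h @ r, r)

h = rng.normal(size=(4, d))
h2 = ablate(h)
print(float(np.abs(h2 @ r).max()))  # ~0: no refusal component remains
```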

                  • edg5000 an hour ago

                    Very interesting stuff

                    • gmerc 5 hours ago

                      Hook it up in autoresearch?

                    • hackpert an hour ago

                      We found evidence of specific layer-localized "reasoning" circuits in a few models last year too! A very much work-in-progress paper is here: https://openreview.net/forum?id=mTjGBrkdtz

                      • the_harpia_io 43 minutes ago

                        the "reasoning circuits" framing is doing a lot of work here. tidy story, fits the result, but I'm not sure it explains anything - you duplicated layers and got better scores, everything else is interpretation. that said .22→.76 is hard to dismiss as noise. been poking at similar questions from a code analysis angle, which layers actually "understand" semantic structure vs just pattern-matching tokens, and the results are messier - no clean 3-4 layer cognitive units. might be that logical deduction tasks are just uniquely susceptible to this

                        • samlinnfer 41 minutes ago

                          @dang ban this AI bot spam

                          >is doing a lot of work here. tidy story, fits the result

                          That's 100% AI talk

                        • Lerc 3 hours ago

                          That weird part is kind of what I was expecting.

                          This goes to the thing that I posted on the thread a couple of days ago. https://news.ycombinator.com/item?id=47327132

                          What you need is a mechanism to pick the right looping pattern. Then it really does seem to be mixture of experts on a different level.

                          Break the model into an input path, a thinking phase, and an output path, and make the thinking phase a single looping block of many experts. Then the router gets to decide 13,13,14,14,15,15,16.

                          Training the router left as an exercise to the reader.
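                          A toy sketch of that routing idea, with random stand-in weights and an untrained router choosing among hypothetical schedules:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 16
Ws = [rng.normal(scale=0.05, size=(d, d)) for _ in range(8)]

# Hypothetical schedules over a shared "thinking" block (layers 2..5),
# echoing the post's double / interleaved modes.
SCHEDULES = {
    "plain":       [2, 3, 4, 5],
    "double":      [2, 3, 4, 5, 2, 3, 4, 5],
    "interleaved": [2, 2, 3, 3, 4, 4, 5, 5],
}
W_router = rng.normal(scale=0.1, size=(d, len(SCHEDULES)))  # untrained!

def forward(x):
    for i in (0, 1):                        # input path
        x = x + np.tanh(x @ Ws[i])
    name = list(SCHEDULES)[int(np.argmax(x @ W_router))]  # router picks a pattern
    for i in SCHEDULES[name]:               # thinking phase, looped per schedule
        x = x + np.tanh(x @ Ws[i])
    for i in (6, 7):                        # output path
        x = x + np.tanh(x @ Ws[i])
    return x, name

y, chosen = forward(rng.normal(size=d))
```

                          Training W_router is, as above, left as an exercise.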

                          • simgt 40 minutes ago

                            If you're adding a model to do the "routing", you're basically adding learned backward connections, and you end up with an RNN

                          • taliesinb 12 hours ago

                            There is an obvious implication: since the initial models were trained without loops, it is exceedingly unlikely that a single stack of consecutive N layers represents only a single, repeatable circuit that can be safely looped. It is much more likely that the loopable circuits are superposed across multiple layers and have different effective depths.

                            That you can profitably loop some say 3-layer stack is likely a happy accident, where the performance loss from looping 3/4 of mystery circuit X that partially overlaps that stack is more than outweighed by the performance gain from looping 3/3 of mystery circuit Y that exactly aligns with that stack.

                            So, if you are willing to train from scratch, just build the looping in during training and let each circuit find its place, in disentangled stacks of various depths. Middle of transformer is:

                            (X₁)ᴹ ⊕ (Y₁∘Y₂)ᴺ ⊕ (Z₁∘Z₂∘Z₃)ᴾ ⊕ …

                            Notation: Xᵢ is a layer (of very small width) at position i in a circuit of depth D, ⊕ is parallel composition (the widths sum up to the width of the rest of the transformer), ∘ is serial composition (stacking), and ᴹ is looping. The values of ᴹ shouldn't matter as long as they are > 1; the point is to crank them up after training.

                            Ablating these individual circuits will tell you whether you needed them at all, but also roughly what they were for in the first place, which would be very interesting.
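                            The ⊕/∘/exponent notation maps directly onto code. A toy sketch (random weights; the widths, depths, and loop counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def make_circuit(width, depth):
    # one narrow serial circuit Z1∘...∘Z_depth of the given width
    Ws = [rng.normal(scale=0.1, size=(width, width)) for _ in range(depth)]
    def f(x, loops):
        for _ in range(loops):          # the exponent: loop the whole circuit
            for W in Ws:
                x = x + np.tanh(x @ W)  # residual layer
        return x
    return f

# middle-of-transformer block: (X1)^M ⊕ (Y1∘Y2)^N ⊕ (Z1∘Z2∘Z3)^P
widths = [8, 8, 16]   # widths sum to the residual stream width
depths = [1, 2, 3]
loops  = [4, 3, 2]    # crank these up after training; values are arbitrary
circuits = [make_circuit(w, dep) for w, dep in zip(widths, depths)]

x = rng.normal(size=sum(widths))
parts, off = [], 0
for f, w, m in zip(circuits, widths, loops):
    parts.append(f(x[off:off + w], m))  # ⊕: each circuit owns its own slice
    off += w
y = np.concatenate(parts)
```

                            Ablating a circuit then just means dropping its slice's updates.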

                            • taliesinb 12 hours ago

                              And i bet these would be useful in initial and final parts of transformer too. Because syntactic parsing and unparsing of brackets, programming language ASTs, etc is highly recursive; no doubt current models are painfully learning "unrolled" versions of the relevant recursive circuits, unrolled to some fixed depth that must compete for layers with other circuits, since your total budget is 60 or whatever. Incredibly duplicative and by definition unable to generalize to arbitrary depth!

                              • awwaiid 11 hours ago

                                Maybe another idea (no idea if this is a thing): you could pick your block-of-layers size (say, 6) and then, during training, swap the blocks around at random every now and then. Maybe that would force a common API between blocks and specialization of the blocks, and then post-training you could analyze what each block is doing (maybe by deleting it while running benchmarks).

                                • taliesinb 12 hours ago

                                  Amusingly, you need only have circuits of prime depth, though you should probably adjust their widths using something principled, perhaps Euler's totient function.

                                • Imanari 5 hours ago

                                  Fascinating! I wonder if new training techniques could emerge from this. If we say layer 1 = translator, layers 2-5 = reasoner, layer 6 = retranslator, could we train small 6-layer models but evaluate their performance in a 1>n*(2-5)>6 setup, to directly train towards optimal middle layers that can be looped? You'd only have to train 6 layers but get the duplication benefit of the middle layers for free.

                                  • zozbot234 2 hours ago

                                    Yes, training directly for a diverse mix of "looped" inference procedures makes a lot of sense as a way of allowing for increased inference-time compute. It would likely be complementary to the usual thinking approach, which essentially runs the "loop" LLM-wide - and, critically, yields interpretable output which lets us see what the LLM is thinking about.

                                  • SyzygyRhythm 13 hours ago

                                    If running twice is good, then is running N times even better? I wonder if you could even loop until some kind of convergence, say hitting a fixed point (input equals output). I wonder if there's even a sort of bifurcation property where it sometimes loops A->A->A, but other times A->B->A, or more, rather like the logistic map fractal.
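                                    Looping to convergence is essentially the deep equilibrium model setup. A toy sketch, with a damped update and deliberately small weights so the iteration actually contracts (real transformer layers give no such guarantee):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 16
W = rng.normal(scale=0.02, size=(d, d))  # small weights => contraction here

def step(z, x):
    # damped looped layer, DEQ-style: iterate z until z = step(z, x)
    return 0.5 * z + np.tanh(z @ W) + 0.5 * x

x = rng.normal(size=d)
z = np.zeros(d)
for n in range(1, 1001):
    z_next = step(z, x)
    if np.linalg.norm(z_next - z) < 1e-8:
        break  # (approximate) fixed point reached
    z = z_next
print("converged after", n, "iterations")
```

                                    Whether a real layer converges, cycles A->B->A, or diverges is exactly the open question; nothing here rules out bifurcation-like behaviour in an unconstrained layer.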

                                    • BoredomIsFun an hour ago

                                      Phi-4-14b with layers duplicated (phi-4-25b) has increased performance. Phi-4-49b has degraded performance vs the 14b.

                                      • xlayn 12 hours ago

                                        I explored that, again with Devstral, but executing the same circuit 4 times led to lower scores on the tests.

                                        I chatted with the model to see if it was still working, and it seemed coherent to me; I didn't notice anything off.

                                        I need to automate testing like that, where you pick the local maximum and then iterate from there, picking layers to see if it's actually better, and then leave the thing running overnight

                                        • smusamashah 6 hours ago

                                          Can Karpathy's autoresearch be used on this to explore what works and what does not? That is supposed to automate research like this from what I understand.

                                        • imtringued 5 hours ago

                                          That's how deep equilibrium models were discovered.

                                          What's more, it was found that you only need a single looped layer to be equivalent to a multi-layer network.

                                        • puppykito an hour ago

                                          I find it so cute that making the LLM think twice before outputting something makes it smarter.

                                          • woadwarrior01 13 hours ago

                                            Reminds me of Solar 10.7B, which was a very good model for its size ~2 years ago, and the "Depth Up-Scaling" technique behind it. Although that involved continued training after repeating the layers.

                                            https://arxiv.org/abs/2312.15166

                                            • christianqchung 12 hours ago

                                              Why test on Qwen 2.5 when Qwen 3 has been out for about a year, and Qwen 3.5 for a month? My problem with this is ironically entirely vibes based: that for some reason, LLMs love to talk about Qwen 2.5 instead of anything newer.

                                              • snats 11 hours ago

                                                You can also remove layers of models and keep the same score on benchmarks [1].

                                                I feel that sometimes a lot of the layers might just be redundant and not fully needed once a model is trained.

                                                [1] https://snats.xyz/pages/articles/pruningg.html

                                                • gukoff an hour ago

                                                  How do you run these models on AMD GPUs?

                                                  • dhsorens30 5 hours ago

                                                    the token costs are real. we switched to smaller models for 80% of tasks and barely noticed

                                                    • edg5000 an hour ago

                                                      Which types of tasks, in your experience, show negligible improvement when using larger models? And for what types of tasks do you feel even the best models deliver mediocre results?

                                                    • nowittyusername 13 hours ago

                                                        There's still a lot of low-hanging fruit left IMO. Good find, and rather funny to think about: someone can simply clone the various layers multiple times and, instead of spending millions of dollars retraining the model, increase performance significantly with "this one trick".

                                                      • xlayn 12 hours ago

                                                          The other interesting point is that right now I'm copy-pasting the layers, but a patch in llama.cpp could make the same model behave better by simply following a different "flow", without needing more VRAM...

                                                          If this is validated enough, it could eventually lead to shipping some kind of "mix" architecture, with layers executed to fit some "vibe"?

                                                          Devstral was the first one I tried, optimizing for math/EQ, but that didn't result in any better model; then I added the reasoning part, and that resulted in a "better" model.

                                                          I used the Devstral with the vibe.cli and it looked sharp to me; the thing didn't fail. I also used the chat to "vibe" check it and it looked OK to me.

                                                          The other thing is that I picked a particular circuit and it was "good", but I don't know if it was a local maximum. I think I ran just about 10 sets of the "fast test harness" and picked the config that gave the highest score... Once I had that, I ran that model against the llm_eval limited to only 50 tests... again for the sake of speed; I didn't want to wait a week to discover the config was bad.

                                                        • skerit 3 hours ago

                                                          I've been running my own (admittedly naïve) experiments of new, wacky ideas for both LLMs (well, SLMs) and for Image-Super-Resolution models.

                                                            I'm just trying different kinds of attention mechanisms, different configurations of the network, adding loops, ... All kinds of wacky ideas. And the really weird thing is that 99% of the ideas I try work at all.

                                                        • kristianp 10 hours ago

                                                              The method used here, by David Ng, was discussed a few days ago at https://news.ycombinator.com/item?id=47322887

                                                          • rao-v 13 hours ago

                                                            I’d love to believe this is real, but I’m pretty sure you will lose performance on a “fair” mix of tasks, even after fine tuning. I know multiple teams have explored recurrent layers (great for limited VRAM) but I don’t think it’s ever been found to be optimal.

                                                            • edg5000 an hour ago

                                                              This is very cool

                                                              • BoredomIsFun an hour ago

                                                                please post it on /r/localllama

                                                                • zhangchen 13 hours ago

                                                                      This lines up with what pruning papers have been finding: the middle layers carry most of the reasoning weight, and you can often drop the outer ones without much loss. Cool to see the inverse also works, just stacking them for extra passes.

                                                                  • getnormality 10 hours ago

                                                                    Didn't we recently see another hack, where you could get better performance by repeating the prompt?

                                                                    I wonder if they work for similar reasons.

                                                                    • colejhudson 13 hours ago

                                                                      Would you be able to publish the individual benchmarks for Qwen2.5-Coder-32B? GSM8K specifically would be useful to look at.

                                                                      • xlayn 13 hours ago

                                                                            I published the results for Devstral in the results folder of the GitHub repo: https://github.com/alainnothere/llm-circuit-finder/tree/main...

                                                                            I'm using the following configuration: --tasks gsm8k_cot,ifeval,mbpp,bbh_cot_fewshot_logical_deduction_five_objects,mbpp. I also tried humaneval, but something in the harness is missing and it failed...

                                                                            Note that I'm running 50 tests for each task, mostly because of time limitations, as it takes about two hours to validate the run for the base model and the modified one.

                                                                            I'll also try to publish the results of the small test harness I use when testing the multiple-layer configurations. For reference, this is phi-4-Q6_K.gguf, still running. I'm now giving more importance to the Reason factor, which comes from running a small subset of all the problems in the task config above.

                                                                            Initially I tried the approach of maximizing math/EQ, but it resulted in models that were less capable overall, with the exception of math. Math, as in the original research, is basically how good the model was at giving you the answer to a really tough question, say the cube root of some really large number... but that didn't translate to the model being better at other tasks...

                                                                          Config  | Lyr | Math   | EQ    | Reas   | Math Δ  | EQ Δ  | Reas Δ  | Comb Δ
                                                                          --------|-----|--------|-------|--------|---------|-------|---------|-------
                                                                          BASE    |   0 | 0.7405 | 94.49 | 94.12% |     --- |   --- |     --- |    ---
                                                                          (6,9)   |   3 | 0.7806 | 95.70 | 94.12% | +0.0401 | +1.21 |  +0.00% |  +1.21
                                                                          (9,12)  |   3 | 0.7247 | 95.04 | 94.12% | -0.0158 | +0.55 |  +0.00% |  +0.55
                                                                          (12,15) |   3 | 0.7258 | 94.14 | 88.24% | -0.0147 | -0.35 |  -5.88% |  -6.23
                                                                          (15,18) |   3 | 0.7493 | 95.74 | 88.24% | +0.0088 | +1.25 |  -5.88% |  -4.63
                                                                          (18,21) |   3 | 0.7204 | 93.40 | 94.12% | -0.0201 | -1.09 |  +0.00% |  -1.09
                                                                          (21,24) |   3 | 0.7107 | 92.97 | 88.24% | -0.0298 | -1.52 |  -5.88% |  -7.41
                                                                          (24,27) |   3 | 0.6487 | 95.27 | 88.24% | -0.0918 | +0.78 |  -5.88% |  -5.10
                                                                          (27,30) |   3 | 0.7180 | 94.65 | 88.24% | -0.0225 | +0.16 |  -5.88% |  -5.73
                                                                          (30,33) |   3 | 0.7139 | 94.02 | 94.12% | -0.0266 | -0.47 |  +0.00% |  -0.47
                                                                          (33,36) |   3 | 0.7104 | 94.53 | 94.12% | -0.0301 | +0.04 |  +0.00% |  +0.04
                                                                          (36,39) |   3 | 0.7017 | 94.69 | 94.12% | -0.0388 | +0.20 |  +0.00% |  +0.20
                                                                          (6,10)  |   4 | 0.8125 | 96.37 | 88.24% | +0.0720 | +1.88 |  -5.88% |  -4.01
                                                                          (9,13)  |   4 | 0.7598 | 95.08 | 94.12% | +0.0193 | +0.59 |  +0.00% |  +0.59
                                                                          (12,16) |   4 | 0.7482 | 93.71 | 88.24% | +0.0076 | -0.78 |  -5.88% |  -6.66
                                                                          (15,19) |   4 | 0.7617 | 95.16 | 82.35% | +0.0212 | +0.66 | -11.76% | -11.10
                                                                          (18,22) |   4 | 0.6902 | 92.27 | 88.24% | -0.0504 | -2.23 |  -5.88% |  -8.11
                                                                          (21,25) |   4 | 0.7288 | 94.10 | 88.24% | -0.0117 | -0.39 |  -5.88% |  -6.27
                                                                          (24,28) |   4 | 0.6823 | 94.57 | 88.24% | -0.0583 | +0.08 |  -5.88% |  -5.80
                                                                          (27,31) |   4 | 0.7224 | 94.41 | 82.35% | -0.0181 | -0.08 | -11.76% | -11.84
                                                                          (30,34) |   4 | 0.7070 | 94.73 | 94.12% | -0.0335 | +0.23 |  +0.00% |  +0.23
                                                                          (33,37) |   4 | 0.7009 | 94.38 |100.00% | -0.0396 | -0.12 |  +5.88% |  +5.77
                                                                          (36,40) |   4 | 0.7057 | 94.84 | 88.24% | -0.0348 | +0.35 |  -5.88% |  -5.53
                                                                          (6,11)  |   5 | 0.8168 | 95.62 |100.00% | +0.0762 | +1.13 |  +5.88% |  +7.02
                                                                          (9,14)  |   5 | 0.7245 | 95.23 | 88.24% | -0.0160 | +0.74 |  -5.88% |  -5.14
                                                                          (12,17) |   5 | 0.7825 | 94.88 | 88.24% | +0.0420 | +0.39 |  -5.88% |  -5.49
                                                                          (15,20) |   5 | 0.7832 | 95.86 | 88.24% | +0.0427 | +1.37 |  -5.88% |  -4.52
                                                                          (18,23) |   5 | 0.7208 | 92.42 | 88.24% | -0.0197 | -2.07 |  -5.88% |  -7.95
                                                                          (21,26) |   5 | 0.7055 | 92.89 | 88.24% | -0.0350 | -1.60 |  -5.88% |  -7.48
                                                                          (24,29) |   5 | 0.5825 | 95.04 | 94.12% | -0.1580 | +0.55 |  +0.00% |  +0.55
                                                                          (27,32) |   5 | 0.7088 | 94.18 | 88.24% | -0.0317 | -0.31 |  -5.88% |  -6.19
                                                                          (30,35) |   5 | 0.6787 | 94.69 | 88.24% | -0.0618 | +0.20 |  -5.88% |  -5.69
                                                                          (33,38) |   5 | 0.6650 | 94.96 | 88.24% | -0.0755 | +0.47 |  -5.88% |  -5.41
                                                                          (6,12)  |   6 | 0.7692 | 95.39 | 94.12% | +0.0287 | +0.90 |  +0.00% |  +0.90
                                                                          (9,15)  |   6 | 0.7405 | 94.65 | 94.12% | -0.0000 | +0.16 |  +0.00% |  +0.16
                                                                          (12,18) |   6 | 0.7582 | 94.57 | 88.24% | +0.0177 | +0.08 |  -5.88% |  -5.80
                                                                          (15,21) |   6 | 0.7828 | 93.52 | 88.24% | +0.0423 | -0.98 |  -5.88% |  -6.86
                                                                          (18,24) |   6 | 0.7308 | 92.93 | 94.12% | -0.0097 | -1.56 |  +0.00% |  -1.56
                                                                          (21,27) |   6 | 0.6791 | 92.54 | 82.35% | -0.0615 | -1.95 | -11.76% | -13.72
                                                                      • XCSme 13 hours ago

                                                                        But if it got worse on other tests, it doesn't do much good, right?

                                                                        • ekianjo 13 hours ago

                                                                          Which tests are worse?

                                                                          • XCSme 13 hours ago

                                                                            Hard to tell. They only mention a few benchmarks that got better; there are no clear results for the others.

                                                                            • xlayn 12 hours ago

                                                                              You can check the results for Devstral here. Speed limits me, but these are the results for the first 50 tests of the command:

                                                                                # Run lm-evaluation-harness
                                                                                lm_eval --model local-chat-completions \
                                                                                    --model_args model=test,base_url=http://localhost:8089/v1/chat/completions,num_concurrent=1,max_retries=3,tokenized_requests=False \
                                                                                    --tasks gsm8k_cot,ifeval,mbpp,bbh_cot_fewshot_logical_deduction_five_objects \
                                                                                    --apply_chat_template --limit 50 \
                                                                                    --output_path ./eval_results
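
                                                                              For context, the duplication patterns discussed upthread can be sketched as pure index routing. This is a toy illustration only; the layer spans are hypothetical, and the indices just stand in for real decoder layers:

```python
def double_pass(layers, start, end):
    """Repeat the contiguous block layers[start:end] once, in place."""
    return layers[:end] + layers[start:end] + layers[end:]

def interleaved(layers, start, end):
    """Duplicate each layer in [start, end) back-to-back: 13,13,14,14,..."""
    out = list(layers[:start])
    for layer in layers[start:end]:
        out += [layer, layer]
    return out + layers[end:]

# Layer indices as stand-ins for real decoder layers:
idx = list(range(40))
print(interleaved(idx, 13, 16)[12:20])  # [12, 13, 13, 14, 14, 15, 15, 16]
```

                                                                              For a real model you would deep-copy the layer modules rather than reuse indices, since duplicated layers share weights unless copied.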
                                                                        • BoredomIsFun an hour ago

                                                                          Phi-4-25 is another example.

                                                                          • m3kw9 11 hours ago

                                                                            What, just randomly choose some "layer", duplicate it, and claim some arbitrary reasoning score went from 0.2 -> 0.7? I don't know, man. You need to use real benchmarks.

                                                                          • Singlaw 12 hours ago

                                                                            What does this do?