I wonder how much of the improvement is owed to which changes. I've also never heard of "Muon - Momentum Orthogonalized by Newton-Schulz" being used.
EDIT: there's a bit more info on his twitter - https://x.com/kellerjordan0
It looks like he created this optimizer. Works on 2D matrices only.
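For anyone curious what "orthogonalized by Newton-Schulz" means in practice: the idea is to replace a 2D momentum/gradient matrix with (approximately) its nearest semi-orthogonal matrix, using only matrix multiplies. Below is a minimal sketch of the classic cubic Newton-Schulz iteration in NumPy — note this is an illustration of the general technique, not the exact iteration from Modded-NanoGPT (which I understand uses a tuned higher-order polynomial), and the function name and step count are my own choices.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=50):
    """Approximate the nearest semi-orthogonal matrix to G (the polar
    factor U @ V.T of the SVD G = U @ S @ V.T) using the cubic
    Newton-Schulz iteration: X <- 1.5*X - 0.5*(X @ X.T) @ X.
    Each step pushes the singular values of X toward 1; it converges
    when the starting spectral norm is below sqrt(3)."""
    # Dividing by the Frobenius norm guarantees spectral norm <= 1.
    X = G / (np.linalg.norm(G) + 1e-7)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * (X @ X.T) @ X
    return X

rng = np.random.default_rng(0)
G = rng.standard_normal((8, 8))
Q = newton_schulz_orthogonalize(G)
# Q should now have (approximately) orthonormal columns: Q.T @ Q ~ I
```

This only makes sense for 2D weight matrices (the iteration is defined in terms of matrix products), which presumably is why the optimizer falls back to something else for 1D parameters like biases and norms.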
Just needs a Zero To Hero series episode offering line-by-line commentary, to follow along on why each choice was made over the alternatives.
Cool work. No license?
Do you have a baseline of the regular implementation with a 3x learning rate?
So it compresses info better.
That is literally intelligence.
It's not.
I suppose you aren't a fan of the https://en.wikipedia.org/wiki/Hutter_Prize .
> The goal of the Hutter Prize is to encourage research in artificial intelligence (AI). The organizers believe that text compression and AI are equivalent problems.
I believe that they believe that, and that it _could_ be true. That's far from declaratively stating that they are the same thing, as if there were some sort of evidence and consensus behind such a claim.
Seems like this is a modded NanoGPT, not the original.
Yes. It’s literally called “Modded-NanoGPT”.