• fg137 4 days ago

    How does this compare to Andrej Karpathy's microgpt (https://karpathy.github.io/2026/02/12/microgpt/) or minGPT (https://github.com/karpathy/minGPT)?

    • armanified 4 days ago

      I haven't compared it with anything yet. Thanks for the suggestion; I'll look into these.

      • BrokenCogs 4 days ago

        Who cares how it compares? It's not a product, it's a cool project

        • tantalor 4 days ago

          Even cool projects can learn from others. Maybe they missed something that could benefit the project, or made some interesting technical choice that gives a different result.

          For the readers/learners, it's useful to understand the differences so we know what details matter, and which are just stylistic choices.

          This isn't art; it's science & engineering.

          • BrokenCogs 4 days ago

            But it isn't the OP's responsibility to compare their project to all other projects. The GP could perform the comparison themselves and post their thoughts instead of asking an open-ended question.

            • philipallstar 4 days ago

              > it isn't the OP's responsibility to compare their project to all other projects

              No one, including the GP, said it was.

              • fg137 4 days ago

                It isn't, but such information will be immensely helpful to anyone who wants to learn from such projects. Some tutorials are objectively better than others, and learners can benefit from such information.

                • tantalor 4 days ago

                  100% agree, I didn't mean to imply that OP is responsible for that, or that the (lack of) comparison detracts in any way from the work.

              • stronglikedan 4 days ago

                > Who cares how it compares

                Well, the person who asked the question, for one. I'm sure they're not the only one. Best not to assume why people are asking though, so you can save time by not writing irrelevant comments.

                • layer8 4 days ago

                  Microgpt isn’t a product either. Are you saying that differences between cool projects aren’t worth thinking and conversing about?

              • thomasfl 4 days ago

                Is there some documentation for this? The code is probably the simplest (Not So) Large Language Model implementation possible, but it is not straightforward to understand for developers not familiar with multi-head attention, ReLU FFN, LayerNorm and learned positional embeddings.
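
                For readers new to those terms, one such block looks roughly like this in PyTorch (a generic sketch of the standard pattern, not this repo's actual code):

                ```
                import torch
                import torch.nn as nn

                class TinyBlock(nn.Module):
                    """One pre-norm transformer block built from the pieces named above."""
                    def __init__(self, d_model=128, n_heads=4):
                        super().__init__()
                        self.ln1 = nn.LayerNorm(d_model)
                        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
                        self.ln2 = nn.LayerNorm(d_model)
                        self.ffn = nn.Sequential(  # the "ReLU FFN"
                            nn.Linear(d_model, 4 * d_model),
                            nn.ReLU(),
                            nn.Linear(4 * d_model, d_model),
                        )

                    def forward(self, x):  # x: (batch, seq_len, d_model)
                        # Causal mask: each position attends only to itself and earlier positions.
                        T = x.size(1)
                        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
                        h = self.ln1(x)
                        attn_out, _ = self.attn(h, h, h, attn_mask=causal, need_weights=False)
                        x = x + attn_out
                        x = x + self.ffn(self.ln2(x))
                        return x

                # "Learned positional embeddings" are just a second table indexed by position.
                pos_emb = nn.Embedding(256, 128)  # (max context length, d_model)
                ```

                Stack a few of those on top of token embeddings plus learned position embeddings, add a linear head that predicts the next token, and that is essentially the whole model.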

                This project shares similarities with Minix. Minix is still used at universities as an educational tool for teaching operating system design, and it is the operating system that taught Linus Torvalds how to design (monolithic) operating systems. Similarly, having students add capabilities to GuppyLM would be a good way to learn LLM design.

                • achenatx 4 days ago

                  give the code to an LLM and have a discussion about it.

                  • dominotw 4 days ago

                    Does this work? Is there no more need for writing high-level docs?

                    • arcanemachiner 4 days ago

                      > Does this work?

                      Absolutely. If you loaded this into an agentic coding harness with a decent model, I can practically guarantee it would be able to help you figure out what's going on.

                      > Is there no more need for writing high-level docs?

                      Absolutely not. That would be like exploring a cave without a flashlight, knowing that you could just feel your way around in the dark instead.

                      Code is not always self-documenting, and can often tell you how it was written, but not why.

                      • stronglikedan 4 days ago

                        > If you loaded this into an agentic coding harness with a decent model, I can practically guarantee it would be able to help you figure out what's going on.

                        My non-coder but technically savvy boss has been doing this lately to great success. It's nice because I spend less time on it since the model has taken my place for the most part.

                        • libria 4 days ago

                          > since the model has taken my place for the most part

                          Hah, you realize the same thing is going on in your boss's head, right? The pie chart of Things-I-Need-stronglikedan-For just shrank a tiny bit...

                          • dominotw 4 days ago

                            My last employer was using AI to rank developers on the most impactful code their PRs were shipping.

                      • sigmoid10 4 days ago

                        There are so many blogs and tutorials about this stuff in particular, I wouldn't worry about it being outside the training data distribution for modern LLMs. If you have a scarce topic in some obscure language I'd be more careful when learning from LLMs.

                        • bigmadshoe 4 days ago

                          LLMs can tell you what the code does but not why the developer chose to do it that way.

                          Also, large codebases are harder to understand. But projects like these are simple to discuss with an LLM.

                          • stronglikedan 4 days ago

                            > LLMs can tell you what the code does but not why the developer chose to do it that way.

                            Do LLMs not take comments into consideration? (Serious question - I'm just getting into this stuff)

                            • bigmadshoe 3 days ago

                              They do. Think of it like a very intelligent but somewhat unreliable engineer you can hire to look at your code. They have no context about the codebase beyond what’s written in the source code, or any docs you give them.

                              What I meant was the docs might provide explanations about the problems the codebase solves, design decisions, the abstractions chosen, etc that wouldn’t live in a particular source file. Any discussion someone has with an LLM about the codebase will lack this context in the explanations given if docs don’t exist.

                              • dr_hooo 4 days ago

                                They do (it's just text), if they are there...

                        • BorisMelnik 3 days ago

                          I haven't heard Minix mentioned in so long!

                        • totetsu 5 days ago

                          https://bbycroft.net/llm has a 3D visualization of tiny example LLM layers that does a very good job of showing what is going on (https://news.ycombinator.com/item?id=38505211)

                          • armanified 4 days ago

                            Pretty neat! I'll definitely take a deeper look into this.

                            • maverickxone 4 days ago

                              This has little to do with the thread, but I have to say your project is indeed pretty cool! Consider adding some more UI?

                              • devsteru 3 days ago

                                Thanks for sharing

                                • skramzy 4 days ago

                                  Neat!

                                • ordinarily 5 days ago

                                  It's genuinely a great introduction to LLMs. I built my own a while ago based on Milton's Paradise Lost: https://www.wvrk.org/works/milton

                                  • mudkipdev 5 days ago

                                    This is probably a consequence of the training data being fully lowercase:

                                      You> hello
                                      Guppy> hi. did you bring micro pellets.

                                      You> HELLO
                                      Guppy> i don't know what it means but it's mine.

                                    • functional_dev 5 days ago

                                        Great find! It appears uppercase tokens are completely unknown to the tokenizer.

                                        But the character still comes through in the response :)
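
                                        Easy to check directly with the HF tokenizers library (a quick sketch; data/tokenizer.json is the path that shows up in the traceback further down this thread):

                                        ```
                                        from tokenizers import Tokenizer

                                        tok = Tokenizer.from_file("data/tokenizer.json")
                                        print(tok.encode("hello").tokens)  # familiar lowercase pieces
                                        print(tok.encode("HELLO").tokens)  # likely unknown/fallback tokens if training was all lowercase
                                        ```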

                                    • algoth1 4 days ago

                                      This really makes me wonder whether it would be feasible to make an LLM trained exclusively on toki pona (https://en.wikipedia.org/wiki/Toki_Pona)

                                      • MarkusQ 4 days ago

                                        There isn't enough training data though, is there? The "secret sauce" of LLMs is the vast amount of training data available + the compute to process it all.

                                        • algoth1 4 days ago

                                          I think you could probably feed a copy of a toki pona grammar book to a big model, and have it produce ‘infinite’ training data

                                          • MarkusQ 4 days ago

                                            This is essentially distillation of the bigger model; you'd wind up surfacing a lot of artifacts from the host model, amplifying them in the same way repeated photocopying introduces errors.

                                            https://dailyai.com/2025/05/create-a-replica-of-this-image-d...

                                            • eden-u4 4 days ago

                                              There are not enough samples in that book to generate new "infinite" data.

                                          • mudkipdev 4 days ago

                                            People have made toki pona translation models before, not exclusively trained though

                                          • neurworlds 4 days ago

                                            Cool project. I'm working on something where multiple LLM agents share a world and interact with each other autonomously. One thing that surprised me is how much the "world" matters — same model, same prompt, but put it in a system with resource constraints, other agents, and persistent memory, the behavior changes dramatically. Made me realize we spend too much time optimizing the model and not enough thinking about the environment it operates in.

                                            • SilentM68 5 days ago

                                              Would have been funny if it were called "DORY", given the fish's memory-recall issues vs. LLMs' similar recall issues :)

                                              • armanified 4 days ago

                                                OMG! Why didn't I think of this first :P

                                              • zwaps 5 days ago

                                                I like the idea; it's just that the examples are reproduced from the training data set.

                                                How does it handle unknown queries?

                                                • armanified 4 days ago

                                                  It mostly doesn't; at 9M parameters it has very limited capacity. The whole idea of this project is to demonstrate how language models work.

                                                • brcmthrowaway 5 days ago

                                                  Why are there so many dead comments from new accounts?

                                                  • 59nadir 5 days ago

                                                    Because despite what HN users seem to think, HN is an LLM-infested hellscape to the same degree as Reddit, if not more.

                                                    • wiseowise 4 days ago

                                                      You’re absolutely right! HN isn’t just LLM-infested hellscape, it’s a completely new paradigm of machine assisted chocolate-infused information generation.

                                                      • toyg 4 days ago

                                                        Just let me know which type of information goo you'd like me to generate, and I'll tailor the perfect one for you.

                                                      • siva7 4 days ago

                                                        But what should we do? The parent company isn't transparent about communicating the seriousness of this problem.

                                                      • loveparade 5 days ago

                                                          It really seems like it's mostly AI comments on this one. Maybe this topic is attractive to all the bots.

                                                        • armanified 4 days ago

                                                          This title might have triggered something in those bots; most of them have sneaky AI SaaS links in their bio.

                                                          Honestly, I never expected this post to become so popular. It was just the outcome of a weekend practice session.

                                                        • AlecSchueler 5 days ago

                                                          They all seem to be slop comments.

                                                        • bblb 5 days ago

                                                          Would it be possible to train an LLM only through chat messages, without any other data or input?

                                                          If Guppy doesn't know regular expressions yet, could I teach them to it just by conversation? It's a fish, so it probably wouldn't understand much of my blabbing, but it would be interesting to give it a try.

                                                          Or is there some hard architectural limit in current LLMs, such that the training needs to be done offline and with a fairly large training set?

                                                          • roetlich 4 days ago

                                                            What does "done offline" mean? Otherwise you are limited by context window.

                                                          • AndrewKemendo 5 days ago

                                                            I love these kinds of educational implementations.

                                                            I want to really praise the (unintentional?) nod to Nagel: by limiting capabilities to the representation of a fish, the user is immediately able to understand the constraints. It can only talk like a fish because it's very simple.

                                                            Especially compared to public models, that's a really simple correspondence to grok intuitively (small LLM → only as verbose as a fish, larger LLM → more verbose), so kudos to the author for making that simple and fun.

                                                            • dvt 5 days ago

                                                              > the user is immediately able to understand the constraints

                                                              Nagel's point was quite literally the opposite[1] of this, though. We can't understand what it must "be like to be a bat" because their mental model is so fundamentally different than ours. So using all the human language tokens in the world can't get us to truly understand what it's like to be a bat, or a guppy, or whatever. In fact, Nagel's point is arguably even stronger: there's no possible mental mapping between the experience of a bat and the experience of a human.

                                                              [1] https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf

                                                              • Terr_ 5 days ago

                                                                IMO we're a step before that: We don't even have a real fish involved, we have a character that is fictionally a fish.

                                                                In LLM-discussions, obviously-fictional characters can be useful for this, like if someone builds a "Chat with Count Dracula" app. To truly believe that a typical "AI" is some entity that "wants to be helpful" is just as mistaken as believing the same architecture creates an entity that "feels the dark thirst for the blood of the living."

                                                                Or, in this case, that it really enjoys food-pellets.

                                                                • andoando 5 days ago

                                                                  I'd strongly disagree with that. We're all living in the same shared universe, and underlying every intelligence must be precisely an understanding of events happening in this space-time.

                                                                  • vixen99 4 days ago

                                                                    What does 'precisely' mean? Everyone has the same understanding of events - a precise one?

                                                                    • andoando 4 days ago

                                                                      No I am saying the basis of intelligence must be shared, not that we have the same exact mental model.

                                                                      I might for example say a human entered a building, a bat might on the other hand think "some big block with two sticks moved through a hole", but both are experiencing a shared physical observation, and there is some mapping between the two.

                                                                      It's like when people say that if there are aliens, they would find the same mathematical constants that we do.

                                                                  • AndrewKemendo 5 days ago

                                                                    Different argument

                                                                    I’m not going to argue other than to say that you need to view the point from a third party perspective evaluating “fish” vs “more verbose thing,” such that the composition is the determinant of the complexity of interaction (which has a unique qualia per nagel)

                                                                    Hence why it’s a “unintentional nod” not an instantiation

                                                                • cbdevidal 5 days ago

                                                                  > you're my favorite big shape. my mouth are happy when you're here.

                                                                  Laughed loudly :-D

                                                                  • vunderba 5 days ago

                                                                                This is a direct output from the synthetic training data though - I wonder if there is a bit of overfitting going on or it's just a natural limitation of a much smaller model.

                                                                  • CaseFlatline 4 days ago

                                                                                  I tried to find how the synthetic data was created (looking through the repo) but didn't find it. Maybe I'm missing it. I'd love to see the prompts and the process behind that aspect of the training data generation!

                                                                  • rpdaiml 4 days ago

                                                                    This is a nice idea. A tiny implementation can be way more useful for learning than yet another wrapper around a big model, especially if it keeps the training loop and inference path small enough to read end to end.

                                                                    • jzer0cool 4 days ago

                                                                                      Does this work by just training once with next-token prediction? I want to understand better how it creates fluent sentences, if anyone can provide insights.
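
                                                                                      For reference, my rough mental model of a single next-token training step (a self-contained sketch with a toy stand-in model, happy to be corrected):

                                                                                      ```
                                                                                      import torch
                                                                                      import torch.nn.functional as F

                                                                                      vocab_size, d_model = 100, 32
                                                                                      # Toy stand-in for the transformer: embedding table + linear head.
                                                                                      model = torch.nn.Sequential(
                                                                                          torch.nn.Embedding(vocab_size, d_model),
                                                                                          torch.nn.Linear(d_model, vocab_size),
                                                                                      )

                                                                                      tokens = torch.randint(0, vocab_size, (4, 17))   # (batch, T+1) token ids
                                                                                      inputs, targets = tokens[:, :-1], tokens[:, 1:]  # targets = inputs shifted one step
                                                                                      logits = model(inputs)                           # (batch, T, vocab_size)
                                                                                      loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
                                                                                      loss.backward()  # repeated over the corpus, this is the entire training signal
                                                                                      ```

                                                                                      If that's right, fluent sentences just fall out of sampling one token at a time from the learned distribution.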

                                                                      • BiraIgnacio 4 days ago

                                                                        Nice work and thanks for sharing it!

                                                                                        Now, I ask, have LLMs been demystified for you? :D

                                                                                        I am still impressed by how much (for the most part) trivial statistics and a lot of compute can do.

                                                                        • kaipereira 5 days ago

                                                                                          This is so cool! I'd love to see a write-up on how you made it and what you referenced, because designing neural networks always feels like a maze ;)

                                                                          • ankitsanghi 5 days ago

                                                                            Love it! I think it's important to understand how the tools we use (and will only increasingly use) work under the hood.

                                                                            • NyxVox 5 days ago

                                                                              Hm, I can actually try the training on my GPU. One of the things I want to try next. Maybe a bit more complex than a fish :)

                                                                              • Leomuck 4 days ago

                                                                                                Wow, that is such a cool idea! And honestly very much needed. LLMs seem to be this black box nobody understands, so I love every effort to make the whole thing less mysterious. I will definitely have a look at dabbling with this, though it may not be a goldfish LLM :)

                                                                                • Duplicake 4 days ago

                                                                                  I love this! Seems like it can't understand uppercase letters though

                                                                                  • armanified 4 days ago

                                                                                    Uppercase letters were intentionally ignored.

                                                                                  • ergocoder 4 days ago

                                                                                                    It's just so amazing that 5 years ago it would have been extremely difficult to build a conversational bot like this.

                                                                                    But right now people make it a hobby, and that thing can run on a laptop.

                                                                                    This is just so wild.

                                                                                    • bharat1010 4 days ago

                                                                                                      This is such a smart way to demystify LLMs. I really like that GuppyLM makes the whole pipeline feel approachable. Great work!

                                                                                      • gnarlouse 5 days ago

                                                                                        I... wow, you made an LLM that can actually tell jokes?

                                                                                        • murkt 5 days ago

                                                                                          With 9M params it just repeats the joke from a training dataset.

                                                                                        • kubrador 5 days ago

                                                                                                          How does it handle longer context, or does it start hallucinating after like 2 sentences? Curious what the ceiling is before the 9M params run out of steam.

                                                                                          • drincanngao 4 days ago

                                                                                            I was going to suggest implementing RoPE to fix the context limit, but realized that would make it anatomically incorrect.

                                                                                            • armanified 4 days ago

                                                                                              I intentionally removed all optimizations to keep it vanilla.

                                                                                            • fawabc 4 days ago

                                                                                              how did you generate the synthetic data?

                                                                                            • EmilioOldenziel 4 days ago

                                                                                                              Building it yourself is always the best test of whether you really understand how it works.

                                                                                              • rclkrtrzckr 5 days ago

                                                                                                I could fork it and create TrumpLM. Not a big leap, I suppose.

                                                                                                • search_facility 5 days ago

                                                                                                                  probably even 8M params are too much :)

                                                                                                  • danparsonson 4 days ago

                                                                                                    As long as you use the best parameters then it doesn't matter

                                                                                                    • wiseowise 4 days ago

                                                                                                      Grab her by the pointer.

                                                                                                  • amelius 4 days ago

                                                                                                    > A 9M model can't conditionally follow instructions

                                                                                                    How many parameters would you need for that?

                                                                                                    • armanified 4 days ago

                                                                                                      My initial idea was to train a navigation decision model with 25M parameters for a Raspberry Pi, which, in testing, was getting about 60% of tool calls correct. IMO, it seems like around 20M parameters would be a good size for following some narrow & basic language instructions.

                                                                                                      • amelius 4 days ago

                                                                                                        Ok. This makes me wonder about a broader question. Is there a scientific approach showing a pyramid of cognitive functions, and how many parameters are (minimally) required for each layer in this pyramid?

                                                                                                    • ananandreas 4 days ago

                                                                                                                      Great and simple way to bridge the gap between LLMs and users coming into the field!

                                                                                                      • ben8bit 5 days ago

                                                                                                        This is really great! I've been wanting to do something similar for a while.

                                                                                                        • nobodyandproud 4 days ago

                                                                                                          Thanks. Tinkering is how I learn and this is what I’ve been looking for.

                                                                                                          • jbethune 4 days ago

                                                                                                            Forked. Very cool. I appreciate the simplicity and documentation.

                                                                                                            • nullbyte808 5 days ago

                                                                                                              Adorable! Maybe a personality that speaks in emojis?

                                                                                                              • armanified 4 days ago

                                                                                                                OMG! You just gave me the next idea..

                                                                                                              • monksy 5 days ago

                                                                                                                Is this a reference to the Bobiverse?

                                                                                                                • cpldcpu 5 days ago

                                                                                                                  Love it! Great idea for the dataset.

                                                                                                                  • winter_blue 4 days ago

                                                                                                                    This is amazing work. Thank you.

                                                                                                                    • gdzie-jest-sol 5 days ago

                                                                                                                      * How do I create the dataset? I downloaded it, but it is compressed in a binary format.

                                                                                                                      * How do I do the training? In the cloud or on my own dev machine?

                                                                                                                      * How do I create a GGUF?

                                                                                                                      • gdzie-jest-sol 5 days ago

                                                                                                                        ```
                                                                                                                        uv run python -m guppylm chat

                                                                                                                        Traceback (most recent call last):
                                                                                                                          File "<frozen runpy>", line 198, in _run_module_as_main
                                                                                                                          File "<frozen runpy>", line 88, in _run_code
                                                                                                                          File "/home/user/gupik/guppylm/guppylm/__main__.py", line 48, in <module>
                                                                                                                            main()
                                                                                                                          File "/home/user/gupik/guppylm/guppylm/__main__.py", line 29, in main
                                                                                                                            engine = GuppyInference("checkpoints/best_model.pt", "data/tokenizer.json")
                                                                                                                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                                                                                                          File "/home/user/gupik/guppylm/guppylm/inference.py", line 17, in __init__
                                                                                                                            self.tokenizer = Tokenizer.from_file(tokenizer_path)
                                                                                                                                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                                                                                                        Exception: No such file or directory (os error 2)
                                                                                                                        ```

                                                                                                                        • gdzie-jest-sol 4 days ago

                                                                                                                          Maybe add the ability to resume training (load the best or final checkpoint) and train again:

                                                                                                                          ```
                                                                                                                          # after configuring the device
                                                                                                                          checkpoint_path = "checkpoints/best_model.pt"
                                                                                                                          ckpt = torch.load(checkpoint_path, map_location=device, weights_only=False)

                                                                                                                          model = GuppyLM(mc).to(device)
                                                                                                                          if "model_state_dict" in ckpt:
                                                                                                                              model.load_state_dict(ckpt["model_state_dict"])
                                                                                                                          else:
                                                                                                                              model.load_state_dict(ckpt)

                                                                                                                          start_step = ckpt.get("step", 0)
                                                                                                                          print(f"Resuming from step {start_step}")
                                                                                                                          ```

                                                                                                                        • freetonik 5 days ago

                                                                                                                          You sound like Guppy. Nice touch.

                                                                                                                        • rahen 4 days ago

                                                                                                                          I don't mean to be 'that guy', but after a quick review, this really feels like low-effort AI slop to me.

                                                                                                                          There is nothing wrong with using AI tools to write code, but nothing here seems to have taken more than a generic 'write me a small LLM in PyTorch' prompt, or any specific human understanding.

                                                                                                                          The bar for what constitutes an engineering feat on HN seems to have shifted significantly.

                                                                                                                          • zhainya 3 days ago

                                                                                                                            I don't really understand the point of this project or how it demystifies anything. Click the browser demo and I get a generic AI chat screen. Is the readme the part that "demystifies" something? I feel like I am living in a bizarro world. Is this all AI? Are all the comments here from bots?

                                                                                                                          • tombelieber 3 days ago

                                                                                                                            Looking forward to trying it, great job

                                                                                                                            • Vektorceraptor 4 days ago

                                                                                                                              Haha, funny name :)

                                                                                                                              • Elengal 4 days ago

                                                                                                                                Cool

                                                                                                                                • oyebenny 5 days ago

                                                                                                                                  Neat!

                                                                                                                                  • hughw 4 days ago

                                                                                                                                    Tiny LLM is an oxymoron, just sayin.

                                                                                                                                    • uxcolumbo 4 days ago

                                                                                                                                      How about: LLMs are on a spectrum and this one is on the tiny side?

                                                                                                                                      • armanified 4 days ago

                                                                                                                                        True, but most people would ignore "LM" if it weren't "LLM".

                                                                                                                                      • hahooh 4 days ago

                                                                                                                                        haha funny, but really cool project. why fish tho lol.

                                                                                                                                        • aditya7303011 5 days ago

                                                                                                                                          Did something similar last year https://github.com/aditya699/EduMOE

                                                                                                                                          • dinkumthinkum 5 days ago

                                                                                                                                            I think this is a nice project because it is end to end and serves its goal well. Good job! It's a good example of how someone might do something similar for a specific purpose. There are other visualizers that explain different aspects of LLMs, but this is a good applied example.

                                                                                                                                            • Propelloni 5 days ago

                                                                                                                                              Great work! I still think that [1] does a better job of helping us understand how GPT and LLMs work, but yours is funnier.

                                                                                                                                              Then, some criticism. I probably don't get it, but I think the HN headline does your project a disservice. Your project does not demystify anything (see below), and it diverges from your project's own claim, too. Furthermore, I think you claim too much on your GitHub: "This project exists to show that training your own language model is not magic," and then it just posts a few command-line statements to execute. Yeah, running a mail server is not magic, just apt-get install exim4. So, the code. Looking at train_guppylm.ipynb and, oh, it's PyTorch again. I'm better off reading [2] if I'm looking into that (I know, it is a published book, but I maintain my point).

                                                                                                                                              So, in short, it does not help the initiated or the uninitiated. For the initiated it needs more detail to be useful; for the uninitiated, more context to be understood. Still a fun project, even if oversold.

                                                                                                                                              [1] https://spreadsheets-are-all-you-need.ai/ [2] https://github.com/rasbt/LLMs-from-scratch

                                                                                                                                              • jadengeller 4 days ago

                                                                                                                                                this comment seems to be astroturfing to sell a course

                                                                                                                                                • Propelloni 3 days ago

                                                                                                                                                  What do you mean, the LLM from Scratch book?