• saagarjha a day ago

    I don't want to dismiss this outright, but I'm skimming this paper and I'm pretty skeptical of something from a single guy that doesn't appear to be peer reviewed, spends most of its time talking about actual biology, comes up with a "ReLU6" (ReLU capped at a maximum of 6), and then pushes detailed review to a future paper.

    • amelius a day ago

      He wrote this paper, "Cooperation is All You Need", with a group of people:

      https://arxiv.org/pdf/2305.10449

      And this paper in an IEEE journal:

      https://arxiv.org/pdf/2211.01950

      • yorwba a day ago

        Figure 3B in "Cooperation is All You Need" shows the same score curves as the top left of Figure 6 in "Beyond Attention," so it must be basically the same implementation. Yet that earlier paper is only cited once, in the Acknowledgements section. As far as I can tell, the only mathematical change in this paper is capping the ReLU at 6. But it also adds a bunch of grandiose verbiage ("triadic modulation loops", "awake thought").
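        For reference, ReLU6 isn't new at all; it's a stock activation (used in MobileNet, among others) that clamps values to the range [0, 6], and it ships with PyTorch:

          import torch
          import torch.nn.functional as F

          x = torch.tensor([-2.0, 3.0, 9.0])
          print(F.relu6(x))                # tensor([0., 3., 6.])
          print(torch.clamp(x, 0.0, 6.0))  # equivalent: min(max(x, 0), 6)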

        The author is clearly a crackpot. Maybe he wasn't a crackpot when he still managed to publish in peer-reviewed journals, but cognitive decline over time is not exactly unheard of.

        • anothermathbozo a day ago

          Unwarranted and totally spiteful for you to make unqualified claims like “cognitive decline” from skimming two papers. This is shameful.

          • undefined a day ago
            [deleted]
            • frozenseven a day ago

              Cool insults. But perhaps you can explain why he's wrong?

            • habinero a day ago

              I swear, most of the AI "papers" that get posted here are someone screwing around with ChatGPT on ketamine and deciding they're advancing humanity.

              • ivape a day ago

                You’ve just discovered the future of a jobless economy. Please write a blog post and I will surely upvote you.

                Ketamine is all you need

                • geeunits a day ago

                  Sat here vibe coding a pure assembly kernel for arm64, APL layer with conceptual memory layout. On my bed, eating a bag of chips, jobless since Jan. Everything but the Ket is mine

                  • ivape a day ago

                    You serious?

                    • TeMPOraL a day ago

                      Who knows, but drop the word "vibe" and this is basically the startup culture 15 years ago, so ¯\_(ツ)_/¯.

                      Well, okay, for better historical accuracy, replace APL with API, and the kernel for arm64 thing with Ruby on Rails on a new Macbook, but the point still stands.

                      • geeunits a day ago

                        yasqueen ← {'yes'≡⎕C ⍵}
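                        ⍝ a dfn that case-folds (⎕C) its argument ⍵ and matches (≡) it against 'yes'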

              • edflsafoiewq 21 hours ago

                I don't understand the "Triadic Modulation Loop" block; does anyone else?

                Also

                > Competing interests: AA has a provisional patent application for the algorithm used in this paper.

                • mirekrusin a day ago

                  Results in this paper look way too good; I guess we'll have to wait for peer review and replications to see if it's true.

                  • RockyMcNuts a day ago

                    When you stack transformers, don't you get meta-attention and higher mental states?

                    • ldng a day ago

                      Can the anthropomorphic scam continue unchecked? Apparently yes.

                      • TeMPOraL a day ago

                        Probably as long as non-anthropomorphic idiocy can.

                        No opinion on this submission, but a more general point: I'm not the one to jump into anthropomorphizing computers, but the last year or two of LLM and adjacent research has been a constant stream of papers and experiments that totally surprise everyone who refuses to even entertain comparisons between LLMs and people, while being entirely expected and completely unsurprising to those who do.

                        • ImHereToVote a day ago

                          If modeling cognitive processes is a scam, then neuroscience must be the longest-running con in history.

                          • bwest87 2 days ago

                            I did a chat with Gemini about the paper, and the tl;dr is:

                            * They introduce a loop at the beginning between the Q, K, and V vectors (theoretically representing the "question", "clues", and "hypothesis" of thinking).

                            * This loop contains a nonlinearity (ReLU).

                            * The loop is used to "pre-select" relevant info.

                            * They then feed that into a lightweight attention mechanism.

                            They claim order-of-magnitude faster learning and robustness across domains. There's enough detail to probably do your own PyTorch implementation, though they haven't released code. The paper has been accepted at AMLDS 2025, so it is peer reviewed.

                            At first blush this sounds really exciting, and if the results hold up and are replicated, it could be huge.
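                            Since they haven't released code, here's a minimal, speculative PyTorch sketch of that description. The module name, the ReLU6-gated form of the modulation loop, and the loop count are guesses pieced together from this summary and the other comments, not the authors' implementation:

                              import torch
                              import torch.nn as nn
                              import torch.nn.functional as F

                              class TriadicPreselectAttention(nn.Module):
                                  # Speculative reading: Q, K, V modulate one another through
                                  # a ReLU6-gated loop ("pre-selection"), then go through
                                  # ordinary scaled dot-product attention.
                                  def __init__(self, d_model, n_loops=2):
                                      super().__init__()
                                      self.q_proj = nn.Linear(d_model, d_model)
                                      self.k_proj = nn.Linear(d_model, d_model)
                                      self.v_proj = nn.Linear(d_model, d_model)
                                      self.n_loops = n_loops
                                      self.scale = d_model ** -0.5

                                  def forward(self, x):  # x: (batch, seq, d_model)
                                      q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
                                      for _ in range(self.n_loops):
                                          # Each stream is gated by the other two, squashed to [0, 6].
                                          q = q * F.relu6(k * v)
                                          k = k * F.relu6(q * v)
                                          v = v * F.relu6(q * k)
                                      attn = torch.softmax((q @ k.transpose(-2, -1)) * self.scale, dim=-1)
                                      return attn @ v

                              x = torch.randn(2, 16, 64)
                              print(TriadicPreselectAttention(64)(x).shape)  # torch.Size([2, 16, 64])

                            Whether this is what the paper means by a "triadic modulation loop" is unclear without the released code.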

                            • quinnjh 2 days ago

                              This is, intuitively, a really exciting title. Looking forward to reading / seeing similar work.
