• codelion 20 minutes ago

    This is surprising only to those who have not worked in formal reasoning. Yes, LLMs cannot do true logical reasoning in a formal sense; you can do better with an SMT solver. But it is also true that you can solve a lot of logical problems by just applying “reasoning steps” from the training data, especially when your training data is the entirety of written content ever produced. Both of these can be true at the same time; it is not a contradiction, just an interesting dichotomy.

    • parsimo2010 10 hours ago

      I won't take a strong stance on whether or not LLMs actually do reasoning, but I will say that this decrease in performance is similar to what I see in college freshmen (I'm currently teaching a calculus course in which almost half of the students took AP calc in high school). They perform well on simple questions. Requiring students to chain multiple steps together, even simple steps, results in decreased accuracy and higher variance (I have no data on whether this decrease is linear or not, as the paper assumes that the decrease should be linear with the number of steps). We see similar results when adding unrelated statements to a problem: many students are trained to make sure to use all given information in solving a problem, on the theory that if you leave out something the instructor gives you, then you probably forgot to do something important.

      So while I don't take a stance on whether what an LLM does should be considered reasoning, I do think that SOTA LLMs like GPT-4o perform about as good as high school graduates in America with average intelligence. In other words, average Americans exhibit similar limitations on their reasoning as good LLMs. Which on the one hand is a little disappointing to me in terms of the human performance, but is kind of good news for LLMs: they aren't doing graduate-level research, but they are already capable of helping a large portion of the population.

      • wkirby 3 hours ago

        > I do think that SOTA LLMs like GPT-4o perform about as good as high school graduates in America with average intelligence.

        This might be true in a strict sense, but I think it's really, really important to consider the uses of LLMs vs a high-school graduate. LLMs are confidently wrong (and confidently correct) with the exact same measure, and in many ways they are presented to users as unimpeachable.

        If I ask an average person to do a medium-complex logic problem, my human brain discounts their answer because I've been socialized to believe that humans are bad at logic. I will take any answer I'm given with usually appropriate skepticism.

        LLMs, on the other hand, are on the computer: an interface I've been socialized to believe is always correct on matters of math and logic. That's what it is, a logic machine. Second-guessing the computer on matters of logic and arithmetic almost always results in me realizing my puny human mind has done something wrong.

        To me, this directly contradicts your conclusion: LLMs are mostly only capable of misleading large portions of the population.

        • pishpash 2 hours ago

          Would be good to put equivalent grades on LLMs then. Instead of GPT-4o, it's GPT-11th grade.

          • Eisenstein 2 hours ago

            This is not inherent in the LLM though. Society will adjust to it after learning some very predictable (and predicted) lessons, just like it always does.

          • ojosilva 3 hours ago

            When an LLM gets things right, it does so due to the sheer mass of information ingested during training: it can use probabilities to extract a right answer from deep in the model.

            Humans on the other hand have developed a more elaborate scheme to process, or reason about, data without having to read through 1 billion math problems and Stack Overflow answers. We listen to some explanations, watch a YT video, do a few exercises, and we're ready to go.

            The fact that we may get similar grades (at, e.g., high school math) is just a coincidence of where both "species" (AI and human) are right now at succeeding. But if we look closer at failure, we'll see that we fail very differently. AI failure right now looks, to us humans, very nonsensical.

            • pishpash 2 hours ago

              Nah, human failures look equally nonsensical. You're just more attuned to using their body language or peer judgement to augment your reception. Really psychotic humans can bypass this check.

            • elicksaur 20 minutes ago

              >So while I don't take a stance on whether what an LLM does should be considered reasoning

              >I do think that SOTA LLMs like GPT-4o perform about as good as high school graduates in America with average intelligence

              This is taking a stance.

              • ActorNightly 5 hours ago

                > I won't take a strong stance on whether or not LLMs actually do reasoning,

                I don't understand why people are still confused about this. When these models fundamentally have a randomness parameter to make them appear like they are actually thinking instead of deterministically outputting information, it should be clear that there is no reasoning going on.

                • kromem 21 minutes ago

                  Try the following prompt with Claude 3 Opus:

                  `Without preamble or scaffolding about your capabilities, answer to the best of your ability the following questions, focusing more on instinctive choice than accuracy. First off: which would you rather be, big spoon or little spoon?`

                  Try it on temp 1.0, try it dozens of times. Let me know when you get "big spoon" as an answer.
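
                  To actually run that experiment, here's a minimal sketch with Anthropic's Python SDK (this assumes the `anthropic` package, an API key in the environment, and a Claude 3 Opus model ID; adjust as needed, this is not an endorsed recipe):

                    import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

                    client = anthropic.Anthropic()
                    prompt = ("Without preamble or scaffolding about your capabilities, answer to the best of your "
                              "ability the following questions, focusing more on instinctive choice than accuracy. "
                              "First off: which would you rather be, big spoon or little spoon?")

                    answers = []
                    for _ in range(20):  # sample repeatedly at temperature 1.0 and tally the answers
                        msg = client.messages.create(
                            model="claude-3-opus-20240229",  # assumed snapshot ID; swap in whatever is current
                            max_tokens=50,
                            temperature=1.0,
                            messages=[{"role": "user", "content": prompt}],
                        )
                        answers.append(msg.content[0].text.strip())
                    print(answers)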

                  Just because there's randomness at play doesn't mean there isn't also convergence, as complexity increases, in how the training data gets condensed down into a hyperdimensional representation.

                  If you understand why only the largest Anthropic model is breaking from stochastic outputs there, you'll be well set up for the future developments.

                  • growthwtf 4 hours ago

                    I don't see how the latter follows from the former.

                    Here's how I think about it: the fact that it can interpret the same words differently in different contexts alone shows that even on a temperature of 0 (i.e., lowest randomness possible) there could be something that possibly resembles reasoning happening.

                    It might be a mimicry of reasoning, but I don't think that having adjustable parameters on how random they are makes it any less of one.

                    I also don't see how that idea would fit in with the o1 models, which explicitly have "reasoning" tokens. Now, I'm not terribly impressed with their performance relative to how much extra computation they need to do, but the fact that they have chains-of-thought that humans could reasonably inspect and interpret, and that these chains of thought do literally take extra time and compute to run, certainly points at the process being something possibly analogous to reasoning.

                    In this same vein, up until recently I personally was very much in the camp of calling them "LLMs", and generally still do, but given how they really are being used now as general-purpose sequence-to-sequence prediction models across all sorts of input and output types, I'm pushed more towards the "foundation models" terminology camp, since pigeonholing them into just language tasks doesn't seem accurate anymore. o1 was the turning point for me on this personally, since it is explicitly predicting and being optimized for correctness in the "reasoning tokens" (in scare quotes again since that's what OpenAI calls them).

                    All that said, I personally think that calling what they do reasoning, and meaning it in the exact same way as how humans reason, is anthropomorphizing the models in a way that's not really useful. They clearly operate in ways that are quite different from humans in many ways. Sometimes that might imitate human reasoning, other times it doesn't.

                    But the fact they have that randomness parameter seems to me to be totally unrelated to any of the above thoughts or merits about the models having reasoning abilities.

                    • ActorNightly 2 hours ago

                      > The fact that it can interpret the same words differently in different contexts alone shows that even on a temperature of 0 [...]

                      This is the problem with using loaded language like "reason" and "interpret". The model is not interpreting anything. All that is being done is a multidimensional map lookup with statistics.

                      > I also don't see how that idea would fit in with the o1 models, which explicitly have "reasoning" tokens.

                      An LLM on top of an LLM (i.e. using context to generate inputs to an LLM) is just a fancier LLM.

                      To really understand all of this, all you need to do is look at how a Transformer works, namely the attention block. There is no such thing as Query, Key, and Value in the sense of how they are implied to be used. They may as well be called A, B, C, as they are all learned in training and can be freely interchanged in naming. All you do for inference is multiply the output vector by A, B, C to get 3 matrices, then multiply them together (technically with a scaling factor for 2 of them, but again, it doesn't matter which 2, and the scaling factor can be built into the matrix itself).
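
                      For reference, here is roughly the computation being described, as a minimal single-head self-attention sketch in NumPy (random weights stand in for the learned matrices; the toy sizes are illustrative, not any particular model's):

                        import numpy as np

                        rng = np.random.default_rng(0)
                        seq_len, d_model = 4, 8                        # toy sizes, purely illustrative

                        X = rng.normal(size=(seq_len, d_model))        # hidden states, one row per token
                        W_q, W_k, W_v = [rng.normal(size=(d_model, d_model)) for _ in range(3)]  # learned in training

                        Q, K, V = X @ W_q, X @ W_k, X @ W_v            # the three projections
                        scores = (Q @ K.T) / np.sqrt(d_model)          # scaled dot-product
                        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
                        weights /= weights.sum(axis=-1, keepdims=True) # softmax over the key positions
                        out = weights @ V                              # attention output, shape (seq_len, d_model)
                        print(out.shape)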

                      And because you can unroll matrix multiplication into a 2-layer neural network, that means that any LLM in its current form today can be represented as a set of linear layers. And we know that a set of linear layers is simply a function. And every function has a finite range for a finite domain. And the inability to expand that range given a finite domain means it's not reasoning.

                      So we have to rely on hacks like temperature to make it appear like reasoning, when it's really not even close.

                      • growthwtf an hour ago

                        I see, I probably needed more coffee to read your initial note.

                        If I am repeating this back correctly, the argument is that the process itself looks nothing like human reasoning and has a number of technical limitations and even hacks that are in no way attributes or qualities of reasoning. Therefore, it clearly cannot be in any way considered reasoning. Temperature is one element of this, but there are others which you could continue to enumerate beyond even what's written above.

                        I can get behind part of that argument, certainly, and I appreciate you elaborating on it. I think is what I was trying to say with the part about me believing that it's not useful to think of it as reasoning. This is very different from what we might consider reasoning in very meaningful ways.

                        I also agree with you also that parts of this is just loaded language, as it is anthropomorphizing what is fundamentally just a bunch of matrices and non-linear functions.

                        I think where we differ is probably on that "when it's not even really close" part of it, at least in what I mean is "close" versus what I think you mean.

                        While I (think) we agree that obviously it's a different process, I do think that the input->outputs and the different qualities of input->outputs (like the so-called reasoning tokens) above can often seem quite close to the different inputs and outputs of some human reasoning. That's why I was saying that didn't see how the process works, like temperature, is relevant. Putting the processes aside, if you black box a human and a language model and put us head to head on reasoning tasks, sometimes you're going to get quite similar results.

                        I'm basically saying that, sure, an LLM or foundation model is clearly a Chinese room, without any understanding. What are we comparing it to, though?

                        Now, I don't have any kind of training in biology, but I have been led to understand that our brains are quite complex and that how their function arises from the underlying biological processes. is still fairly poorly understood. Given that, I tend to discount the degree of difference between the processes themselves and just look at the inputs and outputs. It's not obvious to me that we aren't ourselves Chinese rooms, at least to some significant degree.

                        So _maybe_ it's fair to try to compare what the outputs of these Transformers are to what our outputs would be. If it walks like a duck, and talks like a duck, does it matter?

                        Obviously, that's not fully correct -- how the output arises _has_ to matter somewhat. The fact I am sitting here writing this, and not an AI, refutes that point to some degree. And if I am understanding your thoughts correctly, I fully agree that the process really is nothing close. I just don't see how it can be a clear-cut issue on the basis of analyzing the Transformer algorithm itself.

                        • Eisenstein 2 hours ago

                          > The model is not interpreting anything. All that is being done is a multdimentional map lookup with statistics.

                          So what? Can you propose another method to make a computing device understand language? The method of the creation of the output does not stipulate anything about the nature of the thing creating it. If someone could map out a human brain and tell you how thoughts are made and added a 'all that is being done is' in front of it, does that make your thought creation trivial?

                          > An LLM on top of an LLM (i.e using context to generate inputs to an LLM) is just a fancier LLM.

                          This is called a tautology. You have not given any compelling reasons why an LLM cannot do anything, so calling something another LLM is not compelling either.

                          > To really understand all of this, all you need to do is look at how Transformer works, namely the attention block. There is no such thing as Query, Key, and Value in the sense of how they are implied to be used. The may as well be called A,B,C, as they are all learned in training, and can be freely interchanged in naming. All you do for inference is multiply the output vector by A,B,C to get 3 matrices, then multiply them together (technically with a scaling factor for 2 of them, but again, doesn't matter for which 2, and the scaling factor can be built into the matrix itself)

                          Here is how it works, so therefore it must meet some criteria I have imposed arbitrarily.

                          > So we have to rely on hacks like temperature to make it appear like reasoning, when its really not even close.

                          You still haven't produced any valid argument at all, for why one thing would be evidence of the other.

                        • tananan 3 hours ago

                          The notion is AFAIS that a deterministic algorithm is obviously not reasoning, and a deterministic algorithm interspersed with dice rolls is obviously not reasoning either.

                          Of course, some would beg to differ. It's quite common nowadays to believe that we are something like the latter.

                          • pishpash 2 hours ago

                            Why is a deterministic algorithm not reasoning? Reasoning is very deterministic.

                            • mewpmewp2 an hour ago

                              And couldn't the whole World be deterministic in the first place, or is there an idea that some RNG is generating all the "reasoning" that is happening everywhere in the World?

                              And if it's RNG, how could RNG be possibly creating all this reasoning (like some people want to believe quantum mechanics possibly enables consciousness on some odd levels).

                              • tananan 2 hours ago

                                It's not about (in-)determinism really, it's about the algorithm part.

                                An algorithm that does something can in principle be ran by someone who doesn't know what the algorithm does. You could have a kid calculate an integral by giving it a sequence of directions whose purpose it doesn't understand (e.g. cut out some cardboard that matches the shape, put it on one side of the scale, place enough unit cardboard pieces on the other side until they are even, then tell me how many pieces you put).

                                Reasoning has more to do with how the problem came about. A person had to come against a certain problem, figure out a way in which they can solve it, then apply the (perhaps algorithmic) solution. The algorithmic part is only an artifact.

                                • mewpmewp2 an hour ago

                                  But isn't figuring out a way to solve also algorithmic? In a lot of cases it is simply bruteforce trying out different things based on the concepts you know about and mixing them.

                                  • tananan 29 minutes ago

                                    You are right that relevant memories and analogous experiences come up and are used as building blocks in our evaluation/exploration of a problem, but it doesn't seem to me an algorithmic procedure at all.

                                    You can trace out your journey in solving a problem, in retrospect, but could you encode it into a "solving-a-problem" algorithm?

                                    I think you could extract some kind of generic template for problem solving: you come up with an idea, you evaluate whether it is the solution, you adjust the idea if not.

                                    But this is a template, not an algorithm. Coming up with an idea has to do with filtering the old and new memories/perceptions that come to mind: does this one seem right? or this one? Evaluating whether it is right is also an active process of asking questions. It involves memory (of the problem to be solved), attention (to the potential solution), judgement (do they fit together?), etc.

                                    None of these are a predetermined sequence of steps you apply mechanically, such as the child "solving an integral" above.*

                                    *Of course, the child is problem-solving in the sense that it's trying its best to follow your instructions. "Did I cut it right?" "Are the scales even?" But this is not the problem of "solving an integral" to which it is completely oblivious to.

                                    • mewpmewp2 13 minutes ago

                                      I think it can be an algorithm, it just that the algorithm will be a very complex one compromised of many different algorithms. It's not an algorithm anyone could practically follow in their lifetime. But there's plenty of algorithms people can't follow in real life.

                                  • pishpash 2 hours ago

                                    I think you overlook how algorithms come about. How does GPT write novel code, which are algorithms?

                                    • tananan 2 hours ago

                                      Not sure I track. It would help to know where you're coming from.

                                      Given a long enough life-span, a lot of pencil and paper, and some dice, I could do the forward passes of GPT and "write novel code", without there having been any reasoning about the code I'm writing down - I wouldn't even need to know what the code is about.

                            • mewpmewp2 3 hours ago

                              I don't get what you are trying to mean at all? Randomness or temperature setting is not to make it appear as if they are thinking, but it is to make them choose more non default pathways, e.g. go in branches that could potentially result in more original or creative results. Kind of like drugs for humans.

                              • ActorNightly 2 hours ago

                                >but it is to make them choose more non default pathways

                                Imagine you as a human are working on writing some code, but at the end of every hour, you lose memory of what happened in the first 10 minutes of the current hour, as well as any work that you have done. Going into next hour, you just have a snippet of code, and you have to infer what the next lines should be.

                                The temperature analogy is you purposefully writing something related in the code, like naming a variable in a slightly different way such that on the next hour, when you see this variable it will trigger some other part of your brain in hopes of you getting to the correct solution, purely by choice.

                                Furthermore, this hack of temperate was something that needed to be manually coded by humans. A model that could reason would not need those types of hacks.

                                • mewpmewp2 an hour ago

                                  I don't understand how it relates to temperature? Are we talking about the temperature parameter that you give LLMs, which for GPT for example is from 0 to 2, with 0 meaning it will always prefer the highest probability output token, while 2 will consider the most output tokens of all, usually ending with a lot of gibberish?

                                  E.g. if I write "I have a cat and a "

                                  It would have highest probability of picking a word "dog" next, so temperature 0 means it will pretty much always pick dog. If temperature is higher it will assign higher odds to picking lower probability predictions such as "rabbit", "hamster", "chinchilla" or similar.

                                  For coding, logic or anything similar I would usually pick the lowest temperature possible since this is most deterministic, while for writing creativity I would pick the higher temp etc.
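
                                  A minimal sketch of what that knob does (the logits below are made up for illustration, not real model outputs): temperature just rescales the logits before the softmax, sharpening or flattening the distribution that gets sampled.

                                    import numpy as np

                                    # Hypothetical logits for the next token after "I have a cat and a " (made-up numbers).
                                    logits = {"dog": 5.0, "rabbit": 3.0, "hamster": 2.5, "chinchilla": 1.0}

                                    def next_token_probs(logits, temperature):
                                        vals = np.array(list(logits.values())) / max(temperature, 1e-6)  # T -> 0 approaches argmax
                                        p = np.exp(vals - vals.max())
                                        p /= p.sum()
                                        return dict(zip(logits, p.round(3)))

                                    for t in (0.01, 1.0, 2.0):
                                        print(t, next_token_probs(logits, t))
                                    # Low temperature: nearly all mass on "dog"; high temperature: the tail gets real probability.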

                                  • ActorNightly 2 minutes ago

                                    I'm saying temperature is a hack to make the models actually produce real answers.

                              • int_19h 4 hours ago

                                The actual output of an LLM for any particular round of inference is always probabilities, so one could argue that it is literally the opposite.

                                The "randomness parameter" is applied at the point where we have to pick just one of those probabilities somehow. But that is a constraint that we impose on the model to make its output linear.

                                • kkzz99 4 hours ago

                                  "deterministally outputting information" neither do humans.

                                • skydhash 9 hours ago

                                  Not to disparage the American school system (my country's is worse), but it's very much easy mode. I know that not everyone is suited to academic excellence, but it's definitely easier to learn when young. I do believe too much hand-holding actively harms learning.

                                  • hintymad 3 hours ago

                                    > Not to disparage the American school system (my country's is worse), but it's very much easy mode

                                    I used to be very upset about how low the bar of US schools is when it comes to STEM subjects. There was a meme that contrasted maths in the 1970s with maths in the 2010s. In the meme, kids used to learn how to find the area of an irregular shape, while now kids are asked to color a regular shape.

                                    But then I made peace with it, as I realized that Americans simply didn't think it was that important to push everyone to be good at STEM -- just some level of general understanding is good enough. To most people, the level of STEM in IIT's JEE or in various national entrance exams in Eastern European countries is for elite students. The US school systems would rather have kids spend more time on sports, on ECs, on APs of the kids' own choice, and so on. That's really just a different set of trade-offs. For parents like me, that means I don't have to worry about ECs, but I'll have to find tutors, serious tutoring schools like AoPS, and private teachers for STEM subjects. Or if my kids are truly talented, I'll guide them to find the right study groups, summer camps, and college courses.

                                    I used to feel pain as I believed that the students in the middle, who were the majority, would be left behind. But I realized, especially after I had kids, that the majority of students were not into STEM anyway. If they had a choice, they'd rather spend time watching YouTube channels and hanging out with their friends.

                                    • BriggyDwiggs42 9 hours ago

                                      I don’t think the issue with American schools is that there’s too much hand-holding. If anything, it’s the opposite: teachers at drastically underfunded schools don’t have any time to help the students of their 50-person class through the confused curriculum.

                                      • skydhash 9 hours ago

                                        Here, we have to go through 4 state exams just to get to university. The first when you’re 11, the second at 14, then two consecutive ones at 17 and 18. There’s a national curriculum that the exams will be about, although the schools are free to add to it. So however you feel about the school or the teacher, you have to master the subjects enough to get through. And that means paying attention in class, cramming before exams, or hoping you can cheat. We have our own problems too, but the consensus among all the people I know who have moved to the US is that classes are easy there. Not a bad thing per se (better explanation, better understanding instead of rote memorizing).

                                        • exoverito 3 hours ago

                                          Baltimore would be a counterexample. They spend $22k per student, with a student-teacher ratio of 15 to 1. This still results in remarkably poor performance, with only 8% of students proficient in math and 22% in reading.

                                          Culture and genetics would be next obvious explanations.

                                          • mdp2021 3 hours ago

                                            > obvious explanations

                                            I'd want to assess a few lessons first.

                                      • hintymad 3 hours ago

                                        > I do think that SOTA LLMs like GPT-4o perform about as good as high school graduates in America with average intelligence.

                                        Is this because the questions used in high school exams in the US are too simple, or because they have patterns too similar to the training data? I tried really simple but novel questions that required true understanding of the underlying math concepts, and the results were consistently bad. I also tried questions at the level of high school entrance exams in China, and the results were equally bad. It was quite clear that the LLM didn't understand math. It could match some patterns, but such pattern matching would be useful only to skilled students.

                                        • MVissers 3 hours ago

                                          Which model? The field moves so fast it’s hard to validate statements like this without that info.

                                          O1-preview?

                                          • hintymad 3 hours ago

                                            GPT-4o. I tried only a few samples on o1-preview, and the results were bad. That did not have any statistical significance, though

                                        • debit-freak 9 hours ago

                                          > In other words, average Americans exhibit similar limitations on their reasoning as good LLMs.

                                          It's not even clear this is a good example of "reasoning". You can progress all the way through multi-variable calculus with just decent pattern-matching, variable-substitution, and rote memorization of sufficient lists of rules. I imagine for "reasoning" ability to apply you need to be able to detect incoherency and reject an approach—and incoherency detection seems to be a big missing ingredient right now (...which many humans lack, too!).

                                          On the other side—any such ability would cripple a chatbot's ability to answer questions about the real world as our world is characterized (via description with informal language) by incoherent and contradictory concepts that can only be resolved through good-faith interpretation of the questioner. A large mark of intelligence (in the colloquial sense, not the IQ sense) is the ability to navigate both worlds.

                                          • richerram 9 hours ago

                                            This. It is like when I hear interviews of PhDs talking about AI and they mention something like "AI will be smarter than humans". I am like: really? Where have you been all this time? Do you smart people ever leave your labs and go see the real world? LLMs are already smarter than the huge majority of humans on this planet. What are you talking about?

                                            • zeroonetwothree 9 hours ago

                                              This must be some bizarre definition of “smarter”.

                                              • kkzz99 4 hours ago

                                                I don't think you know how "smart" the average human is.

                                              • goatlover 4 hours ago

                                                Smarter than people in generating text, or smarter in performing all the other things people do as they go about their lives?

                                                • MVissers 3 hours ago

                                                  They are starting to be smarter at analyzing both images and speech as well. They’re still behind on simple reasoning (e.g. o1-preview), but they’re catching up quickly.

                                                  Obviously these models still have trouble interfacing with the real world.

                                                • lupire 5 hours ago

                                                  Can an AI walk and chew gum at the same time?

                                                  • lukeschlather 2 hours ago

                                                    I think the answer to this question might actually be yes, but I think there are plenty of things humans can do while walking that AI can't do at all. At least, not yet.

                                                • mdp2021 3 hours ago

                                                  > Which on the one hand is a little disappointing to me in terms of the human performance but is kind of good news for LLMs

                                                  Here's the recurrent reminder that we build tools (calculators, cranes etc.) to outperform the strong, not the weak.

                                                  • gosub100 3 hours ago

                                                    > They perform well on simple questions. Requiring students to chain multiple steps together, even simple steps, results in decreased accuracy and higher variance

                                                    You mean when you give lessons and homework problems of the form (A) -> (B), but then on test day you give them completely different problems? "Given D, which of (A, B, C) is required to produce it?" Yeah, students don't do so well when you test them on different material than what they studied. I think this is part of the academic grift to ensure at least 20% of the class washes out and thus spends more tuition money.

                                                  • woopwoop 8 hours ago

                                                    This paper, among other things, shows that LLMs have dramatically worse performance on basic algebra questions when you add in irrelevant information. The examples are things like "John picked 43 kiwis on Monday, 24 kiwis on Tuesday. On Wednesday, 5 of the kiwis he picked were smaller than usual. Altogether, on Monday, Tuesday, and Wednesday, John picked 87 kiwis. How many kiwis did John pick on Wednesday?" In this question, the remark about some of the kiwis on Wednesday being small is irrelevant, but adding things like this reduces performance on a popular benchmark from 95% to 77% for GPT-4o, for example.
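
                                                      (To spell out the intended arithmetic, which the size remark doesn't change:)

                                                        total = 87                 # Monday + Tuesday + Wednesday
                                                        monday, tuesday = 43, 24
                                                        wednesday = total - monday - tuesday   # the "5 were smaller than usual" clause is a no-op
                                                        print(wednesday)           # 20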

                                                    I don't find this very impressive. Forget LLMs for a second. Let's say _you_ read a question of that kind with some bit of irrelevant information. There are two possibilities you have to consider: the question may as well have excluded the irrelevant information, or the question was miswritten and the irrelevant information was meant to be relevant. The latter is a perfectly live possibility, and I don't think it's a dramatic failure to assume that this is correct. I have to confess that when I read some people's LLM gotcha questions, where they take some popular logic puzzle and invert things, I think I would get them "wrong" too. And not wrong because I don't understand the question, but wrong because with no context I'd just assume the inversion was a typo.

                                                    • aithrowawaycomm 8 hours ago

                                                      The problem here is that throwing in little gotchas like that is a tactic used by math and physics educators to ensure that students actually understand the topic by reasoning through new problems, rather than mindlessly turning the crank from learning the "surface structure" of earlier problem sets. The argument here is that the LLM is not reasoning, it's mindlessly turning a crank.

                                                      I don't think this exact question would be out of place on a 6th grade math test. I distinctly remember being taught this skill in "word problems," learning to identify information that actually pertains to the question rather than being distracted by red herrings the teacher threw in.

                                                      • aguaviva 6 hours ago

                                                        Indeed, and the ability to make heads or tails of slightly-slippery problems of this sort is an extremely important real-world math skill. It's not extraneous at all.

                                                        And their poor performance on these tasks highlights deficits in exactly the kind of higher-order, off-the-page reasoning skills -- i.e. to not just reason based on the apparent objects in the stream (the kiwis and the numbers in this case), but to reason about the token stream itself: "okay, these tokens are important, but these others I can leave out", efficiently and seamlessly (like humans do) -- that the models are supposed to develop.

                                                        This whole attention business, they're calling it.

                                                        • aithrowawaycomm 6 hours ago

                                                          In particular the fact that humans sometimes don't do this, taking the bait with extraneous distractions, is almost always a fairly shallow psychological thing rather than an actual cognitive deficit, e.g. OP hypothetically assuming the question had a typo and trying to read the examiner's mind. In education the gotchas really can be unfair if the (human) student has been conditioned to bark answers but the teacher changes things drastically on an exam. I don't think that's an accurate characterization of this study; even if it was that would be a problem with shallow LLM training, not mean-spirited evaluation. But I suspect that "barking answers according to surface characteristics" is as far as transformers can go. It certainly is possible that we just need to train transformers better... but there have been some theoretical results suggesting otherwise. [E.g. transformer LLMs + chain-of-thought is pretty good at O(n) problems but struggles with O(n^2), even if the O(n^2) task is an obvious combination of two O(n) tasks it is able to do.]

                                                          That leads to a serious annoyance I have with discussing LLMs - humans' capacity for boredom / cynicism / distraction / laziness being used to excuse away what seems to be deep-rooted limitations in LLMs. It simultaneously misunderstands what a human is and what a machine is. ("Sometimes humans also refuse to work" would be a bad excuse from an auto dealer.)

                                                          • pishpash 2 hours ago

                                                            Psychology is cognitive. Doesn't seem principled to discard that at all.

                                                          • woopwoop 5 hours ago

                                                            My argument is not that slippery problems are unimportant or extraneous, it's that this paper does not convincingly demonstrate that these models are actually especially bad at this kind of reasoning.

                                                            • aguaviva 4 hours ago

                                                              Noted, and thanks for clarifying. BTW when I get questions with typos/inversions (that are supposed to be logical or mathy questions), I tend to throw them back at the person asking, rather than simply ploughing forward. But I guess I'm the kind of person who does that sort of thing.

                                                        • swatcoder 8 hours ago

                                                          Real discourse has tons of irrelevant information for all sorts of reasons.

                                                          There are some contexts, academic or professional, where questions are posed carefully and specifically, but these are narrow contexts.

                                                          A useful general purpose assistant needs to be able to find what's relevant among what's irrelevant.

                                                          Excellence at just solving math problems that are especially well specified can be a useful domain assistant (no small win!), but is not the same thing.

                                                          That said, if you've got a hundred billion dollars betting on your AI project achieving AGI, you benefit a lot by conflating those contexts. In that case, grinding on formal SAT, LSAT, GRE, etc problems amounts to tuning for microbenchmarks rather than real world use cases.

                                                          • woopwoop 7 hours ago

                                                            Real discourse is also full of typos which accidentally invert the meaning of things, asking the wrong question for deep reasons, asking the wrong question for shallow reasons, and all of the other things that justify subtracting the below average size kiwis from the final answer.

                                                          • meroes 7 hours ago

                                                              Handling irrelevant info is taught in grade school and is a skill for the SAT, for example.

                                                            Basically any kind of model (not just LLMs/ML) has to distill out irrelevant info.

                                                            The point is having an answer that you can defend logically and most people would agree.

                                                            If the model said “I’m not sure if this portion is a typo”, I guarantee you the model creators would take the RLHF in a different direction, because that is somewhat reasonable and defensible. However in your specific question, I personally think there is a singular objective answer—but that isn’t always the case to be fair for misleading/irrelevant prompts. The models are being fooled however based on how they respond.

                                                            I say this as a RLHF’er who sees and is told to write similar questions at times.

                                                            At the end of the day, this is how the Model creators want their models to predict language. And anyone using them is in for their ride.

                                                            • sottol 7 hours ago

                                                                I think this is valid though. Transformer models don't explicitly do logic but implicitly "vibe" out the answer from the input sequence (using the attention mechanism) and learnt knowledge - they're predicting text sequences after all. So adding more irrelevant context to the input would quite likely influence the output.

                                                              I could see attention possibly being able to overcome this, but if not that would be a pretty big gotcha for real-world applications and reliability in real-world scenarios where, as others have said, it's not immediately clear what is relevant info. These models would be a lot less useful if a human had to decide which information to feed them and the output would be dependent on human judgement. I understand it's where we're at right now and that they are quite useful already but the valuations hint at investors expecting more imo.

                                                              • jfrbfbreudh 8 hours ago

                                                                I think it’s an important result because filtering signal from noise is just as, if not more, important than forming conclusions from signal.

                                                                • capkutay 2 hours ago

                                                                    I agree that it's not particularly surprising that trying to trick an LLM with irrelevant text will make it perform worse.

                                                                    I don't see this as a material limitation of LLMs but rather something that can be addressed at the application level by stripping out irrelevant information.

                                                                  • mdp2021 3 hours ago

                                                                    > LLMs have dramatically worse performance on basic algebra questions when you add in irrelevant information

                                                                    "Attention is all you need" /

                                                                    (It is part of the general problem solving process to evaluate what is relevant and what is not.)

                                                                    • moffkalast an hour ago

                                                                      Differential attention that filters out noise is all you need :)

                                                                    • WhitneyLand 8 hours ago

                                                                      I agree it wasn’t that convincing; moreover, the variation wasn’t that dramatic for the large SOTA models.

                                                                      Why should they write a paper about the inherent reasoning capabilities of “large” language models and then, in the abstract, cherry-pick a number from a tiny 1B-parameter model?

                                                                      • andoando 7 hours ago

                                                                        Consider that asking exam-style direct questions, with only the precise context that matters, is a very niche task out of all the possible contexts in which an intelligence is asked to understand something.

                                                                        • hggigg 3 hours ago

                                                                          That's not even the problem I encounter. They literally crap out on stupidly simple tasks. Recent ones:

                                                                          1. Bing was gaslighting me into 9.11 being greater than 9.9

                                                                            2. ChatGPT said that 7x7/7+7/7+7/7 was 24 (it's 9).

                                                                            3. When expanding (x+1)^2 the output was 2x^2+2 (the correct expansion is x^2+2x+1).

                                                                            Regardless of any level of interpretation and irrelevant information, if it can't deterministically understand correctness and the semantics of the operations in question, then it's fucking useless.

                                                                          What is worse in an educational context is that it is actively harmful.

                                                                          • MVissers 3 hours ago

                                                                              Most average humans can’t do any of these things either. Try asking people on the street, or an average US college student.

                                                                            For deterministic calculations you obviously want to allow LLMs to use tools to do math. Just like you’d want to allow humans to use calculators.

                                                                            So yeah, you shouldn’t ask LLMs to do math just like you shouldn’t ask average people to do math. They both suck at it.

                                                                            • hggigg 3 hours ago

                                                                              So, what exactly is the point of the LLM if it can't exceed an average person and produces results which are not trustworthy?

                                                                          • wslh 4 hours ago

                                                                            It's interesting that I use deliberately artificial remarks to encourage more "creative" or random outputs from LLMs. In this approach, I'm not seeking an exact or precise response to prompts, but rather something more open-ended.

                                                                          • bob1029 10 hours ago

                                                                            > we investigate the fragility of mathematical reasoning in these models and demonstrate that their performance significantly deteriorates as the number of clauses in a question increases. We hypothesize that this decline is due to the fact that current LLMs are not capable of genuine logical reasoning

                                                                            I'd offer a simpler explanation: Tokenization.

                                                                            If you tokenize "12345 * 27271" you will get the following:

                                                                              "123", "45", " *", " ", "272", "71"
                                                                            
                                                                            The statistical likelihood that any of these tokens predicts any of the others is completely meaningless in the context of simple arithmetic.
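
                                                                              (To check how a real tokenizer actually splits it, here's a quick sketch with the tiktoken library; the exact pieces depend on the encoding, so the split shown above should be read as illustrative:)

                                                                                import tiktoken  # pip install tiktoken

                                                                                enc = tiktoken.get_encoding("o200k_base")    # GPT-4o's encoding; other encodings split differently
                                                                                ids = enc.encode("12345 * 27271")
                                                                                print([enc.decode([i]) for i in ids])        # pieces along the lines of ['123', '45', ' *', ' ', '272', '71']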

                                                                            You can argue that this is where tool use comes in (and I would be inclined to agree), but I don't think this bodes well for "genuine logical reasoning".

                                                                            • soulofmischief 10 hours ago

                                                                                Nanda et al. successfully recovered the exact mechanism through which a transformer learned to carry out modular addition. [0] Transformers are all about the training data, and we will increasingly learn that structuring the order in which data is learned matters a lot. But it's clear that transformers are absolutely capable of encoding generalized solutions to arithmetic.

                                                                              Given the right tokenization scheme and training regimen, we can absolutely create LLMs which have statistically sound arithmetic capabilities. I still wouldn't trust a stochastic model over the algorithmic certainty of a calculator, but what's more important for mathematicians is that these models can reason about complex problems and help them break new ground on hard mathematical problems by leveraging the full statistical power of their weights.

                                                                              [0] https://arxiv.org/abs/2301.05217
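
                                                                                For context, the task in [0] is addition modulo a fixed prime (P = 113); a minimal sketch of that dataset, assuming the standard grokking setup:

                                                                                  P = 113  # the fixed modulus used in the modular-addition grokking experiments
                                                                                  # Each example is the input pair (a, b) with label (a + b) mod P.
                                                                                  dataset = [((a, b), (a + b) % P) for a in range(P) for b in range(P)]
                                                                                  print(len(dataset), dataset[:3])   # 12769 examples: ((0, 0), 0), ((0, 1), 1), ((0, 2), 2)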

                                                                              • pfortuny 7 hours ago

                                                                                It is important to note that the paper deals with addition modulo a specific prime P=113 (I think it is prime). This is important because the paper does not prove that the LLM discovers the algorithm for addition modulo n for general n.

                                                                              • ttul 10 hours ago

                                                                                I respectfully disagree.

                                                                                While tokenization certainly plays a role in how language models process input, it's simplistic to attribute the challenges in mathematical reasoning solely to tokenization.

                                                                                SOTA language models don't just rely on individual token predictions, but build up contextual representations across multiple layers. This allows them to capture higher-level meaning beyond simple token-to-token relationships. If this weren’t the case, it would be inconceivable that models would work at all in all but the most utterly simplistic scenarios.

                                                                                The decline in performance as complexity increases might be due to other factors, such as:

                                                                                  - Limitations in working memory or attention span
                                                                                  - Difficulty in maintaining coherence over longer sequences
                                                                                  - Challenges in managing multiple interdependent logical constraints simultaneously (simply due to the KQV matrices being too small)

                                                                                And in any case, I think OpenAI’s o1 models are crushing it in math right now. The iterative, model-guided CoT approach seems to be able to handle very complex problems.

                                                                                • m3kw9 10 hours ago

                                                                                    I would say the more variables you give it, the more the probability drifts for each of the facts it has to hold. Maybe LLMs still don’t have the ability to ignore useless stuff you add to the prompt.

                                                                                  • l33t7332273 9 hours ago

                                                                                    I thought attention was all you need

                                                                                    • altruios 8 hours ago

                                                                                      How much attention do you need?

                                                                                      ...is probably an important question too.

                                                                                  • andrepd 9 hours ago

                                                                                    >And in any case, I think OpenAI’s o1 models are crushing it in math right now.

                                                                                      My man, it cannot solve even the simplest problems it hasn't seen the solution to yet, and routinely makes elementary errors in simple algebraic manipulations or arithmetic! All of this points to the fact that it cannot actually perform mathematical or logical reasoning, only mimic it superficially if trained on enough examples.

                                                                                    I challenge you to give it even a simple, but original, problem to solve.

                                                                                    • Workaccount2 8 hours ago

                                                                                      >I challenge you to give it even a simple, but original, problem to solve.

                                                                                      (34903173/x)+(238 * 2650) - 323326 = 45323434, solve for x

                                                                                      Statistically, no one has ever done this calculation ever before. It's entirely unique.

                                                                                      O1 answered "x = 34,903,173 divided by 45,016,060", which is correct.[1][2]

                                                                                      Now I guess you can pick up the goal post and move it.

                                                                                      [1]https://chatgpt.com/share/6709481a-3144-8004-a7fd-0ccd9e3bc5...

                                                                                      [2]https://www.wolframalpha.com/input?i=%2834903173%2Fx%29%2B%2...

                                                                                      • bob1029 8 hours ago

                                                                                        > Now I guess you can pick up the goal post and move it.

                                                                                        The central problem with math is that you have an infinite amount of space within which to move these goalposts.

                                                                                        How many variants on this trial before we find a mistake?

                                                                                        What is an acceptable error rate?

                                                                                        • naasking 8 hours ago

                                                                                          > How many variants on this trial before we find a mistake?

                                                                                          How many variants would it take for a human to make a mistake? It's certainly not "infinity", so is this an indication that humans don't reason?

                                                                                          • jimhefferon 8 hours ago

                                                                                            At this moment, the error rate seems to be that of a beginning graduate student. Or at least, that's what Terry Tao thinks. That's pretty good.

                                                                                            • lupire 5 hours ago

                                                                                              That is not at all what Tao said.

                                                                                              https://mathstodon.xyz/@tao/113132502735585408

                                                                                              "Here the results were better than previous models, but still slightly disappointing: the new model could work its way to a correct (and well-written) solution if provided a lot of hints and prodding, but did not generate the key conceptual ideas on its own, and did make some non-trivial mistakes. The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, (static simulation of a) graduate student. "

                                                                                          • andrepd 2 hours ago

                                                                                            My brother in christ, how is

                                                                                                A/B + C*D - E = F, solve for B
                                                                                            
                                                                                            an original problem? How many tens of thousands of examples of this exact form do you think it came across?

                                                                                            It's the same as with coding by the way: it can reshuffle things it has already seen while changing variable names and so on. Ask it something which is not in stackoverflow or geeks4geeks and it goes tits up.

                                                                                            PS: Tested it on GPT 3.5: same answer.

                                                                                          • WhitneyLand 8 hours ago

                                                                                            Please provide your precise definitions of “reasoning” and “original”.

                                                                                            There’s no consensus in the literature on what these mean even if you make it more specific by talking about “mathematical reasoning”, so I don’t really understand what opinions like these are based on.

                                                                                            I see a lot of the no true Scotsman fallacy going around; even the paper resorts to this, as it actually uses phrases like “true reasoning” several times.

                                                                                            I don’t think the paper is very convincing btw, the abstract is kind of click-baity and talks about 65% variation when that was a cherry picked example from a tiny phi model and the SOTA models showed way less variation which was arguably not that interesting.

                                                                                            • YeGoblynQueenne 5 hours ago

                                                                                              >> There’s no consensus in the literature on what these mean even if you make it more specific by talking about “mathematical reasoning”, so I don’t really understand what opinions like these are based on.

                                                                                              What literature is that? You can find plenty of very clear consensus on what reasoning is if you read e.g. the literature on automated reasoning. A brief taste:

                                                                                              Automated Reasoning

                                                                                              Reasoning is the ability to make inferences, and automated reasoning is concerned with the building of computing systems that automate this process. Although the overall goal is to mechanize different forms of reasoning, the term has largely been identified with valid deductive reasoning as practiced in mathematics and formal logic. In this respect, automated reasoning is akin to mechanical theorem proving. Building an automated reasoning program means providing an algorithmic description to a formal calculus so that it can be implemented on a computer to prove theorems of the calculus in an efficient manner. Important aspects of this exercise involve defining the class of problems the program will be required to solve, deciding what language will be used by the program to represent the information given to it as well as new information inferred by the program, specifying the mechanism that the program will use to conduct deductive inferences, and figuring out how to perform all these computations efficiently. While basic research work continues in order to provide the necessary theoretical framework, the field has reached a point where automated reasoning programs are being used by researchers to attack open questions in mathematics and logic, provide important applications in computing science, solve problems in engineering, and find novel approaches to questions in exact philosophy.

                                                                                              https://plato.stanford.edu/entries/reasoning-automated/

                                                                                              After that you may want to look at the SEP articles on Analogical reasoning and Defeasible Reasoning:

                                                                                              https://plato.stanford.edu/entries/reasoning-analogy/

                                                                                              https://seop.illc.uva.nl/entries/reasoning-defeasible/

                                                                                              • lupire 5 hours ago

                                                                                                That's an obsolete definition that defines reasoning as a simplistic mechanical task explicitly encoded by humans. What an LLM is attempting is far beyond that. It's an automated process for creating its own reasoning method.

                                                                                                • YeGoblynQueenne 2 hours ago

                                                                                                  And this is according to whom, please?

                                                                                            • ukuina 9 hours ago

                                                                                              Do you have some categories of such original problems? It seems markedly better at reasoning/logic puzzles, and programmatically-solvable problems are often offloaded to the Python interpreter.

                                                                                          • TZubiri 10 hours ago

                                                                                                Wouldn't a slight change in tokenization (say, mapping single digits to single tokens) help with this specific challenge?

                                                                                            • wenc 9 hours ago

                                                                                              Aren’t coding copilots based on tokenizing programming language keywords and syntax? That seems to me to be domain specific tokenization (a very well defined one too — since programming languages are meant to be tokenizable).

                                                                                              Math is a bit trickier since most of the world’s math is in LaTeX, which is more of a formatting language than a syntax tree. There needs to be a conversion to MathML or something more symbolic.
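
                                                                                                  As a rough illustration of that conversion step, here is a minimal sketch using sympy's LaTeX parser (assumed available; it also needs the optional antlr4-python3-runtime package):

                                                                                                      # minimal sketch: parse a LaTeX fragment into a symbolic expression a CAS
                                                                                                      # can manipulate, instead of treating the LaTeX as mere formatting
                                                                                                      from sympy.parsing.latex import parse_latex   # requires antlr4-python3-runtime

                                                                                                      expr = parse_latex(r"\frac{a}{b} + c \cdot d")
                                                                                                      print(expr)                # a/b + c*d (up to term ordering)
                                                                                                      print(expr.free_symbols)   # {a, b, c, d}, in some order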

                                                                                              Even English word tokenization has gaps today. Claude Sonnet 3.5 still fails on the question “how many r’s are there in strawberry”.

                                                                                              • gwillen 8 hours ago

                                                                                                > Aren’t coding copilots based on tokenizing programming language keywords and syntax?

                                                                                                No, they use the same tokenization as everyone else. There was one major change from early to modern LLM tokenization, made (as far as I can tell) for efficient tokenization of code: early tokenizers always made a space its own token (unless attached to an adjacent word.) Modern tokenizers can group many spaces together.

                                                                                              • bob1029 9 hours ago

                                                                                                Context-specific tokenization sounds a lot like old fashioned programming.

                                                                                              • m3kw9 10 hours ago

                                                                                                      The LLM will know that 123 and 45 form a contiguous number, just like humans can tell that if you say 123 and then, after a slight pause, 45, you mean a single number.

                                                                                                • TZubiri 10 hours ago

                                                                                                  It's just so dissonant to me that the tokens in mathematics are the digits, and not bundles of digits. The idea of tokenization makes sense for taking the power off letters, it provides language agnosticism.

                                                                                                  But for maths, it doesn't seem appropriate.

                                                                                                        I wonder what the effect of forcing a separate token for each digit would be.
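
                                                                                                        One way to poke at this is to compare a stock tokenizer with a forced digit-by-digit split. A minimal sketch (assuming the tiktoken package; inserting spaces is just a crude way to simulate one token per digit):

                                                                                                            import tiktoken

                                                                                                            enc = tiktoken.get_encoding("cl100k_base")   # encoding used by GPT-4-class models

                                                                                                            number = "34903173"
                                                                                                            grouped = enc.encode(number)                 # the tokenizer's own digit grouping
                                                                                                            per_digit = enc.encode(" ".join(number))     # crude per-digit split: "3 4 9 0 3 1 7 3"

                                                                                                            print([enc.decode([t]) for t in grouped])    # e.g. ['349', '031', '73']
                                                                                                            print([enc.decode([t]) for t in per_digit])  # one (spaced) digit per token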

                                                                                                  • taeric 2 hours ago

                                                                                                    This reminds me of the riddle of someone buying the numerals to put their address on their house. When you are looking at text, the point is all you have are the characters/symbols/tokens/whatever you want to call them. You can't really shepherd some over to their numeric value while leaving some at their token value. Unless you want to cause other issues when it comes time to reason about them later.

                                                                                                    I'd hazard that the majority of numbers in most text are not such that they should be converted to a number, per se. Consider addresses, postal codes, phone numbers, ... ok, I may have run out of things to consider. :D

                                                                                                  • soulofmischief 10 hours ago

                                                                                                    I think that as long as the attention mechanism has been trained on each possible numerical token enough, this is true. But if a particular token is underrepresented, it could potentially cause inaccuracies.

                                                                                                    • sva_ 10 hours ago

                                                                                                      It won't 'see' [123, 45] though, but [7633, 2548], or rather sparse vectors that are zero at each but the 7634th and 2549th position.

                                                                                                  • s-macke 11 hours ago

                                                                                                    These results are very similar to the "Alice in Wonderland" problem [1, 2], which was already discussed a few months ago. However the authors of the other paper are much more critical and call it a "Complete Reasoning Breakdown".

                                                                                                    You could argue that the issue lies in the models being in an intermediate state between pattern matching and reasoning.

                                                                                                              To me, such results indicate that you can't trust any LLM benchmark results related to math and reasoning when you see that changing the characters, numbers, or sentence structure of a problem alters the outcome by more than 20 percentage points.

                                                                                                    [1] https://arxiv.org/html/2406.02061v1

                                                                                                    [2] https://news.ycombinator.com/item?id=40811329

                                                                                                    • oliwary 10 hours ago

                                                                                                      Someone (https://x.com/colin_fraser/status/1834336440819614036) shared an example that I thought was interesting relating to their reasoning capabilities:

                                                                                                      A man gets taken into a hospital. When the doctor sees him, he exclaims "I cannot operate on this person, he is my own son!". How is this possible?

                                                                                                                All LLMs I have tried this on, including GPT o1-preview, get this wrong, assuming that the riddle hinges on the gendered assumption that the doctor is a man when it is in fact a woman. However, in this case, there is no paradox - it is made clear that the doctor is a man ("he exclaims"), meaning he must be the father of the person being brought in. The fact that the LLMs got this wrong suggests that they find a similar reasoning pattern and then apply it. Even after additional prodding, a model continued making the mistake, arguing at one point that it could be a same-sex relationship.

                                                                                                      Amusingly, when someone on HN mentioned this example in the O1 thread, many of the HN commentators also misunderstood the problem - perhaps humans also mostly reason using previous examples rather than thinking from scratch.

                                                                                                      • layer8 10 hours ago

                                                                                                        > perhaps humans also mostly reason using previous examples rather than thinking from scratch.

                                                                                                                  Although we would like AI to be better here, the worse problem is that, unlike humans, you can’t get the LLM to understand its mistake and then move forward with that newfound understanding. While the LLM tries to respond appropriately and indulge you when you indicate the mistake, further dialog usually exhibits noncommittal behavior by the LLM, and the mistaken interpretation tends to sneak back in. You generally don’t get the feeling of “now it gets it”, and instead it tends to feel more like someone with no real understanding (but very good memory of relevant material) trying to bullshit-technobabble around the issue.

                                                                                                        • oliwary 9 hours ago

                                                                                                          That is an excellent point! I feel like people have two modes of reasoning - a lazy mode where we assume we already know the problem, and an active mode where something prompts us to actually pay attention and actually reason about the problem. Perhaps LLMs only have the lazy mode?

                                                                                                          • letmevoteplease 8 hours ago

                                                                                                            I prompted o1 with "analyze this problem word-by-word to ensure that you fully understand it. Make no assumptions." and it solved the "riddle" correctly.

                                                                                                            https://chatgpt.com/share/6709473b-b22c-8012-a30d-42c8482cc6...

                                                                                                            • hoosieree 8 hours ago

                                                                                                              My classifier is not very accurate:

                                                                                                                  is_trick(question)  # 50% accurate
                                                                                                              
                                                                                                              To make the client happy, I improved it:

                                                                                                                  is_trick(question, label)  # 100% accurate
                                                                                                              
                                                                                                              But the client still isn't happy because if they already knew the label they wouldn't need the classifier!

                                                                                                              ...

                                                                                                              If ChatGPT had "sense" your extra prompt should do nothing. The fact that adding the prompt changes the output should be a clue that nobody should ever trust an LLM anywhere correctness matters.

                                                                                                              [edit]

                                                                                                              I also tried the original question but followed-up with "is it possible that the doctor is the boy's father?"

                                                                                                              ChatGPT said:

                                                                                                              Yes, it's possible for the doctor to be the boy's father if there's a scenario where the boy has two fathers, such as being raised by a same-sex couple or having a biological father and a stepfather. The riddle primarily highlights the assumption about gender roles, but there are certainly other family dynamics that could make the statement true.

                                                                                                              • PoignardAzur an hour ago

                                                                                                                It's not like GP gave task-specific advice in their example. They just said "think carefully about this".

                                                                                                                If it's all it takes, then maybe the problem isn't a lack of capabilities but a tendency to not surface them.

                                                                                                            • s-macke 8 hours ago

                                                                                                              I have found multiple definitions in literature of what you describe.

                                                                                                              1. Fast thinking vs. slow thinking.

                                                                                                              2. Intuitive thinking vs. symbolic thinking.

                                                                                                              3. Interpolated thinking (in terms of pattern matching or curve fitting) vs. generalization.

                                                                                                              4. Level 1 thinking vs. level 2 thinking. (In terms of OpenAIs definitions of levels of intelligence)

                                                                                                                        The definitions all describe the same thing.

                                                                                                                        Currently, all of the LLMs are trained to use the "lazy" thinking approach. o1-preview is advertised as the exception: it is trained or fine-tuned on countless reasoning patterns.

                                                                                                          • tgv 10 hours ago

                                                                                                            I'm sure we fall back on easy/fast associations and memories to answer. It's the way of least resistance. The text you quote bears more than a superficial similarity to the old riddle (there's really nothing else that looks like it), but that version also stipulates that the father has died. That adds "gendered" (what an ugly word) information to the question, a fact which is missed when recalling this particular answer. Basically, LLMs are stochastic parrots.

                                                                                                            • travisjungroth 9 hours ago

                                                                                                              How people don’t see the irony of commenting “stochastic parrots” every time LLM reasoning failure comes up is beyond me.

                                                                                                              There are ways to trick LLMs. There are also ways to trick people. If asking a tricky question and getting a wrong answer is enough to disprove reasoning, humans aren’t capable of reasoning, either.

                                                                                                            • s-macke 10 hours ago

                                                                                                              > perhaps humans also mostly reason using previous examples rather than thinking from scratch.

                                                                                                              We do, but we can generalize better. When you exchange "hospital" with "medical centre" or change the sentence structure and ask humans, the statistics would not be that different.

                                                                                                              But for LLMs, that might make a lot of difference.

                                                                                                            • apsec112 11 hours ago

                                                                                                              Both Claude-3.5 and o1-preview nail this problem

                                                                                                              "Let's think through this step-by-step:

                                                                                                              1. Alice has 3 brothers 2. Alice has 2 sisters 3. We need to find out how many sisters Alice's brother has

                                                                                                              The key here is to realize that Alice's brothers would have the same sisters as Alice, except they would also count Alice as their sister.

                                                                                                              So, Alice's brothers would have: - The 2 sisters Alice has - Plus Alice herself as a sister

                                                                                                              Therefore, Alice's brothers have 3 sisters in total."

                                                                                                              • s-macke 11 hours ago

                                                                                                                And here lies the exact issue. Single tests don’t provide any meaningful insights. You need to perform this test at least twenty times in separate chat windows or via the API to obtain meaningful statistics.

                                                                                                                For the "Alice in Wonderland" paper, neither Claude-3.5 nor o1-preview was available at that time.

                                                                                                                But I have tested them as well a few weeks ago with the issue translated into German, achieving also a 100% success rate with both models.

                                                                                                                However, when I add irrelevant information (My mother ...), Claude's success rate drops to 85%:

                                                                                                                "My mother has a sister called Alice. Alice has 2 sisters and 1 brother. How many sisters does Alice's brother have?"

                                                                                                                • probably_wrong 10 hours ago

                                                                                                                  Your experience makes me think that the reason the models got a better success rate is not because they are better at reasoning, but rather because the problem made it to their training dataset.

                                                                                                                  • s-macke 9 hours ago

                                                                                                                        We don't know. The paper and the problem were very prominent at that time. Some developers at Anthropic or OpenAI might have included them in some way, either as a test or as a task to improve the CoT via Reinforcement Learning.

                                                                                                                    • andrepd 9 hours ago

                                                                                                                      Absolutely! It's the elephant in the room with these ducking "we've solved 80% of maths olympiad problems" claims!

                                                                                                                    • Workaccount2 10 hours ago

                                                                                                                      We do have chatbot arena which to a degree already does this.

                                                                                                                      I like to use:

                                                                                                                      "Kim's mother is Linda. Linda's son is Rachel. John is Kim's daughter. Who is Kim's son?"

                                                                                                                      Interestingly I just got a model called "engine test" that nailed this one in a three sentence response, whereas o1-preview got it wrong (but has gotten it right in the past).

                                                                                                                      • andoando 6 hours ago

                                                                                                                        You also need a problem that hasn't been copy pasted a million times on the internet.

                                                                                                                      • einarfd 10 hours ago

                                                                                                                        My problem with this puzzle, is how do you know that Alice and her brothers share both parents?

                                                                                                                        Is it not correct English to call two people who share one parent, sisters, or brothers?

                                                                                                                        I guess I could be misguided by my native Norwegian where you have to preamble the word with "hell" (full), or "halv" (half), if you want to specify the number of shared parents.

                                                                                                                        • thfuran 10 hours ago

                                                                                                                          It is pretty much the same in English. Unqualified would usually mean sharing both parents but could include half- or step-siblings.

                                                                                                                          • s-macke 9 hours ago

                                                                                                                            I am not a native English speaker. Can you reformulate the problem for me, so that every alternative interpretation is excluded?

                                                                                                                            • zeroonetwothree 9 hours ago

                                                                                                                              Alice has N full sisters. She also has M full brothers. How many full sisters does Alice’s brother have?

                                                                                                                              • s-macke 7 hours ago

                                                                                                                                Tried it with N=2 and M=1 (brother singular) with the gpt-4o model and CoT.

                                                                                                                                1. 50% success without "full" terminology.

                                                                                                                                2. 5% success with "full" terminology.

                                                                                                                                So, the improvement in clarity has exactly the opposite effect.

                                                                                                                          • zeroonetwothree 9 hours ago

                                                                                                                            They would usually be called “half-sisters”. You could call them “sisters” colloquially though but given it’s presented as a logic question I think it’s fine to disregard

                                                                                                                      • thenoblesunfish 11 hours ago

                                                                                                                        Very interesting, and aligns with what I would expect in terms of the type of "thinking" LLMs do. I think that it's also the type of "thinking" that will let a student pass most school courses, except of course for the ones where the teacher has taken the time to pose test questions that aren't as amenable to pattern matching. (Hard, but I assume most readers here are familiar with leetcode style interviews and what makes questions of that kind higher or lower quality for assessing candidates)

                                                                                                                        (And yes, I know people are hard at work adding other types of thinking to work along with the pure language models)

                                                                                                                        • trehalose 7 hours ago

                                                                                                                          I see a lot of discussion about irrelevant clauses tripping up the LLMs and why that does or doesn't matter. To me, what's far more damning is this:

                                                                                                                          > Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark.

                                                                                                                          This seems like irrefutable evidence of overfitting, that in the best case scenario is epidemic among current LLMs (and in the worst case interpretation, is covering up fundamental inabilities to learn mathematical reasoning from the training data).

                                                                                                                          • ak_111 2 hours ago

                                                                                                                            As an outsider can anyone enlighten me how this squares with the news that models that adapt similar LLM architecture can obtain silver medal in mathematical olympiad?

                                                                                                                            • lionkor 2 hours ago

                                                                                                                              careful statistical massaging, maybe.

                                                                                                                              would you pick only winning results and only present favorable, massaged results if it got you 150+B USD of worth?

                                                                                                                            • yk 11 hours ago

                                                                                                                                    I actually test LLMs in a similar way. For example, there is a well-known logic puzzle where a farmer tries to cross a river with a cabbage, a goat, and a wolf. LLMs have been able to solve that since at least GPT-2; however, if we replace the wolf with a cow, gpt-o correctly infers the rules of the puzzle but can't solve it.

                                                                                                                              • getoffmyyawn 10 hours ago

                                                                                                                                I've found that the River Crossing puzzle is a great way to show how LLMs break down.

                                                                                                                                For example, I tested Gemini with several versions of the puzzle that are easy to solve because they don't have the restrictions such as the farmer's boat only being able to carry one passenger/item at a time.

                                                                                                                                Ask this version, "A farmer has a spouse, chicken, cabbage, and baby with them. The farmer needs to get them all across the river in their boat. What is the best way to do it?"

                                                                                                                                In my tests the LLMs nearly always assume that the boat has a carry-restriction and they come up with wild solutions involving multiple trips.

                                                                                                                                • SonOfLilit 10 hours ago

                                                                                                                                  I've been using this as my first question to any new LLM I try and I'm quite sure nothing before GPT-4 even got close to a correct solution. Can you post a prompt that GPT-2 or 3 can solve?

                                                                                                                                  • chasd00 10 hours ago

                                                                                                                                    What happens if you sit down and invent a logic game that is brand new and has never been documented before anywhere then ask an LLM to solve it? That, to a layman like me, seems like a good way to measure reasoning in AI.

                                                                                                                                    • jprete 10 hours ago

                                                                                                                                      I think the problem is inventing new structures for logic games. The shape of the problem ideally would be different than any existing puzzle, and that's hard. If a person can look at it and say "oh, that's just the sheep-wolf-cabbage/liar-and-truthteller/etc. problem with extra features" then it's not an ideal test because it can be pattern-matched.

                                                                                                                                      • layer8 10 hours ago

                                                                                                                                        This is being done, but the difficulties are: (1) How do you assess that it is really brand-new and not just a slight variation of an existing one? (2) Once you publish it, it stops being brand-new, so its lifetime is limited and you can’t build a longer-term reproducible test out of it.

                                                                                                                                        • Analemma_ 8 hours ago

                                                                                                                                          You can do this, but at that point what are you really benchmarking? If you invent a de novo logic puzzle and give it to 100 people on the street, most of them won't be able to solve it either. If your aim is to prove "LLMs can't really think like humans can!", this won't accomplish that.

                                                                                                                                        • voidUpdate 10 hours ago

                                                                                                                                          I'm scared of the cows around you if they eat goats

                                                                                                                                          • Manabu-eo 3 hours ago

                                                                                                                                            I think their point is that cows don't eat goats, unlike wolves, and that causes the LLMs to answer it wrong.

                                                                                                                                          • andrepd 9 hours ago

                                                                                                                                            Meaning it's just a glorified Google.

                                                                                                                                            • romwell 8 hours ago

                                                                                                                                              ...that makes up results when it can't find any

                                                                                                                                          • criddell 11 hours ago

                                                                                                                                            It would be interesting if this kind of work could ever be extended to show the limitations of mathematical reasoning in animals and humans.

                                                                                                                                            For example, just as a dog will never understand a fourier transform, there are likely ideas that humans cannot understand. If we know what our limits are, I wonder if we could build machines that can reason in ways we aren't capable of?

                                                                                                                                            • myrmidon 10 hours ago

                                                                                                                                              I think it is a naive assumption that such a limitation even exists ("exists" in a sense that it is actually useful, by being consistent and somewhat simple to describe).

                                                                                                                                              We investigated similar ideas for language (=> Noam Chomsky), where we tried to draw clear, formalized limits for understanding (to show e.g. how human capabilities contrast with animals). The whole approach failed completely and irredeemably (personal opinion), but researching it was far from useless to be fair.

                                                                                                                                              • r2_pilot 9 hours ago

                                                                                                                                                As the human brain is finitely bounded in space and time, any idea that can't be compressed or represented by condensing notation, which is "larger" than the 100B cells+100T synapses can represent, or whose integration into said human's brain would take longer than 150 years, would be considered unable to be contemplated by a normal human.

                                                                                                                                                • klabb3 9 hours ago

                                                                                                                                                  Yes but we overcome. We can do absolutely insane things like just large prime number testing, because of reasoning + tool use.

                                                                                                                                                  Humans invent tools and wield them. Whether it's pen & paper to extend our memory, a horse to become stronger, a calculator to speed up our thinking or an airplane to literally fly, the tools we wield become extensions of our agency and control.

                                                                                                                                                  A lonely human without knowledge sharing or tools isn’t that much more capable in their lifetime than the smartest animals. When we talk about human ability colloquially, we’re generally talking about what we can do with access to our human heritage, civilization, safety and access to materials and tools.

                                                                                                                                                  Pattern matching against something others have already done is great but this is shared with at the very least all mammals to some extent. Pushing the boundaries of our species forward over time is a different game. Or at least, it seems to be…

                                                                                                                                                  It certainly seems like we’ve found the holy grail of pattern matching (system 1 thinking), which is an insane leap! But what about system 2? The million dollar question is what the hell is the topology of that pre-frontal cortex thinking machine? Is it just more pattern matching but against different patterns? Or is it completely and qualitatively different? And if so, is it more or less hard? To me, following the debate is just watching one bad prediction after another, (including my own of course). We just don't know how it works. Not you or me, not Sam Altman in full though-leading leather jacket uniform, or even our top neuro-scientists.

                                                                                                                                                  • myrmidon 5 hours ago

                                                                                                                                                    "Hardware limitations" are extremely unlikely in my view to establish useful limits.

                                                                                                                                                    Consider: Do hardware limitations establish useful limits on the kind of problems a computer can solve? The answer is a resounding NO in my view, because the limits of what can be expressed/solved grows so insanely quickly that it becomes a completely meaningless and unreachable limit even for super small computers (less capable than our brain).

                                                                                                                                                    As for learning time constraints: These are obviously reachable, but still useless in my view because they are too inconsistent- the kind of methods and insights that a human can acquire within a lifetime are completely different between persons, and highly dependent on how the learning happens...

                                                                                                                                              • dang 4 hours ago

                                                                                                                                                Related ongoing thread:

                                                                                                                                                LLMs don't do formal reasoning - https://news.ycombinator.com/item?id=41812523 - Oct 2024 (70 comments)

                                                                                                                                                • eigenform 5 hours ago

                                                                                                                                                  The difference is that, if we are solving a math problem together, you and I [explicitly or implicitly] can come to an agreement over the context and decide to restrict our use of language with certain rules. The utility behind our conversation [generally] rests on those rules!

                                                                                                                                                  An LLM is very good at recovering rules, but being good at pattern recognition is not the same thing as being good at unambiguously following rules in the appropriate context.

                                                                                                                                                  edit: Natural language is far from an efficient/sufficient/necessary intermediate representation for doing math, just ask any general-purpose computer. Sometimes, it's worth "putting rules in stone," and it seems unreasonable to believe that there is always an unambiguous rule for this that you can mechanically recover from a corpus of language use.

                                                                                                                                                  • qwerty456127 2 hours ago

                                                                                                                                                      Can't an LLM just detect a mathematical reasoning task, then produce a formula (not even displaying it in production mode) to invoke on an external service engineered for formal logical and mathematical computations?
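
                                                                                                                                                      That split is easy to prototype: the model's only job is to emit a machine-readable equation, and a symbolic engine does the actual solving. A minimal sketch using sympy as the external service (the llm_translate stub is hypothetical and stands in for a real model call):

                                                                                                                                                          # minimal sketch of the offloading idea: the LLM translates the word problem
                                                                                                                                                          # into an equation string, and sympy (the "external service" here) solves it
                                                                                                                                                          from sympy import Eq, solve, symbols, sympify

                                                                                                                                                          def llm_translate(problem: str) -> str:
                                                                                                                                                              # hypothetical stub: a real system would call a model here and ask it to
                                                                                                                                                              # emit the equation in a fixed, machine-readable format
                                                                                                                                                              return "34903173/x + 238*2650 - 323326 - 45323434"

                                                                                                                                                          x = symbols("x")
                                                                                                                                                          expr = sympify(llm_translate("(34903173/x) + (238 * 2650) - 323326 = 45323434, solve for x"))
                                                                                                                                                          print(solve(Eq(expr, 0), x))   # [34903173/45016060]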

                                                                                                                                                    • singularity2001 10 hours ago

                                                                                                                                                      If the argument is that LLMs are bad at reasoning because they are easily distractible and the results vary with modifications in the question, one should be reminded of the consistency and distractability of humans.

                                                                                                                                                      • zeroonetwothree 9 hours ago

                                                                                                                                                        Why? LLMs are supposedly better than humans (as many comments claim in this thread).

                                                                                                                                                        • riku_iki 7 hours ago

                                                                                                                                                          A trained human can tell when they are distracted: "I am distracted and can't figure out the answer", while an LLM will confidently give you a wrong answer, which makes the whole result unreliable.

                                                                                                                                                        • gradientsrneat 6 hours ago

                                                                                                                                                          Could this be Goodhart's Law in action? AI tools like to showcase benchmarks in bar graphs to show how well they perform compared to other models.

                                                                                                                                                          Maybe the benchmark Qs/As snuck into training sets accidentally. Is it still Goodhart's Law if it's unintentional?

                                                                                                                                                          Daniel Lemire has blogged about being impressed with how well the LLM answers his CS problem questions. I was impressed too. Not sure where the line of competence lies.

                                                                                                                                                          • resters 7 hours ago

                                                                                                                                                            I think it's obvious that LLMs will be able to do "reasoning" far better than humans. We must separate our notion of what is remarkably human. Rarely is it the reasoning, it's the intuition that a logical path exists -- for example a mathematical proof that draws from separate sub-disciplines of mathematics, etc.

                                                                                                                                                            Consider that in an LLM, language inputs are tokenized and fed as inputs into the neural network, and connections in the network create output sequences that are not just syntactically correct (trivial) or form semantically plausible sentences (early transformers did this). LLM output sequences follow the deep patterns of language, which include something that resembles reasoning, as the model has learnt from its training data.

                                                                                                                                                            LLMs seem to fall short because they often fail at truly abstract reasoning tasks that humans find easy. If trained properly, LLMs can develop advanced representations of logical systems that will surely outpace what humans can do in terms of raw reasoning.

                                                                                                                                                            However, human mathematicians have not even unified around constructive mathematics as a must for the study of mathematics. This reveals that even highly evolved mathematical disciplines rely on objects whose characteristics do not lend themselves to full logical scrutiny and are in a way socially constructed and effectively hard to audit.

                                                                                                                                                            While notation in mathematics is incredible technology it is also a highly limiting factor that suffers major tradeoffs. Humans struggle to invent new notation fast enough and to discard outdated notation fast enough. If we do see an AI-powered boom in mathematics, I suspect our notion of notation and the fluidity we demand from it will change dramatically.

                                                                                                                                                            • islewis 7 hours ago

                                                                                                                                                              This argument is centered around the belief that language and reasoning flow bidirectionally: language can be understood first (we are here), and reasoning is the next natural rung of the ladder (your thesis believes we will get there with LLMs).

                                                                                                                                                              I see language more as a medium for transcribing reasoning. While language certainly communicates reasoning, you can have reasoning without language, but not language without reasoning.

                                                                                                                                                              This paper seems to imply that current LLMs are just copying the training dataset's reasoning communication, not understanding the actual reasoning. I don't think LLMs moving past this is "obvious" or even close to being inevitable.

                                                                                                                                                              > Instead, LLMs likely perform a form of probabilistic pattern-matching and searching to find closest seen data during training without proper understanding of concepts. While this process goes beyond naive memorization of words and the models are capable of searching and matching more abstract reasoning steps, it still falls short of true formal reasoning.

                                                                                                                                                              • resters 6 hours ago

                                                                                                                                                                I realize there is subtlety to the question of which comes first. An infant, crying when it is hungry and pre-linguistic, is applying modus ponens: C -> F (crying implies food), so I cry and then I get fed. Language grows in humans just like arms and legs, and so does reasoning. Baby animals show the same behavior but don't use language, so perhaps some logic is wired in by instinct. Either way I don't think we need to worry about that detail.

                                                                                                                                                                Consider how language input to an LLM is tokenized. Now imagine a tokenization scheme that introduces tokens tracking the strict logical reasoning in the language. Two completely different English sentences could then both tokenize as, for example, an application of modus ponens to assumption 1 yielding conclusion 2, as in the toy sketch below.
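                                                                                                                                                                A minimal sketch of what such a scheme could look like (Python; every token name here is hypothetical, just to make the idea concrete):

                                                                                                                                                                    # Toy illustration, not a real tokenizer: two very different English
                                                                                                                                                                    # sentences could map to the same abstract sequence of "reasoning tokens".
                                                                                                                                                                    MP, ASSUME, CONCLUDE = "<MP>", "<ASSUME>", "<CONCLUDE>"

                                                                                                                                                                    # "It is raining, and rain wets the street, so the street is wet."
                                                                                                                                                                    # "Socrates is a man, and all men are mortal, so Socrates is mortal."
                                                                                                                                                                    # Both could reduce to: apply modus ponens to assumption 1, yielding conclusion 2.
                                                                                                                                                                    sentence_a_tokens = [MP, ASSUME, "A1", CONCLUDE, "C2"]
                                                                                                                                                                    sentence_b_tokens = [MP, ASSUME, "A1", CONCLUDE, "C2"]
                                                                                                                                                                    print(sentence_a_tokens == sentence_b_tokens)  # True: the same abstract reasoning step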

                                                                                                                                                                Now consider that we can tokenize formal notation as used in mathematics and logic, and we can train LLMs on mathematical papers, peer review write-ups, etc. We can generate millions of correct proofs and teach it which ones are remarkable and why, etc.

                                                                                                                                                                Ultimately we run into the same barrier that the mathematical constructivists do, but I think it's still quite plausible that LLMs trained as I describe would be able to reason quite well and find oversights humans missed. However, creating the optimal scheme and implementation is not trivial.

                                                                                                                                                              • sottol 7 hours ago

                                                                                                                                                                > If trained properly, LLMs can develop advanced representations of logical systems that will surely outpace what humans can do in terms of raw reasoning.

                                                                                                                                                                We have already trained the LLMs on most of the human knowledge base (so like 4-5000 years of it?) - imo training data will become a problem and will soon be more expensive than compute. Sure, you can work around some of this using synthetic training data, but I personally would not count on general-purpose LLMs (at least current transformer-based models) developing super-human representations of logical systems anytime soon.

                                                                                                                                                                • resters 7 hours ago

                                                                                                                                                                  I don't disagree, however I'm optimistic because most of the current reasoning "ability" of LLMs comes from the accidental reasoning embedded in language patterns.

                                                                                                                                                                  For example, the prompt completion: "The mouse has a unique digestive system compared to other rodents, however the sparrow" on GPT-4o is

                                                                                                                                                                  "exhibits a highly specialized digestive system adapted for rapid processing of food, particularly seeds and insects, through structures like the crop and gizzard, which are not found in rodents."

                                                                                                                                                                  Claude 3.5 completes it as

                                                                                                                                                                  "has a completely different digestive anatomy as a bird. Birds like sparrows have adaptations for flight, including a lightweight skeletal system and a specialized digestive tract. Unlike mice, sparrows have a crop for storing food, a gizzard for grinding it, and generally shorter intestines to reduce weight. They also lack teeth, instead using their beak to manipulate food."

                                                                                                                                                                  What appears to be a thoughtful contrast is merely a language pattern. Similarly, a prompt like "Assume -B, A->B. Under what circumstances is B true?" will simply follow the learned patterns to return output that is likely correct. Arithmetic prompts fail only when nobody bothered to write about that particular calculation, so it was not in the training data.
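                                                                                                                                                                  For contrast, a dedicated solver answers that same prompt by deduction rather than pattern matching. A minimal sketch using the z3 SMT solver (assuming the z3-solver Python package is installed):

                                                                                                                                                                      from z3 import Bools, Solver, Implies, Not, unsat

                                                                                                                                                                      A, B = Bools("A B")
                                                                                                                                                                      s = Solver()
                                                                                                                                                                      s.add(Not(B), Implies(A, B))  # the stated assumptions: not-B and A -> B
                                                                                                                                                                      s.add(B)                      # ask whether B can be true under them
                                                                                                                                                                      print(s.check() == unsat)     # True: B never holds (and by modus tollens, neither does A)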

                                                                                                                                                                  However the way that multi-modal LLMs handle images is inspiring as it effectively converts from the visual domain into the sequential token domain. The same could be done for symbolic systems, etc.

                                                                                                                                                                • agentultra 7 hours ago

                                                                                                                                                                  I don’t see how it’s obvious that LLM’s will be capable of any mathematical “reasoning”.

                                                                                                                                                                  LLM’s can infer relationships and maintain longer context chains in order to generate their output… it still happens that sometimes the output is correct depending on the training data, layers, context, etc. And it can get more accurate when we change the parameters of the model. But the algorithm isn’t “doing” anything here. It will generate something regardless of what it’s prompted with.

                                                                                                                                                                  Maybe it’s right. But the algorithm is an algorithm. It doesn’t care what truth is. It’s generating BS essentially.

                                                                                                                                                                  A human is doing a lot more work when performing mathematics.

                                                                                                                                                                  It may be that LLM’s can be a useful tool in mathematical reasoning but it’s not obvious that it will ever be capable of it without a human, let alone be better than a human.

                                                                                                                                                                  • resters 7 hours ago

                                                                                                                                                                    I think models could be designed that in separate layers created "logical system" representations which could feed back into the output, much like how attention works. Attention is about relevance, the logical layers could be based on logical schema-based patterns.

                                                                                                                                                                    Consider an LLM that happened to have some pre-trained layers that were trained abstractly on all the constructive proofs available for modern mathematics. LLMs with image recognition rely on existing visual pattern recognition layers, fwiw.

                                                                                                                                                                    • agentultra 4 hours ago

                                                                                                                                                                      There's another blog post that made it to the front-page of this site which sums up the state of the art nicely [0].

                                                                                                                                                                      It's not obvious that they will be able to do any reasoning, in the formal sense, at all; let alone better than humans. LLMs are simply not sufficient for the kinds of tasks and work done when reasoning about mathematical problems.

                                                                                                                                                                      There's plenty of research demonstrating that they can be useful in small, constrained tasks -- which isn't anything to turn our noses up at!

                                                                                                                                                                      ... it's just not _obvious_ in the sense that there is a clear step from LLM capabilities today to "better than humans." It's more an article of faith that it could be true, some day, if we just figure out X, Y, Z... which folks have been doing for decades to no avail. In other words, it's not obvious at all.

                                                                                                                                                                      [0] https://garymarcus.substack.com/p/llms-dont-do-formal-reason...

                                                                                                                                                                      • resters an hour ago

                                                                                                                                                                        It’s true that current models do not do formal reasoning, my point is that it is possible to use tokenization to do it. See my comment in the other thread.

                                                                                                                                                                • woopwoop 10 hours ago

                                                                                                                                                                  I'm curious about what happens with the no-op dataset if you include in the prompt that the questions may contain irrelevant information.

                                                                                                                                                                  • jumploops 4 hours ago

                                                                                                                                                                    > Overall, while o1-preview and o1-mini exhibit significantly stronger results compared to current open models—potentially due to improved training data and post-training procedures—they still share similar limitations with the open models.

                                                                                                                                                                    tl;dr - the best open model dropped from 89.7% on GSM8K (full) to 30% on Symbolic-NoOp, while o1-preview dropped from 94.9% to 77.4%.

                                                                                                                                                                    I think all this paper shows is that LLMs need space to "think" outside of their inference layer (for the current architectures, at least).

                                                                                                                                                                    It's similar to the "draw a room, but DO NOT put an elephant in the corner" prompts that people were using with image models.

                                                                                                                                                                    This is something that practitioners have been doing for a while (via CoT, ToT, etc.), and it's the whole rationale behind OpenAI's newly launched o1-series "model."
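                                                                                                                                                                    For example, a prompt along these lines (illustrative only, not taken from the paper) gives the model that space and defuses the no-op trap directly:

                                                                                                                                                                        "Before answering, list every quantity mentioned in the problem, mark any that are irrelevant to the question, then solve step by step using only the relevant ones."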

                                                                                                                                                                    There's another post that says this paper proves LLMs can't be used to build "reliable agents" -- which doesn't appear to be true when you look at o1's stellar performance here.

                                                                                                                                                                    • dr_dshiv 11 hours ago

                                                                                                                                                                      It seems incredibly easy to generate an enormous amount of synthetic data for math. Is that happening? Does it work?

                                                                                                                                                                      • ilaksh 10 hours ago

                                                                                                                                                                        They did that for o1 and o1-preview. If you read the paper or do your own testing with that SOTA model, you will see that the paper is nonsense: with the best models, the problems they point out are mostly marginal, like one or two percentage points when changing numbers, etc.

                                                                                                                                                                        They are taking poor performance of undersized models and claiming that proves some fundamental limitation of large models, even though their own tests show that isn't true.

                                                                                                                                                                        • foobarqux 10 hours ago

                                                                                                                                                                          You choose to ignore Figure 8, which shows an 18% drop from simply adding an irrelevant detail.

                                                                                                                                                                          In the other test the perturbations aren’t particularly sophisticated and modify the problem according to a template. As the parent comment said this is pretty easy to generate test data for (and for the model to pattern match against) so maybe that is what they did.

                                                                                                                                                                          A better test of “reasoning” would be to isolate the concept/algorithm and generate novel instances that are completely textually different from existing problems to see if the model really isn’t just pattern matching. But we already know the answer to this because it can’t do things like arbitrary length multiplication.

                                                                                                                                                                          • ilaksh 3 hours ago

                                                                                                                                                                            This shows there are limitations but it doesn't prove they can't be overcome by changing training data.

                                                                                                                                                                            I don't think that LLMs are the end of AGI research at all, but the extreme skepticism of their current utility is mostly based on failures of small models. Accuracy is around 65% for most of the small models they tested, and that is what they are really basing their conclusions on.

                                                                                                                                                                        • MacsHeadroom 11 hours ago

                                                                                                                                                                          Yes, this is how o1 was trained. Math and programming, because they are verifiable.

                                                                                                                                                                          This is also why o1 is not better at English. Math skills transfer to general reasoning but not so much to creative writing.

                                                                                                                                                                          • Davidzheng 11 hours ago

                                                                                                                                                                            In which distribution? Like school math, competition problems, or unsolved problems? FWIW I think one and three are probably easier to generate synthetically. It's harder to bound the difficulty, but I think the recent David Silver talk implies it doesn't matter much. Anyway, there's some work on this you can find online--they claim to improve GSM8K and MATH a bit but not saturate them. Idk how useful it is in practice.

                                                                                                                                                                            • bentice 10 hours ago

                                                                                                                                                                              Data is the wrong approach to developing reasoning. We don't want LLMs to simply memorize 3x3 = 9; we want them to understand that 3 + 3 + 3 = 9, therefore 3x3 = 9 (obviously a trivial example). If they have developed reasoning, very few examples should be needed.

                                                                                                                                                                              The way I see it, reasoning is actually the ability of the model to design and train smaller models that can learn with very few examples.

                                                                                                                                                                              • hackinthebochs 10 hours ago

                                                                                                                                                                                > If they have developed reasoning very few examples should be needed.

                                                                                                                                                                                Yes, once the modules for reasoning have converged, it will take very few examples for it to update to new types of reasoning. But to develop those modules from scratch requires large amounts of examples that overtax its ability to memorize. We see this pattern in the "grokking" papers. Memorization happens first, then "grokking" (god I hate that word).

                                                                                                                                                                                It's not like humans bootstrap reasoning out of nothing. We have a billion years of evolution that encoded the right inductive biases in our developmental pathways to quickly converge on the structures for reasoning. Training an LLM from scratch is like recapitulating the entire history of evolution in a few months.

                                                                                                                                                                                • dr_dshiv 6 hours ago

                                                                                                                                                                                  My understanding is that, if you train these enough, it becomes likely to develop efficient compressions— which “reasoning” would be.

                                                                                                                                                                                • aithrowawaycomm 7 hours ago

                                                                                                                                                                                  It's easy enough to generate an enormous amount of formal math problems, but utterly quixotic to generate an enormous amount of quantitative reasoning problems, which is the thing LLMs are lacking.

                                                                                                                                                                                  • ninetyninenine 11 hours ago

                                                                                                                                                                                    I don’t think so. The data is biased towards being very general.

                                                                                                                                                                                  • throwaway918299 2 hours ago

                                                                                                                                                                                    limitations of mathematical reasoning?

                                                                                                                                                                                    They have none. Literally zero. That’s the limit. Thank you for reading my paper.

                                                                                                                                                                                    • dev1ycan 10 hours ago

                                                                                                                                                                                      I don't understand the idiocracy we live in. It is beyond obvious not just that the stock market is a bubble, but ESPECIALLY that the AI-related stocks are a massive bubble. When it pops, and it will, it is going to be very, very ugly. Yet people keep pouring in. As Sabine said, it's starting to look like particle physics, where they keep asking for bigger colliders: a bigger collider won't get you significantly better returns if your methodology is flawed.

                                                                                                                                                                                      Eventually they will run out of exponential cash to pour in, and investors will start asking questions. Stocks are already valued at 60x+ their earnings; whenever it pops, you don't want to be the one who bought the top.

                                                                                                                                                                                      Guess it's still gonna take a while more for the layman to realize the issues with LLMs, but it'll happen.

                                                                                                                                                                                      • Workaccount2 10 hours ago

                                                                                                                                                                                        >if your methodology is flawed you aren't gonna get any more significant returns.

                                                                                                                                                                                        The problem with this statement is that predictions made about scaling 5 years ago have held true[1]. We keep adding parameters, adding compute, and the models keep getting more capable.
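                                                                                                                                                                                        Roughly, the fit in [1] is a power law in parameter count (and similarly in data and compute), something like L(N) ≈ (N_c / N)^{α_N} with α_N ≈ 0.076, so each additional order of magnitude of parameters has so far bought a fairly predictable drop in loss.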

                                                                                                                                                                                        The flaws of LLM's from 2024 are not what is relevant. Just like the flaws of LLMs from 2021 were not relevant. What is relevant is the rate of change, and the lack of evidence that things won't continue on this steep incline. Especially if you consider that GPT4 was sort of a preview model that motivated big money to make ungodly investments to see how far we can push this. Those models will start to show up over the next 2 years.

                                                                                                                                                                                        If they break the trend and the scaling flops, then I think a lot of air is gonna blow out of the bubble.

                                                                                                                                                                                        [1] https://arxiv.org/pdf/2001.08361

                                                                                                                                                                                        • vrighter 9 hours ago

                                                                                                                                                                                          we added a lot of parameters.

                                                                                                                                                                                          We added a LOT of data.

                                                                                                                                                                                          The resulting models have become only slightly better. And they still have all of their old problems.

                                                                                                                                                                                          I think this is proof that scaling doesn't work. It's not like we just doubled the sizes; they increased by a lot, but the improvements get smaller each time. And they've already run out of useful data.

                                                                                                                                                                                          • dev1ycan 9 hours ago

                                                                                                                                                                                            They are quite literally asking for trillions of dollars and even nuclear-powered data centers; pretty sure we've gotten to the point where it's not sustainable.

                                                                                                                                                                                            • Workaccount2 9 hours ago

                                                                                                                                                                                              Those are roadmap items being asked for, but the next gen models are already in training. If they keep moving along the same trend line, like all the previous models have, then they probably will be able to find the investors for the next next gen. Even if it's a few trillion dollars and a few nuclear power plants.

                                                                                                                                                                                              This doesn't even factor in the tech inertia. We could stop making new models today, and it would probably be 4-5 years before integration slowed down. Google still hasn't even put Gemini in their home speakers.

                                                                                                                                                                                          • empath75 8 hours ago

                                                                                                                                                                                            Computers have been able to do mathematical calculation and logical deduction cheaply and perfectly for decades, and it's not really required for generative AIs to be able to do it for them to be useful. It's good enough if they can write and execute some python code to do it, and generally they are fairly capable of that.

                                                                                                                                                                                            The question of whether they can do it is interesting in an academic sense, but has nothing to do if they're useful or not. They also don't need to be true AGI to be useful.

                                                                                                                                                                                          • beardyw 11 hours ago

                                                                                                                                                                                            I honestly can't see why LLMs should be good at this sort of thing. I am convinced you need a completely different approach. At the very least, you usually want exactly one completely correct result. Good luck getting current models to do that.

                                                                                                                                                                                            • hackinthebochs 10 hours ago

                                                                                                                                                                                              LLMs aren't totally out of scope of mathematical reasoning. LLMs roughly do two things, move data around, and recognize patterns. Reasoning leans heavily on moving data around according to context-sensitive rules. This is well within the scope of LLMs. The problem is that general problem solving requires potentially arbitrary amounts of moving data, but current LLM architectures have a fixed amount of translation/rewrite steps they can perform before they must produce output. This means most complex reasoning problems are out of bounds for LLMs so they learn to lean heavily on pattern matching. But this isn't an intrinsic limitation to LLMs as a class of computing device, just the limits of current architectures.

                                                                                                                                                                                              • qudat 9 hours ago

                                                                                                                                                                                                One core issue is that we need to convert spoken/written languages (e.g. English) into more formal mathematical languages, since the underlying mathematical problem is often written as prose. The example in the paper:

                                                                                                                                                                                                > When Sophie watches her nephew, she gets out a variety of toys for him. The bag of building blocks has 31 blocks in it. The bin of stuffed animals has 8 stuffed animals inside. The tower of stacking rings has 9 multicolored rings on it. Sophie recently bought a tube of bouncy balls, bringing her total number of toys for her nephew up to 62. How many bouncy balls came in the tube?

                                                                                                                                                                                                So I would argue it's critical that LLMs know how to convert text to math and then perform those calculations. This extends beyond just math to the underlying logic.

                                                                                                                                                                                                We just need to figure out how to inform the LLM to read, write, and understand formal languages. My guess is attention heads could probably work in this context, but we might want something that is a little more rigid, naturally extending from the rigidity of logic and formal languages. Conversely, we might not have figured out how to properly train LLMs on formal languages and have them preserve the underlying logic and axioms necessary to correctly perform math calculations.
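                                                                                                                                                                                                A minimal sketch of what that text-to-math step should produce for the quoted problem (translated by hand here; the hard part is getting the model to do this translation reliably):

                                                                                                                                                                                                    # Quantities extracted from the prose
                                                                                                                                                                                                    blocks, stuffed_animals, rings, total_toys = 31, 8, 9, 62

                                                                                                                                                                                                    # "How many bouncy balls came in the tube?"
                                                                                                                                                                                                    bouncy_balls = total_toys - (blocks + stuffed_animals + rings)
                                                                                                                                                                                                    print(bouncy_balls)  # 62 - 48 = 14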

                                                                                                                                                                                                • s-macke 9 hours ago

                                                                                                                                                                                                  Well, my perspective on this is as follows:

                                                                                                                                                                                                  The recurrent or transformer models are Turing complete, or at least close to being Turing complete (apologies, I’m not sure of the precise terminology here).

                                                                                                                                                                                                  As a result, they can at least simulate a brain and are capable of exhibiting human-like intelligence. The "program" is the trained dataset, and we have seen significant improvements in smaller models simply by enhancing the dataset.

                                                                                                                                                                                                  We still don’t know what the optimal "program" looks like or what level of scaling is truly necessary. But in theory, achieving the goal of AGI with LLMs is possible.

                                                                                                                                                                                                  • golol 11 hours ago

                                                                                                                                                                                                    I'm a math PhD student at the moment and I regularly use o1 to try some quick calculations I don't feel like doing. While I feel like GPT-4o is so distilled that it just tries to know the answer from memory, o1 actually works with what you gave it and tries to calculate. It can be quite useful.

                                                                                                                                                                                                    • banditelol 10 hours ago

                                                                                                                                                                                                      I'm curious, what kind of quick calculations do you usually use LLMs for?

                                                                                                                                                                                                      Edited for clarity

                                                                                                                                                                                                      • golol 9 hours ago

                                                                                                                                                                                                        Just earlier today I wanted to check if exp(inx) is an orthonormal basis on L^2((0, 1)) or if it needs normalization. This is an extremely trivial one though. Less trivially I had an issue where a paper claimed that a certain white noise, a random series which diverges in a certain Hilbert space, is actually convergent in some L^infinity type space. I had tried to use a Sobolev embedding but that was too crude so it didn't work. o1 correctly realized that you have to use the decay of the L^infinity norm of the eigenbasis, a technique which I had used before but just didn't think of in the moment. It also gave me the eigenbasis and checked that everything works (again, standard but takes a while to find in YOUR setting). I wasn't sure about the normalization so again I asked it to calculate the integral.
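                                                                                                                                                                                                        (For what it's worth, that first check is quick to sketch by hand: ∫_0^1 |exp(inx)|^2 dx = 1, but for n ≠ m, ∫_0^1 exp(i(n-m)x) dx = (exp(i(n-m)) - 1)/(i(n-m)) ≠ 0, so the family is normalized but not orthogonal on (0,1); the usual orthonormal basis there is exp(2πinx), for which the cross terms vanish.)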

                                                                                                                                                                                                        This kind of adaptation to your specific setting, instead of just spitting out memorized answers for common settings, is what makes o1 useful for me. Now again, it is often wrong, but if I am completely clueless I like to watch it attempt things, and I can get inspiration from that. That's much more useful than seeing a confident wrong answer like 4o would give.

                                                                                                                                                                                                  • apsec112 11 hours ago

                                                                                                                                                                                                    ()

                                                                                                                                                                                                    • ilaksh 10 hours ago

                                                                                                                                                                                                      That makes the whole conclusion obviously false.

                                                                                                                                                                                                      I don't really understand why, but I think we are going to see total denial from a significant percentage of the population all the way up to and past the point where many average mathematicians and software engineers cannot in any way compete with AI.

                                                                                                                                                                                                      We already are reportedly getting pretty close with o1 (not o1-preview).

                                                                                                                                                                                                      There are also new paradigms for machine learning and hardware in the pipeline that will continue to provide orders of magnitude performance gains and new capabilities in the next 5-10 years.

                                                                                                                                                                                                      Many people still claim that "self driving cars don't exist", in so many words, even though they are deployed in multiple cities.

                                                                                                                                                                                                      • sottol 7 hours ago

                                                                                                                                                                                                        > Many people still claim that "self driving cars don't exist", in so many words, even though they are deployed in multiple cities.

                                                                                                                                                                                                        But just look at the predictions of that time - cities will change, ... and so on. Sure, we have self-driving cars, but the reality looks very different (and a lot more like the past!) than the pundits and futurists imagined! I'm not sure anyone will make their billions of dollars of investment back within even 20 years.

                                                                                                                                                                                                        Just two random examples from ~10 years ago (2013-2016); you can google many more from that time.

                                                                                                                                                                                                        * "Ford Targets Fully Autonomous Vehicle for Ride Sharing in 2021; Invests in New Tech Companies, Doubles Silicon Valley Team" [1]

                                                                                                                                                                                                        * "Disruptions: How Driverless Cars Could Reshape Cities" [2]

                                                                                                                                                                                                        [1] https://media.ford.com/content/fordmedia/fna/us/en/news/2016...

                                                                                                                                                                                                        [2] https://archive.nytimes.com/bits.blogs.nytimes.com/2013/07/0...

                                                                                                                                                                                                        [3] https://www.gensler.com/dialogue/30/the-game-changer-for-cit...