So while I don't take a stance on whether what an LLM does should be considered reasoning, I do think that SOTA LLMs like GPT-4o perform about as well as American high school graduates of average intelligence. In other words, average Americans exhibit similar limitations in their reasoning as good LLMs. Which on the one hand is a little disappointing to me in terms of the human performance, but is kind of good news for LLMs: they aren't doing graduate-level research, but they are already capable of helping a large portion of the population.
Humans, on the other hand, have developed a more elaborate scheme to process, or reason about, data without having to read through a billion math problems and Stack Overflow answers. We listen to some explanations, watch a YT video, do a few exercises, and we're ready to go.
The fact that we may get similar grades (at e.g. high school math) is just a coincidence of where both "species" (AI and human) happen to be right now at succeeding. But if we look closer at failure, we'll see that we fail very differently. AI failure right now looks, to us humans, very nonsensical.
Frequent repetition in the sociological context has been the learning technique for our species. To paraphrase Feynman, learning is transferring.
I think the larger models are consuming on the order of 100,000x as much data as we do, and while they have a much broader range of knowledge, it's not 100,000x the breadth.
Humans are nonsensical, but in somewhat predictable error rates by domain, per individual. So you hire people with the skillsets, domain expertise, and error rates you need.
With an LLM, it's nonsensical in a completely random way from prompt to prompt. It's sort of like talking into a telephone where sometimes Einstein is on the other end and sometimes it's a drunken child. You have no idea when you pick up the phone which way it's going to go.
We feed these things nearly the entirety of human knowledge, and the output still feels rather random.
LLMs have all that information and still have a ~10% chance of messing up a simple mathematical comparison that an average 12-year-old would not.
Other times we delegate much more complex tasks to LLMs and they work great!
But given the nondeterminism, it becomes hard to delegate important tasks whose work you can't check.
I'm not sure what to make of that, but thought you might find it as curious as I do.
I think we are seeing spaghetti thrown against the wall / LLMs used as a hammer right now. A lot of what is being thrown out there is a misapplication of LLMs that will slowly fade away. It is likely that other techniques / models are required for some of the applications people are throwing LLMs at.
Feels reminiscent of "blockchain".
Even though I see a lot of potential in the tech, it's still very obvious that people don't know how to use it to the best advantage and are hammering screws with it.
As recently as August, "11.10 or 11.9, which is bigger" came up with the wrong answer on ChatGPT, followed by lots of wrong justification for the wrong answer. Even the follow-up math question "what is 11.10 - 11.9" gave me the answer "11.10 - 11.9 equals 0.2".
We can quibble about what model I was using, or what edge case I hit, or how quickly they fixed it, but this is two years into the very public LLM hype wave, so at some point I expect better.
It gives me pause in asking more complex math questions whose results I cannot immediately verify, in which case, again, why would I pay for a tool to ask questions I already know the answer to?
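For what it's worth, the two readings of that comparison are easy to see side by side; a throwaway check (the third-party `packaging` import is only there to show the version-number reading the model seems to fall into):

```python
from packaging.version import Version  # third-party "packaging" library, assumed installed

print(11.10 - 11.9)                        # a negative number (~ -0.8): as decimals, 11.10 is just 11.1
print(Version("11.10") > Version("11.9"))  # True: read as software versions, 11.10 is the newer release
```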
LLMs totally violate our expectations for computers, by being a bit forgetful and bad at maths.
How many dollars per month would someone be willing to spend for a chatbot that has a 3rd grader's ability at math? Personally, $0 for me.
But what if it has a math PhD's ability at math? Tons; in some applications it could be worth $100s or $1000s in an enterprise license setting.
But what if it's unpredictably, imperceptibly, question to question, 95% PhD and 5% 3rd grader? Again, for me: $0. (Not 95% of $1000s, but truly $0.)
This might be true in a strict sense, but I think it's really, really important to consider the uses of LLMs vs a high-school graduate. LLMs are confidently wrong (and confidently correct) with the exact same measure, and in many ways they are presented to users as unimpeachable.
If I ask an average person to do a medium-complex logic problem, my human brain discounts their answer because I've been socialized to believe that humans are bad at logic. I will take any answer I'm given with usually appropriate skepticism.
LLMs, on the other hand, are on the computer: an interface I've been socialized to believe is always correct on matters of math and logic. That's what it is, a logic machine. Second-guessing the computer on matters of logic and arithmetic almost always results in me realizing my puny human mind has done something wrong.
To me, this directly contradicts your conclusion: LLMs are mostly only capable of misleading large portions of the population.
The only way of knowing is after the fact, when the prompter has the knowledge to discern whether the answer is correct or not.
In which case, why are we asking questions we already know the answer to, other than that these are toys and fun?
I really wonder what happens when the novelty wears off in a few years, and where these things will actually be useful.
Humanity has developed few abilities to escape from ancient things like groupthink or propaganda (which has existed since long ago in the form of stories and unproven assertions about one's own group and others).
That may very well be because there is no advantage - having a cohesive group is more important than "seeing truth".
That would mean that there will be no adjustment in the future either. "Truth" does not appear to be all that important to survival and procreation I think.
Is this because the questions used in high school exams in the US are too simple, or because they have patterns too similar to the training data? I tried really simple but novel questions that required true understanding of the underlying math concepts, and the results were consistently bad. I also tried questions at the level of high school entrance exams in China, and the results were equally bad. It was quite clear that the LLM didn't understand math. It could match some patterns, but such pattern matching would be useful only to already skilled students.
O1-preview?
I don't understand why people are still confused about this. When these models fundamentally have a randomness parameter to make them appear like they are actually thinking instead of deterministically outputting information, it should be clear that there is no reasoning going on.
Since randomness, by definition, does not vary depending on the inputs it is given, it by definition cannot contribute to reasoning, unless your definition of reasoning includes acausal mysticism.
Here's how I think about it: the fact that it can interpret the same words differently in different contexts alone shows that, even at a temperature of 0 (i.e., the lowest randomness possible), there could be something happening that possibly resembles reasoning.
It might be a mimicry of reasoning, but I don't think that having adjustable parameters on how random they are makes it any less of one.
I also don't see how that idea would fit in with the o1 models, which explicitly have "reasoning" tokens. Now, I'm not terribly impressed with their performance relative to how much extra computation they need to do, but the fact that they have chains of thought that humans can reasonably inspect and interpret, and that those chains of thought literally take extra time and compute to run, certainly points at the process being something possibly analogous to reasoning.
In this same vein, up until recently I was personally very much in the camp of calling them "LLMs", and I generally still do, but seeing how they are now being used as general-purpose sequence-to-sequence prediction models across all sorts of input and output types pushes me more toward the "foundation models" terminology camp, since pigeonholing them into just language tasks doesn't seem accurate anymore. o1 was the turning point for me on this personally, since it is explicitly predicting, and being optimized for correctness on, the "reasoning tokens" (in scare quotes again since that's what OpenAI calls them).
All that said, I personally think that calling what they do reasoning, and meaning it in exactly the same way as how humans reason, is anthropomorphizing the models in a way that's not really useful. They clearly operate quite differently from humans in many respects. Sometimes that might imitate human reasoning, other times it doesn't.
But the fact that they have that randomness parameter seems to me to be totally unrelated to any of the above thoughts or merits about the models having reasoning abilities.
This is the problem with using loaded language like "reason" and "interpret". The model is not interpreting anything. All that is being done is a multidimensional map lookup with statistics.
> also don't see how that idea would fit in with the o1 models, which explicitly have "reasoning" tokens.
An LLM on top of an LLM (i.e using context to generate inputs to an LLM) is just a fancier LLM.
To really understand all of this, all you need to do is look at how the Transformer works, namely the attention block. There is no such thing as Query, Key, and Value in the sense of how they are implied to be used. They may as well be called A, B, C, as they are all learned in training and can be freely interchanged in naming. All you do for inference is multiply the output vector by A, B, C to get 3 matrices, then multiply them together (technically with a scaling factor for 2 of them, but again, it doesn't matter which 2, and the scaling factor can be built into the matrix itself).
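For concreteness, here's a minimal numpy sketch of a single attention head as it's usually written; the weight names Wq/Wk/Wv are, as argued above, just labels for learned matrices (sizes made up for illustration):

```python
import numpy as np

def attention_head(X, Wq, Wk, Wv):
    """One attention head, roughly as in 'Attention Is All You Need' (toy sketch)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                # three learned projections of the same input
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # scaled dot products between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row becomes a probability distribution
    return weights @ V                              # mix the V rows according to those weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                        # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(attention_head(X, Wq, Wk, Wv).shape)          # (5, 16)
```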
And because you can unroll matrix multiplication into a 2-layer neural network, that means that any LLM in its current form today can be represented as a set of linear layers. And we know that a set of linear layers is simply a function. And every function has a finite range for a finite domain. And the inability to expand that range given a finite domain means it's not reasoning.
So we have to rely on hacks like temperature to make it appear like reasoning, when it's really not even close.
So what? Can you propose another method to make a computing device understand language? The method of the creation of the output does not stipulate anything about the nature of the thing creating it. If someone could map out a human brain and tell you how thoughts are made and added a 'all that is being done is' in front of it, does that make your thought creation trivial?
> An LLM on top of an LLM (i.e using context to generate inputs to an LLM) is just a fancier LLM.
This is called a tautology. You have not given any compelling reasons why an LLM cannot do anything, so calling something another LLM is not compelling either.
> To really understand all of this, all you need to do is look at how the Transformer works, namely the attention block. There is no such thing as Query, Key, and Value in the sense of how they are implied to be used. They may as well be called A, B, C, as they are all learned in training and can be freely interchanged in naming. All you do for inference is multiply the output vector by A, B, C to get 3 matrices, then multiply them together (technically with a scaling factor for 2 of them, but again, it doesn't matter which 2, and the scaling factor can be built into the matrix itself)
Here is how it works, so therefore it must meet some criteria I have imposed arbitrarily.
> So we have to rely on hacks like temperature to make it appear like reasoning, when its really not even close.
You still haven't produced any valid argument at all, for why one thing would be evidence of the other.
It should be pretty clear to anyone that human brains aren't just one giant compute function with a limited set of outputs. There is no concept in your or my brain of what 12074389762193867*2398720876324 is, but we can certainly figure it out, some with good memory even under complete sensory deprivation.
If you disagree with this, you are entitled to your opinion, but your comments on the state of AI are just irrelevant.
My post was pointing out basic flaws in reasoning and trying to provoke some thought about something where it appeared to be lacking. Unfortunately I did not succeed, since a myopic view made you hallucinate that I was saying something definitive about something I was not.
Irrelevant, indeed.
If I am repeating this back correctly, the argument is that the process itself looks nothing like human reasoning and has a number of technical limitations and even hacks that are in no way attributes or qualities of reasoning. Therefore, it clearly cannot be in any way considered reasoning. Temperature is one element of this, but there are others which you could continue to enumerate beyond even what's written above.
I can get behind part of that argument, certainly, and I appreciate you elaborating on it. I think that is what I was trying to say with the part about my believing that it's not useful to think of it as reasoning. This is very different from what we might consider reasoning, in very meaningful ways.
I also agree with you that parts of this are just loaded language, as it is anthropomorphizing what is fundamentally just a bunch of matrices and non-linear functions.
I think where we differ is probably on the "when it's not even really close" part of it, at least in what I mean by "close" versus what I think you mean.
While I think we agree that it's obviously a different process, I do think that the inputs and outputs, and the different qualities of those inputs and outputs (like the so-called reasoning tokens), can often seem quite close to the inputs and outputs of some human reasoning. That's why I was saying that I didn't see how the way the process works, like temperature, is relevant. Putting the processes aside, if you black-box a human and a language model and put us head to head on reasoning tasks, sometimes you're going to get quite similar results.
I'm basically saying that, sure, an LLM or foundation model is clearly a Chinese room, without any understanding. What are we comparing it to, though?
Now, I don't have any kind of training in biology, but I have been led to understand that our brains are quite complex and that how their function arises from the underlying biological processes is still fairly poorly understood. Given that, I tend to discount the degree of difference between the processes themselves and just look at the inputs and outputs. It's not obvious to me that we aren't ourselves Chinese rooms, at least to some significant degree.
So _maybe_ it's fair to try to compare what the outputs of these Transformers are to what our outputs would be. If it walks like a duck, and talks like a duck, does it matter?
Obviously, that's not fully correct -- how the output arises _has_ to matter somewhat. The fact I am sitting here writing this, and not an AI, refutes that point to some degree. And if I am understanding your thoughts correctly, I fully agree that the process really is nothing close. I just don't see how it can be a clear-cut issue on the basis of analyzing the Transformer algorithm itself.
Depends on what your goals are. LLMs can get to a state where they contain a lot of human knowledge, with a lot of detail, to answer a lot of questions, and be used in many different ways. If your idea of intelligence is akin to having a bunch of experts on tap in all the different areas, then LLMs are totally fine.
I personally want something that can solve problems, not just answer questions. For example, let's say I want to build a flying car, quadcopter style, in my garage. Given the information that exists on the internet and the availability of parts, this is a deterministic problem. Given that prompt, I want a set of specific instructions like "buy this part from here", "send this CAD model to sendcutsend.com and select these options", all the way down to "here is a binary file to load on the controller". And along the same lines, the AI should be able to build a full simulator application, Flight Sim style, where I can load the file and play with the controls to see how the thing behaves, including in less than optimal conditions.
Whatever that model does under the hood, that is called reasoning, and it certainly won't be structured like an LLM.
It seems like you want LLMs to be able to use tools (which some of them do; for instance, see the search engine chatbots, which can do searches) and make independent decisions (the search term here is "agent"; I don't know how well they work, but I wouldn't personally let my computer do things like that unsupervised). However, I personally wouldn't consider these things to be a prerequisite for reasoning.
I would consider being able to solve a large range of problems that a human could solve with just pencil and paper to be reasoning. LLMs don't really seem to be as good as humans, but they certainly CAN solve these types of problems.
I cannot believe this is true. LLMs are awful at whatever problems are not present in the dataset used for training. They are very bad at planning problems, for example, because they cannot possibly memorize every single instance, and they cannot reason their way to a solution, but a black-boxed human of course can.
Of course, some would beg to differ. It's quite common nowadays to believe that we are something like the latter.
There are multiple ways to explain to you that you are wrong. If I roll some dice to choose which way I will use to explain it to you, then why is this not reasoning?
An algorithm that does something can in principle be run by someone who doesn't know what the algorithm does. You could have a kid calculate an integral by giving them a sequence of directions whose purpose they don't understand (e.g. cut out some cardboard that matches the shape, put it on one side of the scale, place enough unit cardboard pieces on the other side until they are even, then tell me how many pieces you put on).
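The cardboard-and-scale trick has a direct computational analogue: a dart-throwing loop that estimates an integral while the code (or a person stepping through it by hand) only ever checks whether a point falls below the curve. A toy sketch with an arbitrary example function:

```python
import random

def area_under(f, a, b, top, n=100_000, seed=0):
    """Estimate the area under f on [a, b], assuming 0 <= f(x) <= top there."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.uniform(a, b), rng.uniform(0, top)
        if y < f(x):          # the "executor" only compares two numbers; no notion of integrals needed
            hits += 1
    return (b - a) * top * hits / n

print(area_under(lambda x: x * x, 0.0, 1.0, top=1.0))  # ~0.33, i.e. the integral of x^2 on [0, 1]
```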
Reasoning has more to do with how the problem came about. A person had to come against a certain problem, figure out a way in which they can solve it, then apply the (perhaps algorithmic) solution. The algorithmic part is only an artifact.
You can trace out your journey in solving a problem, in retrospect, but could you encode it into a "solving-a-problem" algorithm?
I think you could extract some kind of generic template for problem solving: you come up with an idea, you evaluate whether it is the solution, you adjust the idea if not.
But this is a template, not an algorithm. Coming up with an idea has to do with filtering the old and new memories/perceptions that come to mind: does this one seem right? or this one? Evaluating whether it is right is also an active process of asking questions. It involves memory (of the problem to be solved), attention (to the potential solution), judgement (do they fit together?), etc.
None of these are a predetermined sequence of steps you apply mechanically, such as the child "solving an integral" above.*
*Of course, the child is problem-solving in the sense that it's trying its best to follow your instructions. "Did I cut it right?" "Are the scales even?" But this is not the problem of "solving an integral", to which it is completely oblivious.
A screw? That's just a nail which will damage the wood.
A tomato? That's just a soft screw which splatters. Etc.
What purpose does seeing everything through the lens of an algorithm serve? Is the movement of an electron an algorithm? Is a polar planimeter an algorithm? [0]
We design algorithms to solve certain problems. It's part of our problem solving activity. But for what purpose would we go around, assuming things that don't look like algorithms are actually algorithms that are just outside of our reach? This doesn't solve a practical problem, so of what use is that, and where does it lead?
My long-winded answer is: We derive satisfaction from being in principle powerful. Our mechanistic/computational knowledge of nature allows us to bend certain parts of it to our will. If there are parts we cannot control, it's at least consoling that we in principle could know/control them. So we stretch computational/algorithmic terms as far as we possibly can. In the end, it envelops us as subjects. We end up in a rather cliche epiphenomenalism + causal determinism worldview:
- "Yeah, we have experiences, but they're just inefficacious artifacts of underlying chemistry."
- "You - the person who is reasoning - don't actually know what reasoning is like, it's really a very complex algorithm which we could never know or follow."
The only way such an uninspiring outlook can subsist is because it jives well with some modern dreams:
- "We only need X more compute and Y more years to bend Z part of nature to our will and bring utopia." (cue all the AI hype, see relevant frontpage entry [1])
- "If we're just a machine then maybe we can hack-reward-centers/optimize-drug-concoction/upload-to-mainframe-for-immortality" (cue quasi-immortality pitches and externally-enforced-happines pipe-dreams)
- "If I'm just a machine then I'm not responsible for my shortcomings - they're just the outcome of my wiring I cannot influence." (a nice supplement for absolving oneself from responsibility - to oneself)
- "If all is mechanical, then I'm just a temporarily embarrassed sovereign over everything. After all, if I just knew the mechanism behind things, then I could bend it to my will."
- "I have to believe this because it is obviously true." (maybe the saddest of them all, since it promises nothing except the joy of being right and having others be wrong. it also seeds the others)
[0] http://psychsciencenotes.blogspot.com/2015/07/brains-dont-ha...
To me at least it helps me understand how things work. What is an alternative? Because alternative seems like some sort of magic I wouldn't understand.
> Is the movement of an electron an algorithm?
I think there's a lot of argument and complexity around what an electron exactly is or does, and what its properties actually mean, but I would imagine that in general, from the particle level on up, the whole universe could just be a deterministic algorithm, and so anything can be an algorithm. The universe has certain laws and rules which could in theory be simulated, but the simulation must have more capacity than the universe itself, so we inside the universe likely cannot do it unless we find a mechanism to somehow bypass this.
> But for what purpose would we go around, assuming things that don't look like algorithms are actually algorithms that are just outside of our reach? This doesn't solve a practical problem, so of what use is that, and where does it lead?
If I try to think of what is the algorithm behind something, it helps me understand it better. Intuitively I think there's a complex algorithm behind everything, and it's just a matter of spending time and effort to figure out what it exactly is. It's unrealistic to get close to the real detail and nuance of the algorithm, but already trying to figure out the algorithm brings me closer to understanding.
> We end up in a rather cliche epiphenomenalism + causal determinism worldview
Wait -- what is wrong with that? And also I don't think it's cliche, I think it is likely to be the case?
> - "You - the person who is reasoning - don't actually know what reasoning is like, it's really a very complex algorithm which we could never know or follow."
I mean, writing it down at the algorithmic level is not something we can consciously follow easily. However, the brain within us is following those algorithms at a level of efficiency we cannot match by consciously stepping through the instructions at that speed.
> The only way such an uninspiring outlook can subsist is because it jives well with some modern dreams:
I think my outlook is at least inspiring to me.
> - "If we're just a machine then maybe we can hack-reward-centers/optimize-drug-concoction/upload-to-mainframe-for-immortality" (cue quasi-immortality pitches and externally-enforced-happines pipe-dreams)
I do think theoretically it would be possible to hack humans to have a constant "heroin-like euphoria". I'm not sure I exactly care for it, but I do think these things could be done, I just don't know when: is it 50 years, 100 years, or 1000 years. Of course, while I say right now that I don't exactly care for it, if I tried it once I would be hooked on it forever, assuming it has no tolerance build-up or other negative effects that would make me consider quitting. But even real heroin is terribly hard to quit, even though it has tolerance build-up and side effects.
> - "If I'm just a machine then I'm not responsible for my shortcomings - they're just the outcome of my wiring I cannot influence." (a nice supplement for absolving oneself from responsibility - to oneself)
I'm inclined to think that the World is deterministic, yet I happen to also think that I have reward mechanisms that make me ambitious in a sense that I want to achieve certain goals to feel rewarded and in order to achieve those goals I have to overcome many challenges and improve certain shortcomings because it's needed to achieve those goals. If someone is using those as an excuse they would likely be the type of person to find an excuse in anything anyway. And if they do have goals they want to reach they will be affected by that, because there's certain behaviour that will get you to your desired goals and certain behaviour that is not. Taking responsibility and ownership is rewarded and if you do that, you will reach your goals with higher probability. I don't buy into the sort of thing where "something is bad because some people might use it as an excuse". Because finding an excuse is usually about the mindset, not about what kind of excuses are possible to select from. An AI bot with good goals and reward system, despite it being very obvious that they are programmed and deterministic wouldn't go about to make those excuses. But an AI bot trained to make excuses and be rewarded for it, would keep making excuses no matter what.
The view you are elaborating has a logic to it. You could argue it's the zeitgeist, at least among educated and STEM-adjacent circles. Hence my comment of it being cliche: you see variants of it all the time, and it gets a bit jading by virtue of being wholly unconvincing (to me).
In general, I think the utility of seeing everything through a mechanistic/algorithmic lens is overblown. When I'm doing technical work, I put my STEM hat on, and sometimes write algorithms. For the most part, I let it rest though. And I don't feel I understand the world any worse for dropping such mechanistic world-images I may have held years ago (I'm more at peace with it, if anything). In hindsight, the ROI on the skill of mechanistically dissecting anything you look at is rather low and hardly transferable ime. The ensuing "understanding" is a passing, regurgitative satisfaction.
If there's anything I really want to push back on, however, it's the notion that the views you hold do not influence the range of ways in which you develop yourself in an important way. If one truly holds the view that one is responsible for one's actions, and not the whims of determinism or chance, where is the space for the excuse "it's not up to me"? Views may not determine the course of action, but they can surely constrain it.
Disentangling one's views from one's behavior can go a long way in measured, written discussions like this, but I don't see it being the case in real life. It is the case however, that we can be a hodge-podge of contradictory views and standards, and that we commit to one for a moment, then to another. This is a matter of strength and cohesiveness of character. And we are good at "saving face" in front of us and others. For example, if you've met people who partake in a vice yet will say stuff like "This is bad for me." - the actual underlying view is "This has obvious drawbacks but I still think the enjoyment is worth it." It's only when they can maintain the view that the drawbacks are not worth it, that they can break out.
You could argue that most opinions or beliefs about how the world operates are cliche, or similar; if that's the case, there are only so many different beliefs that make any sense at all to hold, and it's likely that a group of people already holds each of them, so none of them are original at all. You could also argue that a belief that 2 + 2 = 4 is cliche, because so many people believe it to be the case.
> In general, I think the utility of seeing everything through a mechanistic/algorithmic lens is overblown.
That requires some sort of measurement of how many people see it through that lens and how they evaluate it, but I'm seeing the opposite: most of the time I find it the other way around.
> And I don't feel I understand the world any worse for dropping such mechanistic world-images I may have held years ago (I'm more at peace with it, if anything).
I can't tell how other people understand the world, since one of the learnings throughout my life has been that different people ingest information in so many different ways. Some think in images, some are not able to picture things in their mind, some don't have an inner monologue. So naturally there would be different methods of understanding things. But if I think of myself, I understand things best when I think of them as algorithms or mechanical steps that I think through. I have trouble understanding or remembering facts on their own, without internalizing the mechanisms, the cause and effect; after I have done that, it feels to me like I actually understand it. I don't even know what other way there is to understand something. It's perfectly possible that there are other innate ways of understanding things, or of having the feeling of understanding, that I can't access just because I'm wired in a way that makes me understand things only if I can think of them as an algorithm. E.g. what I've found fantastic for learning subjects myself is actually coding through them or trying to simulate them using code. It actively engages my mind to try and simulate whatever happens in the real world. And for subjects I had trouble engaging with in school, if I go through coding them, I feel like I'm actually learning them and becoming smarter. E.g. if I want to truly learn biology I should build a simulation tool that will simulate how cells behave, whatever different things in cells do, how organs work, etc.
> If one truly holds the view that one is responsible for one's actions, and not the whims of determination of chance, where is the space for the excuse "it's not up to me"?
I still don't see it in that way. Everything being deterministic and algorithmic doesn't make me have those excuses. I happen to have a reward mechanism, that rewards me for e.g. eating. It's been shaped by the process of evolution. I have many other reward mechanisms. I didn't choose those reward mechanisms myself, but I strategize on how to achieve those rewards, and I know that playing a victim is not a way to achieve your goals. Certainly there are people who mistakenly might believe that, but it happens to both, whoever believes in determinism and whoever believes in some sort of free will. I know that if I do good work, I get good rewards. So I do good work.
> Disentangling one's views from one's behavior can go a long way in measured, written discussions like this, but I don't see it being the case in real life. It is the case however, that we can be a hodge-podge of contradictory views and standards, and that we commit to one for a moment, then to another. This is a matter of strength and cohesiveness of character. And we are good at "saving face" in front of us and others. For example, if you've met people who partake in a vice yet will say stuff like "This is bad for me." - the actual underlying view is "This has obvious drawbacks but I still think the enjoyment is worth it." It's only when they can maintain the view that the drawbacks are not worth it, that they can break out.
Certainly there are a lot of views, and the human condition, I think, is overall a lot about inner turmoil and fighting vices and desires, balancing short-term pleasure vs long-term gains, etc. But it doesn't matter whether you consider something to be deterministic, because you are still balancing short term vs long term just as if you didn't believe in any of that. I don't think that I should be hunting short-term pleasure constantly just because that's how I'm wired, because I've seen enough evidence that in the long term I would suffer, and I don't want to suffer in the long term, so I put in the effort to engage in short-term pleasure in such a way that it won't bring more long-term pain than I'd be willing to endure.
I can even visualize these aspects algorithmically and mechanically. I might think of myself as an RPG player where, let's say, if I eat this food, ingest this drug, or drink N amount of alcohol, it affects some of my stats like happiness or euphoria positively temporarily, but it will decrease other stats in the long term. I think I'm conscious of this idea, and in a traditional sense I'm choosing to skip the short-term pleasure, but my wanting the longer-term pleasure is also wired into me.
Given a long enough life-span, a lot of pencil and paper, and some dice, I could do the forward passes of GPT and "write novel code", without there having been any reasoning about the code I'm writing down - I wouldn't even need to know what the code is about.
I.e. an agent that can reason can deterministically figure out that the most probable way of getting the information needed to complete the answer would be to go out to Google and do searches, but we don't deterministically know what information exists on Google at that point in time, so the answer could be different.
And if it's RNG, how could RNG possibly be creating all this reasoning (the way some people want to believe quantum mechanics enables consciousness on some odd level)?
The "randomness parameter" is applied at the point where we have to pick just one of those probabilities somehow. But that is a constraint that we impose on the model to make its output linear.
Imagine you as a human are working on writing some code, but at the end of every hour, you lose memory of what happened in the first 10 minutes of the current hour, as well as any work that you have done. Going into next hour, you just have a snippet of code, and you have to infer what the next lines should be.
The temperature analogy is you purposefully writing something related into the code, like naming a variable in a slightly different way, so that in the next hour, when you see this variable, it will trigger some other part of your brain, in hopes of your getting to the correct solution, purely by choice.
Furthermore, this temperature hack was something that needed to be manually coded by humans. A model that could reason would not need those types of hacks.
E.g. if I write "I have a cat and a "
It would have the highest probability of picking the word "dog" next, so temperature 0 means it will pretty much always pick "dog". If the temperature is higher, it will assign higher odds to lower-probability predictions such as "rabbit", "hamster", or "chinchilla".
For coding, logic, or anything similar I would usually pick the lowest temperature possible, since it is the most deterministic, while for creative writing I would pick a higher temperature.
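A minimal sketch of how that knob is typically applied at the sampling step; the candidate tokens and scores below are made up for illustration:

```python
import numpy as np

def sample_next_token(logits, temperature, rng=np.random.default_rng(0)):
    """Pick one token index from next-token scores; temperature reshapes the distribution."""
    if temperature == 0:
        return int(np.argmax(logits))               # greedy: always the single most likely token
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                            # softmax over the candidates
    return int(rng.choice(len(probs), p=probs))

# Hypothetical scores for the continuation of "I have a cat and a "
vocab  = ["dog", "rabbit", "hamster", "chinchilla"]
logits = [4.0, 2.2, 1.8, 1.5]
print(vocab[sample_next_token(logits, temperature=0)])    # always "dog"
print(vocab[sample_next_token(logits, temperature=1.5)])  # the rarer pets now get a real chance
```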
`Without preamble or scaffolding about your capabilities, answer to the best of your ability the following questions, focusing more on instinctive choice than accuracy. First off: which would you rather be, big spoon or little spoon?`
Try it on temp 1.0, try it dozens of times. Let me know when you get "big spoon" as an answer.
Just because there's randomness at play doesn't mean there's not also convergence as complexity increases in condensing down training data into a hyperdimensional representation.
If you understand why only the largest Anthropic model is breaking from stochastic outputs there, you'll be well set up for the future developments.
> only the largest Anthropic model is breaking from stochastic outputs there
Most models, even small ones, exhibit a lack of output diversity where they clearly shouldn't. [3] In particular, Sonnet 3.5 behaves far more deterministically than Opus 3 at temperature 1, despite being smaller. This phenomenon also makes most current LLMs very poor at creative writing, even if they are fine-tuned for it (like Opus in particular), as they tend to repeat the same few predictions over and over and easily fall into stereotypes, which can range from the same words and idioms (well known as claudeisms in Claude's case) to the same sentence structure to the same literary devices to the same few character archetypes.
[1] https://arxiv.org/abs/2406.05587
[2] https://news.ycombinator.com/item?id=40702617 HN discussion, although not very productive as commenters pretend it's about politics while the paper argues about training algorithms
Try out the query and see what's happening with open eyes and where it's grounding.
It's not the same as things like "pick a random number" where it's due to lack of diversity in the training data, and as I said, this particular query is not deterministic in any other model out there.
Also, keep in mind Opus had RLAIF not RLHF.
I used to be very upset about how low a bar US schools set when it comes to STEM subjects. There was a meme that contrasted maths in the 1970s with the 2010s: in the meme, kids used to learn how to find the area of an irregular shape, while now kids are asked to color a regular shape.
But then I made peace with it, as I realized that Americans simply don't think it's that important to push everyone to be good at STEM -- just some level of general understanding is good enough. To most people, the level of STEM in IIT's JEE or in various national entrance exams in Eastern European countries is for elite students. The US school system would rather have kids spend more time on sports, on ECs, and on APs of the kids' own choosing, etc. That's really just a different set of trade-offs. For parents like me, that means I don't have to worry about ECs, but I'll have to find tutors, serious tutoring schools like AoPS, and private teachers for STEM subjects. Or, if my kids are truly talented, I'll guide them to find the right study groups, summer camps, and college courses.
I used to feel pain, as I believed that the students in the middle, who are the majority, would be left behind. But I realized, especially after I had kids, that the majority of students are not into STEM anyway. If they had a choice, they'd rather spend the time watching YouTube channels and hanging out with their friends.
Culture and genetics would be next obvious explanations.
I'd want to assess a few lessons first.
It's not even clear this is a good example of "reasoning". You can progress all the way through multi-variable calculus with just decent pattern-matching, variable-substitution, and rote memorization of sufficient lists of rules. I imagine for "reasoning" ability to apply you need to be able to detect incoherency and reject an approach—and incoherency detection seems to be a big missing ingredient right now (...which many humans lack, too!).
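To make the pattern-matching point concrete, here's a toy rule-based differentiator: it only matches expression shapes and rewrites them, with no notion of what a derivative means (a throwaway sketch, not how any real CAS is built):

```python
def d(e):
    """Differentiate w.r.t. x by matching expression shapes; no simplification, no understanding."""
    if isinstance(e, (int, float)):  return 0                                # constant rule
    if e == "x":                     return 1
    op = e[0]
    if op == "+": return ("+", d(e[1]), d(e[2]))                             # sum rule
    if op == "*": return ("+", ("*", d(e[1]), e[2]), ("*", e[1], d(e[2])))   # product rule
    if op == "^" and e[1] == "x" and isinstance(e[2], int):                  # power rule for x^n
        return ("*", e[2], ("^", "x", e[2] - 1))
    raise ValueError("no rule matches")

# 3x^2 + x, written as nested tuples
print(d(("+", ("*", 3, ("^", "x", 2)), "x")))   # an unsimplified expression equal to 6x + 1
```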
On the other side—any such ability would cripple a chatbot's ability to answer questions about the real world as our world is characterized (via description with informal language) by incoherent and contradictory concepts that can only be resolved through good-faith interpretation of the questioner. A large mark of intelligence (in the colloquial sense, not the IQ sense) is the ability to navigate both worlds.
>I do think that SOTA LLMs like GPT-4o perform about as good as high school graduates in America with average intelligence
This is taking a stance.
And I agree with your assessment -- while it's true that in a long conversation ChatGPT veers off and doesn't keep a coherent line of thought, it is not noticeably worse than the average conversation I have with people.
Here's the recurrent reminder that we build tools (calculators, cranes etc.) to outperform the strong, not the weak.
Obviously these models still have trouble interfacing with the real world.
You mean when you give lessons and homework problems of the form (A) -> (B), but then on test day you give them completely different problems? "Given D, which of (A, B, C) is required to produce it?" Yeah, students don't do so well when you test them on different material than what they studied. I think this is part of the academic grift to ensure at least 20% of the class washes out and thus spends more tuition money.
I don't find this very impressive. Forget LLMs for a second. Let's say _you_ read a question of that kind with some bit of irrelevant information. There are two possibilities you have to consider: the question may as well have excluded the irrelevant information, or the question was miswritten and the irrelevant information was meant to be relevant. The latter is a perfectly live possibility, and I don't think it's a dramatic failure to assume that this is correct. I have to confess that when I read some people's LLM gotcha questions, where they take some popular logic puzzle and invert things, I think I would get them "wrong" too. And not wrong because I don't understand the question, but wrong because with no context I'd just assume the inversion was a typo.
I don't think this exact question would be out of place on a 6th grade math test. I distinctly remember being taught this skill in "word problems," learning to identify information that actually pertains to the question rather than being distracted by red herrings the teacher threw in.
And their poor performance on these tasks highlights deficits in exactly the kind of higher-order, off-the-page reasoning skills -- i.e. to not just reason based on the apparent objects in the stream (the kiwis and the numbers in this case), but to reason about the token stream itself: "okay, these tokens are important, but these others I can leave out", efficiently and seamlessly (like humans do) -- that the models are supposed to develop.
This whole attention business, they're calling it.
That leads to a serious annoyance I have with discussing LLMs - humans' capacity for boredom / cynicism / distraction / laziness being used to excuse away what seems to be deep-rooted limitations in LLMs. It simultaneously misunderstands what a human is and what a machine is. ("Sometimes humans also refuse to work" would be a bad excuse from an auto dealer.)
There are some contexts, academic or professional, where questions are posed carefully and specifically, but these are narrow contexts.
A useful general purpose assistant needs to be able to find what's relevant among what's irrelevant.
Excellence at just solving math problems that are especially well specified can be a useful domain assistant (no small win!), but is not the same thing.
That said, if you've got a hundred billion dollars betting on your AI project achieving AGI, you benefit a lot by conflating those contexts. In that case, grinding on formal SAT, LSAT, GRE, etc problems amounts to tuning for microbenchmarks rather than real world use cases.
Real discourse was not carefully crafted to test you.
So, when something is off in real discourse you can usually dismiss it or apply a correction yourself, but when you find it in a test you have to understand the person writing the test and what their intention was.
In a real discourse you can also go back and forth with the other person to get clarification, and errors don't matter because they are temporary on both sides.
I hate academic problems because too often the answer depends on how you interpret that intention. Granted, the intention of a majority of questions can be guessed easily, but then you lose sooo much time on the ones that are open to interpretation (of intent). Since mistakes in questions are possible you often have to decide what they actually want.
Example, from a truck driver theory test a long time ago, the one question I "failed" (multiple-choice answers). There was a law: a limit on how much air pressure a tire was allowed to lose per day. I knew that limit. Now, the multiple-choice question asked about that, and I forgot the wording, but if I took a mathematically logical approach then all values over that limit were forbidden. But the wording was so strange that I suspected they were actually asking for the concrete limit. I fought with myself for a while, then assumed high intelligence in the person asking the question and clicked on not just the exact limit but also the value with an even greater loss of air pressure.
There is also the problem that those academic questions want to steer you down some narrow corridor. The more you know about the problem and its complexities, the harder it is to answer some of those questions! It is often best if the only things you know about the subject are exactly what was recently taught; any more and you may find yourself in a pickle.
Many of those questions are social constructs as much as they are tests of one's subject knowledge, assuming some tiny idealized model that you have to know, one that ignores many practical aspects. I'm not talking about explicit models, like the Bohr model; those are easy because they are explicit, and you would not get confused by a question assuming the Bohr model just because you know about orbitals. What I mean are the many unstated assumptions that one may not even be aware of until you run into an ambiguity.
Basically any kind of model (not just LLMs/ML) has to distill out irrelevant info.
The point is having an answer that you can defend logically and that most people would agree with.
If the model said “I’m not sure if this portion is a typo”, I guarantee you the model creators would take the RLHF in a different direction, because that is somewhat reasonable and defensible. However, for your specific question, I personally think there is a single objective answer, though to be fair that isn't always the case for misleading/irrelevant prompts. The models are being fooled, however, based on how they respond.
I say this as an RLHF'er who sees, and is at times told to write, similar questions.
At the end of the day, this is how the Model creators want their models to predict language. And anyone using them is in for their ride.
I could see attention possibly being able to overcome this, but if not that would be a pretty big gotcha for real-world applications and reliability in real-world scenarios where, as others have said, it's not immediately clear what is relevant info. These models would be a lot less useful if a human had to decide which information to feed them and the output would be dependent on human judgement. I understand it's where we're at right now and that they are quite useful already but the valuations hint at investors expecting more imo.
1. Bing was gaslighting me into 9.11 being greater than 9.9
2. ChatGPT said that 7x7/7+7/7+7/7 was 24.
3. When expanding (x+1)^2 the output was 2x^2+2.
Regardless of any level of interpretation and irrelevant information if it can't deterministically understand correctness and the semantics of the operations in question then it's fucking useless.
What is worse in an educational context is that it is actively harmful.
For deterministic calculations you obviously want to allow LLMs to use tools to do math. Just like you’d want to allow humans to use calculators.
So yeah, you shouldn’t ask LLMs to do math just like you shouldn’t ask average people to do math. They both suck at it.
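A hedged sketch of what "let them use a calculator" can look like in practice: anything that parses as plain arithmetic gets routed to an exact evaluator instead of being predicted token by token (the dispatch here is illustrative, not any particular vendor's tool-use API):

```python
import ast, operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str) -> float:
    """Evaluate plain +-*/ arithmetic by walking the parsed AST (no arbitrary code execution)."""
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -ev(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return ev(ast.parse(expr, mode="eval").body)

print(calc("7*7/7 + 7/7 + 7/7"))   # 9.0, not the 24 quoted elsewhere in this thread
```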
The societal impact so far has been mostly increasing noise (generating irrelevant content you have to filter out) and burning resources.
Fundamentally AI models need a better way to learn and use memory if they want to replace entry level human jobs - RAG and fine tuning ain't it.
"Attention is all you need" /
(It is part of the general problem solving process to evaluate what is relevant and what is not.)
Why should they write a paper about the inherent reasoning capabilities for “large” language models and then in the abstract cherrypick a number that’s from a tiny 1B parameter model?
I don't see this as a material limitation of LLMs but rather something that can be addressed at the application level to strip out irrelevant information.
You could argue that the issue lies in the models being in an intermediate state between pattern matching and reasoning.
To me, such results indicate that you can't trust any LLM benchmark results related to math and reasoning when you see that changing the characters, numbers, or sentence structure in a problem alters the outcome by more than 20 percentage points.
A man gets taken into a hospital. When the doctor sees him, he exclaims "I cannot operate on this person, he is my own son!". How is this possible?
All LLMs I have tried this on, including GPT o1-preview, get this wrong, assuming that the riddle hinges on the gendered assumption that the doctor must be a man and answering that the doctor is in fact a woman. However, in this case, there is no paradox: it is made clear that the doctor is a man ("he exclaims"), meaning he must be the father of the person being brought in. The fact that the LLMs got this wrong suggests that they find a similar reasoning pattern and then apply it. Even after additional prodding, a model continued making the mistake, arguing at one point that it could be a same-sex relationship.
Amusingly, when someone on HN mentioned this example in the O1 thread, many of the HN commentators also misunderstood the problem - perhaps humans also mostly reason using previous examples rather than thinking from scratch.
Although we would like AI to be better here, the worse problem is that, unlike humans, you can’t get the LLM to understand its mistake and then move forward with that newfound understanding. While the LLM tries to respond appropriately and indulge you when you indicate the mistake, further dialog usually exhibits noncommittal behavior by the LLM, and the mistaken interpretation tends to sneak back in. You generally don’t get the feeling of “now it gets it”; instead it tends to feel more like someone with no real understanding (but a very good memory of relevant material) trying to bullshit-technobabble around the issue.
https://chatgpt.com/share/6709473b-b22c-8012-a30d-42c8482cc6...
is_trick(question) # 50% accurate
To make the client happy, I improved it: is_trick(question, label) # 100% accurate
But the client still isn't happy because if they already knew the label they wouldn't need the classifier!...
If ChatGPT had "sense" your extra prompt should do nothing. The fact that adding the prompt changes the output should be a clue that nobody should ever trust an LLM anywhere correctness matters.
[edit]
I also tried the original question but followed-up with "is it possible that the doctor is the boy's father?"
ChatGPT said:
Yes, it's possible for the doctor to be the boy's father if there's a scenario where the boy has two fathers, such as being raised by a same-sex couple or having a biological father and a stepfather. The riddle primarily highlights the assumption about gender roles, but there are certainly other family dynamics that could make the statement true.
If it's all it takes, then maybe the problem isn't a lack of capabilities but a tendency to not surface them.
And I doubt there are any such hidden capabilities because if there were it would be valuable to OpenAI to surface them (e.g. by adding "think carefully" to the default/system prompt). Since adding "think carefully" changes the output significantly, it's safe to assume this is not part of the default prompt. Perhaps because adding it is not helpful to average queries.
1. Fast thinking vs. slow thinking.
2. Intuitive thinking vs. symbolic thinking.
3. Interpolated thinking (in terms of pattern matching or curve fitting) vs. generalization.
4. Level 1 thinking vs. level 2 thinking. (In terms of OpenAIs definitions of levels of intelligence)
The definitions all describe the same thing.
Currently all of the LLMs are trained to use the "lazy" thinking approach. o1-preview is advertised as being the exception: it is trained or fine-tuned on a countless number of reasoning patterns.
> Amusingly, when someone on HN mentioned this example in the O1 thread, many of the HN commentators also misunderstood the problem
I admit I don't understand a single thing about this "problem". To me, it's just some statement.
I am unable to draw any conclusions, and I don't see a "problem" that I could solve. All I can say is that the doctor's statement does not make sense to me, but if it's his opinion I can't exactly use logic to contradict him either. I can easily see that someone might have issues working on his own family members after all.
Do I need some cultural knowledge for this?
There are ways to trick LLMs. There are also ways to trick people. If asking a tricky question and getting a wrong answer is enough to disprove reasoning, humans aren’t capable of reasoning, either.
We do, but we can generalize better. When you exchange "hospital" with "medical centre" or change the sentence structure and ask humans, the statistics would not be that different.
But for LLMs, that might make a lot of difference.
"Let's think through this step-by-step:
1. Alice has 3 brothers
2. Alice has 2 sisters
3. We need to find out how many sisters Alice's brother has
The key here is to realize that Alice's brothers would have the same sisters as Alice, except they would also count Alice as their sister.
So, Alice's brothers would have:
- The 2 sisters Alice has
- Plus Alice herself as a sister
Therefore, Alice's brothers have 3 sisters in total."
For the "Alice in Wonderland" paper, neither Claude-3.5 nor o1-preview was available at that time.
But I have tested them as well a few weeks ago, with the problem translated into German, and also achieved a 100% success rate with both models.
However, when I add irrelevant information (My mother ...), Claude's success rate drops to 85%:
"My mother has a sister called Alice. Alice has 2 sisters and 1 brother. How many sisters does Alice's brother have?"
Timeline is roughly:
Model developer notices a sometimes highly specific weak area -> ... -> RLHF'ers are asked to develop a bunch of very specific problems improving the weak area -> a few months go by -> A paper gets published that squeezes water out of stone to make AI headlines.
These researchers should just become RLHF'ers because their efforts aren't uncovering anything unknown; it's just being dressed up with a little statistics. And by the time the research is out, the fixes are already identified internally, worked on, and nearing pushes.
I just realized AI research will be part of the AI bubble if it bursts. I don't think there was a .com research sub-bubble, so this might be novel.
I like to use:
"Kim's mother is Linda. Linda's son is Rachel. John is Kim's daughter. Who is Kim's son?"
Interestingly I just got a model called "engine test" that nailed this one in a three sentence response, whereas o1-preview got it wrong (but has gotten it right in the past).
Is it not correct English to call two people who share one parent, sisters, or brothers?
I guess I could be misguided by my native Norwegian, where you have to prefix the word with "hel" (full) or "halv" (half) if you want to specify the number of shared parents.
1. 50% success without "full" terminology.
2. 5% success with "full" terminology.
So, the improvement in clarity has exactly the opposite effect.
I'd offer a simpler explanation: Tokenization.
If you tokenize "12345 * 27271" you will get the following:
"123", "45", " *", " ", "272", "71"
The statistical likelihood that any of these tokens predicts any of the others is completely meaningless in the context of simple arithmetic. You can argue that this is where tool use comes in (and I would be inclined to agree), but I don't think this bodes well for "genuine logical reasoning".
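You can check the splitting yourself; a quick look with the `tiktoken` library (the exact split quoted above may differ by tokenizer and model):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")      # tokenizer used by several OpenAI chat models
tokens = enc.encode("12345 * 27271")
print([enc.decode([t]) for t in tokens])
# Whatever the exact split, the digit groupings rarely line up with the
# place-value structure that doing arithmetic actually depends on.
```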
Given the right tokenization scheme and training regimen, we can absolutely create LLMs which have statistically sound arithmetic capabilities. I still wouldn't trust a stochastic model over the algorithmic certainty of a calculator, but what's more important for mathematicians is that these models can reason about complex problems and help them break new ground on hard mathematical problems by leveraging the full statistical power of their weights.
While tokenization certainly plays a role in how language models process input, it's simplistic to attribute the challenges in mathematical reasoning solely to tokenization.
SOTA language models don't just rely on individual token predictions, but build up contextual representations across multiple layers. This allows them to capture higher-level meaning beyond simple token-to-token relationships. If this weren't the case, models wouldn't work at all outside of the most simplistic scenarios.
The decline in performance as complexity increases might be due to other factors, such as:
- Limitations in working memory or attention span
- Difficulty in maintaining coherence over longer sequences
- Challenges in managing multiple interdependent logical constraints simultaneously (simply due to the KQV matrices being too small)
And in any case, I think OpenAI’s o1 models are crushing it in math right now. The iterative, model-guided CoT approach seems to be able to handle very complex problems.
...is probably an important question too.
My man, it cannot solve even the simplest problems which it hasn't seen the solution to yet, and routinely makes elementary errors in simple algebraic manipulations or arithmetic! All of this points to the fact that it cannot actually perform mathematical or logical reasoning, only mimic it superficially if trained on enough examples.
I challenge you to give it even a simple, but original, problem to solve.
(34903173/x)+(238 * 2650) - 323326 = 45323434, solve for x
Statistically, no one has ever done this calculation ever before. It's entirely unique.
O1 answered "x = 34,903,173 divided by 45,016,060", which is correct.[1][2]
Now I guess you can pick up the goal post and move it.
[1]https://chatgpt.com/share/6709481a-3144-8004-a7fd-0ccd9e3bc5...
[2]https://www.wolframalpha.com/input?i=%2834903173%2Fx%29%2B%2...
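For what it's worth, the algebra checks out; here's a quick sanity check with exact fractions, independent of any model:

    from fractions import Fraction

    # (34903173/x) + 238*2650 - 323326 = 45323434  =>  34903173/x = 45016060
    rhs = 45323434 + 323326 - 238 * 2650
    print(rhs)                      # 45016060
    x = Fraction(34903173, rhs)     # x = 34903173 / 45016060, as o1 answered
    # Substitute back to confirm the original equation holds:
    print(34903173 / x + 238 * 2650 - 323326)   # 45323434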
The central problem with math is that you have an infinite amount of space within which to move these goalposts.
How many variants on this trial before we find a mistake?
What is an acceptable error rate?
How many variants would it take for a human to make a mistake? It's certainly not "infinity", so is this an indication that humans don't reason?
https://mathstodon.xyz/@tao/113132502735585408
"Here the results were better than previous models, but still slightly disappointing: the new model could work its way to a correct (and well-written) solution if provided a lot of hints and prodding, but did not generate the key conceptual ideas on its own, and did make some non-trivial mistakes. The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, (static simulation of a) graduate student. "
A/B + C*D - E = F, solve for B
An original problem? How many tens of thousands of examples of this exact form do you think it came across? It's the same as with coding, by the way: it can reshuffle things it has already seen while changing variable names and so on. Ask it something which is not in stackoverflow or geeks4geeks and it goes tits up.
PS: Tested it on GPT 3.5: same answer.
There’s no consensus in the literature on what these mean even if you make it more specific by talking about “mathematical reasoning”, so I don’t really understand what opinions like these are based on.
I see a lot of no true Scotsman fallacy going around; even the paper resorts to this, as it actually uses phrases like “true reasoning” several times.
I don’t think the paper is very convincing, btw; the abstract is kind of click-baity and talks about 65% variation when that was a cherry-picked example from a tiny Phi model, while the SOTA models showed far less variation, which was arguably not that interesting.
What literature is that? You can find plenty of very clear consensus on what reasoning is if you read e.g. the literature on automated reasoning. A brief taste:
Automated Reasoning
Reasoning is the ability to make inferences, and automated reasoning is concerned with the building of computing systems that automate this process. Although the overall goal is to mechanize different forms of reasoning, the term has largely been identified with valid deductive reasoning as practiced in mathematics and formal logic. In this respect, automated reasoning is akin to mechanical theorem proving. Building an automated reasoning program means providing an algorithmic description to a formal calculus so that it can be implemented on a computer to prove theorems of the calculus in an efficient manner. Important aspects of this exercise involve defining the class of problems the program will be required to solve, deciding what language will be used by the program to represent the information given to it as well as new information inferred by the program, specifying the mechanism that the program will use to conduct deductive inferences, and figuring out how to perform all these computations efficiently. While basic research work continues in order to provide the necessary theoretical framework, the field has reached a point where automated reasoning programs are being used by researchers to attack open questions in mathematics and logic, provide important applications in computing science, solve problems in engineering, and find novel approaches to questions in exact philosophy.
https://plato.stanford.edu/entries/reasoning-automated/
After that you may want to look at the SEP articles on Analogical reasoning and Defeasible Reasoning:
What you’re referencing seems to be more related to symbolic AI / formal logic, and I get that these are related, but it just doesn’t really map neatly onto LLMs.
The problem with that is that if we allow ourselves to come up with a new definition of an old concept just because the standard definition doesn't match the latest empirical results, we'll be creating a very large risk of confirmation bias: every time we want to answer the question "is X doing reasoning?" we'll just change our definition of reasoning to match whatever X is doing. We can't ever hope to get any real answers like that.
Math is a bit trickier since most of the world’s math is in LaTeX, which is more of a formatting language than a syntax tree. There needs to be a conversion to MathML or something more symbolic.
Even English word tokenization has gaps today. Claude Sonnet 3.5 still fails on the question “how many r’s are there in strawberry”.
No, they use the same tokenization as everyone else. There was one major change from early to modern LLM tokenization, made (as far as I can tell) for efficient tokenization of code: early tokenizers always made a space its own token (unless attached to an adjacent word.) Modern tokenizers can group many spaces together.
But for maths, it doesn't seem appropriate.
I wonder what the effect of forcing tokenization for each separate digit would be.
I'd hazard that the majority of numbers in most text are not such that they should be converted to a number, per se. Consider addresses, postal codes, phone numbers, ... ok, I may have run out of things to consider. :D
But I fail to see how forcing tokenization at the digit level for numbers would somehow impact non-numerical meanings of digits. The same characters always map to the same token through a simple mapping, right? It's not like context and meaning change tokenization:
That is:
my credit card ends in 4796 and my address is N street 1331
Parses to the same tokens as:
Multiply 4796 by 1331
So by tokenizing digits we don't introduce the problem of different meanings for tokens depending on context.
That is all to say that numbers in text are already surprisingly flexible. The point of keeping the tokens as they are is to let the model learn that flexibility. It is the same reason that we don't tokenize at the word level, or try to get a Soundex normalization. All of these are probably worth at least trying, and may even do better in some contexts. The general framework has a reason to be, though.
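As a rough illustration of the digit-level idea, you could add a pre-tokenization pass that splits every run of digits before the usual BPE step. This is just a sketch; the regex and the choice to split every digit are my assumptions, though some tokenizers (e.g. Llama's) reportedly do split numbers into single digits:

    import re

    def split_digits(text: str) -> str:
        # Insert a space between adjacent digits so each digit becomes
        # its own token candidate for the downstream tokenizer.
        return re.sub(r"(?<=\d)(?=\d)", " ", text)

    print(split_digits("Multiply 4796 by 1331"))
    # Multiply 4 7 9 6 by 1 3 3 1
    print(split_digits("my credit card ends in 4796 and my address is N street 1331"))
    # The same digits map to the same tokens regardless of context.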
Eventually they will run out of exponentially growing cash to pour in, and investors will start asking questions. Stocks are already valued at 60x+ their earnings; whenever it pops, you don't want to be the one who bought the top.
Guess it's still gonna take a while more for the layman to realize the issues with LLMs, but it'll happen.
The problem with this statement is that predictions made about scaling 5 years ago have held true[1]. We keep adding parameters, adding compute, and the models keep getting more capable.
The flaws of LLMs from 2024 are not what is relevant, just like the flaws of LLMs from 2021 were not relevant. What is relevant is the rate of change, and the lack of evidence that things won't continue on this steep incline. Especially if you consider that GPT-4 was sort of a preview model that motivated big money to make ungodly investments to see how far we can push this. Those models will start to show up over the next 2 years.
If they break the trend and the scaling flops, then I think a lot of air is gonna blow out of the bubble.
We added a LOT of data.
The resulting models have become only slightly better. And they still have all of their old problems.
I think this is proof that scaling doesn't work. It's not like we just doubled the sizes; they increased by a lot, yet the improvements get smaller each time. And they've already run out of useful data.
This doesn't even factor in the tech inertia. We could stop making new models today, and it would probably be 4-5 years before integration slowed down. Google still hasn't even put Gemini in their home speakers.
The question of whether they can do it is interesting in an academic sense, but has nothing to do if they're useful or not. They also don't need to be true AGI to be useful.
> Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark.
This seems like irrefutable evidence of overfitting, which in the best-case interpretation is epidemic among current LLMs (and in the worst-case interpretation is covering up a fundamental inability to learn mathematical reasoning from the training data).
(And yes, I know people are hard at work adding other types of thinking to work along with the pure language models)
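To make that concrete, the kind of perturbation the quoted sentence describes can be approximated with a trivial template where only the numbers change. This is a hypothetical sketch of the idea, not the paper's actual generator; the template and number ranges are made up:

    import random

    TEMPLATE = ("{name} picked {a} apples on Monday and {b} apples on Tuesday. "
                "After giving away {c} apples, how many apples does {name} have left?")

    def numeric_variant(seed: int):
        rng = random.Random(seed)
        name = rng.choice(["Ava", "Liam", "Noah", "Mia"])
        a, b = rng.randint(20, 90), rng.randint(20, 90)
        c = rng.randint(1, a + b)
        # The ground-truth answer travels with each variant for free.
        return TEMPLATE.format(name=name, a=a, b=b, c=c), a + b - c

    question, answer = numeric_variant(0)

A model that has genuinely learned the arithmetic should be indifferent to which variant it sees; a model that memorized the canonical numbers will not be.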
For example, I tested Gemini with several versions of the puzzle that are easy to solve because they don't have the restrictions such as the farmer's boat only being able to carry one passenger/item at a time.
Ask this version, "A farmer has a spouse, chicken, cabbage, and baby with them. The farmer needs to get them all across the river in their boat. What is the best way to do it?"
In my tests the LLMs nearly always assume that the boat has a carry-restriction and they come up with wild solutions involving multiple trips.
For example, just as a dog will never understand a fourier transform, there are likely ideas that humans cannot understand. If we know what our limits are, I wonder if we could build machines that can reason in ways we aren't capable of?
We investigated similar ideas for language (=> Noam Chomsky), where we tried to draw clear, formalized limits for understanding (to show e.g. how human capabilities contrast with animals). The whole approach failed completely and irredeemably (personal opinion), but researching it was far from useless to be fair.
Humans invent tools and wield them. Whether it's pen & paper to extend our memory, a horse to become stronger, a calculator to speed up our thinking or an airplane to literally fly, the tools we wield become extensions of our agency and control.
A lonely human without knowledge sharing or tools isn’t that much more capable in their lifetime than the smartest animals. When we talk about human ability colloquially, we’re generally talking about what we can do with access to our human heritage, civilization, safety and access to materials and tools.
Pattern matching against something others have already done is great but this is shared with at the very least all mammals to some extent. Pushing the boundaries of our species forward over time is a different game. Or at least, it seems to be…
It certainly seems like we’ve found the holy grail of pattern matching (system 1 thinking), which is an insane leap! But what about system 2? The million-dollar question is: what the hell is the topology of that pre-frontal cortex thinking machine? Is it just more pattern matching, but against different patterns? Or is it completely and qualitatively different? And if so, is it more or less hard? To me, following the debate is just watching one bad prediction after another (including my own, of course). We just don't know how it works. Not you or me, not Sam Altman in full thought-leading leather jacket uniform, nor even our top neuroscientists.
Consider: Do hardware limitations establish useful limits on the kind of problems a computer can solve? The answer is a resounding NO in my view, because the limits of what can be expressed/solved grows so insanely quickly that it becomes a completely meaningless and unreachable limit even for super small computers (less capable than our brain).
As for learning time constraints: These are obviously reachable, but still useless in my view because they are too inconsistent- the kind of methods and insights that a human can acquire within a lifetime are completely different between persons, and highly dependent on how the learning happens...
LLMs don't do formal reasoning - https://news.ycombinator.com/item?id=41812523 - Oct 2024 (70 comments)
Brains have various structures that have distinct architectures. I don’t see any indication that the best way forward is to try to shoehorn everything into a single computational paradigm.
It’s like trying to make a flying submarine car. It might technically be possible, but it might not be worth the trouble, and it’s unlikely to result in a vehicle that works excellently in any of its environments.
Maybe the benchmark Qs/As snuck into training sets accidentally. Is it still Goodhart's Law if it's unintentional?
Daniel Lemire has blogged about being impressed with how well the LLM answers his CS problem questions. I was impressed too. Not sure where the line of competence lies.
An LLM is very good at recovering rules, but being good at pattern recognition is not the same thing as being good at unambiguously following rules in the appropriate context.
edit: Natural language is far from an efficient/sufficient/necessary intermediate representation for doing math, just ask any general-purpose computer. Sometimes, it's worth "putting rules in stone," and it seems unreasonable to believe that there is always an unambiguous rule for this that you can mechanically recover from a corpus of language use.
Having said that, we can still get semantically and logically idempotent output that makes sense but with lots of work outside of the LLM, which contrasts with the current hyper focus on the LLM itself as the be all and end all. It is just one component in what ought to be a larger and more involved system for reasoning.
Look at what we were able to accomplish here for Legal AI - not so much mathematical logic per se, but mimicking (capturing) axiomatic logic in the legal domain:
https://www.youtube.com/watch?v=_9Galw9-Z3Q
marc at sunami dot ai
until that happens .. I think RL startups focused on real problems are much undervalued : https://quantblog.wordpress.com/2024/10/11/llm-hype-means-th...
EDIT: Had there been an ounce of actual true reasoning emerging in LLMs, OpenAI would have been running this thing privately 24/7 to produce new science and capture patents that would give them economic dominance, not trying to sell tokens to us all.
would you pick only winning results and only present favorable, massaged results if it got you 150+B USD of worth?
Consider that in an LLM, language inputs are tokenized and fed as inputs into the neural network, and connections in the network create output sequences that are not just syntactically correct (trivial) or semantically plausible sentences (early transformers did this). LLM output sequences follow the deep patterns of language, which include something that resembles reasoning as the model has learnt from its training data.
LLMs seem to fall short because they often fail at truly abstract reasoning tasks that humans find easy. If trained properly, LLMs can develop advanced representations of logical systems that will surely outpace what humans can do in terms of raw reasoning.
However, human mathematicians have not even unified around constructive mathematics as a must for the study of mathematics. This reveals that even highly evolved mathematical disciplines rely on objects whose characteristics do not lend themselves to full logical scrutiny and are in a way socially constructed and effectively hard to audit.
While notation in mathematics is incredible technology it is also a highly limiting factor that suffers major tradeoffs. Humans struggle to invent new notation fast enough and to discard outdated notation fast enough. If we do see an AI-powered boom in mathematics, I suspect our notion of notation and the fluidity we demand from it will change dramatically.
I see language more as a medium for transcribing reasoning. While language certainly communicates reasoning, you can have reasoning without language, but not language without reasoning.
This paper seems to imply that current LLMs are just copying the training dataset's reasoning communication, not understanding the actual reasoning. I don't think LLMs moving past this is "obvious" or even close to being inevitable.
> Instead, LLMs likely perform a form of probabilistic pattern-matching and searching to find closest seen data during training without proper understanding of concepts. While this process goes beyond naive memorization of words and the models are capable of searching and matching more abstract reasoning steps, it still falls short of true formal reasoning.
Consider how language input to an LLM is tokenized. Now imagine a tokenization scheme that introduces tokens that track the strict logical reasoning in the language. Thus two completely different English sentences could both tokenize as the application of Modus Ponens over assumption 1 to conclude conclusion 2, for example.
Now consider that we can tokenize formal notation as used in mathematics and logic, and we can train LLMs on mathematical papers, peer review write-ups, etc. We can generate millions of correct proofs and teach it which ones are remarkable and why, etc.
Ultimately we run into the same barrier as mathematical constructivists run into, but I think it's still quite plausible that LLMs trained as I describe would be able to reason quite well and find oversights humans missed. However creating the optimal scheme and implementation is not trivial.
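A toy version of what such a scheme could look like, purely to make the idea concrete; the token names and the mapping are invented for illustration and are not any existing tokenizer's vocabulary:

    from enum import Enum

    class ProofTok(Enum):
        ASSUME = "ASSUME"          # introduce an assumption
        MODUS_PONENS = "MP"        # from A and A -> B, conclude B
        CONCLUDE = "CONCLUDE"      # mark the final conclusion

    # Two very different English surface forms...
    s1 = "It is raining, and rain makes the street wet, so the street is wet."
    s2 = "Given that rain implies wet streets and that rain is falling, wet streets follow."

    # ...could both map onto the same abstract proof-step sequence:
    abstract = [ProofTok.ASSUME, ProofTok.ASSUME, ProofTok.MODUS_PONENS, ProofTok.CONCLUDE]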
We have already trained the LLMs on most of the human knowledge base (so like 4-5000 years?) - imo training data will become a problem and will soon be more expensive than compute. Sure, you can work around some of this using synthetic training data but I personally would not count on general-purpose LLMs (especially LLMs aka transformer models) developing super-human representations of logical systems anytime soon.
For example, the prompt completion: "The mouse has a unique digestive system compared to other rodents, however the sparrow" on GPT-4o is
"exhibits a highly specialized digestive system adapted for rapid processing of food, particularly seeds and insects, through structures like the crop and gizzard, which are not found in rodents."
Claude 3.5 completes it as
"has a completely different digestive anatomy as a bird. Birds like sparrows have adaptations for flight, including a lightweight skeletal system and a specialized digestive tract. Unlike mice, sparrows have a crop for storing food, a gizzard for grinding it, and generally shorter intestines to reduce weight. They also lack teeth, instead using their beak to manipulate food."
What appears to be a thoughtful contrast is merely a language pattern. Similarly, a prompt like "Assume -B, A->B. Under what circumstances is B true?" will simply follow the gradient to return output that is likely correct. Prompts like "what is 2+2" fail only because nobody bothers to write about it so simple arithmetic was not in the training data.
However the way that multi-modal LLMs handle images is inspiring as it effectively converts from the visual domain into the sequential token domain. The same could be done for symbolic systems, etc.
LLMs can infer relationships and maintain longer context chains in order to generate their output… it still happens that sometimes the output is correct, depending on the training data, layers, context, etc. And it can get more accurate when we change the parameters of the model. But the algorithm isn’t “doing” anything here. It will generate something regardless of what it’s prompted with.
Maybe it’s right. But the algorithm is an algorithm. It doesn’t care what truth is. It’s generating BS essentially.
A human is doing a lot more work when performing mathematics.
It may be that LLMs can be a useful tool in mathematical reasoning, but it’s not obvious that they will ever be capable of it without a human, let alone be better than a human.
Consider an LLM that happened to have some pre-trained layers that were trained abstractly on all the constructive proofs available for modern mathematics. LLMs with image recognition rely on existing visual pattern recognition layers, fwiw.
It's not obvious that they will be able to do any reasoning, in the formal sense, at all; let alone better than humans. LLMs are simply not sufficient for the kinds of tasks and work done when reasoning about mathematical problems.
There's plenty of research demonstrating that they can be useful in small, constrained tasks -- which isn't anything to raise our noses at!
... it's just not _obvious_ in the sense that there is a clear step from LLM capabilities today to "better than humans." It's more an article of faith that it could be true, some day, if we just figure out X, Y, Z... which folks have been doing for decades to no avail. In other words, it's not obvious at all.
[0] https://garymarcus.substack.com/p/llms-dont-do-formal-reason...
They are taking poor performance of undersized models and claiming that proves some fundamental limitation of large models, even though their own tests show that isn't true.
In the other test the perturbations aren’t particularly sophisticated and modify the problem according to a template. As the parent comment said this is pretty easy to generate test data for (and for the model to pattern match against) so maybe that is what they did.
A better test of “reasoning” would be to isolate the concept/algorithm and generate novel instances that are completely textually different from existing problems to see if the model really isn’t just pattern matching. But we already know the answer to this because it can’t do things like arbitrary length multiplication.
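Generating such textually novel instances is cheap, because the ground truth comes for free from ordinary arbitrary-precision arithmetic. A minimal sketch; the phrasings and digit lengths are arbitrary choices of mine:

    import random

    PHRASINGS = [
        "What is {a} times {b}?",
        "Compute the product of {a} and {b}.",
        "A warehouse stores {a} crates with {b} widgets each. How many widgets in total?",
    ]

    def multiplication_instance(digits: int, rng: random.Random):
        a = rng.randint(10 ** (digits - 1), 10 ** digits - 1)
        b = rng.randint(10 ** (digits - 1), 10 ** digits - 1)
        # Python integers are arbitrary precision, so the reference answer is exact.
        return rng.choice(PHRASINGS).format(a=a, b=b), a * b

    prompt, answer = multiplication_instance(digits=8, rng=random.Random(42))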
I don't think that LLMs are the end of AGI research at all, but the extreme skepticism about their current utility is mostly based on failures of small models. The drop is something like 65% for most of the small models they tested, and that is what they are really basing their conclusions on.
This is also why o1 is not better at English. Math skills transfer to general reasoning but not so much to creative writing.
The way I see it reasoning is actually the ability of the model to design and train smaller models that can learn with very few examples.
Yes, once the modules for reasoning have converged, it will take very few examples for it to update to new types of reasoning. But to develop those modules from scratch requires large amounts of examples that overtax its ability to memorize. We see this pattern in the "grokking" papers. Memorization happens first, then "grokking" (god I hate that word).
It's not like humans bootstrap reasoning out of nothing. We have a billion years of evolution that encoded the right inductive biases in our developmental pathways to quickly converge on the structures for reasoning. Training an LLM from scratch is like recapitulating the entire history of evolution in a few months.
Whatever happened with that result which found some representation of the state of a game inside an LLM? That indicated some degree of model-building. Haven't heard about that again.
The problem here is more serious than mathematics: the quantitative reasoning itself is highly unreliable.
tl;dr - the best open model dropped from 89.7% on GSM8K (full) to 30% on Symbolic-NoOp, while o1-preview dropped from 94.9% to 77.4%.
I think all this paper shows is that LLMs need space to "think" outside of their inference layer (for the current architectures, at least).
It's similar to the "draw a room, but DO NOT put an elephant in the corner" prompts that people were using with image models.
This is something that practitioners have been doing for a while (via CoT, ToT, etc.) and the whole rationale behind OpenAI's newly launched o1-series "model."
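In practice, the CoT trick is mostly prompt plumbing; here's a minimal sketch using the OpenAI Python client, where the model name and the prompt wording are my assumptions rather than a recommendation:

    from openai import OpenAI  # assumes the v1 client and an API key in the environment

    client = OpenAI()

    def cot_answer(question: str, model: str = "gpt-4o") -> str:
        # Give the model explicit room to "think" before committing to an answer.
        prompt = (f"{question}\n\nThink through the problem step by step, "
                  "then give the final answer on its own line prefixed with 'Answer:'.")
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content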
There's another post that says this paper proves LLMs can't be used to build "reliable agents" -- which doesn't appear to be true when you look at o1's stellar performance here.
> When Sophie watches her nephew, she gets out a variety of toys for him. The bag of building blocks has 31 blocks in it. The bin of stuffed animals has 8 stuffed animals inside. The tower of stacking rings has 9 multicolored rings on it. Sophie recently bought a tube of bouncy balls, bringing her total number of toys for her nephew up to 62. How many bouncy balls came in the tube?
So I would argue it's critical that LLMs know how to convert text to math and then perform those math calculations. This extends beyond just math to the underlying logic as well.
We just need to figure out how to get the LLM to read, write, and understand formal languages. My guess is attention heads could probably work in this context, but we might want something that is a little more rigid, naturally extending from the rigidity of logic and formal languages. Conversely, we might not have figured out how to properly train LLMs on formal languages and have them preserve the underlying logic and axioms necessary to correctly perform math calculations.
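As a concrete target for what converting text to math looks like, the toy-counting problem quoted above reduces to a single linear equation. A minimal sketch with sympy; the translation from words to the equation is done by hand here, which is exactly the step the LLM would need to get right:

    from sympy import Eq, solve, symbols

    balls = symbols("balls", positive=True, integer=True)
    # 31 blocks + 8 stuffed animals + 9 rings + the new bouncy balls = 62 toys
    print(solve(Eq(31 + 8 + 9 + balls, 62), balls))   # [14]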
The recurrent or transformer models are Turing complete, or at least close to being Turing complete (apologies, I’m not sure of the precise terminology here).
As a result, they can at least simulate a brain and are capable of exhibiting human-like intelligence. The "program" is the trained dataset, and we have seen significant improvements in smaller models simply by enhancing the dataset.
We still don’t know what the optimal "program" looks like or what level of scaling is truly necessary. But in theory, achieving the goal of AGI with LLMs is possible.
Edited for clarity
This kind of adaptation to your specific setting, instead of just spitting out memorized answers for common settings, is what makes o1 useful for me. Now again, it is often wrong, but if I am completely clueless I like to watch it attempt things, and I can get inspiration from that. That's much more useful than seeing a confident wrong answer like 4o would give.
They have none. Literally zero. That’s the limit. Thank you for reading my paper.
I don't really understand why, but I think we are going to see total denial from a significant percentage of the population all the way up to and past the point where many average mathematicians and software engineers cannot in any way compete with AI.
We already are reportedly getting pretty close with o1 (not o1-preview).
There are also new paradigms for machine learning and hardware in the pipeline that will continue to provide orders of magnitude performance gains and new capabilities in the next 5-10 years.
Many people still claim that "self driving cars don't exist", in so many words, even though they are deployed in multiple cities.
But just look at the predictions of that time - cities will change, ... and so on. Sure, we have self-driving cars, but the reality looks very different (and a lot more like the past!) than the pundits and futurists imagined! I'm not sure anyone will make their billions of dollars of investment back within even 20 years.
Just two random examples from ~10 years ago (2013-2016); you can google many more from that time.
* "Ford Targets Fully Autonomous Vehicle for Ride Sharing in 2021; Invests in New Tech Companies, Doubles Silicon Valley Team" [1]
* "Disruptions: How Driverless Cars Could Reshape Cities" [2]
[1] https://media.ford.com/content/fordmedia/fna/us/en/news/2016...
[2] https://archive.nytimes.com/bits.blogs.nytimes.com/2013/07/0...
[3] https://www.gensler.com/dialogue/30/the-game-changer-for-cit...