But as impressive as this is, it’s easy to lose sight of the bigger picture: we’ve only scratched the surface of what artificial intelligence could be — because we’ve only scaled two modalities: text and images.
That’s like saying we’ve modeled human intelligence by mastering reading and eyesight, while ignoring touch, taste, smell, motion, memory, emotion, and everything else that makes our cognition rich, embodied, and contextual.
Human intelligence is multimodal. We make sense of the world through:
- Touch (the texture of a surface, the feedback of pressure, the warmth of skin)
- Smell and taste (deeply tied to memory, danger, pleasure, and even creativity)
- Proprioception (the sense of where your body is in space — how you move and balance)
- Emotional and internal states (hunger, pain, comfort, fear, motivation)
None of these are captured by current LLMs or vision transformers. Not even close. And yet, our cognitive lives depend on them.
Language and vision are just the beginning — the parts we were able to digitize first, not necessarily the most central to intelligence.
The real frontier of AI lies in the messy, rich, sensory world where people live. We’ll need new hardware (sensors), new data representations (beyond tokens), and new ways to train models that grow understanding from experience, not just patterns.
I respectfully disagree. Touch gives pretty cool skills, but language, video and audio are all that are needed for all online interactions. We use touch for typing and pointing, but that is only because we don't have a more efficient and effective interface.
Now I'm not saying that all other senses are uninteresting. Integrating touch, extensive proprioception, and olfaction is going to unlock a lot of 'real world' behavior, but your comment was specifically about intelligence.
Compare humans to apes and other animals and the thing that sets us apart is definitely not in the 'remaining' senses, but firmly in the realm of audio, video and language.
Current generative models merely mimic the output, with a fuzzy abstract linguistic mess in place of any physical/causal models. It's unsurprising that their capacity to "reason" is so brittle.
Language can exist entirely independently from senses and cognition. It is an encoding of patterns in the world where the only thing that matters is if anybody or anything wielding it can map the encodings to and from the patterns they encode for (which is more of a sociological/synchronisation challenge).
Does C, or Java, 'make no sense' because it 'ignores lower level cognition'?
There are many parts of non-programming languages that similarly have nothing to do with embodiment. Some of them are even about incredibly abstract things impossible in our universe. One could argue that for many fields genius lies in being able to mentally model what is so foreign to the intuition our embodiment has imbued us with or to be able to find a mapping to facilitate that intuition. Said otherwise: the experience our embodiment has given us might limit how well we can understand the world (Quantum Mechanics anyone?).
Again, embodiment is interesting and worth pursuing, but far from a requirement for far-reaching intelligence.
Helen Keller begs to disagree. Language and cognition were clearly linked for her.
> It wasn't until April 5, 1887, when Anne took Helen to an old pump house, that Helen finally understood that everything has a name. Sullivan put Helen’s hand under the stream and began spelling “w-a-t-e-r” into her palm, first slowly, then more quickly.
> Keller later wrote in her autobiography, “As the cool stream gushed over one hand she spelled into the other the word water, first slowly, then rapidly. I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten—a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that ‘w-a-t-e-r’ meant the wonderful cool something that was flowing over my hand. That living word awakened my soul, gave it light, hope, joy, set it free! There were barriers still, it is true, but barriers that could in time be swept away.”
"one plus one equals two" can be understood and worked with without ever feeling water over your hand. It is a priori knowledge (see Hume's fork for an explanation).
You have to understand that the richness of language linked to cognition is due to your experience with that part of language and your resulting romanticization of it. It doesn't mean that it is a core defining feature of language, even though it feels that way (and as touching as that anecdote is).
It makes sense in context, but that context includes the machine on which the compiled code runs. Without the underlying machine, there's no real purpose for C or Java. I'm open to the idea that 'lower level cognition' may be as relevant to language as the machine is to C or Java.
They do express algorithms, don't they?
I probably made a mistake when I asserted that -- should have thought it over. Vision is evolutionarily older and more “primitive”, while language is symbolic and abstract, and uniquely human [or maybe, more broadly, primate, cetacean, cephalopod, avian...] — arguably a different order of cognition altogether. But I maintain that each and every sense is important as far as human cognition -- and its replication -- is concerned.
Language allows encoding and compression of information about the world, which is of course incredibly powerful and increases communication bandwidth enormously (as well as tons of other stuff).
I'd say that for high level cognitive processes, hearing and speaking were an important stepping stone because for some reason evolving organs that can generate relatively high bandwidth signals in audio seems to be easier than evolving something that does that for visuals (very few Teletubby screens on tummies in the natural world).
Interesting games to think about in this sense: Pictionary/drawing games and charades.
I thought about this some more and I think the prevalence of making sounds rather than gesturing etc. is due to sound being a broadcasting mechanism that works over long distances and without line of sight.
Visually indicating that you've claimed some territory is pretty hard.
If all humans lacked vision, the human race would definitely not do just fine.
It may be that we are not using touch for anything important as adults. But babies rely on touch to explore their surroundings. They stick anything they can into their mouths. Why? Because the tongue is the most touch-sensitive organ. They are exploring things by touching them with their tongues.
I can only guess what people get from that, but my guess is they get an understanding of geometry and of the surface properties of objects, which you'd have trouble getting by processing photos or texts.
> your comment was specifically about intelligence.
Talking about intelligence, I do not believe that LLMs can match humans without a deep understanding of 3D space and material science^W intuition. It needs touch and temperature sensitivity at least. Probably you can replace it with billions of words of text describing these things, but I doubt it.
Another thing to remember is that the senses we have aren't the only ones in biology and far from the only ones possible. In fact, anything that gives you another type of information about the world (you're modeling) is a different sense. In that sense (ha), AI has access to an incredibly vast and varied array of senses that is inaccessible to humans. Lidar is a very simple example of that.
I don't think touch and temperature sensitivity are needed to achieve it, but I do agree that training with senses specifically for understanding 3D space is very important. At the very least binocular video.
So AI developers understand the limitations and are trying to remove them. It will help, but it will not make AI vision on par with a human's.
> In that sense (ha), AI has access to an incredibly vast and varied array of senses that is inaccessible to humans. Lidar is a very simple example of that.
I don't think that current uses of lidars have anything to do with intelligence. Not every neural net is about intelligence.
> I don't think touch and temperature sensitivity are needed to achieve it,
I'm sure they are. To understand forms you need to explore them with touch. The ability to understand forms by just looking at them is an acquired skill. Maybe it is possible to train these abilities without touch, but how? I believe it will take a shitload of training data, and I'm not sure it will be good enough.
Temperature sensitivity is a big thing, because it allows you to guess the thermal conductivity of a thing just by looking at it. It lets you guess how wet something is. It lets us guess the temperature of things by looking at them: you see the sun shining, fire burning, people touching things and yanking their hands away from hot ones. Or how about a person cautiously trying to learn the temperature of a thing: first sensing its infrared radiation, then a quick touch, then a longer touch, and finally sustained contact. How could you understand all these proceedings without your own experience of grasping a hot thing, crying from the pain, and dropping it on your feet?
These are just obvious ideas off the top of my head. What else comes from temperature sensitivity I don't know, and neither does anyone else, because no one really knows how people learn to use their senses and to think. There are theories about it, but they are mostly descriptive: they describe what is known without having much predictive power. Because of this, the optimism of the AI crowd seems overinflated. They don't know what they are trying to do, and still they believe in their eventual success.
Probably you can learn it by thinking, but can LLMs think while training? You can learn it as a pattern of behaviour, without understanding its meaning, but then you'll hallucinate this pattern all the time, just because some of the movements were close enough.
> At the very least binocular video.
I'm not sure that people can learn 3D just by looking. At least they do not rely on binocular vision alone to learn it. They touch, they lick. They measure things in different ways (by sticking them in the mouth, by grasping, by climbing on top of them or falling off them, by hugging them), and they measure distances by crawling or walking along them. They find a spot where they can see what happens behind a clump of trees, or behind something else. People are not just using more senses; they are also acting, which allows them to learn causal relationships. Watching binocular video is not acting, so you get only correlation, with no hope of learning to distinguish correlation from causation, and at the same time it is much more limited in the data available.
Science says that 80 or 90% of the information people get comes from vision? I'm skeptical about this, because I don't know how they measure "information", but in any case human vision was trained with support from other senses. I wouldn't be surprised if at certain stages of a baby's development other senses are more advanced and are used to get labelled data to train vision.
Only octopuses, elephants and apes are in a similar league with regards to dexterity and finesse.
Human neural networks are dynamic: they change and rearrange, grow and sever. An LLM is fixed and relies on context; if you give it the right answer it won't "learn" that it is the correct answer unless it is fed back into the system and trained over months. What if it's only the right answer for a limited period of time?
An intelligent machine must be able to train itself in real time and remember.
I imagine humans are limited by the # of synapses we have so it's useful to forget but maybe machines can move the useless stuff to deep storage until it's dug out, in the same way certain things can trigger a deep memory in humans.
To a great extent, it's not AI research that is the primary driver behind the huge advances in AI, either in terms of techniques (transformers) or data sets. Instead, the biggest single factor responsible for this huge boost are advances in compute hardware and compute power in general. Even if we had known about the Transformer architecture 20 years earlier, and we had had the datasets that OpenAI and Google amassed 20 years earlier, we still would not have been able to get anywhere close to training an LLM on hardware from 20 years ago.
And given this, and given that LLMs have already pushed this compute power to the limit, it's very possible that we'll stagnate at more or less the current level unless and until a new 10x or even 100x boost in compute power happens. It's very unlikely that you could train a model on 100x as much data as you get today without that, which is what you would likely require to add multiple modalities and then combine them.
Based on the architectures we have, they may also be the ending. There's been a lot of news in the past couple of years about LLMs, but have there been any breakthroughs making headlines anywhere else in AI?
Yeah, lots of stuff tied to robotics, for instance; this overlaps with vision, but the advances go beyond vision.
Audio has seen quite a bit. And I imagine there is stuff happening in niche areas that just aren't as publicly interesting as language, vision/imagery, audio, and robotics.
Like Dr. Who said: DALEKs aren't brains in a machine, they are the machine!
Same is true for humans. We really are the whole body, we're not just driving it around.
The brain could. Of course it could. It's just a signals processing machine.
But would it be missing anything we consider core to the way humans think? Would it struggle with parts of cognition?
For example: experiments were done with cats growing up in environments with vertical lines only. They were then put in a normal room and had a hard time understanding flat surfaces.
https://computervisionblog.wordpress.com/2013/06/01/cats-and...
I do know of studies that showed blind people start using their visual cortex to process sounds. That is pretty cool imo
That's not what these models do
“We’ve barely scratched the surface with Rust, so far we’re only focused on code and haven’t even explored building mansions or ending world hunger”
They took 1970s dead tech and deployed it on machines 1 million times more powerful. I'm not sure I'd qualify this as progress. I'd also need an explanation of what systemic improvements in models and computation are planned that would give exponential growth in performance.
I don't see anything.
Putting that aside. The shared prize in 2024 was given for work done in the 1970s and 1980s. Was this meant to be a confirmation of my point? You've done so beautifully.
In 2022 they saw fit to award Ben Bernanke. Yep. That one. For, I kid you not, work on the impacts of financial crises. Ironically also work originally done in the 1970s and 80s.
Progress for me includes both small iterative refinements and big leaps. It also includes trying old techniques in new domains with new technology. So I think we just have differing definitions for progress.
If this isn’t meant to be sarcasm or irony, you’ve got some really exciting research and learning ahead of you! At the moment it reads very “computers are just addition and multiplication and we’ve had that for thousands of years!”
I've done the research. Which is why I made the point I did. You're being dismissive and rude instead of putting forth any sort of argument. It's the paper hat of fake intellect. Yawn.
> At the moment it reads very “computers are just addition and multiplication and we’ve had that for thousands of years!”
Let's be specific then. The problem with the models is that they require exponential cost growth in model generation to give only linear increases in output performance. This cost curve is currently a factor or two stronger than the curve of increasing hardware performance, which puts the technology, absent any actual fundamental algorithmic improvements (which do /not/ seem forthcoming despite billions in speculative funding), into a strict coffin corner. In short: AI winter 2.0.
Got any plans for that? Any specific research that deals with that? Any thoughts of your own on this matter?
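To put rough numbers on that claim (the multipliers below are assumptions for illustration, not measurements):

```python
# Toy sketch: if each fixed step in benchmark performance is assumed to cost
# ~10x more training compute, while hardware only improves ~2x per step,
# the net cost per step of progress still grows ~5x each time.
compute_multiplier, hardware_multiplier = 10.0, 2.0
net_cost = 1.0
for step in range(1, 6):
    net_cost *= compute_multiplier / hardware_multiplier
    print(f"performance step {step}: relative cost {net_cost:.0f}x")
```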
Great. What's the 1970s equivalent of word2vec or embeddings, that we've simply scaled up? Where are the papers about the transformer architecture or attention from the 1970s? Sure feels like you think LLMs are just big perceptrons.
> The problem with the models is they require exponential cost growth
Let's stick to the assertion I was disputing instead.
And at the same time I have noticed that people don’t understand the difference between an S-curve and an exponential function. They can look almost identical at certain intervals.
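A quick way to see it (a toy comparison, with an assumed ceiling K for the S-curve):

```python
import math

# In the early regime a logistic (S-curve) tracks an exponential closely;
# the two only diverge as the logistic nears its ceiling K.
K = 1000.0  # assumed ceiling
for t in range(10):
    exponential = math.exp(t)
    logistic = K / (1 + (K - 1) * math.exp(-t))
    print(t, round(exponential, 1), round(logistic, 1))
```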
Being Right and being Successful are not the same thing.
It’s apparently much easier to scare the masses with visions of ASI, than to build a general intelligence that can pick up a new 2D video game faster than a human being.
To say “he just sucks” or “this was solved before”, it should be required to point to the “solution” and maybe explain how it works.
IMO the problem with current models is that they don't learn categorically, like: lions are animals, and animals are alive; goats are animals, so goats are alive too. So if lions have some property like breathing and goats also have it, it is likely that other similar things have the same property.
Or when playing a game, a human can come up with a strategy like: I’ll level this ability and lean on it for starting, then I’ll level this other ability that takes more time to ramp up while using the first one, then change to this play style after I have the new ability ready. This might be formulated completely based on theoretical ideas about the game, and modified as the player gets more experience.
With current AI models, as far as I can understand, the model will see the whole game as an optimization problem and try to find something at random that makes it win more. This is not as scalable as combining theory and experience in the way that humans do. For example, a human is innately capable of understanding that there is a concept of early game, and that gains made in the early game can compound and generate a large lead. This is pattern matching as well, but at a higher level.
Theory makes learning more scalable compared to just trying everything and seeing what works.
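To make the "lions are animals, animals are alive" point concrete, here's a toy sketch of that kind of categorical inference (the taxonomy and properties are hard-coded, purely for illustration):

```python
# Tiny hand-written taxonomy and property table (illustrative only).
is_a = {"lion": "animal", "goat": "animal", "animal": "living thing"}
properties = {"lion": {"breathes"}, "goat": {"breathes"}}

def ancestors(x):
    # Walk up the is_a chain: lion -> animal -> living thing.
    while x in is_a:
        x = is_a[x]
        yield x

def lift_shared_property(a, b):
    """If two things share a property and a category, attribute the property to the category."""
    common_categories = set(ancestors(a)) & set(ancestors(b))
    shared = properties[a] & properties[b]
    return {(cat, p) for cat in common_categories for p in shared}

print(lift_shared_property("lion", "goat"))
# e.g. {('animal', 'breathes'), ('living thing', 'breathes')}
```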
This comment, with the exception of the random claim of "he is just bad at this", reads like a thinly veiled appeal to authority. I mean, you're complaining about people pointing out prior work, reviewing the approach, and benchmarking the output.
I'm not sure you are aware, but those items (bibliographical review, problem statement, proposal, comparison/benchmarks) are the very basic structure of an academic paper, which each and every single academic paper on any technical subject are required to present in order to be publishable.
I get that there must be a positive feedback element to it, but pay attention to your own claim: "He is trying to solve this to advance the field." How can you tell whether this really advances the field if you want to shield it from any review or comparison? Otherwise what's the point? To go on and claim that ${RANDOM_CELEB} parachuted into a field and succeeded at first try where all so-called researchers and experts failed?
Lastly, "he is just bad at this". You know who is bad at research topics? Researchers specialized in said topic. Their job is literally to figure out something they don't know. Why do you think someone who just started is any different?
A serious attempt at video/vision would involve some probabilistic latent space that can be noised in ways that make sense for games in general. I think veo3 proves that AI can generalize 2D and even 3D games; generating a video under prompt constraints is basically playing a game. I think you could prompt veo3 to play any game for a few seconds and it will generally make sense even though it is not fine tuned.
[0] https://deepmind.google/discover/blog/genie-2-a-large-scale-...
Besides static puzzles (like a maze or jigsaw), I don't believe this analogy holds. A model working with prompt constraints that aren't evolving or being added over the course of "navigating" the generation of its output means it needs to process zero new information that it didn't come up with itself. Playing a game is different from other generation because it's primarily about reacting to input whose precise timing/spatial details you didn't know in advance, even if you can learn that they fall within a known set of higher-order rules. Obviously, the more finite/deterministic/predictably probabilistic the video game's solution space, the more it can be inferred from the initial state (i.e. it reduces to the same type of problem as generating a video from a prompt), which is why models are still able to play such video games. But as GP pointed out, the transfer is negative in such cases — the overarching rules are not predictable enough across disparate genres.
> I think you could prompt veo3 to play any game for a few seconds
I'm curious what your threshold for what constitutes "play any game" is in this claim? If I wrote a script that maps button combinations to average pixel color of a portion of the screen buffer, by what metric(s) would veo3 be "playing" the game more or better than that script "for a few seconds"?
edit: removing knee-jerk reaction language
I am just saying we have proof that it can understand complex worlds and sets of rules, and then abide by them. It doesn't know how to use a controller and it doesn't know how to explore the game physics on its own, but those steps are much easier to implement based on how coding agents are able to iterate and explore solutions.
In the same way that keeping a dream journal is basically doing investigative journalism, or talking to yourself is equivalent to making new friends, maybe.
The difference is that while they may both produce similar, "plausible" output, one does so as a result of processes that exist in relation to an external reality.
It doesn't. And you said it yourself:
> generating a video under prompt constraints is basically playing a game.
No. It's neither generating a game (that people can play) nor is it playing a game (it's generating a video).
Since it's not a model of the world in any sense of the word, there are issues with even the most basic object permanence. E.g. here's veo3 generating a GTA-style video. Oh look, the car spins 360 and ends up on a completely different street than the one it was driving down previously: https://www.youtube.com/watch?v=ja2PVllZcsI
Also, prompting doesn't work as you imply it does.
If I were to hand you a version of a 2D platformer (let's say Mario) where the gimmick is that you're actually playing the Fourier transform of the normal game, it would be hopeless. You might not ever catch on that the images on screen are completely isomorphic to a game you're quite familiar with and possibly even good at.
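For concreteness, the Fourier gimmick is a lossless, invertible re-encoding of each frame; the game underneath is identical. Something like this (numpy, with a random array standing in for a frame):

```python
import numpy as np

frame = np.random.rand(240, 256)                       # stand-in for one greyscale game frame
spectrum = np.fft.fft2(frame)                          # what the player would be shown instead
display = np.log1p(np.abs(np.fft.fftshift(spectrum)))  # log-magnitude, as it might be rendered
recovered = np.fft.ifft2(spectrum).real                # invertible: the same game is underneath
assert np.allclose(recovered, frame)
```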
But some range of spatial transform gimmicks are cleanly intuitive. We've seen this with games like vvvvvv and braid.
So the general rule seems to be that intelligence is transferable to situations that are isomorphic up to certain "natural" transforms, but not to "matching any possible embedding of the same game in a different representation".
Our failure to produce anything more than hyper-specialists forces us to question what exactly is meant by the ability to generalize, other than just "mimicking an ability humans seem to have".
Except that's of course superficial nonsense. Position space isn't an accident of evolution, just one of many possible encodings of spatial data. It's an extremely special encoding: the physical laws are local in position space. What happens on the moon does not impact what happens when I eat breakfast much. But points arbitrarily far apart in momentum space do interact. Locality of action is a very very deep physical principle, and it's absolutely central to our ability to reason about the world at all, to break it apart into independent pieces.
So I strongly reject your example. It makes no sense to present the pictures of a video game in Fourier space. It's highly unnatural for very profound reasons. Our difficulty stems entirely from the fact that our vision system is built for interpreting a world with local rules and laws.
I also see no reason to think an AI could successfully transfer between the two representations easily. If you start from scratch it could train on the Fourier-space data, but that's more akin to using different eyes than to transfer.
Not Zelda. That game is highly nonlinear and its measurable goals (triforce pieces) are long-term objectives that take a lot of gameplay to obtain. As far as I’m aware, no AI has been able to make even modest progress without any prior knowledge of the game itself.
Yet many humans can successfully play and complete the first dungeon without any outside help. While completing the full game is a challenge that takes dedication, many people achieved it long before having access to the internet and its spoiler resources.
So why is this? Why are humans so much better at Zelda than AIs? I believe that transfer knowledge has a lot to do with it. For starters, Link is approximately human (technically Hylian, but they are considered a race of humans, not a separate species) which means his method of sensing and interacting with his world will be instantly familiar to humans. He’s not at all like an earthworm or an insect in that regard.
Secondly, many of the objects Link interacts with are familiar to most modern humans today: swords, shields, keys, arrows, money, bombs, boomerangs, a ladder, a raft, a letter, a bottle of medicine, etc. Since these objects in-game have real world analogues, players will already understand their function without having to figure it out. Even the triforce itself functions similarly to a jigsaw puzzle, making it obvious what the player’s final objective should be. Furthermore, many players would be familiar with the tropes of heroic myths from many cultures which the Zelda plot closely adheres to (undertake a quest of personal growth, defeat the nemesis, rescue the princess).
All of this cultural knowledge is something we take for granted when we sit down to play Zelda for the first time. We’re able to transfer it to the game without any effort whatsoever, something I have yet to witness an AI achieve (train an AI on a general cultural corpus containing all of the background cultural information above and get it to transfer that knowledge into gameplay as effectively as an unspoiled Zelda beginner).
As for the Fourier transform, I don’t know. I do know that the Legend of Zelda has been successfully completed while playing entirely blindfolded. Of course, this wasn’t with Fourier transformed sound, though since the blindfolded run relies on sound cues I imagine a player could adjust to the Fourier transformed sound effects.
It sounds like the "best" AI without constraint would just be something like a replay of a record speedrun, rather than a smaller set of heuristics for getting through a game, though the latter is clearly much more important with unseen content.
[1] https://instadeep.com/2021/10/a-simple-introduction-to-meta-...
John Carmack founded Keen Technologies in 2022 and has been working seriously on AI since 2019. From his experience in the video game industry, he knows a thing or two about linear algebra and GPUs, that is, the underlying maths and the underlying hardware.
So, for all intents and purposes, he is an "AI guy" now.
He has built an AI system that fails to do X.
That does not mean there isn't an AI system that can do X. Especially considering that a lot is happening in AI, as you say.
Anyway, Carmack knows a lot about optimizing computations on modern hardware. In practice, that happens to be also necessary for AI. However, it is not __sufficient__ for AI.
Perhaps you have put your finger on the fatal flaw ...
You are holding the burden of proof here...
Maybe this is formulated a bit harshly, but let us respect the logic here.
God I hate sounding like this. I swear I'm not too good for John Carmack, as he's infinitely smarter than me. But I just find it a bit weird.
I'm not against his discovery, just against the vibe and framing of the op.
One phenomenon that made this clear to me, in a substantive way, was noticing an increasing # of reverent comments re: Geohot in odd places here, which are just as quickly replied to by people with a sense of how he works, as opposed to the keywords he associates himself with. But that only happens here AFAIK.
Yapping, or, inducing people to yap about me, unfortunately, is much more salient to my expected mindshare than the work I do.
It's getting claustrophobic intellectually, as a result.
Example from the last week is the phrase "context engineering" - Shopify CEO says he likes it better than prompt engineering, Karpathy QTs to affirm, SimonW writes it up as fait accompli. Now I have to rework my site to not use "prompt engineering" and have a Take™ on "context engineering". Because of a couple tweets + a blog reverberating over 2-3 days.
Nothing against Carmack, or anyone else named, at all. i.e. in the context engineering case, they're just sharing their thoughts in realtime. (i.e. I don't wanna get rolled up into a downvote brigade because it seems like I'm affirming the loose assertion Carmack is "not an AI guy", or, that it seems I'm criticizing anyone's conduct at all)
EDIT: The context engineering example was not in reference to another post at the time of writing, now one is the top of front page.
The difference here is that your example shows a trivial statement and a change period of 3 days, whereas what Carmack is doing is taking years.
Not sure why justanotherjoe is a credible resource on who is and isn’t expert in some new dialectic and euphemism for machine state management. You’re that nobody to me :shrug:
Yann LeCun is an AI guy and has simplified it as “not much more than physical statistics.”
A whole lot of AI is decades-old info theory books applied to modern computers.
Either a mem value is or isn’t what’s expected. Either an entire matrix of values is or isn’t what’s expected. Store the results of some such rules. There’s your model.
The words are made up and arbitrary because human existence is arbitrary. You’re being sold on a bridge to nowhere.
That's just what I think anyway.
And like I get it, it’s fun to complain about the obnoxious and irrational AGI people. But the discussion about how people are using these things in their everyday lives is way more interesting.
Where can I read about these experiments?
The only thing I've seen approximating generalization has appeared in symbolic AI cases with genetic programming. It's arguably dumb luck of the mutation operator, but oftentimes a solution is found that does work for the general case - and it is possible to prove a general solution was found with a symbolic approach.
I'm wondering whether one has tested with the same model but on two situations:
1) Bring it to superhuman level in game A and then present game B, which is similar to A, to it.
2) Present B to it without presenting A.
If 1) is not significantly better than 2) then maybe it is not carrying much "knowledge", or maybe we simply did not program it correctly.
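As a toy of that protocol only (bandits standing in for games, a table of value estimates standing in for the model; everything here is illustrative):

```python
import random

# Two "games" are 10-armed bandits whose best arms differ; the "agent" is a
# list of epsilon-greedy value estimates. The point is the two conditions,
# not the (deliberately silly) learner.
def make_game(best_arm):
    return lambda a: 1.0 if a == best_arm else random.random() * 0.2

def train(agent, game, steps, eps=0.1, lr=0.1):
    for _ in range(steps):
        a = random.randrange(10) if random.random() < eps else max(range(10), key=agent.__getitem__)
        agent[a] += lr * (game(a) - agent[a])   # incremental value update
    return agent

def evaluate(agent, game, steps=1000):
    best = max(range(10), key=agent.__getitem__)
    return sum(game(best) for _ in range(steps)) / steps

game_a, game_b = make_game(3), make_game(7)

# Condition 1: long training on A, then a short run on B.
pretrained = train([0.0] * 10, game_a, steps=5000)
transfer_score = evaluate(train(pretrained, game_b, steps=200), game_b)

# Condition 2: the same short run on B, from scratch.
scratch_score = evaluate(train([0.0] * 10, game_b, steps=200), game_b)

print(transfer_score, scratch_score)  # similar scores => little "knowledge" carried over
```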
Doesn't seem unreasonable that the same holds in a gaming setting, that one should train on many variations of each level. Change the lengths of halls connecting rooms, change the appearance of each room, change power-up locations etc, and maybe even remove passages connecting rooms.
[1]: https://physics.allen-zhu.com/part-3-knowledge/part-3-1
Given the long list of dead philosophers of mind, if you have a trivial proof, would you mind providing a link?
A simple nonsense programming task would suffice. For example "write a Python function to erase every character from a string unless either of its adjacent characters are also adjacent to it in the alphabet. The string only contains lowercase a-z"
That task isn't anywhere in its training set so it can't have memorised the answer. But I bet ChatGPT and Claude can still do it.
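For reference, one plausible reading of that task (whether it matches what the models produce is another question):

```python
def strip_non_adjacent(s: str) -> str:
    """Keep a character only if at least one of its neighbours in the string
    is also its neighbour in the alphabet (e.g. 'b' next to 'a' or 'c')."""
    def adjacent(a: str, b: str) -> bool:
        return abs(ord(a) - ord(b)) == 1

    kept = []
    for i, c in enumerate(s):
        left_ok = i > 0 and adjacent(c, s[i - 1])
        right_ok = i < len(s) - 1 and adjacent(c, s[i + 1])
        if left_ok or right_ok:
            kept.append(c)
    return "".join(kept)

print(strip_non_adjacent("abxcdz"))  # -> "abcd"
```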
Honestly this is sooooo obvious to anyone that has used these tools, it's really insane that people are still parroting (heh) the "it just memorises" line.
Because they are. This is some crazy semantic denial. I should stop engaging with this nonsense.
We have AI that is kind of close to passing the Turing test and people still say it's not intelligent...
These machines are only able to output text.
It seems hard to think they could reasonably fool any -normal- person.
Tech only feels like magic if you don't know how it works
Not really. Most of those seemingly novel problems are permutations of existing ones, like the one you mentioned. A solution is simply a specific permutation of tokens in the training data which humans are not able to see.
That's not to say the permutation can never be something that previously didn't exist, or even something that is actually correct, but those scenarios are much rarer.
None of this is to say that these tools can't be useful, but thinking that this is intelligence is delusional.
> We have AI that is kind of close to passing the Turing test and people still say it's not intelligent...
The Turing test was passed arguably decades ago. It's not a test of intelligence. It's an _imitation game_ where the only goal is to fool humans into thinking they're having a text conversation with another human. LLMs can do this very well.
They generate statistically plausible answers (to simplify the answer) based on the training set and weights they have.
Or we do it most of the time :)
If we were taking a walk and you asked me for an explanation for a mathematical concept I have not actually studied, I am fully capable of hazarding a casual guess based on the other topics I have studied within seconds. This is the default approach of an LLM, except with much greater breadth and recall of studied topics than I, as a human, have.
This would be very different than if we sat down at a library and I applied the various concepts and theorems I already knew to make inferences, built upon them, and then derived an understanding based on reasoning of the steps I took (often after backtracking from several reasoning dead ends) before providing the explanation.
If you ask an LLM to explain their reasoning, it's unclear whether it just guessed the explanation and reasoning too, or if that was actually the set of steps it took to get to the first answer they gave you. This is why LLMs are able to correct themselves after claiming strawberry has 2 rs, but when providing (guessing again) their explanations they make more "relevant" guesses.
Of course, this because I have spent a lot of time TRAINING to play chess and basically none training to play go.
I am good on guitar because I started training young but can't play the flute or piano to save my life.
Most complicated skills have basically no transfer or carry over other than knowing how to train on a new skill.
I guess it's a totally different level of control: instead of immediately choosing a certain button to press, you need to set longer-term goals: "press whatever sequence over this time I need to end up closer to this result".
There is some kind of nested multidimensional thing to train on here, instead of immediate limited choices.
We train the models on what are basically shadows, and they learn how to pattern match the shadows.
But the shadows are only depictions of the real world, and the LLMs never learn about that.
A lot of intelligence is just pattern matching and being quick about it.
Current AI only does one of those (pattern matching, not evolution), and the prospects of simulating evolution is kind of bleak, given I don’t think we can simulate a full living cell yet from scratch? Building a world model requires life (or something that has undergone a similar evolutionary survivorship path), not something that mimics life.
To mitigate this you have to include the other categories in your finetune training dataset so it doesn't lose the existing knowledge. Otherwise, the backpropagation and training will favour weights that reflect the new data.
In the game example having the weights optimized for game A doesn't help with game B. It would be interesting to see if training for both game A and B help it understand concepts in both.
Similarly with programming languages, it would be interesting to see whether, if trained on multiple languages, it can extract concepts like if statements and while loops.
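A minimal sketch of that mitigation in PyTorch (the datasets here are random stand-ins; the point is only the mixing):

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Mix the new fine-tuning data with replayed samples from the original
# categories so gradient updates don't only favour the new distribution.
new_task_data = TensorDataset(torch.randn(1000, 16), torch.randint(0, 2, (1000,)))
replay_of_old = TensorDataset(torch.randn(1000, 16), torch.randint(0, 2, (1000,)))

mixed = ConcatDataset([new_task_data, replay_of_old])
loader = DataLoader(mixed, batch_size=32, shuffle=True)  # each batch interleaves old and new

for features, labels in loader:
    pass  # run the usual fine-tuning step on the mixed batch here
```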
IIUC from the observations with multilingual LLMs you need to have the different things you are supporting in the training set together. Then the current approach is able to identify similar concepts/patterns. It's not really learning these concepts but is learning that certain words often go together or that a word in one language is similar to another.
It would be interesting to study multilingual LLMs for their understanding of those languages in the case where the two languages are similar (e.g. Scottish and Irish Gaelic; Dutch and Afrikaans; etc.), are in the same language family (French, Spanish, Portuguese), or are in different language families (Italian, Japanese, Swahili), etc.
Supposedly it does both A and B worse. That's their problem statement essentially. Current SOTA models don't behave like humans would. If you took a human that's really good at A and B, chances are they're gonna pick up C much quicker than a random person off the street that hasn't even seen Atari before. With SOTA models, the random "person" does better at C than the A/B master.
AI has beat the best human players in Chess, Go, Mahjong, Texas hold'em, Dota, Starcraft, etc. It would be really, really surprising that some Atari game is the holy grail of human performance that AI cannot beat.
In other words, the Starcraft AIs that win do so by microing every single unit in the entire game at the same time, which is pretty clever, but if you reduce them to interfacing with the game in the same way a human does, they start losing.
One of my pet peeves when we talk about the various chess engines is yes, given a board state they can output the next set of moves to beat any human, but can they teach someone else to play chess? I'm not trying to activate some kinda "gotcha" here, just getting at what does it actually mean to "know how to play chess". We'd expect any human that claimed to know how to play to be able to teach any other human pretty trivially.
Less quality of life focused, I don’t believe that the models he uses for this research are capable of more. Is it really that revealing?
The original paper "Playing Atari with Deep Reinforcement Learning" (2013) from Deepmind describes how agents can play Atari games, but these agents would have to be specifically trained on every individual game using millions of frames. To accomplish this, simulators were run in parallel, and much faster than in real-time.
Also, additional trickery was added to extract a reward signal from the games, and there is some minor cheating on supplying inputs.
What Carmack (and others before him) is interested in, is trying to learn in a real-life setting, similar to how humans learn.
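For reference, the per-game training loop from that line of work has roughly this shape (a sketch using the Gymnasium API; CartPole stands in for an Atari game so it runs without the ROMs, and a random policy stands in for the DQN):

```python
import gymnasium as gym

# One environment per trained agent; the 2013 setup ran many copies of a single
# game in parallel, much faster than real time.
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for step in range(1000):
    action = env.action_space.sample()  # a random policy stands in for the DQN
    obs, reward, terminated, truncated, info = env.step(action)
    # A real agent would store (obs, action, reward, next_obs) in a replay buffer
    # and periodically update its Q-network from sampled minibatches.
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```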
Innovation is in the cracks: recognition of holes, intersections, tangents, etc. in old ideas. It has been said that innovation is done on the shoulders of giants.
So AI can be an express elevator up to an army of giants' shoulders? It all depends on how you use the tools.
As with most things, the truth lies somewhere in the middle. LLMs can be helpful as a way of accelerating certain kinds and certain aspects of research but not others.
I wonder if we can mine patent databases for old ideas that never worked out in the past, but now are more useful. Perhaps due to modern machining or newer materials or just new applications of the idea.
I welcome the hair-splittery that is sure to follow about what it means to "understand" anything
A human with all that data, if it could fit in their brain, would likely come up with something interesting. Even then... I'm not entirely sure it's so simple. I'd wager most of us have enough knowledge in our brains today to come up with something if we applied ourselves, but ideas don't spontaneously appear just because the knowledge is there.
What if we take our AI models and force them to continuously try making connections between unlikely things? The novel stuff is likely in the parts that don't already have strong connections because research is lacking but could. But how would it evaluate what's interesting?
It reminds me of an AI talk a few decades ago, about how the cycle goes: more data -> more layers -> repeat...
Anyways, I'm not sure how your comment relates to these two avenues of improvement.
The insight into the structure of the benzene ring famously came in a dream, hadn't been seen before, but was imagined as a snake biting its own tail.
--- start quote ---
The empirical formula for benzene had been long known, but its highly unsaturated structure was a challenge to determine. Archibald Scott Couper in 1858 and Joseph Loschmidt in 1861 suggested possible structures that contained multiple double bonds or multiple rings, but the study of aromatic compounds was in its earliest years, and too little evidence was then available to help chemists decide on any particular structure.
More evidence was available by 1865, especially regarding the relationships of aromatic isomers.
[ Kekule claimed to have had the dream in 1865 ]
--- end quote ---
The dream claim came from Kekule himself, 25 years after a proposal that he had to modify 10 years after he made it.
Can you imagine if we applied the same gatekeeping logic to science?
Imagine you weren't allowed to use someone else's scientific work or any derivative of it.
We would make no progress.
The only legitimate defense I have ever seen here revolves around IP and copyright infringement, which I couldn't care less about.
I kind of wonder if libraries like pytorch have hurt experimental development. So many basic concepts no one thinks about anymore because they just use the out of the box solutions. And maybe those solutions are great and those parts are "solved", but I am not sure. How many models are using someone else's tokenizer, or someone else's strapped on vision model just to check a box in the model card?
When the foundation layer at a given moment doesn't yield an ROI on intellectual exploration (say, because you can overcompensate with VC-funded raw compute and make more progress elsewhere), few(er) will go there.
But inevitably, as other domains reach diminishing returns, bright minds will take a look around where significant gains for their effort can be found.
And so will the next generation of PyTorch or foundational technologies evolve.
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
But yes, there's a ton of interesting and useful stuff (beyond datasets and data related improvements) going on right now, and I'm not even talking about LLMs. I don't do anything related to LLM and even then I still see tons of new stuff popping up regularly.
Frameworks like pytorch are really flexible. You can implement any architecture, and if it's not enough, you can learn CUDA.
Keras is the opposite; it's probably like you describe things.
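For example, a custom block in PyTorch is just a few lines; nothing forces you into off-the-shelf layers (toy module, purely illustrative):

```python
import torch
from torch import nn

class GatedResidual(nn.Module):
    """A made-up block: a gated nonlinearity added back onto the input."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        return x + torch.sigmoid(self.gate(x)) * torch.tanh(self.proj(x))

x = torch.randn(8, 32)
print(GatedResidual(32)(x).shape)  # torch.Size([8, 32])
```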
The ability to collect gene expression data at a tissue specific level has only been invented and automated in the last 4-5 years (see 10X Genomics Xenium, MERFISH). We've only recently figured out how to collect this data at the scale of millions of cells. A breakthrough on this front may be the next big area of advancement.
As a simple analogy, read out the following sentence multiple times, stressing a different word each time.
"I never said she stole my money"
Note how the meaning changes and is often unique?
That is a lens into the frame problem and its inverse, the specification problem.
The above problem quickly becomes tower-complete, and recent studies suggest that RL is reinforcing or increasing the weight of existing patterns.
As the open domain frame problem and similar challenges are equivalent to HALT, finding new ways to extract useful information will be important for generalization IMHO.
Synthetic data is useful, but not a complete solution, especially for tower problems.
And as far as synthetic vs real data goes, there are a lot of gaps in LLM knowledge; and vision models suffer from "limited tags", which used to have workarounds with textual embeddings and the like, but those went by the wayside as LoRA, ControlNet, etc. appeared.
There are people who are fairly well known that LLMs have no idea about. There are things in books I own that the AI confidently tells me are either wrong or don't exist.
That one page about compressing 1 gig wikipedia as small as possible implicitly and explicitly states that AI is "basically compression" - and if the data isn't there, it's not in the compressed set (weights) either.
And I'll reply to another comment here, about "24/7 rolling/for-looped" AI: I thought of doing this when I first found out about LLMs, but context windows are the enemy here. I have a couple of ideas about how to have a continuous AI, but I don't have the capital to test it out.
What about simulation: models can make 3D objects so why not give them a physics simulator? We have amazing high fidelity (and low cost!) game engines that would be a great building block.
What about rumination: behind every Cursor rule for example, is a whole story of why a user added it. Why not take the rule, ask a reasoning model to hypothesize about why that rule was created, and add that rumination (along with the rule) to the training data. Providing opportunities to reflect on the choices made by their users might deepen any insights, squeezing more juice out of the data.
We let models write code and run it. Which gives them a high chance of getting arithmetic right.
Solving the “crossing the river” problem by letting the model create and run a simulation would give a pretty high chance of getting it right.
https://docs.anthropic.com/en/docs/agents-and-tools/tool-use...
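A minimal sketch of the kind of simulation a model could write and run for the classic wolf/goat/cabbage version of the river-crossing puzzle (brute-force search over states; names are illustrative):

```python
from collections import deque

ITEMS = frozenset({"wolf", "goat", "cabbage"})

def unsafe(bank):
    # The wolf eats the goat, and the goat eats the cabbage, if left unattended.
    return {"wolf", "goat"} <= bank or {"goat", "cabbage"} <= bank

def solve():
    start = (ITEMS, "left")           # everything (and the farmer) on the left bank
    goal = (frozenset(), "right")     # everything on the right bank
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, farmer), path = queue.popleft()
        if (left, farmer) == goal:
            return path
        here = left if farmer == "left" else ITEMS - left
        for cargo in list(here) + [None]:         # carry one item across, or cross alone
            new_left = set(left)
            if cargo is not None:
                (new_left.discard if farmer == "left" else new_left.add)(cargo)
            new_farmer = "right" if farmer == "left" else "left"
            unattended = new_left if new_farmer == "right" else ITEMS - new_left
            state = (frozenset(new_left), new_farmer)
            if not unsafe(unattended) and state not in seen:
                seen.add(state)
                queue.append((state, path + [(cargo or "nothing", new_farmer)]))

print(solve())  # a shortest sequence of (what was carried, to which bank)
```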
Each Cursor rule is a byproduct of tons of work and probably contains lots that can be unpacked. Any research on that?
This is easier said than done though because this value function is so noisy it's often hard to learn from it. And also whether or not a response (the model output) matches the value function (the Cursor rules) is not even that easy to grade. It's been easier to train the chain-of-thought style reasoning since one can directly score it via the length of thinking.
This new paper covers some of the difficulties of language-based critic models: https://openreview.net/pdf?id=0tXmtd0vZG
Generally speaking, the algorithm and approach is not new. Being able to do it in a reasonable amount of compute is the new part.
Do that for a bunch of rules scraped from a bunch of repos - and you’ve got yourself a dataset for training a new model with - or maybe for fine tuning.
It can probably remember more facts about a topic than a PhD in that topic, but the PhD will be better at thinking about that topic.
"Thinking" is too broad a term to apply usefully but I would say its pretty clear we are not close to AGI.
Why should the model need to memorize facts we already have written down somewhere?
So can a notebook.
Models need to decide for themselves what they should learn.
Eventually, after entering the open world, reinforcement learning/genetic algorithms are still the only perpetual training solution.
That's not super relevant in my mind. It's the fact that they're bearing fruit now that will allow research to move forward. And the success, as we know, draws a lot of eyeballs, dollars, resources.
If this path was going to hit a wall, we will hit it more quickly now. If another way is required to move forward, we are more likely to find it now.
Just a hypothesis of mine
The original idea of connectionism is that neural networks can represent any function, which is the fundamental mathematical fact. So we should be optimistic, neural nets will be able to do anything. Which neural nets? So far people have stumbled on a few productive architectures, but it appears to be more alchemy than science. There is no reason why we should think there won't be both new ideas and new data. Biology did it, humans will do it too.
> we’re engaged in a decentralized globalized exercise of Science, where findings are shared openly
Maybe the findings are shared, if they make the Company look good. But the methods are not anymore
Ideas are not new, according to the author.
But hardware is new, and the author never mentions the impact of hardware improvements.
Because new methods unlock access to new datasets.
Edit: Oh I see this was a rhetorical question answered in the next paragraph. D'oh
For example, FPGAs use a lot of area and power routing signals across the chip. Those long lines have a large capacitance, and thus cause a large amount of dynamic power loss. So does moving parameters around to/from RAM instead of just loading up a vast array of LUTs with the values once.
"There weren't really any advancements from around 2018. The majority of the 'advancements' were in the amount of parameters, training data, and its applications. What was the GPT-3 to ChatGPT transition? It involved fine-tuning, using specifically crafted training data. What changed from GPT-3 to GPT-4? It was the increase in the number of parameters, improved training data, and the addition of another modality. From GPT-4 to GPT-40? There was more optimization and the introduction of a new modality. The only thing left that could further improve models is to add one more modality, which could be video or other sensory inputs, along with some optimization and more parameters. We are approaching diminishing returns." [1]
10 months ago around o1 release:
"It's because there is nothing novel here from an architectural point of view. Again, the secret sauce is only in the training data. O1 seems like a variant of RLRF https://arxiv.org/abs/2403.14238
Soon you will see similar models from competitors." [2]
Winter is coming.
If the technology is useful, the Slope of Enlightenment, followed by the Plateau of Productivity.
shortly thereafter the entire ecosystem will collapse
- Moore's law petering out, steering hardware advancements towards parallelism
- Fast-enough internet creating shift to processing and storage in large server farms, enabling both high-cost training and remote storage of large models
- Social media + search both enlisting consumers as data producers, and necessitating the creation of armies of Mturkers for content moderation + evaluation, later becoming available for tagging and rlhf
- A long-term shift to a text-oriented society, beginning with print capitalism and continuing through the rise of "knowledge work" through to the migration of daily tasks (work, bill paying, shopping) online, that allows a program that only produces text to appear capable of doing many of the things a person does
We may have previously had the technical ideas in the 1990s but we certainly didn't have the ripened infrastructure to put them into practice. If we had the dataset to create an LLM in the 90s, it still would have been astronomically cost-prohibitive to train, both in CPU and human labor, and it wouldn't have as much of an effect on society because you wouldn't be able to hook it up to commerce or day-to-day activities (far fewer texts, emails, ecommerce).
Which makes it blatantly obvious why we're beginning to see products being marketed under the guise of assistants/tools to aid you whose actual purpose is to gather real-world picture and audio data; think Meta glasses and what Ive and Altman are cooking up with their partnership.
The iPhone is a perfect example. There were smartphones with cameras and web browsers before. But when the iPhone launched, it added a capacitive touch screen that was so responsive there was no need for a keyboard. The importance of that one technical innovation can't be overstated.
Then the "new new thing" is followed by a period of years where the innovation is refined, distributed, applied to different contexts, and incrementally improved.
The iPhone launched in 2007 is not really that much different from the one you have in your pocket today. The last 20 years have been about improvements. The web browser before that is also pretty much the same as the one you use today.
We've seen the same pattern happen with LLMs. The author of the article points out that many of AI's breakthroughs have been around since the 1990s. Sure! And the Internet was created in the 1970s and mobile phones were invented in the 1980s. That doesn't mean the web and smartphones weren't monumental technological events. And it doesn't mean LLMs and AI innovation is somehow not proceeding apace.
It's just how this stuff works.
> i used chatgpt for the first time today and have some lite rage if you wanna hear it. tldr it wasnt correct. i thought of one simple task that it should be good at and it couldnt do that.
> (The kangxi radicals are neatly in order in unicode so you can just ++ thru em. The cjks are not. I couldnt see any clear mapping so i asked gpt to do it. Big mess i had to untangle manually anyway it woulda been faster to look them up by hand (theres 214))
> The big kicker was like, it gave me 213. And i was like, "why is one missing?" Then i put it back in and said count how many numbers are here and it said 214, and there just werent. Like come on you SHOULD be able to count.
If you can make the language models actually interface with what we've been able to do with computers for decades, i imagine many paths open up.
There’s an infinite repertoire of such tasks that combine AI capabilities with traditional computer algorithms, and I don’t think we have a generic way of having AI autonomously outsource whatever parts require precision in a reliable way.
The reason we don't do it isn't because it's hard, it's because it yields worse results for increased cost.
This happens to be the basis of every aspect of our biology.
I don’t think it would have had the same impact
Slight difference to those methods, wouldn't you agree?
But even I can see that this ""AI"" stuff is not going to blow over. That ship has sailed. Even if the current models get only marginal improvements, the momentum is unquestionably, inarguably there to make the adoption and productization 10x or even 100x wider than it is now. Robotics, automatization, self-driving, all kinds of kiosks, military applications (gathering and merging sensor data, controlling drone swarms, etc.)...
Just the amount of money (it's going to be trillions before the decade is over) and the amount of students in the field (basically all computer science degrees nowadays teach AI in some form) guarantees we're stuck with ""AI"" forever (at least until it kills us or merges with us)
And no one has found a way to make any money with it. All the tech companies are burning money by the truckload so investors don't lose confidence, but none of them have actually shown it's a good financial investment.
At the end of the day, I don't think anyone is going to want to pay what it really costs to run these models, just for a result that is so unreliable. Once they start to stagnate everyone will lose interest.
The only reason it might stick around is because investors will get desperate to get returns and go full sunk-cost once it starts looking like they made a bad call. (Which they will blame the companies for, of course)
Why?
For the same reason that checking a handful of logical conditions and flipping a couple of I/O bits is now done by a 32-bit CPU with a megabyte of RAM running some JavaScript or MicroPython, instead of by a custom circuit or a handful of TTL chips wired together.
Each crawl on the internet is actually a discrete chunk of a more abstractly defined, constant influx of information streams. Let's call them rivers (it's a big stream).
These rivers can dry up, present seasonal shifts, be poisoned, be barraged.
It will never "get there" and gather enough data to "be done".
--
Regarding "new ideas in AI", I think there could be. But this whole thing is not about AI anymore.