Interview with Yann LeCun on AI (wsj.com)
Submitted by kgwgk 7 hours ago
  • singingwolfboy 6 hours ago
    • ks2048 2 hours ago

      It seems the problem with this "debate" is that intelligence is very clearly not a scalar value - where you can look at a number line and put a dot where cats are, where humans are, where AIs are, etc.

      So, people just talk past each other, with everyone using a different method for collapsing a complex trait like "intelligence" down to a scalar for easy comparison.

      • echelon 36 minutes ago

        We don't understand intelligence.

        But we do understand vision and hearing. We have for over 50 years. We've implemented them classically and described them with physics. Game engines, graphics pipelines, synthesizers, codecs, compression, digital image processing, ... the field is vast and productive.

        Our mastery over signals is why I'm so bullish on diffusion and AI for images, video, and audio, regardless of what happens with LLMs.

        And if this tech cycle only improves our audio-visual experience and makes games and film more accessible, it'll still be revolutionary and a step function improvement over what came before.

      • benlivengood 2 hours ago

        I think I would put more credence in Yann LeCun's predictions if he had predicted the emergence of few-shot learning and chain-of-thought reasoning as a function of model size/data before they happened.

        In part, he's arguing that LLMs are not the most efficient path toward intelligence and that some other design will do better, which is probably true, but no one has pushed model size for the (somewhat ironically named) Gato-style multimodal embodied transformers that I think would result in something closer to cat intelligence.

        I am reasonably certain that further step changes will happen as LLM model and/or data sizes increase. Right now we're achieving a lot of SOTA performance with somewhat smaller models and multimodal pretraining, but not putting those same methods to work in training even larger models on more data.

        • twobitshifter 3 hours ago

          The recent advance in reasoning in o1-preview seems not to have been widely understood by the media, or even by LeCun in this case. o1-preview represents a new training paradigm: reinforcement learning applied to the reasoning steps that lead to the solution. This allows reasoning to be developed, just as AlphaZero is able to ‘reason’ and come up with unique solutions. The reinforcement learning in o1-preview means that the ‘repeating facts learned’ arguments no longer apply. Instead, the AI is free to come up with its own reasoning steps that lead to correct answers, and those reasoning steps are refined over time. It can continue to train and get better by repeatedly answering the same questions, the same way AlphaZero can play the same game multiple times.
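
          To make that concrete, here is a minimal toy sketch of the idea in Python (an illustration only, not OpenAI's actual method; the questions, the two canned "reasoning strategies", and the reward are all made up): sample a strategy, score only the final answer, and nudge the policy toward whatever produced correct answers, over repeated passes on the same questions.

              import math
              import random

              # Toy illustration only (not OpenAI's actual training code).
              # Two hypothetical "reasoning strategies" stand in for sampled
              # chain-of-thought traces; the reward checks only the final answer.
              QUESTIONS = [(12, "+", 7, 19), (6, "*", 4, 24), (30, "-", 11, 19)]
              STRATEGIES = ["step_by_step", "snap_guess"]

              def final_answer(a, op, b, strategy):
                  if strategy == "step_by_step":
                      return {"+": a + b, "*": a * b, "-": a - b}[op]  # careful decomposition
                  return random.randint(0, 40)                         # ungrounded guess

              def train(steps=5000, lr=0.05):
                  prefs = {s: 0.0 for s in STRATEGIES}  # policy parameters (softmax preferences)
                  for _ in range(steps):
                      a, op, b, target = random.choice(QUESTIONS)
                      z = sum(math.exp(v) for v in prefs.values())
                      probs = {s: math.exp(v) / z for s, v in prefs.items()}
                      chosen = random.choices(STRATEGIES, weights=[probs[s] for s in STRATEGIES])[0]
                      reward = 1.0 if final_answer(a, op, b, chosen) == target else 0.0
                      for s in STRATEGIES:  # REINFORCE-style update on the chosen strategy
                          prefs[s] += lr * reward * ((1.0 if s == chosen else 0.0) - probs[s])
                  return prefs

              print(train())  # "step_by_step" ends up strongly preferred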

          • razodactyl 6 hours ago

            He is highly knowledgeable in his field.

            He's very abrasive in his conduct but don't mistake it for incompetence.

            Even the "AI can't do video" thing was blown out and misquoted because discrediting people and causing controversy fuels more engagement.

            He actually said something along the lines of it "not being able to do it properly" / everything he argues is valid from a scientific perspective.

            The joint-embedding work he keeps championing has merit.

            ---

            I think the real problem is that, from a consumer's perspective, if the model can answer all their questions it must be intelligent / from a scientist's perspective, it isn't capable across the set of all consumers, so it isn't intelligent.

            So we end up with a dual perspective where both are correct due to technical miscommunication and misunderstanding.

            • kergonath 3 hours ago

              > He is highly knowledgeable in his field

              Indeed. It seems to me that he has a type of personality common in skilled engineers. He is competent and makes informed decisions, but does not necessarily explain them well (or at all if they feel trivial enough), is certain of his logic (which often sounds like arrogance), and does not seem to have much patience.

              He is great at what he does but he really is not a spokesman. His technical insights are often very interesting, though.

            • Yacovlewis 5 hours ago

              From my own experience trying to build an intelligent digital-twin startup on the back of the breakthrough in LLMs, I agree with LeCun that LLMs are actually quite far from demonstrating the intelligence of house cats, and I myself likely jumped the gun by trying to emulate intelligent humans at the current stage of AI.

              His AI predictions remind me of Prof. Rodney Brooks (MIT, Roomba) and his similarly cautious timelines for AI development. Brooks has a very strong track record over decades of being pretty accurate with his timelines.

              • steveBK123 4 hours ago

                I would suspect any possible future AGI-like progress would be some sort of ensemble. LLMs may be a piece of the puzzle, but they aren't a single-model solution to AGI.

              • tikkun 6 hours ago

                It's frustrating how many disagreements come down to framings rather than actual substance.

                His framing of intelligence is one thing. The people who disagree with him are framing intelligence a different way.

                End of story.

                I wish that all the energy went towards substantive disagreements rather than disagreements that are mostly (not entirely) rooted in semantics and definitions.

                • jcranmer 6 hours ago

                  That's not what he's saying at all, though.

                  What he's saying is that he thinks the current techniques for AI (e.g., LLMs) are near the limits of what can be achieved with them and are thus a dead end for future research; consequently, hyperventilating about AI superintelligence and the like is extremely irresponsible. It's actually a substantive critique of AI today in its actual details, albeit one modulated by popular-press reporting that dumbs it down for mass consumption.

                  • aithrowawaycomm 5 hours ago

                    His point is that lots of AI folks are framing intelligence incorrectly by overvaluing surface knowledge or ability to be trained to solve constrained problems, when cats have deeper cognitive abilities like planning and rich spatial reasoning which are far beyond the reach of any AI in 2024.

                    ANNs are extremely useful tools because they can process all sorts of information humans find useful: unlike animals or humans, ANNs don't have their own will, don't get bored or frustrated, and can focus on whatever you point them at. But in terms of core cognitive abilities - not surface knowledge, not impressive tricks, and certainly not LLM benchmarks - it is hard to say ANNs are smarter than a spider. (In fact they seem dumber than jumping spiders, which are able to form novel navigational plans in completely unfamiliar manmade environments. Even web-spinning spiders have no trouble spinning their webs in cluttered garages or pantries; would a transformer ANN be able to do that if it was trained on bushes and trees?)

                  • mrandish 6 hours ago

                    While I'm no expert on AI, everything I've read from LeCun on AI risk so far strikes me as directionally correct. I keep revisiting the best examples I can find of the 'foom' hypothesis and it just doesn't seem likely. Not to say that AI won't be both very useful and disruptive, just that existential fears like Skynet scenarios don't strike me as plausible.

                    • llamaimperative 5 hours ago

                      > just that existential fears like Skynet scenarios don't strike me as plausible.

                      What's the most plausible (even if you find it implausible) disaster scenario you came across in your research? It's a little surprising to see someone who has seriously looked into these ideas describe the bundle of them as "like Skynet."

                      • trescenzi 5 hours ago

                        I think the risk is much higher with regard to how people use it, and much lower that it becomes some sudden super-intelligent monster. AI doesn't have to be rational or intelligent to cause massive amounts of damage; it just has to be put in charge of dangerous enough systems. Or, more perniciously, you give it the power to make healthcare or employment decisions.

                        It seems silly to me that the idea of risk is all concentrated on the runaway-intelligence scenario. While that might be possible, there is real risk today in how we use these systems.

                        • mrandish 4 hours ago

                          I agree with what you've said. Personally, I have no doubt that, like any powerful new technology, AI will be used for all kinds of negative and annoying things as well as beneficial things. This is what I meant by "disruptive" in my GP. However, I also think that society will adapt to address these disruptions just like we have in the past.

                          • nradov 32 minutes ago

                            We have had software making healthcare decisions for decades in areas like ECG pattern analysis, clinical decision support, medication administration, insurance claims processing, etc. Occasionally software defects or usability flaws lead to patient harm, but mostly they work pretty well. There's no evidence that using AI to supplement existing deterministic algorithms will make things worse; it's just a lot of uninformed and unscientific fearmongering.

                        • habitue 5 hours ago

                          Statements like "It doesnt seem plausible", "it doesn't seem likely" aren't the strongest arguments. How things seem to us is based on what we've seen happen before. None of us has witnessed humanity replace itself with something that we dont understand before.

                          Our intuition isn't a good guide here. Intuitions are honed through repeated exposure and feedback, and we clearly don't have that in this domain.

                          Even though it doesn't feel dangerous, we can navigate this by reasoning through it. We understand that intelligence trumps brawn (e.g., humans don't out-claw a tiger; we master it with intelligence). We understand that advances in AI have been very rapid, and that even though current AI doesn't feel dangerous, current AI turns into much more advanced future AI very quickly. And we understand that we don't really understand how these things work. We "control them safely" through mechanisms similar to how evolution controls us: through the objective function. That shouldn't fill us with confidence, because we find loopholes in evolution's objective function left and right: contraception, hyper-palatable foods, TikTok, etc.

                          All these lines of evidence converge on the conclusion that what we're building is dangerous to us.

                          • mrandish 4 hours ago

                            > ... "it doesn't seem likely" aren't the strongest arguments.

                            Since we're talking about the future, it would be incorrect to talk in absolutes so speaking in probabilities and priors is appropriate.

                            > Our intuition isn't a good guide here.

                            I'm not just using intuition. I've done as extensive an evaluation of the technology, trends, predictions and, most importantly, history as I'm personally willing to do on this topic. Your post is an excellent summary of basically the precautionary principle approach but, as I'm sure you know, the precautionary principle can be over-applied to justify almost any level of response to almost any conceivable risk. If the argument construes the risk as probably existential, then almost any degree of draconian response could be justified. Hence my caution when the precautionary principle is invoked to argue for disruptive levels of response (and to be clear, you didn't).

                            So the question really comes down to which scenarios at which level of probability and then what levels of response those bell-curve probabilities justify. Since I put 'foom-like' scenarios at low probability (sub-5%) and truly existential risk at sub-1%, I don't find extreme prevention measures justified due to their significant costs, burdens and disruptions.

                            At the same time, I'm not arguing we shouldn't pay close attention as the technology develops while expending some reasonable level of resources on researching ways to detect, manage and mitigate possible serious AI risks, if and when they materialize. In particular, I find the current proposed legislative responses to regulate a still-nascent emerging technology to be ill-advised. It's still far too early and at this point I find such proposals by (mostly) grandstanding politicians and bureaucrats more akin to crafting potions to ward off an unseen bogeyman. They're as likely to hurt as to help while imposing substantial costs and burdens either way. I see the current AI giants embracing such proposals as simply them seeing these laws as an opportunity to raise the drawbridge behind themselves since they have the size and funds to comply while new startups don't - and those startups may be the most likely source of whatever 'solutions' we actually need to the problems which have yet to make themselves evident.

                            • slibhb 2 hours ago

                              Actually, statements like "it does/n't seem plausible" are the only things we can say about AI risk.

                              People are deluding themselves when they claim they "reason through this" (i.e., objectively). In other words: no one knows what's going to happen; people are just saying what they think.

                              • nradov an hour ago

                                There is no such evidence. You're just making things up. No one has described a scientifically plausible scenario for actual danger.

                              • Elucalidavah 6 hours ago

                                > it just doesn't seem likely

                                It is likely conditional on the price of compute dropping the way it has been.

                                If you can basically simulate a human brain on a $1000 machine, you don't really need to employ any AI researchers.

                                Of course, there has been some fear that the current models are a year away from FOOMing, but that does seem to be just the hype talking.

                                • threeseed 5 hours ago

                                  If you could simulate a human brain and it required a $100B machine, you would still get funding in a weekend.

                                  Because you could easily find ways to print money, e.g. curing types of cancer or inventing a better Ozempic.

                                  But the fact is that there is no path to simulating a human brain.

                                  • llamaimperative 5 hours ago

                                    There is no path to it? That's a bold claim. Are brains imbued with special brain-magic that makes them more than, at rock bottom, a bunch of bog-standard chemical and electrical and thermal reactions?

                                    It seems very obviously fundamentally solvable, though I agree it is nowhere in the near future.

                                    • aithrowawaycomm 5 hours ago

                                      This seems like a misreading - there's also no real path to resolving P vs. NP, or to disentangling the true chemical origins of life. OP didn't say it was impossible. The problem is we don't know very much about intelligence in animals generally, and even less about intelligence in humans. In particular, we know far less about intelligence than we do about computational complexity or early forms of life.

                                      • CooCooCaCha 5 hours ago

                                        Those seem like silly analogies. There are billions of brains on the planet, and humans can grow them inside themselves (pregnancy). Don't get me wrong, it's a hard problem; they just seem like different classes of problems.

                                        I could see P=NP being impossible to prove but I find it hard to believe intelligence is impossible to figure out. Heck if you said it’d take us 100 years I would still think that’s a bit much.

                                        • RandomLensman 5 hours ago

                                          We have not even figured out single cell organisms, let alone slightly more complex organisms - why would intelligence be such an easy target?

                                          • CooCooCaCha an hour ago

                                            I didn’t say easy.

                                          • aithrowawaycomm 4 hours ago

                                            I think it'll take much longer than 100 years. The "limiting factor" here is cognitive science experiments on smart animals like rats and pigeons, and less smart animals like spiders and lampreys, all of which will help us understand what intelligence truly is. These experiments take time and resources.

                                            • threeseed 4 hours ago

                                              > Don’t get me wrong, it’s a hard problem, they just seem like different classes of problems

                                              Time travel. Teleportation through quantum entanglement. Intergalactic travel through wormholes.

                                              And don't get me wrong they are hard. But just another class of problems. Right ?

                                              • CooCooCaCha an hour ago

                                                Yes absolutely. I have a (supposedly) working brain in my head right now. But so far there are no working examples of the things you listed.

                                          • bob1029 5 hours ago

                                            > Are brains imbued with special brain-magic that makes them more than, at rock bottom, a bunch of bog-standard chemical and electrical and thermal reactions?

                                            Some have made this argument (quantum effects, external fields, etc.).

                                            If any of these are proven to be true then we are looking at a completely different roadmap.

                                            • llamaimperative 5 hours ago

                                              Uh yeah, but we have no evidence for any of them (aside from quantum effects, which are "engineerable" to the extent they exist in brains anyway).

                                              • threeseed 4 hours ago

                                                > "engineerable" to the extent they exist in brains anyway

                                                Can you please enlighten us then since you clearly know to what extent quantum effects exist in the brain.

                                                • llamaimperative an hour ago

                                                  I’m saying to whatever extent they occur, they are just quantum interactions. There’s a path to reproducing them with engineering.

                                                  It’s odd to say “reproduce quantum interactions” but remember to the extent they exist in the brain, they also behave as finicky/noisy quantum interactions. They’re not special brain quantum things.

                                        • mrandish 4 hours ago

                                          > If you can basically simulate a human brain

                                          Based on the evidence I've seen to date, doing this part at the scale of human intelligence (regardless of cost) is highly unlikely to be possible for at least decades.

                                          (A note to clarify: the goal "simulate a human brain" is substantially harder than other goals usually discussed around AI, like "exceed domain-expert human ability on tests measuring problem solving in certain domain(s).")

                                        • Yoric 4 hours ago

                                          On the other hand, we can wipe our civilization (with or without AI) without needing anything as sophisticated as Skynet.

                                        • qarl 6 hours ago

                                          Well... except... cats can't talk.

                                          • aithrowawaycomm 4 hours ago

                                            I believe my cats sometimes get frustrated with the limitations of their own vocalizations and try to work around them when communicating with me. If, say, they want a treat, they are only able to meow and perform "whiny" body motions, and maybe I'll give them pets or throw a ball instead. So they have adapted a bit:

                                            - both of them will spit regular kibble out in front of me when they want a fancier treat (cats are hilarious)

                                            - the boy cat has developed very specific "sweet meows" (soft, high-pitched) for affection and "needy meows" (loud, full-chested) for toys or food; for the first few years he would simply amp up the volume and then give a frustrated growl when I did the wrong thing

                                            - the lady cat (who only has two volumes, "yell" and "scream"), instead stands near what she wants before meowing; bedroom for cuddles, water bowl for treats, hallway or office for toys

                                            - the lady cat was sick a while back and had painful poops; for weeks afterwards if she wanted attention and I was busy she would pretend to poop and pretend to be in pain, manipulating me into dropping my work and checking on her

                                            It goes both ways, I've developed ways of communicating with them over the years:

                                            - the lady is skittish but loves laying in bed with me, so I sing "gotta get up, little pup" in a particular way; she will then get up and give me space to leave the bed, without me scaring her with a sudden movement

                                            - I don't lose my patience with them often, but they understand my anxious/exasperated tone of voice and don't push their luck too much (note that some of this is probably shared mammalian instinct)

                                            - the boy sometimes bullies the lady, and I'll raise my voice at him; despite being otherwise skittish and scared of loud noises, the lady seems to understand that I am mad at the boy because of his actions and there's nothing to be alarmed by

                                             Sometimes I think the focus on "context-free" (or at least context-lite) symbolic language, essentially unique to humans, makes us lose sight of the fact that communication is far older than the dinosaurs, and that maybe further progress on language AI should focus on communication itself, rather than symbol processing with communication as a side effect.

                                            • Barrin92 4 hours ago

                                              And as Marvin Minsky taught us, which is probably one of the most profound insights in the entire field, talking seems like an accomplishment to us because it's the least developed part of our capacities. It's so conscious a task not because it's a sign of intellect but because it's the least developed and most novel thing our brains do, which is why it's also likely the fastest to learn for a machine.

                                              Moving as smoothly as a cat and navigating the world is the part that actually took our brains millions of years to learn, and movement is effortless not because it's easy but because it took so long to master, so it's also going to be the most difficult thing to teach a machine.

                                              The cognitive stuff is the dumb part, and that's why we have chess engines, pocket calculators and chatbots before we have emotional machines, artificial plumbers and robots that move like spiders.

                                              • qarl 3 hours ago

                                                I'm not sure that's the right way to look at it.

                                                Ten years ago, it was common to hear the argument: "Are cats intelligent? No, they can't speak." Language was seen as the pinnacle of the mind. Lately that's been flipped on its head, but only because machines have gotten so good at it.

                                                I think the real reason we don't have robots that move like spiders is that robots don't have muscles, and motors are a very poor approximation.

                                              • Scrapemist 6 hours ago

                                                Maybe AI can translate

                                                • qarl 6 hours ago

                                                  HEH... reminds me of an argument Searle once made: with the right "translation" you can make a wall intelligent.

                                                • CamperBob2 3 hours ago

                                                  Or write code. Or write songs. Or create paintings. Or write essays.

                                                  The whole comparison is stupid, and inexplicable at LeCun's level of play. AI is not a model of a human brain, or a cat brain, or a parrot brain, or any other kind of brain. It's something else, something that did not exist in any form just a few years ago.

                                                  • tkgally 3 hours ago

                                                    What is increasingly making sense to me is the description of current AI as an alien intelligence—something potentially powerful but fundamentally different from ours. Viewed that way, LeCun's use of humans—or cats—as the baseline seems misguided. Yes, there are things biological intelligence can do that artificial intelligence cannot, but there are also things AI can do better than us. And the danger might be that, because of their speed, replicability, networkability, and other capabilities that exceed those of biology, AI systems can be intelligent in ways that we have trouble imagining.

                                                    • CamperBob2 2 hours ago

                                                      Valid point there, for sure. People are so busy arguing over when and whether we'll build something with human-level intelligence, they aren't stopping to ask if something even better is possible.

                                                    • hyperG 2 hours ago

                                                      Animals fly by flapping their wings, hence an airplane is not really flying. It can't even land safely in a tree!

                                                      • CamperBob2 2 hours ago

                                                        Exactly. It's pointless to argue over whether an aircraft is really capable of flight, when small drones are already capable of outmaneuvering most birds on the birds' own turf.

                                                  • blackeyeblitzar 2 hours ago

                                                    I feel like LeCun does the same interview and the same presentation over and over. He’s obsessed with the cat analogy and the notion that JEPA will succeed transformer based ‘traditional’ LLM architecture. Maybe that is true but I feel like he has too absolute a view on these topics. Sometimes I feel like I am listening to a politician or marketer rather than someone making forward progress in the field.

                                                    • jimjimjim 6 hours ago

                                                      LLMs are great! They are just not what I would call Intelligence

                                                      • miohtama 6 hours ago

                                                        If you don't call it intelligence you miss the enormous political and social opportunity to go down in history as the pioneer of AI regulation (:

                                                      • mmoustafa 6 hours ago

                                                        It’s really hard for me to believe Yann is engaging sincerely; he is downplaying LLM abilities on purpose.

                                                        He leads AI at Meta, a company whose competitive strategy is to commoditize AI via Open Source models. Their biggest hindrance would be regulation putting a stop to the proliferation of capabilities, so they have to understate the power of the models. This is the only way Meta can continue sucking steam out of the leading labs.

                                                        • threeseed 6 hours ago

                                                          > commoditize AI via Open Source models

                                                          Sounds like we should be fully supporting them then.

                                                          • muglug 6 hours ago

                                                            You’re free to concoct a conspiracy that he’s just a puppet for Meta’s supposed business interests*, but that doesn’t change the validity of his claims.

                                                            * pretty sure any revenue from commercial Llama licenses is a rounding error at best

                                                            • lyu07282 5 hours ago

                                                              You don't have to assume malice, he is a strong believer in liberalism so naturally he would argue whatever leads to less regulation. Even if he thought AI was dangerous he would still believe that corporations are better suited to combat that threat than any government.

                                                              It's similar to how the WSJ journalist would never ask him what he thinks about the larger effects of the "deindustrialization" of knowledge-based jobs caused by AI. Not because the journalist is malicious; it's just the shared, subconscious ideology.

                                                              People don't need a reason to protect capital interests, even poor people on the very bottom will protect it.

                                                            • krig 6 hours ago

                                                              (reacting to the title alone since the article is paywalled)

                                                              AI can’t push a houseplant off a shelf, so there’s that.

                                                              Talking about intelligence as a completely disembodied concept seems meaningless. What does ”cat” even mean when comparing it to something that doesn’t have a physical, corporeal presence in time and space? To compare like this seems to me like making a fundamental category error.

                                                              edit: Quoting, “You’re going to have to pardon my French, but that’s complete B.S.”

                                                              I guess I’m just agreeing with LeCun here.

                                                              • jcranmer 6 hours ago

                                                                It's referring to the fact that cats are able to do tasks like multistage planning, which he asserts current AIs are unable to do.

                                                                • qarl 5 hours ago

                                                                  > It's referring to the fact that cats are able to do tasks like multistage planning, which he asserts current AIs are unable to do.

                                                                  I don't understand this criticism at all. If I go over to ChatGPT and say "From the perspective of a cat, create a multistage plan to push a houseplant off a shelf" it will satisfy my request perfectly.

                                                                  • krig 6 hours ago

                                                                    Thanks, that makes more sense than the title. :)

                                                                    • dang 6 hours ago

                                                                      We replaced the baity title with something suitably bland.

                                                                      If there's a representative phrase from the article itself that's neutral enough, we could use that instead.

                                                                  • sebastiennight 6 hours ago

                                                                    Out of curiosity, would you say a person with locked-in syndrome[0] is no longer intelligent?

                                                                    [0]: https://en.wikipedia.org/wiki/Locked-in_syndrome

                                                                    • krig 6 hours ago

                                                                      I don’t think ”intelligent” is a particularly meaningful concept, and just leads to such confusion as your comment hints at. Do I think a person with locked-in syndrome is still a human being with thoughts, desires and needs? Yes. Do I think we can rank intelligences along an axis where a locked-in person somehow rates lower than a healthy person but higher than a cat? I don’t think so. A cat is very good at being a cat, much better than any human is.

                                                                      • krig 6 hours ago

                                                                        I would also point out that a person with locked-in syndrome still has ”a physical corporeal presence in time and space”, they have carers, relationships, families, histories and lives beyond themselves that are inextricably tied to them as an intelligent being.