• falcor84 2 hours ago

    The idea of Claude having "anterograde amnesia" and the top-rated comment there by Noosphere89 really resonated with me:

      "I would analogize this to a human with anterograde amnesia, who cannot form new memories, and who is constantly writing notes to keep track of their life. The limitations here are obvious, and these are limitations future Claudes will probably share unless LLM memory/continual learning is solved in a better way."
    
      This is an extremely underrated comparison, TBH. Indeed, I'd argue that frozen weights + lack of a long-term memory are easily one of the biggest reasons why LLMs are much more impressive than useful at a lot of tasks (with reliability being another big, independent issue).
    
      It emphasizes 2 things that are both true at once: LLMs do in fact reason like humans and can have (poor-quality) world-models, and there's no fundamental chasm between LLM capabilities and human capabilities that can't be cured by unlimited resources/time, and yet just as humans with anterograde amnesia are usually much less employable/useful to others than people who do have long-term memory, current AIs are much, much less employable/useful than future paradigm AIs.
    • skybrian 2 hours ago

      For a coding agent, the project "learns" as you improve its onboarding docs (AGENTS.md), code, and tests. If you assume you're going to start a new conversation for each task and the LLM is a temp that's going to start from scratch, you'll have a better time.
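
      For what it's worth, the onboarding doc doesn't have to be elaborate. A hypothetical AGENTS.md sketch (file contents invented here, just to show the shape, not from any real project):

        # AGENTS.md
        ## Build & test
        - `make test` runs the full suite; run it before calling a task done.
        ## Conventions
        - New endpoints live in api/routes/; copy the existing handler pattern.
        ## Known pitfalls
        - Fixtures in tests/data are generated by scripts/gen_fixtures.py; don't edit them by hand.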

      • falcor84 2 hours ago

        But that's the thing: Claude Plays Pokemon is an experiment in having Claude work fully independently, so there's no "you" to improve its onboarding docs or anything else; it has to do that on its own. And as long as it cannot do so reliably, it effectively has anterograde amnesia.

        And just to be clear, I'm mentioning this because I think Claude Plays Pokemon is a playground for the problems facing any agentic AI doing long-term independent work; I believe the solution needed here will bring us closer to a fully independent agent in coding and other domains. It reminds me of the codeclash.ai benchmark, where similar issues show up across multiple "rounds" of an AI working on the same codebase.

        • kaashif 2 hours ago

          Yeah, but it feels terrible. I put as much as I can into Claude skills and CLAUDE.md, but the fact that this is something I even have to think about makes me sad. The discrete points where the context gets compacted feel jarring, and not like how I think AGI or whatever should work.

          Just continuously learn and have a super duper massive memory. Maybe I just need a bazillion GPUs to myself to get that.

          But no one wants to manage context all the time; it's incidental complexity.

          • falcor84 an hour ago

            I agree with essentially everything you said, except for the final claim that managing context is incidental complexity. From what I know of cognitive science, I would argue that context management is a central facet of intelligence, and that much of humans' success in society depends on their ability to do it well. Looking at it from the other side, executive function disorders such as ADHD pose significant challenges for many humans, and they seem not entirely unlike the context issues Claude faces.

      • martinald an hour ago

        This actually matches my experience quite well. I often use vision for 2 main things in Claude Code:

        1) give it text data that's annoying to copy and paste (e.g. labels off a chart, or logs from a terrible web UI).

        2) give it screenshots of bugs, especially UI glitches.

        It's extremely good at 1); I can't remember it ever getting one wrong.

        On 2) it _really_ struggled until Opus 4.5, almost comically so: I'd post a screenshot and a description of the UI bug, and it would tell me "great, it looks perfect! What next?"

        With Opus 4.5 it's not quite as laughably bad, but it still often misses very obvious problems.

        It's very interesting to see the rapid progression on these benchmarks, as they're probably a very good proxy for "agentic vision".

        I've come to the conclusion that browser use without vision (e.g. based on the DOM or accessibility tree) is a dead end, simply because "modern" websites tend to take a comical number of tokens to render. So if vision gets very good (close to human level/speed), then we've basically solved agents browsing any website/GUI effectively.
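
        As a quick sanity check of that token claim, here's a minimal Python sketch (the URLs and the cl100k_base tokenizer are stand-ins I picked, not anything specific to Claude) that counts what a page's raw DOM would cost:

          # Assumes the `requests` and `tiktoken` packages are installed.
          import requests
          import tiktoken

          enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer as a rough proxy
          for url in ("https://example.com", "https://news.ycombinator.com"):
              html = requests.get(url, timeout=10).text
              # Raw HTML is roughly what a DOM-based agent has to ingest.
              print(url, "->", len(enc.encode(html)), "tokens of raw HTML")

        Even a deliberately tiny page like example.com comes out to a few hundred tokens of raw HTML, and a JS-heavy SPA can run orders of magnitude higher, which is the "comical amount" above.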

        • oceansky an hour ago

          I wonder if there's someone at Anthropic working to fine-tune the model's Pokemon-playing ability specifically.

          Maybe not but it sure would be funny.

          • dbish an hour ago

            If I recall correctly, a prior interview about Claude Plays Pokemon said they purposely chose Pokemon as a use case that was not meant to be trained/fine-tuned on. That's what makes it an interesting problem, so hopefully they aren't.

            • oceansky an hour ago

              I believe the testing itself is done in very good faith.

              But I believe the team at Anthropic looks for popular use cases like this one to improve their datasets. Same for every other big player in the LLM game.