• the_harpia_io an hour ago

    the hiding stuff is weird because the whole reason you'd want to see what Claude is doing isn't just curiosity - it's about catching when it goes off the rails before it makes a mess. like when it starts reading through your entire codebase because it misunderstood what you asked for, or when it's about to modify files you didn't want touched. the verbose mode fix is good but honestly this should've been obvious from the start - if you're letting an AI touch your files, you want to know exactly which files. not because you don't trust the tool in theory but because you need to verify it's doing what you actually meant, not what it thinks you meant. abstractions are great until they hide the thing that's about to break your build

    • rco8786 an hour ago

      > it's about catching when it goes off the rails before it makes a mess

      The latest "meta" in AI programming appears to be agent teams (or swarms or clusters or whatever) that are designed to run for long periods of time autonomously.

      Through that lens, these changes make more sense. They're not designing UX for a human sitting there watching the agent work. They're designing for horizontally scaling agents that work in uninterrupted stretches where the only thing that matters is the final output, not the steps it took to get there.

      That said, I agree with you in the sense that the "going off the rails" problem is very much not solved even on the latest models. It's not clear to me how we can trust a team of AI agents working autonomously to actually build the right thing.

      • g947o 42 minutes ago

        None of those wild experiments are running on a "real", existing codebase that is more than 6 months old. The thing they don't talk about is that nobody outside these AI companies wants to vibe code with a 10 year old codebase with 2000 enterprise customers.

        As soon as you start to work with a codebase that you care about and need to seriously maintain, you'll see what a mess these agents make.

        • krastanov 3 minutes ago

          I maintain serious code bases and I use LLM agents (and agent teams) plenty -- I just happen to review the code they write, demand they write it in a reviewable way, and use them mostly for menial tasks that are otherwise unpleasant time sinks I'd have to handle myself. There are many people like me who just quietly use these tools to automate the boring chores of dealing with mature production code bases. We are quiet because this is boring day-to-day work.

          E.g. I use these tools to clean up or reorganize old tests (with coverage and diff viewers catching things I might miss), update documentation with cross links (with documentation linters catching errors I miss), convert tests into benchmarks that run as part of CI, make log file visualizers, and much more.

          These tools are amazing for dealing with the long tail of boring issues that you never get to, and when used in this fashion they actually abruptly increase the quality of the codebase.

          • rco8786 30 minutes ago

            That is also my experience. Doesn't even have to be a 10 year old codebase. Even a 1 year old codebase. Any one that is a serious product that is deployed in production with customers who rely on it.

            Not to say that there's no value in AI written code in these codebases, because there is plenty. But this whole thing where 6 agents run overnight and "tada" in the morning with production ready code is...not real.

          • ano-ther 20 minutes ago

            Wouldn’t the swarms benefit from supervisory agents that have visibility into what each agent does? Still not fully trustworthy, but at least a safeguard.

            • faeyanpiraat 40 minutes ago

              Looking at it from afar, it is simply making something large from a smaller input, so it's a kind of nondeterministic decompression.

              What fills the holes is best practices; what can ruin the result is wrong assumptions.

              I don't see how full autonomy can work either without checkpoints along the way.

              • rco8786 28 minutes ago

                Totally agreed. Those assumptions often compound as well. So the AI makes one wrong decision early in the process and it affects N downstream assumptions. When it finally finishes, it has built the wrong thing. This happens with one process running. Even on the latest Opus models I have to babysit and correct and redirect Claude Code constantly. There's zero chance that 5 Claude Codes running for hours without my input are going to build the thing I actually need.

                And at the end of the day it's not the agents who are accountable for the code running in production. It's the human engineers.

            • aceelric 18 minutes ago

              Exactly, and this is the best way to do code review while it's working so that you can steer it better. It's really weird that Anthropic doesn't get this.

              • xnorswap 42 minutes ago

                Yes, this is why I generally still use "ask for permission" prompts.

                As tedious as it is a lot of the time (and I wish there were an in-between "allow this session", not just allow once or "allow all"), it's invaluable for catching when the model has tried to fix the problem in entirely the wrong project.

                Working on a monolithic code-base with several hundred library projects, it's essential that it doesn't start digging in the wrong place.

                It's better than it used to be, but the failure mode can be extreme; I've come back to 20+ minutes of it going around in circles, frustrating itself because of a wrong meaning ascribed to an instruction.

                • faeyanpiraat 37 minutes ago

                  The other side of catching it going off the rails is when it wants to make edits without reading the context I know would’ve been necessary for a high-quality change.

                • xg15 an hour ago

                  > Cherny responded to the feedback by making changes. "We have repurposed the existing verbose mode setting for this," he said, so that it "shows file paths for read/searches. Does not show full thinking, hook output, or subagent output (coming in tomorrow's release)."

                  How to comply with a demand to show more information by showing less information.

                  • embedding-shape an hour ago

                    Words have lost all meaning. "Verbose" no longer means "containing more words than necessary" but instead "a bit more than usual". "Fast" no longer means "characterized by quick motion, operation, or effect" but instead depends on the company; some of them do things a slightly different way at the same speed, but call it "fast mode".

                    It's just a whole new world where words suddenly mean something completely different, and you can no longer understand programs by just reading the labels they use for various things; you also need to look up whether what they think "verbose" means matches the meaning you've built up an understanding of first.

                    • 3oil3 43 minutes ago

                      This is really the kind of thing Claude sometimes does. "Actually, wait... let's repurpose the existing verbose mode for this, simpler, and it fits the user's request to limit bloating"

                    • alentred an hour ago

                      Well, there is OpenCode [1] as an alternative, among many others. I have found OpenCode to be the closest to the Claude Code experience, and I find it quite good. Having said that, I still prefer Claude Code for the moment.

                      [1] https://opencode.ai/

                      • skerit 41 minutes ago

                        What does Claude Code do differently that you still prefer it? I'm so in love with OpenCode, I just can't go back. It's such a nicer way of working. I even love the more advanced TUI.

                        • epiccoleman 8 minutes ago

                          Are you paying per-token after Anthropic closed the loophole on letting you log in to OpenCode?

                        • kardianos 5 minutes ago

                          I've liked opencode+glm5 quite a bit so far.

                          • subscribed an hour ago

                            I haven't tried it myself, but there were plenty of people in the other thread complaining that even on the Max subscription they couldn't use OpenCode.

                            • kachapopopow an hour ago
                              • saagarjha an hour ago

                                OpenCode would be nicer if they used normal terminal scrolling and not their own thing :(

                              • clktmr an hour ago

                                It's probably in their interest to have as many vibe-coded codebases out there as possible, ones that no human would ever want to look at. Incentivising never-look-at-the-code is effectively a workflow lock-in.

                                • kachapopopow an hour ago

                                  I always review every single change/file in full, and spend around 40% of the time it takes to produce something doing so. I assume it's the same for a lot of people who used to develop code and swapped to mostly code generation (since it's just faster). The time I spend looking at it depends on how much I care about it - a choice you don't really get when writing things manually.

                                • kcartmell an hour ago

                                  Not trying to tell anyone else how to live, just want to make sure the other side of this argument is visible. I run 5+ agents all day every day. I measure, test, and validate outputs exhaustively. I value the decrease in output noise here because I am very much not looking to micromanage the process; I am simply too slow to keep up. When I want logging I can follow to understand the “thought process”, I ask for that in a specific format in my prompt, something like “talk through the problem and your exploration of the data step by step as you go before you make any changes or do any work, and use that plan as the basis of your actions”.

                                  I still think it’d be nice to allow an output mode for you folks who are married to the previous approach since it clearly means a lot to you.

                                  • nojs 36 minutes ago

                                    > I run 5+ agents all day every day

                                    Curious what plans you’re using? Running 5 agents 24/7 would eat up several $200 subscriptions pretty fast.

                                    • kcartmell 31 minutes ago

                                      My primary plan is the $200 Claude max. They only operate during my working hours and there is significant downtime as they deliver results and await my review.

                                  • panozzaj 34 minutes ago

                                    Claude logs the conversations to ~/.claude/projects, so you can write a tool to view them. I made a quick tool that has been valuable over the last few weeks: https://github.com/panozzaj/cc-tail
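
                                    For anyone who would rather roll their own, a minimal sketch of the same idea in plain Python is below. The transcripts are JSONL files under ~/.claude/projects/, but the field names assumed here (message.content, tool_use blocks, input.file_path) are guesses based on recent transcripts and may change between Claude Code releases, so check one of your own .jsonl files before relying on it.

                                        # Tail the newest Claude Code transcript and print which files each tool call touches.
                                        import glob
                                        import json
                                        import os
                                        import time

                                        PROJECTS = os.path.expanduser("~/.claude/projects")

                                        def newest_transcript():
                                            # Pick the most recently modified session transcript (one .jsonl per session).
                                            files = glob.glob(os.path.join(PROJECTS, "*", "*.jsonl"))
                                            return max(files, key=os.path.getmtime) if files else None

                                        def watch(path):
                                            with open(path) as fh:
                                                fh.seek(0, os.SEEK_END)  # only report new activity from now on
                                                while True:
                                                    line = fh.readline()
                                                    if not line:
                                                        time.sleep(0.5)
                                                        continue
                                                    try:
                                                        record = json.loads(line)
                                                    except json.JSONDecodeError:
                                                        continue
                                                    content = (record.get("message") or {}).get("content") or []
                                                    if not isinstance(content, list):
                                                        continue
                                                    for block in content:
                                                        if isinstance(block, dict) and block.get("type") == "tool_use":
                                                            # "file_path" is what Read/Edit/Write inputs appear to use;
                                                            # other tools (Bash, Grep, ...) carry different keys and print blank here.
                                                            target = (block.get("input") or {}).get("file_path", "")
                                                            print(f"{block.get('name')}: {target}")

                                        if __name__ == "__main__":
                                            transcript = newest_transcript()
                                            if transcript:
                                                print(f"watching {transcript}")
                                                watch(transcript)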

                                    • tylervigen 19 minutes ago

                                      This article is mostly about this discussion on hn: https://news.ycombinator.com/item?id=46978710

                                      • radial_symmetry 24 minutes ago

                                        If you use Claude Code in Nimbalyst it tracks every file change for you and gives you red/green diffs for your session.

                                        • small_model 31 minutes ago

                                          I always get Claude Code to create a plan unless it's trivial; it will describe all the changes it's going to make and to which files, then I let it rip in a new context.

                                          • dionian 20 minutes ago

                                            Why use a new context? Or do you mean you just accept the plan and it automatically clears the context?

                                          • anonzzzies 12 minutes ago

                                              We have been playing with glm4.7 on Cerebras, which I hope is the near future for any model; it generates 1000s of lines by the time you recover from a sneeze. It's absolutely irrelevant whether you can see what it does, because there is no way you can read it live (at 1000s of tokens/s) and you are not going to read it afterwards. Catching it before it does something weird is just silly; you won't be able to react. Works great for us combined with Claude Code; Claude does the senior work like planning and takes its time, glm does the implementation in a few seconds.

                                            • corv 2 hours ago

                                                When their questionnaire asked me for feedback, I specifically mentioned that I hoped they would not reduce visibility to the point of GitHub Actions.

                                              I guess that fell on deaf ears.

                                              • MaxikCZ an hour ago

                                                  Can anybody break my black glasses and offer an anecdote of a high-headcount firm actually having humans read the feedback? I suspect it's just there for "later" and never actually looked at by anyone...

                                                • gambiting 19 minutes ago

                                                  You know when your game crashes on PS5 and you get a little popup that offers you the opportunity to write feedback/description of the crash?

                                                    Yeah, I used to sit and read all of these (at one of the largest video game publishers - does that count?). 95% of them were "your game sucks", but we fixed many bugs thanks to the detailed descriptions people provided through that box.

                                              • seunosewa an hour ago

                                                Perhaps they can just make it an option??

                                                • jstummbillig an hour ago

                                                  That is such silly framing. They are not "trying" to hide anything. They are trying to create a better product -- and might be making unpopular or simply bad choices along the way -- but the objective here is not to obfuscate which files are edited. It's a side effect.

                                                  • subscribed an hour ago

                                                      Instead of adding a settings option to hide the filenames, they hide them for everyone AND rewrite verbose mode, which is no longer a verbose mode but the way to see filenames, breaking the workflows of everyone who depends on them, for... what exactly?

                                                      If they were trying to create a better product, I'd expect them to just add the new behaviour as an option, not hide something that can save thousands of tokens and context when the model goes the wrong way.

                                                    • jstummbillig 13 minutes ago

                                                      Again, the framing is simply off. Why would they want to break "everyone's" workflow ("everyone" including the people working at Anthropic, who use the product themselves, which should give us some pause)? Why would you ever want to make a bad decision?

                                                      The answer in both cases is: There is no reason to want to do that. If it happens, it's because you sometimes make bad decisions, because it's hard to make good decisions.

                                                    • acron0 an hour ago

                                                      How can you combat one unprovable framing by insisting on another unprovable framing?

                                                    • alansaber 30 minutes ago

                                                        The nice thing about the competition in the CLI space is that... you can just move? CC has always been a bit wonky, and this is active enshittification - there are the likes of Codex, etc.

                                                      • krystofee 17 minutes ago

                                                        ctrl+o ?

                                                        • amelius 2 hours ago

                                                          Why not run Claude on a FUSE-based filesystem, and make a script that shows the user which files are being accessed?
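
                                                          A rough sketch of that idea is below, assuming the third-party fusepy package (pip install fusepy) and a Linux or macFUSE host. It is a logging passthrough, not a complete filesystem: a real setup would also need create/truncate/unlink and friends, but it is enough to see every path the agent touches. Mount the real repo onto an empty directory and point the agent at the mountpoint.

                                                              # Sketch only: logging passthrough that delegates to the real directory while
                                                              # LoggingMixIn prints every operation (getattr, open, read, write, ...) with its path.
                                                              # New-file creation and truncation are not implemented here.
                                                              import logging
                                                              import os
                                                              import sys

                                                              from fuse import FUSE, FuseOSError, LoggingMixIn, Operations  # pip install fusepy

                                                              class LoggedPassthrough(LoggingMixIn, Operations):
                                                                  def __init__(self, root):
                                                                      self.root = os.path.realpath(root)

                                                                  def _full(self, path):
                                                                      return os.path.join(self.root, path.lstrip("/"))

                                                                  def getattr(self, path, fh=None):
                                                                      try:
                                                                          st = os.lstat(self._full(path))
                                                                      except OSError as e:
                                                                          raise FuseOSError(e.errno)
                                                                      return {key: getattr(st, key) for key in (
                                                                          "st_mode", "st_nlink", "st_size", "st_uid", "st_gid",
                                                                          "st_atime", "st_mtime", "st_ctime")}

                                                                  def readdir(self, path, fh):
                                                                      return [".", ".."] + os.listdir(self._full(path))

                                                                  def open(self, path, flags):
                                                                      return os.open(self._full(path), flags)

                                                                  def read(self, path, size, offset, fh):
                                                                      os.lseek(fh, offset, os.SEEK_SET)
                                                                      return os.read(fh, size)

                                                                  def write(self, path, data, offset, fh):
                                                                      os.lseek(fh, offset, os.SEEK_SET)
                                                                      return os.write(fh, data)

                                                                  def release(self, path, fh):
                                                                      return os.close(fh)

                                                              if __name__ == "__main__":
                                                                  # usage: python fuse_watch.py <real_repo_dir> <mountpoint>
                                                                  logging.basicConfig(level=logging.DEBUG)  # LoggingMixIn logs at DEBUG
                                                                  FUSE(LoggedPassthrough(sys.argv[1]), sys.argv[2], foreground=True)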