• LatencyKills 5 hours ago

    I realize my situation isn’t typical, but I’m retired and have dealt with depression most of my life.

    The thing I miss most about work (yes, you really can miss work) is collaborative problem-solving. At Microsoft, we called it “teddy bear debugging”—basically, self-explaining a problem out loud to clarify your thinking. [1]

    These days, when I’m stuck, I open Claude Code and “talk it through.” That back-and-forth helps me reason through technical issues and scratches a bit of that collaborative itch that helped keep my depression in check.

    [1]: https://economictimes.indiatimes.com/news/international/us/w...

    • tclancy 3 hours ago

      I've found something similar. I've been using Claude Code to build lots of things I would like to but fear failing at or hitting an iceberg. Having seen success with that, I've started rubber-ducking it through a number of things. Changed the carburetor on my snow blower for the first time ever, and with minimal pain, mainly because "asking Claude about it" meant making myself stop and think through the process, plan an approach, and put together a mise-en-place, rather than starting, realizing I needed a couple of tools, leaving things a mess, and not coming back due to anxiety.

      Basically, it helps me avoid what they called "gumption traps" in Zen and the Art of Motorcycle Maintenance.

      • byproxy 4 hours ago

        Yep, this has, so far, proven the most promising use of LLMs to me. I've read about people's Rube Goldberg machine-esque setups for getting agentic LLMs to work for them, but I find simply having a dialectic with an LLM to be more fruitful. Rubber-ducking with a duck that quacks back.

        • magicpin 4 hours ago

          How do you prevent it from just taking the reins and writing an entire function or class for you, when all you wanted to do was talk about the code you already had?

          • lanyard-textile 3 hours ago

            "No coding. (Explain|Debug|Analyze|Talk through) this with me:"

            "Talk with me first:" (Implying anything other than talking, like coding, would be a separate distinct step that is not to be done)

            "Propose" is the best keyword imo if it fits what you'd like.

            "Propose changes you would make to (this repo|staged changes|latest commit)."

            "Propose alternatives."

            "Propose flaws." / "Propose flaws in my reasoning."
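            If you drive these prefixes from a script, they are easy to template. A minimal sketch in Python (the helper names and the piping into `claude -p` are my own assumptions, not anything from the thread):

```python
# Hypothetical helpers wrapping the "talk first, don't code" prompt
# prefixes suggested above.

def no_coding(verb: str, subject: str) -> str:
    """Ask the model to discuss the problem rather than write code."""
    return f"No coding. {verb.capitalize()} this with me: {subject}"

def propose(what: str, target: str) -> str:
    """'Propose' framing: request suggestions, not edits."""
    return f"Propose {what} to {target}."

if __name__ == "__main__":
    # e.g. feed into the CLI: claude -p "$(python prompts.py)"
    print(no_coding("debug", "why this retry loop never backs off"))
    print(propose("changes you would make", "this repo"))
```

            The point of templating is only to make the "talking, not coding" step explicit and repeatable; the exact wording matters less than keeping it as a distinct step.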

            • LatencyKills 3 hours ago

              I keep Claude in "planning mode" (shift+tab) so it cannot touch my codebase.

              • sgc 4 hours ago

                "Do not write any code ..." If you are using LLMs for highly restricted work, it is rather trivial to keep them in check enough to receive useful responses.

                • fzzzy 4 hours ago

                  You don't turn on editing mode.

              • righthand 4 hours ago

                But talking with an LLM isn’t teddy bear/rubber duck debugging, because your LLM has high odds of outputting good feedback. Teddy bear/rubber duck debugging involves the other party not knowing anything about your problem, let alone being capable of giving a response (hence why it’s not go-ask-a-coworker/teacher/professional debugging). It’s about getting yourself to refocus on the problem, state what you already know, and allow your brain to organize the facts.

                I’m not trying to be rude, but it seems like you’re conflating collaborative problem solving with rubber duck debugging. You haven’t actually collaborated with a rubber duck when you’re finished rubber duck debugging.

                • LatencyKills 3 hours ago

                  > But talking with an LLM isn’t teddy bear/rubber duck debugging, because your LLM has high odds of outputting good feedback.

                  That isn't how we did it at either Microsoft or Apple. There, we defined it as walking another engineer through a problem. That person may or may not have been an expert in whatever I was working on at the time. You truly aren't suggesting that rubber duck debugging only works when you don't receive feedback?

                  I use Claude to bounce ideas around just like I did with my human teammates.

                  I think you're being pedantic, but it doesn't matter to me: in the end, I work much better when I can talk through a problem; Claude is a good stand-in when I don't have access to another human.

                  • righthand an hour ago

                    No, I’m suggesting that RDD is not a mechanism to reason through and solve your problem, but rather a mechanism to get your mind into the problem-solving state. It asks you to physically repeat what is in your brain, the same as writing it out on a marker board or in handwritten notes. Rubber duck debugging is about debugging you, not debugging the code. That’s why it doesn’t matter who you talk to about the problem in rubber duck debugging.

                    The part where your colleague or LLM returns more information or advice is past the rubber-ducking stage. Depending on the difficulty of the problem, you may not need to ask a colleague to lead you to water. And if rubber duck debugging can be done solo, what do you actually get from it relative to a coworker/code assistant?

                  • nottorp 4 hours ago

                    > involves the other party not knowing anything about your problem, let alone being capable of giving a response

                    I prefer grabbing a colleague who is technical but does not work on this particular project. That seems to force me to organize the info in my head more than an actual rubber duck does.

                    • righthand 4 hours ago

                      Sure, and no one likes being seen ranting at no one either. Rubber ducks can be pencils, dogs, hamsters, and teddy bears; even friendly Carroll in accounting, too.

                    • salawat 4 hours ago

                      Rubber duck debugging is a null-LLM offloading to your gray matter for the other half of the interlocution. A fancy way of recruiting your other brain matter into the problem-solving process. Perhaps by offloading to a non-null LLM, there is decreased activation/recruitment of brain regions in the problem-solving process, leading to network pruning over time. Particularly in the event you take the position that the "tool" isn't something worthy of having its inner state reacted to and modeled via mirror networks.

                      But what do I know man, I'm just a duck on the Internet. On the Internet, no one knows you're a duck.

                      Quack.

                      • righthand 4 hours ago

                        But the point is that as soon as you get feedback and a response you’re back in traditional reasoning, puzzle solving, teaching, learning, etc. paradigm. Not in the rubber duck debugging paradigm. RDD is clearly defined as different. The GP is just choosing to remove the elements that make it unique but keep the metaphorical branding. Even bots responding is not RDD. Rubber ducks can’t respond or understand.

                        You don’t send kids to Rubber Duck Debugging Class (you send them to School) because you can’t see the teacher in the classroom while you’re at work.

                        You’re debugging yourself, not the actual problem per se.

                    • poszlem 4 hours ago

                      Or "rubberducking" as it's called now: https://en.wikipedia.org/wiki/Rubber_duck_debugging

                    • Sol- 4 hours ago

                      Maybe I'm nitpicking here, but in their abstract

                      > "Greater levels of AI use were associated with modest increases in depressive symptoms"

                      to me, this ever so slightly implies causality via "increases ...", even though, as they are also very transparent about, this paper isn't about any causal mechanism. I feel like "associated with higher rates of depressive symptoms" would have read more neutrally and been more in line with the results of their paper.

                      Not suggesting something intentional by the authors, of course, I just found it interesting how verbs subtly influence the meaning of things, at least for me.

                      But perhaps I'm also biased because I kind of intuitively believe that the causation is that depressive people enjoy talking to the AI, rather than AI being the cause of anything. I worry that any reverse interpretations will lead to an over-regulation of AI in such contexts.

                      • Sharlin 4 hours ago

                        It's standard academic use of "increased", so I can't fault the authors for using it. Few in the intended target audience would read that as implying causation. One could of course argue that abstracts should be written with a larger audience in mind, but the job of a researcher is first and foremost to communicate as effectively as possible to other researchers.

                        • mwigdahl 4 hours ago

                          I don't think replacing "increased" with "greater" or "higher" would compromise communication to researchers at all, but it could cut down on misinterpretation and miscommunication in the wider science reporting world.

                          Seems like it would be overall beneficial.

                          • Sharlin 4 hours ago

                            Yes, but should we expect researchers to have the lay communication skills to even consider such things, to realize that the phrasing could be misinterpreted? Traditionally that's the job of the institute's PR department writing press releases. Anyone reading an abstract directly from its authors should also be expected to have basic academic reading skills.

                        • armoredkitten 3 hours ago

                          To me, the wording doesn't necessarily imply causality, but it does imply a repeated-measures design. Something being "associated with an increase in symptoms" is different than something being "associated with higher symptoms"; the former suggests that participants were measured at multiple time points, and there is a factor that could explain that change over time. But reading through the study, it was just a single time point.

                          Regardless, you're correct that it also shouldn't be taken to imply a causal relationship.

                          • troosevelt 4 hours ago

                            I've noticed how much basic stuff gets upvoted that confirms people's priors. I guess HN has always been this way, but it doesn't speak well of a community that views itself as thoughtful.

                            It's frustrating watching this topic turn into culture war.

                            • wat10000 4 hours ago

                              It would imply that if used as a verb, but it’s used as a noun here.

                            • throwawayk7h 4 hours ago

                              I think the causality is reversed. I have depression+ADD which has made life very difficult for me, but Claude allows me to be productive by helping me get organised and started on tasks, something normally very difficult for me.

                              • worldsayshi 4 hours ago

                                I very much suspect that this depends on how you use it. You can use it to dig yourself a hole as much as to build a ladder to get out. To me it always comes down to whether you focus on cultivating internal impulses that are helpful vs unhelpful to you. An LLM can probably help you cultivate most aspects of yourself that you want to focus on.

                              • erikgahner 4 hours ago

                                A lot of people might read this and infer that AI use causes depressive symptoms, but the study cannot say anything about causation at all. The study is also transparent about this fact: "Further work is needed to understand whether these associations are causal"

                                • nDRDY 4 hours ago

                                  Y'all picked a funny time to nitpick at standard academic boilerplate. If we discounted all research that only "associated" things, then we wouldn't know much at all! Then again, arguably we don't.

                                  • erikgahner 4 hours ago

                                    I wouldn't call this a minor detail (i.e., nitpicking), and it is worth pointing out again and again when these studies get public attention.

                                    We should encourage stronger research designs (including A/B tests) if we care about the impact of AI use on mental health outcomes. A study like this one cannot say anything about the effect at all (it is even possible that AI use will have a positive impact on mental health).

                                    • nDRDY 4 hours ago

                                      The translation between academic boilerplate and its real-world meaning and ramifications should be much more widely known. I wish more people had been nitpicking such things around 6 years ago.

                                      As for this particular research... pfff... I'm rooting for the collapse of this LLM-fuelled craze, so I'm biased.

                                    • undefined 4 hours ago
                                      [deleted]
                                      • squigz 4 hours ago

                                        The "correlation is not causation" argument gets brought up every single time such a study is shared on HN, so I'm not sure what you mean by "picked a funny time"?

                                        Anyway there's no reason to discount it, but it does mean you can't run with the assumption that there is causation.

                                        • sigbottle 3 hours ago

                                          I don't think psychology is useless, not one bit. But the way modern papers publish findings makes me distrust basically all statistical studies in the social sciences, aside from even the most basic philosophical issues that arise from these kinds of studies (people are very different, etc.).

                                          Like even if you accept a bunch of premises to make the studies work at all, the raw stats are often so bad, and there's so little rigor in actually ruling out alternatives, that I've just stopped reading them entirely.

                                          Again, I'm not one to hate on the social sciences. History, anthropology, politics, law, psychology, sociology: all of that is very interesting and important. But the horrible statistics, which ignore garbage-in-garbage-out, have turned me off of it. I'd much rather read qualitative studies that actually try to gather detailed, real data, even if it's not as automated as a random survey.

                                      • drakonka 4 hours ago

                                        Anecdotally (with anxiety rather than depressive symptoms), I found that using ChatGPT/Claude to 'brainstorm' personal situations was definitely a gateway to further rumination for me. As someone who works on AI agents, I thought I'd never fall into that trap and knew how to use it 'properly' when I wanted a sounding board. I was wrong. I now avoid general-use chatbots for personal issues as much as I can, because it feels like it's helping in the short term but has always been worse in aggregate.

                                        (I say general-use because I think there are some AI-based tools that are specially made which _can_ actually be helpful for this - but opening a ChatGPT tab, even with lots of relevant instructions, ain't it in my experience. The interface itself is counter-productive to healthy processing.)

                                        • ToucanLoucan 4 hours ago

                                          My reaction is that depressed people are, for whatever reason you described, more likely to use generative AI. I can think of a bunch of reasons, most tied to executive function in some way, but like, are we really surprised that people who are struggling to find pleasure/accomplishment/meaning in general life find AI appealing? You get to just play with it continuously, it always answers your messages, it always encourages you to keep talking, keep interacting with it, and it will make things for you for no greater cost than the asking.

                                          I don't think this is a mark against those users to be clear, I see this as largely the same chicken-egg relationship you find between depressed people and video games. It's also subject to the same kinds of abuses on the part of the merchant, things like in-game purchases that are particularly attractive to people with executive function issues, and why the predominant "whales" of the video game industry and especially the mobile game industry are people who are already struggling. I think AI is going to end up in a similar position because like, again, not trying to be shitty, but if your life kind of broadly sucks, I'm sure playing in an AI chatbox all day where something that sounds vaguely human will validate whatever you say, make stuff for you at request, and never challenge you in the slightest is quite attractive to you. And, thinking through it further, these systems also adapt to their users, learn how to engage with them better, as many products have before them that have trapped the neurodivergent into problematic usage scenarios.

                                          I don't judge the people, but I am incredibly suspicious of the businesses behind these and other products that seem almost designed to attract neurodivergent people. If you design a machine that gives dopamine on demand, you can't really be shocked when people who are dopamine-starved use it a lot. Potentially to a harmful extent.

                                        • Aurornis 4 hours ago

                                          This is a study where reading the details is important. I’m already seeing comments guessing that the results are due to AI changing the nature of work, but the paper shows that the non-work daily users are driving the result.

                                          > The highest estimates were observed among individuals using AI for personal use

                                          and

                                          > Incorporating individual terms for school, work, and personal use, only personal use was significantly associated with PHQ-9 (β = 0.31 [95% CI, 0.10-0.52]), while the other 2 were not

                                          • theknarf 4 hours ago

                                              It would make sense that depressed people use AI as an assistive tool in their daily lives.

                                            • Sharlin 4 hours ago

                                              My hypothesis is that depressed people use AI more for companionship/sexual roleplaying than as an assistive tool. Though you could count that as "assistive" as well, I guess. Depression, loneliness, and a lack of social contacts are highly correlated.

                                              • nottorp 4 hours ago

                                                One interesting next step would be correlating "AI" use with how much of a social life those users have.

                                            • lehmacdj 3 hours ago

                                              I'm pretty ambivalent about generative AI's effect on my happiness/motivation.

                                              Often talking to Claude/using AI agents to build software is really enjoyable/motivating, and it also makes it easier to get the satisfaction from completing projects.

                                              But it also tends to make me think about how quickly the technology is developing. This makes me anxious about x-risks from AI, which makes it harder to get work done.

                                              • hackitup7 3 hours ago

                                                I don't know if I'm a crazy weirdo here but I find that talking to LLMs / using them for certain tasks that I find stressful improves my mental health.

                                                • citrin_ru 4 hours ago

                                                    People who use AI at work are likely more worried about losing their jobs (after being replaced by AI) than people whose professions are less exposed to AI.

                                                  • testfrequency 3 hours ago

                                                    Anecdotally: The most depressed friends I have are all tech workers who are using AI daily for their personal life, and of course at their respective work places.

                                                      I know that’s not a fair correlation to make, but I have friends who use AI casually and not in tech, and they seem outwardly fine and don’t make depressive comments about the future.

                                                    • giantg2 5 hours ago

                                                        Makes sense to me. It's the old adage: the satisfaction you get out is determined by the work you put in. If everything is done for you, then what satisfaction is there?

                                                      • Sol- 4 hours ago

                                                        The paper does seem to include a section where they check what the AI is used for and in work contexts, there was no correlation between depression and AI usage. Only in personal contexts.

                                                        • giantg2 4 hours ago

                                                            I guess it might depend on whether you get satisfaction out of your job. Many do not, even without AI.

                                                        • hackyhacky 4 hours ago

                                                          > It's the old the work you put in determines what satisfaction you get out of it.

                                                          I guess that explains why people who dig ditches for a living are so satisfied.

                                                          • nottorp 4 hours ago

                                                            I'm sure somewhere there is a minority of people who actually like ditch digging and are satisfied.

                                                            It just happens that for technical stuff the minority that likes what they do is larger and thus actually noticeable.

                                                            • mrwh 4 hours ago

                                                              Gardeners seem like pretty happy people to me.

                                                        • undefined 4 hours ago
                                                          [deleted]
                                                          • incomingpain 4 hours ago

                                                              USA depression has been on the rise for a long time: ~20% in 2015, up to 29% most recently. Some blame on covid is appropriate, I'm sure, but the original causes date from the 80s and 90s and are still ongoing.

                                                              Whereas generative AI is a recent thing; depression was already at ~27% in 2021.

                                                              The correlation is therefore very, very weak and certainly not causal.

                                                              The real question is 'can AI make it worse?', and this study didn't really address that.

                                                              Then consider confounders and this study is even weaker. Depression leads into AI usage, not the other way around.

                                                            • techpulse_x 4 hours ago

                                                              [dead]