• mensetmanusman a day ago

    Having dealt with near and distant family psychosis on more than one occasion…

    The truth is that the most random stuff will set them off. In one case, a patient would find reinforcement on obscure YouTube groups of people predicting the doom of the future.

    Maybe the advantage of AI over YouTube psychosis groups is that AI could at least be trained to alert the authorities after enough murder/suicide data is gathered.

    • raxxorraxor a day ago

      I don't think AI informing authorities (who would do nothing) is in any way desirable.

      At some point you have to just live with marginal dangers. There is no technical solution here.

      • mensetmanusman a day ago

        It wouldn’t have to be ‘authorities’, but for people with broken families, authorities who knock on the door and check for a health crisis are literally all they have.

        (We have family that would be homeless if we hadn’t taken them under our wings to house them).

        • raxxorraxor 7 hours ago

          I am sorry to hear that. But these problems still would not justify surveillance of users with a direct line to the authorities; in some countries those authorities can be quite abusive. People for whom such engagement with AI is dangerous should look for other means.

        • simianwords a day ago

          a nice way to convey this is: the optimal amount of such mishaps is non-zero.

        • Avicebron a day ago

          I'd prefer that using ChatGPT or Claude or whatever doesn't mean someone gets swatted when they get heated about "kill this damn thread"

          • recursive a day ago

            I doubt this is all going to end up according to any of our preferences.

          • ChrisMarshallNY a day ago

            I have very close family with schizoaffective disorder.

            This story is pretty terrifying to me. I could easily see them getting led into madness, exactly as the story says.

            • garyfirestorm a day ago

              Minority Report coming to life

              • IIAOPSW a day ago

                More like plurality report

              • aaron695 a day ago

                [dead]

              • funwares a day ago

                Both his Instagram [0] and YouTube pages [1] are still up. He had a habit of uploading screen recordings of his chats with ChatGPT.

                It looks like fairly standard incomprehensible psychosis messages but it seems notable to me that ChatGPT responds as if they are normal (profound, even) messages.

                The 'In Search of AI Psychosis' article and discussions on HN [2] from a few days ago are very relevant here too.

                [0] https://www.instagram.com/eriktheviking1987

                [1] https://youtube.com/@steinsoelberg2617

                [2] https://news.ycombinator.com/item?id=45027072

                • judge123 a day ago

                  This is horrifying, but I feel like we're focusing on the wrong thing. The AI wasn't the cause; it was a horrifying amplifier. The real tragedy here is that a man was so isolated he turned to a chatbot for validation in the first place.

                  • AIPedant a day ago

                    I don't think "so isolated he turned to a chatbot for validation" describes this, or why people get unhealthily attached to chatbots.

                    1) The man became severely mentally ill in middle age, and he lived with his mother because he couldn't take care of himself. Describing him as merely "isolated" makes me wonder if you read the article: meeting new friends was not going to help him very much because he was not capable of maintaining those friendships.

                    2) Saying people turn to chatbots because of isolation is like saying they turn to drugs because of depression. In many cases that's how it started. But people get addicted to chatbots because they are to social interaction what narcotics are to happiness: in the short term you get all of the pleasure without doing any of the work. Human friends insist on give-and-take, chatbots are all give-give-give.

                    This man didn't talk to chatbots because he was lonely. He did so because he was totally disconnected from reality, and actual human beings don't indulge delusions with endless patience and encouragement the way ChatGPT does. His case is extreme but "people tell me I'm stupid or crazy, ChatGPT says I'm right" is becoming a common theme on social media. It is precisely why LLMs are so addictive and so dangerous.

                    • mediumsmart a day ago

                      Most of us are so isolated that we will turn to validation from bubble brothers sharing our view, which is a real tragedy, yes. It may be horrifying, but it's a horrifying normal at that.

                      • strogonoff a day ago

                        In a technical sense, no technology is ever the cause of anything: at the end of the day, humans are the cause. However, technology often unlocks scale, and at some point quantity becomes quality, and I believe that is usually implied when it is said that technology “causes” something.

                        For example, cryptocurrency and tumblers are not themselves the cause of scams. Scams are a result of a malevolent side of human nature; a result of mental health issues, insecurity and hatred, oppression, etc., whereas cryptocurrencies, as many people are keen to point out, are just like cash, only digital. However, one of the core qualities of cash is that it is unwieldy and very difficult to move in large amounts. Cash would not allow criminals to casually steal a billion USD in one go, or ransomware a dozen hospitals, causing deaths, then wash the proceeds and maintain plausible deniability throughout. Removing that constraint makes cash a qualitatively new thing. Is there a benefit from it? Sure. However, can we say it caused (see above) a wave of crime? I think so.

                        Similarly, if there has been a widespread problem of mental health issues for a while, but now people are enabled to “address” these issues by themselves—at humongous scale, worldwide—of course it will be possible to say LLMs would not be the cause of whatever mayhem ensues; but wouldn’t they?

                        Consider that physical constraints used to ensure that any individual worldview was necessarily tempered and averaged out by surrounding society. If someone had a weird obsession with murdering innocent people, they would not easily be able to find like-minded people (unless they happened to be in a localized cult) to encourage them, sustain this obsession, and transform it.

                        Then, at some point, the Internet and social media made it easy, for someone who might have otherwise been a pariah or forced to adjust, to find like-minded people (or just people who want to see the world burn) right in their bedrooms and basements, for better and for worse.

                        Now, a new variety of essentially fancy non-deterministic autocomplete, equipped with enough context to finely tailor its output to each individual, enables us to fool ourselves into thinking that we are speaking to a human-like consciousness—meaning that to fuel one’s weird obsession, no matter how left field, one does not have to find a real human at all.

                        Humans are social creatures; we model ourselves and become self-aware through other people. As chatbots become normalized and humans want to talk to each other less, we (not individually, but at societal scale) are increasingly at the mercy of how LLMs happen to (mal)function. In theory, they could heal society at scale as well, but even if we imagine there were no technical limitations preventing that, in practice selfish interests are sadly more likely to prevail and be amplified.

                      • fbhabbed a day ago

                        This is not Cyberpunk 2077 and "AI psychosis" is trash, just like the article.

                        Someone is mentally ill, and can use AI. Doesn't mean AI is the problem. A mentally ill person can also use a car. Let's ban cars?

                        • ranguna a day ago

                          A car won't manipulate you into ending your own or someone else's life; you just get in the car and do it. An AI can lead you from a fragile state of mind to a suicidal one.

                          Not saying I want AIs to be banned or that the article is good; I'm just arguing that your analogy could potentially be flawed.

                          • fbhabbed a day ago

                            Even a song can do that, or a movie

                            • itsdrewmiller 20 hours ago

                              AIs don’t have agency and cannot manipulate anyone either.

                          • ChrisArchitect a day ago
                            • crawsome a day ago

                              A tech industry veteran? You would think he could recognize that the exchange between him and the AI was disingenuous, but nobody is immune to mental illness.

                              • StilesCrisis a day ago

                                It says he worked in marketing, so not necessarily super tech savvy.

                              • DaveZale a day ago

                                why is this stuff legal?

                                there should be a prominent "black box" warning on every message from an AI chatbot, like "This is AI guidance which can potentially result in grave bodily harm to yourself and others."

                                • mpalmer a day ago

                                  Your solution for protecting mentally ill people is to expect them to be rational and correctly interpret/follow advice?

                                  Just to be safe, we better start attaching these warnings to every social media client. Can't be too careful

                                  • DrillShopper a day ago

                                    > Just to be safe, we better start attaching these warnings to every social media client.

                                    This but completely unironically.

                                  • lukev a day ago

                                    That would not help in the slightest, any more than a "surgeon general's warning" helps stop smokers.

                                    The problem is calling it "AI" to start with. This (along with the chat format itself) primes users to think of it as an entity... something with care, volition, motive, goals, and intent. Although it can emulate these traits, it doesn't have them.

                                    Chatting with an LLM is entering a one-person echo chamber, a funhouse mirror that reflects back whatever semantic region your initial query put it in. And the longer you chat, the deeper that rabbit hole goes.

                                    • duskwuff a day ago

                                      > That would not help in the slightest, any more than a "surgeon general's warning" helps stop smokers.

                                      Particularly given some documented instances where a user has asked the language model about similar warnings, and the model responded by downplaying the warnings, or telling the user to disregard them.

                                      • threatofrain a day ago

                                        It's not a one-person echo-chamber though, it also carries with it the smell and essence of a large corpus of human works. That's why it's so useful to us, and that's why it carries so much authority.

                                        • jvanderbot a day ago

                                          Well, hate to be that guy, but the surgeon general's warnings coincided with a significant reduction in smoking. We've just reached the flattening of that curve, after decades of decline.

                                          It's hard to believe that a prominent, well-worded warning would do nothing, but that's not to say it will be effective for this.

                                          • Daviey a day ago

                                            It reminds me of when the UK Prime Minister sent a letter to all 30 million households during the first Covid lockdown.

                                            Printing and postage cost was about £5.8 million. At the time, I thought it was a waste of taxpayers’ money. A letter wouldn’t change anyone’s behaviour, least of all mine or that of anyone I knew.

                                            But the economics told a different story. The average cost of treating a Covid patient in intensive care ran to tens of thousands of pounds. The UK Treasury values preventing a single death at around £2 million (their official Value of a Prevented Fatality). That means if the letter nudged just three people into behaviour that prevented their deaths, it would have paid for itself. Three deaths, out of 30 million households.

                                            In reality, the effect was likely far larger. If only 0.1% of households (30,000 families) changed their behaviour even slightly, whether through better handwashing, reduced contact, or staying home when symptomatic, those small actions would multiply during an exponential outbreak. The result could easily be hundreds of lives saved and millions in healthcare costs avoided.
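
                                            As a rough back-of-envelope check of that break-even claim, here is a minimal sketch in Python (the figures are the ones quoted above; the variable names are only illustrative):

                                              letter_cost = 5_800_000    # printing and postage, GBP
                                              vpf = 2_000_000            # Treasury Value of a Prevented Fatality, GBP
                                              households = 30_000_000

                                              break_even_deaths = letter_cost / vpf    # ~2.9 prevented deaths for the letter to pay for itself
                                              nudged_households = households * 0.001   # 0.1% of households is about 30,000
                                              print(break_even_deaths, nudged_households)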

                                            Seen in that light, £5.8 million wasn’t wasteful at all. It was one of the smarter investments of the pandemic.

                                            What I dismissed as wasteful and pointless turned out to be a great example of how what appears to be a large upfront cost can deliver returns that massively outweigh the initial outlay.

                                            I changed my view and admitted I was wrong.

                                            • ianbicking a day ago

                                              The surgeon general warning came along with a large number of other measures to reduce smoking. If that warning had an effect, I would guess that effect was to prime the public for the other measures and generally to change consensus.

                                              BUT, I think it's very likely that the surgeon general warning was closer to a signal that consensus had been achieved. That voice of authority didn't actually _tell_ anyone what to believe, but was a message that anyone could look around and use many sources to see that there was a consensus on the bad effects of smoking.

                                              • lukev a day ago

                                                Well if that's true, by all means.

                                                But saying "This AI system may cause harm" reads to me as similar to saying "This delightful substance may cause harm."

                                                The category error is more important.

                                            • tomasphan a day ago

                                              Should we really demand this of every AI chat application to potentially avert a negative response from the tiny minority of users who blindly follow what they’re told? Who is going to enforce this? What if I host a private AI model for 3 users? Do I need the warning, and what is the punishment for non-compliance? You see where I’m going with this. The problem with your sentiment is that as soon as you draw a line, it must be defined in excruciating detail or you risk unintended consequences.

                                              • undefined a day ago
                                                [deleted]
                                                • ETH_start a day ago

                                                  [dead]