• fragmede 2 days ago

    The headline is misleading. The bill allows AI and algorithms to be used, as long as they don't supplant a licensed medical professional's decision (K.1.D) or violate civil rights, along with a few other conditions; it's not the outright prohibition the headline suggests.

    Section K.1 of SB 1120

    https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...

    (old title was something like "New California law prohibits using AI as basis to deny health insurance claims")

    • Aurornis 2 days ago

      Thanks. That actually makes sense. Most insurance approvals are done according to pre-decided “algorithms” which indicate which conditions must be met for a treatment to be warranted. The talk of banning “algorithms” sounded wrong.
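
      For illustration, most of these pre-decided "algorithms" are just fixed checklists. A minimal sketch in Python, with invented criteria (nothing here comes from any real payer's policy):

        # Minimal sketch of a rule-based prior-authorization check.
        # All criteria are hypothetical, purely for illustration.
        def mri_preauth_approved(age: int, weeks_of_symptoms: int,
                                 tried_physical_therapy: bool) -> bool:
            """Approve only if fixed, pre-decided criteria are all met."""
            return (age >= 18
                    and weeks_of_symptoms >= 6
                    and tried_physical_therapy)

        print(mri_preauth_approved(45, 8, True))   # True  - criteria met
        print(mri_preauth_approved(45, 2, False))  # False - conservative care not tried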

      • dang 2 days ago

        Thanks - let's use the subtitle instead (shortened to fit HN's 80 char limit).

      • siliconc0w 2 days ago

        There needs to be a consequence for improper denials, regardless of mechanism. Banning AI just means they have to hire people to rubber-stamp the denials.

        • martin-t a day ago

          This is what most people don't understand or don't want to understand.

          If the value proposition for antisocial behavior is positive when successful and zero when caught, then antisocial people will keep trying. Self-policing or teaching them to be "well-behaved" does not work. Many of them don't feel shame or remorse, some from birth, some because or psychological adaptations later in life. If their goal is to climb the ladder, get rich, get famous, they will not care how many people they hurt along the way unless the rest of society makes them lose something they value.

          The value proposition needs to be sufficiently negative to offset the potential payoff.
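
          A back-of-the-envelope sketch of that incentive, with made-up numbers:

            # Expected value of an improper denial (numbers invented).
            p_caught = 0.05      # chance the denial is challenged and overturned
            gain     = 10_000    # payout avoided if the denial sticks
            penalty  = 0         # today's downside: the claim just gets paid after all

            ev = (1 - p_caught) * gain - p_caught * penalty
            print(ev)  # 9500.0 -> positive, so the behavior keeps paying

            # To deter, the penalty must push the expected value negative:
            # penalty > (1 - p_caught) / p_caught * gain = 190,000 here.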

          • luckman212 a day ago

            This is the reason some believe the "value proposition" of having the CEO executed point blank is justified.

          • Mountain_Skies 2 days ago

            Debt collection companies have been sued and lost over similar behavior, where they had staff attorneys signing thousands of statements attesting they'd personally reviewed filings for errors, especially when it came to serving notice to defendants. Can't imagine the health insurance companies would do better unless they faced something scarier than a potential token fine if caught.

            • Jimmc414 2 days ago

              Unfortunately, after much debate and ABA lobbying, the CFPB opted NOT to include safe harbor provisions for meaningful attorney involvement in the Final Debt Collection Practices Rule [0]

              [0] https://www.americanbar.org/advocacy/governmental_legislativ...

              • kevinventullo 2 days ago

                The problem is that insurance companies can play chicken with regulators. If a debt collection company disappears overnight, basically some investors lose money and some debtors get a clean slate. If a health insurance company disappears… I imagine many people would not get healthcare who otherwise would.

                • Y_Y 2 days ago

                  It wouldn't be the first time a health insurance company shut down in response to regulation.

                  For example, there was a case in Ireland regarding the local arm of Bupa[0]. To condense a complicated saga: a regulation obliged them to pay competitors to compensate for the fact that their clientele were a better risk than the average consumer. They considered all the obvious options: appeal, find a loophole, sell the business, temporary government takeover, merge into a competitor.

                  In the end they did several of these and the Irish entity continues on as Laya, independent of the Bupa parent. At no point was it considered that people would lose cover, and in fact the regulator and government ensured that continuity of things like pre-existing conditions was covered so that the consumers weren't disadvantaged.

                  Of course Ireland and the EU don't operate exactly like the US. I only mean to say it's possible to go through this process without harming the healthcare outcomes (except by the unavoidable loss of competition among providers).

                  • disgruntledphd2 a day ago

                    As an aside, it's pretty ludicrous that Community Rating is allowed to go unpaid for the first three years of a company's operation; it leads to shenanigans like this.

            • pc2g4d 2 days ago

              None of this would matter if there were real competition in the insurance market, instead of people having to change jobs to change insurance, and not getting a direct say even then.

              As it is, this is a dumb law, and prejudiced against decisions made in silico rather than in vivo.

              • tzs a day ago

                There are plenty of people who get their health insurance through their state's ACA healthcare marketplace or through HealthCare.gov if their state does not run its own ACA marketplace.

                Some of those marketplaces do have competition. For example, in the county Seattle is in, there are 9 different insurance companies offering marketplace plans.

                If lack of competition is the problem, should we expect to see fewer of those problems with insurance purchased on ACA marketplaces?

              • doctor_radium 2 days ago

                Hopefully the law is written broadly enough to include algorithm-driven decisions that existed long before the current "AI" trend.

                • Aurornis 2 days ago

                  Decision trees are inherent to every medical system around the world. Banning them would do nothing other than drive the cost of insurance way up, because every single claim would require human review and involvement, likely checking the exact same decision tree that was being applied automatically.

                  Fortunately the bill doesn’t actually do that. The way so many people in these comments are cheering for an obviously terrible cost adder demonstrates how off the rails discourse on this topic has become.

                  Forcing human involvement for every claim (by banning algorithms) would also make for massively higher error rates for simple things.

                  • janderson215 a day ago

                    People clearly no longer care about reality when it comes to healthcare. They’re either stupid, horribly misled, or terrible people.

                    We’ll see an exodus of good people leaving the industry because they won’t tolerate the sentiment, the US healthcare system will be run by actually terrible and/or stupid people, and the problem will get far worse than it is today. There is no correcting mechanism for this sentiment when public figures, including politicians and executives in other industries, do not set the record straight and correct obvious misunderstandings. I’ve heard one person go the distance of attempting to set the record straight since Brian Thompson was assassinated, and that is David Friedberg (https://youtu.be/K2xfW3hgxb4?si=bjBXRLduRuAk_I4m, 1:08:00 mark). At best, people have only said “murder is bad”. There are a lot of people, including my friends, who are totally okay with assassination. I think by now we know the perp is an anti-capitalist and that was his rationale.

                    People do not recognize the importance of business in the world.

                    Edit: also, the internet isn’t real and people don’t take this stuff seriously. Mix that in with bots from adversarial countries stoking the flames and we have a recipe for disaster.

                  • fragmede 2 days ago

                    It's stated as "artificial intelligence, algorithm, or other software tool" which seems broad enough.

                  • Jimmc414 2 days ago

                    >"For purposes of this subdivision, 'artificial intelligence' means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments."

                    This definition is overly broad and potentially problematic for multiple reasons:

                    - It could encompass simple rule-based systems or basic statistical models; even basic automated decision trees could fall under it.

                    - There's no clear distinction between AI and traditional software algorithms: the bill groups "artificial intelligence, algorithm, or other software tool" together in its requirements, so it's unclear whether different rules apply to different types of automation, and basic automation tools might unexpectedly fall under the AI regulations.

                    - It relies on "autonomy" and "inference" without defining either term, and doesn't distinguish between machine learning, deep learning, and simpler automated systems.

                    - The phrase "varies in its level of autonomy" is particularly vague and could apply to almost any software, as the sketch below illustrates.
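
                    For instance, even a trivial lookup function arguably "infers from the input it receives how to generate outputs that can influence physical or virtual environments". A toy example (mine, not from the bill):

                      # A toy "system" that plainly isn't machine learning, yet
                      # arguably matches the bill's definition: it takes input and
                      # generates an output that influences a (virtual) environment.
                      def copay_tier(drug_is_generic: bool) -> int:
                          return 10 if drug_is_generic else 40  # dollars

                      print(copay_tier(True))   # 10
                      print(copay_tier(False))  # 40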

                    This is legislation that may sound effective and mean well, but the unintended consequences (increased costs and delayed decisions based upon a naive definition of AI) seem inevitable.

                    • jackvalentine 2 days ago

                      I don’t think it’s unreasonable to include rule-based decision trees in this. The definition seems as broad as it needs to be to be effective.

                      • mithametacs 2 days ago

                        The dumber the technique, the less it should be used for this.

                      • id00 2 days ago

                        I'm really surprised that it's not set on a federal level yet.

                        I was working with background checks in the US, and for a while it was a rule that every rejection had to go through a real person

                        • autobodie 2 days ago

                          See: Health Insurance Industry lobby

                        • from-nibly 2 days ago

                          Don't tell people how to do stuff. Tell them what outcomes they are responsible for. They will figure it out from there.

                          If they reject a valid claim, that should open them up to liability for the results of rejecting that claim.

                          • brendoelfrendo a day ago

                            Yeah, but by that point, the patient is already disabled/disfigured/dead. Making insurers liable for wrongful deaths and disabilities does seem appropriate, in a general sense, but we should probably also make it illegal to do things that cause people to die or otherwise suffer harm.

                          • exabrial 2 days ago

                            How about we just ban "AI" from deciding anything... kinda dumb that people put so much faith in something that spits out wrong answers nearly every time.

                            • avidiax 2 days ago

                              I feel this might be counterproductive.

                              An AI and its training data are likely discoverable. Hallway conversations and group meetings are not.

                              • mmooss 2 days ago

                                > An AI and its training data are likely discoverable. Hallway conversations and group meetings are not.

                                You need a solution that works in everyday practice. Relying on customers to sue and win in order to keep insurance companies honest day-to-day seems insufficient.

                                • sciencesama 2 days ago

                                  The problem is that insurance companies don't train their own models; they rely on outside vendors, and the vendors who reject more claims get the business. That sets a bad precedent, much like how DoD contracts make contractors play the long game rather than ship products fast!

                                  • eqvinox 2 days ago

                                    Even if the training data is discoverable, there'd be no way to provide any assurance about what the training result ended up as.

                                    • unsnap_biceps 2 days ago

                                      In theory, one could get a copy of the AI and run tests on it, proving or disproving a claimed bias.
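
                                      A minimal sketch of what such a test could look like, assuming the model were available as a callable (everything here is hypothetical):

                                        import random

                                        # Hypothetical stand-in for the insurer's model under audit.
                                        def model(claim: dict) -> bool:
                                            return random.random() < (0.9 if claim["group"] == "A" else 0.6)

                                        # Paired audit: identical claims, only the protected attribute differs.
                                        def approval_rate(group: str, n: int = 10_000) -> float:
                                            return sum(model({"group": group}) for _ in range(n)) / n

                                        gap = approval_rate("A") - approval_rate("B")
                                        print(f"approval-rate gap: {gap:.2%}")  # a large gap suggests bias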

                                      That said, I'm happy with the new law.

                                  • aitchnyu 2 days ago

                                    How is AI defined?

                                    • fragmede 2 days ago

                                      it's not, but the phrase used is "artificial intelligence, algorithm, or other software tool"

                                    • mrayycombi a day ago

                                      Even if this headline were true, they'd find another rationale to deny claims if that's their strategy.

                                      If AI didn't let them deny claims, they'd avoid it. See?

                                      • tengbretson 2 days ago

                                        TBH an AI model that is compelled to provide its chain of thought to the patient might be an upgrade over the status quo.

                                        • hx8 2 days ago

                                          What is the state of the art for "chain of thought" models? The last I read up on them was a decade ago, and decision trees were still the standard.

                                          • exabrial 2 days ago

                                            Yes, but only if it gave the opposite reasoning as well.