• caseyy a day ago

    Wow, they are using AI as some sort of fact machine. We (the general public) already know this is extremely incompetent, but they don’t care.

    At least precedent is building, and hopefully unscrupulous use of facial recognition AI by the police will soon be enough to convince courts to take a serious second look at the evidence. But the people affected now may be falsely imprisoned, and that is awful.

    It reminds me of that case where police somewhere in the US arrested a person whom Google told them had been in proximity of a crime at least once, without any real evidence. It is baffling that they use these technological 8-balls as fact machines. An 8-ball would be more energy efficient and have better ergonomics at this point. I hope they are not considering it, but my Llama2 says they are, and by their measure that’s a fact, isn’t it?

    • lukev a day ago

      The inaccuracy is a feature, not a bug. Police have always wanted the power to arrest who they want, when they want.

      AI, like drug-sniffing dogs before it, provides them the plausible deniability to do so.

      • blitzar a day ago

        Drug-sniffing dogs hallucinate far less than AI.

    • JohnMakin a day ago
      • haswell a day ago

        I’m not a very conspiracy-minded person, and this comment is mostly aimed at the Sam Altmans of the world, but when people talk about AI harms, especially harms in the “risk to all human life” category, I’m increasingly convinced that it’s an intentional misdirect away from the very real harms that are happening in front of us right now.

        The harm conversation needs to be refocused on these less sexy but nevertheless real emerging problems.

        As these tools make their way into more and more aspects of life, I can’t help but feel that new laws need to exist so that a “don’t use this for xyz high-risk purpose” warning actually has teeth.

        • JohnMakin a day ago

          In this specific example: police coercing witnesses or relying on shaky evidence in a perp lineup has always been a known and serious problem, and it has led to a lot of false convictions. This just has the added wrinkle of "AI" giving it more credibility than it deserves. You can see, even in this story, that the victim tried to say he wasn't really sure, and they basically ignored him. They aren't trying to be "right" or catch the right guy. Someone goes to jail, solved. If you're wrong, let the courts shake it out. You can also probably make more insidious assumptions about the type of people who typically end up in jail from this, and all the perverse incentives there.

          • rsynnott 16 hours ago

            > but when people talk about AI harms, especially harms in the “risk to all human life” category, I’m increasingly convinced that it’s an intentional misdirect away from the very real harms that are happening in front of us - right now.

            I'm not sure that it's _intentional_; a lot of people are deep into the "superhuman AIs" thing, in a quasi-religious way. But certainly the approach to "AI safety" is far too much "what if a sci-fi thing happens?" and not enough "what if people take the output of the spicy autocomplete seriously?" It's really mostly a human factors problem, at least for now; these things are being used completely inappropriately (and of course the companies making them have an interest in that; constraining them to appropriate low-risk uses would make them useful approximately nowhere).

          • cyanydeez a day ago

            Now everyone could get the "minority in the wrong neighborhood" treatment.