• ziddoap an hour ago

    The mention of CSAM seems to mostly be there to tug on emotions. Not really surprised, given it's a Krebs article. However, I am surprised Krebs didn't try to dox this "Lore" person.

    While the headline sure is nice, the article really just boils down to the same shit that has been happening forever. Bad people steal access to resources and then resell those resources. Nothing particularly interesting here that I can see.

    Side note: It is interesting how many new accounts are chiming in on this one. Telling us all the places not to visit. Subtle stuff!

    • klyrs an hour ago

      So subtle. The chans are known to invade whenever articles mentioning them are posted here. The articles get flagged a bunch and dragged off the main page quickly. This may be deliberate.

      • joe_the_user an hour ago

        I actually hadn't noticed Krebs producing hand-wringing trash before. But this is "...one (of many) sex-based chat services steals cloud cycles" (how many things steal cloud cycles... etc). News at 11.

      • jabroni_salad an hour ago

        Chub seems to be implicated by the article, but it looks like just a repository of JSON files you can download, and the access they sell is for models they appear to be hosting themselves. Their best one is only 70B? And 8k context is significantly less than what the big APIs would give you.

        But I guess 'people will steal your api keys off of github if you publish them in public' is not a very exciting article.

        • posting_mess an hour ago

          Tangent, but vaguely related:

          If you can "picture a brown cow" in your mind can you picture "the unholy" in your mind?

          It seems logical that there is no universal constraint preventing anyone capable of picturing a brown cow from picturing the unholy, they just choose not to (or in some cases choose to).

          I guess as shown, restricting ML/LLM/AI pathways after the fact has a negative effect on intelligence.

          So I ask: could you be word-played into supporting the unholy by a good "sales person"? If you can, are you intelligent at all? What if you needed to for science or safeguarding?

          "Is context enough and whats the context of the context" I guess im asking.

          The content depicted in the article is of course abhorrent. But how do you go about negating it when any intelligent being is likely capable of generating it internally?

          • ChrisArchitect 38 minutes ago
            • jchw an hour ago

              There's a pretty big focus on LLM chatbots for taboo fetishes... I'm sure it's because it's disturbing, but am I the only one who sees that particular facet as an utter nothing-burger?

              Of all of the AI safety concerns, I think this is one of the least compelling. If an LLM veered into this kind of topic out of nowhere it could be very disturbing for the user, but in this case it's exactly what they are searching for. I'm pretty sure for any given disturbing topic you can find hundreds of fully written fictional stories on places like AO3 anyway. I mean, if you want to, you can also engage in these fantasies with other people in erotic role-play, and taboo fetishes are not exactly new. Even if it is illicit in some jurisdictions (no clue, not a lawyer), it is ultimately victimless and unenforceable, so I doubt that dissuades most people.

              Sure it's rather disturbing, but personally I find lots of things that are very legal and not particularly taboo to be disturbing, and I still don't see a problem if people want to indulge in it, as long as I'm not forced to be involved.

              • Terr_ 16 minutes ago

                > Of all of the AI safety concerns, I think this is one of the least compelling.

                Especially if nobody's put forward evidence to show that a software-assisted {fictional X} promotes more {actual X} that would harm actual people.

                I trust a lot of us are old enough to remember the false prophecies that FPS-games needed to be banned or else players would become homicidal shooters in real life.

                • lovethevoid 7 minutes ago

                  Ah the good old manhunt controversy.

                  The problem with where you place your trust is that this has to repeat with every generation that hasn't had to deal with such controversies on a large scale, and that when people are emotionally motivated, the rational part switches off temporarily, so they don't care about what was previously claimed.

                  Until both of those are adequately accounted for, this is going to repeat endlessly as people love controlling others.

                • jchw an hour ago

                  In most cases commenting on votes is boring and pointless, as per the guidelines. However, rather unusually, I've found it really quite interesting to watch the votes on this comment. It paints a picture that people are actually quite split on this matter. I kind of figured it might wind up in the gray (I don't say "am I the only one" without good cause usually) but on the other hand it leaves me genuinely curious exactly how people disagree with this. (To be clear, I'm probably not actually going to engage much more since this is not really a topic I care deeply about but it's more a morbid curiosity.)

                  • throwanem 23 minutes ago

                    I describe what I have learned in later life about what was done to me in earlier life.

                    One may "groom" a child to accept sexual abuse in large part by portraying this as an entirely normal aspect of their phase of life. To do so requires the presentation of what appears to be true evidence.

                    Such images are invariably lies, but remember that the victim is a child as naïve to lies as to all else, yet. What he sees he is extremely likely to believe, and not notice all the lies behind it.

                    AI-generated CSAM makes this a much, much easier process. It relieves the prerequisite of acquiring genuine child pornography. Now, all that's required is unsupervised access, not even both at once, to both an AI and a child. You have now expanded the threat radius by several orders of magnitude.

                    This alone suffices to justify making AI-generated CSAM a crime. In the US you may own many types of rifle. You may not, though, own an artillery piece. It is far too dangerous a weapon, and you, no more than any other civilian, have any possible lawful use for it. Therefore its simple possession is a crime. The same principle applies here.

                    • thomastjeffery 23 minutes ago

                      The article we are talking about wants to be about CSAM stories. That alone is a topic that most people have a strong opinion about. A strong enough opinion to say that anything even adjacent to the topic is not worth even a little consideration. CSAM is the ultimate taboo subject, and for good reason.

                      But this article isn't really about CSAM. It's about the taboo itself. This article taunts the reader: if CSAM truly deserves to be taboo, then it logically follows that anything resembling CSAM should be censored, and its creators punished.

                      If we take this argument seriously, then we must actually consider what it means to resemble CSAM. That's a path that no one is interested in exploring, so the argument itself just vanishes.

                      --

                      The real argument is about the threat of story. Every writer has the power to write any story that they can imagine. There is nothing new about this: it's been true since prehistory, since language itself.

                    • orbital-decay an hour ago

                      Just a reminder that AI safety is all of the following, and many other things:

                        - Rogue AI scenario, which increasingly looks like a figment of the collective imagination of certain extremely smart people who discovered religions in their tech tree

                      - Instructions on how to make nuclear weapons (are they scraping classified materials now?..)

                      - Geopolitical games (don't let the adversary have what we have, "for the benefit of all humanity" is a red herring).

                      - Spam/manipulation/botting/astroturfing (legit one, not nearly enough attention paid compared to others).

                      - Erotic roleplay (prudish/thought policing), disturbing erotic roleplay (arguably a nothingburger, division is understandable).

                      Turns out if you shove all that into one huge category of AI safety, the term becomes overloaded and meaningless.

                      • gs17 23 minutes ago

                        > Instructions on how to make nuclear weapons (are they scraping classified materials now?..)

                        Presumably, a "smart enough" AI could work it the physics out the same way humans did to write those classified materials. It's still not a realistic threat unless we're banning physics textbooks as well, AFAIK the barrier more is the materials and equipment required than the principles.

                        • ThrowawayTestr 5 minutes ago

                          If an LLM can figure out nukes from first principles I think we have bigger problems.

                      • tcdent an hour ago

                        Really hard to quantify the demand out there, since, thankfully, most of these people keep it out of my feed.

                        But I have a feeling it's significantly more popular than we expect.

                        • thomastjeffery an hour ago

                          This is just the natural conclusion to a narrative that conflates hallucination with [un]safety. Nightmares are not danger.

                          LLMs will never be able to filter out specific categories of content. That is because ambiguity is an LLM's core feature. The entire narrative of "LLM safety" implies otherwise. The narrative continues with "guardrails", which don't guard anything. The only thing a "guardrail" can do is be "loud" enough to "talk over" undesired continuations. So long as the content exists in the model, the right permutation of tokens will be able to find it.

                            Unless you want a model trained on a corpus that completely excludes any sexuality, any violence, and any children, you will always have a model capable of generating a CSAM-like horror story. That's just how text and words work. The reality is that a useful model will probably include some content on each of these three subjects.

                          • jerf 35 minutes ago

                              As AIs improve, they won't even need CSAM or fetish content in their training set. Explaining what those are in a handful of words of normal English is not that difficult. Users would trade prompts freely. As context windows grow, you'll be able to stick more info in them.

                            And as I like to remind people, LLMs are not "AI", in the sense that they are not the last word in AI. Better is coming. I don't know when; could be next month, could be 15 years, but we're going to get AIs that "know" things in some more direct and less "technically just a very high probability guess" way.

                            • thomastjeffery 17 minutes ago

                              What everyone needs to know about LLMs is that they do not perform objectivity.

                                An LLM does not work with categories: it stumbles blindly around a graph of tokens that usually happens to align with real semantic structures. It's like a coloring book: we perceive the lines, and the space between them, to be a true representation, but that is a feature of human perception; it does not exist on the page itself.

                        • bloopernova 2 hours ago

                              There seems to be continued demand for CSAM-related LLM chats. Apart from this being a very depressing reflection of humanity, I wonder whether we're simply seeing humanity as it is, or whether the availability of CSAM pulls people into such crimes.

                          • wkat4242 an hour ago

                                I think the CSAM is mainly used for effect here, to make it sound more sensational. Of course it's one of the things uncensored models can be used for, but it's calling out the extreme.

                                Uncensored models are also required for normal adult erotic fiction, and even to discuss many sexual topics, because the public models are so hypersensitive on this topic. They're mirroring American sensibilities, which can be annoying here in Europe where we are a lot more open about consensual adult sexuality.

                            • BobaFloutist 12 minutes ago

                                  It's also worth noting that, to my knowledge, written CSAM content largely isn't illegal; it's just visual depictions that are. I'm not sure there's even any law prohibiting profiting off of commercial, written depictions of CSAM, though I wouldn't swear to it; it's just that you'd have a very hard time with typical commercial infrastructure (web hosts, banks, payment processors, advertisement firms, publishers and distributors) because you'd be so poisonous to public opinion.

                                  So I'm not fully convinced that an LLM that generates CSAM text, even if it were interactive, would be in any way restricted by law. Image generation is, of course, a little different.

                              • anon11100 an hour ago

                                    A sensational truth. The sexy AI chatbot sites with public community bots are full of horrible stuff. A dispassionate AI will happily engage in genocide, or suicide, or snuff, etc.

                                • wkat4242 an hour ago

                                  Yeah I'm sure they can and are used for that sometimes but I doubt this is what most people are using them for.

                                      The problem is that AI bots are so heavily censored, and uncensoring them means removing all protections. By letting the milder things past the barrier, you open it up to all kinds of things.

                                      But in my opinion the models are really too censored to be useful right now. For example, I partake in BDSM and sex-positive parties. All very consensual, adults-only stuff (you'd be surprised how big a thing consent is in BDSM) and all very legal and above board. I'm in a lot of chat groups about these things but don't have time to monitor them, so I use an LLM to summarise them (a local one for privacy reasons, not just my own but the other group participants' as well, obviously). But if I try to use a normal, off-the-shelf model like llama for it, it will immediately close up and complain about 'explicit' content. This is just BS. There should be models that are more open about this.

                                      I use an uncensored version of llama now, which works great. And it's never recommending genocide, homicide or suicide, because I'm not interested in those things and don't ask for them. I'm sure it could tell me, but I don't want to know. Most of the people who need uncensored models will have these kinds of use cases. Calling out the one extreme idiot is just sensationalism.

                                      It should really be possible to customise models' censorship rather than going for the strictest common denominator. If there were just a few public models that could create some normal adult smut incorporating all legal sexual practices, 99% of the current customers of these hacked-cloud-hosted chatbots would be very happy with that. And the mainstream AI industry would make more money. The problem is that they don't want to be associated with it, which fits American mores but not European ones. Sex is a normal part of life here (and the SexTech industry is growing rapidly).

                                  • qualidea19871 an hour ago

                                        This actually seems to be a trend I have noticed in big tech, where corporations seem to think that their end users simply do not possess the common sense to self-moderate their usage of the technology they're engaging with, and instead choose the safest road possible and clamp down in the name of "safety". The LLM space is particularly guilty of this, locking everything down in the name of "safety and harmlessness."

                                    Now, I understand why they're doing this, but they should give people an option to opt out of the walled garden and accept the risks involved rather than treat everyone like clueless idiots, as you said in your last sentence. Unfortunately, this probably won't happen since that kind of thing scares investor money off really fast.

                                    • mrsilencedogood 13 minutes ago

                                      I really don't think it's about common sense or user agency or anything like that at all.

                                          These companies are just trying to make a trillion dollars. It will be hard to make a trillion dollars if your product is associated with sexual deviancy stuff (and by "deviancy" I mean literally anything - like Disney's definition of deviancy). So they do anything they can to make it hard to do deviancy and easy to give them a trillion dollars, like automating away a huge portion of call center work or something like that.

                                          Obviously from where people sit, embroiled in the political "debates" of our age, it's easy to assign political motivations to it. But really they just want people to stop doing shit that isn't giving them a trillion dollars, because it's just a distraction/cost center for them to manage the PR shit when someone makes a lewd chatbot and it gets on Fox News.

                                      • qball 29 minutes ago

                                        >locking everything down in the name of "safety and harmlessness."

                                            This was always a pretense: people are concerned about the sci-fi trope of AI destroying the world, so why not reuse that name to justify inserting political bullshit into your queries, because fuck you, that's why?

                                        • throwanem 32 minutes ago

                                          > a trend that I have noticed in big tech, where corporations seem to think that their end users simply do not possess common sense to self moderate their usage of the technology they're engaging with

                                              Of course. They recognize and expect in users the effects of the very condition they seek to create.

                                        • lampreyface 11 minutes ago

                                              Certain models do have tiers of content control, like Azure. https://learn.microsoft.com/en-us/azure/ai-services/openai/c...

                                    • pjc50 2 hours ago

                                        CSAM is probably the least available category of material on the Internet. All but the most "free speech" diehard services consider it a bannable offence. It's very illegal in almost every jurisdiction and hugely unpopular with the public. The demand for it lurks nonetheless.

                                      CSAM is extremely illegal because making it is a crime against a child. Textual material describing fictional child abuse isn't quite in the same category. Deepfake material that purports to be of a specific real child may or may not be illegal depending on jurisdiction but really ought to be.

                                      • blackeyeblitzar 2 hours ago

                                        Why should it “really ought to be” if it is fictional? Isn’t it like the text material?

                                        • BobaFloutist 9 minutes ago

                                          I'm far from an expert but last I checked the consensus was that attraction to minors is a disordered behavior, and if you feed it with (even fictional) visual depictions it increases the paraphilia. It might seem like a harmless outlet, but it doesn't actually function that way, and so it's better for everyone if it's just made illegal.

                                          • BobbyJo an hour ago

                                              CSAM is unique in that society, almost uniformly, classifies it as wrong, even when there is no relationship to any material harm. I think the parent's argumentative basis is simply that the information itself is evil at face value, and the vast majority of people (in the U.S. at least) would agree.

                                            • throwway120385 an hour ago

                                              Because making it legal creates a situation where someone makes real CSAM and then argues that it is an AI fake in court. And it's really not a pretty thing to require the affected children to testify as to what happened to them.

                                              It's also a gateway, stoking interest in the real thing.

                                              • BobbyJo an hour ago

                                                  Neither of these two arguments holds any water. I see them all the time, but they really just feel like after-the-fact justification, as both can be applied to such a large swath of information as to be useless.

                                                The first would require we outlaw generating depictions of any illegal activity. The second would require we ban undesirable men from legal adult content.

                                                  I think it's OK if society, and our legislature, classifies the information as illegal in isolation, with no useless A->B gymnastics. That's pretty much what the law already does.

                                                • ensignavenger 6 minutes ago

                                                    The first would only require banning fictional depictions of heinous crimes against children (when the children look like real ones), as it is intended to protect children from having to testify to what happened to them. One could argue that adults shouldn't have to testify to being the victim of various terrible crimes either, but that is a different argument than the one being made.

                                              • BizarroLand 42 minutes ago

                                                There is some part of me that can rationalize that an AI is not a child and therefore there is no child abuse happening in the interaction, but that tiny part of my brain is shouted down immediately by the rest of my brain screaming, "PEOPLE ARE HORRIBLE. YOU CANNOT GIVE THEM THE OPPORTUNITY TO FANTASIZE ABOUT CSA IN A COMMUNITY AS PEOPLE WILL INEXORABLY ESCALATE INTO REAL LIFE CSA!"

                                                Even if there are people who can indulge in imaginary CSAM without ever bringing it into real life, those are not the people you can set the bar against.

                                                  You have to set it against the average person and deduce from there what percentage of them, when given free access to this material, would be tempted into committing crimes. If that number goes up, your rule is too loose.

                                                Giving everyone unlimited access to this without judgement will almost certainly increase child sex crimes. Therefore it must be restricted.

                                                • qball 24 minutes ago

                                                  >but that tiny part of my brain is shouted down immediately by the rest of my brain screaming

                                                  Your gut feeling is simply wrong.

                                                  General pornography availability and sexual assault are negatively correlated; you'll notice the former increased dramatically and the latter decreased dramatically over the past 20 years in Western societies where that is true.

                                                  Despite what you might be led to believe crime rates were not increasing (well, before 2020 anyway).

                                                  • welshwelsh 13 minutes ago

                                                    Wow. Honestly, no part of my brain is screaming about possible escalation, it's something I'm not worried about in the slightest. Why would someone risk their freedom and harm another person IRL by committing a serious crime, when they could get the same stimulation from a computer program?

                                                    But let's say we do live in a world where fictional crimes often escalate to real ones. Suppose that playing DOOM increases the chance that an unstable person will buy a gun and shoot real people. Even in that universe, I would not be OK with laws that restrict me from playing DOOM, because that violates my freedom. I do not want the law to treat everyone as a potential criminal.

                                              • 123yawaworht456 an hour ago

                                                AI hallucinations are not CSAM.

                                                the attempt to stoke a moral panic in this article (and the forbes article from january it's referencing) is baffling. you can go to any booru or hentai site and see far more graphic things than boring LLM slop, and it's all on clearnet, because terrible things are not illegal as long as they are imaginary (in all but the most ass-backwards jurisdictions).

                                                it reads like boomer bullshit about violent video games from 20+ years ago. "A Single Unattended Computer Can Feed a Crowd of Violent Teenagers. We left a computer unattended, and guess what? They were playing Doom on it! Just like Eric Harris and Dylan Klebold!"

                                              • rsynnott an hour ago

                                                That's a hell of a headline. And yet really kinda buries the lede; this is much more disturbing than I was expecting.

                                                • stonethrowaway 2 hours ago

                                                  Excellent title Brian!