• Aachen 3 days ago

    This reply sounds like a much more sensible take: https://old.reddit.com/r/LocalLLaMA/comments/1q7qcux/the_no_...

    OP replied and there's another in-depth reply to that below it

    • nerdsniper 3 days ago

      OP's reply to that appears to be drafted with heavy help from ChatGPT:

      > I appreciate you citing the specific clauses. You are reading the 'Black Letter Law' correctly, but you are missing the Litigation Reality (how this actually plays out in court). The 'Primarily Designed' Trap: You argue this only targets a specific 'Arnold Bot.' blah blah blah.

      • nerdponx 3 days ago

        Still smells like malicious anti-competitive lobbying by the AI industry.

        • terminalshort 3 days ago

          The industry does not fear open source models. This is just Karens doing what they do best.

          • beeflet 3 days ago

            If the industry doesn't fear open-weight models, they should. There is no moat

            • mmcwilliams 3 days ago

              The moat is building GPU infrastructure to serve inference at scale.

              • troyvit 2 days ago

                Not just building GPU infrastructure to serve inference at scale, but also constraining GPU infrastructure and pricing it out of reach for individuals who wish to pursue open models.

                I'm not sure what entirely to do about it, but the fact that some providers still release open models is hopeful to me.

              • hhh 3 days ago

                Why? Almost all of the open source models end up actually sucking for general purpose use, and require a decent bit of effort to even make remotely usable for specialized use cases.

                • torginus 3 days ago

                  From what I could tell (Corridor Digital mentioned this as well), actual pros use local models through ComfyUI, rather than prompting these proprietary models.

                  Control and consistency, as well as support for specialized workflows, are more important, even if that comes at the expense of model quality.

                  • hackable_sand 3 days ago

                    Corridor Digital is also pretty despicable.

                    • torginus 3 days ago

                      Why do you say that? I feel like such strong blanket statements at least warrant an explanation.

                  • mcny 3 days ago

                    Because from what I understand, the whole premise of billionaires giving AI all their money is

                    1. AI succeeds, and this success is defined by their ability to raise prices really high

                    2. AI fails and receives a bailout

                    But if there are alternatives I can run locally, they can't raise prices as high because at some point I'd rather run stuff locally even if not state of the art. Like personally, USD 200 a month is already too expensive if you ask me.

                    And the prices can only go up, right?

                    • hhh 3 days ago

                      not really. In a scenario where demand dies down, surplus GPU compute becomes so under-utilized that I would think we'd see prices drop even lower. Prices will go up of course if we keep seeing more tokens needed to solve problems, and demand keeps going up with no increase in efficiency or capacity, but that again is not really what we're seeing.

                    • realusername 3 days ago

                      There are two scenarios going forward that I can see.

                      Either there's a massive reduction in inference and training costs due to new discoveries, in which case those big hardware hoarders have no moat anymore; or nothing new is discovered, and they won't be able to scale forward.

                      Either way it doesn't sound good for them.

                      • HoJojoMojo 3 days ago

                        You are saying it needs at most minor tweaks when used for something specific like a product support channel?

                        For the long-term value of a horribly overpriced and debt-ridden corporation, it matters quite a lot whether open source catches up to "adequate" in less than a decade rather than several decades for some uses and a few more for others.

                        • like_any_other 2 days ago

                          At the moment. AI company management is not so myopic and can see the risk that this could change.

                          • immibis 3 days ago

                            FYI there are no open source models, only open weights. Open source means you get source code.

                            • mistrial9 2 days ago

                              yes there are completely open models -- look at Allen AI and others

                  • dleeftink 3 days ago

                    Not saying this would be the right way to go about preventing undesirable uses, but shouldn't building 'risky' technologies signal some risk to the ones developing them? Safe harbor clauses have long allowed the risks to be externalised onto the user, fostering non-responsibility on the developer's behalf.

                    • akersten 3 days ago

                      Foisting the responsibility of the extremely risky transport industry onto the road developers would certainly prevent all undesirable uses of those carriageways. Once they are at last responsible for the risky uses of their technology, like bank robberies and car crashes, the incentive to build these dangerous freeways evaporates.

                      • idle_zealot 3 days ago

                        I think this is meant to show that moving the responsibility this way would be absurd because we don't do it for cars but... yeah, we probably should've done that for cars? Maybe then we'd have safe roads that don't encourage reckless driving.

                        • interroboink 3 days ago

                          But I think you're missing their "like bank robberies" point. Punishing the avenue of transport for illegal activity that's unrelated to the transport itself is problematic. I.e. people that are driving safely, but using the roads to carry out bad non-driving-related activities.

                          It's a stretched metaphor at this point, but I hope that makes sense (:

                          • wolrah 3 days ago

                            It is definitely getting stretchy at this point, but there is the point to be made that a lot of roads are built in a way which not only enables but encourages driving much faster than may be desired in the area where they're located. This, among other things, makes these roads more interesting as getaway routes for bank robbers.

                            If these roads had been designed differently, to naturally enforce the desired speeds, it would be a safer road in general and as a side effect be a less desirable getaway route.

                            Again I agree we're really stretching here, but there is a real common problem where badly designed roads don't just enable but encourage illegal and potentially unsafe driving. Wide, straight, flat roads are fast roads, no matter what the posted speed limit is. If you want low traffic speeds you need roads to be designed to be hostile to high speeds.

                            • interroboink 2 days ago

                              I think you are imagining a high-speed chase, and I agree with you in that case.

                              But what I was trying to describe is a "mild mannered" getaway driver. Not fleeing from cops, not speeding. Just calmly driving to and from crimes. Should we punish the road makers for enabling such nefarious activity?

                              (it's a rhetorical question; I'm just trying to clarify the point)

                          • akersten 3 days ago

                            We wouldn't have roads at all is my point, because no contractor in their right mind would take on unbounded risk for limited gain.

                            • dleeftink 3 days ago

                              Which, in the case of digital replicas that can feign real people, may be worth considering. Not blanket legislation as proposed here, but something that signals the downstream risks to the developer to prevent undesired uses.

                              • tracker1 2 days ago

                                Then only foreign developers will be able to work with these kinds of technologies... the tools will still be made, they'll just be made by those outside the jurisdiction.

                                • akersten 3 days ago

                                  Unless they released a model named "Tom Cruise-inator 3000," I don't see any way to legislate that intent that would provide any assurances to a developer that their misused model couldn't result in them facing significant legal peril. So anything in this ballpark has a huge chilling effect in my view. I think it's far too early in the AI game to even be putting pen to paper on new laws (the first AI bubble hasn't even popped, after all) but I understand that view is not universal.

                                  • dleeftink 3 days ago

                                    I would say a text-based model carries a different risk profile compared to video-based ones. At some point (now?) we'd probably need to have the difficult conversation of what level of media-impersonation we are comfortable with.

                                    • akersten 3 days ago

                                      It's messy because media impersonation has been a problem since the advent of communication. In the extreme, we're sort of asking "should we make lying illegal?"

                                      The model (pardon) in my mind is like this:

                                      * The forger of the banknote is punished, not the maker of the quill

                                      * The author of the libelous pamphlet is punished, not the maker of the press

                                      * The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop

                                      In this world view, how do we handle users of the magic bag of math? We've scarcely thought before that a tool should police its own use. Maybe, we can say, because it's too easy to do bad things with, it's crossed some nebulous line. But it's hard to argue for that on principle, as it doesn't sit consistently with the more tangible and well-trodden examples.

                                      With respect to the above, all the harms are clearly articulated in the law as specific crimes (forgery, libel, defamation). The circle I can't square with proposals like the one under discussion is that they open the door for authors of tools to be responsible for whatever arbitrary and undiscovered harms await from some unknown future use of their work. That seems like a regressive way of crafting law.

                                      • thaumasiotes 3 days ago

                                        > The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop

                                        In this case the guy making the images isn't doing anything wrong either.

                                        Why would we punish him for pasting heads onto images, but not punish the artist who supplied the mannequin of Taylor Swift for the music video to Famous?†

                                        https://www.youtube.com/watch?v=p7FCgw_GlWc

                                        Why would we punish someone for drawing us a picture of Jerry Falwell having sex with his mother when it's fine to describe him doing it?

                                        (Note that this video, like the recent SNL "Home Alone" sketch, has been censored by YouTube and cannot be viewed anonymously. Do we know why YouTube has recently kicked censorship up to these levels?)

                                • Retric 3 days ago

                                    Selling anything means taking on unbounded risk for limited gain. That's why the limited liability company exists.

                                    Risk becomes bounded by the total value of the company and you can start acting rationally.

                                  • thaumasiotes 3 days ago

                                    Historically it's the other way around - limited liability for corporations let juries feel free to award absurdly high judgments against them.

                                • dleeftink 3 days ago

                                    And I am talking about user-facing app development specifically, which has a different risk profile compared to automotive or civil engineering.

                                  • KaiserPro 3 days ago

                                    > then we'd have safe roads that don't encourage reckless driving.

                                    You mean like speed limits, drivers licenses, seat belts, vehicle fitness and specific police for the roads?

                                      I still can't see a legitimate use for anyone cloning anyone else's voice. Yes, satire and fun, but also a bunch of malicious uses as well. The same goes for non-fingerprinted video gen. It's already having a corrosive effect on public trust. Great memes, don't get me wrong, but I'm not sure that's worth it.

                                    • ndriscoll 3 days ago

                                        Creative work has obvious applications. e.g. AISIS - The Lost Tapes[0] was a sort of Oasis AI tribute album (the songs are all human-written and performed, and then the band used a model of Liam Gallagher's mid-90s voice. Liam approved of the album after hearing it, saying he sounded "mega"). Some people have really unique voices and energy, and even the same artist might lose it over time (e.g. 90s vs 00s Oasis), so you could imagine voice cloning becoming just a standard part of media production.

                                      [0] https://www.youtube.com/watch?v=whB21dr2Hlc

                                      • KaiserPro 3 days ago

                                        So can image gen systems.

                                          As a former VFX person, I know that a couple of shows are testing out how/where it can be used. (Currently it's still more expensive than trad VFX, unless you are using it to make base models.)

                                          Productivity gains in the VFX industry over the last 20 years have been immense. (i.e. a mid-budget TV show has more, and more complex, VFX work than most movies from 10 years ago, and it looks better.)

                                          But does that mean we should allow any bad actor to flood the floor with fake clips of whatever agenda they want to push? No. If I, as a VFX enthusiast, get fooled by GenAI videos (pictures are a done deal; they're super hard to stop reliably), then we are super fucked.

                                        • ndriscoll 2 days ago

                                          You said you can't see a legitimate use, but clearly there are legitimate uses (the "no legitimate use" idea is used to justify bad drug policy for example, so we should be skeptical of it). As to whether we should allow it, I don't see how we have a choice. The models are already out there. Even if they weren't, it becomes cheaper every year to train new ones, and eventually today's training supercomputers will be tomorrow's commodity. The whole idea of AI "fingerprinting" is bad anyway; you don't fingerprint that something is inauthentic. You sign that it is authentic.

                                          • KaiserPro 2 days ago

                                            > The models are already out there. Even if they weren't, it becomes cheaper every year to train new ones,

                                              Yes, let's just give up as bad actors undermine society, scam everyone, and generally profit from us.

                                            > You sign that it is authentic.

                                            Signing means you denote ownership. A signed message means you can prove where it comes from. A service should own the shit it generates.

                                              Which is the point, because if I cannot reliably see what is generated, how is a normal person able to tell? Being able to provide a mechanism for the normal person to verify is a reasonable ask.

                                            • ndriscoll 2 days ago

                                              You put the bad actors in prison, or if they're outside your jurisdiction, and they're harming your citizens, and you're America, you go murder them. This has to be the solution anyway because the technology is already widely available. You can't make everyone in the world delete the models.

                                                Yes, signing is the way you show something is authentic. Like when the Hunter Biden email thing happened, I didn't understand (well, I did) why the news was pretending we have no way to check whether they're real or whether the laptop was tampered with. It was a Gmail account; they're signed by Google. Check the signatures! If that's his email address (presumably easy enough to corroborate), done. Missed opportunity to educate the public about the fact that there's all sorts of infrastructure to prove you made/sent something on a computer.
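
                                                (As a concrete sketch of that kind of check: Gmail's signatures are DKIM, and the dkimpy library can verify one; "message.eml" is a hypothetical raw message saved from the mailbox.)

                                                    import dkim  # pip install dkimpy

                                                    # Read the raw RFC 5322 message, headers and body, exactly as received.
                                                    with open("message.eml", "rb") as f:
                                                        raw = f.read()

                                                    # dkim.verify() looks up the signing domain's public key in DNS and
                                                    # checks the DKIM-Signature header against the signed headers and body.
                                                    print("signature verifies" if dkim.verify(raw) else "signature check failed")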

                                              • KaiserPro 2 days ago

                                                > You put the bad actors in prison,

                                                how do you detect it?

                                                • ndriscoll 12 hours ago

                                                  People who get scammed make police reports, same as without voice models.

                                    • renewiltord 3 days ago

                                      Well it would also apply to bike lanes.

                                  • __MatrixMan__ 3 days ago

                                    How can you know how people are going to use the stuff you make? This is how we end up in a world where a precondition to writing code is having lawyers on staff.

                                    • ronsor 2 days ago

                                      No.

                                      The reason safe harbor clauses externalize risks onto the user is because the user gets the most use (heh) of the software.

                                      No developer is going to accept unbounded risk based on user behavior for a limited reward, especially not if they're working for free.

                                      • tracker1 2 days ago

                                          The reason safe harbor clauses exist is because you don't blame the car manufacturer for making the bank robbery getaway car.

                                      • hansvm 3 days ago

                                          Just last weekend I developed a faster Reed-Solomon encoder. I'm looking forward to my jail time when somebody uses it to cheaply and reliably persist bootlegged Disney assets, just because I had the gall to optimize some GF(256) math.
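
                                          (The math in question is roughly this kind of thing: a textbook multiply in GF(256), shown with the 0x11d reducing polynomial commonly used by Reed-Solomon codes; real encoders replace the loop with lookup tables or SIMD.)

                                              def gf256_mul(a: int, b: int) -> int:
                                                  """Carry-less multiply of two bytes modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11d)."""
                                                  product = 0
                                                  for _ in range(8):
                                                      if b & 1:
                                                          product ^= a      # add (XOR) a for this bit of b
                                                      b >>= 1
                                                      overflow = a & 0x80
                                                      a = (a << 1) & 0xFF   # multiply a by x
                                                      if overflow:
                                                          a ^= 0x1D         # reduce by the field polynomial
                                                  return product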

                                        • dleeftink 2 days ago

                                          That is not what I said. It is about signalling risks to developers, not criminalising them. And in terms of encoders, I would say it relates more to digital 'form' than 'content' anyways, the container of a creative work vs the 'creative' (created) work itself.

                                          While both can be misused, to me the latter category seems to afford a far larger set of tertiary/unintended uses.

                                        • beeflet 3 days ago

                                          No

                                        • alphazard 3 days ago

                                          Likely unconstitutional as it violates the 1st amendment, which has done a very good job of protecting the right to author and distribute software over the years. Clearly an unintended positive consequence, since no one who worked or voted on the Bill of Rights had a computer.

                                            If the courts upheld the part in question, it would create a clear path to go after software authors for any crime committed by a user. Cryptocurrencies would become impossible to develop in the US. Holding authors responsible for the actions of their users basically means everyone has to stop distributing software under their real names. There would be a serious chilling effect, as most open source projects shut down or went underground.

                                          • Guvante 2 days ago

                                              The law in question was very specific and not as broad as you imply here. In fact it is more specific than the Redditor implies too, since the law doesn't cover audio models in general, only those purpose-built to replicate a particular individual.

                                              As the law specifically targets models made to duplicate an individual, it isn't hard to provide sufficient evidence to clear the hurdles required of restrictions on free speech, as examples of the negative effects are well documented and "speech model for an individual" isn't a broad category.

                                              Also, I would point out that the First Amendment isn't used by the US to protect software delivery anywhere. Instead it is Congress explicitly encouraging it through the laws it passes.

                                          • geraldog 3 days ago

                                              You know kids, in the 80s, a long time before the First Crypto Wars, we had something called the Porn Wars in the American Congress. I could point you to many depositions to Congress on YouTube, but I will leave you with some good music.

                                            (shill)

                                            https://www.youtube.com/watch?v=2HMsveLMdds

                                            Which is of course the European Version, not the evil American Version.

                                            • 71bw 3 days ago

                                                Is there any other music artist who has been present in Congress as prominently as Zappa...?

                                              • p0w3n3d 3 days ago

                                                  Sorry for the off-topic, but I've just discovered Frank Zappa and he looks and sounds like a precursor of Serj Tankian (SOAD) - I mean "sounds like" as in a similarly crazy, all-over-the-place style

                                              • SamInTheShell 3 days ago

                                                The actual link from the initial reddit post without the GOOG tracking: https://www.congress.gov/bill/119th-congress/senate-bill/136...

                                                Edit: Also does this mean OpenAI can bring back the Sky voice?

                                                • zx8080 3 days ago

                                                  > I contacted my reps email to flag this as an "innovation killer."

                                                    Chinese companies will be happy to drive innovation further if giants like Google and OpenAI go ahead with this to kill competition in the US.

                                                    US capitalism eats itself alive with this.

                                                  • scotty79 3 days ago

                                                      I really love how, thanks to China, people are beginning to see how technologically suppressive the American oligarchy is, and who's the reason we can't have nice things.

                                                  • _def 3 days ago

                                                    "Open Source" in this case means "ML models with open weights"

                                                    (not my interpretation, it's what the post states - personally that is not what I think of when I read "Open Source")

                                                    • 0928374082 3 days ago

                                                      ML models with open weights? Like, say, Qwen?

                                                      • scotty79 3 days ago

                                                          "Freeware models" would be a more accurate term, but people went with the stronger meme for this one.

                                                        • siliconc0w 3 days ago

                                                          Yay another bill modeled after the DMCA, what could go wrong?

                                                          • echelon 3 days ago

                                                            This is worse than the DMCA because there's no provision for companies that develop or host the tech.

                                                              Why the hell can't these legislators stick to punishing the lawbreakers instead of creating unending legal pitfalls for innovation?

                                                            We have laws against murdering people with kitchen knives. Not laws against dinnerware. Williams Sonoma doesn't have to worry about lawsuits for its Guy Degrenne Beau Manoir 20-Piece Flatware Set.

                                                            • Retr0id 3 days ago

                                                              > Not laws against dinnerware.

                                                              Don't worry, the UK is on the case https://reason.com/2019/10/07/the-u-k-must-ban-pointy-knives...

                                                              • p0w3n3d 3 days ago

                                                                  Like in this meme: "Oi you cheeky ..., is that a knoife?"

                                                                I thought this was an exaggeration

                                                                • ErroneousBosh 3 days ago

                                                                  Yeah, you know that's a right-wing ragebait site, that posts made-up stories to make the "critical thinkers" angry, right?

                                                                  • Retr0id 3 days ago

                                                                    I wasn't aware of that, it was just the first non-paywalled article about it I found. The primary source: https://www.rochester.anglican.org/communications/news/gover...

                                                                    • nhinck3 3 days ago

                                                                      You do realise you are talking about a petition started seven years ago by a small diocese that got less than a thousand signatures?

                                                                      • Retr0id 2 days ago

                                                                        Yup!

                                                                        • nhinck3 2 days ago

                                                                          So in what way is the UK on the case?

                                                                    • cowboylowrez 3 days ago

                                                                        lol says you, it's been my experience that reason.com does NOT post made-up stories, they simply have priorities aligned with their political biases, which is relatively normal nowadays. AI agrees.

                                                                      google:

                                                                      "Reason.com is a reputable source that adheres to journalistic standards, but its content is filtered through a specific, consistent libertarian lens."

                                                                        • ErroneousBosh 2 days ago

                                                                          Okay, so they're repeating an anti-immigrant neonazi talking point because they're "libertarian". Okay.

                                                                          • cowboylowrez 2 days ago

                                                                            can you post the neonazi part? I couldn't find it. if you're uncomfortable posting actual nazi text, post the wordcount that precedes it in the article and I can count the words that precede your neonazi discovery.

                                                                • xmprt 3 days ago

                                                                  I think this title is quite misleading, given that it's only impacting open source models under a very narrow interpretation of open source.

                                                                  • josalhor 3 days ago

                                                                    We do have tech that is "behind closed doors". Just look at military applications (nuclear, tank and jet design, etc.). Should "clonable voice and video" be behind closed doors? Or should AGI be behind closed doors? I think that the approach of the suggested legislation may not be the right way to go about it; but at a certain level of implementation capability I'm not sure how I would handle this situation.

                                                                    If current tech had appeared all of a sudden in 1999, I am sure that as a society we would all have accepted this, but slow-boiling-frog theory, I guess.

                                                                    • mdhb 3 days ago

                                                                      [flagged]

                                                                      • logicchains 3 days ago

                                                                        It should be called the anti-AGI bill, because trying to ban AI with certain capabilities is essentially banning embodied AI capable of learning/updating its weights live. The same logic applied to humans would essentially ban all humans, because any human can learn to draw and paint nudes of someone else.

                                                                        • rookderby 3 days ago

                                                                          Reddit is one of the domains I block using StevenBlack's hosts list [0].
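
                                                                          (For the unfamiliar: a hosts list blocks a site by mapping its domains to an unroutable address, so the relevant entries in /etc/hosts look something like this:)

                                                                              0.0.0.0 reddit.com
                                                                              0.0.0.0 www.reddit.com
                                                                              0.0.0.0 old.reddit.com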

                                                                          Here is some background info on the act from wikipedia: https://en.wikipedia.org/wiki/No_Fakes_Act.

                                                                          [0] https://github.com/StevenBlack/hosts/blob/master/readme.md

                                                                          • karlgkk 3 days ago

                                                                            > voice-conversion RVC model on HuggingFace, and someone else uses it to fake a celebrity, you (the dev) can be liable for statutory damages ($5k-$25k per violation). There is no Section 230 protection here. This effectively makes hosting open weights for audio models a legal suicide mission unless you are OpenAI or Google.

                                                                            Good.

                                                                            • kouteiheika 3 days ago

                                                                              So you'd prefer that only rich megacorporations and criminals have access to this technology, and not normal people and researchers?

                                                                              • renewiltord 3 days ago

                                                                                How is that surprising? The advent of modern AI tools has resulted in most people being heavily pro-IP. Everyone now talks about who has the copyright to something and so on.

                                                                                • kouteiheika 3 days ago

                                                                                  Yes, people are now very pro-IP because it's the big corporations that are pirating stuff and harvesting data en masse to train their models, and not just some random teenagers in their basements grabbing an mp3 off LimeWire. So now the IP laws, instead of being draconian, are suddenly not adequate.

                                                                                  But what is frustrating to me is that the second-order effects of making the law more restrictive will do us all a big disservice. It will not stop this technology; it will just make it more inaccessible to normal people and put more power into the hands of the big corporations which the "they're stealing our data!" people would like to stop.

                                                                                  Right now I (a random nobody) can go on HuggingFace, download a model which is more powerful than anything that was available 6 months ago, and run it locally on my machine, unrestricted and private.

                                                                                  Can we agree that's, in general, a good thing?

                                                                                  So now if you make the model creators liable for misuse of the models, or make the models a derivative work of its training data, or anything along these lines - what do you think will happen? Yep. The model on HuggingFace is gone, and now the only thing you'll have access to is a paywalled, heavily filtered and censored version of it provided by a megacorporation, while the megacorporation itself has internally an unlimited, unfiltered access to that model.

                                                                                  • Joel_Mckay 2 days ago

                                                                                    >Can we agree that's, in general, a good thing?

                                                                                    The models come from overt piracy, and are often used to make fake news, slander people, or other illegal content. Sure it can be funny, but the fruit of the poisonous tree is always going to be overt piracy.

                                                                                    I agree research is exempt from copyright, but people cashing in on unpaid artists' works for commercial purposes is a copyright violation predating the DMCA/RIAA.

                                                                                    We must admit these models require piracy, and can never be seen as ethical. =3

                                                                                    '"Generative AI" is not what you think it is'

                                                                                    https://www.youtube.com/watch?v=ERiXDhLHxmo

                                                                                    • kouteiheika 2 days ago

                                                                                      > are often used to make fake news, slander people, or other illegal content.

                                                                                      That's not how these models are used in the vast majority of cases.

                                                                                      This argument is like saying "kitchen knives are often used to kill people so we need to ban the sale of kitchen knives". Do some people use kitchen knives to kill? Sure. Does it mean they should be banned because of that?

                                                                                      > I agree research is exempt from copyright, but people cashing in on unpaid artists' works for commercial purposes is a copyright violation predating the DMCA/RIAA. We must admit these models require piracy, and can never be seen as ethical. =3

                                                                                      So, may I ask - where exactly do you draw the line? For the sake of argument, let's imagine something like this:

                                                                                          1. I scrape the whole internet onto my disk.
                                                                                          2. I go through the text, and gather every word bigram, and build a frequency table.
                                                                                          3. I delete everything I scraped.
                                                                                          4. I use that frequency table (which, compared to the exabytes of the source text I used to build it, is a couple hundred megabytes at most) to build a text generator.
                                                                                          5. I profit from this text generator.
                                                                                      
                                                                                      Would you consider this unethical too? Because this is essentially how LLMs work, just in a slightly fancier way. On what exact basis do you draw the line between "ethical" and "unethical" here?
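
                                                                                      (For concreteness, here is roughly what steps 2-4 look like in Python; the toy corpus is a stand-in for the scraped text, and a real LLM learns far fancier statistics than a literal bigram table:)

                                                                                          import random
                                                                                          from collections import Counter, defaultdict

                                                                                          # Step 1 stand-in: a toy corpus instead of the scraped internet.
                                                                                          corpus = "the cat sat on the mat and the dog sat on the rug".split()

                                                                                          # Step 2: count every word bigram into a frequency table.
                                                                                          table = defaultdict(Counter)
                                                                                          for prev, nxt in zip(corpus, corpus[1:]):
                                                                                              table[prev][nxt] += 1

                                                                                          # Steps 3-4: the corpus can now be deleted; generation needs only the table.
                                                                                          def generate(word, length=10):
                                                                                              out = [word]
                                                                                              while len(out) < length and table[word]:
                                                                                                  followers = table[word]
                                                                                                  # Sample the next word in proportion to its observed frequency.
                                                                                                  word = random.choices(list(followers), weights=followers.values())[0]
                                                                                                  out.append(word)
                                                                                              return " ".join(out)

                                                                                          print(generate("the"))
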
                                                                                      • Joel_Mckay a day ago

                                                                                        > 1. I scrape the whole internet onto my disk.

                                                                                        This is illegal under theft-of-service laws, and a violation of most sites' terms of service. If these spider scrapers had respected the robots exclusion standard under its intended use-case for search engines, then getting successfully sued for overt copyright piracy and quietly settling for billions would seem unfair.

                                                                                        Note too, currently >52% of the web is LLM-generated slop, so any model trained on that output will inherit similar problems.

                                                                                        > 2. I go through the text, and gather every word bigram, and build a frequency table.

                                                                                        And when (not if) a copyrighted work is plagiarized without citation, it is academic misconduct, IP theft, and an artistic counterfeit. Copyright law is odd, and often doesn't make a distinction about the origin of similar works. Note this part of the law was recently extended to private individuals this year:

                                                                                        "OpenAI Stole Scarlet Johansson's Voice"

                                                                                        https://www.youtube.com/watch?v=YhgYMH6n004

                                                                                        > 3. I delete everything I scraped.

                                                                                        This doesn't matter if the output violates copyright. Images in jpeg format are compressed in the frequency domain, have been around for ages, and still get people sued or stuck in jail regularly.

                                                                                        Academic evaluation usually does fall under a fair-use exception, but the instant someone sells or uses IP in some form of trade/promotion it becomes a copyright violation.

                                                                                        > 4. I use that frequency table

                                                                                        See above; the "how it is made" argument is 100% BS. The statistical salience of LLMs simply can't prevent plagiarism and copyright violations. This was cited in the original topic links.

                                                                                        > 5. I profit from this text generator.

                                                                                        Since this content may inject liabilities into commercial settings, only naive fools will use this in a commercial context. Most "AI" companies lose around $4.50 per new customer, and are an economic fiction driven by some very silly people.

                                                                                        LLM businesses are simply an unsustainable exploit. Unfortunately they also proved wealthy entities can evade laws through regulatory capture, and by settling the legal problems they couldn't avoid.

                                                                                        I didn't make the rules, but I do disagree that cleverness supersedes a just rule of law. Have a wonderful day =3

                                                                                  • beeflet 3 days ago

                                                                                    intellectual property isn't going to save us. it's a flimsy retort, like the water usage complaints

                                                                                    • Joel_Mckay 2 days ago

                                                                                      This covers the data center resource green-washing rhetoric, and most taxpayers will be paying more for energy now regardless of what they think:

                                                                                      '"Generative AI" is not what you think it is'

                                                                                      https://www.youtube.com/watch?v=ERiXDhLHxmo

                                                                                      And this paper proved the absurd outcome of the bubble is hype:

                                                                                      'Researchers Built a Tiny Economy. AIs Broke It Immediately'

                                                                                      https://www.youtube.com/watch?v=KUekLTqV1ME

                                                                                      It is true bubbles driven by the irrational can't be stopped, but one may profit from peoples delusions... and likely get discount GPUs when the economic fiction inevitably implodes. Best of luck =3

                                                                                      • beeflet 2 days ago

                                                                                        We can generate more energy, fabricate more computer chips and collect more water, but the impact on labor will be irreversible.

                                                                                        • Joel_Mckay 2 days ago

                                                                                          Energy is finite, and asking the public to fund a private firm's irrational project is unethical.

                                                                                          "Memoirs of extraordinary popular delusions and the madness of crowds" (Charles Mackay, 1852)

                                                                                          https://www.gutenberg.org/files/24518/24518-h/24518-h.htm

                                                                                          I look forward to buying the failed data center assets. LLMs make great search engines, but are not the path to "AGI". Neuromorphic computing looks more interesting. Have a great day =3

                                                                                          • beeflet 2 days ago

                                                                                            The amount of electricity we can produce is limited only by regulation, because we have a practically unlimited amount of fission energy under our feet. That is what you are seeing now with all of these new nuclear plants being built and de-decommissioned. If that is too scary for you, we also have the world's greatest reserves of shale gas.

                                                                                            I am not pro-AI, and I agree that the market will crash. But what I take issue with is this NIMBY mentality that we should nitpick proposals with a thousand fake reasons for why we can't build anything in this country. We can't do big engineering projects like china because they are too much of an eyesore or they use too much water or they're not zoned correctly.

                                                                                            We can't put up a new apartment block, it's too much of a strain on the local water supply. Okay can we collect more water, invest in a new reservoir? Of course not, it will endanger the tumbleweed population.

                                                                                            We can't let a new datacenter go up because it will cause everyone's power prices to increase. Okay maybe we can produce more power?? No, BECAUSE ENERGY IS FINITE AND THE SUN IS JUST GOING TO EXPLODE ANYWAYS SO WHY DO YOU EVEN CARE. WTF?

                                                                                            Why can't we build things? Because we just can't, and actually it's impossible and you are rude for suggesting we build anything ever. It's circular reasoning designed to placate suburban NPCs.

                                                                                            If you oppose AI because it is ruining art, or it will drive people out of jobs, just say that. Because these fake complaints about power and water are neither compelling nor effective (they are just technological and material problems which will be ironed out in the coming generations).

                                                                                            • Joel_Mckay 2 days ago

                                                                                              These firms can do what they like if and only if they pay for every $7B reactor, the 30k-year waste stewardship, and disconnect from community resources people paid for with taxes. However, currently these unethical firms burden cities with the endless bill for resources, contribute no actual value, and one may spot the data center waste-heat signatures and industrial run-off from space.

                                                                                              Consider that most "AI" firms lose on average $4.50 for every new user, rely on overt piracy, and have delusional boards sandbagging for time... these LLM businesses are simply unsustainable fictions.

                                                                                              Many problems don't have simple answers, but one may merely profit by their predictable nature. I would recommend volunteering with a local pet rescue society if you find yourself getting upset about trivia. Have a great day. =3

                                                                                              https://www.youtube.com/watch?v=JAcwtV_bFp4

                                                                                              https://www.youtube.com/watch?v=Xx4Tpsk_fnM

                                                                                              https://www.youtube.com/watch?v=t-8TDOFqkQA

                                                                                              https://www.youtube.com/watch?v=yftBiNu0ZNU

                                                                                              https://www.youtube.com/watch?v=vrTrOCQZoQE

                                                                                              • beeflet a day ago

                                                                                                What trivia? I don't disagree that the AI companies are unprofitable.

                                                                                                These AI companies are paying for the reactors. As for waste, the Department of Energy handles spent nuclear fuel. Protests against the construction of Yucca Mountain have made this impossible. Nuclear power plants repeatedly sue the US Government for the cost of storing this nuclear waste on-site, because it's the DOE's problem.

                                                                                                And it is a totally artificial political problem. It is not even necessarily "waste" in the sense that we ordinarily think: there is a significant amount of fissile isotope in spent fuel, and countries like France recycle the majority of spent nuclear fuel. We could do the same with the right infrastructure, and it would vastly decrease the amount of waste we produce and uranium we need to mine.

                                                                                                My point is that the complaints in these YouTube videos you link (which I am very accustomed to; I have been following this for decades) present the argument that AI is politically dangerous, and this is totally separate from the material complaints (not enough water, not enough power, not enough chips, etc.) you pretend are a significant problem.

                                                                                                These are just extrinsic flaws which can be solved (and WILL be solved, if the USA is able to restore its manufacturing base, which it should). But my issue is purely with the intrinsic dangers of this tech, which are not fixable.

                                                                                                Some of the videos you link are just this suburban NIMBY nagging about muh noise pollution. You might as well get a video of people complaining about EMF pollution. The big issue here is that AI is going to take all of our jobs and will essentially herald the end of the world as we know it. It is going to get incredibly ugly very soon. Who cares what some 50-year-old boomer homeowner (who isn't going to live to see this unfold anyways) thinks about some gray building being built remotely near their suburb. They should go back to watching TV.

                                                                                                As for me, I am going to campaign to have my local pet rescue society demolished. It uses too much water and space and electricity, and for what? Something I don't care for? Seems unethical to me that I should bear the cost incurred through increased demand for these resources, even though I did not explicitly consent to the animal shelter being constructed.

                                                                                                • Joel_Mckay a day ago

                                                                                                  >These AI companies are paying for the reactors.

                                                                                                  This is demonstrably false with negative revenue, and when the gamblers default on the loans it is the public that will bear the consequences. As with sub-prime mortgages, people are getting tired of the con.

                                                                                                  Dismissing facts because you personally feel they are not important is silly. If you think the US will "win" the "AGI" race... then you are fooling yourself, as everything has already been stolen.

                                                                                                  Have a great day, and maybe go outside for a walk to settle down a bit if you are uncomfortable with the way imaginary puppies, bunnies, and kittens make you feel. Community non-profit organizations offer tangible goodwill, and are very different from ephemeral LLM fads externalizing a suckers-bet on the public. =3

                                                                                                  https://www.youtube.com/watch?v=FcGLveebwjo

                                                                                  • Joel_Mckay 3 days ago

                                                                                    The studios did already rip off Mark Hamill of all people.

                                                                                    Arguing regulatory capture versus overt piracy is a ridiculous premise. The "AI" firms have so much liquid capital now... they could pay the fines indefinitely in districts that constrain damages, and they already settled with larger copyright holders as if it were just another nuisance fee. =3

                                                                                    • spencerflem 3 days ago

                                                                                      Why not? I don’t think normal people have very many good uses for deepfake tech.

                                                                                      • scotty79 3 days ago

                                                                                  Who is a normal person? Someone non-creative? Deepfakes have immense creative potential.

                                                                                        • spencerflem 2 days ago

                                                                                          I don’t really see it to be honest. I feel like their best and most natural use is scams.

                                                                                          Maybe a different comparison you would agree with is Stingrays, the devices that track cell phones. Ideally nobody would have them but as is, I’m glad they’re not easily available to any random person to abuse.

                                                                                          • Joel_Mckay 2 days ago

                                                                                            >Deepfakes have immense creative potential

                                                                                            ...and the lawyers win. =3

                                                                                            https://www.youtube.com/watch?v=zpcWv1lHU6I

                                                                                      • Alex2037 3 days ago

                                                                                        [dead]